After we received our third reminder to “Please share your thoughts and experiences with generative AI tools,” it sure seems like the university is about to adopt rigorous policies on AI usage. That’s great — generative AI has entered that scary phase in which policy has yet to catch up with technology, and a school known for its AI research doesn’t want to be caught with its pants down. The survey suggests that the administration genuinely cares about what students think, and is looking to implement reasonable policies.

There’s a reason why ChatGPT has spurred this conversation about AI. It’s an incredibly powerful tool and, frankly, scarily good at its job. But when you look past the novelty and take a critical eye to the content it produces, the flaws start to show.

ChatGPT can write sentences, but are they actually good sentences? It’s a procedural algorithm with no conception of style, personality, or humor. Lead Copy Editor Jimmy Baracia said he once prompted ChatGPT to write an essay for his class (after he’d already written it himself) and found that the product wasn’t very interesting. It was wordy, the sentences ran on, and the whole essay circled around without making much of a point. Sports Editor Haley Williams pointed out that if you ask ChatGPT to elaborate on something it just said, it will reproduce the same idea with twice as many words — an editor’s nightmare. 

There’s also the fact that ChatGPT is often wrong. When it doesn’t know something, it will fabricate a reasonable-sounding statement, because it doesn’t know what a fact is. This summer, New York lawyers Peter LoDuca and Stephen Schwartz were sanctioned after filing a GPT-written legal brief containing completely fictitious cases.

Kate Myers, our Art Editor, chimed in with the much-needed CFA perspective by pointing out that AI art is trained on real artists’ work, and that it could easily be seen as infringing on copyright. AI art has even started showing up in posters around campus (don’t think we didn’t notice). Though at a glance they look fine, just a moment of scrutiny makes a lot of weird details pop out. AI art has inconsistencies and asymmetries that a human artist would recognize and adjust, giving away the ruse no matter how crisp the image is. 

For all its flaws, generative AI can be extremely useful. It can help you get started with a hard math problem or coding assignment, create correctly formatted references, and provide you with good resources when researching a new topic (although, as Kate Myers would testify, if you ask it for sources it also gives you a lot of duds). Imposing an outright ban on generative AI would make Luddites out of an administration that should be forward-thinking.

That being said, there are obviously problems afoot. ChatGPT opens up whole new vectors for cheating and plagiarism at a scale unthinkable to past generations of students. However, any good policy on AI in the classroom should recognize that ChatGPT isn’t actually an “intelligence.” Frankly, it’s not even that scary. Too harsh a policy would fall for the reactionary (and, dare we say, tech-fetishist) narrative that generative AI is something we’ve never seen before. It’s just a really smart machine that can produce a lot of words, and when used smartly it can save you a lot of time on the menial parts of assignments. The problem comes when you let it take over the job of producing original thoughts. And besides, isn’t that the fun part anyway?
