An image of the AI detector, ZeroGPT. Shannon Horning/SciTech Editor

Carnegie Mellon students seem to be big fans of generative artificial intelligence. At least, that’s what I’ve gathered from the sample of students sitting directly in front of me in lecture. However, many of these students need to hide this love of theirs due to various class syllabi prohibiting or restricting the use of AI tools. For some examples, see: 

“You cannot use any generative AI tools for any assignment in this class: not for drafting or brainstorming, not for writing, not for coding, not for getting started or for getting unstuck, nothing. Using one will be treated like plagiarism. Spelling and grammar checkers are OK.” – (36-402, Cosma Shalizi);

“IF YOU ARE CAUGHT PLAGIARIZING (which includes copying material produced by “AI” writing systems and presenting it as your own!), YOU WILL FAIL THIS COURSE.” – (79-302, Jay Aronson);

“We allow the use of generative AI tools (GPT, Claude, etc.) in the completion of homework assignments with appropriate disclosure (see above). Using these tools is not in itself an academic integrity violation, but you must include all uses in your collaboration statement. Failure to disclose is an academic integrity violation, even if the use itself is OK.” – (10-701, Geoff Gordon and Max Simchowitz).

Most classes have zero-tolerance policies, although some will allow AI for certain assignments, or build assignments specifically to be done with AI. To help professors, the Eberly Center (for Teaching Excellence and Educational Innovation) has put together sample paragraphs that they can drop into their syllabi as policies, ranging from most restrictive (“Example 1: Students may NOT use generative AI in any form”) to permissive or even supportive (“Example 4: Students are fully encouraged to use generative AI”).

The number of students seen using ChatGPT, DeepSeek, Claude, Llama, Gemini, Copilot, etc. does not seem to correspond with the number of classes that allow the use of these tools, so unless everyone in my gen eds is writing books for personal use and building codebases in preparation for Y Combinator, many of these syllabi are being flouted. Carnegie Mellon likely has more accurate information on this than The Tartan does, as the University Education Council built and mass-emailed a survey meant to “gather [student] thoughts and experiences related to generative AI tools” in Jan. 2024. Sadly, the university doesn’t seem to have released the results of this survey publicly yet.

This disconnect between student and professor means that students will disguise their AI use, just as they used to disguise their professionally written essays and Chegg-inspired Calc I homework. Students live in fear of the almighty “AI detector,” and an economic niche has been created as AI detection startups feud with AI detection avoidance startups, seemingly primarily in the form of ads on my Instagram. The good news is that companies like GPTZero, ZeroGPT, Turnitin, and gazillions of others get to prey on educators’ hopes and dreams, while companies that don’t need the search engine optimization boost get to charge up to $79 a month for the “Pro Plan” of their “AI Content Detector Bypassers,” advertising “Undetectable by all AIs” and “No weird or random words.” The bad news is that this entire cottage industry is a fraud. Numerous studies have found AI detectors to be unreliable enough that only a rube would take them at their word. And this unreliability means the numerous “AI Humanizer” plugins featured on the ChatGPT “Explore GPTs” page don’t need to do anything to be perceived as working.

So is there any hope for professors trying to force their students to actually learn? Probably not. As an educator (well, technically just a teaching assistant getting paid double the shockingly low Pennsylvania minimum wage), I prefer that my students learn by doing instead of letting an AI wander into the answer. This is not only because I’m a great guy who cares about my students taking advantage of their neuroplasticity and building those valuable neural pathways, but because I feel bad when their lack of innate knowledge causes them distressing results on our meticulously fine-tuned exams. As they say, you can lead a gift horse to water, but you can’t make it drink in the mouth.

Author

  • Zachary Gelman

I wrote a bunch of articles here that your modern browser thinks will brick your computer: https://thetartan.club.cc.cmu.edu/staff/zgelman

