Another semester is upon us, and for the student body here at the University of Massachusetts, as at many other universities, the temptation to cheat has never been higher. Placed under the stress of trying to balance a busy life and a heavy workload, young people are turning to artificial intelligence in droves to alleviate some of their academic pressure.
Cheating on assignments has never been easier. Students don’t even have to seek out the information and resources needed to plagiarize or skirt the guidelines. With the advent and popularization of AI chatbots like ChatGPT, the information comes to them at the click of a button in fully formatted, ready-to-submit writing.
The world of higher education over the past few years has been immersed in intensive discourse about AI usage in the classroom. UMass’s policy leaves it up to the instructor to decide whether AI may be used in their course, specifically stipulating that students should assume AI is prohibited unless explicitly told otherwise by their professor. While some professors have chosen to integrate AI into their classes, many others have steered away from it, especially in the humanities. Every semester, I receive syllabuses that stress the importance of originally generated writing and warn me that any writing done by AI will be swiftly caught.
As AI advances and develops, the assurance that AI writing will undoubtedly be rooted out begins to feel increasingly bold. Those who know how to use it well will provide it with the sources needed to create topical writing, run it through the machine multiple times to edit the tone accordingly and perhaps even integrate the AI-developed writing seamlessly into their own original writing. For professors who don’t intimately know the writing style of a given student, this can be near impossible to catch.
Some professors choose to use AI detectors like TurnItIn as a means to discourage those tempted to turn in unoriginal writing. Those detectors, much like AI itself, are still in development, meaning their reliability is questionable. There has been enough reporting of both false positives, flagging originally generated writing as AI, and false negatives, failing to flag AI-generated writing at all, to reduce these tools to more of a scare tactic than anything else.
Additionally, these AI detectors have been shown to target writing done by neurodivergent students and students whose native language is not English. So, who’s really being punished? Furthermore, what are we trying to teach?
These policies turn professors into police officers and build a foundation of suspicion between the teacher and the student. That is not a conducive learning environment for either party. While I understand that professors need to ensure the work students are doing is actually their own, I feel that the most extensive efforts should be made in convincing students why the work they’re doing is important while getting them to buy into the course being taught.
Students in college are preparing themselves for the workforce, where they will inevitably be confronted with writing of some form and genre. If they have completed college never having written for themselves, they will have stunted their writing skills. They have never allowed themselves to grow. They have never tested themselves. They have never tried, failed and gone back to the drawing board with a new idea, important lessons to take to the workplace.
If they continue to use AI to write in the professional sphere, it will begin to show when they are asked to stand by their words. AI writing is great at comparing and contrasting existing ideas, provided the sources it pulls from are credible. Where I find it lacking is in making an assertion of its own. An AI thesis is shaky. It would much rather make a statement than an argument, and its analysis rarely goes further than regurgitating points a human already made.
Therein lies the truth about why AI writing is so bland and generic in comparison to human-written work. Writing is a social act. It’s meant for human engagement. Anytime you read a work of writing, you are responding to it in some way, whether that be connecting with it, disagreeing with it, placing yourself in the shoes of the author or reacting in another way entirely. When you read anything written by AI, you’re still reacting to it, but you lose the other half of the engagement. There’s no author to disagree with, to question or to place in societal and historical context. There’s just a machine.
It’s been said by many academics that writing is thinking externalized. That’s what professors are really looking for: a student’s externalized thoughts. As a peer tutor at the UMass Writing Center, what I encounter most often with AI is students who turned to it because they wanted to sound more academic and were fearful of the feedback they would receive if their own work did not live up to this arbitrary standard.
Perhaps if professors stressed that they would rather hear their students’ thoughts, with all the idiosyncrasies that make their voices unique, than a robotic regurgitation of big words, students would begin to believe it too.
Fiona McFarland can be reached at [email protected].
