Can you remember the last time a professor didn’t warn you against using artificial intelligence for your assignments? I certainly can’t. This isn’t a reality unique to the University of Massachusetts: 86 percent of learners globally use AI in their studies, with 54 percent using it weekly. Using AI to meet academic deadlines has become commonplace, and institutions are understandably concerned.
In the face of this new reality, it is tempting to say that we should limit or even abstain from AI use entirely. The consensus is straightforward: AI replaces our natural capacity for critical and creative thinking, so if we stop using it, we will invariably return to prime creative form or rediscover lost skills.
I believe that this line of reasoning is counterproductive. The issue is not whether we use AI; the discourse should focus on how we use it. The notion that AI use alone directly degrades our intellectual abilities is an oversimplification of the conundrum.
I want to make this clear: I am not saying we should continue to succumb to the temptation of using AI to generate whole assignment submissions. Cheating in and of itself is a clear violation of academic integrity. But if we stop to consider why using AI in this way is wrong, namely that it replaces our own thinking, the line between acceptable and unacceptable use becomes much easier to draw.
With this in mind, I advocate for a reevaluation of how we perceive AI. We must reflect through the lens of our ethical responsibility as students and ask ourselves: how can AI be an instrument to enhance our capabilities rather than replace them?
Not using AI isn’t the answer; using AI effectively is.
The ramifications of misusing AI have been comprehensively discussed and exhaustively rehashed over the past few years; they require no further explanation. Once again, my thinking here circles back to the idea of using AI to think for us, which is why I also strongly disagree with instructors using AI-based detection software to screen their students’ submissions. If it’s wrong for students to let AI do their thinking, it’s just as wrong for instructors to let AI detection tools do their judging. Accountability should be mutual. Real understanding comes from human interaction, not algorithmic suspicion.
Something as simple as short one-on-one discussions with students about their work, to discern whether they actually understood what they wrote, could be a mutually beneficial remedy. Students have to prove that they know their work, and instructors can gauge whether their students are full of it. Taking AI out of these processes and returning to human-centered interactions would make AI use in education more egalitarian.
Paradoxically, this discourse around AI could help us return to more hands-on learning, fostering better relationships. The fear of inappropriate AI use has led to a return to pre-pandemic academic norms that reward class participation. A balance still must be struck; instructors must open their students’ eyes to the benefits of proper AI use while also not allowing them to fall into overreliance.
In my junior-year writing class, our professor consistently makes clear not only whether we are allowed to use AI at all, but also how we may use it, on an assignment-by-assignment basis. For a current assignment, she is allowing us to use AI to assist in making annotations for sources, but she requires that we do the literature review ourselves without leaning too heavily on AI.
This is a perfect example of how instructors can foster responsible AI use in students. The approach lets AI handle the tedious, less cognitively intensive work of summarizing while still requiring us to critically analyze how our chosen sources interact with one another.
As someone who values clear communication from instructors about their expectations, I appreciate when they put in the effort: not merely stating whether AI is allowed at all, but also helping us decide when using it is wise.
AI represents a new, unknown frontier. The implications of widespread AI use are yet to be fully understood, especially in education. As with most complex contemporary issues, a degree of nuance is crucial when discussing the implications of AI use.
For the foreseeable future, though, it is here to stay. If a tool for progress is available, refusing to take advantage of it would be both wasteful and futile. We, as a collective educational community, must strive to use the tools we are afforded responsibly.
Ultimately, AI is neither our replacement nor our savior; it’s a mirror. It reflects how willing we are to take ownership of our learning. If we use it responsibly, it can sharpen our minds; if not, it dulls them. The choice, as always, is ours.
Diko Karim can be reached at [email protected]
