The University of Massachusetts has long harbored reservations about integrating artificial intelligence tools into academic settings, particularly platforms like ChatGPT, which are perceived as gateways to academic dishonesty. I’ve encountered numerous cases where the mere mention of AI was met with skepticism; in two of my courses, professors banned the use of AI outright while admitting they were unfamiliar with it. This points to a prevailing misunderstanding of AI’s potential utility in the classroom.
Having engaged and experimented with large language models such as ChatGPT and Google Bard since their public releases, I am convinced that it’s time to question prevailing misconceptions and consider the potential of AI as a valuable educational tool.
First, it is essential to examine the University’s current academic honesty policy, published in its Academic Regulations, which emphasizes that students must demonstrate their individual learning during examinations and exercises and states a firm stance against cheating, plagiarism, fabrication or any other form of dishonesty within the University community.
A significant aspect of the discussion around AI and the current policy revolves around plagiarism, defined by the University as knowingly presenting the words or ideas of another as one’s own work. This stance makes complete sense, and I fully support it. While using ChatGPT to generate an entire essay would constitute plagiarism, as it draws from various internet sources without appropriate citations, it’s crucial to recognize that the tool can be employed in numerous ways without compromising academic integrity.
Not only can LLMs like ChatGPT be used as effective brainstorming tools, but they can also serve as an ever-available teacher, guiding students through various tasks. Notably, they excel at reviewing and critiquing work, offering valuable insights and pinpointing areas for improvement.
ChatGPT, for instance, can also function as an advanced Grammarly, identifying errors, suggesting improvements and enhancing overall writing quality. In more STEM-focused subject areas such as computer science, mathematics and engineering, among others, it proves to be an invaluable resource for explaining concepts and providing step-by-step explanations. Many students in these fields, including friends of mine, attest to its effectiveness in supplementing lectures, discussions and online resources.
Meanwhile, teachers can harness the collaborative potential of LLMs for lesson planning. By engaging with the model, educators can access a wealth of creative ideas and innovative suggestions to enhance lesson content. ChatGPT, Bard and other LLMs become valuable partners in refining teaching materials, providing insights to make lessons more engaging and effective. This opens new avenues for educators to enrich their teaching strategies and create more dynamic and impactful learning experiences for their students.
At the same time, however, AI tools have certain defects to be wary of. Students should approach these tools with caution, acknowledging that calculation errors and reasoning mistakes are not uncommon. It is also impossible to eliminate every bias that may exist within an AI model, which may color its answers to more subjective essay questions. The decision to use AI should remain an individual’s prerogative, as it involves balancing the leverage technology provides against actively honing one’s own skills.
Contrary to the current negative connotation around the use of AI at UMass, I advocate for a more open-minded approach. Rather than shying away from AI, we should embrace its inevitability and recognize its potential. AI should not replace human effort but should empower us to complete tasks more effectively, by complementing our work.
Yash Manuja can be reached at [email protected].