After the launch of ChatGPT in November 2022, educators grew worried about how generative artificial intelligence (AI) technologies would be applied in the software platforms commonly used at colleges and universities like the University of Massachusetts.
In January 2023, the Center for Teaching and Learning at UMass published a website to help educators navigate potential student use of ChatGPT and other generative AI resources, suggesting classroom and campus policies.
In spring 2023, then-Chancellor Kumble Subbaswamy and the Rules Committee formed a joint task force on generative AI (JTFGAI) “to evaluate the appropriate use of AI technologies across our teaching, research, service, and administrative operations,” a charge later reaffirmed by Chancellor Javier Reyes.
Two years later, UMass is continuing to update its academic integrity policies on AI use and creating student organizations to better understand the fundamentals of the technology, while keeping student safety and critical thinking skills a top priority.
Michelle Trim, a senior lecturer in the Manning College of Information and Computer Sciences (CICS) at UMass and a Chancellor’s Leadership Fellow, focuses on the use of AI in education on campus.
After earning her Ph.D. from Michigan Technological University, Trim worked in technical communication and researched the connection between technology and human agency. She was first exposed to the basics of AI through socio-technical systems, where she realized that technology could be used to help people make decisions.
“When I had grown up in a context of tinkerers, electricians and ham radio operators, technology was something exciting and something that facilitated you doing things you wanted to do as a way to solve problems,” Trim said. “I became aware of this notion of technology as a thing that can drive human development either intentionally or inadvertently.”
In 2000, computer scientist Bill Joy published “Why the Future Doesn’t Need Us,” a Wired article about the fears and threats that the future of technology, especially AI, posed for the 21st century. Joy argued that if people go all in on efficiency and convenience, the world will likely look different than anticipated, leaving many unhappy with the outcome. This line of thinking shaped some of Trim’s own opinions.
“Self-reflection and metacognitive development is really important for a skill that we’re trying to help students develop,” Trim said. “That’s something that’s really risky for students to offload onto something else that feels laborious” but “the labor is what gets you to the learning.”
Trim does not permit her students to use AI for work in her classes. She described the importance of students developing their critical thinking skills to meet learning goals, which she assesses through their writing assignments. “If they didn’t do the writing, then I can’t assess their learning,” she said.
Francine Berman, director of Public Interest Technology and the Stuart Rice Honorary Research Professor in CICS, views AI as a double-edged sword. In her view, while AI is a powerful new resource, it should be used to enhance, rather than replace, students’ development of their ability to communicate, assess and maintain the integrity of their education and work.
Ethan Zuckerman, associate professor of public policy, communication and information and director of the UMass Initiative for Digital Public Infrastructure, has similar guidelines for AI use in his classroom.
Zuckerman encourages students to use large language models (LLMs) and AI software to explain concepts, much like a tutor would. He also encourages his students who speak English as a second language to use LLMs.
Despite these classroom policies, Zuckerman is hesitant to allow students to use AI to brainstorm ideas. His concern stems from how the systems take an initial question and compare it to other, similar questions, producing a pattern typical of the culture and community in which the question was asked. According to Zuckerman, AI systems follow a cycle of training that produces answers from patterns that people have “fed these systems.”
As AI becomes more expensive and free platforms develop paid tiers with better-performing models, the university has invested in Microsoft Copilot. When students log into Copilot with their UMass credentials, it provides “some data privacy and some data protection.”
“The idea is … that anytime you’re using a commercial Gen AI tool, anything you enter into the prompt gets captured and added back to the [broad AI search engine], that then is part of the data set that they train the model on,” Trim said.
Trim is experimenting with running an LLM that keeps prompt data and outputs in a single system, which constrains AI use to a smaller field of knowledge and helps make searches and answers more targeted. According to Trim, UMass Information Technology is working to develop localized instances of generative AI models that provide more data privacy for the UMass campus.
While the JTFGAI declared there would be no campus-wide policy regarding how educators and students should use AI, Trim said that there were still efforts to create “policies that respect and underscore and strengthen academic honesty.”
“UMass has long been an institution associated with upholding the public good,” Trim said. “I want us to have policies that exist to equip our students, our researchers, our staff [and] our faculty with tools for decision making that enable them to be proud of how they engage with these new technologies that are in keeping with what’s fair, what’s legal and what’s ethical.”
According to Zuckerman, the three main considerations when outlining policies were the benefits and limitations of AI, the goals of educators and the ethical issues involved in working with AI.
These resource-intensive systems, according to Zuckerman, are built on the invisible labor of scholars, authors and people in the developing world who are training the systems.
For the future, Zuckerman is focusing on the techno-social implications of fairness, authority and representation within the AI culture in CICS.
He said that “questions like fairness and bias are now in the front and center” and that there is a continuous effort to learn “where there’s efficiency and where we lose utterly important things in the process.”
Berman emphasized that while AI is increasingly used as a resource for students and in the professions they will enter after graduating, AI should be created, used and managed responsibly.
“In the end, humans are always accountable,” Berman said. “It’s up to all of us to determine how to minimize the risks of AI to promote societal progress.”
Kalina Kornacki can be reached at [email protected]. Kalana Amarasekara can be reached at [email protected].
Editor’s Note: This story was edited to correct factual errors and misspellings.