On Wednesday, Oct. 8, the University of Massachusetts Amherst College of Social and Behavioral Sciences hosted a webinar featuring Alexander Macgillivray, former U.S. deputy chief technology officer in the Biden administration. He discussed his experience helping shape United States artificial intelligence policy.
Macgillivray’s talk, “The Past and Future of AI and Regulation: A Practitioner’s View,” served as the keynote event for the UMass Social Science Series, “AI and Us: Rethinking Research, Work, and Society.” The series aimed to bring together leading voices from campus and industry for talks, workshops and interactive events that explore the role of AI in the social sciences.
He began his talk by describing the Blueprint for an AI Bill of Rights, the first major AI regulation project he and the Biden administration worked on. Completed in October 2022, before ChatGPT was released, it aimed to ensure that AI systems would be safe, effective and accountable.
The Blueprint for an AI Bill of Rights is intended as a guide for protecting society from threats artificial intelligence may pose to democracy, equity and important resources. As explained by the Office of Science and Technology Policy, its five guiding principles are safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
Macgillivray said there was immediately a lot of interest in what regulations would be implemented, and the blueprint seemed to speak to the public’s feelings about AI at the time.
According to an IEEE article, other AI researchers agreed that the choice of language in the blueprint clearly frames AI governance as an important civil rights issue that deserves expanded protections.
Macgillivray then turned to President Biden’s Executive Order 14110, which defined the administration’s policy goals regarding AI. By that point, Macgillivray was no longer working in government, but he shared his observations about the administration’s approach.
“From the outside, you could see that the Biden administration was struggling with how to encourage the benefits while reducing the harms,” Macgillivray said. He said the executive order was trying to answer multiple questions at once about the harms, risks and benefits that the new technology was generating.
While the Biden administration set restrictions on where AI chips could be exported and where large-scale AI model training could take place, Macgillivray said those restrictions were relaxed under the Trump administration.
This was done by rescinding the Biden administration’s Framework for Artificial Intelligence Diffusion, known as the AI Diffusion Rule, which aimed to regulate global transactions involving artificial intelligence and created licensing requirements for exports.
The Trump administration reversed the rule in May 2025, right before the framework was set to take effect. The U.S. Department of Commerce said this reversal was necessary because the framework “would have undermined U.S. diplomatic relations with dozens of countries by downgrading them to second-tier status.”
Executive Orders 14179 and 14141 focus on increasing infrastructure investment, reducing regulations on power plants and data centers and limiting government use of ideologically biased AI.
Macgillivray said that although the executive order to limit use of ideologically biased AI was issued, it has not, to his knowledge, been properly implemented. “Though the government is still, I think, using Elon Musk’s AI, which has a bunch of hard coded things to make sure that it is ideologically consistent with Elon.”
Grok is a generative AI chatbot made by Musk’s company xAI. In September 2025, the Grok for Government agreement was approved by the Trump administration, making the chatbot accessible to all federal agencies, according to the General Services Administration.
The chatbot has received backlash; according to a New York Times analysis, Musk has had the AI programmed to reflect his own political priorities.
According to Macgillivray, an unproductive trend in the AI policy world is for people to predict a single future for AI and then tailor their policy proposals to only that narrow outcome.
“If you believe in a different scenario, you end up having this conversation that is sort of disjointed because the basic assumptions underlying the policy proposals are just radically different,” he said. To combat this, he suggested considering all potential futures for AI and creating policy solutions that can adapt to any and all outcomes.
Macgillivray also suggested that focusing on concrete harms can help keep policy proposals from being built around hypotheticals. He stressed how important it is that policymakers and government officials understand the technology as deeply as possible, and that the people who understand it have as much input as possible.
“We need to bring talent into the government. We need to train them,” Macgillivray said. “The government itself needs to try out these technologies, try building them responsibly, and see what works and what doesn’t, so that we can regulate more efficiently.”
Pearl Davis can be reached at [email protected].
