After deliberations last semester, the faculty of the college passed a motion requiring all professors to include an AI policy in their syllabi this spring. The content of that policy, however, is left to each professor's discretion.
The new policy “says that all syllabi starting this semester are supposed to have a generative AI policy,” according to religious studies professor Brett Hendrickson, chair of the Faculty & Educational Policy Committee.
“But the motion did not specify the content of that policy at all,” Hendrickson continued. “That was left up to the instructor … it has to be addressed in general.”
A working group, chaired by Associate Dean of Teaching & Learning Tracie Addy, met throughout the summer to discuss AI and its impacts. A committee was then formed to consider a common blanket policy on AI that would apply across all syllabi.
“There isn’t necessarily a right answer,” government & law professor Joshua Miller said regarding the options for policies. “One is that AI is fine. Then there are people on the opposite side, who would say, ‘It’s horrifying.’”
Miller, alongside other professors, argued there could be common ground.
“You’re trying to get an education and think for yourself, and you’re having a machine think for you,” Miller said. “But if you use it, and if you footnote that you’ve used it, then it’s not plagiarism.”
Christopher Rafferty ’26 has noticed professors paying more attention to the AI sections of their syllabi than in previous semesters.
“More time has passed and AI’s become more prevalent,” Rafferty said. “At the start, I think a lot of teachers hadn’t caught on yet, but now they’re a little more aware.”
Rafferty supported the faculty’s final decision.
“I think it’s fine if [professors] have differing opinions,” Rafferty said. “Not all the classes are going to have the same struggle with AI as it is.”
While the future of AI at the college and the policies surrounding it remain to be seen, some professors hold an optimistic view of how it can be incorporated into the classroom.
“I’m just hoping that we can figure out ways to use [AI] positively and train students who are going to have to use this in the workplace,” biology professor and committee member Elaine Reynolds said. “So I think the only way to deal with it is to figure out how to use it. I don’t think we’re going backwards with this.”
Katherine Groo, a film & media studies professor, warned of the potential effects of ignoring the downsides of AI.
“We should think carefully about how best to protect our students, especially their capacity to work and write … and create for themselves,” Groo wrote in an email. “I would love to see an ‘opt-out’ policy for students. Faculty who find them valuable should absolutely be able to use these tools, but students should not be required to train them, which in many cases basically amounts to offering their creative labor for free to external commercial parties.”
In her own syllabi, Groo emphasizes the potentially harmful effects of AI.
“If students are learning from large language models, they are indeed still learning from all of the work and writing that scholars both past and present have done,” Groo wrote.
“I think it’s going to have to be a situation where we all get used to what’s possible with this new technology,” Hendrickson said. “And, given what’s possible, I think we’re gonna have to recalibrate.”
Andreas Pelekis ’26 contributed reporting.