Over the last year, the use of generative artificial intelligence (GAI) across college campuses has left university leaders scrambling to deal with the technology's permanence and its implications for student work. With no university-wide policy on the use of artificial intelligence (AI) in the classroom, faculty and administrators remain uncertain about how to regulate its surging use alongside Fordham's academic integrity policies.
Generative AI is a technology that creates original content, including images, text or other media, based on an input request. This form of AI was first developed in the 1960s, and its use has surged in recent years.
A 2020 survey from McKinsey & Company, a global management consulting firm, showed that the use of AI had more than doubled since 2017, while investment in the AI space continued to grow.
Faculty at Fordham Lincoln Center have taken varied approaches to combating plagiarism committed with generative AI tools such as Chat Generative Pre-trained Transformer (ChatGPT) and Claude. Others have accepted the technology's presence on college campuses as ubiquitous, leaving professors the liberty to determine the role that generative AI can play in their courses.
Christine Fountain, an associate professor in the sociology department, shared that professors are unsure of the best course of action. As a result, many have tested different policies in their syllabi to address the issue.
“There’s been a discussion going on through an email server that a lot of the professors are on, where we’re talking about how we’re going to use it and different ideas and then we’re sharing documents,” she said.
Over the summer of 2023, Dennis Jacobs, provost and senior vice president for academic affairs, created the AI Vision Committee, a group of faculty from different departments charged with developing strategies and guidance for professors on handling AI usage in their classes.
Aditya Saharia, professor of information systems, technology and operations at the Gabelli School of Business, and Yijun Zhao, associate professor in the department of computer and information science, chaired the committee and published a report with recommendations for faculty across the Rose Hill and Lincoln Center campuses on handling AI.
The report provides sample syllabus statements that faculty can adopt depending on their comfort level with generative AI usage in class. Saharia and Zhao also recommend that the university's academic integrity policies be updated to address generative AI.
The administrative response from the provost's office and the AI Vision Committee report point to a faculty-led approach for the time being, with various recommendations undergoing a period of trial and error. In an email sharing the report with faculty, Jacobs wrote that a new resource page from his office will help "individual instructors decide on the extent to which they would like to engage GAI in their courses and how to do so."
Fountain highlighted that the AI Vision Committee's report was not shared in time for faculty to plan their courses accordingly. Drawing on prior experience and discussions among professors, however, Fountain has implemented a policy of "transparency" in her classroom, requiring students to disclose why they used AI and which prompts they used.
“My policy is just that it needs to be used in ways that complement rather than replace critical thinking and engagement with the materials,” Fountain said. “Because any use, just quoting, just using something that somebody said without attributing it, it’s plagiarism.”
The sociology professor found that simply banning generative AI programs was not effective in stopping their use last year. She compared using generative AI, with or without crediting it, to using a quote in an assignment.
“If a student turned in an essay where half of the essay was a quote … you wouldn’t get an A for that essay,” Fountain said. “If you were upfront about where it came from, that wouldn’t be considered plagiarism, but it wouldn’t be good work.”
Gregory Donovan, director of the new media and digital design program, has taken a similar approach in his syllabi and restructured his grading system to discourage the use of generative AI.
In his Eloquentia Perfecta 4 class, which culminates in one final paper, Donovan redistributed points from the final paper to smaller, related assignments completed throughout the semester. By doing so, Donovan said, he found it easier to recognize who had used ChatGPT because the final paper would be "disjointed" from the assignments done throughout the course.
“It was really just looking at these final papers and seeing, it doesn’t line up with the past three months of work that you’ve been developing,” he said. “There’s no way to show how you got from point A to point B.”
Fountain and Donovan agreed that trying to prove that students who use generative AI for assignments have plagiarized would be futile, especially with no reliable technology available to detect its usage. Instead, they simply do not consider the work deserving of a good grade.
Timothy McManus, adjunct professor of information systems in the Gabelli School of Business, departs from Fountain and Donovan's approach: his class policy allows generative AI to be used on specific assignments. For these assignments, students create their own datasets and then use GAI software to analyze the data, giving them room to experiment with the tool.
“Let’s use it and let’s check it out, and let’s have some fun with it,” he said. “But let’s also use it in such a way that you’d never be able to use it to cheat the assignment. You actually have to do the first half of the assignment to get to the second half, which is the fun part of using the AI.”
Students noted that AI policies differ significantly among their classes.
Robert Betancourt, Fordham College at Lincoln Center (FCLC) ’24, said that generative AI had not been mentioned at all throughout his four years at Fordham until the fall 2023 semester, when two of his professors addressed it. He added that, in his observation, students rarely use generative AI for homework.
He shared that he has seen it used more for everyday tasks, such as writing emails. Betancourt expressed concern about the rise of false plagiarism cases, specifically worrying that professors may mistakenly accuse students of using artificial intelligence on their assignments.
“I don’t think the technology exists yet (to catch AI plagiarism) but I’ve heard of a lot of false cases where students are quoted for using AI,” he said.
Betancourt noted that while he doesn't believe such accusations create distrust between professors and students, the possibility remains a concern for students.
According to Hunter Duffy, FCLC ’26, every one of her professors has spoken about AI usage on class assignments. All but one banned generative AI outright; the exception allows students to use AI to create essay outlines, provided they credit the software used.
“The only professor who has allowed me to use AI is my English professor,” she said. “I get the idea that we have to learn to work with these tools, or we’ll be put out of jobs by them.”
Similarly, Jasmine White, FCLC ’27, said that one of her professors allows students to use software like ChatGPT as a tool for academic writing.
The convenience of generative AI is not lost on White, who noted that she used ChatGPT on a short essay during college applications and once in her senior-year English class due to "senioritis kicking in."
The university’s academic integrity policy had not been updated to address generative AI at the time of publication, and the faculty committee’s recommendations are not officially embedded in any of the university’s policies.