
Columnist Sangitha Aiyer argues that professors misunderstand how students use AI, including in fields like computer science.
Credit: Karen Wong

We're experiencing an artificial intelligence epidemic, or so the narrative goes. Watch as it infiltrates every paper, problem set, and lab report that Penn students turn in. The narrative suggests that it's nearly impossible to identify a single word, let alone a sentence, that wasn't composed by ChatGPT or the newest, fanciest large language model. But despite constant alarms from news sources, students, and even the University itself, the reality is far less dramatic: AI use is not as omnipresent as professors believe.
In fact, this culture of suspicion is doing more harm than good. Following almost every written assignment I've submitted over the past semester, the professor has stood in front of the class and berated us for our supposed use of AI. Excerpts of our papers are shown on the projector, and we are lectured about the importance of thinking for ourselves rather than relying on technology to write our ideas and sentences. Yet on every occasion, I've found myself looking around at my classmates to find them just as confused as I am. We are being accused of something that most of us did not do.
In the Computer Science department, for example, cheating with AI may not be uncommon, but neither is getting caught for it. Students like me, who have to wade through time-consuming weekly coding and written assignments, are still hesitant to turn to AI to lighten the load, given how harsh the consequences can be. Simply put, the convenience of asking AI to generate work is not worth the consequences of violating strict academic integrity guidelines. Beyond the risk, there is also the reality that AI does not deliver the quality or accuracy that many of us need. In a computer science context, AI-generated code is often inefficient or fails to compile, and in humanities courses, AI-written work lacks the nuance and detail required to grasp complex topics.
It would be naive, and likely inaccurate, to imply that students do not turn to AI in any capacity. Of course, most of us have ChatGPT bookmarked and are no strangers to what the tool is capable of. Penn itself has addressed AI's ubiquity with the release of University-wide guidelines, which acknowledge issues of transparency, accountability, and bias and provide guidance for educators, students, and researchers. The general consensus is that AI use is acceptable when done for the right reasons. But even with this clarification in place, the culture surrounding AI at Penn is riddled with suspicion; the commonly held belief seems to be that students are bending the rules, if not outright breaking them.
What has resulted is a sort of witch hunt, an attempt to hold students accountable for what professors believe is AI use. The issue with this mentality is that it is almost impossible to accurately differentiate between AI-generated and human-produced work. Because AI detectors are unreliable, all that's left is a professor's best judgment about what a human is "supposed" to sound like. And with large language models becoming increasingly advanced, the line between AI and human-produced work only grows blurrier.
A combination of factors keeps me from relying on AI to complete assignments: my fear of getting caught, my desire to learn concepts thoroughly, and my belief that I can produce better work than AI can. I can't speak for every other Penn student, and I don't pretend that I can. However, the paranoia that each and every one of us is turning to AI to commit academic dishonesty speaks volumes about the lack of trust professors place in their students.
Rather than viewing every submitted assignment with skepticism, professors should focus on creating an environment where conversations about AI can happen openly. Including clear AI policies in syllabi and clarifying expectations about what is and is not acceptable at the beginning of the semester are crucial first steps. But beyond policy, our classroom culture must change too. Professors must trust that students are doing good work with integrity and that they have the capacity to use AI responsibly. Trust in the classroom is not about abandoning standards but about assuming honesty. In the new age of AI, let's build a culture that is rooted in trust, not suspicion.
SANGITHA AIYER is a College junior studying cognitive science from Singapore. Her email is saiyer@sas.upenn.edu.