
Penn working group publishes white paper about AI use in classrooms

Williams Hall (Photo by Derek Wong)

The Price Lab for Digital Humanities recently published a white paper about the use of artificial intelligence in classrooms. 

The paper, titled “High Standards for Critical Reading, Writing and Research in the Age of AI,” was released in December 2025 by the Critical Approach to AI Working Group. It outlines concerns about the growing role of AI in academic spaces and warns that the trend raises questions about the future of critical thinking and student skill development.

Price Lab Managing Director Stewart Varner and Associate Director of Digital Research in the Humanities JD Porter launched the group, which is composed of faculty and researchers across disciplines, in 2025.

In an interview with The Daily Pennsylvanian, Varner explained that the group initially wanted to understand how these technologies work and share their findings to help people “make smarter decisions about how they want to use it” in the classroom.

The white paper argues that the widespread use of generative AI makes it harder to assess student learning and is leading to “shallower knowledge” and “skill atrophy.”

“We are at risk of losing fundamental skills that would harm students individually, both personally and professionally,” digital humanities professor Emily Hammer said.

She added that the consequences could extend to Penn and “society much more broadly.”

The paper also outlines recommendations for how the University can respond, including by establishing clear guidance around the appropriate use of generative AI in coursework. 

Kerry McAuliffe, an ABD Ph.D. candidate in the English Department and lecturer in the School of Social Policy & Practice, told the DP that the working group has proposed a policy framework to help create “guardrails around AI use.”

“We have the example of a stoplight system,” McAuliffe said, explaining that “green light courses” would allow broader use of AI, while “red light courses” would prohibit it.

Other members emphasized the need for broader institutional action. 

Porter explained that humanities instructors have been grappling with what he described as a “persistent or even growing problem” of teaching critical reading, writing, and research skills in the age of AI.

“Using AI to summarize the same work is, at best, akin to asking a roommate what it is about,” the paper read. “Having ChatGPT write a response is akin to copying that roommate’s work.”

The responsibility of preventing such usage currently “falls on specific instructors and on students,” according to Porter.  

“We think that’s not really viable,” he added. 

The paper emphasized that the importance of reading and writing is not limited to the humanities, calling them “indispensable skills regardless of the specific fields in which they are applied.”

Porter suggested that a solution “has to come from outside specific classrooms,” calling for University-level policies that require AI-free coursework and provide clearer institutional support.

“We’re trying to encourage instructors to think about how this technology might be used in their classrooms, and to think about how it could either help or stand in the way of accomplishing the learning goals they have for their class,” Varner said. “It’s really trying to give instructors resources.”