Credit: Ani Nguyen Le

In the absence of a University-wide policy for ChatGPT, Penn professors are creating a patchwork of approaches regarding the use of artificial intelligence in their classes.

The viral chatbot, which OpenAI launched in November, can generate human-sounding responses to a multitude of prompts and is proficient at writing, synthesizing text, and coding. Its abilities rival those of Penn students: A Wharton professor recently found that ChatGPT could pass a Wharton MBA exam, and the newly released GPT-4 scored in the 93rd percentile on a simulated SAT exam.

Unlike some of its peer institutions, Penn has not published a dedicated policy governing students' use of artificial intelligence. With no such policy in place, six Penn professors and administrators told The Daily Pennsylvanian how they are tackling ChatGPT, with approaches ranging from banning it to mandating its use.

Danielle Crowl, associate director of Penn's Center for Community Standards and Accountability, wrote that she believes students who use ChatGPT without explicit permission from their professors are considered to have violated University policy.

Crowl wrote that whether a student's use of ChatGPT constitutes a violation depends "on the facts of the allegation," and she cited sections A and B of the Code of Academic Integrity, which cover cheating and plagiarism.

“Fortunately, ChatGPT as it stands now is not good at citation and therefore the CSA has been able to detect plagiarism violations when students are using this AI," Crowl wrote.

Still, some professors have taken additional steps to keep students from relying too heavily on ChatGPT for code or written assignments. Stephen Pettigrew, the director of data science at Penn's Program on Opinion Research and Election Studies, wrote in his PSCI 3800: "Applied Data Science" syllabus that, while online resources can be helpful for troubleshooting, submitting code taken directly from the internet is unacceptable.

“As somebody who has taught this stuff for a while, I get a good sense of what are the common mistakes that students make,” Pettigrew said. “And if a student were to turn in ChatGPT-written code, and there’s weird mistakes in it, it’s probably going to set off alarm bells in my head.”

Critical Writing Program Director Matthew Osborn said that the program was allowing students to "explore and experiment" with ChatGPT, with a caveat — "as long as they’re aware of and understand what appear to be some substantial limitations and some inaccuracies in the content that they generate."

“You should note that foundation models have a tendency to hallucinate, make up incorrect facts and fake citations, and produce inaccurate outputs,” a draft policy for the Critical Writing Program reads. “You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or from one of these foundation models.”

The policy also emphasizes the need for students to appropriately cite “foundation models" like ChatGPT.

Wharton professor Ethan Mollick is taking a bolder approach. He supports the use of ChatGPT in his classes and mandates that students use it for several assignments. Mollick said that ChatGPT and similar AI systems are useful tools that can improve education.

“We have to recognize and learn how to use these tools and not fight against them,” Mollick said. “I also think that we can accomplish things educationally that we could not accomplish before by using tools in this way.”

Mollick is not alone in his approach. Design professor Sebastien Derenoncourt also requires his students to use ChatGPT. Derenoncourt said that he "specifically" assigned his students to use AI tools in their midterm work, as well as a text generator to help them with their papers.

“[M]y emphasis has been that ChatGPT and similar tools are there to help them expand and outline their abilities," Derenoncourt said. 

Pettigrew, Osborn, Mollick, and Derenoncourt's approaches exemplify what Bruce Lenthall, executive director of the Center for Teaching and Learning, described as a wide variety of ChatGPT policies across the University. However, he said that flaws in ChatGPT's output limit how much students can rely on it, since users have to know the subject matter "pretty well" to catch its mistakes.

"[T]here’s a real risk associated with it — even if you’re not going to be caught — if you say things that are nonsense," Lenthall said. "If you’re a student, you should absolutely know that you are using it at your own risk."

Mollick said that his embrace of ChatGPT cannot be replicated by professors in every subject. In English composition courses, he said, professors will likely want to assign blue book exams or have students write on computers disconnected from the internet.

Beyond ChatGPT's current abilities, professors raised questions about the chatbot's future in education, from its growing capabilities to concerns about equity. Pettigrew said that he will need to change his assignments to keep students from easily completing them with AI.

“Part of the thing that ChatGPT is going to force, at least for me, is being more diligent about keeping my materials fresh and new, so that a chatbot cannot regurgitate an answer from online back to me,” Pettigrew said.

Osborn said that he is confident the Critical Writing Program's current curriculum will remain helpful for students in the future, albeit with minor changes. By contrast, Lenthall said that educational priorities may need to be reevaluated as AI improves and exceeds human capabilities.

“If ChatGPT gets to the point where it can write a better paper than you can, what’s the point of you learning that skill?” Lenthall asked. "[I]f we don’t have an answer for that, then the fact that students can cheat is irrelevant, as getting the degree won’t be worth anything. Because if ChatGPT is better at it than a college student, then we don’t need that college student."

Pettigrew said that the onus is on students to "take their own learning into their own hands" as AI potentially makes it easier to cut corners on assignments. 

“It’s important that human beings know how to do certain things, because ultimately, ChatGPT is never going to cure cancer, and never come up with that creative solution to a problem that human beings have worked on for a long time," Pettigrew said. 

Lenthall said that instructors need to erect barriers to cheating because students are most likely to cheat when they see their peers doing so, and such barriers relieve that pressure. He also said that ChatGPT will likely be monetized in the future, potentially creating an "in-class divide" between students who can afford it and those who cannot.

As AI rapidly advances, Derenoncourt said, ChatGPT's global proliferation will change his baseline assumptions about his students. He said that AI could increase equity, noting that most of his students are nonnative English speakers who benefit from a tool that helps them express themselves in grammatically correct English.

“I’m going to have higher expectations,” Derenoncourt said. “I’m going to be more willing to tell students that they can do better than this, because they have the tools to be able to do this work, [and] I’ll also be able to assign longer papers without feeling guilty about it.”