Monday, Feb. 9, 2026
The Daily Pennsylvanian

We are losing the point of education

Guest Column | When AI replaces critical thinking

ChatGPT photo illustration (Abhiram Juvvadi)

A generation ago, homework meant wrestling with confusion. You stared at a blank page, tried an idea, watched it fail, tried again, and eventually learned not just the answer, but why it was the answer. Today, for many students, that struggle is replaced with something much faster: copy, paste, and submit.

Generative artificial intelligence didn’t invent shortcuts, but it certainly changed the scale. Students can now outsource not only the final product of their assignments but also the thinking that should produce it. In doing so, they risk losing the habit education is meant to build: asking how and why.

At Penn, the shift is plainly visible. Essays appear in seconds. Problem sets arrive with clean “steps.” Interview preparation becomes prompt, memorize, and repeat. The output looks polished, but the cognitive work of testing ideas, noticing gaps, and building intuition is increasingly optional.

That’s the danger: not just cheating, but intellectual atrophy.

One helpful way to understand this is through the concept of cognitive offloading: using external tools to reduce mental effort. Offloading can help in the moment, but research suggests it leads to worse memory in the long run.

There’s also evidence that heavy reliance on AI tools can be associated with weaker critical thinking outcomes. One study of university students reports a negative relationship between “AI dependence” and critical thinking. Over time, reliance trains a passive posture; instead of constructing an argument, you select one. Instead of debugging your reasoning, you edit the wording. That shift from authoring to accepting is exactly how critical thinking gets dulled.

You don’t have to claim that AI “makes students dumb” to take this seriously. It’s enough to notice the incentive: if you outsource the hardest thinking, you get fewer repetitions of the skill you’re supposed to build. 

The brain’s neuroplasticity explains why repetition matters. When we repeatedly grapple with concepts, we strengthen the neural connections and circuits that support reasoning. When we avoid that effort, we don’t reinforce them.

Even outside of AI, convenience tools can tax cognition. “Brain drain” research suggests that the mere presence of a smartphone can reduce available cognitive capacity, even when it isn’t used.

At Penn, students now ask “Does this look right?” instead of “Do I understand this?” Curiosity has become compliance, and reasoning has become formatting.

To be clear, ChatGPT and other AI tools can support learning. They can explain concepts, generate examples, and help you get unstuck. But for many students, they have now become a crutch — a way to avoid the uncomfortable stage where you don’t yet know what you’re doing. That discomfort is not wasted time. It’s where you form questions, locate gaps, and build judgment.

Penn says it teaches students how to think, not what to think. But its incentives often reward the opposite: clean submissions over messy drafts, answers over reasoning, and speed over struggle. When the grading system values polish, AI becomes the optimizer.

So, what’s the fix?

One practical solution is short, viva-style oral defenses — in person and face-to-face with a professor or teaching assistant. Instead of simply turning in an essay, proof, or project, students should spend five to 10 minutes explaining what they did, why they chose that approach, where they struggled, and what they would change. You can’t copy-paste understanding. You either know it, or you don’t.

This isn’t just a hypothetical. This spring, CIS 5200 Machine Learning introduced oral components that pushed students to explain and defend their reasoning rather than simply submit polished results. That structure makes understanding visible and makes it harder to hide behind a convincing but hollow draft.

We can scale this without overwhelming staff: rotate brief vivas for a subset of assignments, run small-group defenses in recitation, and do quick reasoning checks during office hours. Pair that with clear AI-use rules: what’s allowed, what must be cited, and what must be explainable unassisted. And just as importantly, these conversations re-center what college should reward: clarity of thought, not just clarity of prose. They create space for Quakers to say, “I’m unsure,” and then reason forward anyway.

This isn’t a call to ban AI. This is a plea to prioritize education where AI can’t replace thinking — only support it. Because if Penn becomes a place where the main skill is how to prompt, paste, and polish, we’re not educating thinkers. We’re training efficient editors. Penn must aim higher.

AMOGH SARANGDHAR is a second-year master’s student studying computer science from New Jersey. His email is amoghsar@seas.upenn.edu.