Confessions of an AI-Optimist Turned Reluctant Cassandra
From democratizing genius to manufacturing mediocrity at industrial scale
Three years ago, I became one of higher education’s early AI evangelists, armed with a battle cry that now haunts me: ‘Use and abuse AI responsibly’. I genuinely believed generative AI would function as a skill leveler, lifting collective intelligence to unprecedented heights. Instead, I have watched it become a cognitive wrecking ball, and a recent MIT Media Lab study (Kosmyna et al. 2025) confirms my worst classroom observations: excessive reliance on AI-driven solutions contributes to ‘cognitive atrophy’ and shrinking critical thinking abilities.
My cautious optimism has not entirely evaporated. I have witnessed beautiful applications: students with brilliant ideas but poor organizational skills using AI to structure their thoughts coherently. Struggling readers developing self-directed learning methods to grasp difficult concepts faster. Powerful speakers who could not write leveraging AI to capture their original verbal insights. Students accessing materials in languages they cannot read, learning as though they were native speakers. These innovators taught me applications I never imagined, transparently sharing their ethical approaches. But here is the statistical gut punch: these responsible users represent less than five percent of my students, and that percentage shrinks every semester.
The remaining ninety-five percent? They are outsourcing everything: research, reading, thinking, writing, translation, editing. The evidence is damning: in one class of fourteen students, four selected the same book and produced nearly identical critical reviews. Worse yet, in a cohort of three students, two chose the same book and highlighted identical arguments. Before AI, I never encountered such patterns. These are not coincidences; they are the statistical fingerprints of delegated research. I have watched students spend minimal time on course platforms, never attend class, and never speak a word, yet produce polished, high-quality outputs almost instantly. Submission rates plummet when I explicitly ban AI for specific tasks. Flagged submissions showing 100 percent AI generation have multiplied. Students respond emotionally to AI-generated posts from classmates as if they were genuine human reflections.
Scholars at Harvard would recognize these symptoms immediately. Many students lack understanding of AI’s computational and Bayesian foundations, leading them to place excessive confidence in its outputs. They do not grasp that AI platforms draw from essentially the same databases -- ask different systems the same question and you receive remarkably similar answers, because the underlying data is essentially the same. As one researcher notes, AI excels at absorbing vast data and making predictive calculations, but ‘machines calculate and they do not have human experiences’. Use AI to write a job application letter and it will read like everyone else’s AI-generated letters -- and that sameness may cost you the job. The metaphor is clear: ‘the owl sits on your shoulder and not the other way around’.
The crisis deepens when we examine what AI cannot do. While it can engage in processes resembling critical thinking -- data analysis, problem-solving, modeling -- it lacks human experience, ethical reasoning, and moral insight. Its processes remain purely recursive. AI can tell you how to assemble components, but it cannot situate what it builds within human contexts. Machine learning depends on statistical adjustment; humans self-organize their lives around meaning. Taking notes by hand leads to greater recall than typing them. Predictive text features change our word choices. Given these trends, frequent, multi-context LLM use will inevitably alter how users approach reasoning tasks.
My classroom has become an unintended laboratory confirming these warnings. Weekly submissions, class participation, and formal papers increasingly display mechanical, illogical, vague ‘contributions’. Students who were once merely average but remain committed to honest learning now outpace their AI-dependent peers. Meanwhile, the tiny minority who genuinely leverage AI as a learning tool advance so rapidly that two alarming phenomena emerge: (i) desperate classmates cut more corners trying to catch up, and (ii) those falling behind descend at unprecedented rates. I genuinely wonder whether most students can ever bridge this expanding chasm.
AI is not democratizing education; it is accelerating inequality. The gap between the knowledgeable few and everyone else (including promising minds) widens at terrifying speed. We are not building future leaders capable of adding new value to society; we are mass-producing students who assume AI will solve their problems for them. But as researchers warn, human challenges are complex and can only be solved by humans. AI cannot yet engage in reflective thinking or deep systems analysis. The hype surrounding LLMs as general reasoners onto which we can offload thinking serves the interests of technology producers, not learners.
No learning occurs unless brains actively engage in making meaning. If students use AI to do work for them rather than with them, cognitive development flatlines. AI is here permanently, so we must determine how to collaborate with it in ways that advance our goals as educators, learners, and humans. It is time for every faculty member to share experiences, impressions, and best practices. If my insights resonate with colleagues in similar positions, we must collectively reverse these trends before we graduate an entire generation that mistook convenience for competence and pattern-matching for wisdom.
#AIinEducation #CriticalThinking #HigherEducation #DigitalLiteracy #CognitiveAtrophy
Reference:
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., ... & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.