AI and Critical Thinking: Augmentation or Replacement?
By Collin Lucken

Collin Lucken is a Hastings Postdoctoral Scholar in AI and Humanity at 91黑料, a role he started in September 2025. He came here from Cincinnati, where he recently completed a Master's in Robotics and AI and a PhD in Philosophy. His research centers on investigating how AI contributes to scientific progress. Part of that research concerns the public's attitude to science, including recent AI developments. Since completing his PhD, his work has revolved around understanding the impacts and possibilities for AI in 91黑料's liberal arts context, as well as the way 91黑料's community, broadly construed, relates to AI.
This is Part 1 of a five-part series examining AI's impact across different spheres of concern.
During the fall semester, I had many AI-related discussions with students, staff members, faculty, and community members, and I noticed that people at 91黑料 hold many distinct but interrelated worries about AI: how it's impacting them personally, society at large, the environment, and more. In response, I've been writing an essay tentatively titled "Five Spheres of AI Concern." My hope is that this essay will help interested readers develop a taxonomy of AI concerns, five "buckets" to dump their various AI worries in, and that doing so will clarify our AI moment and facilitate greater agency in the face of so much uncertainty.
For now, the five spheres I focus on are: the individual human mind, human culture, careers, environment, and information. Here, I present a brief introduction to the first sphere, the individual human mind, directed through the notion of critical thinking.
Sphere 1, the individual human mind, finds itself in a world now populated by what are sometimes called "alien intelligences" (Kelly 2014). Machines are now capable of intellectual tasks, like proving theorems and writing essays, that until recently were the purview of human beings alone. At a liberal arts college, we are challenged to fortify the essence of the human mind as creative, autonomous, and moral. Our AI moment requires rethinking pedagogy from the perspectives of both students and professors. Likewise, it requires reflection on the generation of human knowledge and the production of human art.
Given the diverse array of disciplines practiced at a liberal arts college, and the correspondingly varied accounts of the human mind appropriate to each, thinking about the individual human mind and its potential transformation through the use of AI could be approached from many angles. To facilitate a relatively unified and ecumenical discussion, I'll focus on one central concept that is arguably an essential component of all learning at the liberal arts college: critical thinking. In a nutshell, AI offers students, and increasingly researchers and faculty, either a tool for augmenting their critical thinking or a replacement for that very faculty itself. When students use generative AI systems to produce assigned essays in their classes, for instance, they are cheating themselves of the opportunity to exercise and improve their critical thinking.
What is Critical Thinking?
The American philosopher John Dewey, in his foundational work How We Think, defines critical thinking (he uses the equivalent moniker "reflective thought") as "active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it, and the further conclusions to which it tends" (Dewey 1910, 6). The idea is that, when one thinks critically about something, say a claim in the form of a sentence, one considers what reasons and evidence there might be indicating the truth of the claim, as well as what other claims would have to be true if this claim were true.
In a more recent report commissioned by the American Philosophical Association, titled "Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction," a group of philosophical and scientific experts on critical thinking defined the skill as "purposeful, self-regulatory judgement which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgement is based" (Facione 1990, 3). This definition, arrived at by group consensus among the report's authors, adds a further explicit requirement (though surely one Dewey would have agreed with): critical thinking demands not just these skills but also a proper disposition toward reflective, thorough thinking and toward the truth. The authors argue that a critical thinker must also be "habitually inquisitive, well-informed, trustful of reason, open-minded, and flexible" (Facione 1990, 3). A genuine critical thinker, then, is not merely someone capable of examining the logical support and implications of a given claim, or of producing a conceptual explanation of key terms. They are someone who habitually engages in that kind of thinking as part of how they go about their lives: personally, interpersonally, socially, politically, professionally, and so on.
Indeed, that disposition and kind of thinking are crucial for so-called "knowledge workers" (Drucker 1959, 122), who apply "vision," knowledge, and concepts in their work; this tends to be the kind of career a liberal arts student enters after college. Moreover, in a democracy where every citizen is asked to make informed decisions about whom and what to vote for, the capacity to evaluate claims about economic and political matters, for instance, is likewise essential. Indeed, the scope of application for critical thinking seems to cover nearly all of human life.
The Risk: Cognitive Offloading
Where the AI concern is involved, there is good news and bad news for critical thinking. A number of studies and surveys examining the effects of large language model use on users' critical thinking have recently been published, with mixed conclusions. In an article titled "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers" (Lee et al. 2025), the authors observe a trend among the study's participants wherein users of generative AI exhibited less critical thinking when they were more confident in the AI's answers, and vice versa: an expert in a given field is far more likely to engage critically with the output of an AI in their area of expertise than in other areas. In another study, "Critical thinking in the age of generative AI: implications for health sciences education," Naqvi et al. (2025) examine the impact of generative AI on health sciences education, highlighting a paradox whereby increased efficiency may lead to "cognitive complacency" and a decline in human reasoning. The study warns that over-reliance on AI-generated outputs can cause students to bypass the rigorous mental effort required to solve complex problems, potentially atrophying their independent decision-making skills. Because these tools can produce convincing but inaccurate information, the authors argue, users risk prioritizing the speed of an answer over the critical insight needed for safe professional practice.
While the literature on generative AI and critical thinking continues to grow quickly, a tentative early conclusion is that these AI systems present their users with a hard-to-resist opportunity for what's called "cognitive offloading": essentially, letting the machine think for you, either partly or totally. Indeed, here is a place where Facione et al.'s dispositional requirement for critical thinking shines through. It seems that, even more so in the age of increasingly sophisticated AI tools, users will need not only to learn crucial critical thinking skills but also to habituate critical thinking as part of who they are.
The Promise: Augmented Intelligence
On a more optimistic note, AI users who do habituate critical thinking as part of their AI use show real benefits. Naqvi et al. use the term "augmented intelligence" to capture the possibility of using AI for increased efficiency. In their medical context, AIs are assisting in clinical diagnosis, structuring intervention regimens, and even automating literature reviews. As medical professionals and other knowledge workers learn to harness these tools, there do seem to be real gains in human cognitive capacity to be made.
While critical thinking is an essential aspect of the individual human mind, and AI use may well be affecting it, it is far from the only concern one might have about how AI is shaping the mind. Users, especially younger people, increasingly turn to AI for advice on their health, their personal relationships, and their career trajectories. AI may also offload or augment deeply human cognitive faculties like moral reasoning and reflection, creativity, and imagination. Reflection on critical thinking and how it is taught and practiced at 91黑料 is, I would argue, urgent in the face of AI's continued cognitive expansion. But what other aspects of our cognitive lives must we protect our students and ourselves from "offloading"? Further, many philosophers and cognitive scientists argue that the mind is essentially social, and they might balk at my choice to center the "individual" mind in the first place. There seem to me so many valid questions to ask about the effects of AI use on human cognition, and vanishingly little time to pursue good answers.
References
Dewey, John. 1910. How We Think. Boston: D.C. Heath & Co.
Drucker, Peter F. 1959. Landmarks of Tomorrow. New York: Harper & Brothers.
Facione, Peter A. 1990. Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction. Millbrae, CA: California Academic Press.
Kelly, Kevin. 2014. "Call Them Artificial Aliens." Edge.org. https://www.edge.org/response-detail/26097.
Lee, Hao-Ping (Hank), Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, and Nicholas Wilson. 2025. "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers." In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. Yokohama, Japan: ACM. https://doi.org/10.1145/3706598.3713778.
Naqvi, Waqar M., Rohini Ganjoo, Michael Rowe, Aishwarya A. Pashine, and Gaurav V. Mishra. 2025. "Critical Thinking in the Age of Generative AI: Implications for Health Sciences Education." Frontiers in Artificial Intelligence 8: 1571527. https://doi.org/10.3389/frai.2025.1571527.