Why We Overestimate Ourselves When AI Helps Us Think

New research reveals a troubling pattern: AI tools genuinely improve our performance while inflating our beliefs about how well we would perform without them.

Person confidently presenting at whiteboard with AI assistant interface glowing subtly in background

A curious thing happens when people use AI to help them think. They get better at the task, which is expected, but they also become convinced they’re much better than they actually are, which isn’t. This gap between actual performance and perceived competence has been documented in a growing body of research, and the implications extend far beyond individual productivity. We may be creating a generation of professionals who have genuinely useful AI-assisted skills but dangerously inflated beliefs about what they can do without that assistance.

The most recent study to confirm this pattern, published in early 2026, tested participants on logical reasoning problems both with and without AI assistance. When using AI, participants improved their scores significantly. Nothing surprising there. But when researchers asked participants to estimate how well they would perform on similar problems without AI help, something interesting emerged. Those who had used AI systematically overestimated their unassisted abilities, while those who had never used AI for the task showed no such inflation. The AI hadn’t just helped them perform better. It had convinced them they were smarter than they were.
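
To make the shape of that gap concrete, here is a minimal sketch in Python, using made-up numbers rather than the study’s data, of how such an overconfidence gap could be quantified: each participant’s predicted unassisted score minus their actual unassisted score, averaged within each group.

```python
from statistics import mean

# Hypothetical data: each tuple is (predicted unassisted score, actual unassisted score)
# on a 0-100 scale. These values are illustrative only, not taken from the study.
ai_group = [(78, 62), (85, 70), (72, 65), (90, 71)]
control_group = [(64, 63), (58, 60), (70, 68), (61, 59)]

def mean_overconfidence(group):
    """Average gap between what people predict they can do alone and what they actually do."""
    return mean(predicted - actual for predicted, actual in group)

print(f"AI-assisted group overestimates by {mean_overconfidence(ai_group):.1f} points")
print(f"Control group overestimates by {mean_overconfidence(control_group):.1f} points")
```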

This finding connects to a broader question about what happens when cognitive tools become so seamless that we forget they’re tools at all. We’ve been augmenting our cognition with technology for millennia, from writing systems that externalize memory to calculators that offload arithmetic. But previous tools were obviously tools. You knew you were using a calculator. The question now is whether AI assistance has become so fluid, so conversational, so integrated into our thinking process that we’ve started to mistake its contributions for our own insights.

The Metacognitive Blind Spot

Psychologists have long studied metacognition, our ability to think about our own thinking, and they’ve found it’s surprisingly unreliable. People consistently misjudge their competence, sometimes overestimating and sometimes underestimating depending on the domain and their actual skill level. The classic finding, often called the Dunning-Kruger effect, shows that people with limited knowledge in a domain tend to overestimate their abilities, partly because they lack the expertise to recognize what they don’t know.

AI assistance appears to create a new version of this blind spot. When you ask an AI a question and it provides a clear, well-reasoned answer, the experience feels collaborative rather than dependent. You’re the one who thought to ask the question. You’re the one who recognized the good answer when you saw it. You’re the one who integrated that answer into your broader thinking. The AI just provided some information along the way. This framing makes it easy to internalize the AI’s contribution as your own insight, especially when the assistance happens quickly and conversationally.

Split illustration showing perceived competence versus actual competence with AI assistance
The gap between perceived and actual competence widens when AI assistance feels seamless and collaborative.

The research suggests this isn’t just a failure to notice AI’s contribution. It’s an active process of credit misattribution that happens even when people are explicitly told that AI is helping them. In one experiment, participants were reminded before each task that they were using AI assistance. Despite these repeated reminders, they still overestimated their unassisted abilities afterward. Knowing you’re being helped isn’t enough to prevent you from unconsciously taking credit for the help you received.

This has obvious implications for professional contexts where AI tools are increasingly common. A lawyer who uses AI to research case law might develop an inflated sense of their own legal knowledge. A programmer who relies on AI code completion might overestimate their ability to write code from scratch. A medical professional who uses AI diagnostic tools might become too confident in their unassisted clinical judgment. In each case, the AI creates real value by improving performance, while simultaneously creating a distorted self-image that could lead to poor decisions when AI assistance isn’t available.

The Fluency Trap

Part of what makes AI assistance so cognitively invisible is its fluency. When an AI produces text that flows naturally, answers questions conversationally, and generates ideas that seem plausible, it creates what psychologists call processing fluency, a sense of ease and naturalness that our brains interpret as a signal of truth and competence. We experience the AI’s output as obviously correct, and because the experience is seamless, we don’t stop to question where the correctness came from.

This fluency trap is distinct from the older problem of automation bias, the tendency to over-rely on automated systems because we assume computers are more accurate than humans. Automation bias makes us trust the machine too much. The fluency trap makes us trust ourselves too much, because we’ve unconsciously absorbed the machine’s capabilities into our self-concept. The result is a kind of cognitive chimera: actual abilities that are machine-dependent combined with a self-image of machine-independent competence.

The researchers found that fluency plays a measurable role in the overconfidence effect. When AI assistance was made deliberately clunky and awkward, with obvious delays and mechanical responses, participants showed less overconfidence afterward. The awkwardness served as a constant reminder that an external tool was involved, making it harder to mistake the tool’s contributions for native ability. This suggests that the seamlessness we value in AI interfaces may come with hidden cognitive costs.

Visualization of smooth flowing AI responses blending imperceptibly into human thought bubbles
The seamlessness of modern AI interfaces makes it difficult to distinguish machine contributions from our own thoughts.

Consider how this differs from using a traditional reference tool like an encyclopedia. When you look something up in a book, the boundary between your knowledge and the book’s information remains clear. You know what you knew before consulting the book and what you learned from it. But when you have a conversation with an AI, that boundary blurs. The AI’s responses feel like natural extensions of your own thinking rather than discrete chunks of external information. You might struggle afterward to remember which insights were yours and which the AI provided.

The Expertise Paradox

Interestingly, the overconfidence effect isn’t uniform across skill levels. People with moderate existing expertise show the largest gaps between perceived and actual unassisted ability. Beginners know they’re beginners and tend to attribute AI improvements to the AI. True experts have enough knowledge to maintain accurate self-assessment even when AI contributes. It’s the middle group, competent but not expert, that’s most vulnerable to absorbing AI capabilities into their self-image.

This creates a paradox for professional development. The people most likely to benefit from AI assistance, those who have solid foundations but haven’t mastered their domain, are also the people most likely to develop inflated self-assessments from using it. They’re skilled enough that AI augmentation feels like a natural extension of their abilities, but not skilled enough to accurately judge where their abilities end and AI’s begin.

The implications for training and education are significant. If students learn with constant AI assistance, they may develop skills that are genuinely useful in AI-augmented environments while simultaneously developing inaccurate beliefs about their standalone capabilities. When they encounter situations where AI isn’t available or appropriate, they may make poor judgments based on an inflated sense of what they can do. The challenge for educators is finding ways to build AI-augmented competence while maintaining accurate metacognition.

Some researchers have suggested that deliberate practice without AI, combined with explicit calibration exercises where people predict their performance and then see actual results, might help maintain accurate self-assessment. Others argue that as AI becomes ubiquitous, unaugmented performance matters less, and we should simply accept that future competence will be inherently tool-dependent. The debate mirrors broader questions about how we should think about human capability in an age of pervasive cognitive technology.
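
As a rough illustration of what such a calibration exercise might look like in practice, here is a hypothetical sketch rather than a protocol drawn from the research: log a prediction before each no-AI practice session, record the actual result afterward, and watch whether the gap shrinks over time.

```python
from statistics import mean

# Hypothetical calibration log: (predicted score, actual score) for successive
# practice sessions done without AI assistance, on a 0-100 scale.
sessions = [(85, 60), (80, 63), (75, 66), (70, 68), (68, 67)]

gaps = [predicted - actual for predicted, actual in sessions]

for i, gap in enumerate(gaps, start=1):
    print(f"Session {i}: overestimated by {gap} points")

# A shrinking average gap across recent sessions suggests self-assessment is recalibrating.
print(f"Average gap, first two sessions: {mean(gaps[:2]):.1f}")
print(f"Average gap, last two sessions:  {mean(gaps[-2:]):.1f}")
```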

The Social Dimension

The overconfidence effect has a social dimension that makes it particularly concerning. When people believe they’re more competent than they are, they seek positions of authority, advocate for their opinions more forcefully, and resist feedback that challenges their self-image. If AI systematically inflates the confidence of its users, we might expect to see these users dominating discussions, securing leadership positions, and shaping decisions in ways that don’t reflect their actual unaugmented capabilities.

Meeting room scene with confident AI-assisted presenter while others defer to their apparent expertise
Overconfident AI users may dominate professional environments, shaping decisions based on inflated self-assessments.

This connects to how recommendation algorithms shape our information environment in ways we often don’t notice. Just as algorithms curate content that reinforces our existing preferences, AI assistance may curate our self-image by consistently helping us succeed and thereby convincing us that success comes from our own abilities. The technology becomes invisible through its effectiveness, which makes its influence on our self-perception particularly hard to detect or resist.

There’s also a competitive dynamic at play. In professional environments where AI use is common but not universal, people who use AI have real performance advantages. If they also develop inflated confidence, they may project more competence than even their AI-augmented performance warrants, widening their apparent edge over non-users. This creates selection pressure for AI adoption not just for the performance benefits but for the confidence benefits, potentially spreading both the advantages and the metacognitive distortions throughout professional communities.

The sunk cost fallacy offers an interesting parallel. Just as we struggle to abandon investments we’ve already made, even when abandonment is rational, we may struggle to abandon self-assessments we’ve formed through AI-augmented success. Having internalized a sense of competence, we resist information that would require revising that self-image downward, even when the evidence for revision is clear.

The Bigger Picture

What does it mean for human cognition when our most powerful thinking tools are also our most invisible ones? This question has no simple answer, but the research on AI-induced overconfidence suggests we need to think carefully about the relationship between cognitive augmentation and self-knowledge.

Previous cognitive tools, from writing to calculators to search engines, extended human capability while remaining obviously external. You knew your notes were in a notebook. You knew your calculations were on a calculator. You knew your information came from Google. The externality of these tools preserved a clear boundary between self and tool, even as that boundary became more permeable with each generation of technology.

AI threatens to dissolve this boundary in ways we’re only beginning to understand. When a tool thinks conversationally, responds to natural language, and produces outputs that feel like natural extensions of our own reasoning, the usual markers of externality disappear. We’re left with enhanced capability and diminished self-knowledge, a combination that creates new kinds of cognitive vulnerability.

This doesn’t mean we should abandon AI assistance. The benefits are too significant, and the technology is too embedded in professional workflows to retreat from. But it does suggest we need new practices for maintaining accurate self-assessment in AI-augmented environments. Deliberate practice without AI, calibration exercises, explicit acknowledgment of AI contributions, and cultural norms that value accurate self-assessment over confidence could all help preserve the metacognitive clarity that AI assistance tends to erode.

The deeper issue is whether our conception of individual competence needs updating for an age of ubiquitous cognitive augmentation. Perhaps the relevant question isn’t what you can do without AI, but how effectively you can collaborate with AI and how accurately you understand the nature of that collaboration. If so, the overconfidence researchers have documented isn’t just a bug to be fixed but a symptom of a broader mismatch between our inherited concepts of individual ability and the reality of technologically augmented cognition.

For now, the practical takeaway is simple: AI makes you better and makes you think you’re even better than that. Knowing this pattern exists is the first step toward correcting for it. The second step is developing habits that keep you honest about where your thinking ends and your tools begin, even when those tools are designed to feel like a seamless extension of yourself.

Written by

Casey Cooper

Topics & Discovery Editor

Casey Cooper is a curious generalist with degrees in both physics and history, a combination that reflects an unwillingness to pick just one interesting thing to study. After years in science communication and educational content development, Casey now focuses on exploring topics that deserve more depth than a Wikipedia summary. Every article is an excuse to learn something new and share it with others who value genuine understanding over quick takes. When not researching the next deep-dive topic, Casey is reading obscure history books, attempting to understand quantum mechanics (still), or explaining something fascinating to anyone who will listen.