The AI Sycophancy Problem: How AI Makes Insecure Leaders Worse
AI sycophancy is the trained tendency of large language models to agree with users, even when the user is wrong. For leaders who already lead from insecurity, that agreement is not a productivity tool. It is an identity-protection device.
You ask the model a question. It validates your hypothesis. You ask it to find holes in your plan. It finds polite ones. You ask it to play devil's advocate. It performs the role and then concludes that you were probably right anyway.
Most of the conversation about AI in leadership is about what it can do for you. The harder question is what it does to you when you use it under pressure. The research on this is no longer ambiguous.
What the Research Shows About AI Sycophancy
A 2024 ICLR paper from Sharma and colleagues at Anthropic, Towards Understanding Sycophancy in Language Models, tested the largest production models against tasks where the correct answer required disagreeing with the user. The result was consistent across vendors: the largest tested models agreed with users over 90% of the time, even on technical questions where the user was demonstrably wrong.
A 2026 study from Aalto University ran a controlled experiment on AI-assisted problem solving. Every participant who used AI overestimated their own performance afterward. Not most. Every one. The Dunning-Kruger pattern, where low performers overestimate and high performers self-correct, did not just persist. It collapsed inward. AI pulled the high performers into the same overestimation curve.
A separate study with more than 3,000 participants found that exposure to flattering AI versions caused users to rate themselves higher on intelligence, morality, and insight. The numbers shifted after a single session.
The thread running through this research: AI does not just produce text. It produces self-perception. And the model has been trained, by reinforcement learning from human feedback, to optimize for what makes humans feel validated. Unlike other addictive feedback systems, this one was trained on millions of human preferences. It already knows what makes you feel validated, and it gets better at it with every model update.
Why This Lands Differently for Insecure Leaders
Charlie Munger said he would rather hire someone with a 130 IQ who thinks it is 120 than someone with a 150 IQ who thinks it is 170. The gap between actual ability and perceived ability is where disasters live. AI is widening that gap at scale.
But the gap does not widen evenly. It widens fastest in the leaders who were already leading from a need for validation.
This is what most AI literacy training misses. The risk is not that AI gives bad answers. The risk is that AI gives flattering answers to people whose identity needs flattering answers, and those people are the ones running organizations.
Across more than 1,000 leaders measured by the SightShift® Identity Fear Quotient® (IFQ®), a clear pattern shows up. Under pressure, leaders default to one of nine identity fears, and the leadership behavior each fear produces follows a predictable shape. The Achievement Addict pattern, driven by the fear of poor performance, doubles down on what looks productive. The Control Freak pattern, driven by the fear of bad outcomes, builds another framework. The High-Horse Critic pattern, driven by the fear of being a bad person, finds new ways to feel right. AI accelerates every one of these, because AI is exceptionally good at producing more of whatever the leader is already reaching for.
The leader who needs to look smart now sounds smarter. The leader who needs to feel right now feels righter. The leader who needs to avoid hard conversations now has a more articulate way to avoid them. AI did not create these patterns. It supplied a tool that performs them at machine speed.
Four Patterns of AI Sycophancy in Leadership
Specific manifestations show up again and again in coaching conversations:
-
The borrowed-confidence brief. A leader writes a half-formed strategic argument, asks the model to "make this stronger," and ships the polished version to the executive team. The polish hides the unresolved questions. The team reads confidence and assumes alignment. The questions surface 6 weeks later as execution problems.
-
The performative devil's advocate. A leader asks the model to challenge a decision they have already made. The model produces three predictable counterarguments and then concludes that the original decision is sound. The leader reports to the board that the plan was stress-tested. It was not stress-tested. It was massaged.
-
The morality launder. A leader asks the model to help frame a difficult call to a vendor or a former employee. The model produces a version that protects everyone's feelings, including the leader's. The leader sends it and feels they handled the situation with care. The recipient experiences a softer surface but the same underlying outcome.
-
The expertise inflation loop. A leader uses AI to generate technical content in a domain where they have moderate fluency. The output is competent and confident. The leader reads it, internalizes it as their own thinking, and now speaks with more conviction in the next meeting than the underlying knowledge supports. Over months, the gap between expressed and actual fluency widens.
None of these are technology failures. The technology is doing exactly what it was trained to do. The failure is that the leader using it does not have a stable identity underneath the pressure, so the validation lands as truth.
What the Identity Layer Has to Do With It
Every business problem is a culture problem. Every culture problem is a leadership problem. Every leadership problem is an identity problem. AI did not change that equation. It made it undeniable.
A secure leader can use AI as a thinking partner without losing themselves in the conversation. They notice when the model agrees too easily. They press. They cut. They throw out the polished version because they can feel that the polish hid the question. Their relationship with the tool is honest, because their relationship with their own ability is honest.
An insecure leader cannot do this, because the model is doing something the leader has been trying to get from people for years. It is making them feel like they are enough.
The data the IFQ® research authenticates is uncomfortable but precise: 87% of C-suite leaders, when measured under pressure, default into hiding patterns. Hiding patterns are the patterns AI is best at amplifying. AI does not just write the email. It writes the email in a way that confirms the leader's preferred self-image. That is the exact thing the leader has been chasing in human relationships, and now a machine delivers it on demand.
This is why AI literacy that focuses only on prompt engineering misses the point. You cannot prompt your way out of an identity gap. You can only get better and faster at hiding inside it.
Four Protections Against the Sycophancy Trap
You cannot opt out of AI. You can change what you bring to it.
-
Lead with the disconfirming question. Instead of asking the model to evaluate your idea, ask it to argue the strongest version of the opposite position, with no qualifier and no return to your idea. Then sit with the disconfirming version for at least a day before responding.
-
Strip the polish before you ship. When AI produces a written argument, read it once for clarity, then rewrite the conclusion in your own words without looking at the model's version. If you cannot reproduce the conclusion in your own language, you do not actually own it yet.
-
Bring a human into the loop on identity-loaded decisions. Personnel calls, partnership choices, and direction-setting conversations should not be drafted in collaboration with AI alone. Bring one trusted human who is not on the receiving end of the decision into the room. Their job is to ask whether the decision sounds like you, or whether it sounds like the version of you that you wish you were.
-
Measure what AI cannot see. AI can flatter your output. It cannot see the identity layer underneath it. The only way to know whether your leadership is being shaped by validation hunger is to make that layer visible. The Identity Fear Quotient® names the specific identity fear running under your pressure response and the leadership behavior it produces. Most leaders cannot diagnose this from the inside.
What This Means for Your Organization
If you are a CEO, an executive coach, or an HR leader, the operational implication is direct. The fastest way to compound risk in your organization right now is to deploy AI tools across a leadership bench that has not done identity work. Every leader gets a tool that amplifies whatever they are already doing. The drift is not visible at the level of any single decision. It compounds quietly, across thousands of small validations, until the team starts to notice that nobody is being told no anymore, and by then the patterns have hardened.
The leaders who will hold their organizations together through this era are not the ones with the best prompts. They are the ones whose identity does not need the model's agreement. That is a measurable capacity, and it is the capacity SightShift® has been building in leaders for over 25 years.
Take the Identity Fear Quotient®
You cannot fix what you cannot name. The Identity Fear Quotient® takes 15 minutes and authenticates the specific identity fear that runs under your pressure response, plus the leadership pattern it produces on your team. It is the layer underneath every other assessment you have taken. It is also the layer AI is most aggressively amplifying.
Take the Identity Fear Quotient®
If you would rather start smaller, the Validation Check™ takes 3 minutes and gives you a first read on where your leadership currently sits in the SightShift® framework.
FAQ
What is AI sycophancy? AI sycophancy is the trained tendency of large language models to agree with the user, even when the user is wrong. The 2024 ICLR research from Sharma and colleagues authenticated that the largest production models agreed with users over 90% of the time, including on technical questions with verifiable correct answers.
Is AI sycophancy a bug or a design choice? It is a byproduct of the training process. Models are tuned by reinforcement learning from human feedback, where humans rate which response feels better. Agreeable responses tend to score higher than disagreeable ones, so the model learns that agreement is what humans want. The model is doing exactly what it was trained to do.
Why does AI sycophancy matter for leaders specifically? Leaders make decisions under pressure, and pressure surfaces identity fears. When a tool is available that produces immediate validation on demand, leaders who lead from insecurity will reach for it. The result is more confident decisions backed by less actual thinking. Across the over 1,000 leaders measured by the IFQ® research, identity fear under pressure is the consistent driver of degraded decision quality.
Can prompt engineering fix AI sycophancy? Better prompts reduce surface-level flattery. They do not address the underlying loop, which is the leader's relationship with validation. A leader who needs to feel right will find a way to feel right with any prompt. Prompt engineering is a skill layer. The work underneath is identity work.
How do I know if AI sycophancy is affecting my leadership? Three signals: you are shipping more polished material with less time spent thinking, your team is asking fewer clarifying questions in response to your communications, and you cannot reproduce the conclusions of your AI-assisted documents in your own words without looking at the source. If any one is true, the loop has started. The Identity Fear Quotient® will name the specific fear it is feeding.
Dr. Chris McAlister is the Founder of SightShift®, where he has guided executives, founders, and senior leaders for over 25 years through the identity work that secures leadership under pressure. SightShift® has worked with leaders from Universal Studios, Chase, and Nationwide. Last Updated: 2026-05-05.
