The Case for Using AI Less

Everyone is measuring how much AI their team uses. Almost nobody is tracking what their team can no longer do on its own.

The efficiency case is settled

Faster reports. Cleaner data. Better first drafts. No serious person argues against that. The question is what happens after the efficiency phase ends and the dependency phase begins.

Speed and capability are not the same thing. A team that produces deliverables faster with AI is not necessarily a team that thinks more clearly. It may be a team that has outsourced the thinking.

The quiet risk

When AI writes the first draft of every strategy document, outlines every analysis, and summarizes every meeting, the team gets faster. It also gets dependent.

The risk is quiet. Nobody notices that the analyst who used to build sharp financial models now struggles without a template. That the strategist who used to read the market and form a clear direction now waits for the AI to suggest one. The skill didn’t disappear overnight. It atrophied from disuse.

Research supports this pattern. A 2025 study from Microsoft Research and Carnegie Mellon found that workers who trusted AI more reported exerting less critical-thinking effort. The pattern was consistent: higher confidence in the tool, lower engagement with the work. The gains in output speed were real. So were the losses in cognitive independence.

Why this matters for decisions

A team that can only think clearly with AI assistance is a team that can’t recover when the AI gets it wrong.

And in any decision that involves judgment, context, or nuance, the AI gets it wrong more often than its confidence suggests. It generates plausible answers. Plausible is not the same as correct, especially when the stakes involve strategy, people, or risk.

The CEO who receives an AI-generated strategic analysis needs people who can evaluate it independently. If nobody in the room can form their own read on the market, the AI’s output becomes the default. Not because it was right, but because there was no alternative perspective.

The management question

This is not an argument against AI. It is a management question: which tasks should stay manual on purpose, not because AI can’t do them, but because your team’s judgment depends on practicing them?

Three areas where intentional manual work protects capability:

Strategy sessions where the first draft is human. Before anyone opens a tool, the team forms its own direction. AI can refine, challenge, or pressure-test. But the initial thinking needs to come from people who understand the context.

Analysis reviews where the analyst forms their own read first. Check the model, yes. But not before the analyst has a hypothesis of their own. If the analyst’s only function is validating AI output, you have a reviewer, not an analyst.

Decision briefings where the recommendation has a human author. Someone needs to own the conclusion. “The AI suggested this” is not accountability. “I reviewed the data and recommend this” is.

The principle

More AI is not always better AI. The organizations that get the most from AI are the ones that know where to draw the line. Not the line between “AI can do this” and “AI can’t do this.” The line between “AI should do this” and “our people need to keep doing this.”

That line is a management decision. And it is one of the most important AI decisions a leader will make.


Related: “Your AI Tools Are Multiplying. Your People Aren’t Keeping Up.” examines BCG’s research on where the productivity tipping point actually lives.

Ron Gold, Founder, A-Eye Level