
Is Your AI Strategy Theater? A Three-Test Boardroom Audit

[Header illustration: an empty executive boardroom staged as a theater, with parted stage curtains and a single spotlight pooling over the long table.]

A 2026 enterprise survey by WRITER and Workplace Intelligence, based on 1,200 C-suite executives and 1,200 non-technical employees actively using AI at work, asked executives whether their AI strategy was actually guiding internal decisions, or was more for show.

75% of executives picked “more for show.”

The number is striking on its own. It gets stranger inside the data around it.

The same survey: 64% of CEOs fear they could lose their job if they fail to lead their organization through the AI transition. 48% call current AI adoption disappointing. 60% of companies are planning layoffs for employees who do not adopt AI. And 39% have no formal plan to drive revenue from any of it.

Read those four numbers in sequence. The pressure to show AI motion is industry-wide. The doubt about whether that motion is producing value is industry-wide. The willingness to fire people over it is industry-wide. And the absence of a revenue thesis underneath it is also industry-wide.

This is the gap a CEO sits inside. Theater is not a fringe complaint from a few cynical operators. It is the modal admission of the people who own the strategy. Three-quarters of them are telling a survey, anonymously, that what they are presenting upstairs is performance.

The diagnostic worth running, then, is not why so many strategies are theater. The diagnostic worth running is what the other 25% built that the rest did not.

This article is that diagnostic, in three operator tests a CEO can run on a Tuesday, plus the counter-argument the tests have to survive, plus a five-question audit a board member could run in the next meeting without slides.

The Three Operator Tests

A real AI strategy passes three checks. None of them require new tooling. All three can be run by reading the strategy and asking the room a single question.

Test 1. It names a decision someone made because of it last week

The first check is simple. Ask anyone in the C-suite to name one specific decision the AI strategy caused in the last seven days. Not “informed by.” Not “in line with.” Made because of it. A vendor was not picked. A project was killed. A hire was deferred. A capability was bought instead of built. Something specific, with a date.

This catches the most common failure mode of the theater class: the strategy as horizon-setting document. Horizon-setting documents do not couple to operating decisions. They sit one altitude above them, gesturing at direction without ever forcing a choice. The “informed by” framing is the giveaway. A strategy that informs everything decides nothing.

A passing strategy looks like this: someone in the room can name the decision, the date, and what would have been chosen otherwise. The counterfactual is visible. A failing strategy produces sentences like “we are aligned with the AI strategy on this one” without ever saying which strategy clause did the aligning, or what the alignment ruled out.

Test 2. It sets a budget ceiling that forces tradeoffs

The second check is whether the AI budget has a ceiling that forces a choice between two real options. If every department head’s new tool request gets funded, the strategy is not allocating capital. It is rubber-stamping discretionary spend.

A budget ceiling is the structural feature that turns AI strategy from aspirational language into a governance instrument. Without one, the strategy can absorb every plausible AI use case the organization surfaces, because there is no scarce resource to ration. A scarce resource is what creates the priority conversation. The priority conversation is what produces a strategy.

A passing strategy looks like this: the AI budget has an explicit cap. Inside the cap, two reasonable use cases competed in the last quarter and one of them lost on stated grounds. A failing strategy looks like this: AI spend grew with the demand for AI spend, and no use case was deferred, declined, or scoped down.

This is the test that separates governance from procurement. Governance allocates against a constraint. Procurement processes requests. Most of the strategies sitting in the 75% are procurement workflows wearing governance labels. The four allocation rules for an AI budget made the same observation in spending terms: when no scarce resource forces a tradeoff between two reasonable line items, the budget conversation defaults to ratification rather than allocation.

Test 3. It survives the people who wrote it

The third check is the durability test. If the strategy was produced by an external consulting engagement, can the in-house leadership describe it without referring to the deliverables? Is there a named internal owner accountable for the decisions the strategy will produce next quarter? Does the strategy still apply if the consulting team offboards on Friday?

A strategy that lives inside a consulting deck and dies when the engagement closes was never internal guidance. It was an artifact. Artifacts are easier to produce than guidance, which is why three-quarters of executives are sitting on them.

A passing strategy has a named owner inside the company who can describe what the strategy says to start doing, what it says to stop doing, and what it says to defer. A failing strategy has a deck and a steering committee, both of which dissolve the moment the engagement ends.

A Counter-Argument Worth Taking Seriously

The strongest objection to all three tests is that they punish appropriate caution.

The argument runs like this. AI capability is moving too fast for any decision-grade artifact to survive a quarter. A strategy that names specific decisions, sets hard ceilings, and assigns durable owners is a strategy that bets against the velocity of the underlying technology. The 25% with crisp strategies are over-committed. The 75% labeled “theater” are preserving optionality in a market that has not stabilized. What looks like performance is intelligent waiting.

The argument is real. It deserves a response, not a dismissal.

The response is that the three tests do not require commitment. They require visible coupling between strategy and decisions, including the decision to defer. A wait-and-see strategy passes Test 1 if it names the specific deferrals: “we are not buying vendor X until the Q3 capability review.” It passes Test 2 if it sets an exploration ceiling instead of an open-ended discovery budget. It passes Test 3 if there is a named internal owner of the wait-and-see protocol who is accountable for revisiting it on a stated date.

The diagnostic is not measuring conviction. It is measuring whether the strategy produces traceable, dated coupling to operating decisions, including negative ones. A theater strategy fails the tests because it does not couple to anything, in either direction. An optionality strategy passes the tests as long as the optionality is itself a decision someone made and someone owns.

The objection is correct that some strategies should defer. The tests still apply. Deferral is also a decision.

What the 25% Likely Built Differently

The survey does not describe the strategies of the passing 25%. It only counts who is in which group. The shape of a passing strategy has to be reasoned out from the failure modes the tests catch.

Two structural features are common to strategies that survive the three checks.

The first is that the strategy is treated as an internal protocol for AI decisions, not a presentation deck for the board. The audience is the executive team making weekly choices about vendor selection, hiring, build-versus-buy, and budget reallocation. The artifact is whatever format makes those choices crisp: a one-page allocation rule, a vendor decision tree, a deferral list. The board sees it as a byproduct, not as the deliverable.

The second is that the strategy carries a quarterly review against the decisions it actually drove, not against the movement of the AI capability frontier. The review answers one question: “what did the strategy decide for us this quarter, and would we have decided differently without it?” Strategies that cannot answer the question on their first review get rewritten or retired. Strategies that pass the question accumulate authority, because the organization can point to specific decisions and say “we made that one because of this.”

These two features are why the 25% looks different from the outside. Shorter documents. Narrower scope. More frequent revision. Conversations about what got declined this quarter rather than conversations about what got ratified.

The Five-Minute Board Audit

A board member looking at this data and wondering where the company actually sits has five questions to ask. None of them require slides.

  1. Name one specific decision your AI strategy made happen this quarter. What was the date?
  2. What is the explicit AI budget ceiling, and what use case did not get funded because of it?
  3. Who is the named internal owner of the strategy after the consulting engagement ends?
  4. What does the strategy say to stop doing or to defer, not just what it says to start?
  5. When is the next review against the decisions the strategy actually drove?

Five answers. Five minutes. Either the answers exist, or the company is much closer to the 75% than it wants to admit. The format follows the five-question CEO Diagnostic introduced for AI budget allocation: short, specific, no slides, explicit about ownership. The diagnostic shape transfers because the failure shape transfers: strategies that do not name decisions look identical to budgets that do not name the workflow class.

The survey numbers are the floor of the conversation, not the ceiling. The WRITER and Workplace Intelligence data shows the rate at which strategies are theater. It does not show the cost of letting them stay that way. The cost is downstream: the 39% with no revenue plan, the 48% calling adoption disappointing, the mere 23% seeing ROI from agents. A strategy that does not guide internal decisions does not guide internal returns either.

If a CEO running through the five questions cannot answer the first one with a specific decision and a specific date, the strategy is a draft. Drafts are fine. Drafts presented as instruments are not.

Questions this article gets

Why is the 75% “theater” figure so striking when companies are obviously doing AI work?

The number measures whether the strategy guides internal decisions, not whether the company is doing AI activity. Most of the 75% are running pilots, signing vendor contracts, and producing strategy decks. What they are admitting in the survey is that none of it forces a specific decision a CEO can name on a Tuesday. Activity and strategy have separated. The WRITER and Workplace Intelligence question is the rare instrument that catches the separation explicitly, because it asks executives whether the strategy guides decisions, not whether they have one. A strategy can exist as a document and still fail the question, because existing as a document is not the same as guiding decisions. That is the gap the 75% admits.

How is the three-test diagnostic different from a typical AI maturity assessment?

Maturity assessments rate the company against a capability ladder: data infrastructure in place, governance committee formed, use cases identified, training delivered. The three tests do not rate maturity. They check whether the strategy has produced traceable, dated coupling to operating decisions in the last quarter. A company can score high on every maturity rubric and still fail Test 1, because passing Test 1 requires naming a specific decision the strategy caused, not naming a capability the company has built. The diagnostic measures decision-coupling, not infrastructure. The reason this distinction matters is that infrastructure can accumulate without ever producing a decision, and the survey's 75% is the population where that has happened.

What if the company is genuinely in wait-and-see mode? Doesn't the diagnostic punish appropriate caution?

The article addresses this counter-argument directly. The three tests do not require commitment, only visible coupling between strategy and decisions, including the decision to defer. A wait-and-see strategy passes Test 1 if it names the specific deferral with a date and a trigger condition (“not buying vendor X until the Q3 capability review”). It passes Test 2 if it sets an exploration ceiling instead of an open-ended discovery budget. It passes Test 3 if a named internal owner is accountable for the wait-and-see protocol and for revisiting it on the stated date. Caution is not the failure mode the diagnostic catches. Coupling absence is. Deferral is also a decision.

Ron Gold, Founder, A-Eye Level