
70% of the S&P 500 Discussed AI on Their Last Earnings Call. Only 1% Quantified What It Did.

5 min read

Goldman Sachs published a report in March 2026 with the blunt title “AI-nxiety.” The headline finding was that there is no meaningful relationship between AI adoption and productivity at the economy-wide level. AI’s measured contribution to GDP sits at 0.1 to 0.2 percentage points, a number that would disappoint anyone who has watched the S&P 500 rally on AI promises for two years.

But the same report contained a second finding that most coverage buried. When Goldman’s analysts looked at specific use cases where management teams actually measured outcomes, two domains stood out: customer support and software development. In those cases, the median productivity gain was 30%.

The gap between those two findings is the entire AI measurement problem in one data set. 70% of S&P 500 companies discussed AI on their last earnings call. 54% discussed it in the context of productivity. But only 10% quantified the impact on specific use cases, and just 1% quantified what AI did to their earnings.

The Solow Paradox Returns

Fortune drew the historical parallel explicitly. In 1987, economist Robert Solow observed that “you can see the computer age everywhere but in the productivity statistics.” For a decade, companies bought PCs, installed networks, and computerized operations with no measurable productivity gain at the macro level. The gains eventually appeared, but only after organizations stopped adding computers to existing workflows and started redesigning workflows around what computers made possible.

The AI version of that paradox is now playing out with remarkable similarity. An NBER study of 6,000 executives across the US, UK, Germany, and Australia found that while 67% of firms report using AI, 90% report no measurable impact on employment or productivity over three years. Executives project just a 1.4% productivity increase over the next three years, a number so modest it barely registers against the hundreds of billions being invested.

The Solow paradox took roughly a decade to resolve. It resolved not because computers got better, but because organizations finally restructured around them. The implication for AI is the same: the returns will not show up by adding AI to existing processes and measuring with existing metrics. They will show up when the processes and the metrics both change.

The Measurement That Fails and the One That Works

The reason most companies see no AI return is structural, not technological. They apply capital-expenditure logic to AI, expecting a clear payback within two quarters. When the return does not appear in the P&L on schedule, the conclusion is that AI is not working.

McKinsey’s 2026 “Superagency” report puts the maturity problem in sharp focus. 92% of companies plan to increase AI spending, yet only 1% consider themselves mature in AI deployment. Just 19% of US C-suite executives reported revenue increases over 5% from generative AI, while 36% reported no revenue change at all.

Meanwhile, Deloitte’s State of AI survey found that 66% of organizations report productivity gains, but only 20% report revenue increases. Since 74% hope to grow revenue through AI, that leaves a 54-point gap between aspiration and reality. The pattern across every major survey is the same: efficiency gains are real and widespread, but they are not translating into the financial metrics that boards and investors track.

The reason is that traditional ROI was designed for machines that depreciate. You buy a machine, it produces output, the output generates revenue, the machine loses value over time. The math is linear and the timeline is predictable. AI does not depreciate in the same way. It compounds. An employee who learns to use AI for research this quarter uses that capability to do better analysis next quarter, which leads to better decisions the quarter after that. The value is real, but it shows up as capability expansion, not as a line item.
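To see why a two-quarter payback test misses a compounding asset, here is a toy sketch in Python. Every number in it is an illustrative assumption, not data from the surveys above: a machine that returns a fixed amount each quarter versus an AI capability that starts smaller but compounds as skills build on skills.

```python
# Toy comparison: linear (capex) value vs. compounding (capability) value.
# All figures are illustrative assumptions, not survey data.

QUARTERS = 8
machine_per_quarter = 10_000   # fixed output: linear and predictable
ai_base = 6_000                # smaller starting value...
ai_growth = 0.15               # ...that compounds 15% per quarter (assumed)

machine_total, ai_total = 0.0, 0.0
for q in range(1, QUARTERS + 1):
    machine_total += machine_per_quarter
    ai_total += ai_base * (1 + ai_growth) ** (q - 1)
    print(f"Q{q}: machine cumulative {machine_total:>8,.0f} | "
          f"AI cumulative {ai_total:>8,.0f}")

# After two quarters the machine looks like the better investment;
# by Q8 the compounding line has caught up. A payback test that
# stops at Q2 never sees the crossover.
```

Under these made-up parameters the machine wins the first several quarters and the compounding capability overtakes it by the eighth, which is exactly the window a two-quarter payback test is blind to.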

The Hidden Cost That Distorts the Numbers

A second measurement problem never shows up on most dashboards. Workday’s 2026 research surveyed 1,600 employees and found that 85% reported saving 1 to 7 hours per week with AI tools. That sounds like a clear win until the next finding: 37 to 40% of the time “saved” gets consumed by reviewing, correcting, and verifying AI-generated output. Workday named this the “AI tax.”
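The arithmetic is simple but worth making explicit, because the tax applies to the gross number everyone quotes. A minimal sketch, taking the top of Workday’s 37 to 40% range and an assumed mid-range gross saving of five hours:

```python
# Net time saved after the "AI tax" (reviewing, correcting, verifying).
# The 40% rate is the top of Workday's reported range; the 5-hour
# gross saving is an assumed example from the middle of the 1-7 range.

gross_hours_saved = 5.0   # assumed gross saving per week
ai_tax_rate = 0.40        # Workday: 37-40% of "saved" time consumed

net_hours_saved = gross_hours_saved * (1 - ai_tax_rate)
print(f"Gross: {gross_hours_saved:.1f} h/week -> net: {net_hours_saved:.1f} h/week")
# Gross: 5.0 h/week -> net: 3.0 h/week
```

A dashboard that reports the gross five hours overstates the real gain by two-thirds.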

The AI tax explains why executive perception and employee reality diverge so sharply. A Section AI Consulting survey of 5,000 white-collar workers found that over 40% of executives claim AI saves them 8 or more hours per week, while two-thirds of non-management staff report saving under 2 hours or no time at all. The gap extends beyond time savings into every dimension of AI strategy: 80% of C-suite say their company has a clear AI policy, compared to just 20% of individual contributors.

If the C-suite believes AI saves 8 hours and the people doing the work report under 2, the organization is building its AI strategy on a number that does not match reality. The CFO layoff paradox showed a similar dynamic: 44% of CFOs planned AI-related headcount reductions while Goldman Sachs found no measurable link between AI adoption and productivity. The measurement gap is not just an accounting problem. It drives real decisions about staffing, investment, and strategy.

What the 1% Do Differently

The companies in McKinsey’s 1% (those that consider themselves mature in AI deployment) are not using better models or spending more money. They measure differently.

Goldman’s AI Adoption Tracker, built on Census Bureau data, shows that only 19% of US establishments have adopted AI at all. Among those that have, OpenAI’s enterprise data shows employees save 40 to 60 minutes per day. But the more telling number is that 75% report they can now complete tasks that were previously impossible for them: not the same work done faster, but new work entirely.

The distinction is the key to resolving the measurement paradox. UC Berkeley’s David Gallacher argues that organizations are applying “industrial-era metrics to a cognitive-era transformation” and proposes replacing single-number ROI with a multi-dimensional framework: Return on Efficiency (time saved), Quality Enhancement (error rates and output complexity), Workforce Capability Expansion (tasks now possible that were not before), and Strategic Positioning (competitive advantages created).
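One way to picture that framework is as a scorecard with four fields instead of one number. The sketch below is our own illustration of Gallacher’s four categories; the field names, units, and example values are hypothetical, not taken from his work.

```python
from dataclasses import dataclass, field

# Hypothetical scorecard shaped around Gallacher's four dimensions.
# Field names, units, and the example values are illustrative only.

@dataclass
class AIValueScorecard:
    hours_saved_per_week: float    # Return on Efficiency
    error_rate_change_pct: float   # Quality Enhancement (negative = fewer errors)
    newly_possible_tasks: list[str] = field(default_factory=list)  # Capability Expansion
    strategic_position: str = ""   # Strategic Positioning

support_team = AIValueScorecard(
    hours_saved_per_week=4.5,
    error_rate_change_pct=-12.0,
    newly_possible_tasks=["overnight multilingual coverage", "drafting KB articles"],
    strategic_position="resolves 30% more tickets at flat headcount",
)
print(support_team)
```

None of these four fields rolls up into a single ROI figure, and that is the point: the dimensions are tracked side by side rather than collapsed into one number.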

The practical version of that framework comes down to one question that the AI model decision framework also points toward: not “what’s our return on this AI investment?” but “what can our people do now that they couldn’t do six months ago?”

If your team can now produce first drafts of market analysis that used to take a week, that is capability expansion. If your support team resolves 30% more tickets without adding headcount, that is measurable even if it does not show up as “AI ROI” in the traditional sense. If an employee who used to need a data analyst can now query databases directly, the value is not in time saved but in decisions made faster and with fewer dependencies.

The Question for Monday Morning

MIT found that 95% of enterprise generative AI projects fail to show measurable financial returns within 6 months. The number sounds like an indictment of AI until you consider the measurement: financial returns within 6 months, using metrics designed for capital expenditures. By that standard, the first decade of corporate computing also failed.

The companies that will separate from the pack over the next two years are the ones that stop asking “is our AI working?” and start asking “what has changed about what our people can do?” That shift, from measuring cost recovery to measuring capability expansion, is not a philosophical distinction. It determines whether the board sees a failing investment or a compounding asset. The data says the gains are real. The question is whether your metrics can see them.


Related: “Your Board Is Asking About AI. What Does Your Report Actually Say?” examines the other side of this measurement problem. If your organization cannot quantify what AI does to earnings, the board report built on those numbers is hollow.

Ron Gold, Founder, A-Eye Level