Last week’s post asked the diagnostic question: if Stanford’s 2026 AI Index shows a 3-4x productivity spread across functions and the same chapter hedges on whether the gains aggregate cleanly at the firm level, what does a CEO do with the 2026 AI budget? This is the prescription side.
Corporate AI investment more than doubled in 2025 to $581.7 billion (Stanford HAI, 2026 AI Index Report). The 2026 decision is not whether to spend; it is how to split the spend under Stanford’s hedge that macro-level productivity evidence “remains early and mixed.” Four rules fall out of Chapter 4, each with a verification step a finance team can run.
1. Allocate by function-level delta, not by firm-level average
Chapter 4 compiles measured gains of 14-15% in customer support, 26% in software development, 50% in marketing output, and 55% throughput for accountants, alongside a 19% slowdown for experienced open-source developers using AI assistance (Stanford HAI, 2026 AI Index Report, Chapter 4, Figure 4.4.27). Pro-rata spending against a firm-level average hides a real gap of 4-5x. The verification step is a live measurement of one production task per function before the budget locks, not a vendor pilot.
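The difference between the two splits can be made concrete with a back-of-envelope sketch. The deltas below are the Chapter 4 figures quoted above; the $100M total budget, the function labels, and the equal-split baseline are hypothetical illustration, not anything the report prescribes.

```python
# Illustrative sketch: splitting a fixed AI budget pro rata vs.
# weighting it by measured function-level deltas. Deltas are the
# Chapter 4 figures quoted above; the budget total is invented.

measured_delta = {
    "customer_support": 0.15,  # 14-15% gain
    "software_dev": 0.26,      # 26% gain
    "marketing": 0.50,         # 50% output gain
    "accounting": 0.55,        # 55% throughput gain
}

budget = 100_000_000  # hypothetical total

# Pro-rata split: every function gets the same share.
pro_rata = {f: budget / len(measured_delta) for f in measured_delta}

# Delta-weighted split: each function's share is proportional
# to its measured gain.
total_delta = sum(measured_delta.values())
weighted = {f: budget * d / total_delta for f, d in measured_delta.items()}

spread = max(measured_delta.values()) / min(measured_delta.values())
print(f"gain spread across functions: {spread:.1f}x")
for f in measured_delta:
    print(f"{f:16s} pro-rata ${pro_rata[f]:>12,.0f}  weighted ${weighted[f]:>12,.0f}")
```

Even this toy version makes the point: the flat split hands customer support the same dollars as accounting despite a gain gap of nearly 4x between them.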
2. Fund training at the ratio the data now supports
A study of 12,000 European firms compiled in the same chapter found AI adoption boosted labor productivity by 4%, with each 1% of AI spend directed to training adding a further 5.9 percentage points (Aldasoro et al., 2026, Stanford HAI 2026 AI Index Report). That puts a marginal-return number on the Trailblazer split flagged in the AI budget amplification gap: 60% of the AI budget goes to upskilling among Trailblazers vs. 27% among Pragmatists (BCG AI Radar 2026). Training is the lever most finance teams underweight because it does not sit in the AI line item.
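Taking the quoted coefficient at face value, a finance team can sanity-check small shifts of spend into training. The linear extrapolation and the specific shift sizes below are assumptions for illustration; the study's coefficient is only a marginal estimate, and nothing here should be read as holding for large reallocations.

```python
# Back-of-envelope sketch of the training lever, taking the quoted
# figures at face value: a 4% base productivity lift from adoption,
# plus 5.9pp for each 1% of AI spend shifted into training.
# Treating the coefficient as linear over a 2-point shift is an
# assumption, not something the study tests.

BASE_GAIN_PCT = 4.0        # quoted lift from AI adoption
PP_PER_TRAINING_PCT = 5.9  # quoted marginal return on training share

def estimated_lift(training_shift_pct: float) -> float:
    """Estimated productivity lift (%) after shifting the given
    percentage of AI spend into training, under the linear assumption."""
    return BASE_GAIN_PCT + PP_PER_TRAINING_PCT * training_shift_pct

# Hypothetical comparison: no shift vs. 1% vs. 2% of spend to training.
for shift in (0, 1, 2):
    print(f"shift {shift}% of AI spend to training -> ~{estimated_lift(shift):.1f}% lift")
```

The steep slope is exactly why the rule says to fund training at the ratio the data supports rather than leaving it outside the AI line item.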
3. Budget against the learning-penalty column, not only the gross gain
Chapter 4 cites software engineers using AI to learn new libraries showing no measurable speed improvement and what the researchers call “learning penalties” (Shen and Tamkin, 2025, Stanford HAI 2026 AI Index Report), alongside the 19% slowdown for experienced developers (Becker et al., 2025, Stanford HAI 2026 AI Index Report). An allocation model that tracks only the 26% gain under-reports the long-run curve. The verification step is paired: speed today and skill depth six months out. One number without the other is a partial ledger.
4. Price in the junior displacement risk
Employment for software developers ages 22-25 has fallen close to 20% from its 2022 peak while employment among older cohorts grew, and in the most AI-exposed occupations the 22-25 cohort sits roughly 16% below its peers in the least-exposed (Brynjolfsson et al., 2025, Stanford HAI 2026 AI Index Report). A McKinsey survey cited in the same chapter finds one-third of respondents expect a workforce decrease next year, rising to 35% at organizations with at least $1 billion in revenue. Treating headcount reduction as pure cost savings misses the senior-pipeline gap underneath. The verification step is a pipeline check: are junior hires tracking the productivity gain, or are you quietly removing seats? The workflow-redesign logic applies at the budget layer too: returns come from redesigning around capability, not cutting around it.
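The pipeline check in rule 4 reduces to one comparison per function: has the junior-hire cut outrun the measured productivity gain? The sketch below is a hypothetical version of that check; the headcount figures and the flag threshold are invented, and the 26% and 15% gains are the Chapter 4 figures quoted under rule 1.

```python
# Hypothetical pipeline check for rule 4: flag a function where the
# junior-hire cut exceeds the measured productivity gain, i.e. seats
# are being removed faster than output is rising. All headcounts are
# invented; the gains are the Chapter 4 figures quoted earlier.

def pipeline_flag(junior_hires_prev: int, junior_hires_now: int,
                  productivity_gain: float) -> bool:
    """True if the proportional cut in junior hires is larger than
    the productivity gain that is supposed to justify it."""
    cut = 1 - junior_hires_now / junior_hires_prev
    return cut > productivity_gain

# A 20% junior cut against the 26% software-dev gain: covered, no flag.
print(pipeline_flag(50, 40, 0.26))
# The same 20% cut against the 14-15% support gain: flagged.
print(pipeline_flag(50, 40, 0.15))
```

A flag does not settle the question by itself; it marks where the budget conversation should shift from cost saving to pipeline design.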
The 2026 Allocation Question
These four rules will not close Stanford’s aggregation hedge. They move the conversation from a percentage of revenue to a function-by-function delta, a training ratio, a penalty column, and a pipeline risk.
So before the 2026 AI budget locks, ask the question that comes before the total: which of these four rules is your finance team currently pricing in, and which one did the last budget round quietly drop?