The most interesting aspect of the current AI infrastructure buildout is not the absolute dollar figures, but the implicit assumption embedded within them: that demand for intelligence is effectively unbounded, and that whoever produces the most of it will capture value in proportion to that output. This is, I think, the proposition that deserves to be questioned, and I'm not sure it survives close inspection.
Within a fairly short time horizon, the median user of a language model will have local access to a model that is, for their purposes, sufficient. The marginal utility of a smarter model, for the recruiter drafting an email or the developer scaffolding a CRUD app, is going to look an awful lot like the marginal utility of a faster CPU did around 2010: technically real, practically irrelevant. The models become fast enough, smart enough, and cheap enough, and the bottleneck moves elsewhere.
The question, then, is who actually pays for the frontier. The plausible buyers are intelligence agencies, offensive and defensive cyber operations, quantitative finance, pharmaceutical research, and the relatively small set of corporates whose work is genuinely adversarial or genuinely bottlenecked by reasoning at the edge of human capability. This is a legitimate market, but compared to the capital being committed it is quite small, closer in shape to the defense industrial base than to anything resembling consumer software economics.
The other issue, and the one I think proponents of the bull case have not adequately answered, is that the labor market itself does not behave the way the superintelligence thesis assumes it does. Firms do not, as a general matter, compete to hire the most intelligent person available; they compete to hire someone sufficiently capable, sufficiently aligned, and sufficiently cheap for a given role. The premium for raw cognitive ability above some threshold is real but bounded, and it accrues disproportionately to a handful of fields. If this is how the market for human cognition is structured, it's not obvious to me why the market for machine cognition should be structured differently.
What this implies, to my mind, is that the AI value chain is going to mirror essentially every preceding technology cycle: the bulk of economic value will accrue at the application and integration layer, where commodity intelligence is sufficient and the differentiating factors are distribution, workflow lock-in, and cost. The frontier itself becomes something closer to a strategic asset: high-margin, geopolitically important, but commercially modest relative to the capital required to sustain it. Does that prize justify the economic and existential risk we are taking on, as a global society, by building these technologies?