The AI in Citrini’s Doomsday Scenario Stands For American Institutions
And what that means for their analysis
Grüezi!
A viral scenario has been circulating in financial circles: AI improves so fast that it destroys white-collar employment within 24 months, triggering a consumption collapse that brings down the US financial system by 2028.
The scenario’s wrong about almost everything – except the financial system bit.
And that’s the problem, because the real vulnerabilities it stumbles across don’t need AI to set them off.
1. Citrini Has Two Points. One Is Good.
Citrini’s doomsday AI scenario has been doing the rounds, and it deserves to be taken seriously – but not for the reasons everyone thinks.
It makes two very different claims that have been treated as a single argument.
The first claim is that American finance has built a dangerous private equity–insurance–reinsurance pipeline, concentrating risk in opaque offshore structures that are vulnerable to credit stress.
The second – the one that got everyone selling IBM shares – is that AI will improve exponentially and be adopted so rapidly that it triggers mass white-collar unemployment within 24 months, causing the consumption collapse that lights the blue touch paper.
The first argument is largely correct and pretty important. But the second is contradicted by current evidence on diminishing returns to scaling, enterprise adoption rates, the history of technology diffusion and the way American consumers actually respond to income shocks.
So let’s separate them out – credit the diagnosis of financial vulnerability, and then ask what’s actually likely to trigger it.
2. The financial plumbing really is that dangerous.
So let’s start with what Citrini gets right.
Private equity-owned insurers now manage more than $500bn in US life and retirement assets. Insurance companies and PE sponsors moved $130bn in assets offshore in 2024 alone, bringing the total to $1.1 trillion.
The strategy – PE asset manager, US life insurer, offshore Bermuda reinsurer – is dominated by half a dozen household names: Apollo, KKR, Ares, Blackstone, Brookfield, Carlyle.
The IMF says PE-influenced life insurers are more vulnerable to corporate defaults and credit downgrades. The Financial Stability Oversight Council has warned that offshore reinsurance could allow contagion to creep into the system when things get stressful.
And we’ve already had a dress rehearsal. 777 Partners’ Bermuda reinsurer had its licence revoked in October 2024, and the firm subsequently collapsed.
The most revealing indicator is what the companies behind it are doing. Apollo – the firm that pioneered this pipeline – is de-risking.
It’s increasing liquidity at Athene, investing tens of billions in Treasuries, roughly halving its CLO investments and accelerating its exit from software company loans it believes face AI-related difficulties.
Apollo CEO Marc Rowan has warned publicly of a “risk of contagion” in segments where private capital has grown rapidly with minimal regulatory oversight.
When the man who built the machine starts hedging against it, pay attention.
And none of this requires AI as the trigger. It requires a sufficiently large credit event.
AI is just Citrini’s chosen detonator – and it’s the wrong one.
3. The AI capability curve is S-shaped, not exponential.
The Citrini scenario needs AI to keep getting better, fast enough that displaced workers can’t redeploy. The evidence says the improvement curve is flattening.
The industry consensus shifted in 2025. Ilya Sutskever – co-founder of OpenAI, and its former chief scientist – declared that AI had moved from an “age of scaling” to an “age of research.” Pre-training data is finite, scaling shows diminishing returns, and models generalise far worse than people despite benchmark success. Sutskever has shifted from defending scaling to selling a research-first alternative, but his point still lands.
A 2026 benchmark analysis found the correlation between model size and performance drops sharply at scale: strong under 10 billion parameters, weak above 100 billion. For over a year, frontier models have appeared to be approaching a ceiling.
The disappointment around GPT-5 made this visible to the broader market. Gains now come from architectural innovation and reasoning techniques – a fundamentally slower trajectory than the exponential curve the crisis scenario assumes.
The industry's focus has shifted from raw capability to operational tooling – Claude Code, Codex and OpenClaw.
This is a classic S-curve: rapid initial improvement followed by diminishing returns. If AI improvement is logarithmic rather than exponential, the feedback loop that the doom scenario requires – AI gets better, companies fire workers, savings fund more AI, AI gets better – slows down rather than speeds up.
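The difference between the two curves is easy to see numerically. A minimal sketch – the growth rate and capability ceiling are illustrative parameters, not forecasts – comparing the exponential curve the doom scenario assumes with a logistic S-curve:

```python
import math

def exponential(t, rate=0.5):
    # Compounding improvement: the period-on-period gain never shrinks.
    # This is the curve the crisis scenario implicitly assumes.
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # S-curve: early growth looks exponential, then flattens as the
    # technology approaches a capability ceiling.
    return ceiling / (1 + (ceiling - 1) * math.exp(-rate * t))

# Period-on-period improvement after 20 periods:
exp_gain = exponential(21) / exponential(20)  # constant multiplier
log_gain = logistic(21) / logistic(20)        # shrinking toward 1.0
```

Early on, the two curves are nearly indistinguishable – which is why extrapolating from 2022–24 progress is so tempting. The divergence only shows up later, when the logistic curve's period-on-period gains collapse toward zero while the exponential's stay constant.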
Every previous general-purpose technology has done exactly this.
4. Enterprise AI is stuck in pilot purgatory – and 24 months won’t fix that.
Even if the capability curve weren’t flattening, the adoption timeline would kill the scenario on its own. Ninety-five percent of generative AI pilots fail to move beyond the experimental phase, according to MIT’s GenAI Divide report.
Fewer than ten percent of companies have AI agents deployed in production, per a Recon Analytics survey of more than 120,000 enterprise respondents. More than half of CEOs report getting “nothing” from their AI adoption efforts, according to PwC’s 2026 Global CEO Survey. Citrini’s nightmare requires mass agentic AI adoption by early 2027, and nothing in current enterprise data makes that timeline defensible.
Take cloud computing. Despite big cost advantages, massive investment by vendors, and over 15 years of development, well over half of businesses are still “migrating” work to the cloud – and a fifth of the work that was “migrated” has since been brought back on-site. If companies can’t complete cloud migration in 15 years, then mass autonomous AI deployment in 24 months looks a little over-optimistic.
A 24-month timeline wouldn’t compress a normal transition. It would eliminate it entirely.
And this links to something that may be a more immediate crisis, and one that Citrini largely ignores: the $200bn-plus annual AI capex run-rate that’s funded by hyperscaler balance sheets betting on future revenues that haven’t materialised.
What is more likely is not an AI-driven structural crisis but a financial crisis driven by AI overinvestment. The bubble bursts.
That’s a different beast from the consumption-collapse scenario, and arguably a more likely one.
5. Slow-Motion White-Collar Wipe-Out
The US labour market data is where some concessions must be made.
White-collar job displacement is real and measurable. Finance, insurance, information, and professional and business services have cut jobs on net for the past three years despite solid GDP growth, with employment down nearly 2 percent since its November 2022 peak.
White-collar job openings fell by more than a third between Q1 2023 and Q1 2025. In 2025 the US economy added only 181,000 jobs – one of the worst years outside a recession.
But the data shows a restructuring that’s playing out over years, not an acute consumption collapse. US healthcare added 123,500 jobs in January 2026 alone. Construction added 33,000. America’s overall unemployment remains low.
US workers are moving between sectors – slowly and painfully, and often at lower wages – but they are moving.
Acemoglu and Autor’s task-based framework explains why: automation proceeds task by task, not job by job. AI doesn’t replace a lawyer, but it replaces document review; the lawyer still shows up, but bills fewer hours. From 1980 to 2015, roughly half of employment growth came in occupations whose titles or task content changed substantially – new tasks created as others were displaced.
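The task-by-task mechanics can be made concrete with a toy example. The task list and the hours attached to each are invented for illustration – the point is only that automating one task shrinks the job rather than eliminating it:

```python
# A job modelled as a bundle of tasks (hours per week, made up for
# the example). AI automates individual tasks, not the whole bundle.
lawyer_tasks = {
    "document review": 15,   # highly automatable
    "client meetings": 10,
    "court appearances": 5,
    "drafting and strategy": 10,
}
automated = {"document review"}

hours_before = sum(lawyer_tasks.values())
hours_after = sum(h for task, h in lawyer_tasks.items()
                  if task not in automated)
# The lawyer still exists -- but bills 25 hours instead of 40.
```

The macro question is then whether new tasks are added to the bundle faster than old ones are stripped out, which is exactly the balance Autor says no economic law guarantees.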
The crisis scenario’s strongest intellectual argument is that AI’s generality makes historical comparisons invalid. As Autor himself notes, there’s no economic law that requires automation and new task creation to balance out – and recent evidence suggests automation is currently outpacing task creation.
But the pace matters enormously. A slow imbalance produces a difficult decade.
Citrini demands a sudden, overwhelming imbalance completed within a couple of years. That is the difference between a structural adjustment and a systemic crisis.
Nothing in the current data supports the rapidity that their scenario requires.
6. AI Equals American Institutions
The real catastrophe Citrini describes is not a function of technology. It’s a function of broken institutions.
In Citrini’s scenario, the AI might as well stand for American Institutions.
The PE-insurance-reinsurance pipeline is uniquely American. Europe’s insurance regulation under Solvency II imposes very different capital requirements.
The fiscal doom loop – where lost income and payroll tax revenue triggers a government funding crisis – depends on America’s tax system: heavily reliant on taxing workers’ incomes, with no federal VAT, and with healthcare tied to employment.
Countries with consumption-tax-based revenue and universalised healthcare would process the same productivity shock through entirely different channels.
The $13 trillion 30-year fixed-rate mortgage market doesn’t exist anywhere else. The at-will employment system that allows for rapid mass redundancies is an American legal construct.
The same technological shock, even at the pace Citrini imagines, would hit very differently in any other advanced economy.
In systems with stronger automatic stabilisers, collective bargaining, longer notice periods and state healthcare, the Citrini transmission mechanism (from displacement to consumption collapse to financial contagion) simply doesn’t happen.
This is what makes their dystopian essay interesting despite its flawed technology forecast. It’s an accidental institutional critique.
Citrini treats American institutions as a kind of universal economic logic – as if the PE pipeline, the tax structure, and the employment-tied healthcare system are natural features rather than deliberate policy choices.
And so a set of politically designed choices is made to look like an inevitability.
7. Wrong Timeline, Right Destination?
So what actually might happen?
The AI capex bubble is a real risk. Over $200bn a year is flowing into infrastructure for a technology that CEOs say hasn’t delivered anything yet.
As Jamie Dimon said: “We have an LLM model, 150,000 people use it every week. They think they’re saving 4 hours a day. That’s not in an NPV [net present value]. We don’t see the 4 hours a day in terms of reduced headcount...”
If ROI expectations adjust – and the S-curve evidence and adoption data both suggest they will – then the correction hits the tech sector first, then ripples through the financial structures that funded it.
Carlota Perez’s framework gives this a name: the turning-point crash between installation and deployment. We’ve seen it before, with railways, electrification and the internet.
White-collar displacement, meanwhile, will probably continue building quietly.
But here’s the complication that cuts against the Citrini timeline while supporting its ultimate concern. Displaced white-collar workers don’t stop spending immediately.
Research by Ganong and Noel, using nearly 200,000 JPMorgan Chase bank accounts, found that spending falls only 6 percent at the onset of unemployment, remains largely stable during benefits receipt, then drops sharply when benefits exhaust.
Higher-income households have substantially larger credit buffers: savings, unused credit capacity, home equity lines. They borrow through income shocks for months, maintaining something close to prior consumption.
This further demolishes the 24-month consumption-collapse timeline. But it also means that the eventual correction compounds lost income with accumulated debt serviced at credit card APRs.
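A stylised sketch of that spending path, using the Ganong-Noel onset figure. The length of benefits and the size of the post-exhaustion fall are assumptions for illustration, not estimates from the paper:

```python
# Stylised monthly spending for a displaced worker, following the
# Ganong-Noel pattern: ~6% drop at job loss, roughly stable while
# unemployment benefits last, then a sharp fall once they exhaust.
# benefit_months=6 and the 20% exhaustion drop are illustrative
# assumptions, not figures from the study.

def spending_path(months=18, baseline=100.0, benefit_months=6):
    path = []
    for m in range(months):
        if m == 0:
            path.append(baseline)                # last month employed
        elif m <= benefit_months:
            path.append(baseline * 0.94)         # onset drop, then stable
        else:
            path.append(baseline * 0.94 * 0.80)  # drop at benefit exhaustion
    return path

path = spending_path()
```

The shape, not the exact numbers, is what matters: consumption holds up for months after the job disappears – which is precisely why a 24-month collapse timeline fails, and why the stress surfaces later, with debt attached.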
Total US household debt hit a record $18.8 trillion at the end of 2025, up $740bn in a single year. Credit card balances reached $1.28 trillion – a two-thirds increase since their pandemic low. Delinquencies for households earning over $150k more than doubled between January 2023 and late 2024, growing faster than for any other income group.
The top 10 percent of earners now account for roughly half of total US consumer spending. And when consumption is that concentrated, the eventual adjustment hits harder.
The real danger, then, is probably not a sharp AI-triggered crisis arriving by 2028. It’s a slow accumulation of white-collar credit stress over three to five years, compounding with whatever other shocks arrive – a capex correction, a geopolitical disruption, a bog-standard recession – and eventually interacting with the dangerous financial plumbing Citrini so usefully describes.
The crisis they imagine is dramatic, fast, and driven by a single technological cause.
The crisis that’s actually building is slower, messier, and any explosion will have multiple fuses.
That makes it harder to predict, harder to describe, and considerably harder to prevent.
Thanks for reading!
Best
Adrian