The moment it stops being theoretical


A friend was telling me about their recent job interviews. Not the questions they struggled with or the technical challenge they bombed—that’s normal interview anxiety. No, they were frustrated because they’d spent forty-five minutes talking to an AI chatbot that analyzed their “enthusiasm for remote work” and tracked their eye movements while assessing whether they’d “misrepresent themselves 3.8x more than average candidates.”

The speed with which I would exit an AI interview would rival even the most seasoned teenager’s alt-tab skills.

But here’s the thing—this isn’t just another complaint about awkward technology or corporate efficiency theater. This moment, repeated thousands of times daily across hiring processes, represents something more fundamental breaking down. AI was supposed to eliminate the boring parts of work, the repetitive drudgery that didn’t require human judgment. Instead, it’s eliminating the human parts of work—the connection, the authenticity, the actual assessment of whether someone can do the job.

And that pattern—AI deployed to dehumanize rather than enhance, to reduce costs rather than create value—isn’t limited to hiring. It’s the story of a $560 billion investment that’s generated just $35 billion in revenue. It’s the story of public trust collapsing from cautious optimism to outright skepticism. It’s the story of two interconnected crises that could reshape the entire technology industry when either one gives way.



What in the dystopian hellscape is going on


Let’s stay with that hiring example for a moment, because it crystallizes the problem. Companies now use AI to screen resumes, conduct first-round interviews, analyze voice patterns for “culture fit,” track eye movements for “engagement signals,” and evaluate “authenticity” through speech pattern analysis. The justification? Candidates are supposedly committing fraud at “3.8x higher rates”—though higher than what, when, or where remains conveniently unspecified.

So AI is good when companies use it to dehumanize candidates, but it’s bad when candidates use it to stay competitive in that same dehumanizing process. Got it.

The fundamental issue is that “AI was supposed to do the parts of the jobs humans found boring or repetitive.” Taking meeting notes. Generating documentation. Processing expense reports. The stuff that makes you think “surely a computer could do this.” Instead, we’re using it for the parts that desperately need human judgment—assessing whether someone will thrive in a role, whether they’ll mesh with a team, whether their particular combination of skills and experience makes them the right fit for this specific challenge.

If I’m hiring a human for a role, I don’t care about grading their every word or analyzing their eye movement patterns. Doing so is fundamentally counterproductive, since it encourages candidates to be as boring and uninteresting as possible while maintaining perfect eye contact and regurgitating perfect phrases, lest they not get the job.

Honestly? I don’t think I want to hang out with the people who AI likes anyway.

But this pattern—deploying AI in ways that optimize for the wrong things—appears everywhere once you start looking. McDonald’s AI drive-thru adding 260 Chicken McNuggets to orders. Klarna claiming AI “did the work of 700 agents” before quietly reversing course when quality tanked. Google’s AI Overview suggesting you add glue to pizza. These aren’t bugs. They’re symptoms of a systematic misapplication of technology by companies that haven’t figured out what problems AI actually solves.



The numbers that should terrify you


OpenAI, the company most synonymous with the AI revolution, lost $5 billion in 2024 on revenue of just $3.7 billion. Let that sink in. That works out to roughly $2.35 spent for every dollar earned. The projected loss for 2025? $14.4 billion. Between 2023 and 2028, the company expects cumulative losses of $44 billion before potentially—possibly, theoretically—reaching cash-flow positive status in 2029 or 2030.

Despite these staggering losses, OpenAI’s valuation rocketed from $157 billion in October 2024 to somewhere between $300 and $500 billion today. That’s as much as a tripling in twelve months, with no profitability in sight. These aren’t valuations based on business fundamentals. They’re bets that future revenue will somehow, miraculously, justify present spending.

Anthropic tells the same story with different numbers. Revenue jumped from $1 billion annually in 2024 to a projected $9 billion by end of 2025—impressive growth, right? Except they’re burning through $2.7-3 billion annually and don’t expect to break even until 2028. The valuation inflated tenfold in eighteen months, from $18.4 billion to $183 billion. Again: not a sustainable business, but a bet on a future that may never materialize.

The cost structure explains why profitability remains this elusive unicorn. Every ChatGPT query costs real money to compute. Every Claude conversation. Every AI-generated image. Inference—running trained models—accounts for 80-90% of total AI lifetime expenses. Unlike traditional software, where the marginal cost of serving one more user approaches zero, AI’s costs grow with usage. More users mean more compute, linearly and predictably.
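
To make that cost structure concrete, here is a back-of-envelope sketch in Python. Every number in it is invented for illustration, not any vendor’s actual pricing or compute cost; the only point is the shape of the comparison, where the traditional product’s per-user cost is negligible while the inference-heavy product pays real compute on every request.

```python
# Back-of-envelope unit economics. All numbers are made up for illustration.

def monthly_margin(users, price_per_user, fixed_cost, cost_per_request, requests_per_user):
    revenue = users * price_per_user
    costs = fixed_cost + users * requests_per_user * cost_per_request
    return revenue - costs

for users in (10_000, 100_000, 1_000_000):
    # Traditional software: per-request cost is effectively zero, so margin
    # improves as the fixed cost is spread across more users.
    traditional = monthly_margin(users, 20, 500_000, 0.0001, 300)
    # Inference-heavy AI product: every request burns real compute, so costs
    # rise in lockstep with usage and can outrun the subscription price.
    inference_heavy = monthly_margin(users, 20, 500_000, 0.07, 300)
    print(f"{users:>9,} users  traditional: {traditional:>13,.0f}  inference-heavy: {inference_heavy:>13,.0f}")
```

Under these made-up numbers, the traditional product crosses into profit as it scales, while the inference-heavy one loses more money the more popular it gets. That is the dynamic the Cursor and Perplexity figures below point at.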

Cursor, the AI coding assistant, reportedly sends 100% of its revenue to Anthropic just for model access. Perplexity AI spent 164% of revenue on compute providers in 2024. When your biggest customers operate at negative margins, when the companies building businesses on your platform lose money on every transaction, the entire ecosystem becomes brittle.

The hyperscalers—Microsoft, Google, Amazon, Meta—are betting their futures on AI infrastructure with combined 2025 capital expenditures projected at $325-380 billion, up 46% from 2024. These spending levels now exceed 1% of U.S. GDP quarterly. Microsoft committed $88.7 billion for fiscal 2025. Amazon projected $105-125 billion. Yet Meta’s CFO acknowledged that “genAI work is not going to be a meaningful driver of revenue this year or next year.”

Read that again. Companies are spending hundreds of billions building infrastructure for revenue that their own financial officers admit won’t materialize for years. The gap between investment and returns has never been wider in technology history.



When the guy who called 2008 places a billion-dollar bet


Michael Burry isn’t some permabear shouting about every market downturn. He’s the investor who correctly identified the 2008 subprime mortgage crisis before it happened, who placed massive bets against the housing market when everyone called him crazy, who was proven devastatingly right when the entire financial system nearly collapsed.

In early November 2025, Burry’s regulatory filings disclosed over $1 billion in put options against AI leaders: $912 million against Palantir (66% of his portfolio) and $187 million against Nvidia. His thesis is explicit and detailed, laid out in his Substack newsletter “Cassandra Unchained.”

“I am not claiming Nvidia is Enron. It is clearly Cisco,” Burry wrote. He’s identifying Nvidia as today’s equivalent of Cisco during the dot-com bubble—the essential hardware supplier powering a massive capital investment cycle that will eventually see overbuilt supply meet far less demand than expected.

But his most damning observation concerns accounting practices. Burry projects that tech companies will understate depreciation by $176 billion between 2026 and 2028 by extending the assumed useful life of AI computing equipment from realistic 2-3 year product cycles to artificially long periods. This accounting treatment inflates reported earnings—Oracle’s by 26.9%, Meta’s by 20.8%—creating the illusion of profitability where losses actually exist.
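
The mechanism behind that depreciation number is plain straight-line arithmetic. Here is a minimal sketch with hypothetical figures; the $30 billion fleet and the three-versus-six-year lives are illustrative assumptions, not numbers from Burry’s filings or any company’s books.

```python
# Straight-line depreciation on a hypothetical $30B GPU fleet.
# The hardware and the cash already spent are identical in both cases; only
# the assumed useful life changes, and with it the annual expense that flows
# through reported earnings.
fleet_cost = 30_000_000_000

realistic_life_years = 3   # roughly a GPU product cycle
stretched_life_years = 6   # the extended schedule

annual_expense_realistic = fleet_cost / realistic_life_years  # $10B per year
annual_expense_stretched = fleet_cost / stretched_life_years  # $5B per year

# The stretched schedule reports $5B less expense every year, which appears
# as $5B more pre-tax earnings even though nothing about the business changed.
print(annual_expense_realistic - annual_expense_stretched)  # 5000000000.0
```

Repeat that gap across every hyperscaler’s fleet and every year of the buildout and you arrive at figures on the scale Burry is describing.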

His criticism of “circular financing” cuts deeper. OpenAI holds warrants for roughly 10% of AMD, while Nvidia has committed up to $100 billion to OpenAI. Microsoft is both a major OpenAI shareholder and one of Nvidia’s largest customers (representing roughly 20% of Nvidia revenue). CoreWeave, funded by Nvidia, serves OpenAI’s compute needs. “True end demand is ridiculously small,” Burry observed. “Almost all customers are funded by their dealers.”

When an investor who successfully predicted the last major financial crisis places billion-dollar bets against your industry and explicitly compares your accounting practices to pre-2008 mortgage fraud, that’s not speculation. That’s pattern recognition.



The trust collapse nobody wants to acknowledge


Here’s the part where we connect the financial instability back to that AI interview experience. Because these crises aren’t separate—they’re feeding each other in ways that threaten the entire AI business model.

According to Pew Research, only 17% of Americans believe AI will have a positive impact on the U.S. over the next 20 years. A full 51% are more concerned than excited about AI’s increased use in daily life. These figures represent a dramatic worsening since 2021, when optimism still outweighed concern.

The gap between expert opinion and public sentiment has become a chasm. While 56% of AI experts see positive impact ahead, only 17% of the general public agrees. While 73% of experts expect positive employment effects, only 23% of the public shares that view. Either the experts are completely disconnected from reality, or the public has accurately identified risks the industry refuses to acknowledge.

The trust erosion isn’t abstract. It’s rooted in lived experience. That AI interview that made you feel reduced to “an n-dimensional matrix.” The AI-generated customer service response that completely missed your problem. The hiring algorithm that rejected your application before a human ever saw it. The chatbot that confidently gave you wrong information. The image generator that couldn’t distinguish historical accuracy from Reddit jokes.

The sense that “authenticity is a limited resource that is diminishing every day” builds with each of these encounters. Every time someone encounters AI deployed to reduce costs rather than add value, deployed to eliminate human judgment rather than enhance it, that person becomes incrementally more skeptical. Not of technology in general—of this specific application pattern where efficiency trumps everything else.

Consumer behavior reflects this skepticism in ways that directly threaten revenue. A Washington State University study found that products described as “AI-powered” are consistently less popular with consumers. Gartner surveys show 64% of customers prefer companies NOT use AI for customer service, with 53% willing to switch to competitors that don’t. Only 3% of smartphone owners are willing to pay extra for AI features.

Three percent.

When you’re building a business model that depends on massive consumer adoption and widespread willingness to pay premium prices, 3% willingness to pay is a death sentence. It means the revenue models underpinning all those billion-dollar valuations are built on fundamentally wrong assumptions about demand.



The enterprise adoption crisis


Maybe consumer skepticism doesn’t matter, you might argue. Maybe the real money is in enterprise. Except enterprise adoption is stalling for the same reasons, just with bigger price tags attached to the disappointment.

The MIT NANDA study found that 95% of enterprise AI pilots yield zero measurable business return despite $30-40 billion in GenAI investment. BCG reports 74% of companies haven’t seen tangible value from AI initiatives. McKinsey’s surveys show 80%+ of organizations haven’t realized significant bottom-line AI impact. Gartner predicts 30% of GenAI projects will be abandoned by end of 2025 due to poor data quality, escalating costs, or unclear value.

Monte Carlo’s 2024 survey of data professionals reveals the pressure cooker: 100% feel pressure from leadership to implement a GenAI strategy, yet 90% believe their leaders do NOT have realistic expectations for what’s technically feasible or capable of driving business value. Only 12% of companies have dedicated AI teams; 84% rely on existing data teams to figure it out.

This is the reality on the ground for data professionals. Leadership demands AI transformation. Budgets flow to AI initiatives. But the actual work of implementation falls to teams who know the technology isn’t mature enough, the data isn’t clean enough, the use cases aren’t well-defined enough. The pressure to deploy anyway—to show “innovation,” to justify spending, to keep up with competitors—leads to implementations that fail quietly, generate minimal value, and burn credibility.

The case studies pile up. McDonald’s ended its three-year AI drive-thru partnership with IBM in June 2024 after viral videos showed the system adding 260 Chicken McNuggets to orders and repeatedly misinterpreting requests. Klarna initially claimed AI was “doing the work of 700 agents” after cutting 22% of its workforce, then reversed course with a “major hiring initiative” in May 2025. The CEO admitted the AI was actually “doing work of 700 really bad agents”—the quality decline drove customers away, and the rollback cost approximately $15 million with no net gain.

IBM’s Watson Health, once touted as transformative for cancer diagnosis, never reached production use, ran over budget, and was eventually sold for approximately $1 billion after multi-billion dollar investment. The Humane Ai Pin and Rabbit R1 represent hardware category failures, with slashed prices following weak sales and critical reviews noting the products solve “problems that don’t exist.”

Each failure compounds the trust deficit. Each over-promised and under-delivered implementation makes the next pitch harder. Each news story about AI gone wrong creates more skeptics who start questioning whether this technology delivers value or just disrupts things that were working fine.



The 2008 playbook, now in tech


The parallels between current AI investment patterns and the run-up to the 2008 financial crisis have moved from concerning to alarming. Deutsche Bank is now using Synthetic Risk Transfers (SRTs) to hedge AI infrastructure loans—structures that analysts say “strongly resemble CDOs and CDSs that amplified the 2008 financial crisis.” Tech companies are shifting AI spending into Special Purpose Vehicles (SPVs) to obscure true costs, echoing Enron’s infamous off-balance-sheet accounting.

The leverage buildup mirrors pre-crisis patterns. Goldman Sachs reports hyperscalers took on $121 billion in debt over the past year—a 300%+ increase from typical levels. Meta raised $29 billion in debt for a single Louisiana data center. Oracle operates at a 500% debt-to-equity ratio to compete in AI data centers. Morgan Stanley estimates Big Tech will spend $3 trillion on AI infrastructure through 2028, with cash flows covering only half.

The dot-com parallels are equally sobering. In the late 1990s, 80+ million miles of fiber optic cable were laid based on inflated demand projections; 85-95% remained unused (“dark fiber”) years after the bubble burst. Today’s equivalent is the massive AI data center buildout—Meta constructing a facility covering a “significant part of Manhattan”—predicated on demand growth that may never materialize. The $500 billion Stargate Project promises unprecedented AI infrastructure investment for enterprise AI adoption that’s already stalling.

The NASDAQ took 15 years to recover from its 2000 peak. Cisco shares peaked at $80 and still haven’t returned to that level twenty-five years later. Those aren’t cautionary tales from ancient history—they’re previews of what happens when infrastructure investment radically outpaces actual demand.



What the insiders are saying


Perhaps most telling are the warnings from industry insiders themselves. Sam Altman, OpenAI’s CEO, acknowledges that investors are “overexcited about AI” and predicts “people will overinvest and lose money during this phase.” Google CEO Sundar Pichai warns of “elements of irrationality” and says “no company is going to be immune” if the bubble bursts. Jeff Bezos called the current environment “kind of an industrial bubble.”

When the people building and funding AI companies publicly warn about bubble dynamics, when they’re literally saying investors will lose money, that’s not bearish speculation. That’s reality acknowledgment.

Ray Dalio’s proprietary bubble indicator sits at approximately 80% of historical bubble peaks, with his AI chatbot estimating a 65-75% probability of meaningful correction in AI equities by end of 2026. Morgan Stanley’s Lisa Shalett warned of a “Cisco moment” within 24 months, describing the market as a “one-note narrative” entirely dependent on AI capital expenditures.

The timing predictions are getting specific. Motley Fool analysts suggest the bubble could burst in 2025 due to unsustainable valuations. Ruchir Sharma of Rockefeller International predicts 2026 if interest rates rise. BeyondTrust researchers argue “artificial inflation of AI has already peaked in 2024.” A November 2025 Nature analysis noted that “many financial analysts now agree there is an ‘AI bubble,’ some speculate it could finally burst in the next few months.”



Preparing for what comes next


Whether the bubble bursts catastrophically or deflates gradually, data professionals can take specific steps to build resilient careers. The skills that remain human-essential include critical thinking and complex decision-making, cross-contextual reasoning (connecting insights across domains), ethical judgment in novel situations, relationship building and stakeholder management, and what researchers call “AI discernment”—knowing when to trust AI versus human judgment.

Technical preparation should emphasize both breadth and depth. Core competencies remain essential: probability, statistics, Python, SQL. But add ML fundamentals including embeddings and model monitoring. Develop cloud platform expertise. Build governance and ethics capabilities that will be increasingly valuable as regulation tightens. Document measurable business impact, not just technical achievements—when budgets tighten, demonstrated ROI becomes the difference between retention and reduction.
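
As one small, hedged illustration of what “model monitoring” can mean in practice, here is a population stability index (PSI) check on a model’s score distribution. The data is synthetic and the 0.2 threshold is a common rule of thumb rather than a standard, so treat this as a sketch of the idea, not a prescription.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a reference score distribution against a recent one.
    A PSI above roughly 0.2 is a common rule-of-thumb signal that the
    population the model now sees has drifted from what it was built on."""
    # Bin edges come from the reference (training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor tiny proportions to avoid log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Synthetic example: scores at deployment vs. scores this week.
rng = np.random.default_rng(0)
baseline = rng.normal(0.40, 0.10, 10_000)
this_week = rng.normal(0.50, 0.12, 10_000)  # the incoming population has shifted

print(round(population_stability_index(baseline, this_week), 3))  # well above 0.2
```

The habit of checking what a model sees against what it was built on generalizes to embeddings and LLM outputs, and it produces exactly the kind of measurable, business-facing evidence worth documenting.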

For data leaders, the most important preparation may be honest communication. Setting realistic expectations with executive leadership about what AI can deliver—and on what timeline—protects both the organization and the team. The companies that navigate the potential correction successfully will be those treating AI as a tool for specific problems rather than a magic solution for every challenge.



What happens when authenticity runs out


I desperately want to be surrounded by real people with real hobbies and passions. I don’t care whether a candidate loves remote work; I care about their humanity. I don’t care if a candidate tripped over their words a few times; I care about their authenticity.

This isn’t nostalgia for pre-AI times. It’s recognition that authenticity, human connection, genuine judgment—these aren’t inefficiencies to optimize away. They’re the core value proposition. They’re what makes work meaningful, what makes organizations effective, what makes products worth buying.

When companies deploy AI to eliminate these elements—to reduce humans to vectors, to automate judgment, to replace connection with efficiency—they’re not just failing to capture value. They’re actively destroying it. The companies that will survive aren’t those with the most sophisticated AI. They’re those that figured out how to use AI to enhance human capability rather than replace it, to preserve authenticity rather than eliminate it.

The crisis isn’t coming. It’s already here, building pressure daily in the gap between investment and returns, between promises and delivery, between what AI can actually do and what we’ve been told it will become. The question isn’t whether there will be a reckoning. The question is whether you’ll be positioned to navigate it when it arrives—and whether the companies you work for will still treat you like a person when they do.