GPT-5 Failed. NVIDIA Just Pulled $100 Billion (And What It Means for You)

Season #3

The AI gold rush just hit a rock. NVIDIA is pulling back $100 billion. SoftBank is wavering. Microsoft too. And Gary Marcus is finally getting his victory lap.

In this episode, we break down why GPT-5's launch marked the beginning of the end for an entire class of AI-powered businesses, and we explore the technical failures that crushed the AGI dream, including the viral bicycle chain test and the unexpected chess debacle.

We also discuss the economic reality behind dwindling subsidies, and what it all means for consultants, operators, and anyone whose business boils down to "I use AI."

The efficiency consultant is dead, but something new is being born. We present the AI-first operator framework, which treats AI not as a replacement for your thinking but as a tool for identifying what to ignore.

We walk through the Tower of Hanoi method, explain why "if the AI agrees with you, you're wrong," and leave you with a 15-minute audit you can run on your own work today.

What We Cover

  • Why NVIDIA and SoftBank are pulling back billions from AI investments.
  • The GPT-5 launch: maximum hubris meets maximum disappointment.
  • The bicycle chain test and chess debacle that exposed the architecture's limits.
  • Capability vs. reliability: the distinction that changes everything.
  • Semantic leakage and why "yellow" might derail your entire strategy.
  • The death of the wrapper business model.
  • The Tower of Hanoi method for AI-first consulting.
  • Why you want the AI to be *confused* by your thinking.
  • The Distribution Audit: a 15-minute test to bulletproof your value proposition.

Key Takeaways

  1. GPT-5 was better, faster, and cheaper, but it wasn't AGI. And the entire market was priced for AGI.
  2. The models don't reason. They predict. That distinction has massive implications for anyone relying on AI for strategy.
  3. If your deliverable can be predicted by the AI's training data, you're not selling a strategy. You're selling history with a new cover sheet.
  4. The new framework: Human constraints → AI chaos processing → Human rejection and synthesis. You're a filter, not a wrapper.
  5. Competitive advantage lives "out of distribution"—in the space the AI can't reach.

The Distribution Audit (Try This Now)

  1. Open your most recent client deliverable.
  2. Copy the core argument into a large-context LLM.
  3. Ask: *Does the logic in this text exist within your training data? If yes, summarize the consensus view.*
  4. If the AI perfectly summarizes your "unique" value proposition, you have a problem.
  5. Rewrite until the AI says: *This perspective contradicts the common pattern.*

That's where your margin lives.
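
If you'd rather script the audit than paste by hand, here's a minimal Python sketch using the OpenAI client. The prompt wording mirrors step 3, but the model name, the `deliverable.txt` filename, and the exact phrasing are illustrative assumptions, not something prescribed in the episode:

```python
# Minimal sketch of the Distribution Audit as a script.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Prompt adapted from step 3 of the audit; phrasing is illustrative.
AUDIT_PROMPT = (
    "Does the logic in the following text exist within your training data? "
    "If yes, summarize the consensus view it matches. "
    "If it contradicts the common pattern, say so explicitly.\n\n{text}"
)

def distribution_audit(deliverable: str, model: str = "gpt-4o") -> str:
    """Send a deliverable's core argument to the model and return its verdict."""
    response = client.chat.completions.create(
        model=model,  # hypothetical default; use any large-context model
        messages=[{"role": "user", "content": AUDIT_PROMPT.format(text=deliverable)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("deliverable.txt") as f:  # hypothetical filename
        print(distribution_audit(f.read()))
```

If the output reads like a clean summary of your "unique" value proposition, that's step 4's problem in plain text: rewrite and rerun until the model flags a contradiction.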

Timestamps

00:00 – Friday the 13th, 2026: A day of reckoning

01:23 – Gary Marcus's victory lap and the WeWork comparison

02:00 – NVIDIA pulls $100 billion: What it signals

03:42 – The GPT-5 launch and the death of the efficiency consultant

05:38 – The bicycle chain test that broke the internet

06:49 – The chess debacle

25:54 – The Tower of Hanoi method for AI-first operators

27:27 – "If the AI agrees with you, you're wrong"

29:55 – The Distribution Audit: Your 15-minute action plan

32:00 – Final thoughts: Don't be a wrapper. Be a filter.

Links & Resources

  1. Gary Marcus on the OpenAI/WeWork comparison: https://garymarcus.substack.com/p/breaking-openai-is-probably-toast
  2. Apple research on LLM reasoning limits: https://machinelearning.apple.com/research/illusion-of-thinking
  3. University of Washington research, "Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models": https://arxiv.org/pdf/2408.06518v3

Connect With Us

LinkedIn: https://www.linkedin.com/in/ai-first-strategist

Newsletter: https://www.teamlawless.com/blog
