Andrew | 00:00
Imagine it's late at night, and you're logging into an AI chatbot just to vent about your day.
Steph | 00:07
Right. Which we've all done at this point.
Andrew | 00:08
Yeah, exactly. But instead of just complaining, you're venting about this massive looming business dispute your company's in.
So you type out all your liabilities, the weakest points in your contracts, maybe a few legal defense strategies you're kicking around.
Steph | 00:23
Which feels totally safe, right? Like a private brainstorming session.
Andrew | 00:26
Right. But then... months later, you're sitting in a federal courtroom, and you have to listen to a prosecutor read those exact prompts out loud to a jury.
Steph | 00:35
I mean, it is a terrifying scenario. The crazy thing is, it's entirely real. We are basically watching the slow-motion collision right now.
Andrew | 00:43
The collision between what?
Steph | 00:44
Between the hyper-fast evolution of generative AI and the deeply rigid, very precedent-bound architecture of the American justice system. And right now, the system is really struggling to absorb the impact.
Andrew | 00:56
Well, welcome to the Deep Dive. Today, we are looking at a huge stack of sources for you. We've got video game industry journalism, recent federal court rulings, and a major policy white paper from the National Center for State Courts.
Steph | 01:10
Yeah, there is a lot of ground to cover.
Andrew | 01:11
There really is. Our mission today is to explore what happens when a billionaire CEO bypasses his own legal department to ask an AI how to break a contract. We're going to look at why the illusion of digital privacy could literally land you in federal prison.
Steph | 01:27
And don't forget the seasoned lawyers who are accidentally citing entirely fake case law.
Andrew | 01:33
Yes, that too. And finally, we'll get into why, despite all this chaos, AI might actually be the only viable lifeline for millions of Americans who are completely locked out of the legal system right now.
So, OK, let's unpack this.
Steph | 01:46
I think the best place to start is with this really dangerous illusion: the idea that AI can act like a competent corporate strategist. Because, you know, when millions of dollars are on the line, the temptation to find an oracle that just tells you exactly what you want to hear is incredibly strong.
Andrew | 02:01
Which brings us to the whole Subnautica 2 saga. And we really need to look at the sheer scale of this acquisition first.
Steph | 02:07
Right. The money involved was staggering.
Andrew | 02:09
Exactly. So in 2021, the South Korean gaming publisher Krafton buys Unknown Worlds Entertainment for $500 million. But the deal included this massive $250 million earn-out bonus.
Steph | 02:23
Right. And that bonus was contingent on the sequel, Subnautica 2, hitting very specific sales milestones.
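To make the earn-out mechanics concrete, here's a minimal sketch of how a clause like this works. The milestone and sales figures below are hypothetical placeholders, not the actual Krafton/Unknown Worlds contract terms.

```python
# Minimal sketch of an earn-out clause: the bonus is owed only if the
# agreed sales milestone is hit. The milestone and sales figures are
# hypothetical placeholders, not the actual contract terms.
def earnout_payment(actual_sales: int, milestone: int, bonus: int) -> int:
    return bonus if actual_sales >= milestone else 0

# Projections above the milestone mean the full $250M bonus is owed.
print(earnout_payment(actual_sales=6_000_000,
                      milestone=5_000_000,
                      bonus=250_000_000))  # -> 250000000
```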
Andrew | 02:30
Yeah. So by the time we get to the lead-up to the game's release, Krafton's internal sales projections are skyrocketing, which for the executives is just devastating. It becomes painfully obvious to Krafton's executive team that they're definitely going to have to pay out that quarter-of-a-billion-dollar liability.
Steph | 02:45
And this triggers a pretty severe reaction from Krafton CEO Changhan Kim. He reviews the situation, decides he basically agreed to what he called a pushover contract, and he starts frantically searching for an exit strategy.
Andrew | 02:58
So what does he do?
Steph | 03:03
Well, his immediate impulse is to just terminate the studio's founders. He wants to manufacture a cause for firing them so he can void the earn-out entirely.
Andrew | 03:08
Wait, but hold on. He has a corporate team, right? Doesn't his own head of corporate development, Maria Park, actively warn him against doing exactly that?
Steph | 03:16
She does. She sends him a Slack message basically saying, look, firing them doesn't magically dissolve the earn-out obligation. And if you try to force this, you're going to expose Krafton to severe lawsuits and huge reputational damage.
Andrew | 03:31
So his human, highly compensated corporate strategy team explicitly tells him, "Hey, you have no legal standing to do this." And his response is to just bypass them entirely and go to ChatGPT.
Steph | 03:44
Yeah, that's exactly what he did. He essentially went shopping for a second opinion, one that, you know, aligned with what he already wanted to do. Which is wild. And the funny thing is, initially, even the chatbot told him the earn-out would be difficult to cancel, but he just kept adjusting his prompts. He complained to the AI that Krafton was being, quote, dragged around by the contract.
Andrew | 04:02
He's literally arguing with the bot.
Steph | 04:05
He engineered the parameters until ChatGPT finally gave him this multi-step corporate takeover strategy, a strategy which Krafton internally designated as Project X.
Andrew | 04:15
I love that name because the details of Project X read exactly like a machine's approximation of ruthless corporate maneuvering. It's so robotic.
Steph | 04:24
It really is. Like, what did it tell him to do?
Andrew | 04:26
Well, the AI recommended this tactic it called preemptive framing. Basically, put out a public statement to the player base to control the narrative and actively undermine the founders. It also told him to secure control points, meaning lock down the Steam publishing rights so the creators couldn't launch the game independently.
Steph | 04:44
And Kim actually executed the playbook. He fired the creators, Charlie Cleveland and Max McGuire, along with the CEO, Ted Gill.
Andrew | 04:51
And he put out that bizarre PR statement.
Steph | 04:53
Right. And it immediately backfired on him, because it lacked any authentic human nuance. The players saw right through it. And ultimately, this entire AI-driven strategy ended up in the Delaware Chancery Court, sitting right in front of Vice Chancellor Lori Will.
Andrew | 05:09
Man. And the Delaware Chancery Court is not known for suffering corporate nonsense lightly. Vice Chancellor Will completely saw through the maneuver.
Steph | 05:17
She systematically dismantled Krafton's defense. I mean, her ruling explicitly stated that Krafton went searching for a pretext to avoid a nine-figure liability.
Andrew | 05:28
So what was the punishment?
Steph | 05:30
She ordered that the CEO, Ted Gill, be reinstated with full operational control of the studio. And because Krafton's meddling delayed the game's development, she extended the earn-out deadline by the 258 days he was ousted.
Andrew | 05:45
Pushing that $250 million payout window all the way to September 2026.
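As a quick sanity check on that date arithmetic, a 258-day extension does land in September 2026 if the original window closed at the end of 2025. The December 31, 2025 baseline below is an assumption for illustration, not a date taken from the ruling.

```python
from datetime import date, timedelta

# Check the 258-day extension. The original deadline here is an assumed
# baseline for illustration; the ruling specifies the actual date.
original_deadline = date(2025, 12, 31)
extended_deadline = original_deadline + timedelta(days=258)
print(extended_deadline)  # 2026-09-15, i.e., September 2026
```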
Steph | 05:50
Exactly.
Andrew | 05:51
You know, what really stands out in her ruling, though, is how she handled the whole data theft accusation. Because Krafton tried to argue that since the founders, Cleveland and McGuire, downloaded company files before they were terminated, they were stealing corporate data.
Steph | 06:05
She shut that argument down completely. She basically noted that given the incredibly hostile environment Krafton was creating with this Project X stuff, those data backups were defensive. They were protective measures.
Andrew | 06:16
Right. There was no intent to steal anything.
Steph | 06:18
None at all. Now, Cleveland and McGuire weren't actually reinstated, but only because they had already voluntarily reduced their day-to-day roles before the hostile takeover attempt even started. But the judge made it super clear: they weren't thieves. They were targets of a CEO taking legal advice from a predictive text engine.
Andrew | 06:37
Yeah. Kim's catastrophic mistake was treating AI as a corporate strategist. But I think an even more dangerous assumption is treating AI as a legal confidant. Which brings us to the illusion of digital privacy.
Like, what happens when the stakes are no longer just a contract dispute, but actual federal prison time?
Steph | 06:55
Right. To really understand the gravity of that, we have to look at this ruling from February 2026. It's United States v. Heppner.
Andrew | 07:02
OK, set the scene for us. Who is Heppner?
Steph | 07:04
So Bradley Heppner was the former CEO of Beneficient, and he was facing federal charges for an alleged $150 million fraud scheme.
Andrew | 07:12
Wow. OK. Very high stakes.
Steph | 07:16
And to prep for his defense, instead of just sitting down with his actual human attorneys, he logged into Anthropic's AI, Claude. And he generated 31 distinct documents detailing his business liabilities, possible defense narratives, you name it.
And then later on, he handed those AI generated documents over to his actual defense team.
Andrew | 07:38
Wait, so the government eventually executes a search warrant, right? And they seize his devices.
Steph | 07:42
Yeah, and they found all 31 of those documents.
Andrew | 07:45
Let me guess. His lawyers completely panicked and claimed those files were shielded by attorney-client privilege.
Steph | 07:51
They absolutely did. They claimed attorney-client privilege and the work product doctrine. And Judge Jed S. Rakoff completely rejected the argument.
Andrew | 07:59
You know, it reminds me of that old hypothetical. It's like confessing all your crimes to a bartender, then later telling your lawyer what you told the bartender, and expecting the court to pretend the bartender is legally sworn to secrecy.
Steph | 08:10
That analogy is so perfect, because privilege is not some blanket of invisibility you can just throw over a document after the fact. For a communication to be privileged, it has to be between a client and an attorney. It has to be made in confidence and strictly for the purpose of seeking legal advice. Claude is a commercial software product. It doesn't have a law license. It owes you absolutely no fiduciary duty or duty of loyalty.
Andrew | 08:35
Not to mention, if you actually bother to read the terms of service for these open platforms, which nobody does, they explicitly state they collect your prompt data and can share that information with third parties or government authorities.
So by using the platform, you're agreeing to a complete lack of privacy. You might as well be typing your legal defense onto a public billboard.
Steph | 08:58
Which is exactly why the court ruled you cannot retroactively launder a document into being privileged just by emailing it to your lawyer a week later. I mean, if you search for something sketchy on Google, those search records don't magically become privileged just because you talk about them with your attorney later on.
Andrew | 09:13
Right. But what about the work product doctrine his defense team brought up? Since Heppner generated these documents specifically to prepare for his trial, why didn't the judge let that fly?
Steph | 09:24
Well, because the work product doctrine is designed to protect the mental impressions and legal theories of an attorney. It stops opposing counsel from exploiting the lawyer's work.
So it only protects materials prepared by or at the direction of counsel.
Andrew | 09:39
And he did this on his own?
Steph | 09:41
Precisely. Heppner generated these documents on his own initiative, totally bypassing the human legal firewall before his lawyers ever told him to do anything.
Andrew | 09:50
The takeaway here for you listening right now is absolute: anything you type into an open AI chatbot, whether it's about a looming business liability, an employee dispute, anything, is fully discoverable by your adversaries in court.
Steph | 10:03
It creates a permanent digital paper trail and prosecutors can and absolutely will subpoena it.
Andrew | 10:08
Okay, so we can laugh at clients and CEOs for completely failing to grasp the limitations of this tech. But surely the professionals, the actual lawyers who went to law school, they know better, right?
Steph | 10:19
You would think so, but they really don't. We are seeing fully licensed lawyers fall into this exact same trap, and it is causing a massive crisis of credibility in the courts.
Andrew | 10:28
Yeah. We have to look at the Hall v. The Academy Charter School case. This is from the Eastern District of New York in August 2025.
So the plaintiff's counsel submits an opposition brief, and they're arguing that charter schools don't require a specific notice of claim before you sue them. And to support this argument, the brief cites three different past cases, including one called Laskowski v. Liberty Partners.
Steph | 10:52
Right. Which sounds totally legitimate.
Andrew | 11:00
Except the catastrophic issue is that none of those three cases actually existed. They were completely fabricated by AI.
Steph | 11:01
Now, we've seen lawyers get in trouble for this before, like the infamous Mata v. Avianca case back in 2023, where the attorneys used ChatGPT, cited fake cases, and then actively lied to the judge to cover their tracks.
Andrew | 11:14
Right. But the Hall case is different, isn't it?
Steph | 11:16
It is. It really highlights how easily this technology can bypass standard quality control when human vulnerability is involved. The Hall case was fundamentally a tragedy.
The attorney didn't actually draft the brief herself. She delegated it to her clerk. The clerk used Google and generative AI for the legal research, which pulled these totally fabricated cases. Now, normally, the supervising attorney would rigorously verify those citations before signing her name to the filing.
Andrew | 11:45
So why didn't she?
Steph | 11:56
Because she was suffering from severe shock and grief following the sudden, unexpected death of her husband. It's awful. And Judge Wicks handled this with a remarkable degree of empathy. He noted that under Rule 11 of the Federal Rules of Civil Procedure, which says an attorney's claims must be warranted by existing law, she absolutely failed in her duty. But because her failure stemmed from severe, grief-stricken carelessness rather than subjective bad faith, he declined to issue monetary sanctions against her.
Andrew | 12:17
That was a very measured, human response from the judge. But if we pull back and look at the sheer volume of these incidents across the board, the statistics are just staggering.
Steph | 12:26
They really are. The sources mention the Charlotin database, which actively tracks AI hallucinations in court filings. They have documented 255 separate cases where generative AI produced fabricated legal content that made its way in front of a judge.
Andrew | 12:43
255. To understand why this keeps happening, we have to look at how large language models actually function.
Like, when a clerk or a lawyer types a prompt asking for case law, they assume the AI is just querying a giant database of actual judicial records.
Steph | 12:59
But it isn't. LLMs are just sophisticated predictive text engines. They are optimizing for linguistic plausibility, not factual accuracy.
Andrew | 13:08
Exactly. So if you ask for a case about charter schools, the model calculates the statistically probable words to follow. It might stitch together the name of a real judge, a plausible-sounding plaintiff, and a completely fabricated volume number. And it delivers a citation that looks and sounds incredibly convincing.
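To see why that failure mode is so convincing, here's a toy sketch of it. This is a deliberately crude stand-in for a real LLM, and every party name, volume, and page below is invented, but it shows the core problem: each fragment is sampled for plausibility, and nothing ever checks whether the assembled citation exists.

```python
import random

# Toy stand-in for an LLM assembling a citation piece by piece. Every
# fragment is plausible in isolation; no step verifies that the finished
# citation refers to a real case. All names here are invented.
random.seed(7)

plaintiffs = ["Laskowski", "Mercer", "Dominguez", "Whitfield"]
defendants = ["Liberty Partners", "Crestview Holdings", "Harborview LLC"]
reporters = ["F. Supp. 3d", "F.4th", "N.Y.S.3d"]

def plausible_citation() -> str:
    name = f"{random.choice(plaintiffs)} v. {random.choice(defendants)}"
    volume = random.randint(1, 999)    # plausible-looking volume number
    page = random.randint(1, 1500)     # plausible-looking page number
    year = random.randint(1995, 2024)
    return f"{name}, {volume} {random.choice(reporters)} {page} ({year})"

print(plausible_citation())  # looks like Bluebook; may cite nothing at all
```

The missing piece, in the toy and, functionally, in the real thing, is a verification step against an actual reporter database.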
Steph | 13:25
And judges are just losing their patience with it. Like in Colorado, a court issued a severe warning to a pro se litigant about impending sanctions if they ever filed hallucinated case law again.
Andrew | 13:34
But the irony is, the judicial branch isn't immune to this either.
Yeah. There was a highly publicized incident where a federal judge actually had to withdraw their own ruling because their opinion accidentally included fake AI-generated quotes.
Steph | 13:49
I mean, the technology is permeating every single level of the system.
Andrew | 13:53
Which brings up an obvious question, and here's where it gets really interesting. If AI is inventing fake case law and shattering legal privacy and emboldening CEOs to recklessly break contracts, why doesn't the judicial system just institute a blanket ban on generative AI in the legal process?
Steph | 14:11
Well, a lot of traditionalists are advocating for exactly that. And banning AI would certainly protect the immediate dignity and accuracy of the courts. But doing that would simultaneously sever a vital lifeline for millions of average Americans.
Andrew | 14:25
People who currently have zero access to legal representation.
Steph | 14:27
Exactly. To understand that counterargument, we have to look at the reality of the justice system as outlined in that white paper from the National Center for State Courts, the NCSC. The paper details this massive access-to-justice crisis. Over a four-year span, 66 percent of the population faces at least one civil legal issue. And for low-income households, that number jumps to 74 percent.
Andrew | 14:50
And let me guess, the overwhelming majority of those people never seek professional legal help, because human lawyering is just prohibitively expensive.
Steph | 15:00
Yes. The system is just too complex to navigate alone. The white paper highlights debt collection cases, which currently dominate civil court dockets nationwide. In those cases, up to 95 percent of defendants simply default.
Andrew | 15:13
Wait, really? 95 percent just lose automatically?
Steph | 15:18
Yep. And it isn't because they lack a valid defense. It's because they have absolutely no idea how to format a legal response, and hiring a lawyer would cost more than the debt itself.
Andrew | 15:26
So if the market need is that massive, why hasn't the tech sector flooded the zone? I mean, we have software that automates our taxes. Why isn't there an app that automatically generates the paperwork to fight a debt collector or delay an eviction?
Steph | 15:39
The barrier is UPL, the unauthorized practice of law. For roughly a century, state bar associations have maintained strict statutes making it illegal for anyone other than a licensed human attorney to provide legal advice.
Andrew | 15:53
And the legal definition of that term is incredibly broad.
Steph | 15:55
Very broad. It basically means applying the law to someone's specific factual circumstances to recommend a course of action.
So if a tech company builds an AI app that analyzes your eviction notice and tells you exactly which form to file in your specific county to stop it.
Andrew | 16:11
Under current UPL statutes, that software is illegally practicing law.
Though we are seeing some localized attempts to push through this barrier. Our sources mention tools like Roxanne the Repair Bot in New York, which helps tenants navigate complex housing laws to fight evictions. But these tools operate in a super perilous legal gray zone. They're constantly under the threat of litigation from state bar associations.
Steph | 16:34
Which creates a massive chilling effect on innovation.
Andrew | 16:37
But let me play devil's advocate for a moment for the listener. Aren't the bar associations raising a valid concern here? If we unleash generative AI on vulnerable populations, aren't we just cementing a dystopian two-tiered justice system, where wealthy corporations retain highly trained human lawyers and lower-income citizens are forced to rely on chatbots that might hallucinate a fake law and ruin their case?
Steph | 17:03
That is the common argument. Yes. But the NCSC white paper addresses that exact point with a pretty profound realization. We already operate a two-tiered justice system. The difference is that right now, the bottom tier receives absolutely nothing.
Andrew | 17:17
Right. They're defaulting at a 95% rate.
Steph | 17:19
Exactly. For these individuals, AI is not replacing a human lawyer. It is replacing zero representation. It is replacing total surrender.
Andrew | 17:27
Wow. So when you look at the math, the risk of an AI hallucination is vastly outweighed by the absolute certainty of a guaranteed default.
Steph | 17:34
Precisely. And that stark reality is why the NCSC is urgently calling for regulatory reform. The technology is already out there. Consumers are already using regular chatbots to try and draft legal defenses, often with disastrous results, like we saw with Heppner. The white paper outlines three specific paths to modernize these century-old UPL laws.
Andrew | 17:49
So the system has to adapt to make it safe. What are the paths forward?
Steph | 18:00
The first path is revising the UPL statutes directly.
Andrew | 18:03
Meaning amending the laws to explicitly permit AI tools?
Steph | 18:11
Yes, provided they operate within strict consumer protection rails. The tools would have to clearly disclose that they are not human attorneys. They'd have to demonstrate rigorous data security and pass accuracy vetting. North Carolina actually provides a precedent for this.
Andrew | 18:21
With LegalZoom.
Steph | 18:22
Exactly. After a long legal battle, they carved out a specific exemption for LegalZoom's interactive software, allowing it to operate legally.
Andrew | 18:29
Okay, so if Path 1 is rewriting the statutes to carve out safe harbors, I imagine Path 2 requires creating some sort of safe testing ground before unleashing them on the general public.
Steph | 18:40
You nailed it. Path two is regulatory sandboxes. States create controlled, supervised jurisdictions where companies can deploy AI-driven legal services to real consumers under heavy monitoring.
Andrew | 18:53
Have any states actually tried this?
Steph | 19:01
Utah pioneered this model, and states like Washington and Minnesota are experimenting with it right now. It lets regulators analyze the outcome data in real time. If an AI tool is genuinely improving outcomes for unrepresented people, it stays in the sandbox. If it gives harmful advice, regulators can immediately pull the plug.
Andrew | 19:14
It allows the legal system to embrace innovation without sacrificing oversight. I like that. And what's the third path?
Steph | 19:21
Path three is the most structurally radical, but conceptually it's the simplest. It involves narrowing the definition of UPL entirely. Narrowing it how? Under this proposal, the law would be changed so that the unauthorized practice of law only penalizes someone who is falsely claiming to be a licensed human attorney.
So if an AI app clearly and unambiguously says, I am an artificial intelligence, not a lawyer, but based on my analysis, here is what you should do, that interaction would be completely legal.
Andrew | 19:49
That would fundamentally restructure the entire legal market. It shifts the burden of risk entirely to you, the consumer. It allows you to decide if you trust the software, rather than the government preemptively banning the software from speaking at all.
Steph | 20:04
It forces the legal profession to compete on value rather than just relying on a state-sanctioned monopoly to crush any technological alternatives.
Andrew | 20:12
So synthesizing everything we've looked at today, AI is operating as this massive, unpredictable, double-edged sword across the legal landscape. On one side, it is a profound liability.
I mean, if you treat a predictive text engine like a corporate strategist to break a contract, the Delaware Chancery Court is going to dismantle your scheme.
Steph | 20:29
And if you treat a consumer chatbot like a legally privileged confidant, prosecutors are going to use your own prompts to secure a conviction.
Andrew | 20:36
Not to mention, if lawyers blindly trust AI to generate citations, they risk professional humiliation and sanctions.
Steph | 20:42
But on the flip side of that sword, the technology holds this unparalleled potential to democratize access to justice. It could provide the essential framework to help millions of average citizens navigate a labyrinthine system that currently just crushes them by default.
Andrew | 20:59
Which leaves you with a really fascinating final thought to consider. If we follow this trajectory... if average consumers increasingly rely on AI to draft their civil lawsuits because they cannot afford an attorney...
Steph | 21:12
And corporate defendants increasingly deploy their own AI systems to auto-generate responses to those lawsuits to minimize their legal overhead...
Andrew | 21:21
Then how long until the hopelessly overwhelmed, underfunded state court systems begin using their own AI models to read those documents and draft the actual judicial rulings?
Steph | 21:31
Right. Are we rapidly approaching a reality where the American justice system just consists of machines arguing with machines?
Andrew | 21:36
While the humans simply sit in the gallery waiting for the algorithm to tell them who won. That is a wild thought to end on. Thank you for joining us on this deep dive. Stay insanely curious and we'll catch you next time.