Andrew | 00:00
So, I want you to imagine for a second that you're sitting in a highly classified boardroom. Across the table, you've got representatives of the United States military.
Steph | 00:09
Right. Which is already an intimidating room to be in.
Andrew | 00:13
But you aren't negotiating the price of fighter jets or troop logistics. You are actively haggling over the definitions of two specific phrases: mass surveillance and autonomous weapons.
Steph | 00:26
Yeah. And what gets decided in that specific room is going to dictate how the most powerful AI systems on the planet interact with global warfare and, honestly, your own domestic privacy.
Andrew | 00:37
It sounds like a scene from a near-future sci-fi thriller. But this is exactly what's been playing out in reality over the last few weeks. Welcome to today's deep dive.
Steph | 00:46
Glad to be here for this one. Yeah. Because the implications of these boardroom fights extend way beyond the Pentagon or, you know, a few executives in Silicon Valley.
Andrew | 00:54
Exactly. If you're a citizen whose data is constantly being scraped and stored, or if you're just someone worried about automated decisions in high stakes environments, this impacts you directly.
Steph | 01:03
The foundational rules for how AI interacts with the state are being written right now, mostly behind closed doors.
Andrew | 01:09
So our mission today is to unpack this incredibly chaotic weekend in the tech world. A weekend where the Pentagon terminated a massive contract with one AI giant, signed a highly controversial one with another, and just ignited this massive firestorm.
Steph | 01:25
We're drawing heavily on Casey Newton's recent column, What is OpenAI going to do when the truth comes out?, along with some great reporting from other tech journalists.
Andrew | 01:33
We're going to look at the corporate maneuvering, the fuzzy legal loopholes, and how this exact friction is bleeding into a secondary crisis involving insider trading on prediction markets.
So let's start with the inciting incident: the Pentagon's ultimatum.
Steph | 01:47
Right.
So to understand this ultimatum, we have to look at Anthropic first. They're one of the leading AI companies, often seen as the more... safety-conscious rival to OpenAI.
Andrew | 01:58
Yeah, they really market themselves that way.
Steph | 02:00
They do. And the Pentagon followed through on a severe threat to terminate the military's contract with them. The sticking point was that Anthropic refused to amend its agreement to permit what the military called all lawful use of its tech.
Andrew | 02:13
Anthropic just drew a really hard, very public line in the sand.
Steph | 02:18
Exactly. They explicitly prohibited the use of their systems for mass domestic surveillance and autonomous weapons, and they just refused to budge on it.
Andrew | 02:28
But the Pentagon didn't just say, OK, cool, we'll find another vendor. They aggressively retaliated. And I mean, why would the military play such extreme hardball over a software licensing agreement?
Steph | 02:38
Because the scale of the retaliation is genuinely staggering. The military threatened to designate Anthropic as a supply chain risk.
Andrew | 02:46
Which is a massive deal. It's huge.
Steph | 02:48
To put that in perspective, that's a label historically reserved for corporate extensions of foreign adversaries. You know, companies like Huawei, entities viewed as active structural threats to U.S. national security.
Andrew | 02:59
So by slapping that label on an American company like Anthropic, the military was basically trying to block any prime contractor working with the government from using Anthropic's products at all.
Steph | 03:11
Right. Dean Ball, a former AI advisor in the White House, he argued this move strikes at the core principles of private property. The government was essentially telling a private American company, do business on our exact terms or we will systematically cripple your ability to do business entirely.
Andrew | 03:28
It just highlights how desperate the Department of Defense is to get these frontier models into their infrastructure. They are terrified of losing the geopolitical AI arms race. Absolutely. But this is where the whiplash of that weekend really sets in. Enter OpenAI. On Friday morning, OpenAI CEO Sam Altman sent an internal memo to his staff that sounded like he was standing shoulder to shoulder with Anthropic.
Steph | 03:52
Yeah, he wrote that OpenAI shared those exact same red lines. He specifically stated humans should remain in the loop for high-stakes decisions and that their AI should never be used for mass surveillance.
Andrew | 04:03
But then the narrative just completely flipped in a matter of hours. By Friday evening, Altman announced on X that OpenAI had officially reached an agreement with the Pentagon for classified AI deployment.
Steph | 04:16
And he claimed that those same red lines, the ones Anthropic had just fought and bled for, were somehow baked right into OpenAI's new contract.
Andrew | 04:24
Which, given his history, makes that Friday night announcement look very suspicious. It forces you to wonder, did the Pentagon actually compromise? Or did OpenAI just offer a semantic workaround that Anthropic refused to stomach?
Steph | 04:38
Right. Did the Pentagon officials who were just raging against Anthropic suddenly make a massive exception for OpenAI? It raised serious red flags.
Andrew | 04:46
Keach Hagey explores this dynamic brilliantly in her book The Optimist.
Steph | 04:49
She does. Hagey details how former OpenAI executives like Mira Murati and Ilya Sutskever observed this distinct pattern in Sam Altman's leadership. They felt he had a specific playbook where he would just say whatever was necessary in the moment to get his way.
Andrew | 05:04
And then if a glaring discrepancy emerged later, he would act like it was just an innocent mistake. Like, I must have misspoken.
Steph | 05:11
Exactly. Sutskever reportedly felt this pattern was inherently dishonest and created systemic chaos inside the company.
So the question the whole tech world was asking that Friday night was, did everyone just walk into this exact same trap?
Andrew | 05:24
And when journalists actually started digging into the text of the agreement, the facade of those red lines just completely crumbled. Hayden Field's reporting for The Verge revealed that OpenAI's deal is significantly softer than Anthropic's. Much softer. It hinges entirely on three very specific, highly elastic words: any lawful use.
Steph | 05:44
And the definition of lawful in the context of government operations is incredibly broad. If you look line by line at the OpenAI terms, it basically boils down to this: if an action is technically legal under current U.S. law or executive order, the military can use OpenAI's tech to do it.
Andrew | 06:01
And historically, we know the U.S. government has stretched the definition of technically legal to cover massive, sweeping domestic surveillance programs. We've seen it with bulk data collection, warrantless wiretapping.
Steph | 06:11
For the last two decades, constantly.
Andrew | 06:14
But doesn't that render OpenAI's internal safeguards totally useless? Like, they say they're building classifiers and safety stacks to block domestic surveillance.
Steph | 06:22
On paper.
Andrew | 06:23
Yes. But think about what mass surveillance actually looks like to an AI model. If the government legally buys a massive spreadsheet of location data from a commercial data broker, which happens every day, and then asks a GPT model to analyze it for patterns, the model doesn't recognize a system of oppression.
Steph | 06:41
No, it just sees a data processing task.
Andrew | 06:43
Right. It won't trigger any internal mass surveillance alarms because the data acquisition was technically lawful.
Steph | 06:50
That is the exact blind spot. And that elasticity applies to the physical world, too, especially when we talk about autonomous weapons. Bloomberg recently pointed out that OpenAI is participating in a competition to develop software that allows drones to be controlled via voice commands.
Andrew | 07:05
Which is wild because Anthropic actually participated in that competition, too. Though apparently their CEO, Dario Amodei, objected mostly because the voice tech doesn't work reliably yet, not necessarily drawing a hard moral line on that specific project.
Steph | 07:19
Yeah, but it highlights the tension perfectly. If OpenAI's usage policy explicitly bans weapons development, how does building voice controls for a military drone fit into that framework?
Andrew | 07:29
It's the classic dual-use dilemma. If you build the navigation system or the voice interface for a drone, but you aren't the one actively attaching the explosive payload, does your software count as a weapon system? Where does the tool end and the weapon begin?
Steph | 07:44
Sarah Shoker, who led OpenAI's geopolitics team for three years, provides some incredible insight on this. In her Substack, she points out that you can read these policies in drastically different ways. Is a voice-to-text tool considered an active part of a kill chain? Or is the AI treated in isolation from the larger weapon?
Andrew | 08:03
And those borders of the law are exceptionally fuzzy. Terms like meaningful human control or human in the loop are fiercely debated by legal scholars. There's no absolute consensus on what adequate human supervision actually looks like in high-speed warfare.
Steph | 08:16
And the Pentagon knows exactly how to exploit that fuzziness. Ross Andersen's reporting in The Atlantic showed that during the Anthropic negotiations, the Pentagon kept trying to sneak these little escape hatches into the drafts.
Andrew | 08:28
Yeah, they'd verbally agree to ban fully autonomous killing machines, but then insist on adding phrases like "as appropriate" into the text.
Steph | 08:37
As appropriate, which just implies the terms can be overridden later based on a future administration's interpretation of a crisis.
Andrew | 08:44
So when you combine that highly flexible definition of lawful with the government's history of broad surveillance and then wrap it all in a rushed Friday night announcement that contradicts an internal memo, I mean, the response was bound to be explosive.
Steph | 08:58
The immediate reaction was staggering. Over that weekend, Sensor Tower estimated that ChatGPT uninstalls spiked by nearly 300%. Wow. On the ChatGPT subreddit, a post titled You're now training a war machine racked up over 32,000 upvotes. Meanwhile, Anthropic's Claude app shot to number one in the App Store. You even had Katy Perry posting screenshots of signing up for a Claude Pro subscription.
Andrew | 09:21
When Katy Perry is weighing in on AI military contracts, you know it's hit the mainstream. But the consumer backlash was only half of it. The labor response across the tech industry was arguably way more significant.
Steph | 09:33
Yeah, a coalition of 700,000 workers across Amazon, Google, and Microsoft organized to demand their companies reject the Pentagon's advances on dual-use AI.
Andrew | 09:43
Plus, an open letter from Google and OpenAI employees explicitly refusing to build for what they call the Department of War. They specifically cited the demands for mass surveillance and autonomous systems without strict oversight.
Steph | 09:55
It is rare to see that level of cross-company solidarity in Silicon Valley. Inside OpenAI, the internal drama was just spilling out onto the Internet in real time. Researcher Leo Gao openly called the contract snippet leadership posted window dressing.
Andrew | 10:09
He basically said it's just all lawful use followed by legal fluff to look like a safeguard.
Steph | 10:15
And the pushback to his comment is so revealing about the corporate culture there. His colleague, Boaz Barak, responded by defending OpenAI's culture, saying he was proud employees felt empowered to push back. But then he dismissed Gao's critique entirely.
Andrew | 10:29
Right. He basically said, well, Gao is a great technical researcher, but he's not a lawyer or a national security expert, implying he just didn't understand government contracts.
Steph | 10:38
Which completely backfired, because right after Barak posted that, Brad Carson jumped into the thread. Carson is a former Army general counsel and former undersecretary of defense. He basically replied, I'm not sure if my resume makes me an expert, but Leo Gao's interpretation of the contract is correct.
Andrew | 10:56
You literally cannot ask for better legal validation than the former top lawyer for the U.S. Army agreeing with the engineer.
Steph | 11:03
It perfectly encapsulates the tension. The technical talent sees the glaring loopholes. Corporate relies on institutional authority to smooth it over. And the actual legal experts side with the engineers.
Andrew | 11:14
Which brings us to the ultimate legal reality of this deal. Jessica Tillipman, a professor at GW Law, pointed out the central unresolved conflict here. OpenAI claims it retains full discretion over its internal safety classifiers. But the contract simultaneously promises the government any lawful use.
Steph | 11:29
So what happens when the unstoppable force meets the immovable object? What happens if OpenAI's safety classifier blocks a specific military use case that the government deems completely lawful?
Andrew | 11:42
If OpenAI says, "No, that violates our ethics," and the military says, "It's lawful and we have an immediate operational requirement," which provision actually controls the contract?
Steph | 11:53
You don't know, because the specific language governing that clash remains classified. But as Tillipman noted, the Pentagon's reaction to Anthropic showed they are willing to take a maximalist, highly aggressive view of their rights when national security is invoked.
Andrew | 12:06
So believing a vague internal safety stack will stop the DOD from accessing capabilities they feel legally entitled to is, well, it's either impossibly naive or just intentionally deceptive.
Steph | 12:17
And it seems the pressure of that reality, plus the public backlash, finally got to Altman. By Monday evening, he was backpedaling hard. He posted that they shouldn't have rushed the announcement, called it sloppy and promised to amend the contract.
Andrew | 12:30
He even stated clearly that the NSA wouldn't be using GPT models at all. The Pentagon reportedly agreed to those Monday changes, which sounds like an improvement.
Steph | 12:39
On the surface, sure. But we have to zoom out and look at the financial gravity here. We are not talking about a scrappy startup making a principled stand. OpenAI recently raised $110 billion at a $730 billion valuation.
Andrew | 12:54
With 900 million weekly active users. You cannot justify a near trillion-dollar valuation on $20 consumer subscriptions. You need massive, sustained enterprise and government-scale revenue.
Steph | 13:04
The financial pressure from investors to secure those endless defense budgets is just immense.
So despite the Monday walkbacks, the long-term concern remains. Given the Pentagon's approach to national security and the deep elasticity of the word lawful, it seems disturbingly likely that GPT models will eventually power operations that a large portion of the public opposes.
Andrew | 13:24
Casey Newton asked a sobering question in his column. Americans already show deep distrust toward AI. How will the public react when it's inevitably revealed that GPT models are powering ICE deportation raids or being utilized in overseas drone conflicts?
Steph | 13:41
The truth always comes out eventually. The people living under AI surveillance or operating in the flight path of AI-assisted drones will be the ones who ultimately find out what OpenAI actually agreed to.
Andrew | 13:52
Yeah, but that gap between rapid tech development and sluggish regulation isn't just happening in military contracts. We're seeing the exact same regulatory wild west play out regarding prediction markets.
Steph | 14:03
This is such a fascinating parallel story. Prediction markets are platforms where users buy and sell shares based on the outcomes of future events. And recently, OpenAI had to fire an employee who was actively using confidential company info for personal gain on Polymarket.
Andrew | 14:17
It's a blatant example of insider trading happening in a total regulatory blind spot. Just think about the mechanics of it. An OpenAI employee knows exactly when a new language model is scheduled to drop. They log on to Polymarket, find a market titled Will OpenAI Release a New Model Before March 1st?, and just buy thousands of shares, guaranteeing a profit because they have non-public info.
Steph | 14:39
It is the literal definition of illegal insider trading in traditional finance. But these markets operate in a massive gray area. And the messiness compounds when you look at the platforms themselves. Take Kalshi, another major prediction market.
Andrew | 14:54
Yeah, the reporting highlights a huge contradiction in how Kalshi justifies its operations. They recently voided all bets regarding the potential ouster of Iranian Supreme Leader Ali Khamenei.
Steph | 15:04
And Kalshi's CEO claimed they took this action because they refused to list markets directly tied to death. They said they want to prevent people from profiting off assassination or mortality.
Andrew | 15:13
Which sounds noble on paper. Until you look at their actual track record. Critics immediately pointed out that Kalshi previously allowed bets to settle on whether former President Jimmy Carter would attend an upcoming inauguration.
Steph | 15:25
They fully understood that people were basically betting on whether the 100-year-old former president, who is in hospice care, would pass away before the event.
Andrew | 15:34
As one user pointed out, voiding the Khamenei market likely had absolutely nothing to do with a moral stance on death. It was just about protecting Kalshi's bottom line and managing PR in a volatile geopolitical moment.
Steph | 15:45
And the stakes for these markets are escalating fast. The Associated Press just announced a partnership with Kalshi to make U.S. election result data available on the platform ahead of the 2026 midterms.
Andrew | 15:57
So now you are integrating major journalistic institutions with unregulated betting platforms.
Steph | 16:02
Exactly. You have exploding trading volumes on global events, elections, wars, central bank decisions, but without actively enforced rules against insider trading. A co-founder of the news site Aftermath compared the current regulatory approach to early sports drug testing.
Andrew | 16:19
It is. That's a great analogy.
Steph | 16:21
The platforms will drag a few obvious low-level offenders, like the OpenAI employee, into the public square to show they're doing something. But they look the other way the rest of the time while the trading volume just surges.
Andrew | 16:32
And when you have political or corporate insiders leveraging hidden advantages to profit off democratic elections or international conflicts, it just fundamentally degrades public trust in the data these markets produce.
Steph | 16:45
Which brings us to the core issue tying all this together.
Andrew | 16:48
Right. When we synthesize everything we've unpacked today, a very unsettling theme emerges. Whether it's the U.S. military stretching the definition of lawful use for AI-driven mass surveillance, or prediction markets navigating insider trading and geopolitical death pools, the technology is just moving at light speed.
Steph | 17:06
The financial incentives are astronomical and the rules meant to govern all of this are crawling behind, desperately trying to catch up.
Andrew | 17:13
And in that widening gap between the tech and the regulation, the definitions of what is acceptable, what is ethical, and what is lawful are being rewritten. But not by democratic consensus. They're being rewritten by the entities with the most power and the most money in closed boardrooms.
Steph | 17:28
Which brings us to a final thought for you to mull over. We know the Pentagon is eager to leverage AI to maintain global supremacy. And we know OpenAI just achieved a $730 billion valuation, bringing immense pressure from investors to generate enterprise-scale revenue.
Andrew | 17:46
So if the definition of lawful is already highly flexible today, how might the sheer financial gravity of future multi-billion dollar military contracts subtly rewrite the moral code of the AI models you interact with every single day?
Steph | 18:00
Could the AI sitting in your pocket, drafting your emails and summarizing your meetings, eventually be quietly tuned to view surveillance not as a privacy violation, but simply as the standard lawful baseline of its existence?
Andrew | 18:13
It is the critical question of the next decade. The safety stacks and usage policies drafted in a startup environment today might not survive the immense financial and geopolitical pressures of tomorrow. Thank you for joining us on this deep dive.
Steph | 18:26
Keep questioning the systems being built around you.
Andrew | 18:27
Pay close attention to the fine print and we will catch you next time.