OpenAI, the Pentagon, and the Truth About Autonomous Warfare

Mar 04, 2026

There was a moment on Friday, February 27, 2026, that we'll remember for a long time. Sam Altman sent an internal memo to his staff. It was clear and surprisingly principled. Humans stay in the loop on high-stakes decisions. OpenAI's technology will never be used for mass surveillance.

By Friday evening, he announced a classified contract with the Pentagon. Same day. What happened?

What Anthropic Actually Did

To understand what OpenAI agreed to, you first have to understand what Anthropic refused.

The Pentagon warned Anthropic that it would end the military contract unless the company changed its terms to allow what the military called "all lawful use" of its technology. Anthropic said no. Specifically, they drew hard lines against mass domestic surveillance and autonomous weapons. They held those lines publicly. They paid for it.

The retaliation was severe enough to warrant attention on its own. The military threatened to designate Anthropic as a supply chain risk. That label is usually reserved for companies linked to foreign adversaries. Companies like Huawei. Structural threats to national security.

They were ready to label an American company for not allowing domestic surveillance. Think about what this shows.

The Three Words That Matter

When journalists actually read OpenAI's contract, the Friday night red lines dissolved quickly. The agreement doesn't hinge on ethics. It hinges on three words: "any lawful use."

That might sound like a meaningful limit. It isn't. The U.S. government has spent 20 years expanding the meaning of "lawful." This includes bulk data collection, warrantless wiretapping, and secret surveillance programs. Many of these programs were unknown to the public until leaks revealed them.

Here's what that looks like in practice. A government agency can buy a large dataset of location data from a commercial broker. This happens daily. If they then ask a GPT model to analyze it for behavior patterns, the model doesn’t alert anyone. It sees a data processing task. The acquisition was lawful. The analysis is just computation.

No internal safety classifier catches that. Because nothing technically illegal happened. That is the blind spot. And the Pentagon knows exactly where it is.

When the Lawyers Agreed With the Engineers

The pushback inside OpenAI was immediate. Researcher Leo Gao called the contract language "window dressing." Boaz Barak, his colleague, defended the company culture. He dismissed Gao's critique, saying Gao wasn't a lawyer or a national security expert.

Then Brad Carson entered the thread. Former Army General Counsel and former Undersecretary of Defense. He read the exchange and replied that Gao's interpretation of the contract was correct. The former top lawyer for the U.S. Army sided with the engineer, not the executive.

That's the whole story in one paragraph.

The Consumer Response Was Decisive

Over that weekend, ChatGPT uninstalls spiked nearly 300%. A post titled "You're now training a war machine" gathered over 32,000 upvotes. Anthropic's Claude shot to number one in the App Store.

Seven hundred thousand workers at Amazon, Google, and Microsoft united in demanding that their companies reject Pentagon advances on dual-use AI. Google and OpenAI workers signed an open letter refusing to build for what they called the Department of War.

That kind of cross-company solidarity is rare in Silicon Valley. It doesn't stem from policy disagreements. It happens when people feel a core belief is being violated.

By Monday, Altman was walking it back. He called the announcement sloppy and promised to amend it. The Pentagon reportedly agreed to changes.

Perhaps that's a resolution. Or perhaps a $730 billion valuation and 900 million weekly active users make principles harder to hold when the defense budget is at stake. You cannot justify numbers like that on $20 consumer subscriptions. You need government-scale revenue. And the government knows it.

The Parallel Story Nobody Is Connecting

A second thread runs beneath this, highlighting the same structural problem. An OpenAI employee was recently fired for insider trading on Polymarket, a prediction market platform. The mechanics are straightforward. You know when a new model is scheduled to drop. You find an active market asking whether OpenAI will release a model before a specific date. You buy shares. You profit from information nobody else legally has.

In traditional finance, that's a federal crime.

In prediction markets, it's a gray area. The platforms are handling it the way early sports organizations handled doping. They make a public example of one low-level offender to prove accountability, and trading volume keeps surging anyway. The co-founder of Aftermath compared today's regulations to early drug testing in sports. That analogy is precise.

The Associated Press announced a partnership with Kalshi. They will integrate U.S. election result data before the 2026 midterms. Major journalistic institutions are now formally integrated with unregulated betting platforms. Political insiders with non-public information can still trade freely based on election outcomes.

When insider advantage corrupts market data, the public loses trust in it. That matters well beyond the platforms themselves.

The Question Worth Sitting With

Here's where both stories converge.

The definitions of lawful, ethical, and acceptable are being rewritten right now. Not through the democratic process. Not through public debate. They're being rewritten in private, by the parties under the most financial pressure and with the most to gain.

Startup safety frameworks are now colliding with multi-billion-dollar military contracts. Something gives under that kind of temptation.

Casey Newton put the question plainly in his column: how will people react when they learn that GPT models assisted with ICE deportations, or that AI drones were used overseas?

The truth tends to surface. It just rarely surfaces on the timeline or in the format anyone anticipated.

The AI model in your pocket does many things. It drafts emails, summarizes meetings, and may even help with your next client proposal. It learned all of that in one environment. It may be quietly tuned in another. Pay attention to the fine print.

This Is Not New. It Has Always Worked This Way.

Here's the part that should reframe everything you just read. The moral outrage over the Pentagon's AI contracts is understandable. There's a bigger pattern here. Once you notice it, this moment seems less like a crisis and more like a chapter in a long book.

DARPA is often called the agency that shaped the modern world, and that's true. The internet you use, the GPS in your car, and the voice assistant on your phone all began as military research. No one built any of it for you first.

Automated voice recognition, language translation, and GPS receivers all came out of the same playbook, one that has run for 60 years: fund it for war, commercialize it later.

Consider what became of Siri. In 2003, DARPA launched the CALO project, short for Cognitive Assistant that Learns and Organizes, as part of its Personalized Assistant that Learns program. DARPA gave SRI International a five-year contract to study artificial intelligence, with military command and control as the goal. SRI spun that research out into Siri, a virtual personal assistant, and Apple acquired it in 2010. The assistant in your pocket was born in a defense lab.

Or consider real-time language translation. Before Google Translate, the Defense Department paid nearly $700 million for a single translation contract built around human interpreters, mainly in Afghanistan, and realized there had to be a better way. DARPA's TRANSTAC program created two-way, real-time voice translators. The devices were tested at vehicle checkpoints in Afghanistan, translating English into Pashto and back instantly. That technology is now standard in every free consumer translation app you use.

The pattern is consistent. Military necessity drives the investment. The technology matures by being tested in high-stakes environments. Then it migrates.

What's different this time is the speed, the scale, and the visibility. The migration used to happen slowly over a decade. Now it's happening in a Friday night announcement while 700,000 tech workers are watching.

The AI systems being shaped by Pentagon contracts right now will likely follow the same arc. Developed under a classified veil. Refined through operational pressure. Gradually normalized. Over time, the surveillance feature in a defense contract sets the standard for the next commercial model.

Perhaps that's alright with you. Perhaps it isn't. But the question isn't whether this has happened before. It has. Consistently. For sixty years.

The question is what happens when AI hallucinates, blatantly lies, or goes rogue without a kill switch. We know that GPT models still suffer from high rates of inaccuracy and perform poorly on logical reasoning tasks. Then there is the 90% problem: if an agent has a 90% success rate on each subtask, a 4-step process has only a ~66% chance of being correct end to end. One small error at the start corrupts every downstream action.
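To make that arithmetic concrete, here is a minimal sketch in plain Python. The 90% success rate and the step counts are illustrative assumptions, not measurements of any particular model, and the math assumes errors are independent across steps.

```python
# Minimal sketch: how per-step reliability compounds across a multi-step agent task.
# The 90% figure and the step counts below are illustrative, not measured values.

def chance_all_steps_succeed(per_step_success: float, steps: int) -> float:
    """Probability that every step in a chain succeeds, assuming independent errors."""
    return per_step_success ** steps

for steps in (1, 4, 10, 20):
    p = chance_all_steps_succeed(0.90, steps)
    print(f"{steps:>2} steps at 90% each -> {p:.1%} chance the whole chain is correct")

# Prints roughly:
#  1 steps at 90% each -> 90.0% ...
#  4 steps at 90% each -> 65.6% ...
# 10 steps at 90% each -> 34.9% ...
# 20 steps at 90% each -> 12.2% ...
```

The exact numbers matter less than the shape: chaining even fairly reliable steps erodes end-to-end reliability fast.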

Then there is research that shows it is often trivial to trick a custom agent into revealing its secret instructions or the contents of its uploaded knowledge files. And once an autonomous agent executes a task, there is often no way to reverse the action. There is no Undo button. We'd better have humans in the loop. We'd better not let it have the launch codes.
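If you want a feel for what "humans in the loop" means at the code level, here is a minimal sketch with hypothetical action names and a console prompt standing in for a real approval workflow. It is not any vendor's production design; it only shows the shape of the idea: the agent is forced to stop before anything irreversible.

```python
# Minimal human-in-the-loop gate: irreversible actions wait for explicit approval.
# The action names and the console prompt are hypothetical stand-ins; a real system
# would use audit logs, role checks, and out-of-band confirmation instead.

IRREVERSIBLE = {"send_email", "delete_records", "execute_trade", "launch_anything"}

def run_action(name: str, payload: dict, approver=input) -> str:
    """Execute an agent action, but block anything irreversible until a human approves."""
    if name in IRREVERSIBLE:
        answer = approver(f"Agent wants to run '{name}' with {payload}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return f"BLOCKED: '{name}' was not approved by a human."
    # Reversible or approved actions proceed; the real work would happen here.
    return f"EXECUTED: {name}"

if __name__ == "__main__":
    print(run_action("summarize_meeting", {"doc": "notes.txt"}))  # runs without approval
    print(run_action("delete_records", {"table": "customers"}))   # stops and asks first
```

The design choice that matters is the default: anything on the irreversible list stays blocked unless a person explicitly says yes.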

About Andrew Lawless

Andrew Lawless is an investor and AI-First Strategist. He helps entrepreneurs sell and deliver smarter with human-in-the-loop AI. His work is built around a simple stance. AI is useful. It also lies, scales fast, and rewards sloppy incentives. So he designs systems that protect judgment, pricing power, and client trust.

Lawless has over 20 years of experience in consulting and technology. He has worked with Fortune 500 companies, government agencies, and elite teams; his clients include the FBI and Special Forces. He has managed global teams and developed automation and localization systems for major institutions worldwide. He brings that operator mindset to growing businesses through LinkedIn sales automation and custom GPT workflows, focusing on decision-grade thinking, clear constraints, and ethical lines that hold even under pressure. He writes and teaches for leaders who want the leverage of AI without handing their standards to a contract clause.

 

 
