Palantir's Maven AI has generated thousands of targeting recommendations in the Iran war, OpenAI has signed a $950 million Pentagon contract, and the tech industry's ethical guardrails have vanished in wartime.
The Wall Street Journal and CNBC covered the Pentagon contracts as business news, paying little attention to the ethical implications of AI-assisted targeting.
AI researchers on X called the Maven deployment "the thing we warned about for a decade," while defense hawks celebrated it as proof that AI dominance wins wars.
The Iran war is the first conflict in which the United States military has used artificial intelligence systems at scale for target identification, strike planning, and battle damage assessment. Five weeks in, the tools that Silicon Valley built and the Pentagon bought are being tested in ways that the AI safety debates of the past decade warned about but never resolved [1].
At the center is Palantir Technologies' Maven Smart System, the successor to the controversial Project Maven that Google abandoned in 2018 after employee protests. Palantir, which had no such qualms, took over the contract and has spent six years developing a platform that ingests satellite imagery, signals intelligence, drone footage, and human intelligence reports to generate targeting recommendations for military commanders. In the Iran campaign, Maven has been used to identify and prioritize thousands of targets, from missile launch sites and command bunkers to logistics nodes and air defense batteries [2].
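To make that pipeline concrete, here is a minimal sketch of how a multi-source fusion system might weight and rank candidate targets. Every name, weight, and structure below is a hypothetical assumption; it illustrates the general technique, not Palantir's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical multi-source fusion sketch. All names and weights are
# illustrative assumptions, not Palantir's Maven implementation.

@dataclass
class Candidate:
    target_id: str
    # Per-source confidence scores in [0, 1] from upstream analytics,
    # e.g. {"imagery": 0.9, "sigint": 0.7, "humint": 0.4}
    source_scores: dict = field(default_factory=dict)

# Assumed relative trust placed in each intelligence source.
SOURCE_WEIGHTS = {"imagery": 0.4, "sigint": 0.3, "drone": 0.2, "humint": 0.1}

def fused_score(c: Candidate) -> float:
    """Weighted average of per-source confidences, skipping absent sources."""
    total_w = sum(SOURCE_WEIGHTS.get(s, 0.0) for s in c.source_scores)
    if total_w == 0:
        return 0.0
    weighted = sum(SOURCE_WEIGHTS.get(s, 0.0) * v
                   for s, v in c.source_scores.items())
    return weighted / total_w

def rank_targets(candidates: list[Candidate]) -> list[tuple[str, float]]:
    """Sort candidates by fused confidence, highest first."""
    return sorted(((c.target_id, fused_score(c)) for c in candidates),
                  key=lambda t: t[1], reverse=True)
```

The design choice that matters is the one the sketch makes invisible: the weights. Whoever sets them decides how much a human report counts against a satellite image, and that decision is encoded long before any commander sees a recommendation.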
The numbers are striking. According to a Pentagon briefing reported by Defense One, Maven-generated target packages have been used in "the majority" of US strike operations against Iran since March 1. The system processes intelligence data at a speed and volume impossible for human analysts, generating recommendations in minutes that would previously have required hours or days of manual work. Commanders retain final authority over strike decisions — the "human in the loop" that the Pentagon insists upon — but the loop has been compressed to the point where the human role is increasingly one of approval rather than analysis [3].
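The compression is easier to see in code than in doctrine. In a hypothetical "approval" loop, sketched below, the package arrives fully formed and the human contribution shrinks to a single keystroke; the function and field names are invented for illustration.

```python
# Hypothetical approval-loop sketch: the machine builds the package,
# the human only confirms it. Not an actual military interface.

def request_approval(package: dict) -> bool:
    """Present a machine-generated target package for a yes/no decision."""
    print(f"Target {package['target_id']}  fused confidence: {package['score']:.2f}")
    return input("Approve strike package? [y/N] ").strip().lower() == "y"
```

An "analysis" loop would place the human inside the package-building steps; an approval loop places them after all of it.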
OpenAI's involvement is newer and less operationally visible, but strategically significant. The company signed a $950 million contract with the Department of Defense in January 2026, providing access to its large language models for intelligence analysis, logistics planning, and — the most controversial element — "decision support" in operational contexts. The contract explicitly excludes "lethal autonomous weapons systems," but the boundary between decision support and targeting is, in practice, a matter of configuration [4].
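How thin that boundary can be is easiest to show with a configuration sketch. The fields below are invented for illustration and describe neither OpenAI's API nor the contract's actual terms; the point is that the same deployment can cross from analysis into target nomination by flipping one flag.

```python
# Hypothetical deployment configurations. Field names are assumptions
# for illustration only, not OpenAI's actual systems or contract terms.

DECISION_SUPPORT = {
    "model": "llm-analyst",                          # assumed model name
    "tools": ["summarize_intel", "plan_logistics"],  # assumed tool names
    "emit_target_nominations": False,                # the contractual line
}

# One key changed, and "decision support" becomes targeting-adjacent.
TARGETING_ADJACENT = {**DECISION_SUPPORT, "emit_target_nominations": True}
```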
The ethical reversal is complete. In 2018, Google pulled out of Project Maven after 4,000 employees signed a petition declaring that "Google should not be in the business of war." The protest was treated as a watershed moment for the tech industry's relationship with the military. Eight years later, every major AI company has a defense contract. Google itself returned to military work through a separate cloud computing agreement. Microsoft holds a share of JEDI's successor, the multi-vendor Joint Warfighting Cloud Capability contract. Amazon runs classified cloud infrastructure. Anthropic, which was founded partly in reaction to OpenAI's commercial turn, holds a $200 million Pentagon contract for classified deployments [5].
The war accelerated what was already underway. Defense spending on AI increased 38% year-over-year in the fiscal year ending September 2025, reaching approximately $3.7 billion. The Iran war has generated supplemental appropriations that will push the 2026 total above $5 billion. For companies like Palantir — whose stock has risen 47% since February 28, adding roughly $30 billion in market capitalization — the war is not merely a moral test. It is a business opportunity [6].
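The quoted figures imply two baselines worth checking. A back-of-the-envelope calculation, using only the numbers reported above:

```python
# Back-of-the-envelope checks on the figures quoted in this article.

fy2025_ai_spend = 3.7e9           # ~$3.7B after a 38% year-over-year rise
fy2024_implied = fy2025_ai_spend / 1.38
print(f"Implied FY2024 AI spend: ${fy2024_implied / 1e9:.2f}B")   # ~$2.68B

cap_added = 30e9                  # ~$30B added since February 28
gain_fraction = 0.47              # the 47% rise over the same period
pre_war_cap = cap_added / gain_fraction
print(f"Implied pre-war market cap: ${pre_war_cap / 1e9:.1f}B")   # ~$63.8B
```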
The employee dissent that characterized the 2018 Maven debate has not materialized at comparable scale. Approximately 300 OpenAI employees signed a letter in March expressing "deep concern" about the Pentagon contract, but the company's leadership dismissed it as a minority view. Palantir's workforce, which has always been more politically aligned with the defense establishment, has produced no organized opposition. The AI safety community — which spent years warning about exactly this scenario — has been largely reduced to commentary rather than action [7].
The operational questions are as urgent as the ethical ones. Maven's targeting recommendations are based on pattern recognition in intelligence data. The system is very good at identifying what it has been trained to identify. It is less good at understanding context that falls outside its training distribution — the difference between a missile launcher and a construction crane, between a military convoy and a civilian bus traveling the same road at night. The Pentagon says its civilian casualty protocols are robust. Independent verification is impossible because the war zone is closed to journalists [8].
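The failure mode is structural, not incidental. A toy example, with made-up numbers, shows why a closed-set classifier can be confidently wrong about an object from outside its training distribution: softmax probabilities must sum to one over the classes the model knows, so an unfamiliar object still gets assigned, with apparent confidence, to one of them.

```python
import math

# Toy illustration of out-of-distribution failure. All numbers are
# invented; this is not any deployed targeting model.

CLASSES = ["missile_launcher", "construction_crane"]

def softmax(logits: list[float]) -> list[float]:
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# An object in neither class (say, a drilling rig) still produces logits,
# and the probabilities must sum to 1 over the known classes.
ood_logits = [2.1, 0.3]           # assumed model outputs for the OOD object
for cls, p in zip(CLASSES, softmax(ood_logits)):
    print(f"{cls}: {p:.2f}")      # missile_launcher: 0.86, confidently wrong
```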
What is knowable is that the architecture of AI warfare — the infrastructure, the contracts, the operational patterns — is being built right now, in real time, in a live conflict. The decisions being made in the Iran campaign about how AI systems are deployed, how much autonomy they exercise, and how their errors are accounted for will establish precedents that govern every future conflict.
In 2018, Google's employees asked their company not to build weapons of war. In 2026, those weapons are operational, and the companies that build them have learned to stop asking their employees for permission.
-- Kenji Nakamura, Tokyo