"Using AI as an operations staff officer for strikes on Iran?" How the US has completely changed the technology of war [AI Atlas]
- Published
- 2026-03-08 09:00:00
- Updated
- 2026-03-08 09:00:00

[Financial News]
The arcs of missiles cutting through the sky and the explosions shaking the ground: the traditional landscape of war is changing. On the 28th of last month, the United States of America (US) launched an airstrike on the Islamic Republic of Iran under the operation name "Operation Epic Fury." The mission signals a major turning point in the history of warfare. It is not simply because a more destructive weapon appeared, but because the "brain" of the battlefield is now being played by a Large Language Model (LLM) rather than a human..

"Claude as an operations officer?" AI becomes the brain of the battlefield
In the past, wars were contests of firepower and manpower. Now they are turning into contests over who can draw inferences from data faster and more accurately. At the center of this shift stand Anthropic PBC's AI "Claude" and a silent war among Big Tech companies surrounding it.

This is not the first time artificial intelligence (AI) has been deployed on the battlefield. The Russia-Ukraine war that began in 2022 was called a "testbed for AI warfare." At the time, US data analytics company Palantir Technologies (Palantir) played a decisive role by integrating and analyzing data from satellites, drones, closed-circuit television (CCTV), and social media to generate strike coordinates and detect landmines. However, the AI used then largely remained at the level of "specialized AI" optimized for narrow tasks such as image classification and object detection.

The latest airstrikes on Iran are on a different level. The key point is that Claude, a general-purpose commercial AI that people normally use to ask questions and get answers, rather than a narrow specialized model, oversaw comprehensive combat simulations. According to foreign media including The Washington Post (WP), the US military used an AI-based military intelligence platform called the Maven Smart System (MSS) during the first 24 hours of the Iran operation to strike roughly 1,000 targets.

What is striking is that Anthropic's Claude was embedded as the core reasoning engine of the MSS. Within the Palantir system, Claude reportedly contextualized tens of thousands of heterogeneous data points in real time and was used to generate target coordinates and determine strike priorities. If earlier AI mainly served as the "eyes" of the battlefield, it is now beginning to act as a "staff officer" proposing strategy.

Anthropic's principles and its break with the Trump administration
Behind this technological advance, however, lies a fierce clash over ethics and national interest. The conflict began with Anthropic's "principles." Anthropic was founded with a strong emphasis on AI safety. Even when Claude was used for military purposes, the company insisted on guidelines preventing it from being deployed for "large-scale surveillance of US citizens" or for "fully autonomous weapons" that exclude human intervention.

The Trump administration saw things differently. For military authorities seeking to maximize warfighting efficiency, constraints on AI were tantamount to weakening combat power. In the end, the US government rejected Anthropic's demands. It went further than a simple rejection, terminating its contracts with Anthropic and taking the drastic step of banning the use of Anthropic AI across federal agencies. According to a report by The Guardian on the 5th (local time), President Donald Trump announced that day that he was severing ties with Anthropic and said, "I fired Anthropic. They should not have done that. I cut them off like a dog." On the same day, the United States Department of Defense (DoD) designated Anthropic and its products as supply-chain risk entities.

OpenAI moves in to fill the gap left by Anthropic
OpenAI, Anthropic's rival, did not miss the massive vacuum left behind. Almost as if it had been waiting, OpenAI signed an agreement with the DoD to deploy AI models on classified networks, effectively taking over Anthropic's place.

This prompted a wave of "boycott" sentiment in the US, with critics arguing that OpenAI had abandoned universal human values and AI ethics in pursuit of technological superiority and market share. After news broke of OpenAI's contract with the DoD, an online boycott campaign called "QuitGPT" spread rapidly. According to market research firm Sensor Tower, as of February 28, deletions of the ChatGPT mobile app jumped 295% from the previous day, while US downloads of Anthropic's Claude were estimated to have risen 51% the same day.

Securing AI sovereignty is urgent
This episode carries a message that goes far beyond a simple corporate tug-of-war. First, AI capabilities have become a core asset of national security. The recent war involving Iran has demonstrated that countries lacking "AI sovereignty" will also lose the initiative on the battlefield.
Second, there is the erosion of AI ethics. As general-purpose technologies developed by private companies are transformed into the core of lethal weapons, there are virtually no international agreements or legal frameworks to control this process. This raises a serious question for the future: if AI makes decisions that slip beyond human control, who will be held responsible?
Third, war is becoming increasingly dehumanized. An AI staff officer that communicates in natural language may lead commanders to perceive decisions about killing as mere "business optimization." Behind numerical strike success rates and the smooth logic of AI, the horrific reality of war can easily be obscured.

Now that AI has evolved from an engine of the economy into the sharpest spear and shield safeguarding national survival, South Korea must also advance its LLM technologies and solidify its AI sovereignty.
Reporter Kim Sung-hwan (ksh@fnnews.com)