OpenAI is reworking its recent deal with the Department of Defense, restricting the agreement's exposure to intelligence services and potentially reshaping the role of AI in national security.

On Monday, CEO Sam Altman announced the amendments in a post on X, saying that OpenAI's tools, including ChatGPT and other advanced generative models, will not be deployed to intelligence services such as the NSA unless the U.S. Department of Defense signs a new contract with OpenAI. Altman wrote that the company's collaboration with the DoD had been amended to add provisions reflecting its principles.

The amendments build on a major agreement finalized last week to deploy OpenAI tools within the Pentagon's classified networks, the first step toward bringing the AI company into the military amid growing demand for safe AI in defense.

OpenAI Amends Pentagon AI Contract

Deal Background

The alliance took shape after a February 27 executive order from President Trump directing federal agencies to stop using Anthropic's Claude models. OpenAI rapidly filled the resulting vacuum: the State Department had fully migrated by March 1, and the Treasury Department pledged to phase out Anthropic products by mid-year, according to Treasury nominee Scott Bessent.

AI's growing footprint in the military is reflected in its budget: the Department of Defense reported $1.8 billion in AI funding for fiscal 2025, a 22% increase over 2024. OpenAI's amended, non-military-use stance also answers earlier criticism that the company's position was naive in overlooking non-intelligence uses of AI, such as logistics and simulation.

Market Ripples

Financial markets responded favorably to the announcement. OpenAI's implied valuation was estimated at more than $15.96 billion after its latest financing round, while Anthropic's stock dropped 4% on secondary markets. Defense-oriented AI companies, including Palantir, gained 3.2%. Wedbush analyst Dan Ives observed that the company's guardrails balance ethics with revenue generation, and estimated that OpenAI could capture around $500 million in Department of Defense contracts by 2027.

The amendment lets OpenAI avoid ethical scandal while retaining access to a promising market, one that MarketsandMarkets estimates could reach $15 billion by 2030. Still, the risks should not be overlooked: a pivot toward NSA usage could trigger a backlash and erode customer trust, as a Pew Research survey found that 65% of citizens still distrusted military AI applications as of February 2026.

Looking Ahead

OpenAI must now cement its market position, with federal agencies adopting its technology and 80% of its pilots running on the stack. Rising geopolitical tensions, including Iran-related conflicts that pushed oil prices up 5%, are driving deeper integration. Nevertheless, Altman's revised principles could set a precedent for responsible conduct in the AI arms race.