OpenAI Robotics Head Caitlin Kalinowski Resigns Over Pentagon AI Deal Concerns
A senior leader at OpenAI has stepped down after raising concerns about the company’s recent partnership with the United States Department of Defense. Caitlin Kalinowski, who led OpenAI’s robotics and consumer hardware division, announced her resignation in early March 2026, saying the decision was based on ethical principles rather than personal disagreements within the company.
Kalinowski’s departure comes shortly after OpenAI confirmed a deal to provide its artificial intelligence models for use on the Pentagon’s classified networks. The agreement aims to support defence-related work such as cybersecurity, logistics and intelligence analysis, areas where governments are increasingly exploring AI capabilities.
However, the announcement triggered debate within the technology community about how AI systems should be used in military and surveillance settings.
Why Caitlin Kalinowski Resigned
In public posts on social media and professional platforms, Kalinowski said she believed the agreement raised important governance and oversight questions. She argued that issues such as surveillance without judicial oversight and the possibility of lethal autonomous systems required deeper discussion before any announcement was made.
While acknowledging that artificial intelligence can play a role in national security, she said certain ethical boundaries deserved more deliberation and clearer safeguards. According to her statements, the deal appeared to move forward too quickly without fully defined guardrails around how the technology could be used.
Kalinowski also stressed that her decision was not directed at any individual within the company. She expressed respect for OpenAI’s leadership and said she was proud of the robotics work completed during her time at the organisation.
Before joining OpenAI in 2024, she worked at Meta, where she led development projects related to augmented reality hardware and contributed to products such as advanced AR glasses.
Details of the OpenAI–Pentagon Partnership
OpenAI’s agreement with the U.S. Department of Defense was announced in late February 2026. The partnership involves deploying the company’s AI models within secure government cloud systems for defence-related applications.
The deal came after negotiations between the Pentagon and AI company Anthropic reportedly collapsed. Anthropic had sought stronger assurances that its technology would not be used for mass surveillance of citizens or fully autonomous weapons systems.
OpenAI has defended its decision, saying the agreement includes strict safeguards. According to the company, its policies prohibit the use of its AI technology for domestic surveillance and for autonomous weapons that operate without human control.
Company representatives also said they recognise that the topic raises strong opinions and that discussions with employees, policymakers and civil society will continue as the technology evolves.
Growing Debate Over AI and Military Use
Kalinowski’s resignation has added momentum to a broader global debate about the role of artificial intelligence in defence and surveillance. Some researchers and industry professionals argue that AI tools can improve security, logistics and threat detection. Others warn that rapid deployment without clear ethical standards could create risks related to privacy, accountability and automated warfare.
The controversy has also drawn attention to governance inside AI companies and how decisions about sensitive partnerships are made. Several observers say the situation highlights the growing tension between technological progress, commercial interests and ethical responsibility in the rapidly expanding AI industry.
For now, OpenAI has reaffirmed its commitment to setting limits on how its technology can be used. But Kalinowski’s resignation signals that debates over AI’s role in national security are likely to intensify as governments and tech companies deepen their collaborations.
