Sam Altman, CEO of OpenAI, during a fireside chat organized by Softbank Ventures Asia in Seoul, South Korea, on Friday, June 9, 2023.
Seungjun Cho | Bloomberg | Getty Images
OpenAI and Anduril on Wednesday announced a partnership that will allow the defense technology company to deploy advanced artificial intelligence systems for “national security missions.”
It's part of a broader and controversial trend of AI companies not only rolling back bans on military use of their products, but also entering into partnerships with defense industry giants and the US Department of Defense.
Last month, Anthropic, the Amazon-backed AI startup founded by former OpenAI executives, and defense contractor Palantir announced a partnership with Amazon Web Services to “provide US intelligence and defense agencies with access to (Anthropic's) Claude 3 and 3.5 model suite on AWS.” This fall, Palantir signed a new five-year contract worth up to $100 million to expand the US military's access to its Maven AI warfare software.
The OpenAI-Anduril partnership announced on Wednesday “will focus on improving the nation's counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real time,” according to a statement, which added that “Anduril and OpenAI will explore how leading artificial intelligence models can be leveraged to quickly collect time-sensitive data, reduce the burden on human operators, and improve situational awareness.”
Anduril, co-founded by Palmer Luckey, did not answer a question about whether reducing the burden on human operators would translate into fewer humans in the loop for high-risk warfighting decisions. Luckey founded Oculus VR, which he sold to Facebook in 2014.
OpenAI said it is working with Anduril to help human operators make decisions “to protect US military personnel on the ground from drone attacks.” The company said it stands by the policy in its mission statement prohibiting the use of its artificial intelligence systems to harm others.
The news comes after Microsoft-backed OpenAI in January quietly removed a ban on military use of ChatGPT and its other AI tools, just as it began working with the US Department of Defense on AI tools, including open-source cybersecurity tools.
As of early January, OpenAI's policies page specified that the company did not allow its models to be used in “activities that involve a high risk of physical harm,” such as weapons development or military and warfare. In mid-January, OpenAI removed the specific reference to the military, although its policy still states that users should not “use our service to harm themselves or others,” including “developing or using weapons.”
The partnership follows years of controversy over tech companies developing technology for military use, highlighted by public concern among tech workers, especially those working in artificial intelligence.
Employees at nearly every tech giant involved in military contracts have expressed concerns after thousands of Google employees protested Project Maven, a Pentagon project that would use Google AI to analyze drone surveillance footage.
Microsoft employees protested a $480 million military contract that would equip soldiers with augmented reality headsets, and more than 1,500 workers at Amazon and Google signed a letter protesting a joint multiyear, $1.2 billion contract with the Israeli government and military, under which the tech giants would provide cloud computing services, artificial intelligence tools, and data centers.
— CNBC's Morgan Brennan contributed to this report.