The U.S. Department of Defense is actively exploring military applications of artificial intelligence and is partnering with leading AI companies such as OpenAI and Anthropic. The collaboration aims to improve the department's efficiency while strictly adhering to the principle that AI technology will not be used in lethal weapons. This article examines the role of AI in military decision-making, the ethical controversy over granting autonomous weapons life-and-death decision-making authority, and the Pentagon's cautious attitude toward and strict oversight of AI technology.
As artificial intelligence technology advances rapidly, leading AI developers such as OpenAI and Anthropic are working closely with the U.S. military, seeking to improve the Pentagon's efficiency while ensuring that their AI technology is not used in lethal weapons. Dr. Radha Plumb, the Pentagon's Chief Digital and AI Officer, told TechCrunch in an interview that AI is not currently used in weapons, but that it gives the Department of Defense significant advantages in identifying, tracking, and assessing threats.
Image note: AI-generated image, licensed from Midjourney.
Dr. Plumb said the Pentagon is accelerating execution of the "kill chain," the process of identifying, tracking, and neutralizing threats, which involves a complex web of sensors, platforms, and weapons systems. Generative AI is showing potential in the planning and strategy stages of the kill chain, and she noted that it can help commanders respond quickly and effectively when facing threats.

In recent years, the Pentagon has grown increasingly close to AI developers. In 2024, companies including OpenAI, Anthropic, and Meta relaxed their usage policies to allow U.S. intelligence and defense agencies to use their AI systems, while still prohibiting uses of the technology that harm humans. This shift has driven a rapid expansion of cooperation between AI companies and defense contractors. Meta, for example, partnered in November with companies including Lockheed Martin and Booz Allen to bring its Llama AI models to the defense sector, and Anthropic reached a similar agreement with Palantir. Although the specific technical details of these collaborations remain unclear, Dr. Plumb said that using AI in the planning stage may conflict with the usage policies of several leading developers.

The industry has debated heatedly whether AI weapons should have the power to make life-and-death decisions. Anduril CEO Palmer Luckey has noted that the U.S. military has a long history of purchasing autonomous weapons systems. Dr. Plumb, however, rejected that framing, stressing that in every case a human must be involved in the decision to use force. She argued that the idea of automated systems independently making life-and-death decisions is too binary; the reality is far more complex. The Pentagon's use of AI is a collaboration between humans and machines, with senior leaders involved throughout the decision-making process.

Highlights:

AI is giving the Pentagon significant advantages in identifying and assessing threats, driving more efficient military decision-making.

AI developers are working ever more closely with the Pentagon, but continue to insist that their technology not be used to harm humans.

Debate continues over whether AI weapons should have life-and-death decision-making power, with the Pentagon emphasizing that humans are always involved.

All in all, AI has broad prospects in the military field, but its ethical and safety issues must be treated with caution. How to prevent the abuse of AI technology while ensuring national security will be an important issue requiring continued attention and resolution in the future.