Meta recently announced that it will open its Llama series of artificial intelligence models to U.S. government agencies and related contractors to support national security applications. The editor of Downcodes offers a detailed interpretation of the announcement below, analyzing Meta's motivations, the potential impact, and the debate it has reignited over the safety of open artificial intelligence.
The move is aimed at dispelling perceptions that its "open" AI could empower foreign adversaries. "We are pleased to confirm that Llama will be available to U.S. government agencies, including those focused on defense and national security programs, as well as private sector partners supporting these efforts," Meta said in a blog post.
To support the initiative, Meta has partnered with a number of well-known companies, including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake. These partners will help apply the Llama models to a variety of national security missions.
For example, Oracle is using Llama to process aircraft maintenance documentation, Scale AI is fine-tuning Llama for specific national security tasks, and Lockheed Martin plans to offer Llama to its defense customers to help them generate computer code, among other uses.
Meta's acceptable use policy generally prohibits developers from using Llama for military, warfare, or espionage-related projects. In this case, however, Meta has made an exception for U.S. government agencies and their contractors, as well as for similar agencies in the UK, Canada, Australia, and New Zealand.
It is worth noting that recent reports indicate researchers affiliated with the Chinese People's Liberation Army used an older version of the Llama model (Llama 2) to develop a military-focused chatbot designed to collect and process intelligence and supply information for operational decision-making. Meta responded that this use of the model was "unauthorized" and violated the company's acceptable use policy. Nevertheless, the incident has fueled widespread discussion about the pros and cons of open artificial intelligence.
As artificial intelligence is applied to military intelligence, surveillance, and reconnaissance, the associated security risks have gradually come into focus. A study from the AI Now Institute shows that existing AI systems rely on personal data that adversaries can extract and weaponize. AI systems also suffer from problems such as bias and hallucination, for which there is currently no effective solution. The researchers recommend developing dedicated AI systems that are insulated from "commercial" models.
Although Meta claims that open AI can accelerate defense research and advance U.S. economic and security interests, the U.S. military remains cautious about adopting the technology; so far, only the Army has deployed generative artificial intelligence.
Meta's move has triggered widespread discussion of the ethics and safety of artificial intelligence applications in the defense field, and its long-term impact remains to be seen. The editor of Downcodes will continue to follow developments in this area and bring readers more in-depth analysis and reporting.