In a significant policy shift, Meta has announced that it will allow US government agencies and defence contractors access to its Llama AI models, citing national security interests. The decision, announced in a recent blog post, addresses concerns that Llama’s open-access nature could enable foreign adversaries to exploit the technology.
By partnering with major firms like Amazon Web Services, Lockheed Martin, Microsoft, and Oracle, Meta is aligning its AI resources with the needs of US defence and intelligence operations.
Previously, Meta’s policies strictly barred the use of Llama for military, warfare, or espionage activities. However, the company has made an exception for national security projects in the US, as well as for similar agencies in allied countries like the UK, Canada, Australia, and New Zealand.
Meta’s decision follows reports that Chinese military-linked researchers had used an older version, Llama 2, to develop a chatbot for defence purposes, which Meta condemned as an “unauthorised” use of its technology.
Among the companies integrating Llama for defence applications, Oracle is utilising the model to manage aircraft maintenance documents, while Scale AI is customising Llama to assist with specific mission requirements for national security teams. Lockheed Martin plans to deploy Llama for its defence customers in applications like computer code generation.
Despite Meta’s endorsement of open AI as a catalyst for innovation in defence, the use of AI in military applications remains contentious. According to a study by the AI Now Institute, AI models currently used for military intelligence and surveillance are vulnerable to biases, misinformation, and security risks.
The report highlights concerns over AI’s tendency to “hallucinate” information and the potential for personal data to be weaponised if intercepted. The AI Now Institute advocates for isolating military AI models from those developed for commercial purposes to prevent risks associated with mixed-use technology.
Within the tech industry, some employees have opposed their companies’ involvement in military projects, citing ethical concerns. Employees at Google, Microsoft, and other tech giants have previously protested contracts that involve building AI tools for defence, and the debate around AI’s ethical implications in warfare continues to simmer.
Meta, however, insists that opening Llama to US defence agencies can advance the country's security interests while fostering economic growth. Nevertheless, the US military has proceeded cautiously: so far, the US Army is the only branch to have implemented a generative AI tool in its operations. As the integration of AI in defence continues to evolve, Meta's collaboration with the US government could set a precedent, paving the way for more private-sector tech giants to support military applications with advanced AI.