EXCLUSIVE-Chinese Researchers Develop AI Model for Military Use
Papers Show China Reworked Llama Model for Military Tool
- China’s top PLA-linked Academy of Military Science involved
- Meta says PLA ‘unauthorised’ to use Llama model
- Pentagon says it is monitoring competitors’ AI capabilities
By James Pomfret and Jessie Pang
Nov 1 (Reuters) – Top Chinese research institutions linked to the People’s Liberation Army (PLA) have used Meta’s publicly available Llama model to develop an AI tool with potential military applications, according to three academic papers and analysts.
In a paper reviewed in June, six Chinese researchers from three institutions, two of them under the PLA’s leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama to build a tool called “ChatBIT.”
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to build a military-focused AI designed to gather and process intelligence and to offer accurate, reliable information for operational decision-making. ChatBIT was optimized for dialogue and problem-solving in military contexts and was found to outperform some other AI models that were roughly 90% as capable as OpenAI’s powerful ChatGPT-4. However, the researchers did not specify how they measured performance or whether the model has been deployed.
“It’s the first time there’s been substantial evidence indicating that PLA military experts in China are systematically exploring the advantages of open-source LLMs, especially those from Meta, for military ambitions,” noted Sunny Cheung, associate fellow at the Jamestown Foundation, who focuses on China’s emerging and dual-use technologies, including artificial intelligence.
Meta has openly released many of its AI models, including Llama, but imposes restrictions on their use: services with more than 700 million users must obtain a license from the company. The terms also prohibit use of the models for “military, warfare, nuclear industries or applications, espionage” and other activities subject to U.S. defense export controls, as well as for developing weapons and content that “incites and promotes violence”.
However, due to the public nature of Meta’s models, enforcing these provisions remains challenging. Responding to inquiries from Reuters, Meta highlighted its acceptable use policy and measures to prevent misuse.
“Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy,” stated Molly Montgomery, Meta’s director of public policy, in an interview with Reuters.
Meta also emphasized the importance of open innovation to the U.S., arguing that in the global AI competition the role of a single, outdated version of an American open-source model “remains irrelevant” when China is already investing more than a trillion dollars to surpass the United States in AI development.
The Chinese researchers include figures like Geng Guotong and Li Weiwei from the AMS’s Military Science Information Research Center and the National Innovation Institute of Defense Technology. Additionally, researchers from the Beijing Institute of Technology and Minzu University were involved.
The paper stated, “In the future, through technological refinement, ChatBIT will not only aid in intelligence analysis but also explore strategic planning, simulation training, and command decision-making.”
Neither the Chinese Defence Ministry nor any of the institutions or researchers involved responded to requests for comment. Reuters could not independently verify ChatBIT’s capabilities or computing power, though the model was trained on only 100,000 military dialogue records, a relatively small sample compared with other LLMs trained on trillions of tokens, raising questions about how effective it can be.
The research comes amid debate in U.S. national security and technology circles over whether firms such as Meta should make their models publicly available. U.S. President Joe Biden’s October 2023 executive order sought to manage AI development, noting both its substantial benefits and its security risks.
Washington recently said it was finalizing rules to curb U.S. investment in artificial intelligence and other technology sectors in China deemed a threat to national security.
Pentagon spokesman John Supple said the Department recognizes that open-source models have both advantages and drawbacks, and that it would continue to monitor competitors’ capabilities.
‘COOKIE JAR’
Observers say China’s strides in domestic AI development, including the establishment of numerous research labs, have made it difficult to prevent the country from closing the technological gap with the U.S.
An unrelated paper reviewed by Reuters outlined how researchers from the Aviation Industry Corporation of China (AVIC) utilized Llama 2 for “training of airborne electronic warfare interference strategies.” AVIC is designated by the U.S. as a PLA-affiliated firm.
China’s application of Western-developed AI tools has also extended into domestic security. A June paper outlined Llama’s role in enhancing “intelligence policing” to process large datasets and improve law enforcement decision-making. The PLA Daily commented on AI’s potential to “accelerate weapons and equipment R&D” in April, suggesting applications in combat simulation and military training.
“Can you keep them (China) out of the cookie jar? No, I don’t see how you can,” said William Hannas, lead analyst at Georgetown University’s Center for Security and Emerging Technology (CSET). A 2023 study by CSET identified 370 Chinese institutions with researchers who had published papers on General Artificial Intelligence, advancing China’s goal of leading global AI by 2030.
“There’s too much collaboration between China’s leading scientists and the U.S.’s best AI scientists for them to be sidelined from developments,” Hannas added.