The United States Space Force has reportedly halted the use of internet-based generative AI tools by its personnel, citing the need to safeguard sensitive government data. An internal memorandum dated September 29 instructed Space Force members that using such tools, including those that generate text and other media, would require specific authorization.
Lisa Costa, the deputy chief of space operations for technology and innovation, emphasized that generative AI could benefit the Space Force workforce in the future, but underscored the need for responsible adoption that complies with existing cybersecurity and data-handling standards.
The decision affects roughly 500 people who had been using a generative AI platform called “Ask Sage,” according to Nick Chaillan, the former chief software officer for the United States Air Force and Space Force. Chaillan criticized the Space Force’s move, warning that it could set the United States back technologically relative to China.
He pointed out that other agencies, such as the U.S. Central Intelligence Agency, have developed their own generative AI tools that comply with stringent data-security protocols. The Space Force’s decision follows concerns raised by several governments, including Italy’s, about the risk that sensitive information could leak through large language models (LLMs).
In recent months, tech giants such as Apple, Amazon, and Samsung have likewise restricted their employees’ use of ChatGPT-like AI tools. This cautious approach reflects heightened awareness of data privacy and security, particularly in the context of AI-driven applications.