On Tuesday, the Commerce Department’s Center for AI Standards and Innovation announced that top artificial intelligence firms including Microsoft, xAI and Google DeepMind have agreed to allow government officials to vet their most advanced AI models for national security threats before those systems reach the public.

The deals, struck with the center, an office within the Commerce Department’s National Institute of Standards and Technology, will involve pre-deployment evaluations and what the agency described as “targeted research” on the companies’ frontier AI systems. The arrangement marks a significant expansion of the federal government’s role in overseeing the safety of cutting-edge artificial intelligence.

Chris Fall, director of the Center for AI Standards and Innovation, said in a statement that “independent, rigorous measurement science is essential to understanding frontier AI and its national security implications.” The center’s work is intended to help the government assess the risks posed by powerful new models before they are integrated into products and services used by millions.

Voluntary Vetting Evolves Under New Administration

The agreements closely mirror voluntary vetting deals that the Biden administration struck in August 2024 with leading AI labs OpenAI and Anthropic. The center said those earlier arrangements have since been “renegotiated” to align with the priorities of the Trump administration, signaling continuity of concern about AI security across presidential administrations.

The push for government oversight comes as the White House weighs even stronger action. The New York Times reported Monday that the administration is considering an executive order that would establish a formal government review process for new AI models, a move that would codify the voluntary agreements into binding policy. The urgency has been heightened by recent security concerns, including cybersecurity risks raised by Anthropic’s new Mythos model.

In a statement, the center said its new deals with Microsoft, xAI and Google DeepMind will “support information-sharing, driving voluntary product improvements and ensuring a clear understanding in government of AI capabilities and the state of international AI competition.” The agency framed the partnerships as a way to keep federal officials informed about the pace of AI development both at home and abroad.

Spokespeople for Microsoft and Google DeepMind pointed to social media posts from company executives confirming their participation. Natasha Crampton, head of Microsoft’s Office of Responsible AI, noted that the company has also signed a similar agreement with the United Kingdom’s AI Security Institute, reflecting a growing international dimension to AI safety oversight.