The proposed regime would require developers of frontier AI models to submit their systems for government review before public release, with the goal of identifying potential national security threats posed by the technology.

The discussions, which remain in an early and fluid stage, represent the Trump administration’s most concrete step yet toward regulating the fast-moving AI industry. Officials in the National Security Council and the Office of Science and Technology Policy have been debating the scope of the review, including which models would be subject to the new requirements and which federal agency would oversee the process. The talks come as the White House faces mounting pressure from intelligence agencies and defense officials who warn that unconstrained AI development could enable cyberattacks, biological weapons design, and other catastrophic risks.

Under the emerging framework, developers of the most powerful AI models would be required to share key technical details with the government, including training data sources, model weights, and safety testing results. The vetting process would aim to identify “dual-use” capabilities that could be weaponized by foreign adversaries or non-state actors. Companies that fail to comply could face restrictions on federal contracts or export licenses, though the precise enforcement mechanisms have not been finalized.

The proposed order would build on voluntary commitments that leading AI companies, including OpenAI, Google, and Anthropic, made to the White House last year. Those pledges included conducting safety testing and sharing information with the government, but critics have argued that the voluntary framework lacks teeth. The new executive action would transform those promises into binding obligations, potentially reshaping the competitive landscape for American AI firms.

Industry and Congressional Reaction

The prospect of mandatory government review has drawn sharp reactions from across the technology sector. Some AI executives have privately expressed concern that the vetting process could slow innovation and cede ground to Chinese competitors who face no such restrictions. Others have welcomed the move as a way to establish clear rules of the road and prevent a race to the bottom on safety. Several trade groups have already begun lobbying the White House to ensure that the requirements are narrowly tailored to the most dangerous capabilities.

Congressional leaders have also taken notice, with key committee staffers on both sides of the aisle requesting briefings on the administration’s plans. The executive action would allow the White House to act unilaterally while lawmakers continue to debate broader AI legislation that has stalled in the House and Senate. Some Democratic members have urged the administration to go further, calling for an independent regulatory agency modeled on the Nuclear Regulatory Commission to oversee advanced AI development.

The timeline for any executive order remains uncertain, with internal disagreements persisting over how to define the threshold for “frontier” models and whether to include open-source systems in the review. One official cautioned that the proposal could still be scaled back or abandoned entirely if industry opposition intensifies or if the national security community fails to reach consensus on the specific risks. The White House declined to comment on the deliberations, but a spokesperson said the administration is “actively evaluating all tools to ensure American AI leadership does not come at the expense of national security.”