The US federal government has introduced a series of AI oversight measures that reverse or refine earlier positions, signaling a pragmatic approach to regulation without new legislation.
Overview
The proposals include voluntary safety commitments from leading AI developers, a draft executive order on AI safety standards, and expanded use of existing authorities such as the Defense Production Act. These steps aim to address risks in areas like national security, bias, and misinformation while avoiding the need for congressional approval.
Key measures
- Voluntary commitments: Seven major AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—agreed to internal and external safety testing before the public release of new models. The commitments also include watermarking AI-generated content and sharing risk-mitigation information with governments.
- Draft executive order: The order would require federal agencies to adopt AI safety standards, including red-team testing for high-risk systems. It also directs agencies to use existing procurement and regulatory powers to enforce compliance.
- Defense Production Act: The administration has invoked this authority to compel AI developers to disclose safety test results for models posing national security risks.
- Bipartisan working groups: Congressional committees are exploring frameworks for AI regulation, though no comprehensive bill has advanced.
Tradeoffs
- Speed vs. enforceability: Voluntary commitments and executive actions can be implemented quickly but lack the legal weight of legislation.
- Innovation vs. safety: Critics argue that pre-release testing could slow development, while proponents counter that it reduces systemic risks.
- Transparency vs. competitiveness: Mandatory disclosures may help regulators but could disadvantage US firms in global markets where such requirements are absent.
When to use this framework
Organizations developing or deploying AI systems should:
- Align internal testing protocols with the voluntary commitments, even if not legally bound.
- Prepare for potential disclosure requirements under the Defense Production Act if working with high-risk models.
- Monitor agency-specific guidelines, as federal procurement rules may soon include AI safety clauses.
Bottom line
The shift reflects a tactical use of executive authority to fill gaps in AI governance. While no substitute for legislation, these measures create near-term guardrails and signal the shape of future regulation. Companies should treat them as de facto standards rather than optional best practices.