Trump administration suddenly embraces AI oversight ideas it once rejected

Fortune

The US federal government has introduced a series of AI oversight measures that reverse or refine earlier positions, signaling a pragmatic approach to regulation without new legislation.

Overview

The proposals include voluntary safety commitments from leading AI developers, a draft executive order on AI safety standards, and expanded use of existing authorities such as the Defense Production Act. These steps aim to address risks in areas like national security, bias, and misinformation while avoiding the need for congressional approval.

Key measures

  1. Voluntary commitments: Eight major AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI, and Mistral—agreed to internal and external safety testing before public release of new models. The commitments also include watermarking AI-generated content and sharing risk-mitigation strategies with governments.
  2. Draft executive order: The order would require federal agencies to adopt AI safety standards, including red-team testing for high-risk systems. It also directs agencies to use existing procurement and regulatory powers to enforce compliance.
  3. Defense Production Act: The administration has invoked this authority to compel AI developers to disclose safety test results for models posing national security risks.
  4. Bipartisan working groups: Congressional committees are exploring frameworks for AI regulation, though no comprehensive bill has advanced.

Tradeoffs

  • Speed vs. enforceability: Voluntary commitments and executive actions can be implemented quickly but lack the legal weight of legislation.
  • Innovation vs. safety: Critics argue that pre-release testing could slow development, while proponents counter that it reduces systemic risks.
  • Transparency vs. competitiveness: Mandatory disclosures may help regulators but could disadvantage US firms in global markets where such requirements are absent.

How organizations should respond

Organizations developing or deploying AI systems should:

  • Align internal testing protocols with the voluntary commitments, even if not legally bound.
  • Prepare for potential disclosure requirements under the Defense Production Act if working with high-risk models.
  • Monitor agency-specific guidelines, as federal procurement rules may soon include AI safety clauses.

Bottom line

The shift reflects a tactical use of executive authority to fill gaps in AI governance. While not a substitute for legislation, the measures create near-term guardrails and set expectations for future regulation. Companies should treat these as de facto standards rather than optional best practices.

Similar Articles

OpenAI, PwC partner to build AI agents for CFOs

CFO Dive

Elon Musk's SpaceX Will Help Power Anthropic's Claude in Surprise AI Deal

Decrypt

Probe finds ChatGPT's model training violated Canada's federal, provincial privacy laws

IAPP

vLLM V0 to V1: Correctness Before Corrections in RL

The vLLM project’s shift from its v0 to v1 engine prioritizes mathematical fidelity over speed in reinforcement learning, forcing developers to rebuild inference pipelines around deterministic sampling and exact gradient propagation. The update scraps v0’s probabilistic approximations—long a crutch for real-time agents—in favor of verifiable convergence, a move that could stall near-term deployments but may prevent costly drift in long-horizon tasks like autonomous coding or multi-step reasoning. Expect agent frameworks like LangChain and LlamaIndex to scramble for compatibility patches.
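
For a rough sense of what deterministic sampling looks like in practice, the sketch below pins temperature and seed in vLLM's Python API. The model name and parameter values are illustrative assumptions, not details from the article.

```python
# A minimal sketch of deterministic decoding with vLLM's Python API.
# The model is a small placeholder; any vLLM-supported checkpoint works.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")

# temperature=0.0 selects greedy decoding; a fixed seed makes any
# remaining stochastic choices reproducible across runs.
params = SamplingParams(temperature=0.0, seed=42, max_tokens=64)

outputs = llm.generate(["Summarize the v0-to-v1 change in one line."], params)
print(outputs[0].outputs[0].text)
```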

Etsy debuts ChatGPT app and Canva mockup bundle

MSN

NVIDIA Spectrum-X — the Open, AI-Native Ethernet Fabric — Sets the Standard for Gigascale AI, Now With MRC

NVIDIA’s Spectrum-X Ethernet fabric—now shipping with Multi-Rate Caching (MRC)—is quietly becoming the de facto backbone for gigascale AI clusters, slashing tail latency by 30% while preserving full line-rate throughput. By fusing RoCEv2 with adaptive congestion control and hardware-accelerated telemetry, it lets hyperscalers and cloud builders run distributed training jobs across 32,000 GPUs without the jitter that cripples InfiniBand alternatives. The open, AI-native stack is already live in Microsoft Azure and Oracle Cloud, setting a new bar for what “good enough” networking looks like in the trillion-parameter era.
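
For scale, a quick illustration of what a 30% tail-latency cut means numerically. The latency samples below are synthetic; p99 denotes the 99th-percentile completion time, the figure that usually gates distributed training throughput.

```python
# Illustrative only: quantifying a "30% lower tail latency" claim.
# Latency samples are synthetic; no real Spectrum-X data is used.
import random
import statistics

random.seed(0)
baseline_us = [random.lognormvariate(3.0, 0.6) for _ in range(10_000)]

def p99(samples):
    """99th-percentile latency: the tail that straggler GPUs wait on."""
    return statistics.quantiles(samples, n=100)[98]

base = p99(baseline_us)
improved = base * 0.70  # the claimed 30% reduction
print(f"baseline p99: {base:.1f} us -> improved p99: {improved:.1f} us")
```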