AI

Week one of the Musk v. Altman trial: What it was like in the room

A high-stakes showdown between two of tech's most powerful figures opened in an Oakland courtroom, where Elon Musk is suing OpenAI and Sam Altman, alleging that the millions he poured into the lab in its early nonprofit days were misused as the company moved toward a for-profit structure. The contentious trial may help redefine the boundaries of AI research and corporate accountability, and its outcome will have far-reaching implications for the AI industry. AI-assisted, human-reviewed.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Two of the most powerful people in AI—Sam Altman and Elon Musk—began their face-off in court in Oakland, California, last week. Musk is suing OpenAI, alleging that the millions he spent to…

Similar Articles

More articles like this

AI 2 min

DeepClaude Lets You Run Claude Code With DeepSeek's Brain for 17x Cheaper - Decrypt

DeepClaude, a new service, lets developers run Anthropic's Claude Code tooling with a DeepSeek model as the underlying "brain," cutting costs roughly 17-fold compared with calling Anthropic's models directly. By routing the coding assistant's requests to DeepSeek's far cheaper API, the setup makes high-performance agentic coding accessible to a broader range of developers and enterprises. AI-assisted, human-reviewed.
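
The pitch here is essentially backend substitution: DeepSeek publishes an OpenAI-compatible API, so a proxy like DeepClaude can forward a coding assistant's requests there instead of to a pricier provider. As a rough, hypothetical illustration of that pattern (not DeepClaude's actual implementation), the sketch below points the standard OpenAI Python client at DeepSeek's endpoint; the environment variable name and prompt are assumptions.

```python
# Minimal sketch (not DeepClaude itself): DeepSeek exposes an OpenAI-compatible
# API, so a proxy can forward a coding assistant's requests to it instead of a
# pricier backend. Endpoint and model name follow DeepSeek's public docs; the
# env var and prompt are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],   # assumed env var name
    base_url="https://api.deepseek.com",      # OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```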

AI 1 min

Tailoring AI solutions for health care needs

Healthcare AI’s hype cycle is colliding with clinical reality: vendors now ship narrow, HIPAA-compliant microservices—think Nuance DAX for ambient scribing or Viz.ai’s stroke-detection inference engines—that plug directly into Epic and Cerner workflows, cutting documentation time by 30-40% while sidestepping the regulatory quicksand of autonomous diagnosis. The real shift isn’t grand transformation but granular integration, where latency under 200 ms and FHIR-native APIs decide adoption over lofty promises. AI-assisted, human-reviewed.
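
To make the "FHIR-native APIs and sub-200 ms latency" criterion concrete, here is a minimal, hypothetical sketch of the kind of call such an integration would make: a Patient read from a public FHIR R4 sandbox, with the round trip checked against the latency budget the summary cites. The server and the timing check are illustrative assumptions, not any vendor's actual code.

```python
# Rough sketch of the "FHIR-native, low-latency" integration point described
# above: read a Patient bundle from a public FHIR R4 test server and check the
# round trip against a 200 ms budget. The server is HAPI's public sandbox, not
# a clinical system; the budget is the figure cited in the summary.
import time
import requests

FHIR_BASE = "https://hapi.fhir.org/baseR4"   # public FHIR R4 test server
LATENCY_BUDGET_S = 0.200

start = time.perf_counter()
resp = requests.get(f"{FHIR_BASE}/Patient", params={"_count": 1}, timeout=5)
elapsed = time.perf_counter() - start

resp.raise_for_status()
bundle = resp.json()
print(f"Fetched {bundle.get('resourceType')} in {elapsed * 1000:.0f} ms "
      f"({'within' if elapsed <= LATENCY_BUDGET_S else 'over'} the 200 ms budget)")
```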

AI 4 min

Google’s Next-Gen Gemini Flash Spotted in Stealth Testing

A previously unannounced Google Gemini model is undergoing stealth testing on LM Arena, delivering output quality far beyond the current Gemini 3 Flash. Observers speculate it could be Gemini 3.1 Flash, 3.2 Flash, or even 3.5 Flash, with performance closer to Gemini 3.1 Pro. The discovery aligns with Google’s pattern of pre-release testing and comes weeks before Google I/O 2026, where major AI updates are expected.

AI 3 min

Build a 5-Minute Weekly Trend Scanner with Replit and AI

A Replit-based AI agent now lets non-developers scrape trending AI topics and e-commerce products from six sources in under five minutes per week. The tool aggregates growth data, ranks findings by niche, and exports ready-to-use briefs to Notion. The setup requires only one prompt and runs automatically every Sunday, delivering a prioritized list by Monday morning.
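The core of such an agent is the "aggregate, rank, brief" step. The sketch below is an illustrative stand-in, not the article's actual agent: the trend data is hard-coded sample input, and a real setup would scrape its six sources and push the resulting brief to Notion instead of printing it.

```python
# Illustrative sketch of the "aggregate, rank, brief" step: group scraped
# trend items by niche, sort each group by growth, and emit a short brief.
# The data below is made-up sample input.
from collections import defaultdict

trends = [
    {"topic": "AI voice cloning apps", "niche": "ai", "growth_pct": 140},
    {"topic": "Compression packing cubes", "niche": "ecommerce", "growth_pct": 95},
    {"topic": "Local LLM fine-tuning", "niche": "ai", "growth_pct": 210},
    {"topic": "Magnetic phone mounts", "niche": "ecommerce", "growth_pct": 60},
]

# Group by niche, then rank each group by week-over-week growth.
by_niche = defaultdict(list)
for item in trends:
    by_niche[item["niche"]].append(item)

for niche, items in by_niche.items():
    items.sort(key=lambda i: i["growth_pct"], reverse=True)
    print(f"{niche.title()} trends")
    for rank, item in enumerate(items, start=1):
        print(f"  {rank}. {item['topic']} (+{item['growth_pct']}% growth)")
```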

AI 3 min

2026’s AI-Powered E-Commerce Stack: 17 Tools Replacing Agencies and Freelancers

The 2026 e-commerce toolkit has flipped, replacing Google Docs, GitHub, and CapCut with AI-native alternatives. A curated list of 17 platforms—including Notion AI, Cursor, and Suno—now handles writing, coding, design, video editing, and voiceovers without agencies or freelancers. These tools aren’t just novelties; they deliver measurable time savings for teams managing product pages, reels, and ad campaigns.

AI 4 min

Running Llama 70B Offline: How a MacBook Handled 11 Hours of AI Work

A recent demonstration shows that running a 70-billion-parameter AI model locally on consumer hardware is no longer just a proof of concept. A developer used a MacBook Pro M4 with 64GB RAM to process client work for an entire 11-hour flight, achieving 71 tokens per second with a quantized Llama 3.3 70B model. The setup included checkpointing and task queuing, proving that local AI can handle real-world workloads without cloud dependency.
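
The checkpointing and task queuing mentioned above are what let an 11-hour offline batch survive interruptions. Below is a minimal, hypothetical sketch of that pattern under assumed names: results are written to disk after every task so a restart resumes where it left off, and run_local_model() is a placeholder for whatever local inference stack (e.g. a quantized Llama 3.3 70B) actually generates the text.

```python
# Sketch of a checkpointed task queue: process tasks in order and persist
# results after each one, so a crash or pause resumes from the last checkpoint.
# run_local_model() is a placeholder, not the article's actual inference code.
import json
from pathlib import Path

QUEUE = ["Summarize meeting notes", "Draft client email", "Refactor report outline"]
CHECKPOINT = Path("checkpoint.json")

def run_local_model(prompt: str) -> str:
    # Placeholder for a local LLM call (assumed, not from the article).
    return f"[model output for: {prompt}]"

# Load any results already completed on a previous run.
done = json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else {}

for task in QUEUE:
    if task in done:          # already checkpointed, skip on resume
        continue
    done[task] = run_local_model(task)
    CHECKPOINT.write_text(json.dumps(done, indent=2))   # checkpoint after each task

print(f"{len(done)}/{len(QUEUE)} tasks complete")
```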