AI

EMO: Pretraining mixture of experts for emergent modularity

EMO is a pretraining method that uses a mixture of experts to induce emergent modularity in neural networks. By pretraining a hierarchical mixture of experts, EMO discovers task-specific sub-networks that adapt to changing input distributions, improving the robustness and efficiency of downstream models. This modularization has implications for scalable and generalizable AI systems.


Overview

EMO is a 1B-active, 14B-total-parameter MoE trained on 1 trillion tokens. It supports selective expert use: a small subset of experts can be selected for a given task while retaining near-full-model performance. When all experts are used together, EMO remains a strong general-purpose model.
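As a back-of-the-envelope illustration of the memory side of selective expert use, the fraction of weights that must be loaded scales roughly with the fraction of experts kept. This rough sketch treats all 14B parameters as expert weights and ignores shared layers and the router, which is an approximation, not a released figure:

```python
# Rough memory estimate for selective expert use. Treating all 14B
# parameters as expert weights is a simplifying assumption; shared
# layers and the router are ignored.
total_params_b = 14.0   # billions of parameters, full model
fraction = 0.125        # keep 12.5% of the experts

loaded_b = fraction * total_params_b
print(f"~{loaded_b:.2f}B of {total_params_b:.0f}B parameters loaded")
# → ~1.75B of 14B parameters loaded
```

The real saving depends on how large the shared (non-expert) parameters are, but the scaling with the expert fraction is the point.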

What it does

In an MoE, a small network called the router decides which experts each token activates. EMO's key observation is that tokens from the same document usually come from the same domain. The router therefore learns to restrict each document's tokens to choosing their active experts from a shared, document-level expert pool, encouraging groups of experts to develop domain specialization.
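The routing constraint can be sketched as follows. The pool-selection rule used here (taking the experts with the highest summed router score across the document) and the function name `route_document` are illustrative assumptions, not EMO's published mechanism:

```python
import numpy as np

def route_document(token_logits, pool_size, top_k):
    """Sketch of document-pooled routing (hypothetical implementation).

    token_logits: (num_tokens, num_experts) router scores for one document.
    First pick a shared pool of pool_size experts for the whole document,
    then let each token pick its top_k experts within that pool.
    """
    # Document-level pool: experts with the highest summed score across
    # the document's tokens (one plausible choice, not the paper's).
    doc_scores = token_logits.sum(axis=0)
    pool = np.argsort(doc_scores)[-pool_size:]
    # Each token routes only within the shared pool.
    pool_logits = token_logits[:, pool]
    chosen = np.argsort(pool_logits, axis=1)[:, -top_k:]
    return pool[chosen]  # (num_tokens, top_k) global expert indices

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 64))     # 8 tokens, 64 experts
experts = route_document(logits, pool_size=8, top_k=2)
assert experts.shape == (8, 2)
assert len(np.unique(experts)) <= 8   # every choice comes from one shared pool
```

The key property is the last assertion: however the 8 tokens route, they collectively touch at most `pool_size` experts, which is what lets groups of experts specialize per domain.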

Tradeoffs

The document pool size controls how restrictive the modularity constraint is. A smaller pool forces tokens in the same document to share a tighter set of experts, encouraging stronger modularity; a larger pool gives the model more flexibility but weakens the constraint. EMO's performance is comparable to a standard MoE model, and it remains robust under selective expert use. When only 12.5% of the experts are used, EMO loses only about 3% absolute performance across all benchmarks.
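Selective expert use itself can be simulated by masking the router's logits so that only a chosen subset of experts can fire. This is a minimal sketch under that framing; the names (`select_experts`, `allowed`) are hypothetical:

```python
import numpy as np

def select_experts(router_logits, allowed, top_k):
    """Restrict routing to a chosen subset of experts (a sketch;
    `allowed` would be the expert subset kept for the target domain)."""
    masked = np.where(allowed, router_logits, -np.inf)
    return np.argsort(masked, axis=1)[:, -top_k:]

num_experts = 64
# Keep 12.5% of the experts, matching the selective-use setting above.
allowed = np.zeros(num_experts, dtype=bool)
allowed[:num_experts // 8] = True

rng = np.random.default_rng(1)
logits = rng.normal(size=(4, num_experts))      # 4 tokens
picked = select_experts(logits, allowed, top_k=2)
assert picked.max() < num_experts // 8          # only allowed experts fire
```

In a deployment, the disallowed experts' weights would simply never be loaded, which is where the memory savings come from.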

EMO's expert subsets specialize in semantically meaningful domains, such as Health, Medical & Wellness, News Reporting, US Politics & Elections, and Film & Music. This is in contrast to standard MoE training, which produces clusters of surface-level or syntactic features. The EMO-trained model, a matched standard-MoE baseline, and the training code are being released to help the community study emergent modularity in MoEs.

In practice, EMO can be used to improve the memory-accuracy tradeoff in large sparse models. The model's modular structure allows for flexible deployment, and the expert subsets can be composed to create new models. However, there are still many questions to be answered, such as how to better select and compose expert subsets, how to update modules without disrupting the full model, and how to use modular structure for better interpretability and control.

In conclusion, EMO is a significant step toward making large sparse models more modular, and its release should help the community build modular language models that are easier to deploy, adapt, inspect, and compose.

Similar Articles


AI 2 min

CyberSecQwen-4B: Why Defensive Cyber Needs Small, Specialized, Locally-Runnable Models

A 4-billion-parameter model, CyberSecQwen-4B, is proving that on-premises threat detection no longer demands GPU clusters—its sub-100 ms inference latency on a single CPU core lets SOC teams run real-time behavioral analysis without cloud dependency or telemetry leaks. By fine-tuning on MITRE ATT&CK sequences instead of generic text, it achieves 92% precision on zero-day TTPs while fitting inside air-gapped networks.

AI 3 min

Train Claude to Remember You

Claude, an AI coding assistant, can be frustrating to use because it forgets user preferences and corrections after each session. A simple prompt can be used to make Claude write its own notes, allowing it to remember user preferences and improve over time. This guide explains how to use the prompt and save the output as a feedback file. By loading the feedback file into a Claude Project, users can create a personalized AI assistant that remembers their preferences and corrections. With regular use, Claude can become a valuable tool that feels like a personal assistant, rather than a generic AI.

AI 1 min

See what happens when creative legends use AI to make ads for small businesses

Ad veterans from Wieden+Kennedy and Droga5 are weaponizing generative AI to craft pro-bono campaigns for mom-and-pop shops, compressing weeks of production into days with Midjourney storyboards and ElevenLabs voice clones. The experiment tests whether diffusion models and LLMs can democratize high-end creative without eroding the human spark that sells.

AI 1 min

MedQA: Fine-Tuning a Clinical AI on AMD ROCm

A 70-billion-parameter clinical LLM, fine-tuned on AMD’s MI300X GPUs using ROCm 6.1, now matches or exceeds NVIDIA A100 performance on MedQA benchmarks—delivering 92% accuracy in differential diagnosis while cutting inference latency to 18 ms per token. The shift to open-source hardware stacks could break NVIDIA’s chokehold on medical AI training, slashing cloud costs by up to 40% for health systems.

AI 1 min

Scaling Trusted Access for Cyber with GPT-5.5 and GPT-5.5-Cyber

OpenAI's latest Trusted Access for Cyber upgrade leverages GPT-5.5 and its specialized variant, GPT-5.5-Cyber, to expedite vulnerability research and safeguard high-stakes infrastructure through accelerated threat modeling, targeted exploit discovery, and AI-driven incident response. This strategic move empowers verified defenders to stay one step ahead of emerging threats, bolstering the resilience of critical systems. The integration of large language models and specialized cybersecurity tools marks a significant shift in the fight against cyber attacks.

AI 4 min

From Screenshot to Live Product: How to Build Real AI Websites with Stitch, Claude Code, and Vercel

AI website builders often generate beautiful but non-functional designs. This guide presents a practical workflow combining Google Stitch for design, Claude Code for engineering, and Vercel for deployment. It includes step-by-step setup instructions, a critical verification prompt, and pro tips to ensure your site is a real product, not just a demo.