EMO is a pretraining method that induces emergent modularity in Mixture-of-Experts (MoE) language models. By constraining routing at the document level during pretraining, EMO encourages experts to form domain-specialized sub-networks that can be selected per task, improving the memory-accuracy tradeoff of deployed models while retaining strong full-model performance.
Overview
EMO is a 1B-active, 14B-total-parameter MoE trained on 1 trillion tokens. It supports selective expert use: a small subset of experts can be chosen for a given task while retaining near full-model performance. When all experts are used together, EMO remains a strong general-purpose model.
What it does
In an MoE, a small network called the router decides which experts each token activates. EMO's key observation is that tokens from the same document usually come from the same domain. During pretraining, the router is therefore constrained so that all tokens in a document choose their active experts from a shared, document-level expert pool, which encourages groups of experts to specialize by domain.
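To make the mechanism concrete, here is a minimal sketch of a document-level routing constraint, assuming a PyTorch-style top-k router; the function and argument names are illustrative and not taken from EMO's released code.

```python
import torch
import torch.nn.functional as F

def route_with_document_pool(hidden, router_weight, doc_pool, k=2):
    # hidden: [tokens, d_model] activations for one document
    # router_weight: [n_experts, d_model] router projection
    # doc_pool: LongTensor of expert ids this document's tokens may use
    logits = hidden @ router_weight.t()               # [tokens, n_experts]
    mask = torch.full_like(logits, float("-inf"))
    mask[:, doc_pool] = 0.0                           # only pool experts stay eligible
    gates = F.softmax(logits + mask, dim=-1)          # probability mass only on the pool
    topk_gates, topk_experts = gates.topk(k, dim=-1)  # per-token active experts and weights
    return topk_gates, topk_experts
```

Because every token in the document draws from the same pool, the experts in that pool repeatedly see data from the same domain, which is what drives the specialization.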
Tradeoffs
The document pool size controls how restrictive the modularity constraint is. A smaller pool forces tokens in the same document to share a tighter set of experts, encouraging stronger modularity; a larger pool gives the model more flexibility but weakens the constraint. EMO's performance is comparable to that of a standard MoE baseline, and it remains robust under selective expert use: when only 12.5% of the experts are used, EMO loses only about 3% absolute performance across all benchmarks.
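One plausible way to realize selective expert use is to route a handful of in-domain calibration documents, count how often each expert is chosen, and keep the most-used fraction (e.g. 12.5%). The sketch below assumes that approach; the names are hypothetical, not EMO's released tooling.

```python
import torch

def select_expert_subset(expert_choices, n_experts, keep_frac=0.125):
    # expert_choices: LongTensor of expert ids chosen while routing calibration tokens
    counts = torch.bincount(expert_choices.flatten(), minlength=n_experts)
    n_keep = max(1, int(n_experts * keep_frac))
    return counts.topk(n_keep).indices  # expert ids to retain for this task
```

At inference time, the router can then be masked to this subset in the same way the document pool is masked during training.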
EMO's expert subsets specialize in semantically meaningful domains, such as Health, Medical & Wellness, News Reporting, US Politics & Elections, and Film & Music. By contrast, standard MoE training tends to produce experts that cluster around surface-level or syntactic features rather than semantic domains. The EMO-trained model, a matched standard-MoE baseline, and the training code are being released to help the community study emergent modularity in MoEs.
In practice, EMO can be used to improve the memory-accuracy tradeoff in large sparse models. The model's modular structure allows for flexible deployment, and the expert subsets can be composed to create new models. However, there are still many questions to be answered, such as how to better select and compose expert subsets, how to update modules without disrupting the full model, and how to use modular structure for better interpretability and control.
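As an illustration of that deployment flexibility (again an assumption-laden sketch, not part of the released code), experts outside a chosen subset can be dropped from a checkpoint to cut memory, and subsets selected for different domains can be composed by taking their union:

```python
import torch

def prune_to_subset(expert_weights, keep_ids):
    # expert_weights: dict {expert_id: parameter tensor}; drop everything outside the subset
    keep = set(keep_ids.tolist())
    return {i: w for i, w in expert_weights.items() if i in keep}

def compose_subsets(*subsets):
    # Union of expert-id tensors selected for different tasks or domains
    return torch.unique(torch.cat(subsets))
```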
In conclusion, EMO is a significant step towards making large sparse models more modular, and its release should help the community build language models that are easier to deploy, adapt, inspect, and compose.