Tech · Apple Machine Learning
Mixture-of-Experts (MoE) models enable sparse expert activation, meaning that only a subset of the model’s parameters is used during each inference.
However, to translate this sparsity into practical performance, an expert caching mechanism is required.
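The following minimal sketch is not from the paper; it only illustrates the two ideas above under assumed dimensions (64 experts, top-8 routing, hidden size 2048) and an assumed LRU policy. The names `route`, `ExpertCache`, and `load_fn` are illustrative, not the authors' API.

```python
# Sketch: top-k expert routing (sparse activation) plus a naive VRAM cache of
# expert weights. All sizes, names, and the LRU policy are assumptions.
from collections import OrderedDict
import numpy as np

NUM_EXPERTS, TOP_K, HIDDEN = 64, 8, 2048

def route(token_hidden, router_weights):
    """Pick the top-k experts for one token from router logits."""
    logits = router_weights @ token_hidden           # (NUM_EXPERTS,)
    return np.argsort(logits)[-TOP_K:]               # ids of the active experts

class ExpertCache:
    """Keep only a small fraction of experts resident in VRAM, evicting by LRU."""
    def __init__(self, capacity, load_fn):
        self.capacity, self.load_fn = capacity, load_fn
        self.cache = OrderedDict()                   # expert_id -> weights
        self.hits = self.misses = 0

    def get(self, expert_id):
        if expert_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(expert_id)        # mark as recently used
        else:
            self.misses += 1
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)       # evict least recently used
            self.cache[expert_id] = self.load_fn(expert_id)  # fetch from CPU/disk
        return self.cache[expert_id]
```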
Key facts
- The proposed approach achieves over 88% hit rates and up to 34.7% time-to-first-token (TTFT) reduction on OLMoE with only 5% (0.6 GB) of VRAM cache capacity
- Authors: Duc Hoang, Ajay Jaiswal, Mohammad Samragh Razlighi, Minsik Cho
- Their experiments reveal that MoE expert access is not consistent with the temporal locality assumptions behind standard caching policies (e.g., LRU, LFU); see the sketch after this list
- Paper: SpecMD: A Comprehensive Study on Speculative Expert Prefetching
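One way to probe the temporal-locality claim is to replay a trace of expert activations through an LRU cache and measure the hit rate. The harness below is illustrative only, not the paper's methodology; the synthetic uniform and skewed traces are stand-ins for real router logs.

```python
# Illustrative harness: measure LRU hit rate on expert-access traces.
# If routing is close to uniform (little temporal locality), LRU stays near
# capacity/num_experts; a skewed trace lifts it. Synthetic traces are assumptions.
from collections import OrderedDict
import random

def lru_hit_rate(trace, capacity):
    cache, hits = OrderedDict(), 0
    for expert_id in trace:
        if expert_id in cache:
            hits += 1
            cache.move_to_end(expert_id)
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)            # evict least recently used
            cache[expert_id] = True
    return hits / len(trace)

random.seed(0)
num_experts, capacity = 64, 8                        # cache holds 12.5% of experts
uniform_trace = [random.randrange(num_experts) for _ in range(10_000)]
zipf_weights = [1 / (i + 1) for i in range(num_experts)]
skewed_trace = [random.choices(range(num_experts), weights=zipf_weights)[0]
                for _ in range(10_000)]
print(f"LRU hit rate, uniform routing: {lru_hit_rate(uniform_trace, capacity):.2%}")
print(f"LRU hit rate, skewed routing:  {lru_hit_rate(skewed_trace, capacity):.2%}")
```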
Summary
In "SpecMD: A Comprehensive Study on Speculative Expert Prefetching," Duc Hoang, Ajay Jaiswal, Mohammad Samragh Razlighi, and Minsik Cho study expert caching for Mixture-of-Experts (MoE) models, which enable sparse expert activation: only a subset of the model’s parameters is used during each inference. Their experiments reveal that MoE expert access is not consistent with the temporal locality assumptions behind standard caching policies (e.g., LRU, LFU). Building on this finding, they achieve over 88% hit rates and up to 34.7% time-to-first-token (TTFT) reduction on OLMoE with only 5% (0.6 GB) of VRAM cache capacity.
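As a rough sanity check of the "5% equals roughly 0.6 GB" figure, the arithmetic below uses assumed OLMoE-style dimensions (16 layers, 64 experts per layer, about 6M parameters per expert, bf16 weights); these numbers are assumptions for illustration, not taken from the article.

```python
# Back-of-envelope check: 5% of all expert weights in bf16, under assumed sizes.
experts_per_layer, num_layers = 64, 16
params_per_expert = 3 * 2048 * 1024        # gate/up/down projections (assumed shapes)
bytes_per_param = 2                        # bf16
total_expert_bytes = experts_per_layer * num_layers * params_per_expert * bytes_per_param
cache_bytes = 0.05 * total_expert_bytes    # cache 5% of all expert weights
print(f"~{cache_bytes / 2**30:.2f} GB")    # ≈ 0.60 GB, consistent with the reported budget
```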