
Tech

Mixture-of-Experts (MoE) models enable sparse expert activation, meaning that only a subset of the model’s parameters is used during each inference.

2 min read

Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.

★ Tier-1 Source


However, to translate this sparsity into practical performance, an expert caching mechanism is required.
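To make the sparsity concrete, here is a minimal sketch (not code from the article) of a generic top-k MoE router: the per-token expert indices it emits are exactly the requests an expert-weight cache would have to serve. The expert count, the value of k, and the shapes are illustrative assumptions chosen only to roughly resemble an OLMoE-scale layer.

```python
# Illustrative sketch only (not from the article): a generic top-k MoE router.
# Each token activates just k of the E experts, so only those experts' weights
# are needed for the forward pass -- this is the access stream an expert cache serves.
import numpy as np

def route_tokens(router_logits: np.ndarray, k: int = 8):
    """Return per-token expert indices and gate weights for a top-k router."""
    # router_logits: [num_tokens, num_experts]
    topk_idx = np.argsort(router_logits, axis=-1)[:, -k:]            # experts to activate
    shifted = router_logits - router_logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    gates = np.take_along_axis(probs, topk_idx, axis=-1)
    return topk_idx, gates / gates.sum(axis=-1, keepdims=True)       # renormalize over top-k

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 64))        # 4 tokens routed over 64 experts (assumed sizes)
experts, gates = route_tokens(logits)
print(experts)  # only 8 of 64 expert weight sets are touched per token
```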


Summary

Authors: Duc Hoang, Ajay Jaiswal, Mohammad Samragh Razlighi, and Minsik Cho. Mixture-of-Experts (MoE) models enable sparse expert activation, meaning that only a subset of the model’s parameters is used during each inference. The authors' experiments reveal that MoE expert access is not consistent with temporal-locality assumptions (e.g., LRU, LFU). Building on this observation, their caching approach achieves over 88% hit rates and up to a 34.7% time-to-first-token (TTFT) reduction on OLMoE with only 5% cache capacity (about 0.6 GB of VRAM).
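For readers unfamiliar with the baselines mentioned above, the sketch below shows a hypothetical VRAM-resident expert cache with LRU eviction and hit-rate accounting. The article's finding is that such temporal-locality policies fit MoE expert access poorly, so treat this as the kind of baseline the result is measured against, not the authors' method; the class name, capacity, and access trace are made up for illustration.

```python
# Illustrative sketch only (not the authors' method): an LRU expert cache
# with hit-rate accounting, i.e. the temporal-locality baseline the article
# argues is a poor fit for MoE expert access.
from collections import OrderedDict

class LRUExpertCache:
    def __init__(self, capacity: int):
        self.capacity = capacity              # number of experts that fit in VRAM
        self.cache = OrderedDict()            # expert_id -> stand-in for expert weights
        self.hits = 0
        self.misses = 0

    def access(self, expert_id: int) -> None:
        if expert_id in self.cache:
            self.cache.move_to_end(expert_id) # refresh recency on a hit
            self.hits += 1
            return
        self.misses += 1                      # a miss would trigger a host->GPU weight copy
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)    # evict the least-recently-used expert
        self.cache[expert_id] = None          # stand-in for the loaded expert weights

    @property
    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Toy numbers, not the paper's setup: ~5% of a 64-expert layer is about 3 cached experts.
cache = LRUExpertCache(capacity=3)
for expert_id in [7, 12, 7, 3, 41, 7, 12, 3]:   # made-up expert access trace
    cache.access(expert_id)
print(f"hit rate: {cache.hit_rate:.2f}")
```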

Read full article at Apple Machine Learning →

#Apple