Exploiting the Experts: Unauthorized Compression in MoE-LLMs

This research exposes critical vulnerabilities in Mixture-of-Experts (MoE) LLMs, where adversaries exploit expert attribution to prune and repurpose models. It deta...

Level: advanced

By Pinaki Prasad Guha Neogi, Ahmad Mohammadshirazi, Dheeraj Kulshrestha, Rajiv Ramnath

Category: research