Ted Hisokawa
Apr 12, 2026 01:37
MiniMax releases M2.7, a 230B-parameter mixture-of-experts model optimized for NVIDIA GPUs, with up to 2.7x throughput gains on Blackwell hardware.
MiniMax has released M2.7, a 230-billion-parameter open-weights AI model designed specifically for autonomous agent workflows, now available across NVIDIA's inference ecosystem, including the company's latest Blackwell Ultra GPUs.
The model represents a significant efficiency play in enterprise AI. Despite its 230B total parameters, M2.7 activates only 10B parameters per token, a 4.3% activation rate achieved through a mixture-of-experts (MoE) architecture with 256 local experts. This keeps inference costs manageable while maintaining the reasoning capacity of a much larger model.
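The activation rate follows directly from the reported figures, as a quick back-of-the-envelope check shows:

```python
# Sanity check of M2.7's sparse activation, using the numbers
# reported in the article (230B total, 10B active per token).
total_params = 230e9
active_params = 10e9

activation_rate = active_params / total_params
print(f"{activation_rate:.1%}")  # 4.3%
```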
Efficiency Numbers on Blackwell
NVIDIA collaborated with open-source communities to optimize M2.7 for production workloads. Two key optimizations, a fused QK RMS Norm kernel and FP8 MoE integration from TensorRT-LLM, delivered substantial throughput improvements on Blackwell Ultra GPUs.
Testing with a 1K/1K input/output sequence-length dataset showed vLLM achieving up to a 2.5x throughput improvement, while SGLang hit 2.7x gains. Both optimizations were implemented within a single month, suggesting further performance headroom exists.
Technical Structure
M2.7 supports a 200K input context length across 62 layers, using multi-head causal self-attention with Rotary Position Embeddings (RoPE). A top-k expert routing mechanism activates only 8 of the 256 experts for any given input, which is how the model maintains low inference costs despite its scale.
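The top-k routing step can be sketched as follows. This is a minimal illustration of the general MoE gating pattern, not MiniMax's actual implementation; the toy hidden size, random router weights, and softmax gating are assumptions for demonstration.

```python
import numpy as np

NUM_EXPERTS = 256  # local experts, per the article
TOP_K = 8          # experts activated per token
HIDDEN = 64        # toy hidden size, illustrative only

rng = np.random.default_rng(0)
router_weights = rng.standard_normal((HIDDEN, NUM_EXPERTS))

def route(token_hidden: np.ndarray):
    """Score all experts for one token; keep only the top-k."""
    logits = token_hidden @ router_weights        # (NUM_EXPERTS,) scores
    top_idx = np.argsort(logits)[-TOP_K:]         # 8 highest-scoring experts
    gates = np.exp(logits[top_idx] - logits[top_idx].max())
    gates /= gates.sum()                          # softmax over the selected 8
    return top_idx, gates

idx, gates = route(rng.standard_normal(HIDDEN))
print(len(idx))  # 8 — only these experts' FFN weights are used for this token
```

Because only the 8 selected experts run their feed-forward computation, per-token compute scales with the active parameters (10B) rather than the full 230B.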
The architecture targets coding challenges and complex agentic tasks: workflows where AI systems must plan, execute, and iterate autonomously rather than respond to single prompts.
Deployment Options
Developers can access M2.7 through multiple channels. NVIDIA's NemoClaw reference stack provides one-click deployment for running autonomous agents with the OpenShell runtime. The model is also available through NVIDIA NIM containerized microservices for on-premises, cloud, or hybrid deployments.
For teams wanting to customize the model, NVIDIA's NeMo AutoModel library supports fine-tuning with published recipes. Reinforcement learning workflows are available through NeMo RL, with sample configurations for 8K and 16K sequence lengths.
Free GPU-accelerated endpoints on build.nvidia.com allow testing before committing to infrastructure. The open weights are also available on Hugging Face for self-hosted deployments.
The release positions MiniMax as a credible alternative to closed models from OpenAI and Anthropic for enterprises building autonomous AI systems, particularly those already invested in NVIDIA infrastructure.
Image source: Shutterstock