An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage an extended “thinking” phase, with alignment guided by Iterated Distillation & Amplification (IDA). It targets coding, STEM, instruction following, and general helpfulness, with stronger multilingual, tool-calling, and reasoning performance than size-equivalent baselines. The model supports long-context use (up to 10M tokens) and standard Transformers workflows. Users can control the reasoning behaviour with the reasoning `enabled` boolean. Learn more in our docs.
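As a sketch of how the reasoning toggle might be set in an API request: the snippet below builds a chat-completions payload with the reasoning boolean enabled. The model slug, endpoint path, and the exact shape of the `reasoning` field are assumptions for illustration; consult the docs for the authoritative schema.

```python
import json

# Hypothetical request payload for a chat-completions endpoint.
# The model slug and the "reasoning" field's shape are assumptions,
# not confirmed by this page.
payload = {
    "model": "deepcogito/cogito-v2-preview-llama-109b-moe",  # assumed slug
    "messages": [
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
    # Toggle the extended "thinking" phase on or off with this boolean.
    "reasoning": {"enabled": True},
}

# Serialize for an HTTP POST body (the actual network call is omitted here).
body = json.dumps(payload)
print(body)
```

With `"enabled": False` (or the field omitted), the model would answer directly instead of producing reasoning tokens first.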
Recent activity on Cogito V2 Preview Llama 109B
Total usage per day on OpenRouter
Prompt: 4.57M · Completion: 87K · Reasoning: 451
Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.