{"id":586,"date":"2026-03-21T06:38:13","date_gmt":"2026-03-20T22:38:13","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=586"},"modified":"2026-03-21T06:38:13","modified_gmt":"2026-03-20T22:38:13","slug":"nvidia-releases-nemotron-cascade-2-an-open-30b-moe-with-3b-active-parameters-delivering-better-reasoning-and-strong-agentic-capabilities","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=586","title":{"rendered":"NVIDIA Releases Nemotron-Cascade 2: An Open 30B MoE with 3B Active Parameters, Delivering Better Reasoning and Strong Agentic Capabilities"},"content":{"rendered":"<p>NVIDIA has announced the release of <strong>Nemotron-Cascade 2<\/strong>, an open-weight <strong>30B Mixture-of-Experts (MoE)<\/strong> model with <strong>3B activated parameters<\/strong>. The model focuses on maximizing \u2018intelligence density,\u2019 delivering advanced reasoning capabilities at a fraction of the parameter scale used by frontier models. Nemotron-Cascade 2 is the second open-weight LLM to achieve <strong>Gold Medal-level performance<\/strong> in the 2025 International Mathematical Olympiad (IMO), the International Olympiad in Informatics (IOI), and the ICPC World Finals.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1370\" height=\"708\" data-attachment-id=\"78490\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/03\/20\/nvidia-releases-nemotron-cascade-2-an-open-30b-moe-with-3b-active-parameters-delivering-better-reasoning-and-strong-agentic-capabilities\/screenshot-2026-03-20-at-3-11-01-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.11.01-PM-1.png\" data-orig-size=\"1370,708\" data-comments-opened=\"1\" 
data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-03-20 at 3.11.01\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.11.01-PM-1-300x155.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.11.01-PM-1-1024x529.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.11.01-PM-1.png\" alt=\"\" class=\"wp-image-78490\" \/><figcaption class=\"wp-element-caption\">https:\/\/research.nvidia.com\/labs\/nemotron\/files\/Nemotron-Cascade-2.pdf<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Targeted Performance and Strategic Trade-offs<\/strong><\/h3>\n<p>The primary value proposition of Nemotron-Cascade 2 is its specialized performance in mathematical reasoning, coding, alignment, and instruction following. While it achieves state-of-the-art results in these reasoning-intensive domains, it is not a \u2018blanket win\u2019 across all benchmarks.<\/p>\n<p>The model excels in several targeted categories compared to the recently released <strong>Qwen3.5-35B-A3B<\/strong> (February 2026) and the larger <strong>Nemotron-3-Super-120B-A12B<\/strong>:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Mathematical Reasoning:<\/strong> Outperforms Qwen3.5-35B-A3B on <strong>AIME 2025<\/strong> (92.4 vs. 91.9) and <strong>HMMT Feb25<\/strong> (94.6 vs. 89.0).<\/li>\n<li><strong>Coding:<\/strong> Leads on <strong>LiveCodeBench v6<\/strong> (87.2 vs. 74.6) and <strong>IOI 2025<\/strong> (439.28 vs. 
348.6+).<\/li>\n<li><strong>Alignment and Instruction Following:<\/strong> Scores significantly higher on <strong>ArenaHard v2<\/strong> (83.5 vs. 65.4+) and <strong>IFBench<\/strong> (82.9 vs. 70.2).<\/li>\n<\/ul>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1268\" height=\"1408\" data-attachment-id=\"78492\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/03\/20\/nvidia-releases-nemotron-cascade-2-an-open-30b-moe-with-3b-active-parameters-delivering-better-reasoning-and-strong-agentic-capabilities\/screenshot-2026-03-20-at-3-11-46-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.11.46-PM-1.png\" data-orig-size=\"1268,1408\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-03-20 at 3.11.46\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.11.46-PM-1-270x300.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.11.46-PM-1-922x1024.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.11.46-PM-1.png\" alt=\"\" class=\"wp-image-78492\" \/><figcaption class=\"wp-element-caption\">https:\/\/research.nvidia.com\/labs\/nemotron\/files\/Nemotron-Cascade-2.pdf<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Technical Architecture: Cascade RL and Multi-domain On-Policy Distillation<\/strong> (<strong>MOPD<\/strong>)<\/h3>\n<p>The model\u2019s reasoning capabilities stem from its post-training pipeline, starting from the 
<strong>Nemotron-3-Nano-30B-A3B-Base<\/strong> model.<\/p>\n<h4 class=\"wp-block-heading\"><strong>1. Supervised Fine-Tuning (SFT)<\/strong><\/h4>\n<p>During SFT, the NVIDIA research team used a carefully curated dataset in which samples were packed into sequences of up to <strong>256K tokens<\/strong>. <strong>The dataset included:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li><strong>1.9M Python reasoning traces<\/strong> and 1.3M Python tool-calling samples for competitive coding.<\/li>\n<li><strong>816K samples<\/strong> for mathematical natural-language proofs.<\/li>\n<li>A specialized <strong>Software Engineering (SWE) blend<\/strong> consisting of 125K agentic and 389K agentless samples.<\/li>\n<\/ul>\n<h4 class=\"wp-block-heading\"><strong>2. Cascade Reinforcement Learning<\/strong><\/h4>\n<p>Following SFT, the model underwent <strong>Cascade RL<\/strong>, which applies sequential, domain-wise training. This prevents catastrophic forgetting by allowing hyperparameters to be tailored to specific domains without destabilizing others. 
The pipeline includes stages for instruction-following (IF-RL), multi-domain RL, RLHF, long-context RL, and specialized Code and SWE RL.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1466\" height=\"544\" data-attachment-id=\"78495\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/03\/20\/nvidia-releases-nemotron-cascade-2-an-open-30b-moe-with-3b-active-parameters-delivering-better-reasoning-and-strong-agentic-capabilities\/screenshot-2026-03-20-at-3-13-06-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.13.06-PM-1.png\" data-orig-size=\"1466,544\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-03-20 at 3.13.06\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.13.06-PM-1-300x111.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.13.06-PM-1-1024x380.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-3.13.06-PM-1.png\" alt=\"\" class=\"wp-image-78495\" \/><figcaption class=\"wp-element-caption\">https:\/\/research.nvidia.com\/labs\/nemotron\/files\/Nemotron-Cascade-2.pdf<\/figcaption><\/figure>\n<\/div>\n<h4 class=\"wp-block-heading\"><strong>3. Multi-Domain On-Policy Distillation (MOPD)<\/strong><\/h4>\n<p>A critical innovation in Nemotron-Cascade 2 is the integration of <strong>MOPD<\/strong> during the Cascade RL process. 
MOPD uses the best-performing intermediate \u2018teacher\u2019 models, already derived from the same SFT initialization, to provide a dense token-level distillation advantage. <strong>This advantage is defined mathematically as:<\/strong><\/p>\n<div class=\"wp-block-mathml-mathmlblock\">$$a_{t}^{\\mathrm{MOPD}}=\\log \\pi^{\\mathrm{domain}_{t}}(y_{t}|s_{t})-\\log \\pi^{\\mathrm{train}}(y_{t}|s_{t})$$\n<\/div>\n<p>The research team found that MOPD is substantially more sample-efficient than sequence-level reward algorithms such as <strong>Group Relative Policy Optimization (GRPO)<\/strong>. For instance, on <strong>AIME25<\/strong>, MOPD reached teacher-level performance (92.0) within 30 steps, while GRPO reached only 91.0 in the same number of steps.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Inference Features and Agentic Interaction<\/strong><\/h3>\n<p><strong>Nemotron-Cascade 2 supports two primary operating modes through its chat template:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Thinking Mode:<\/strong> Initiated by a single <code>&lt;think&gt;<\/code> token, followed by a newline. This activates deep reasoning for complex math and code tasks.<\/li>\n<li><strong>Non-Thinking Mode:<\/strong> Activated by prepending an empty <code>&lt;think&gt;&lt;\/think&gt;<\/code> block for more efficient, direct responses.<\/li>\n<\/ul>\n<p>For agentic tasks, the model uses a structured tool-calling protocol within the system prompt. 
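<\/p>\n<p>For illustration, the mode toggle described above might look as follows at the start of the model\u2019s response (a hypothetical sketch based on the token names reported here, not NVIDIA\u2019s exact chat template):<\/p>\n<pre class=\"wp-block-code\"><code>&lt;think&gt;\n... chain-of-thought reasoning ...\n&lt;\/think&gt;\n... final answer ...        (thinking mode)\n\n&lt;think&gt;&lt;\/think&gt;\n... direct answer ...       (non-thinking mode)<\/code><\/pre>\n<p>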
Available tools are listed within <code>&lt;tools&gt;<\/code> tags, and the model is instructed to wrap tool calls in <code>&lt;tool_call&gt;<\/code> tags to ensure verifiable execution feedback.<\/p>\n<p>By focusing on \u2018intelligence density,\u2019 Nemotron-Cascade 2 demonstrates that specialized reasoning capabilities once thought to be the exclusive domain of frontier-scale models are achievable at a 30B scale through domain-specific reinforcement learning.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/research.nvidia.com\/labs\/nemotron\/files\/Nemotron-Cascade-2.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Paper<\/a><\/strong> and the <strong><a href=\"https:\/\/huggingface.co\/collections\/nvidia\/nemotron-cascade-2\" target=\"_blank\" rel=\"noreferrer noopener\">Model on HF<\/a><\/strong>.<\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/03\/20\/nvidia-releases-nemotron-cascade-2-an-open-30b-moe-with-3b-active-parameters-delivering-better-reasoning-and-strong-agentic-capabilities\/\">NVIDIA Releases Nemotron-Cascade 2: An Open 30B MoE with 3B Active Parameters, Delivering Better Reasoning and Strong Agentic Capabilities<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>NVIDIA has announced the relea&hellip;<\/p>\n","protected":false},"author":1,"featured_media":587,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-586","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/586","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=586"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/586\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/587"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=586"}],"wp:
term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=586"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=586"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}