{"id":468,"date":"2026-02-25T16:37:55","date_gmt":"2026-02-25T08:37:55","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=468"},"modified":"2026-02-25T16:37:55","modified_gmt":"2026-02-25T08:37:55","slug":"liquid-ais-new-lfm2-24b-a2b-hybrid-architecture-blends-attention-with-convolutions-to-solve-the-scaling-bottlenecks-of-modern-llms","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=468","title":{"rendered":"Liquid AI\u2019s New LFM2-24B-A2B Hybrid Architecture Blends Attention with Convolutions to Solve the Scaling Bottlenecks of Modern LLMs"},"content":{"rendered":"<p>The generative AI race has long been a game of \u2018bigger is better.\u2019 But as the industry hits the limits of power consumption and memory bottlenecks, the conversation is shifting from raw parameter counts to architectural efficiency. The Liquid AI team is leading this charge with the release of <strong>LFM2-24B-A2B<\/strong>, a 24-billion-parameter model that redefines what we should expect from edge-capable AI.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"1196\" data-attachment-id=\"78095\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/02\/25\/liquid-ais-new-lfm2-24b-a2b-hybrid-architecture-blends-attention-with-convolutions-to-solve-the-scaling-bottlenecks-of-modern-llms\/screenshot-2026-02-25-at-12-30-37-am-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-25-at-12.30.37-AM-1.png\" data-orig-size=\"1400,1196\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-02-25 at 12.30.37\u202fAM\" data-image-description=\"\" data-image-caption=\"\" 
data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-25-at-12.30.37-AM-1-300x256.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-25-at-12.30.37-AM-1-1024x875.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-25-at-12.30.37-AM-1.png\" alt=\"\" class=\"wp-image-78095\" \/><figcaption class=\"wp-element-caption\">https:\/\/www.liquid.ai\/blog\/lfm2-24b-a2b<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>The \u2018A2B\u2019 Architecture: A 1:3 Ratio for Efficiency<\/strong><\/h3>\n<p>The \u2018A2B\u2019 in the model\u2019s name denotes its <strong>active parameter budget<\/strong>: roughly 2 billion of the 24 billion total parameters are active per token, following the naming convention of sparse models such as Qwen3-30B-A3B. In a traditional Transformer, every layer uses softmax attention, which scales quadratically (O(N<sup>2<\/sup>)) with sequence length. This leads to massive KV (Key-Value) caches that devour VRAM.<\/p>\n<p>The Liquid AI team bypasses this with a hybrid structure: the bulk of the network consists of efficient <strong>gated short convolution blocks<\/strong>, while the remaining layers use <strong>Grouped Query Attention (GQA)<\/strong>.<\/p>\n<p><strong>In the LFM2-24B-A2B configuration, the model uses a 1:3 attention-to-convolution ratio:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Total Layers:<\/strong> 40<\/li>\n<li><strong>Convolution Blocks:<\/strong> 30<\/li>\n<li><strong>Attention Blocks:<\/strong> 10<\/li>\n<\/ul>\n<p>By interspersing a small number of GQA blocks with a majority of gated convolution layers, the model retains the high-resolution retrieval and reasoning of a Transformer while maintaining the fast prefill and low memory footprint of a linear-complexity model.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Sparse MoE: 24B Intelligence on a 2B Budget<\/strong><\/h3>\n<p>The defining feature of LFM2-24B-A2B is its <strong>Mixture of Experts (MoE)<\/strong> design. 
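<\/p>\n<p>In an MoE layer, a lightweight router selects a small subset of \u2018expert\u2019 sub-networks for each token, and only that subset\u2019s weights take part in the computation. The following is a generic, illustrative top-k routing sketch; the article does not describe Liquid AI\u2019s actual expert count, router, or top-k value:<\/p>

```python
# Generic top-k MoE routing sketch (illustrative only; not Liquid AI's
# published router). Each expert here is a plain weight matrix.
import numpy as np

def moe_forward(x, expert_weights, router_weights, top_k=2):
    # Score every expert, keep the top_k, and softmax those scores into gates.
    logits = router_weights @ x
    chosen = np.argsort(logits)[-top_k:]
    gates = np.exp(logits[chosen])
    gates = gates / gates.sum()
    # Only the chosen experts' matrices are multiplied: per-token cost
    # scales with top_k experts, not with the total expert pool.
    return sum(g * (expert_weights[i] @ x) for g, i in zip(gates, chosen))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.standard_normal(d)
experts = rng.standard_normal((n_experts, d, d))
router = rng.standard_normal((n_experts, d))
y = moe_forward(x, experts, router)
print(y.shape)  # (8,)
```

<p>Because only the chosen experts\u2019 weight matrices are read per token, compute and memory traffic scale with the active parameter count rather than the total. 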
While the model contains 24 billion parameters, it only activates <strong>2.3 billion parameters<\/strong> per token.<\/p>\n<p>This is a game-changer for deployment. Because the active parameter path is so lean, the model can fit into <strong>32GB of RAM<\/strong>. This means it can run locally on high-end consumer laptops, desktops with integrated GPUs (iGPUs), and dedicated NPUs without needing a data-center-grade A100. It effectively provides the knowledge density of a 24B model with the inference speed and energy efficiency of a 2B model.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1422\" height=\"1064\" data-attachment-id=\"78093\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/02\/25\/liquid-ais-new-lfm2-24b-a2b-hybrid-architecture-blends-attention-with-convolutions-to-solve-the-scaling-bottlenecks-of-modern-llms\/screenshot-2026-02-25-at-12-29-51-am-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-25-at-12.29.51-AM-1.png\" data-orig-size=\"1422,1064\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-02-25 at 12.29.51\u202fAM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-25-at-12.29.51-AM-1-300x224.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-25-at-12.29.51-AM-1-1024x766.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-25-at-12.29.51-AM-1.png\" alt=\"\" class=\"wp-image-78093\" \/><figcaption 
class=\"wp-element-caption\">https:\/\/www.liquid.ai\/blog\/lfm2-24b-a2b<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Benchmarks: Punching Up<\/strong><\/h3>\n<p>The Liquid AI team reports that the LFM2 family follows predictable, log-linear scaling behavior. Despite its smaller active parameter count, the 24B-A2B model consistently outperforms larger rivals.<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Logic and Reasoning:<\/strong> In tests like <strong>GSM8K<\/strong> and <strong>MATH-500<\/strong>, it rivals dense models twice its size.<\/li>\n<li><strong>Throughput:<\/strong> When benchmarked on a single NVIDIA H100 using <em>vLLM<\/em>, it reached <strong>26.8K total tokens per second<\/strong> at 1,024 concurrent requests, significantly outpacing OpenAI\u2019s <em>gpt-oss-20b<\/em> and <em>Qwen3-30B-A3B<\/em>.<\/li>\n<li><strong>Long Context:<\/strong> The model features a <strong>32k<\/strong> token context window, optimized for privacy-sensitive RAG (Retrieval-Augmented Generation) pipelines and local document analysis.<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>Technical Cheat Sheet<\/strong><\/h3>\n<figure class=\"wp-block-table\">\n<table class=\"has-fixed-layout\">\n<thead>\n<tr>\n<td><strong>Property<\/strong><\/td>\n<td><strong>Specification<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Total Parameters<\/strong><\/td>\n<td>24 Billion<\/td>\n<\/tr>\n<tr>\n<td><strong>Active Parameters<\/strong><\/td>\n<td>2.3 Billion<\/td>\n<\/tr>\n<tr>\n<td><strong>Architecture<\/strong><\/td>\n<td>Hybrid (Gated Conv + GQA)<\/td>\n<\/tr>\n<tr>\n<td><strong>Layers<\/strong><\/td>\n<td>40 (30 Conv \/ 10 Attention)<\/td>\n<\/tr>\n<tr>\n<td><strong>Context Length<\/strong><\/td>\n<td>32,768 Tokens<\/td>\n<\/tr>\n<tr>\n<td><strong>Training Data<\/strong><\/td>\n<td>17 Trillion Tokens<\/td>\n<\/tr>\n<tr>\n<td><strong>License<\/strong><\/td>\n<td>LFM Open License v1.0<\/td>\n<\/tr>\n<tr>\n<td><strong>Native 
Support<\/strong><\/td>\n<td>llama.cpp, vLLM, SGLang, MLX<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<h3 class=\"wp-block-heading\"><strong>Key Takeaways<\/strong><\/h3>\n<ul class=\"wp-block-list\">\n<li><strong>Hybrid \u2018A2B\u2019 Architecture:<\/strong> The model uses a 1:3 ratio of <strong>Grouped Query Attention (GQA)<\/strong> to <strong>Gated Short Convolutions<\/strong>. By utilizing linear-complexity convolution layers for 30 out of 40 layers, the model achieves much faster prefill and decode speeds with a significantly reduced memory footprint compared to traditional all-attention Transformers.<\/li>\n<li><strong>Sparse MoE Efficiency:<\/strong> Despite having <strong>24 billion total parameters<\/strong>, the model activates only <strong>2.3 billion parameters<\/strong> per token. This \u2018Sparse Mixture of Experts\u2019 design allows it to deliver the reasoning depth of a large model while maintaining the inference latency and energy efficiency of a 2B-parameter model.<\/li>\n<li><strong>True Edge Capability:<\/strong> Optimized via hardware-in-the-loop architecture search, the model is designed to fit in <strong>32GB of RAM<\/strong>. This makes it fully deployable on consumer-grade hardware, including laptops with integrated GPUs and NPUs, without requiring expensive data-center infrastructure.<\/li>\n<li><strong>State-of-the-Art Performance:<\/strong> LFM2-24B-A2B outperforms competitors such as <strong>Qwen3-30B-A3B<\/strong> and <strong>OpenAI\u2019s gpt-oss-20b<\/strong> in throughput. 
It sustains approximately <strong>26.8K tokens per second<\/strong> on a single H100, with near-linear scaling and high efficiency in long-context tasks up to its <strong>32k token window<\/strong>.<\/li>\n<\/ul>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/www.liquid.ai\/blog\/lfm2-24b-a2b\" target=\"_blank\" rel=\"noreferrer noopener\">Technical details<\/a><\/strong> and <strong><a href=\"https:\/\/huggingface.co\/LiquidAI\/LFM2-24B-A2B\" target=\"_blank\" rel=\"noreferrer noopener\">Model weights<\/a>.<\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/02\/25\/liquid-ais-new-lfm2-24b-a2b-hybrid-architecture-blends-attention-with-convolutions-to-solve-the-scaling-bottlenecks-of-modern-llms\/\">Liquid AI\u2019s New LFM2-24B-A2B Hybrid Architecture Blends Attention with Convolutions to Solve the Scaling Bottlenecks of Modern LLMs<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>The generative AI race has lon&hellip;<\/p>\n","protected":false},"author":1,"featured_media":469,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-468","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/468","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=468"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/468\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/469"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=468"}],"wp:term":[{
"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=468"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=468"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}