{"id":653,"date":"2026-04-02T14:04:34","date_gmt":"2026-04-02T06:04:34","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=653"},"modified":"2026-04-02T14:04:34","modified_gmt":"2026-04-02T06:04:34","slug":"ibm-releases-granite-4-0-3b-vision-a-new-vision-language-model-for-enterprise-grade-document-data-extraction","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=653","title":{"rendered":"IBM Releases Granite 4.0 3B Vision: A New Vision Language Model for Enterprise Grade Document Data Extraction"},"content":{"rendered":"<p>IBM has announced the release of <strong>Granite 4.0 3B Vision<\/strong>, a vision-language model (VLM) engineered specifically for enterprise-grade document data extraction.<sup><\/sup> Departing from the monolithic approach of larger multimodal models, the 4.0 Vision release is architected as a specialized adapter designed to bring high-fidelity visual reasoning to the <strong>Granite 4.0 Micro<\/strong> language backbone.<\/p>\n<p>This release represents a transition toward modular, extraction-focused AI that prioritizes structured data accuracy\u2014such as converting complex charts to code or tables to HTML\u2014over general-purpose image captioning.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Architecture: Modular LoRA and DeepStack Integration<\/strong><\/h3>\n<p>The Granite 4.0 3B Vision model is delivered as a <strong>LoRA (Low-Rank Adaptation)<\/strong> adapter with approximately 0.5B parameters. This adapter is designed to be loaded on top of the <strong>Granite 4.0 Micro<\/strong> base model, a 3.5B parameter dense language model. 
This design allows for a \u2018dual-mode\u2019 deployment: the base model can handle text-only requests independently, while the vision adapter is activated only when multimodal processing is required.<\/p>\n<h4 class=\"wp-block-heading\"><strong>Vision Encoder and Patch Tiling<\/strong><\/h4>\n<p>The visual component utilizes the <strong>google\/siglip2-so400m-patch16-384<\/strong> encoder. To maintain high resolution across diverse document layouts, the model employs a tiling mechanism. Input images are decomposed into <strong>384\u00d7384 patches<\/strong>, which are processed alongside a downscaled global view of the entire image. This approach ensures that fine details\u2014such as subscripts in formulas or small data points in charts\u2014are preserved before they reach the language backbone.<\/p>\n<h4 class=\"wp-block-heading\"><strong>The DeepStack Backbone<\/strong><\/h4>\n<p>To bridge the vision and language modalities, IBM utilizes a variant of the <strong>DeepStack architecture<\/strong>. This involves deeply stacking visual tokens into the language model across <strong>8 specific injection points<\/strong>. By routing visual features into multiple layers of the transformer, the model achieves a tighter alignment between the \u2018what\u2019 (semantic content) and the \u2018where\u2019 (spatial layout), which is critical for maintaining structure during document parsing.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Training Curriculum: Focused on Chart and Table Extraction<\/strong><\/h3>\n<p>The training of Granite 4.0 3B Vision reflects a strategic shift toward specialized extraction tasks. 
Rather than relying solely on general image-text datasets, IBM utilized a curated mixture of instruction-following data focused on complex document structures.<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>ChartNet Dataset:<\/strong> The model was refined using <strong>ChartNet<\/strong>, a million-scale multimodal dataset designed for robust chart understanding.<\/li>\n<li><strong>Code-Guided Pipeline:<\/strong> A key technical highlight of the training involves a \u201ccode-guided\u201d approach for chart reasoning. This pipeline uses aligned data consisting of the original plotting code, the resulting rendered image, and the underlying data table, allowing the model to learn the structural relationship between visual representations and their source data.<\/li>\n<li><strong>Extraction Tuning:<\/strong> The model was fine-tuned on a mixture of datasets focusing on <strong>Key-Value Pair (KVP) extraction<\/strong>, table structure recognition, and converting visual charts into machine-readable formats like CSV, JSON, and OTSL.<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>Performance and Evaluation Benchmarks<\/strong><\/h3>\n<p>In technical evaluations, Granite 4.0 3B Vision has been benchmarked against several industry-standard suites for document understanding. 
Notably, datasets such as <strong>PubTables-v2<\/strong> and <strong>OmniDocBench<\/strong> serve strictly as evaluation benchmarks, verifying the model\u2019s zero-shot performance in real-world scenarios.<\/p>\n<figure class=\"wp-block-table\">\n<table class=\"has-fixed-layout\">\n<thead>\n<tr>\n<td><strong>Task<\/strong><\/td>\n<td><strong>Evaluation Benchmark<\/strong><\/td>\n<td><strong>Metric<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>KVP Extraction<\/strong><\/td>\n<td>VAREX<\/td>\n<td>85.5% Exact Match (Zero-Shot)<\/td>\n<\/tr>\n<tr>\n<td><strong>Chart Reasoning<\/strong><\/td>\n<td>ChartNet (Human-Verified Test Set)<\/td>\n<td>High Accuracy in Chart2Summary<\/td>\n<\/tr>\n<tr>\n<td><strong>Table Extraction<\/strong><\/td>\n<td>TableVQA-Bench &amp; OmniDocBench<\/td>\n<td>Evaluated via TEDS and HTML extraction<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<p>The model currently ranks 3rd among models in the 2\u20134B parameter class on the VAREX leaderboard (as of March 2026), demonstrating its efficiency in structured extraction despite its compact size.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1652\" height=\"1418\" data-attachment-id=\"78756\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/04\/01\/ibm-releases-granite-4-0-3b-vision-a-new-vision-language-model-for-enterprise-grade-document-data-extraction\/screenshot-2026-04-01-at-10-52-33-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/Screenshot-2026-04-01-at-10.52.33-PM-1.png\" data-orig-size=\"1652,1418\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-04-01 at 
10.52.33\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/Screenshot-2026-04-01-at-10.52.33-PM-1-300x258.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/Screenshot-2026-04-01-at-10.52.33-PM-1-1024x879.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/Screenshot-2026-04-01-at-10.52.33-PM-1.png\" alt=\"\" class=\"wp-image-78756\" \/><figcaption class=\"wp-element-caption\">https:\/\/huggingface.co\/blog\/ibm-granite\/granite-4-vision<\/figcaption><\/figure>\n<\/div>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1602\" height=\"736\" data-attachment-id=\"78758\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/04\/01\/ibm-releases-granite-4-0-3b-vision-a-new-vision-language-model-for-enterprise-grade-document-data-extraction\/screenshot-2026-04-01-at-10-55-20-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/Screenshot-2026-04-01-at-10.55.20-PM-1.png\" data-orig-size=\"1602,736\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-04-01 at 10.55.20\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/Screenshot-2026-04-01-at-10.55.20-PM-1-300x138.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/Screenshot-2026-04-01-at-10.55.20-PM-1-1024x470.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/Screenshot-2026-04-01-at-10.55.20-PM-1.png\" alt=\"\" class=\"wp-image-78758\" \/><figcaption 
class=\"wp-element-caption\">https:\/\/huggingface.co\/blog\/ibm-granite\/granite-4-vision<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Key Takeaways<\/strong><\/h3>\n<ul class=\"wp-block-list\">\n<li><strong>Modular LoRA Architecture:<\/strong> The model is a <strong>0.5B parameter LoRA adapter<\/strong> that operates on the <strong>Granite 4.0 Micro<\/strong> (3.5B) backbone. This design allows a single deployment to handle text-only workloads efficiently while activating vision capabilities only when needed.<\/li>\n<li><strong>High-Resolution Tiling:<\/strong> Utilizing the <strong>google\/siglip2-so400m-patch16-384<\/strong> encoder, the model processes images by tiling them into <strong>384\u00d7384 patches<\/strong> alongside a global downscaled view, ensuring that fine details in complex documents are preserved.<\/li>\n<li><strong>DeepStack Injection:<\/strong> To improve layout awareness, the model uses a <strong>DeepStack<\/strong> approach with <strong>8 injection points<\/strong>. 
This routes semantic features to earlier layers and spatial details to later layers, which is critical for accurate table and chart extraction.<\/li>\n<li><strong>Specialized Extraction Training:<\/strong> Beyond general instruction following, the model was refined using <strong>ChartNet<\/strong> and a \u2018code-guided\u2019 pipeline that aligns plotting code, images, and data tables to help the model internalize the logic of visual data structures.<\/li>\n<li><strong>Developer-Ready Integration:<\/strong> The release is <strong>Apache 2.0<\/strong> licensed and features native support for <strong>vLLM<\/strong> (via a custom model implementation) and <strong>Docling<\/strong>, IBM\u2019s tool for converting unstructured PDFs into machine-readable JSON or HTML.<\/li>\n<\/ul>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the <strong><a href=\"https:\/\/huggingface.co\/blog\/ibm-granite\/granite-4-vision\" target=\"_blank\" rel=\"noreferrer noopener\">technical details<\/a><\/strong> and the <strong><a href=\"https:\/\/huggingface.co\/ibm-granite\/granite-4.0-3b-vision\" target=\"_blank\" rel=\"noreferrer noopener\">model weights<\/a><\/strong>.<\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/04\/01\/ibm-releases-granite-4-0-3b-vision-a-new-vision-language-model-for-enterprise-grade-document-data-extraction\/\">IBM Releases Granite 4.0 3B Vision: A New Vision Language Model for Enterprise Grade Document Data Extraction<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>IBM has announced the release &hellip;<\/p>\n","protected":false},"author":1,"featured_media":654,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-653","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/653","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=653"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/653\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/654"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=653"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"h
ttps:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=653"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=653"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}