{"id":759,"date":"2026-04-20T03:11:03","date_gmt":"2026-04-19T19:11:03","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=759"},"modified":"2026-04-20T03:11:03","modified_gmt":"2026-04-19T19:11:03","slug":"how-tabpfn-leverages-in-context-learning-to-achieve-superior-accuracy-on-tabular-datasets-compared-to-random-forest-and-catboost","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=759","title":{"rendered":"How TabPFN Leverages In-Context Learning to Achieve Superior Accuracy on Tabular Datasets Compared to Random Forest and CatBoost"},"content":{"rendered":"<p>Tabular data\u2014structured information stored in rows and columns\u2014is at the heart of most real-world machine learning problems, from healthcare records to financial transactions. Over the years, models based on decision trees, such as <strong>Random Forest<\/strong>, <strong>XGBoost<\/strong>, and <strong>CatBoost<\/strong>, have become the default choice for these tasks. Their strength lies in handling mixed data types, capturing complex feature interactions, and delivering strong performance without heavy preprocessing. While deep learning has transformed areas like computer vision and natural language processing, it has historically struggled to consistently outperform these tree-based approaches on tabular datasets.<\/p>\n<p>That long-standing trend is now being questioned. A newer approach, <strong>TabPFN<\/strong>, introduces a different way of tackling tabular problems\u2014one that avoids traditional dataset-specific training altogether. Instead of learning from scratch each time, it relies on a pretrained model to make predictions directly, effectively shifting much of the learning process to inference time. 
In this article, we take a closer look at this idea and put it to the test by comparing TabPFN with established tree-based models like Random Forest and CatBoost on a sample dataset, evaluating their performance in terms of accuracy, training time, and inference speed.<\/p>\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1055\" height=\"607\" data-attachment-id=\"79139\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/04\/19\/how-tabpfn-leverages-in-context-learning-to-achieve-superior-accuracy-on-tabular-datasets-compared-to-random-forest-and-catboost\/image-445\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-39.png\" data-orig-size=\"1055,607\" data-comments-opened=\"0\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"image\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-39-1024x589.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-39.png\" alt=\"\" class=\"wp-image-79139\" \/><\/figure>\n<h1 class=\"wp-block-heading\">What is TabPFN?<\/h1>\n<p>TabPFN is a <strong>tabular foundation model<\/strong> designed to handle structured data in a completely different way from traditional machine learning. Instead of training a new model for every dataset, TabPFN is <strong>pretrained on millions of synthetic tabular tasks<\/strong> generated from causal processes. This allows it to learn a general strategy for solving supervised learning problems. When you give it your dataset, it doesn\u2019t go through iterative training like tree-based models\u2014instead, it performs predictions directly by leveraging what it has already learned. 
In essence, it applies a form of <strong>in-context learning<\/strong> to tabular data, similar to how large language models work for text.<\/p>\n<p>The latest version, TabPFN-2.5, significantly expands this idea by supporting larger and more complex datasets, while also improving performance. It has been shown to <strong>outperform tuned tree-based models like XGBoost and CatBoost<\/strong> on standard benchmarks and even match strong ensemble systems like AutoGluon. At the same time, it reduces the need for hyperparameter tuning and manual effort. To make it practical for real-world deployment, TabPFN also introduces a <strong>distillation approach<\/strong>, where its predictions can be converted into smaller models like neural networks or tree ensembles, retaining most of the accuracy while enabling much faster inference.<\/p>\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1061\" height=\"427\" data-attachment-id=\"79141\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/04\/19\/how-tabpfn-leverages-in-context-learning-to-achieve-superior-accuracy-on-tabular-datasets-compared-to-random-forest-and-catboost\/image-447\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-41.png\" data-orig-size=\"1061,427\" data-comments-opened=\"0\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"image\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-41-1024x412.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-41.png\" alt=\"\" class=\"wp-image-79141\" \/><\/figure>\n<h1 class=\"wp-block-heading\">Comparing TabPFN with Tree-Based Models<\/h1>\n<h2 
class=\"wp-block-heading\">Setting up the dependencies<\/h2>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-bash\">pip install tabpfn-client scikit-learn catboost<\/code><\/pre>\n<\/div>\n<\/div>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import time\nimport numpy as np\nfrom sklearn.datasets import make_classification\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\n\n# Models\nfrom sklearn.ensemble import RandomForestClassifier\nfrom catboost import CatBoostClassifier\nfrom tabpfn_client import TabPFNClassifier<\/code><\/pre>\n<\/div>\n<\/div>\n<p>To run the model, you need a TabPFN API key. 
You can obtain one from <a href=\"https:\/\/ux.priorlabs.ai\/home\">https:\/\/ux.priorlabs.ai\/home<\/a><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import os\nfrom getpass import getpass\nos.environ['TABPFN_TOKEN'] = getpass('Enter TABPFN Token: ')<\/code><\/pre>\n<\/div>\n<\/div>\n<h2 class=\"wp-block-heading\">Creating the dataset<\/h2>\n<p>For our experiment, we generate a synthetic binary classification dataset using make_classification from scikit-learn. The dataset contains 5,000 samples and 20 features: 10 are informative (they actually contribute to predicting the target), 5 are redundant (linear combinations of the informative ones), and the remaining 5 carry no signal at all. This setup simulates a realistic tabular scenario where not all features are equally useful, and some introduce noise or correlation.<\/p>\n<p>We then split the data into training (80%) and testing (20%) sets to evaluate model performance on unseen data. 
Using a synthetic dataset allows us to have full control over the data characteristics while ensuring a fair and reproducible comparison between TabPFN and traditional tree-based models.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">X, y = make_classification(\n    n_samples=5000,\n    n_features=20,\n    n_informative=10,\n    n_redundant=5,\n    random_state=42\n)\n\nX_train, X_test, y_train, y_test = train_test_split(\n    X, y, test_size=0.2, random_state=42\n)<\/code><\/pre>\n<\/div>\n<\/div>\n<h2 class=\"wp-block-heading\">Testing Random Forest<\/h2>\n<p>We start with a Random Forest classifier as a baseline, using 200 trees. Random Forest is a robust ensemble method that builds multiple decision trees and aggregates their predictions, making it a strong and reliable choice for tabular data without requiring heavy tuning.<\/p>\n<p>After training on the dataset, the model achieves an accuracy of <strong>95.5%<\/strong>, a solid result given the synthetic nature of the data. However, this comes with a training time of <strong>9.56<\/strong> seconds, reflecting the cost of building hundreds of trees. On the positive side, inference is relatively fast at <strong>0.0627<\/strong> seconds, since predictions only involve passing data through the already constructed trees. 
This result serves as a strong baseline to compare against more advanced methods like CatBoost and TabPFN.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">rf = RandomForestClassifier(n_estimators=200, random_state=42)  # fixed seed for reproducibility\n\nstart = time.time()\nrf.fit(X_train, y_train)\nrf_train_time = time.time() - start\n\nstart = time.time()\nrf_preds = rf.predict(X_test)\nrf_infer_time = time.time() - start\n\nrf_acc = accuracy_score(y_test, rf_preds)\n\nprint(f\"RandomForest \u2192 Acc: {rf_acc:.4f}, Train: {rf_train_time:.2f}s, Infer: {rf_infer_time:.4f}s\")<\/code><\/pre>\n<\/div>\n<\/div>\n<h2 class=\"wp-block-heading\">Testing CatBoost<\/h2>\n<p>Next, we train a CatBoost classifier, a gradient boosting model specifically designed for tabular data. It builds trees sequentially, where each new tree corrects the errors of the previous ones. Compared to Random Forest, CatBoost is typically more accurate because of this boosting approach and its ability to model complex patterns more effectively.<\/p>\n<p>On our dataset, CatBoost achieves an accuracy of <strong>96.7%<\/strong>, outperforming Random Forest and demonstrating its strength as a state-of-the-art tree-based method. It also trains slightly faster, taking <strong>8.15<\/strong> seconds, despite using 500 boosting iterations. 
One of its biggest advantages is inference speed: predictions are extremely fast at just <strong>0.0119<\/strong> seconds, making it well-suited for production scenarios where low latency is critical. This makes CatBoost a strong benchmark before comparing against newer approaches like TabPFN.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">cat = CatBoostClassifier(\n    iterations=500,\n    depth=6,\n    learning_rate=0.1,\n    random_seed=42,\n    verbose=0\n)\n\nstart = time.time()\ncat.fit(X_train, y_train)\ncat_train_time = time.time() - start\n\nstart = time.time()\ncat_preds = cat.predict(X_test)\ncat_infer_time = time.time() - start\n\ncat_acc = accuracy_score(y_test, cat_preds)\n\nprint(f\"CatBoost \u2192 Acc: {cat_acc:.4f}, Train: {cat_train_time:.2f}s, Infer: {cat_infer_time:.4f}s\")<\/code><\/pre>\n<\/div>\n<\/div>\n<h2 class=\"wp-block-heading\">Testing TabPFN<\/h2>\n<p>Finally, we evaluate TabPFN, which takes a fundamentally different approach compared to traditional models. Instead of learning from scratch on the dataset, it leverages a pretrained model and simply conditions on the training data during inference. 
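<\/p>
<p>To build intuition for this cost profile (near-instant fit, heavier predict), the toy sketch below mimics the in-context pattern: like TabPFN, its fit() merely stores the data, and all of the work of conditioning on the training set happens inside predict(). It is only an illustration of the pattern via a 1-nearest-neighbour rule, not TabPFN\u2019s actual transformer architecture.<\/p>

```python
import numpy as np

class ToyInContextClassifier:
    """Toy stand-in (NOT TabPFN's real algorithm): fit() only stores the
    data, and all computation happens in predict()."""

    def fit(self, X, y):
        # No training loop: just keep the data to condition on later.
        self.X_, self.y_ = np.asarray(X, dtype=float), np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Squared distance from every test row to every stored training row.
        d = ((X[:, None, :] - self.X_[None, :, :]) ** 2).sum(axis=-1)
        # Label of the nearest training row (1-nearest-neighbour).
        return self.y_[d.argmin(axis=1)]

rng = np.random.default_rng(0)
X_tr = rng.normal(size=(200, 5))
y_tr = (X_tr[:, 0] > 0).astype(int)
X_te = rng.normal(size=(50, 5))

preds = ToyInContextClassifier().fit(X_tr, y_tr).predict(X_te)
print(preds.shape)  # (50,)
```

<p>In TabPFN the analogous predict() step is a transformer forward pass over the training and test rows together, which is why its inference cost grows with the amount of data handed to it.<\/p>
<p>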
The .fit() step performs no iterative training (with the hosted client it essentially just registers the training data with the pretrained model), which is why it is extremely fast.<\/p>\n<p>On our dataset, TabPFN achieves the highest accuracy of <strong>98.8%<\/strong>, outperforming both Random Forest and CatBoost. The fit time is just <strong>0.47<\/strong> seconds, significantly faster than the tree-based models since no actual training is performed. However, this shift comes with a trade-off: inference takes <strong>2.21<\/strong> seconds, which is much slower than CatBoost and Random Forest. This is because TabPFN processes both the training and test data together during prediction, effectively performing the \u201clearning\u201d step at inference time.<\/p>\n<p>Overall, TabPFN demonstrates a strong advantage in accuracy and setup speed, while highlighting a different computational trade-off compared to traditional tabular models.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">tabpfn = TabPFNClassifier()\n\nstart = time.time()\ntabpfn.fit(X_train, y_train)  # fast: no iterative training happens here\ntabpfn_train_time = time.time() - start\n\nstart = time.time()\ntabpfn_preds = tabpfn.predict(X_test)\ntabpfn_infer_time = time.time() - start\n\ntabpfn_acc = accuracy_score(y_test, tabpfn_preds)\n\nprint(f\"TabPFN \u2192 Acc: {tabpfn_acc:.4f}, Fit: {tabpfn_train_time:.2f}s, Infer: {tabpfn_infer_time:.4f}s\")<\/code><\/pre>\n<\/div>\n<\/div>\n<h1 
class=\"wp-block-heading\">Results<\/h1>\n<p>Across our experiments, TabPFN delivers the strongest overall performance, achieving the highest accuracy (<strong>98.8%<\/strong>) while requiring virtually no training time (<strong>0.47s<\/strong>) compared to Random Forest (<strong>9.56s<\/strong>) and CatBoost (<strong>8.15s<\/strong>). This highlights its key advantage: eliminating dataset-specific training and hyperparameter tuning while still outperforming well-established tree-based methods. However, this benefit comes with a trade-off\u2014<strong>inference latency is significantly higher (2.21s)<\/strong>, as the model processes both training and test data together during prediction. In contrast, CatBoost and Random Forest offer much faster inference, making them more suitable for real-time applications.<\/p>\n<p>From a practical standpoint, TabPFN is highly effective for <strong>small-to-medium tabular tasks<\/strong>, rapid experimentation, and scenarios where minimizing development time is critical. For production environments, especially those requiring low-latency predictions or handling very large datasets, newer advancements such as <strong>TabPFN\u2019s distillation engine<\/strong> help bridge this gap by converting the model into compact neural networks or tree ensembles, retaining most of its accuracy while drastically improving inference speed. Additionally, support for scaling to millions of rows makes it increasingly viable for enterprise use cases. 
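<\/p>
<p>The distillation idea can be sketched in a few lines. Because TabPFN itself needs an API token, the example below uses a RandomForest as a hypothetical stand-in teacher; the pattern is the same: fit the slow-to-serve teacher once, then train a small, fast student on the teacher\u2019s predictions so it imitates the teacher at a fraction of the inference cost.<\/p>

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Teacher: accurate but (like TabPFN) assumed expensive at inference time.
teacher = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Student: a small tree trained on the teacher's labels instead of the truth,
# compressing the teacher's decision surface into a much faster model.
student = DecisionTreeClassifier(max_depth=8, random_state=0)
student.fit(X_tr, teacher.predict(X_tr))

print("teacher acc:", round(accuracy_score(y_te, teacher.predict(X_te)), 3))
print("student acc:", round(accuracy_score(y_te, student.predict(X_te)), 3))
```

<p>TabPFN\u2019s own distillation engine works on the same teacher-student principle but distills into tuned neural networks or tree ensembles; the snippet above only illustrates the mechanism.<\/p>
<p>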
Overall, TabPFN represents a shift in tabular machine learning\u2014trading traditional training effort for a more flexible, inference-driven approach.<\/p>\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1061\" height=\"222\" data-attachment-id=\"79138\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/04\/19\/how-tabpfn-leverages-in-context-learning-to-achieve-superior-accuracy-on-tabular-datasets-compared-to-random-forest-and-catboost\/image-444\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-38.png\" data-orig-size=\"1061,222\" data-comments-opened=\"0\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"image\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-38-1024x214.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-38.png\" alt=\"\" class=\"wp-image-79138\" \/><\/figure>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"348\" height=\"309\" data-attachment-id=\"79140\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/04\/19\/how-tabpfn-leverages-in-context-learning-to-achieve-superior-accuracy-on-tabular-datasets-compared-to-random-forest-and-catboost\/image-446\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-40.png\" data-orig-size=\"348,309\" data-comments-opened=\"0\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"image\" 
data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-40.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-40.png\" alt=\"\" class=\"wp-image-79140\" \/><\/figure>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"353\" height=\"310\" data-attachment-id=\"79142\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/04\/19\/how-tabpfn-leverages-in-context-learning-to-achieve-superior-accuracy-on-tabular-datasets-compared-to-random-forest-and-catboost\/image-448\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-42.png\" data-orig-size=\"353,310\" data-comments-opened=\"0\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"image\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-42.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-42.png\" alt=\"\" class=\"wp-image-79142\" \/><\/figure>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"367\" height=\"295\" data-attachment-id=\"79144\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/04\/19\/how-tabpfn-leverages-in-context-learning-to-achieve-superior-accuracy-on-tabular-datasets-compared-to-random-forest-and-catboost\/image-450\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-44.png\" data-orig-size=\"367,295\" data-comments-opened=\"0\" 
data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"image\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-44.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/04\/image-44.png\" alt=\"\" class=\"wp-image-79144\" \/><\/figure>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the <strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Agents-Projects-Tutorials\/blob\/main\/Data%20Science\/TabPFN.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes with Notebook here<\/a><\/strong>.<\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/04\/19\/how-tabpfn-leverages-in-context-learning-to-achieve-superior-accuracy-on-tabular-datasets-compared-to-random-forest-and-catboost\/\">How TabPFN Leverages In-Context Learning to Achieve Superior Accuracy on Tabular Datasets Compared to Random Forest and CatBoost<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Tabular data\u2014structured inform&hellip;<\/p>\n","protected":false},"author":1,"featured_media":760,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-759","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/759","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=759"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/post
s\/759\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/760"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=759"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=759"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=759"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}