{"id":500,"date":"2026-03-03T11:23:03","date_gmt":"2026-03-03T03:23:03","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=500"},"modified":"2026-03-03T11:23:03","modified_gmt":"2026-03-03T03:23:03","slug":"a-coding-guide-to-build-a-scalable-end-to-end-analytics-and-machine-learning-pipeline-on-millions-of-rows-using-vaex","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=500","title":{"rendered":"A Coding Guide to Build a Scalable End-to-End Analytics and Machine Learning Pipeline on Millions of Rows Using Vaex"},"content":{"rendered":"<p>In this tutorial, we design an end-to-end, production-style analytics and modeling pipeline using <a href=\"https:\/\/github.com\/vaexio\/vaex\"><strong>Vaex<\/strong><\/a> to operate efficiently on millions of rows without materializing data in memory. We generate a realistic, large-scale dataset, engineer rich behavioral and city-level features using lazy expressions and approximate statistics, and aggregate insights at scale. 
We then integrate Vaex with scikit-learn to train and evaluate a predictive model, demonstrating how Vaex can act as the backbone for high-performance exploratory analysis and machine-learning workflows.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">!pip -q install \"vaex==4.19.0\" \"vaex-core==4.19.0\" \"vaex-ml==0.19.0\" \"vaex-viz==0.6.0\" \"vaex-hdf5==0.15.0\" \"pyarrow&gt;=14\" \"scikit-learn&gt;=1.3\"\n\n\nimport os, time, json, numpy as np, pandas as pd\nimport vaex\nimport vaex.ml\nfrom vaex.ml.sklearn import Predictor\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import roc_auc_score, average_precision_score\n\n\nprint(\"Python:\", __import__(\"sys\").version.split()[0])\nprint(\"vaex:\", vaex.__version__)\nprint(\"numpy:\", np.__version__)\nprint(\"pandas:\", pd.__version__)\n\n\nrng = np.random.default_rng(7)\n\n\nn = 2_000_000\ncities = np.array([\"Montreal\",\"Toronto\",\"Vancouver\",\"Calgary\",\"Ottawa\",\"Edmonton\",\"Quebec City\",\"Winnipeg\"], dtype=object)\ncity = rng.choice(cities, size=n, replace=True, p=np.array([0.16,0.18,0.12,0.10,0.10,0.10,0.10,0.14]))\nage = rng.integers(18, 75, size=n, endpoint=False).astype(\"int32\")\ntenure_m = rng.integers(0, 180, size=n, endpoint=False).astype(\"int32\")\ntx = rng.poisson(lam=22, size=n).astype(\"int32\")\nbase_income = rng.lognormal(mean=10.6, sigma=0.45, 
size=n).astype(\"float64\")\ncity_mult = pd.Series({\"Montreal\":0.92,\"Toronto\":1.05,\"Vancouver\":1.10,\"Calgary\":1.02,\"Ottawa\":1.00,\"Edmonton\":0.98,\"Quebec City\":0.88,\"Winnipeg\":0.90}).reindex(city).to_numpy()\nincome = (base_income * city_mult * (1.0 + 0.004*(age-35)) * (1.0 + 0.0025*np.minimum(tenure_m,120))).astype(\"float64\")\nincome = np.clip(income, 18_000, 420_000)\n\n\nnoise = rng.normal(0, 1, size=n).astype(\"float64\")\nscore_latent = (\n   0.55*np.log1p(income\/1000.0)\n   + 0.28*np.log1p(tx)\n   + 0.18*np.sqrt(np.maximum(tenure_m,0)\/12.0 + 1e-9)\n   - 0.012*(age-40)\n   + 0.22*(city == \"Vancouver\").astype(\"float64\")\n   + 0.15*(city == \"Toronto\").astype(\"float64\")\n   + 0.10*(city == \"Ottawa\").astype(\"float64\")\n   + 0.65*noise\n)\np = 1.0\/(1.0 + np.exp(-(score_latent - np.quantile(score_latent, 0.70))))\ntarget = (rng.random(n) &lt; p).astype(\"int8\")\n\n\ndf = vaex.from_arrays(city=city, age=age, tenure_m=tenure_m, tx=tx, income=income, target=target)\n\n\ndf[\"income_k\"] = df.income \/ 1000.0\ndf[\"tenure_y\"] = df.tenure_m \/ 12.0\ndf[\"log_income\"] = df.income.log1p()\ndf[\"tx_per_year\"] = df.tx \/ (df.tenure_y + 0.25)\ndf[\"value_score\"] = (0.35*df.log_income + 0.20*df.tx_per_year + 0.10*df.tenure_y - 0.015*df.age)\ndf[\"is_new\"] = df.tenure_m &lt; 6\ndf[\"is_senior\"] = df.age &gt;= 60\n\n\nprint(\"\\nRows:\", len(df), \"Columns:\", len(df.get_column_names()))\nprint(df[[\"city\",\"age\",\"tenure_m\",\"tx\",\"income\",\"income_k\",\"value_score\",\"target\"]].head(5))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We generate a large, realistic synthetic dataset and initialize a Vaex DataFrame to work lazily on millions of rows. We engineer core numerical features directly as expressions so no intermediate data is materialized. 
We validate the setup by inspecting schema, row counts, and a small sample of computed values.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">encoder = vaex.ml.LabelEncoder(features=[\"city\"])\ndf = encoder.fit_transform(df)\ncity_map = encoder.labels_[\"city\"]\ninv_city_map = {v:k for k,v in city_map.items()}\nn_cities = len(city_map)\n\n\np95_income_k_by_city = df.percentile_approx(\"income_k\", 95, binby=\"label_encoded_city\", shape=n_cities, limits=[-0.5, n_cities-0.5])\np50_value_by_city = df.percentile_approx(\"value_score\", 50, binby=\"label_encoded_city\", shape=n_cities, limits=[-0.5, n_cities-0.5])\navg_income_k_by_city = df.mean(\"income_k\", binby=\"label_encoded_city\", shape=n_cities, limits=[-0.5, n_cities-0.5])\ntarget_rate_by_city = df.mean(\"target\", binby=\"label_encoded_city\", shape=n_cities, limits=[-0.5, n_cities-0.5])\nn_by_city = df.count(binby=\"label_encoded_city\", shape=n_cities, limits=[-0.5, n_cities-0.5])\n\n\np95_income_k_by_city = np.asarray(p95_income_k_by_city).reshape(-1)\np50_value_by_city = np.asarray(p50_value_by_city).reshape(-1)\navg_income_k_by_city = np.asarray(avg_income_k_by_city).reshape(-1)\ntarget_rate_by_city = np.asarray(target_rate_by_city).reshape(-1)\nn_by_city = np.asarray(n_by_city).reshape(-1)\n\n\ncity_table = pd.DataFrame({\n   \"city_id\": np.arange(n_cities),\n   \"city\": [inv_city_map[i] for i in 
range(n_cities)],\n   \"n\": n_by_city.astype(\"int64\"),\n   \"avg_income_k\": avg_income_k_by_city,\n   \"p95_income_k\": p95_income_k_by_city,\n   \"median_value_score\": p50_value_by_city,\n   \"target_rate\": target_rate_by_city\n}).sort_values([\"target_rate\",\"p95_income_k\"], ascending=False)\n\n\nprint(\"\\nCity summary:\")\nprint(city_table.to_string(index=False))\n\n\ndf_city_features = vaex.from_pandas(city_table[[\"city\",\"p95_income_k\",\"avg_income_k\",\"median_value_score\",\"target_rate\"]], copy_index=False)\ndf = df.join(df_city_features, on=\"city\", rsuffix=\"_city\")\n\n\ndf[\"income_vs_city_p95\"] = df.income_k \/ (df.p95_income_k + 1e-9)\ndf[\"value_vs_city_median\"] = df.value_score - df.median_value_score<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We encode categorical city data and compute scalable, approximate per-city statistics using binning-based operations. We assemble these aggregates into a city-level table and join them back to the main dataset. We then derive relative features that compare each record against its city context.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">features_num = [\n   \"age\",\"tenure_y\",\"tx\",\"income_k\",\"log_income\",\"tx_per_year\",\"value_score\",\n   \"p95_income_k\",\"avg_income_k\",\"median_value_score\",\"target_rate\",\n   \"income_vs_city_p95\",\"value_vs_city_median\"\n]\n\n\nscaler = 
vaex.ml.StandardScaler(features=features_num, with_mean=True, with_std=True, prefix=\"z_\")\ndf = scaler.fit_transform(df)\n\n\nfeatures = [\"z_\"+f for f in features_num] + [\"label_encoded_city\"]\n\n\ndf_train, df_test = df.split_random([0.80, 0.20], random_state=42)\n\n\nmodel = LogisticRegression(max_iter=250, solver=\"lbfgs\", n_jobs=None)\nvaex_model = Predictor(model=model, features=features, target=\"target\", prediction_name=\"pred\")\n\n\nt0 = time.time()\nvaex_model.fit(df=df_train)\nfit_s = time.time() - t0\n\n\ndf_test = vaex_model.transform(df_test)\n\n\ny_true = df_test[\"target\"].to_numpy()\ny_pred = df_test[\"pred\"].to_numpy()\n\n\nauc = roc_auc_score(y_true, y_pred)\nap = average_precision_score(y_true, y_pred)\n\n\nprint(\"\\nModel:\")\nprint(\"fit_seconds:\", round(fit_s, 3))\nprint(\"test_auc:\", round(float(auc), 4))\nprint(\"test_avg_precision:\", round(float(ap), 4))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We standardize all numeric features using Vaex\u2019s ML utilities and prepare a consistent feature vector for modeling. We split the data into train and test sets without loading it entirely into memory. 
We train a logistic regression model through Vaex\u2019s sklearn wrapper and evaluate its predictive performance.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">deciles = np.linspace(0, 1, 11)\ncuts = np.quantile(y_pred, deciles)\ncuts[0] = -np.inf\ncuts[-1] = np.inf\nbucket = np.digitize(y_pred, cuts[1:-1], right=True).astype(\"int32\")\ndf_test_local = vaex.from_arrays(y_true=y_true.astype(\"int8\"), y_pred=y_pred.astype(\"float64\"), bucket=bucket)\nlift = df_test_local.groupby(by=\"bucket\", agg={\"n\": vaex.agg.count(), \"rate\": vaex.agg.mean(\"y_true\"), \"avg_pred\": vaex.agg.mean(\"y_pred\")}).sort(\"bucket\")\nlift_pd = lift.to_pandas_df()\nbaseline = float(df_test_local[\"y_true\"].mean())\nlift_pd[\"lift\"] = lift_pd[\"rate\"] \/ (baseline + 1e-12)\nprint(\"\\nDecile lift table:\")\nprint(lift_pd.to_string(index=False))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We analyze model behavior by segmenting predictions into deciles and computing lift metrics. We calculate baseline rates and compare them across score buckets to assess ranking quality. 
We summarize the results in a compact lift table that reflects real-world model diagnostics.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">out_dir = \"\/content\/vaex_artifacts\"\nos.makedirs(out_dir, exist_ok=True)\n\n\nparquet_path = os.path.join(out_dir, \"customers_vaex.parquet\")\nstate_path = os.path.join(out_dir, \"vaex_pipeline.json\")\n\n\nbase_cols = [\"city\",\"label_encoded_city\",\"age\",\"tenure_m\",\"tenure_y\",\"tx\",\"income\",\"income_k\",\"value_score\",\"target\"]\nexport_cols = base_cols + [\"z_\"+f for f in features_num]\ndf_export = df[export_cols].sample(n=500_000, random_state=1)\n\n\nif os.path.exists(parquet_path):\n   os.remove(parquet_path)\ndf_export.export_parquet(parquet_path)\n\n\npipeline_state = {\n   \"vaex_version\": vaex.__version__,\n   \"encoder_labels\": {k: {str(kk): int(vv) for kk,vv in v.items()} for k,v in encoder.labels_.items()},\n   \"scaler_mean\": [float(x) for x in scaler.mean_],\n   \"scaler_std\": [float(x) for x in scaler.std_],\n   \"features_num\": features_num,\n   \"export_cols\": export_cols,\n}\nwith open(state_path, \"w\") as f:\n   json.dump(pipeline_state, f)\n\n\ndf_reopen = vaex.open(parquet_path)\n\n\ndf_reopen[\"income_k\"] = df_reopen.income \/ 1000.0\ndf_reopen[\"tenure_y\"] = df_reopen.tenure_m \/ 12.0\ndf_reopen[\"log_income\"] = df_reopen.income.log1p()\ndf_reopen[\"tx_per_year\"] = df_reopen.tx \/ 
(df_reopen.tenure_y + 0.25)\ndf_reopen[\"value_score\"] = (0.35*df_reopen.log_income + 0.20*df_reopen.tx_per_year + 0.10*df_reopen.tenure_y - 0.015*df_reopen.age)\n\n\ndf_city_features = vaex.from_pandas(city_table[[\"city\",\"p95_income_k\",\"avg_income_k\",\"median_value_score\",\"target_rate\"]], copy_index=False)\ndf_reopen = df_reopen.join(df_city_features, on=\"city\", rsuffix=\"_city\")\ndf_reopen[\"income_vs_city_p95\"] = df_reopen.income_k \/ (df_reopen.p95_income_k + 1e-9)\ndf_reopen[\"value_vs_city_median\"] = df_reopen.value_score - df_reopen.median_value_score\n\n\nwith open(state_path, \"r\") as f:\n   st = json.load(f)\n\n\nlabels_city = {k: int(v) for k,v in st[\"encoder_labels\"][\"city\"].items()}\ndf_reopen[\"label_encoded_city\"] = df_reopen.city.map(labels_city, default_value=-1)\n\n\nfor i, feat in enumerate(st[\"features_num\"]):\n   mean_i = st[\"scaler_mean\"][i]\n   std_i = st[\"scaler_std\"][i] if st[\"scaler_std\"][i] != 0 else 1.0\n   df_reopen[\"z_\"+feat] = (df_reopen[feat] - mean_i) \/ std_i\n\n\ndf_reopen = vaex_model.transform(df_reopen)\n\n\nprint(\"\\nArtifacts written:\")\nprint(parquet_path)\nprint(state_path)\nprint(\"\\nReopened parquet predictions (head):\")\nprint(df_reopen[[\"city\",\"income_k\",\"value_score\",\"pred\",\"target\"]].head(8))\n\n\nprint(\"\\nDone.\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We export a large, feature-complete sample to Parquet and persist the full preprocessing state for reproducibility. We reload the data and deterministically rebuild all engineered features from saved metadata. We run inference on the reloaded dataset to confirm that the pipeline remains stable and deployable end-to-end.<\/p>\n<p>In conclusion, we demonstrated how Vaex enables fast, memory-efficient data processing while still supporting advanced feature engineering, aggregation, and model integration. 
We showed that approximate statistics, lazy computation, and out-of-core execution allow us to scale cleanly from analysis to deployment-ready artifacts. By exporting reproducible features and persisting the full pipeline state, we closed the loop from raw data to inference, illustrating how Vaex fits naturally into modern large-data Python workflows.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Data%20Science\/vaex_large_scale_analytics_and_ml_pipeline_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">120k+ ML SubReddit<\/a><\/strong>\u00a0and subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">Now you can join us on Telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/03\/02\/a-coding-guide-to-build-a-scalable-end-to-end-analytics-and-machine-learning-pipeline-on-millions-of-rows-using-vaex\/\">A Coding Guide to Build a Scalable End-to-End Analytics and Machine Learning Pipeline on Millions of Rows Using Vaex<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we design an&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-500","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/500","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=500"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/500\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=500"}],"wp:term":[{"taxonomy":"category","embeddable":t
rue,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=500"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=500"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}