{"id":399,"date":"2026-02-12T12:10:17","date_gmt":"2026-02-12T04:10:17","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=399"},"modified":"2026-02-12T12:10:17","modified_gmt":"2026-02-12T04:10:17","slug":"how-to-build-a-matryoshka-optimized-sentence-embedding-model-for-ultra-fast-retrieval-with-64-dimension-truncation","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=399","title":{"rendered":"How to Build a Matryoshka-Optimized Sentence Embedding Model for Ultra-Fast Retrieval with 64-Dimension Truncation"},"content":{"rendered":"<p>In this tutorial, we fine-tune a Sentence-Transformers embedding model using Matryoshka Representation Learning so that the earliest dimensions of the vector carry the most useful semantic signal. We train with MatryoshkaLoss on triplet data and then validate the key promise of MRL by benchmarking retrieval quality after truncating embeddings to 64, 128, and 256 dimensions. At the end, we save the tuned model and demonstrate how to load it with a small truncate_dim setting for fast and memory-efficient vector search. 
Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/LLM%20Projects\/matryoshka_representation_learning_sentencetransformers_msmarco_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.<\/strong><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">!pip -q install -U sentence-transformers datasets accelerate\n\n\nimport math\nimport random\nimport numpy as np\nimport torch\n\n\nfrom datasets import load_dataset\nfrom torch.utils.data import DataLoader\n\n\nfrom sentence_transformers import SentenceTransformer, InputExample\nfrom sentence_transformers import losses\nfrom sentence_transformers.util import cos_sim\n\n\n\n\ndef set_seed(seed=42):\n   random.seed(seed)\n   np.random.seed(seed)\n   torch.manual_seed(seed)\n   torch.cuda.manual_seed_all(seed)\n\n\nset_seed(42)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We install the required libraries and import all the necessary modules for training and evaluation. We set a deterministic seed, so our sampling and training behavior stay consistent across runs. We also ensure PyTorch and CUDA RNGs are aligned when a GPU is available. 
Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/LLM%20Projects\/matryoshka_representation_learning_sentencetransformers_msmarco_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.<\/strong><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">@torch.no_grad()\ndef retrieval_metrics_mrr_recall_at_k(\n   model,\n   queries,\n   corpus,\n   qrels,\n   dims_list=(64, 128, 256, None),\n   k=10,\n   batch_size=64,\n):\n   device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n   model.to(device)\n\n\n   qids = list(queries.keys())\n   docids = list(corpus.keys())\n\n\n   q_texts = [queries[qid] for qid in qids]\n   d_texts = [corpus[did] for did in docids]\n\n\n   q_emb = model.encode(q_texts, batch_size=batch_size, convert_to_tensor=True, normalize_embeddings=True)\n   d_emb = model.encode(d_texts, batch_size=batch_size, convert_to_tensor=True, normalize_embeddings=True)\n\n\n   results = {}\n\n\n   for dim in dims_list:\n       if dim is None:\n           qe = q_emb\n           de = d_emb\n           dim_name = \"full\"\n       else:\n           qe = q_emb[:, :dim]\n           de = d_emb[:, :dim]\n           dim_name = str(dim)\n           qe = torch.nn.functional.normalize(qe, p=2, dim=1)\n           de = torch.nn.functional.normalize(de, p=2, dim=1)\n\n\n       sims = cos_sim(qe, 
de)\n\n\n       mrr_total = 0.0\n       recall_total = 0.0\n\n\n       for i, qid in enumerate(qids):\n           rel = qrels.get(qid, set())\n           if not rel:\n               continue\n\n\n           topk = torch.topk(sims[i], k=min(k, sims.shape[1]), largest=True).indices.tolist()\n           topk_docids = [docids[j] for j in topk]\n\n\n           recall_total += 1.0 if any(d in rel for d in topk_docids) else 0.0\n\n\n           rr = 0.0\n           for rank, d in enumerate(topk_docids, start=1):\n               if d in rel:\n                   rr = 1.0 \/ rank\n                   break\n           mrr_total += rr\n\n\n       denom = max(1, len(qids))\n       results[dim_name] = {f\"MRR@{k}\": mrr_total \/ denom, f\"Recall@{k}\": recall_total \/ denom}\n\n\n   return results\n\n\n\n\ndef pretty_print(results, title):\n   print(\"\\n\" + \"=\" * 80)\n   print(title)\n   print(\"=\" * 80)\n   for dim, metrics in results.items():\n       print(f\"dim={dim:&gt;4} | \" + \" | \".join([f\"{k}={v:.4f}\" for k, v in metrics.items()]))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We implement a lightweight retrieval evaluator that encodes queries and documents, computes cosine similarity, and reports MRR@10 and Recall@10. We re-normalize embeddings after truncation so smaller prefixes remain comparable in cosine space. We also add a compact printer to make before\/after comparisons easy to read. 
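The re-normalization step matters because a truncated unit vector is no longer unit-length; here is a minimal NumPy sketch of that fact (our own illustration, not part of the tutorial code):

```python
import numpy as np

# Illustrative sketch: truncating a unit-norm embedding shrinks its norm,
# so cosine comparisons on prefixes need a fresh normalization pass.
rng = np.random.default_rng(0)
v = rng.normal(size=768)
v = v / np.linalg.norm(v)                  # full embedding, unit norm

prefix = v[:64]                            # Matryoshka-style 64-dim prefix
print(np.linalg.norm(prefix))              # strictly less than 1.0

prefix = prefix / np.linalg.norm(prefix)   # re-normalize for cosine search
print(np.linalg.norm(prefix))              # back to unit norm
```

Without this step, shorter prefixes would systematically score lower cosine similarities than full vectors, making the per-dimension comparison unfair.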
Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/LLM%20Projects\/matryoshka_representation_learning_sentencetransformers_msmarco_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.<\/strong><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">DATASET_ID = \"sentence-transformers\/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1\"\nSUBSET = \"triplet-hard\"\nSPLIT = \"train\"\n\n\nTRAIN_SAMPLES = 4000\nEVAL_QUERIES = 300\n\n\nstream = load_dataset(DATASET_ID, SUBSET, split=SPLIT, streaming=True)\n\n\ntrain_examples = []\neval_queries = {}\neval_corpus = {}\neval_qrels = {}\n\n\ndoc_id_counter = 0\nqid_counter = 0\n\n\nfor row in stream:\n   q = (row.get(\"query\") or \"\").strip()\n   pos = (row.get(\"positive\") or \"\").strip()\n   neg = (row.get(\"negative\") or \"\").strip()\n\n\n   if not q or not pos or not neg:\n       continue\n\n\n   train_examples.append(InputExample(texts=[q, pos, neg]))\n\n\n   if len(eval_queries) &lt; EVAL_QUERIES:\n       qid = f\"q{qid_counter}\"\n       qid_counter += 1\n\n\n       pos_id = f\"d{doc_id_counter}\"; doc_id_counter += 1\n       neg_id = f\"d{doc_id_counter}\"; doc_id_counter += 1\n\n\n       eval_queries[qid] = q\n       eval_corpus[pos_id] = pos\n       eval_corpus[neg_id] = neg\n       eval_qrels[qid] = {pos_id}\n\n\n   if 
len(train_examples) &gt;= TRAIN_SAMPLES and len(eval_queries) &gt;= EVAL_QUERIES:\n       break\n\n\nprint(len(train_examples), len(eval_queries), len(eval_corpus))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We stream a mined MS MARCO triplet dataset and build both a training set (queries, positives, negatives) and a tiny IR benchmark set. We map each query to a relevant positive document and include a negative document to make retrieval meaningful. We stop early to keep the run Colab-friendly while still large enough to show truncation effects.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">MODEL_ID = \"BAAI\/bge-base-en-v1.5\"\n\n\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nmodel = SentenceTransformer(MODEL_ID, device=device)\nfull_dim = model.get_sentence_embedding_dimension()\n\n\nbaseline = retrieval_metrics_mrr_recall_at_k(\n   model,\n   queries=eval_queries,\n   corpus=eval_corpus,\n   qrels=eval_qrels,\n   dims_list=(64, 128, 256, None),\n   k=10,\n)\npretty_print(baseline, \"BEFORE\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We load a strong base embedding model and record its full embedding dimension. We run the baseline evaluation across 64\/128\/256\/full dimensions to see how truncation behaves before any training. 
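Truncation also pays off directly in index size, since vector-index memory scales linearly with dimension; a back-of-the-envelope sketch (assuming plain float32 vectors with no compression):

```python
# Back-of-the-envelope index sizing (assumes uncompressed float32 vectors):
# memory grows linearly with the embedding dimension.
def index_gb(n_vectors: int, dim: int, bytes_per_float: int = 4) -> float:
    return n_vectors * dim * bytes_per_float / 1e9

n = 1_000_000
print(index_gb(n, 768))  # 3.072 GB at bge-base's full dimension
print(index_gb(n, 64))   # 0.256 GB after 64-dim truncation (12x smaller)
```

The same 12x factor applies to similarity-computation FLOPs, which is why MRL's promise of useful 64-dim prefixes translates into faster search, not just smaller storage.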
We print the results so we can later compare whether MRL improves the early-dimension quality.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">batch_size = 16\nepochs = 1\nwarmup_steps = 100\n\n\ntrain_loader = DataLoader(train_examples, batch_size=batch_size, shuffle=True, drop_last=True)\n\n\nbase_loss = losses.MultipleNegativesRankingLoss(model=model)\n\n\nmrl_dims = [full_dim, 512, 256, 128, 64] if full_dim &gt;= 768 else [full_dim, 256, 128, 64]\nmrl_loss = losses.MatryoshkaLoss(\n   model=model,\n   loss=base_loss,\n   matryoshka_dims=mrl_dims\n)\n\n\nmodel.fit(\n   train_objectives=[(train_loader, mrl_loss)],\n   epochs=epochs,\n   warmup_steps=warmup_steps,\n   show_progress_bar=True,\n)\n\n\nafter = retrieval_metrics_mrr_recall_at_k(\n   model,\n   queries=eval_queries,\n   corpus=eval_corpus,\n   qrels=eval_qrels,\n   dims_list=(64, 128, 256, None),\n   k=10,\n)\npretty_print(after, \"AFTER\")\n\n\nout_dir = \"mrl-msmarco-demo\"\nmodel.save(out_dir)\n\n\nm64 = SentenceTransformer(out_dir, truncate_dim=64)\nemb = m64.encode(\n   [\"what is the liberal arts?\", \"liberal arts covers humanities and sciences\"],\n   normalize_embeddings=True\n)\nprint(emb.shape)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We create a MultipleNegativesRankingLoss and wrap it with MatryoshkaLoss using a descending list of target prefix dimensions. 
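Conceptually, MatryoshkaLoss evaluates the wrapped loss on each prefix length and combines the terms; the following simplified sketch is our own illustration of that idea, not the sentence-transformers implementation:

```python
import numpy as np

# Simplified illustration (not the library's code): apply the base loss to
# every embedding prefix and sum the terms, so short prefixes are trained
# to rank as well as the full vector.
def matryoshka_style_loss(base_loss, q_emb, d_emb, dims=(768, 256, 64)):
    return sum(base_loss(q_emb[:, :d], d_emb[:, :d]) for d in dims)

# Toy stand-in base loss: mean squared error between matched pairs.
mse = lambda a, b: float(((a - b) ** 2).mean())

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 768))
d = rng.normal(size=(4, 768))
print(matryoshka_style_loss(mse, q, d))  # one scalar combining all prefix losses
```

Because every prefix contributes to the gradient, the model learns to front-load semantic signal into the earliest dimensions, which is exactly what the truncation benchmark measures.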
We fine-tune the model on the triplets, then re-run the same truncation benchmark to measure how much retrieval quality each truncated prefix retains. Finally, we save the model and reload it with truncate_dim=64 to confirm practical usage for compact retrieval.<\/p>\n<p>In conclusion, we successfully trained a Matryoshka-optimized embedding model that maintains strong retrieval performance even when we truncate vectors to small prefix dimensions, such as 64. We verified the effect by comparing baseline versus post-training retrieval metrics across multiple truncation sizes and the full embedding. With the saved model and the truncate_dim loading pattern, we now have a clean workflow for building smaller, faster vector indexes while keeping the option to rerank with full-dimensional embeddings.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/LLM%20Projects\/matryoshka_representation_learning_sentencetransformers_msmarco_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/02\/11\/how-to-build-a-matryoshka-optimized-sentence-embedding-model-for-ultra-fast-retrieval-with-64-dimension-truncation\/\">How to Build a Matryoshka-Optimized Sentence Embedding Model for Ultra-Fast Retrieval with 64-Dimension Truncation<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we fine-tune&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-399","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/399","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=399"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/399\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=399"}],"wp:term":[{"taxonomy":"category","embeddable":true,
"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=399"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=399"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}