{"id":508,"date":"2026-03-04T06:29:23","date_gmt":"2026-03-03T22:29:23","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=508"},"modified":"2026-03-04T06:29:23","modified_gmt":"2026-03-03T22:29:23","slug":"how-to-build-a-stable-and-efficient-qlora-fine-tuning-pipeline-using-unsloth-for-large-language-models","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=508","title":{"rendered":"How to Build a Stable and Efficient QLoRA Fine-Tuning Pipeline Using Unsloth for Large Language Models"},"content":{"rendered":"<p>In this tutorial, we demonstrate how to efficiently fine-tune a large language model using <a href=\"https:\/\/github.com\/unslothai\/unsloth\"><strong>Unsloth<\/strong><\/a> and QLoRA. We focus on building a stable, end-to-end supervised fine-tuning pipeline that handles common Colab issues such as GPU detection failures, runtime crashes, and library incompatibilities. By carefully controlling the environment, model configuration, and training loop, we show how to reliably train an instruction-tuned model with limited resources while maintaining strong performance and rapid iteration speed.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">import os, sys, subprocess, gc, locale\n\n\nlocale.getpreferredencoding = lambda: \"UTF-8\"\n\n\ndef run(cmd):\n   print(\"\\n$ \" + cmd, flush=True)\n   p = subprocess.Popen(cmd, shell=True, 
stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)\n   for line in p.stdout:\n       print(line, end=\"\", flush=True)\n   rc = p.wait()\n   if rc != 0:\n       raise RuntimeError(f\"Command failed ({rc}): {cmd}\")\n\n\nprint(\"Installing packages (this may take 2\u20133 minutes)...\", flush=True)\n\n\nrun(\"pip install -U pip\")\nrun(\"pip uninstall -y torch torchvision torchaudio\")\nrun(\n   \"pip install --no-cache-dir \"\n   \"torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 \"\n   \"--index-url https:\/\/download.pytorch.org\/whl\/cu121\"\n)\nrun(\n   \"pip install -U \"\n   \"transformers==4.45.2 \"\n   \"accelerate==0.34.2 \"\n   \"datasets==2.21.0 \"\n   \"trl==0.11.4 \"\n   \"sentencepiece safetensors evaluate\"\n)\nrun(\"pip install -U unsloth\")\n\n\nimport torch\ntry:\n   import unsloth\n   restarted = False\nexcept Exception:\n   restarted = True\n\n\nif restarted:\n   print(\"\\nRuntime needs restart. After restart, run this SAME cell again.\", flush=True)\n   os._exit(0)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up a controlled and compatible environment by reinstalling PyTorch and all required libraries. We ensure that Unsloth and its dependencies align correctly with the CUDA runtime available in Google Colab. 
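<\/p>
<p>The streaming runner used above can be exercised on its own. The sketch below mirrors that pattern with a harmless placeholder command instead of a pip install (the command string is illustrative, not from the tutorial):<\/p>

```python
import subprocess, sys

def run(cmd):
    # Echo the command, then stream combined stdout/stderr line by line so
    # long-running installs remain visible instead of appearing to hang.
    print("\n$ " + cmd, flush=True)
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
    for line in p.stdout:
        print(line, end="", flush=True)
    rc = p.wait()
    if rc != 0:
        raise RuntimeError(f"Command failed ({rc}): {cmd}")
    return rc

# Placeholder command: any shell command works the same way.
run(sys.executable + ' -c "print(42)"')
```

<p>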
We also handle the runtime restart logic so that the environment is clean and stable before training begins.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">import torch, gc\n\n\nassert torch.cuda.is_available()\nprint(\"Torch:\", torch.__version__)\nprint(\"GPU:\", torch.cuda.get_device_name(0))\nprint(\"VRAM(GB):\", round(torch.cuda.get_device_properties(0).total_memory \/ 1e9, 2))\n\n\ntorch.backends.cuda.matmul.allow_tf32 = True\ntorch.backends.cudnn.allow_tf32 = True\n\n\ndef clean():\n   gc.collect()\n   torch.cuda.empty_cache()\n\n\nimport unsloth\nfrom unsloth import FastLanguageModel\nfrom datasets import load_dataset\nfrom transformers import TextStreamer\nfrom trl import SFTTrainer, SFTConfig<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We verify GPU availability and configure PyTorch for efficient computation. We import Unsloth before all other training libraries to ensure that all performance optimizations are applied correctly. 
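<\/p>
<p>The same memory-hygiene pattern also works on machines without a GPU if the CUDA call is guarded; this is a hedged, defensive variant of the helper, not the exact code above:<\/p>

```python
import gc

def clean():
    # Drop unreachable Python objects first, so any tensors they hold become freeable.
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            # Return cached, unused GPU memory blocks to the driver.
            torch.cuda.empty_cache()
    except ImportError:
        pass  # torch absent: nothing GPU-side to release

clean()  # safe to call between training phases, with or without CUDA
```

<p>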
We also define utility functions to manage GPU memory during training.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">max_seq_length = 768\nmodel_name = \"unsloth\/Qwen2.5-1.5B-Instruct-bnb-4bit\"\n\n\nmodel, tokenizer = FastLanguageModel.from_pretrained(\n   model_name=model_name,\n   max_seq_length=max_seq_length,\n   dtype=None,\n   load_in_4bit=True,\n)\n\n\nmodel = FastLanguageModel.get_peft_model(\n   model,\n   r=8,\n   target_modules=[\"q_proj\",\"k_proj\"],\n   lora_alpha=16,\n   lora_dropout=0.0,\n   bias=\"none\",\n   use_gradient_checkpointing=\"unsloth\",\n   random_state=42,\n   max_seq_length=max_seq_length,\n)\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We load a 4-bit quantized, instruction-tuned model using Unsloth\u2019s fast-loading utilities. We then attach LoRA adapters to the model to enable parameter-efficient fine-tuning. 
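<\/p>
<p>To see why this is parameter-efficient, count the adapter weights: a projection of shape (d_out, d_in) gains two low-rank factors A (r x d_in) and B (d_out x r), while the 4-bit base weight stays frozen. The dimensions below are illustrative placeholders, not values read from the real model config:<\/p>

```python
def lora_param_count(d_in, d_out, r):
    # A: (r, d_in) and B: (d_out, r); the frozen base weight (d_out, d_in) is untouched.
    return r * d_in + d_out * r

r = 8
d = 1536        # illustrative hidden size, not necessarily the model's real value
n_layers = 28   # illustrative layer count

# q_proj and k_proj treated as square here; GQA models actually make k_proj narrower.
per_layer = 2 * lora_param_count(d, d, r)
total = n_layers * per_layer
print(total)  # ~1.4M trainable adapter params vs. ~1.5B frozen base params
```

<p>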
We configure the LoRA setup to balance memory efficiency and learning capacity.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">ds = load_dataset(\"trl-lib\/Capybara\", split=\"train\").shuffle(seed=42).select(range(1200))\n\n\ndef to_text(example):\n   example[\"text\"] = tokenizer.apply_chat_template(\n       example[\"messages\"],\n       tokenize=False,\n       add_generation_prompt=False,\n   )\n   return example\n\n\nds = ds.map(to_text, remove_columns=[c for c in ds.column_names if c != \"messages\"])\nds = ds.remove_columns([\"messages\"])\nsplit = ds.train_test_split(test_size=0.02, seed=42)\ntrain_ds, eval_ds = split[\"train\"], split[\"test\"]\n\n\ncfg = SFTConfig(\n   output_dir=\"unsloth_sft_out\",\n   dataset_text_field=\"text\",\n   max_seq_length=max_seq_length,\n   packing=False,\n   per_device_train_batch_size=1,\n   gradient_accumulation_steps=8,\n   max_steps=150,\n   learning_rate=2e-4,\n   warmup_ratio=0.03,\n   lr_scheduler_type=\"cosine\",\n   logging_steps=10,\n   eval_strategy=\"no\",\n   save_steps=0,\n   fp16=True,\n   optim=\"adamw_8bit\",\n   report_to=\"none\",\n   seed=42,\n)\n\n\ntrainer = SFTTrainer(\n   model=model,\n   tokenizer=tokenizer,\n   train_dataset=train_ds,\n   eval_dataset=eval_ds,\n   args=cfg,\n)\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We prepare the training dataset by converting multi-turn conversations into a single text format 
suitable for supervised fine-tuning. We split the dataset to maintain training integrity. We also define the training configuration, which controls the batch size, learning rate, and training duration.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">clean()\ntrainer.train()\n\n\nFastLanguageModel.for_inference(model)\n\n\ndef chat(prompt, max_new_tokens=160):\n   messages = [{\"role\":\"user\",\"content\":prompt}]\n   text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n   inputs = tokenizer([text], return_tensors=\"pt\").to(\"cuda\")\n   streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)\n   with torch.inference_mode():\n       model.generate(\n           **inputs,\n           max_new_tokens=max_new_tokens,\n           temperature=0.7,\n           top_p=0.9,\n           do_sample=True,\n           streamer=streamer,\n       )\n\n\nchat(\"Give a concise checklist for validating a machine learning model before deployment.\")\n\n\nsave_dir = \"unsloth_lora_adapters\"\nmodel.save_pretrained(save_dir)\ntokenizer.save_pretrained(save_dir)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We execute the training loop and monitor the fine-tuning process on the GPU. We switch the model to inference mode and validate its behavior using a sample prompt. 
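<\/p>
<p>One useful sanity check on this run: with a per-device batch of 1, gradient accumulation of 8, and 150 optimizer steps, training consumes 1 x 8 x 150 = 1,200 examples, i.e., roughly one epoch over the 1,176-example training split (1,200 samples minus the 2% eval split). The arithmetic:<\/p>

```python
per_device_batch = 1
grad_accum = 8
max_steps = 150
dataset_size = 1200
eval_frac = 0.02

effective_batch = per_device_batch * grad_accum     # examples consumed per optimizer step
examples_seen = effective_batch * max_steps         # total examples over the whole run
train_size = round(dataset_size * (1 - eval_frac))  # 1176 after the 2% eval split
epochs = examples_seen / train_size
print(effective_batch, examples_seen, round(epochs, 2))  # 8 1200 1.02
```

<p>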
We finally save the trained LoRA adapters so that we can reuse or deploy the fine-tuned model later.<\/p>\n<p>In conclusion, we fine-tuned an instruction-following language model using Unsloth\u2019s optimized training stack and a lightweight QLoRA setup. We demonstrated that by constraining sequence length, dataset size, and training steps, we can achieve stable training on Colab GPUs without runtime interruptions. The resulting LoRA adapters provide a practical, reusable artifact that we can deploy or extend further, making this workflow a robust foundation for future experimentation and advanced alignment techniques.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/LLM%20Projects\/unsloth_qlora_stable_sft_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">120k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">Now you can join us on Telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/03\/03\/how-to-build-a-stable-and-efficient-qlora-fine-tuning-pipeline-using-unsloth-for-large-language-models\/\">How to Build a Stable and Efficient QLoRA Fine-Tuning Pipeline Using Unsloth for Large Language Models<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we demonstra&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-508","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/508","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=508"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/508\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=508"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connec
tword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=508"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=508"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}