{"id":405,"date":"2026-02-13T12:32:12","date_gmt":"2026-02-13T04:32:12","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=405"},"modified":"2026-02-13T12:32:12","modified_gmt":"2026-02-13T04:32:12","slug":"how-to-align-large-language-models-with-human-preferences-using-direct-preference-optimization-qlora-and-ultra-feedback","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=405","title":{"rendered":"How to Align Large Language Models with Human Preferences Using Direct Preference Optimization, QLoRA, and Ultra-Feedback"},"content":{"rendered":"<p>In this tutorial, we implement an end-to-end Direct Preference Optimization workflow to align a large language model with human preferences without using a reward model. We combine TRL\u2019s DPOTrainer with QLoRA and PEFT to make preference-based alignment feasible on a single Colab GPU. We train directly on the UltraFeedback binarized dataset, where each prompt has a chosen and a rejected response, allowing us to shape model behavior and style rather than just factual recall.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">import os\nimport math\nimport random\nimport torch\n\n\n!pip -q install -U \"transformers&gt;=4.45.0\" \"datasets&gt;=2.19.0\" \"accelerate&gt;=0.33.0\" \"trl&gt;=0.27.0\" \"peft&gt;=0.12.0\" \"bitsandbytes&gt;=0.43.0\" \"sentencepiece\" \"evaluate\"\n\n\nSEED = 42\nrandom.seed(SEED)\ntorch.manual_seed(SEED)\ntorch.cuda.manual_seed_all(SEED)\n\n\nMODEL_NAME = os.environ.get(\"MODEL_NAME\", \"Qwen\/Qwen2-0.5B-Instruct\")\nDATASET_NAME = \"HuggingFaceH4\/ultrafeedback_binarized\"\nOUTPUT_DIR = \"dpo_ultrafeedback_qlora\"\n\n\nMAX_TRAIN_SAMPLES = 8000\nMAX_EVAL_SAMPLES  = 200\nMAX_PROMPT_LEN = 512\nMAX_COMPLETION_LEN = 256\n\n\nBETA = 0.1\nLR = 2e-4\nEPOCHS = 1\nPER_DEVICE_BS = 2\nGRAD_ACCUM = 8\n\n\nLOGGING_STEPS = 10\nSAVE_STEPS = 200\n\n\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nprint(\"Device:\", device, \"GPU:\", torch.cuda.get_device_name(0) if device == \"cuda\" else \"None\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the execution environment and install all required libraries for DPO, PEFT, and quantized training. We define all global hyperparameters, dataset limits, and optimization settings in one place. 
We also initialize the random number generator and confirm GPU availability to ensure reproducible runs.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n\n\nbnb_config = BitsAndBytesConfig(\n   load_in_4bit=True,\n   bnb_4bit_quant_type=\"nf4\",\n   bnb_4bit_use_double_quant=True,\n   bnb_4bit_compute_dtype=torch.bfloat16 if torch.cuda.is_available() and torch.cuda.get_device_capability(0)[0] &gt;= 8 else torch.float16,\n)\n\n\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)\nif tokenizer.pad_token is None:\n   tokenizer.pad_token = tokenizer.eos_token\n\n\nmodel = AutoModelForCausalLM.from_pretrained(\n   MODEL_NAME,\n   quantization_config=bnb_config,\n   torch_dtype=torch.bfloat16 if torch.cuda.is_available() and torch.cuda.get_device_capability(0)[0] &gt;= 8 else torch.float16,\n   device_map=\"auto\",\n)\nmodel.config.use_cache = False<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We load the tokenizer and the base language model using 4-bit quantization to minimize memory usage. We configure bitsandbytes to enable efficient QLoRA-style computation on Colab GPUs. We prepare the model for training by disabling cache usage to avoid incompatibilities during backpropagation.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">from peft import LoraConfig, get_peft_model\n\n\nlora_config = LoraConfig(\n   r=16,\n   lora_alpha=32,\n   lora_dropout=0.05,\n   bias=\"none\",\n   task_type=\"CAUSAL_LM\",\n   target_modules=[\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"up_proj\", \"down_proj\", \"gate_proj\"],\n)\n\n\nmodel = get_peft_model(model, lora_config)\nmodel.print_trainable_parameters()\n\n\nmodel.gradient_checkpointing_enable()<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We attach LoRA adapters to the model\u2019s attention and feed-forward projection layers. We restrict training to a small set of parameters to make fine-tuning efficient and stable. 
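As an optional sanity check (a short sketch we add here, not part of the original recipe), we can also list where PEFT injected the adapters and count the trainable tensors by hand:<\/p>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\"># Optional, illustrative check: confirm where PEFT injected LoRA weights and how many\n# parameters are actually trainable.\nlora_param_names = [n for n, p in model.named_parameters() if \"lora_\" in n]\nprint(\"LoRA parameter tensors:\", len(lora_param_names))\nprint(\"Example:\", lora_param_names[0])\n\n# The LoRA weights are ordinary fp16\/fp32 tensors, so numel() counts them accurately; counting\n# the frozen 4-bit base weights this way would undercount them because they are stored packed,\n# which is why we rely on print_trainable_parameters() above for the overall ratio.\ntrainable = sum(p.numel() for p in model.parameters() if p.requires_grad)\nprint(\"Trainable parameters:\", trainable)<\/code><\/pre>\n<p>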
We enable gradient checkpointing to further reduce GPU memory consumption during training.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">from datasets import load_dataset\n\n\nds = load_dataset(DATASET_NAME)\n\n\ntrain_split = \"train_prefs\" if \"train_prefs\" in ds else (\"train\" if \"train\" in ds else list(ds.keys())[0])\ntest_split  = \"test_prefs\" if \"test_prefs\" in ds else (\"test\" if \"test\" in ds else None)\n\n\ntrain_raw = ds[train_split]\ntest_raw = ds[test_split] if test_split is not None else None\n\n\nprint(\"Splits:\", ds.keys())\nprint(\"Using train split:\", train_split, \"size:\", len(train_raw))\nif test_raw is not None:\n   print(\"Using test split:\", test_split, \"size:\", len(test_raw))\n\n\ndef _extract_last_user_and_assistant(messages):\n   last_user_idx = None\n   last_asst_idx = None\n   for i, m in enumerate(messages):\n       if m.get(\"role\") == \"user\":\n           last_user_idx = i\n       if m.get(\"role\") == \"assistant\":\n           last_asst_idx = i\n\n\n   if last_user_idx is None or last_asst_idx is None:\n       return None, None\n\n\n   prompt_messages = messages[: last_user_idx + 1]\n   assistant_text = messages[last_asst_idx].get(\"content\", \"\")\n   return prompt_messages, assistant_text\n\n\ndef format_example(ex):\n   chosen_msgs = ex[\"chosen\"]\n   rejected_msgs = ex[\"rejected\"]\n\n\n   prompt_msgs_c, chosen_text = _extract_last_user_and_assistant(chosen_msgs)\n   prompt_msgs_r, rejected_text = _extract_last_user_and_assistant(rejected_msgs)\n\n\n   if prompt_msgs_c is None or prompt_msgs_r is None:\n       return {\"prompt\": None, \"chosen\": None, \"rejected\": None}\n\n\n   prompt_text = tokenizer.apply_chat_template(\n       prompt_msgs_c, tokenize=False, add_generation_prompt=True\n   )\n\n\n   return {\n       \"prompt\": prompt_text,\n       \"chosen\": chosen_text.strip(),\n       \"rejected\": rejected_text.strip(),\n   }\n\n\ntrain_raw = train_raw.shuffle(seed=SEED)\ntrain_raw = train_raw.select(range(min(MAX_TRAIN_SAMPLES, len(train_raw))))\n\n\ntrain_ds = train_raw.map(format_example, remove_columns=train_raw.column_names)\ntrain_ds = train_ds.filter(lambda x: x[\"prompt\"] is not None and len(x[\"chosen\"]) &gt; 0 and len(x[\"rejected\"]) &gt; 0)\n\n\nif test_raw is not None:\n   test_raw = test_raw.shuffle(seed=SEED)\n   test_raw = test_raw.select(range(min(MAX_EVAL_SAMPLES, len(test_raw))))\n   eval_ds = test_raw.map(format_example, remove_columns=test_raw.column_names)\n   eval_ds = eval_ds.filter(lambda x: x[\"prompt\"] is not None and len(x[\"chosen\"]) &gt; 0 and len(x[\"rejected\"]) &gt; 0)\nelse:\n   eval_ds = None\n\n\nprint(\"Train examples:\", len(train_ds), \"Eval examples:\", len(eval_ds) if eval_ds is not None else 0)\nprint(train_ds[0])<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We load the UltraFeedback binarized dataset and dynamically select the appropriate train and test splits. 
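Because prompts longer than MAX_PROMPT_LEN are truncated during training, it can also be worth probing how the formatted prompts compare with that budget; the short optional sketch below is our addition, with the helper names invented for illustration:<\/p>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\"># Illustrative check (not in the original tutorial): how many formatted prompts exceed the\n# MAX_PROMPT_LEN token budget and would therefore be truncated by the trainer.\nprobe = train_ds.select(range(min(200, len(train_ds))))\nprompt_lengths = [len(tokenizer(ex[\"prompt\"])[\"input_ids\"]) for ex in probe]\ntoo_long = sum(n_tokens &gt; MAX_PROMPT_LEN for n_tokens in prompt_lengths)\nprint(\"Longest prompt in probe:\", max(prompt_lengths), \"tokens\")\nprint(\"Prompts over MAX_PROMPT_LEN:\", too_long, \"of\", len(prompt_lengths))<\/code><\/pre>\n<p>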
We extract prompt, chosen, and rejected responses from multi-turn conversations and format them using the model\u2019s chat template. We shuffle, filter, and subsample the data to create clean and efficient training and evaluation datasets.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">from trl import DPOTrainer, DPOConfig\n\n\nuse_bf16 = torch.cuda.is_available() and torch.cuda.get_device_capability(0)[0] &gt;= 8\nuse_fp16 = torch.cuda.is_available() and not use_bf16\n\n\ntraining_args = DPOConfig(\n   output_dir=OUTPUT_DIR,\n   beta=BETA,\n   per_device_train_batch_size=PER_DEVICE_BS,\n   gradient_accumulation_steps=GRAD_ACCUM,\n   num_train_epochs=EPOCHS,\n   learning_rate=LR,\n   lr_scheduler_type=\"cosine\",\n   warmup_ratio=0.05,\n   logging_steps=LOGGING_STEPS,\n   save_steps=SAVE_STEPS,\n   save_total_limit=2,\n   bf16=use_bf16,\n   fp16=use_fp16,\n   optim=\"paged_adamw_8bit\",\n   max_length=MAX_PROMPT_LEN + MAX_COMPLETION_LEN,\n   max_prompt_length=MAX_PROMPT_LEN,\n   report_to=\"none\",\n)\n\n\ntrainer = DPOTrainer(\n   model=model,\n   args=training_args,\n   processing_class=tokenizer,\n   train_dataset=train_ds,\n   eval_dataset=eval_ds,\n)\n\n\ntrainer.train()\n\n\ntrainer.save_model(OUTPUT_DIR)\ntokenizer.save_pretrained(OUTPUT_DIR)\n\n\nprint(\"Saved to:\", OUTPUT_DIR)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We configure the DPO training objective with carefully chosen optimization and scheduling parameters. We initialize the DPOTrainer to directly optimize preference pairs without a reward model. 
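For intuition, the underlying objective reduces, per preference pair, to a logistic loss on the beta-scaled gap between the policy-versus-reference log-probability margins of the chosen and rejected completions; the sketch below is our own illustration with made-up numbers and a hypothetical helper name, not TRL\u2019s internal code:<\/p>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import torch\nimport torch.nn.functional as F\n\ndef dpo_pair_loss(policy_chosen_logp, policy_rejected_logp,\n                  ref_chosen_logp, ref_rejected_logp, beta=BETA):\n    # How much more (in log-probability) the policy favors each completion than the reference does.\n    chosen_margin = policy_chosen_logp - ref_chosen_logp\n    rejected_margin = policy_rejected_logp - ref_rejected_logp\n    # Push the policy to widen the gap between the chosen and rejected margins.\n    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()\n\n# Made-up sequence log-probabilities for a single preference pair.\nloss = dpo_pair_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),\n                     torch.tensor([-12.5]), torch.tensor([-14.0]))\nprint(round(loss.item(), 4))<\/code><\/pre>\n<p>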
We train the LoRA adapters and save the aligned model artifacts for later inference.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">from peft import PeftModel\nfrom transformers import pipeline\n\n\ndef generate_text(model_for_gen, prompt, max_new_tokens=180):\n   model_for_gen.eval()\n   inputs = tokenizer(prompt, return_tensors=\"pt\", truncation=True, max_length=MAX_PROMPT_LEN).to(model_for_gen.device)\n   with torch.no_grad():\n       out = model_for_gen.generate(\n           **inputs,\n           max_new_tokens=max_new_tokens,\n           do_sample=True,\n           temperature=0.7,\n           top_p=0.95,\n           pad_token_id=tokenizer.eos_token_id,\n       )\n   return tokenizer.decode(out[0], skip_special_tokens=True)\n\n\nbase_model = AutoModelForCausalLM.from_pretrained(\n   MODEL_NAME,\n   quantization_config=bnb_config,\n   torch_dtype=torch.bfloat16 if use_bf16 else torch.float16,\n   device_map=\"auto\",\n)\nbase_model.config.use_cache = True\n\n\ndpo_model = PeftModel.from_pretrained(base_model, OUTPUT_DIR)\ndpo_model.config.use_cache = True\n\n\nsample_pool = eval_ds if eval_ds is not None and len(eval_ds) &gt; 0 else train_ds\nsamples = [sample_pool[i] for i in random.sample(range(len(sample_pool)), k=min(3, len(sample_pool)))]\n\n\nfor i, ex in enumerate(samples, 1):\n   prompt = ex[\"prompt\"]\n   print(\"\\n\" + \"=\"*90)\n   print(f\"Sample #{i}\")\n   print(\"- Prompt:\\n\", prompt)\n\n\n   base_out = generate_text(base_model, prompt)\n   dpo_out  = generate_text(dpo_model, prompt)\n\n\n   print(\"\\n- Base model output:\\n\", base_out)\n   print(\"\\n- DPO (LoRA) output:\\n\", dpo_out)\n\n\nprint(\"\\nDone.\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We reload the base model and attach the trained DPO LoRA adapters for inference. We generate responses from both the original and aligned models using the same prompts for comparison. We qualitatively evaluate how preference optimization changes model behavior by inspecting the outputs side by side.<\/p>\n<p>In conclusion, we demonstrated how DPO provides a stable and efficient alternative to RLHF by directly optimizing preference pairs with a simple, well-defined objective. We showed that parameter-efficient fine-tuning with LoRA and 4-bit quantization enables practical experimentation even under tight compute constraints. 
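If we later want a single self-contained checkpoint instead of base model plus adapters, the LoRA weights can typically be merged back into an unquantized copy of the base model; a minimal export sketch (our addition, assuming enough memory to load the base model in half precision) looks like this:<\/p>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">from peft import PeftModel\nfrom transformers import AutoModelForCausalLM\n\n# Fold the trained LoRA deltas into a full-precision copy of the base weights, producing a\n# standard standalone checkpoint. Reuses MODEL_NAME, OUTPUT_DIR, and tokenizer from above.\nexport_base = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)\nmerged_model = PeftModel.from_pretrained(export_base, OUTPUT_DIR).merge_and_unload()\nmerged_model.save_pretrained(OUTPUT_DIR + \"_merged\")\ntokenizer.save_pretrained(OUTPUT_DIR + \"_merged\")<\/code><\/pre>\n<p>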
We qualitatively validated alignment by comparing generations before and after DPO training, confirming that the model learns to prefer higher-quality responses while remaining lightweight and deployable.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/LLM%20Projects\/dpo_alignment_qlora_ultrafeedback_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/02\/12\/how-to-align-large-language-models-with-human-preferences-using-direct-preference-optimization-qlora-and-ultra-feedback\/\">How to Align Large Language Models with Human Preferences Using Direct Preference Optimization, QLoRA, and Ultra-Feedback<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we implement&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-405","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/405","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=405"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/405\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=405"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=405"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=405"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}