{"id":387,"date":"2026-02-10T12:57:19","date_gmt":"2026-02-10T04:57:19","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=387"},"modified":"2026-02-10T12:57:19","modified_gmt":"2026-02-10T04:57:19","slug":"how-to-build-a-privacy-preserving-federated-pipeline-to-fine-tune-large-language-models-with-lora-using-flower-and-peft","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=387","title":{"rendered":"How to Build a Privacy-Preserving Federated Pipeline to Fine-Tune Large Language Models with LoRA Using Flower and PEFT"},"content":{"rendered":"<p>In this tutorial, we demonstrate how to federate fine-tuning of a large language model using LoRA without ever centralizing private text data. We simulate multiple organizations as virtual clients and show how each client adapts a shared base model locally while exchanging only lightweight LoRA adapter parameters. By combining Flower\u2019s federated learning simulation engine with parameter-efficient fine-tuning, we arrive at a practical, scalable approach for organizations that want to customize LLMs on sensitive data while preserving privacy and reducing communication and compute costs. 
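Since only the adapter tensors travel over the wire, server-side aggregation reduces to an example-weighted average of each client\u2019s LoRA arrays; here is a minimal standalone sketch of that step (an illustrative helper, not part of the Flower API):<\/p>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import numpy as np\ndef fedavg_arrays(client_updates):\n   # client_updates: list of (adapter_arrays, num_examples) pairs,\n   # mirroring what each client's fit() returns to the server.\n   total = sum(n for _, n in client_updates)\n   # Weight every adapter array by its client's share of examples.\n   return [sum(arrs[i] * (n \/ total) for arrs, n in client_updates) for i in range(len(client_updates[0][0]))]<\/code><\/pre>\n<p>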
Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Federated%20Learning\/federated_lora_llm_finetuning_flower_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.<\/strong><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">!pip -q install -U \"protobuf&lt;5\" \"flwr[simulation]\" transformers peft accelerate datasets sentencepiece\nimport torch\nif torch.cuda.is_available():\n   !pip -q install -U bitsandbytes\nimport os\nos.environ[\"RAY_DISABLE_USAGE_STATS\"] = \"1\"\nos.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\nimport math\nimport random\nimport numpy as np\nfrom typing import Dict, List, Tuple, Optional\nfrom torch.utils.data import DataLoader\nfrom datasets import Dataset\nimport flwr as fl\nfrom flwr.common import Context\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, DataCollatorForLanguageModeling\nfrom peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training\nSEED = 7\nrandom.seed(SEED)\nnp.random.seed(SEED)\ntorch.manual_seed(SEED)\ntorch.cuda.manual_seed_all(SEED)\nDEVICE = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nprint(\"Device:\", DEVICE)\nGPU_MODEL_ID = \"TinyLlama\/TinyLlama-1.1B-Chat-v1.0\"\nCPU_MODEL_ID = \"distilgpt2\"\nMODEL_ID = GPU_MODEL_ID if DEVICE == \"cuda\" else CPU_MODEL_ID\nMAX_LEN = 256 if 
DEVICE == \"cuda\" else 192\nNUM_CLIENTS = 3\nROUNDS = 3\nLOCAL_EPOCHS = 1\nBATCH_SIZE = 2\nGRAD_ACCUM = 4\nLR = 2e-4\nWARMUP_STEPS = 5\nWEIGHT_DECAY = 0.0\nLOG_EVERY = 10\nCLIENT_TEXTS: Dict[int, List[str]] = {\n   0: [\n       \"Policy memo: Employees must rotate on-call weekly and document incidents in the internal tracker.\",\n       \"Runbook: If latency spikes, check the database connection pool and recent deploys, then roll back if needed.\",\n       \"Security note: Never paste customer identifiers into public issue trackers. Use redacted tokens.\",\n       \"Engineering guideline: Prefer idempotent retries for event processing; avoid duplicate side-effects.\",\n       \"Postmortem template: impact, timeline, root cause, contributing factors, action items, owners, deadlines.\"\n   ],\n   1: [\n       \"Credit risk review: monitor delinquency curves by cohort and compare against seasonal baselines.\",\n       \"Fraud signals: repeated small authorizations, device changes, and sudden merchant-category shifts require review.\",\n       \"Portfolio strategy: tighten limits on volatile segments while maintaining service levels for stable accounts.\",\n       \"Operational note: reconcile chargebacks weekly and track win-rate by reason code.\",\n       \"Internal SOP: escalation path is analyst -&gt; manager -&gt; compliance for high-risk cases.\"\n   ],\n   2: [\n       \"Fleet ops: preventive maintenance reduces downtime; prioritize vehicles with repeated fault codes.\",\n       \"Dispatch note: optimize routes by time windows and driver hours to reduce empty miles.\",\n       \"Safety policy: enforce rest breaks and log inspections before long-haul trips.\",\n       \"Inventory update: track spare parts usage; reorder thresholds should reflect lead time and seasonality.\",\n       \"Customer SLA: late deliveries require proactive notifications and documented root cause.\"\n   ],\n}\nfor cid in list(CLIENT_TEXTS.keys()):\n   base = CLIENT_TEXTS[cid]\n   
CLIENT_TEXTS[cid] = base + [f\"Q: Summarize this for leadership. A: {t}\" for t in base]\ntokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)\nif tokenizer.pad_token is None:\n   tokenizer.pad_token = tokenizer.eos_token\nbnb_config: Optional[BitsAndBytesConfig] = None\nif DEVICE == \"cuda\":\n   compute_dtype = torch.bfloat16 if torch.cuda.get_device_capability(0)[0] &gt;= 8 else torch.float16\n   bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type=\"nf4\", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=compute_dtype)\nif \"gpt2\" in MODEL_ID.lower():\n   TARGET_MODULES = [\"c_attn\", \"c_proj\"]\nelse:\n   TARGET_MODULES = [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\"]\nLORA_R = 16\nLORA_ALPHA = 32\nLORA_DROPOUT = 0.05\nlora_config = LoraConfig(r=LORA_R, lora_alpha=LORA_ALPHA, lora_dropout=LORA_DROPOUT, bias=\"none\", task_type=\"CAUSAL_LM\", target_modules=TARGET_MODULES)\ndef model_primary_device(model) -&gt; torch.device:\n   return next(model.parameters()).device\ndef build_model_with_lora():\n   if DEVICE == \"cuda\":\n       model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map=\"auto\", quantization_config=bnb_config, torch_dtype=\"auto\")\n       model = prepare_model_for_kbit_training(model)\n   else:\n       model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float32)\n       model.to(\"cpu\")\n   model = get_peft_model(model, lora_config)\n   model.train()\n   return model\ndef make_dataset(texts: List[str]) -&gt; Dataset:\n   ds = Dataset.from_dict({\"text\": texts})\n   def tok(batch):\n       return tokenizer(batch[\"text\"], truncation=True, max_length=MAX_LEN, padding=\"max_length\")\n   ds = ds.map(tok, batched=True, remove_columns=[\"text\"])\n   return ds\ncollator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)\ndef lora_state_keys(model) -&gt; List[str]:\n   sd = model.state_dict()\n   keys = sorted([k for k in sd.keys() if \"lora_\" in k])\n   if 
not keys:\n       raise RuntimeError(\"No LoRA keys found. Your model might not have the target_modules specified. \" f\"Current TARGET_MODULES={TARGET_MODULES}, MODEL_ID={MODEL_ID}\")\n   return keys\ndef get_lora_ndarrays(model) -&gt; List[np.ndarray]:\n   sd = model.state_dict()\n   keys = lora_state_keys(model)\n   return [sd[k].detach().float().cpu().numpy() for k in keys]\ndef set_lora_ndarrays(model, arrays: List[np.ndarray]) -&gt; None:\n   keys = lora_state_keys(model)\n   if len(keys) != len(arrays):\n       raise ValueError(f\"Mismatch: got {len(arrays)} arrays but expected {len(keys)}.\")\n   sd = model.state_dict()\n   for k, arr in zip(keys, arrays):\n       t = torch.from_numpy(arr).to(sd[k].device).to(sd[k].dtype)\n       sd[k].copy_(t)\ndef cosine_warmup_lr(step: int, total_steps: int, base_lr: float, warmup_steps: int) -&gt; float:\n   if step &lt; warmup_steps:\n       return base_lr * (step + 1) \/ max(1, warmup_steps)\n   progress = (step - warmup_steps) \/ max(1, total_steps - warmup_steps)\n   return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))\n@torch.no_grad()\ndef eval_loss(model, ds: Dataset, max_batches: int = 20) -&gt; float:\n   model.eval()\n   dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=False, collate_fn=collator)\n   losses = []\n   dev = model_primary_device(model)\n   for i, batch in enumerate(dl):\n       if i &gt;= max_batches:\n           break\n       batch = {k: v.to(dev) for k, v in batch.items()}\n       out = model(**batch, labels=batch[\"input_ids\"])\n       losses.append(float(out.loss.detach().cpu()))\n   model.train()\n   return float(np.mean(losses)) if losses else float(\"nan\")\ndef train_one_client_round(model, ds: Dataset, epochs: int, lr: float, grad_accum: int, warmup_steps: int) -&gt; Tuple[float, int]:\n   dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collator)\n   total_steps = max(1, (len(dl) * epochs) \/\/ max(1, grad_accum))\n   step = 0\n   optimizer = 
torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=WEIGHT_DECAY)\n   optimizer.zero_grad(set_to_none=True)\n   running = []\n   examples = 0\n   dev = model_primary_device(model)\n   for _ in range(epochs):\n       for bi, batch in enumerate(dl):\n           batch = {k: v.to(dev) for k, v in batch.items()}\n           out = model(**batch, labels=batch[\"input_ids\"])\n           loss = out.loss \/ grad_accum\n           loss.backward()\n           running.append(float(loss.detach().cpu()) * grad_accum)\n           examples += batch[\"input_ids\"].shape[0]\n           if (bi + 1) % grad_accum == 0:\n               lr_t = cosine_warmup_lr(step, total_steps, lr, warmup_steps)\n               for pg in optimizer.param_groups:\n                   pg[\"lr\"] = lr_t\n               optimizer.step()\n               optimizer.zero_grad(set_to_none=True)\n               step += 1\n               if step % LOG_EVERY == 0:\n                   print(f\"  step={step}\/{total_steps} loss={np.mean(running[-LOG_EVERY:]):.4f} lr={lr_t:.2e}\")\n   return float(np.mean(running)) if running else float(\"nan\"), examples<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the full execution environment and define all global configurations required for the experiment. We prepare the private client text silos, tokenizer, LoRA configuration, and model-loading logic so they automatically adapt to CPU or GPU availability. We also establish all helper utilities that enable parameter-efficient fine-tuning and safe device handling across federated clients. 
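Because only the adapter tensors are trainable, the per-round payload stays small relative to the frozen base model; a quick sanity check you can run after building the model (an illustrative helper that works on any PyTorch module):<\/p>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">def count_params(model):\n   # Trainable parameters are the LoRA adapters; the base weights stay frozen.\n   trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)\n   total = sum(p.numel() for p in model.parameters())\n   return trainable, total\n# e.g. trainable, total = count_params(build_model_with_lora())<\/code><\/pre>\n<p>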
Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Federated%20Learning\/federated_lora_llm_finetuning_flower_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.<\/strong><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">class FedLoRAClient(fl.client.NumPyClient):\n   def __init__(self, cid: int):\n       self.cid = cid\n       self._model = None\n       self._ds_train = None\n       self._ds_eval = None\n   def _ensure(self):\n       if self._model is None:\n           print(f\"[Client {self.cid}] Loading model + LoRA (MODEL_ID={MODEL_ID})...\")\n           self._model = build_model_with_lora()\n           texts = CLIENT_TEXTS[self.cid].copy()\n           random.shuffle(texts)\n           split = max(1, int(0.8 * len(texts)))\n           self._ds_train = make_dataset(texts[:split])\n           self._ds_eval = make_dataset(texts[split:])\n   def get_parameters(self, config):\n       self._ensure()\n       return get_lora_ndarrays(self._model)\n   def fit(self, parameters, config):\n       self._ensure()\n       set_lora_ndarrays(self._model, parameters)\n       loss_before = eval_loss(self._model, self._ds_eval, max_batches=10)\n       print(f\"[Client {self.cid}] eval_loss_before={loss_before:.4f}\")\n       train_loss, n_examples = train_one_client_round(self._model, self._ds_train, 
epochs=int(config.get(\"local_epochs\", LOCAL_EPOCHS)), lr=float(config.get(\"lr\", LR)), grad_accum=int(config.get(\"grad_accum\", GRAD_ACCUM)), warmup_steps=int(config.get(\"warmup_steps\", WARMUP_STEPS)))\n       loss_after = eval_loss(self._model, self._ds_eval, max_batches=10)\n       print(f\"[Client {self.cid}] train_loss={train_loss:.4f} eval_loss_after={loss_after:.4f}\")\n       new_params = get_lora_ndarrays(self._model)\n       metrics = {\"eval_loss_before\": loss_before, \"eval_loss_after\": loss_after, \"train_loss\": train_loss}\n       return new_params, n_examples, metrics\n   def evaluate(self, parameters, config):\n       self._ensure()\n       set_lora_ndarrays(self._model, parameters)\n       loss = eval_loss(self._model, self._ds_eval, max_batches=20)\n       return float(loss), len(self._ds_eval), {\"eval_loss\": float(loss)}\ndef client_fn(context: Context):\n   cid = None\n   try:\n       cid = int(context.node_config.get(\"partition-id\"))\n   except Exception:\n       try:\n           cid = int(context.node_id)\n       except Exception:\n           cid = 0\n   return FedLoRAClient(cid).to_client()<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We define the federated client logic that simulates independent organizations participating in training. We initialize a LoRA-augmented language model per client and ensure that local datasets remain isolated. 
We implement client-side training, evaluation, and parameter exchange while exposing only LoRA adapter weights to the server.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">def fit_config(server_round: int):\n   return {\"local_epochs\": LOCAL_EPOCHS, \"lr\": LR, \"grad_accum\": GRAD_ACCUM, \"warmup_steps\": WARMUP_STEPS}\nstrategy = fl.server.strategy.FedAvg(fraction_fit=1.0, fraction_evaluate=1.0, min_fit_clients=NUM_CLIENTS, min_evaluate_clients=NUM_CLIENTS, min_available_clients=NUM_CLIENTS, on_fit_config_fn=fit_config)\nprint(\"\\nStarting Flower simulation...\\n\")\nclient_resources = {\"num_cpus\": 2, \"num_gpus\": 0.0}\nif DEVICE == \"cuda\":\n   client_resources = {\"num_cpus\": 2, \"num_gpus\": 0.25}\nhistory = fl.simulation.start_simulation(client_fn=client_fn, num_clients=NUM_CLIENTS, config=fl.server.ServerConfig(num_rounds=ROUNDS), strategy=strategy, client_resources=client_resources, ray_init_args={\"include_dashboard\": False, \"ignore_reinit_error\": True})\nprint(\"\\nSimulation done.\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We configure the federated learning strategy and orchestrate the global training process. We specify how many clients participate, how parameters are aggregated, and how training rounds are scheduled. We then launch the Flower simulation to coordinate communication and aggregation across all virtual clients. 
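FedAvg also accepts an optional hook for aggregating client-reported metrics; as a possible extension (not used in the script above), the per-client eval losses could be combined by example count:<\/p>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">def weighted_eval_loss(results):\n   # results: list of (num_examples, metrics_dict) tuples from the clients.\n   total = sum(n for n, _ in results)\n   loss = sum(n * m[\"eval_loss\"] for n, m in results) \/ max(1, total)\n   return {\"eval_loss\": loss}\n# e.g. FedAvg(..., evaluate_metrics_aggregation_fn=weighted_eval_loss)<\/code><\/pre>\n<p>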
Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Federated%20Learning\/federated_lora_llm_finetuning_flower_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.<\/strong><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">demo_model = build_model_with_lora()\ndemo_model.eval()\nprompt = \"Summarize this internal note for leadership in 2 bullets:\\nDispatch note: optimize routes by time windows and driver hours to reduce empty miles.\\n\\nAnswer:\"\ninputs = tokenizer(prompt, return_tensors=\"pt\")\ndev = model_primary_device(demo_model)\ninputs = {k: v.to(dev) for k, v in inputs.items()}\nwith torch.no_grad():\n   out = demo_model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8, top_p=0.95, repetition_penalty=1.05, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id)\nprint(\"\\n=== Generation output ===\\n\")\nprint(tokenizer.decode(out[0], skip_special_tokens=True))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We instantiate a LoRA-augmented model to demonstrate the inference path after federated training; note that this demo instance carries freshly initialized adapters, since the simulation does not persist the final aggregated weights locally. We prepare a realistic prompt and run text generation using the same architecture employed during training. 
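To persist results, PEFT can save just the adapter weights rather than the full base model; a hedged deployment sketch (the path is a placeholder, and merging requires a non-quantized base model):<\/p>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\"># Save only the LoRA adapters (megabytes, not gigabytes):\n# demo_model.save_pretrained(\"federated-lora-adapters\")\n# Optionally fold the adapters into the base weights for standalone inference:\n# merged = demo_model.merge_and_unload()<\/code><\/pre>\n<p>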
We verify that the pipeline executes end-to-end by producing coherent, task-aligned outputs.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(type(history))\nprint(history.__dict__.keys())<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We inspect the training artifacts and simulation outputs produced by the federated run. We examine the returned history object to confirm that rounds, metrics, and aggregation completed successfully. We use this step to validate the overall integrity and reproducibility of the federated fine-tuning workflow.<\/p>\n<p>In conclusion, we showed that federated fine-tuning of LLMs is not only a research concept but something we can run end-to-end in a Colab environment today. We successfully coordinated client-side LoRA training, server-side aggregation, and evaluation without sharing raw text or full model weights. 
This workflow highlights how federated learning, when paired with modern PEFT techniques, enables privacy-preserving adaptation of generative models and provides a strong foundation for extending the system toward personalization, robustness, and real-world enterprise deployment.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Federated%20Learning\/federated_lora_llm_finetuning_flower_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.<\/strong>\u00a0Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">Now you can join us on Telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/02\/09\/how-to-build-a-privacy-preserving-federated-pipeline-to-fine-tune-large-language-models-with-lora-using-flower-and-peft\/\">How to Build a Privacy-Preserving Federated Pipeline to Fine-Tune Large Language Models with LoRA Using Flower and PEFT<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we demonstra&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-387","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/387","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=387"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/387\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=387"}],"wp:term":[{"taxonomy":"category","embedda
ble":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=387"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=387"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}