<h1>A Coding and Experimental Analysis of Decentralized Federated Learning with Gossip Protocols and Differential Privacy</h1>

<p>In this tutorial, we explore how federated learning behaves when the traditional centralized aggregation server is removed and replaced with a fully decentralized, peer-to-peer gossip mechanism. We implement both centralized FedAvg and decentralized gossip federated learning from scratch and introduce client-side differential privacy by injecting calibrated noise into local model updates. By running controlled experiments on non-IID MNIST data, we examine how privacy strength, as measured by different epsilon values, directly affects convergence speed, stability, and final model accuracy. We also study the practical trade-offs between privacy guarantees and learning efficiency in realistic decentralized learning systems. Check out the <strong><a href="https://github.com/Marktechpost/AI-Tutorial-Codes-Included/blob/main/Distributed%20Systems/decentralized_gossip_federated_learning_with_differential_privacy_Marktechpost.ipynb" target="_blank" rel="noreferrer noopener">Full Codes here</a></strong>.</p>

<pre><code class="language-python">import os, math, random, time
from dataclasses import dataclass
from typing import Dict, List, Tuple
import subprocess, sys


def pip_install(pkgs):
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-q"] + pkgs)


pip_install(["torch", "torchvision", "numpy", "matplotlib", "networkx", "tqdm"])


import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import networkx as nx
from tqdm import trange


SEED = 7
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
# benchmark mode trades bitwise determinism for speed; the seeds above still control sampling
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True

# select the compute device once; every model and tensor below reuses it
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = transforms.Compose([transforms.ToTensor()])

train_ds = datasets.MNIST(root="/content/data", train=True, download=True, transform=transform)
test_ds  = datasets.MNIST(root="/content/data", train=False, download=True, transform=transform)</code></pre>

<p>We set up the execution environment and install all required dependencies. We initialize random seeds and select the compute device to keep experiments comparable across runs.
We also load the MNIST dataset, which serves as a lightweight yet effective benchmark for federated learning experiments.</p>

<pre><code class="language-python">def make_noniid_clients(dataset, num_clients=20, shards_per_client=2, seed=SEED):
    # sort indices by label, cut into shards, and assign a few shards to each client
    rng = np.random.default_rng(seed)
    y = np.array([dataset[i][1] for i in range(len(dataset))])
    idx = np.arange(len(dataset))
    idx_sorted = idx[np.argsort(y)]
    num_shards = num_clients * shards_per_client
    shard_size = len(dataset) // num_shards
    shards = [idx_sorted[i*shard_size:(i+1)*shard_size] for i in range(num_shards)]
    rng.shuffle(shards)
    client_indices = []
    for c in range(num_clients):
        take = shards[c*shards_per_client:(c+1)*shards_per_client]
        client_indices.append(np.concatenate(take))
    return client_indices


NUM_CLIENTS = 20
client_indices = make_noniid_clients(train_ds, num_clients=NUM_CLIENTS, shards_per_client=2)


test_loader = DataLoader(test_ds, batch_size=1024, shuffle=False, num_workers=2, pin_memory=True)


class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28*28, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)</code></pre>

<p>We construct a non-IID data distribution by partitioning the training set into label-sorted shards and assigning two shards to each client, so every client sees only a couple of digit classes. We define a compact neural network that balances expressiveness and computational efficiency. This setup lets us realistically simulate data heterogeneity, a central challenge in federated learning systems; a quick diagnostic makes the skew visible, as shown below.</p>
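<p>The following is a minimal sanity check, not part of the original notebook; it only assumes the <code>train_ds</code> and <code>client_indices</code> objects defined above and prints the label histogram of the first few clients.</p>

<pre><code class="language-python"># Illustrative check: each client should hold roughly two digit classes.
from collections import Counter

labels = train_ds.targets.numpy()  # torchvision MNIST exposes labels as .targets
for cid in range(3):
    counts = Counter(labels[client_indices[cid]].tolist())
    print(f"client {cid}: {dict(sorted(counts.items()))}")
# Expected shape of output: a couple of dominant labels per client, e.g. {3: 1500, 7: 1500}</code></pre>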
<pre><code class="language-python">def get_model_params(model):
    return {k: v.detach().clone() for k, v in model.state_dict().items()}


def set_model_params(model, params):
    model.load_state_dict(params, strict=True)


def add_params(a, b):
    return {k: a[k] + b[k] for k in a.keys()}


def sub_params(a, b):
    return {k: a[k] - b[k] for k in a.keys()}


def scale_params(a, s):
    return {k: a[k] * s for k in a.keys()}


def mean_params(params_list):
    out = {k: torch.zeros_like(params_list[0][k]) for k in params_list[0].keys()}
    for p in params_list:
        for k in out.keys():
            out[k] += p[k]
    for k in out.keys():
        out[k] /= len(params_list)
    return out


def l2_norm_params(delta):
    sq = 0.0
    for v in delta.values():
        sq += float(torch.sum(v.float() * v.float()).item())
    return math.sqrt(sq)


def dp_sanitize_update(delta, clip_norm, epsilon, delta_dp, rng):
    # clip the full update to L2 norm <= clip_norm, then add Gaussian noise
    norm = l2_norm_params(delta)
    scale = min(1.0, clip_norm / (norm + 1e-12))
    clipped = scale_params(delta, scale)
    if epsilon is None or math.isinf(epsilon) or epsilon <= 0:
        return clipped  # infinite budget: clipping only, no noise
    # Gaussian-mechanism calibration: sigma = C * sqrt(2 ln(1.25 / delta)) / epsilon
    sigma = clip_norm * math.sqrt(2.0 * math.log(1.25 / delta_dp)) / epsilon
    noised = {}
    for k, v in clipped.items():
        noise = torch.normal(mean=0.0, std=sigma, size=v.shape, generator=rng, device=v.device, dtype=v.dtype)
        noised[k] = v + noise
    return noised</code></pre>

<p>We implement parameter-manipulation utilities that enable addition, subtraction, scaling, and averaging of model weights across clients. We introduce differential privacy by clipping each local update to a fixed L2 norm and injecting Gaussian noise whose scale is calibrated to the chosen privacy budget. This is the core privacy mechanism that lets us study the privacy–utility trade-off in both centralized and decentralized settings.</p>
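<p>To build intuition for how quickly the noise grows as the budget tightens, the short sketch below evaluates the same sigma formula used in <code>dp_sanitize_update</code> at the epsilon values swept later in the tutorial. It is an illustrative calculation added here, not output from the experiments.</p>

<pre><code class="language-python">import math

CLIP, DELTA = 2.0, 1e-5  # the clip norm and delta used throughout this tutorial
for eps in [8.0, 4.0, 2.0, 1.0]:
    sigma = CLIP * math.sqrt(2.0 * math.log(1.25 / DELTA)) / eps
    print(f"epsilon={eps:>4}: per-coordinate noise std ~ {sigma:.3f}")
# sigma scales as 1/epsilon, so eps=1.0 injects 8x the noise of eps=8.0</code></pre>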
<pre><code class="language-python">def local_train_one_client(base_params, client_id, epochs, lr, batch_size, weight_decay=0.0):
    # each client trains a private copy of the model on its own shard only
    model = MLP().to(device)
    set_model_params(model, base_params)
    model.train()
    idxs = client_indices[client_id]
    idxs = idxs.tolist() if hasattr(idxs, "tolist") else list(idxs)
    loader = DataLoader(
        Subset(train_ds, idxs),
        batch_size=batch_size,
        shuffle=True,
        num_workers=2,
        pin_memory=True
    )
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=weight_decay)
    for _ in range(epochs):
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            opt.zero_grad(set_to_none=True)
            logits = model(xb)
            loss = F.cross_entropy(logits, yb)
            loss.backward()
            opt.step()
    return get_model_params(model)


@torch.no_grad()
def evaluate(params):
    # load the given weights into a fresh model and score it on the shared test set
    model = MLP().to(device)
    set_model_params(model, params)
    model.eval()
    total, correct = 0, 0
    loss_sum = 0.0
    for xb, yb in test_loader:
        xb, yb = xb.to(device), yb.to(device)
        logits = model(xb)
        loss = F.cross_entropy(logits, yb, reduction="sum")
        loss_sum += float(loss.item())
        pred = torch.argmax(logits, dim=1)
        correct += int((pred == yb).sum().item())
        total += int(yb.numel())
    return loss_sum / total, correct / total</code></pre>

<p>We define the local training loop that each client executes independently on its private data, along with a unified evaluation routine that measures test loss and accuracy for any given model state. Together, these functions simulate realistic federated learning behavior in which training and evaluation are fully decoupled from data ownership.</p>
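<p>Before wiring these pieces into full training loops, a quick smoke test confirms they compose. This is an added illustration, not part of the original notebook; exact numbers will vary with hardware and seed.</p>

<pre><code class="language-python"># Hypothetical smoke test: one client, one local epoch, then global evaluation.
init_params = get_model_params(MLP().to(device))
trained = local_train_one_client(init_params, client_id=0, epochs=1, lr=0.06, batch_size=64)
_, acc_before = evaluate(init_params)
_, acc_after = evaluate(trained)
print(f"random init acc={acc_before:.3f}, after one client-epoch acc={acc_after:.3f}")
# Client 0 holds only ~2 digit classes, so accuracy improves but stays well
# below what aggregating updates from many clients can reach.</code></pre>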
<pre><code class="language-python">@dataclass
class FedAvgConfig:
    rounds: int = 25
    clients_per_round: int = 10
    local_epochs: int = 1
    lr: float = 0.06
    batch_size: int = 64
    clip_norm: float = 2.0
    epsilon: float = math.inf
    delta_dp: float = 1e-5


def run_fedavg(cfg):
    global_params = get_model_params(MLP().to(device))
    history = {"test_loss": [], "test_acc": []}
    for r in trange(cfg.rounds):
        chosen = random.sample(range(NUM_CLIENTS), k=cfg.clients_per_round)
        start_params = global_params
        updates = []
        for cid in chosen:
            local_params = local_train_one_client(start_params, cid, cfg.local_epochs, cfg.lr, cfg.batch_size)
            delta = sub_params(local_params, start_params)
            # deterministic per-(round, client) noise seed for reproducibility
            rng = torch.Generator(device=device)
            rng.manual_seed(SEED * 10000 + r * 100 + cid)
            sanitized = dp_sanitize_update(delta, cfg.clip_norm, cfg.epsilon, cfg.delta_dp, rng)
            updates.append(sanitized)
        avg_update = mean_params(updates)
        global_params = add_params(start_params, avg_update)
        tl, ta = evaluate(global_params)
        history["test_loss"].append(tl)
        history["test_acc"].append(ta)
    return history, global_params</code></pre>

<p>We implement the centralized FedAvg algorithm, in which a subset of clients trains locally each round and sends differentially private updates to a central aggregator. We track model performance across communication rounds to observe convergence behavior under varying privacy budgets. This serves as the baseline against which decentralized gossip-based learning is compared.</p>
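<p>One aggregation detail worth flagging: <code>run_fedavg</code> takes an unweighted mean of client updates, which coincides with the canonical sample-weighted FedAvg rule here only because the shard construction gives every client the same number of examples. If shard sizes differed, weighting by client dataset size would be the standard choice; below is a minimal sketch of such a weighted average, added for illustration (the helper name <code>weighted_mean_params</code> is ours, not from the original notebook).</p>

<pre><code class="language-python">def weighted_mean_params(params_list, sizes):
    # sizes: number of training samples held by each contributing client
    total = float(sum(sizes))
    out = {k: torch.zeros_like(params_list[0][k]) for k in params_list[0]}
    for p, n in zip(params_list, sizes):
        for k in out:
            out[k] += p[k] * (n / total)
    return out

# usage sketch inside run_fedavg:
#   sizes = [len(client_indices[cid]) for cid in chosen]
#   avg_update = weighted_mean_params(updates, sizes)</code></pre>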
<pre><code class="language-python">@dataclass
class GossipConfig:
    rounds: int = 25
    local_epochs: int = 1
    lr: float = 0.06
    batch_size: int = 64
    clip_norm: float = 2.0
    epsilon: float = math.inf
    delta_dp: float = 1e-5
    topology: str = "ring"
    p: float = 0.2
    gossip_pairs_per_round: int = 10


def build_topology(cfg):
    if cfg.topology == "ring":
        G = nx.cycle_graph(NUM_CLIENTS)
    elif cfg.topology == "erdos_renyi":
        G = nx.erdos_renyi_graph(NUM_CLIENTS, cfg.p, seed=SEED)
        if not nx.is_connected(G):
            # patch disconnected components together so gossip can reach every node
            comps = list(nx.connected_components(G))
            for i in range(len(comps) - 1):
                a = next(iter(comps[i]))
                b = next(iter(comps[i+1]))
                G.add_edge(a, b)
    else:
        raise ValueError(f"unknown topology: {cfg.topology}")
    return G


def run_gossip(cfg):
    node_params = [get_model_params(MLP().to(device)) for _ in range(NUM_CLIENTS)]
    G = build_topology(cfg)
    history = {"avg_test_loss": [], "avg_test_acc": []}
    for r in trange(cfg.rounds):
        # every node trains locally and sanitizes its own update
        new_params = []
        for cid in range(NUM_CLIENTS):
            p0 = node_params[cid]
            p_local = local_train_one_client(p0, cid, cfg.local_epochs, cfg.lr, cfg.batch_size)
            delta = sub_params(p_local, p0)
            rng = torch.Generator(device=device)
            rng.manual_seed(SEED * 10000 + r * 100 + cid)
            sanitized = dp_sanitize_update(delta, cfg.clip_norm, cfg.epsilon, cfg.delta_dp, rng)
            new_params.append(add_params(p0, sanitized))
        node_params = new_params
        # gossip step: random neighboring pairs average their parameters
        edges = list(G.edges())
        for _ in range(cfg.gossip_pairs_per_round):
            i, j = random.choice(edges)
            avg = mean_params([node_params[i], node_params[j]])
            node_params[i] = avg
            node_params[j] = avg
        losses, accs = [], []
        for cid in range(NUM_CLIENTS):
            tl, ta = evaluate(node_params[cid])
            losses.append(tl)
            accs.append(ta)
        history["avg_test_loss"].append(float(np.mean(losses)))
        history["avg_test_acc"].append(float(np.mean(accs)))
    return history, node_params</code></pre>

<p>We implement decentralized gossip federated learning as a peer-to-peer protocol in which nodes exchange model parameters over a predefined network topology. We simulate repeated local training and pairwise parameter averaging without relying on a central server, which lets us analyze how privacy noise propagates through decentralized communication patterns and affects convergence.</p>
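<p>How fast gossip mixes information depends on the topology's connectivity. As an illustrative aside (not in the original notebook), we can compare the algebraic connectivity, i.e. the second-smallest Laplacian eigenvalue, of the two supported topologies; a larger value means pairwise averaging spreads information faster.</p>

<pre><code class="language-python"># Illustrative sketch: compare mixing-relevant connectivity of the two topologies.
def fiedler_value(G):
    A = nx.to_numpy_array(G)
    L = np.diag(A.sum(axis=1)) - A  # graph Laplacian
    return float(np.sort(np.linalg.eigvalsh(L))[1])  # second-smallest eigenvalue

G_ring = build_topology(GossipConfig(topology="ring"))
G_er = build_topology(GossipConfig(topology="erdos_renyi", p=0.2))
print("ring Fiedler value:       ", fiedler_value(G_ring))
print("erdos_renyi Fiedler value:", fiedler_value(G_er))
# The ring's value is close to zero, so information diffuses slowly around it;
# the (patched-connected) random graph typically mixes noticeably faster.</code></pre>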
<pre><code class="language-python">eps_sweep = [math.inf, 8.0, 4.0, 2.0, 1.0]
ROUNDS = 20


fedavg_results = {}
gossip_results = {}


common_local_epochs = 1
common_lr = 0.06
common_bs = 64
common_clip = 2.0
common_delta = 1e-5


for eps in eps_sweep:
    fcfg = FedAvgConfig(
        rounds=ROUNDS,
        clients_per_round=10,
        local_epochs=common_local_epochs,
        lr=common_lr,
        batch_size=common_bs,
        clip_norm=common_clip,
        epsilon=eps,
        delta_dp=common_delta
    )
    hist_f, _ = run_fedavg(fcfg)
    fedavg_results[eps] = hist_f

    gcfg = GossipConfig(
        rounds=ROUNDS,
        local_epochs=common_local_epochs,
        lr=common_lr,
        batch_size=common_bs,
        clip_norm=common_clip,
        epsilon=eps,
        delta_dp=common_delta,
        topology="ring",
        gossip_pairs_per_round=10
    )
    hist_g, _ = run_gossip(gcfg)
    gossip_results[eps] = hist_g


plt.figure(figsize=(10, 5))
for eps in eps_sweep:
    plt.plot(fedavg_results[eps]["test_acc"], label=f"FedAvg eps={eps}")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.title("FedAvg convergence across privacy budgets")
plt.legend()
plt.grid(True)
plt.show()


plt.figure(figsize=(10, 5))
for eps in eps_sweep:
    plt.plot(gossip_results[eps]["avg_test_acc"], label=f"Gossip eps={eps}")
plt.xlabel("Round")
plt.ylabel("Avg Accuracy")
plt.title("Gossip convergence across privacy budgets")
plt.legend()
plt.grid(True)
plt.show()


final_fed = [fedavg_results[eps]["test_acc"][-1] for eps in eps_sweep]
final_gos = [gossip_results[eps]["avg_test_acc"][-1] for eps in eps_sweep]

# plot the no-DP (infinite epsilon) point at x=100 so it fits on the axis
x = [100.0 if math.isinf(eps) else eps for eps in eps_sweep]

plt.figure(figsize=(8, 5))
plt.plot(x, final_fed, marker="o", label="FedAvg")
plt.plot(x, final_gos, marker="o", label="Gossip")
plt.xlabel("Epsilon")
plt.ylabel("Final Accuracy")
plt.title("Privacy-utility trade-off")
plt.legend()
plt.grid(True)
plt.show()


def rounds_to_threshold(acc_curve, threshold):
    # first round at which the curve reaches the threshold, or None if it never does
    for i, a in enumerate(acc_curve):
        if a >= threshold:
            return i + 1
    return None


# use 90% of each method's own no-DP final accuracy as its convergence threshold
best_f = fedavg_results[math.inf]["test_acc"][-1]
best_g = gossip_results[math.inf]["avg_test_acc"][-1]

th_f = 0.9 * best_f
th_g = 0.9 * best_g

for eps in eps_sweep:
    rf = rounds_to_threshold(fedavg_results[eps]["test_acc"], th_f)
    rg = rounds_to_threshold(gossip_results[eps]["avg_test_acc"], th_g)
    print(f"eps={eps}: FedAvg rounds-to-threshold={rf}, Gossip rounds-to-threshold={rg}")</code></pre>

<p>We run controlled experiments across multiple privacy levels and collect results for both centralized and decentralized training strategies.
We visualize convergence trends and final accuracy to expose the privacy–utility trade-off, and we compute convergence-speed metrics to quantify how each aggregation scheme responds to tightening privacy constraints.</p>

<p>In conclusion, we demonstrated that decentralization fundamentally changes how differential privacy noise propagates through a federated system. We observed that while centralized FedAvg typically converges faster under weak privacy constraints, gossip-based federated learning is more robust to noisy updates at the cost of slower convergence. Our experiments highlighted that stronger privacy guarantees significantly slow learning in both settings, but the effect is amplified in decentralized topologies due to delayed information mixing. Overall, we showed that designing privacy-preserving federated systems requires jointly reasoning about aggregation topology, communication patterns, and privacy budgets rather than treating them as independent choices.</p>

<hr />

<p>Check out the <strong><a href="https://github.com/Marktechpost/AI-Tutorial-Codes-Included/blob/main/Distributed%20Systems/decentralized_gossip_federated_learning_with_differential_privacy_Marktechpost.ipynb" target="_blank" rel="noreferrer noopener">Full Codes here</a></strong>. Also, feel free to follow us on <strong><a href="https://x.com/intent/follow?screen_name=marktechpost" target="_blank" rel="noreferrer noopener">Twitter</a></strong> and don't forget to join our <strong><a href="https://www.reddit.com/r/machinelearningnews/" target="_blank" rel="noreferrer noopener">100k+ ML SubReddit</a></strong> and subscribe to <strong><a href="https://www.aidevsignals.com/" target="_blank" rel="noreferrer noopener">our Newsletter</a></strong>.
Are you on Telegram? <strong><a href="https://t.me/machinelearningresearchnews" target="_blank" rel="noreferrer noopener">Now you can join us on Telegram as well</a></strong>.</p>

<p>The post <a href="https://www.marktechpost.com/2026/02/01/a-coding-and-experimental-analysis-of-decentralized-federated-learning-with-gossip-protocols-and-differential-privacy/">A Coding and Experimental Analysis of Decentralized Federated Learning with Gossip Protocols and Differential Privacy</a> appeared first on <a href="https://www.marktechpost.com/">MarkTechPost</a>.</p>