{"id":579,"date":"2026-03-19T16:11:01","date_gmt":"2026-03-19T08:11:01","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=579"},"modified":"2026-03-19T16:11:01","modified_gmt":"2026-03-19T08:11:01","slug":"a-coding-guide-to-implement-advanced-differential-equation-solvers-stochastic-simulations-and-neural-ordinary-differential-equations-using-diffrax-and-jax","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=579","title":{"rendered":"A Coding Guide to Implement Advanced Differential Equation Solvers, Stochastic Simulations, and Neural Ordinary Differential Equations Using Diffrax and JAX"},"content":{"rendered":"<p>In this tutorial, we explore how to solve differential equations and build neural differential equation models using the <a href=\"https:\/\/github.com\/patrick-kidger\/diffrax\"><strong>Diffrax<\/strong><\/a> library. We begin by setting up a clean computational environment and installing the required scientific computing libraries such as JAX, Diffrax, Equinox, and Optax. We then demonstrate how to solve ordinary differential equations using adaptive solvers and perform dense interpolation to query solutions at arbitrary time points. As we progress, we investigate more advanced capabilities of Diffrax, including solving classical dynamical systems, working with PyTree-based states, and running batched simulations using JAX\u2019s vectorization features. 
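Before reaching for Diffrax itself, it can help to see in miniature what an ODE solver computes. The sketch below is plain Python, not Diffrax; the helper names `rk4_solve` and `logistic_exact` are our own illustrative inventions. It integrates the logistic equation dy/dt = r*y*(1 - y/k) with a fixed-step fourth-order Runge-Kutta scheme and checks the result against the closed-form logistic solution — the same equation the first Diffrax example solves with an adaptive step size.

```python
import math

def logistic_rhs(t, y, r=2.0, k=5.0):
    # Right-hand side of the logistic equation dy/dt = r*y*(1 - y/k).
    return r * y * (1.0 - y / k)

def rk4_solve(f, y0, t0, t1, n_steps):
    # Classic 4th-order Runge-Kutta with a fixed step; adaptive solvers
    # like Tsit5 do the same kind of stage evaluations but choose the
    # step size automatically from an error estimate.
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def logistic_exact(t, y0=0.4, r=2.0, k=5.0):
    # Closed-form logistic solution, used only to check the numeric result.
    return k / (1.0 + (k / y0 - 1.0) * math.exp(-r * t))

y_num = rk4_solve(logistic_rhs, 0.4, 0.0, 10.0, 1000)
print(y_num, logistic_exact(10.0))
```

With r=2 the population has essentially saturated at the carrying capacity k=5 by t=10, and the fixed-step RK4 value agrees closely with the exact solution.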
We also simulate stochastic differential equations and generate data from a dynamical system that will later be used to train a neural ordinary differential equation model.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">import os, sys, subprocess, importlib, pathlib\n\n\nSENTINEL = \"\/tmp\/diffrax_colab_ready_v3\"\n\n\ndef _run(cmd):\n   subprocess.check_call(cmd)\n\n\ndef _need_install():\n   try:\n       import numpy\n       import jax\n       import diffrax\n       import equinox\n       import optax\n       import matplotlib\n       return False\n   except Exception:\n       return True\n\n\nif not os.path.exists(SENTINEL) or _need_install():\n   _run([sys.executable, \"-m\", \"pip\", \"uninstall\", \"-y\", \"numpy\", \"jax\", \"jaxlib\", \"diffrax\", \"equinox\", \"optax\"])\n   _run([sys.executable, \"-m\", \"pip\", \"install\", \"-q\", \"--upgrade\", \"pip\"])\n   _run([\n       sys.executable, \"-m\", \"pip\", \"install\", \"-q\",\n       \"numpy==1.26.4\",\n       \"jax[cpu]==0.4.38\",\n       \"jaxlib==0.4.38\",\n       \"diffrax\",\n       \"equinox\",\n       \"optax\",\n       \"matplotlib\"\n   ])\n   pathlib.Path(SENTINEL).write_text(\"ready\")\n   print(\"Packages installed cleanly. Runtime will restart now. 
After reconnect, run this same cell again.\")\n   os._exit(0)\n\n\nimport time\nimport math\nimport numpy as np\nimport jax\nimport jax.numpy as jnp\nimport jax.random as jr\nimport diffrax\nimport equinox as eqx\nimport optax\nimport matplotlib.pyplot as plt\n\n\nprint(\"NumPy:\", np.__version__)\nprint(\"JAX:\", jax.__version__)\nprint(\"Backend:\", jax.default_backend())\n\n\ndef logistic(t, y, args):\n   r, k = args\n   return r * y * (1 - y \/ k)\n\n\nt0, t1 = 0.0, 10.0\nts = jnp.linspace(t0, t1, 300)\ny0 = jnp.array(0.4)\nargs = (2.0, 5.0)\n\n\nsol_logistic = diffrax.diffeqsolve(\n   diffrax.ODETerm(logistic),\n   diffrax.Tsit5(),\n   t0=t0,\n   t1=t1,\n   dt0=0.05,\n   y0=y0,\n   args=args,\n   saveat=diffrax.SaveAt(ts=ts, dense=True),\n   stepsize_controller=diffrax.PIDController(rtol=1e-6, atol=1e-8),\n   max_steps=100000,\n)\n\n\nquery_ts = jnp.array([0.7, 2.35, 4.8, 9.2])\nquery_ys = jax.vmap(sol_logistic.evaluate)(query_ts)\n\n\nprint(\"\\n=== Example 1: Logistic growth ===\")\nprint(\"Saved solution shape:\", sol_logistic.ys.shape)\nprint(\"Interpolated values:\")\nfor t_, y_ in zip(query_ts, query_ys):\n   print(f\"t={float(t_):.3f} -&gt; y={float(y_):.6f}\")\n\n\ndef lotka_volterra(t, y, args):\n   alpha, beta, delta, gamma = args\n   prey, predator = y\n   dprey = alpha * prey - beta * prey * predator\n   dpred = delta * prey * predator - gamma * predator\n   return jnp.array([dprey, dpred])\n\n\nlv_y0 = jnp.array([10.0, 2.0])\nlv_args = (1.5, 1.0, 0.75, 1.0)\nlv_ts = jnp.linspace(0.0, 15.0, 500)\n\n\nsol_lv = diffrax.diffeqsolve(\n   diffrax.ODETerm(lotka_volterra),\n   diffrax.Dopri5(),\n   t0=0.0,\n   t1=15.0,\n   dt0=0.02,\n   y0=lv_y0,\n   args=lv_args,\n   saveat=diffrax.SaveAt(ts=lv_ts),\n   stepsize_controller=diffrax.PIDController(rtol=1e-6, atol=1e-8),\n   max_steps=100000,\n)\n\n\nprint(\"\\n=== Example 2: Lotka-Volterra ===\")\nprint(\"Shape:\", sol_lv.ys.shape)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the environment and ensure that 
all required scientific computing libraries are installed correctly. We import JAX, Diffrax, Equinox, Optax, and visualization tools to build and run differential equation simulations. We then solve a logistic growth ordinary differential equation using an adaptive solver and demonstrate dense interpolation to query the solution at arbitrary time points.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">def spring_mass_damper(t, state, args):\n   k, c, m = args[\"k\"], args[\"c\"], args[\"m\"]\n   x = state[\"x\"]\n   v = state[\"v\"]\n   dx = v\n   dv = -(k \/ m) * x - (c \/ m) * v\n   return {\"x\": dx, \"v\": dv}\n\n\npytree_state0 = {\"x\": jnp.array([2.0]), \"v\": jnp.array([0.0])}\npytree_args = {\"k\": 6.0, \"c\": 0.6, \"m\": 1.5}\npytree_ts = jnp.linspace(0.0, 12.0, 400)\n\n\nsol_pytree = diffrax.diffeqsolve(\n   diffrax.ODETerm(spring_mass_damper),\n   diffrax.Tsit5(),\n   t0=0.0,\n   t1=12.0,\n   dt0=0.02,\n   y0=pytree_state0,\n   args=pytree_args,\n   saveat=diffrax.SaveAt(ts=pytree_ts),\n   stepsize_controller=diffrax.PIDController(rtol=1e-6, atol=1e-8),\n   max_steps=100000,\n)\n\n\nprint(\"\\n=== Example 3: PyTree state ===\")\nprint(\"x shape:\", sol_pytree.ys[\"x\"].shape)\nprint(\"v shape:\", sol_pytree.ys[\"v\"].shape)\n\n\ndef damped_oscillator(t, y, args):\n   omega, zeta = args\n   x, v = y\n   dx = v\n   dv = -(omega ** 2) * x - 2.0 * zeta * omega * v\n   
return jnp.array([dx, dv])\n\n\nbatch_y0 = jnp.array([\n   [1.0, 0.0],\n   [1.5, 0.0],\n   [2.0, 0.0],\n   [2.5, 0.0],\n   [3.0, 0.0],\n])\nbatch_args = (2.5, 0.15)\nbatch_ts = jnp.linspace(0.0, 10.0, 300)\n\n\ndef solve_single(y0_single):\n   sol = diffrax.diffeqsolve(\n       diffrax.ODETerm(damped_oscillator),\n       diffrax.Tsit5(),\n       t0=0.0,\n       t1=10.0,\n       dt0=0.02,\n       y0=y0_single,\n       args=batch_args,\n       saveat=diffrax.SaveAt(ts=batch_ts),\n       stepsize_controller=diffrax.PIDController(rtol=1e-5, atol=1e-7),\n       max_steps=100000,\n   )\n   return sol.ys\n\n\nbatched_ys = jax.vmap(solve_single)(batch_y0)\n\n\nprint(\"\\n=== Example 4: Batched solves ===\")\nprint(\"Batched shape:\", batched_ys.shape)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We model the Lotka\u2013Volterra predator\u2013prey system to study the dynamics of interacting populations over time. We then introduce a PyTree-based state representation to simulate a spring\u2013mass\u2013damper system where the system state is stored as structured data. 
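The structured-state idea can be sketched without Diffrax at all. In the plain-Python toy below (the names `smd_rhs` and `euler_solve` are illustrative, not Diffrax API), the same spring-mass-damper system is stepped with a dict-valued state, mirroring how Diffrax lets `y0` be an arbitrary PyTree whose leaves are updated field by field.

```python
def smd_rhs(state, k=6.0, c=0.6, m=1.5):
    # Time derivative of each field of the structured state:
    # x' = v, v' = -(k/m) x - (c/m) v, with the tutorial's parameters.
    return {"x": state["v"], "v": -(k / m) * state["x"] - (c / m) * state["v"]}

def euler_solve(rhs, state0, t1, n_steps):
    # Fixed-step explicit Euler over a dict state; Diffrax uses better
    # solvers, but the leaf-wise update over a structured state is the
    # same idea as its PyTree handling.
    h = t1 / n_steps
    state = dict(state0)
    for _ in range(n_steps):
        d = rhs(state)
        state = {key: state[key] + h * d[key] for key in state}
    return state

final = euler_solve(smd_rhs, {"x": 2.0, "v": 0.0}, 12.0, 20000)
print(final["x"], final["v"])
```

Since the oscillator is damped, both fields have decayed to small values by t = 12, and the result keeps the same `{"x": ..., "v": ...}` structure as the initial state.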
Finally, we perform batched differential equation solves using JAX\u2019s vmap to efficiently simulate multiple systems in parallel.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">sigma = 0.30\ntheta = 1.20\nmu = 1.50\nsde_ts = jnp.linspace(0.0, 6.0, 400)\n\n\ndef ou_drift(t, y, args):\n   theta_, mu_ = args\n   return theta_ * (mu_ - y)\n\n\ndef ou_diffusion(t, y, args):\n   return jnp.array([[sigma]])\n\n\ndef solve_ou(key):\n   bm = diffrax.VirtualBrownianTree(\n       t0=0.0,\n       t1=6.0,\n       tol=1e-3,\n       shape=(1,),\n       key=key,\n   )\n   terms = diffrax.MultiTerm(\n       diffrax.ODETerm(ou_drift),\n       diffrax.ControlTerm(ou_diffusion, bm),\n   )\n   sol = diffrax.diffeqsolve(\n       terms,\n       diffrax.EulerHeun(),\n       t0=0.0,\n       t1=6.0,\n       dt0=0.01,\n       y0=jnp.array([0.0]),\n       args=(theta, mu),\n       saveat=diffrax.SaveAt(ts=sde_ts),\n       max_steps=100000,\n   )\n   return sol.ys[:, 0]\n\n\nsde_keys = jr.split(jr.PRNGKey(0), 5)\nsde_paths = jax.vmap(solve_ou)(sde_keys)\n\n\nprint(\"\\n=== Example 5: SDE ===\")\nprint(\"SDE paths shape:\", sde_paths.shape)\n\n\ntrue_a = 0.25\ntrue_b = 2.20\ntrain_ts = jnp.linspace(0.0, 6.0, 120)\n\n\ndef true_dynamics(t, y, args):\n   x, v = y\n   dx = v\n   dv = -true_b * x - true_a * v + 0.1 * jnp.sin(2.0 * t)\n   return jnp.array([dx, dv])\n\n\ntrue_sol = diffrax.diffeqsolve(\n   
diffrax.ODETerm(true_dynamics),\n   diffrax.Tsit5(),\n   t0=0.0,\n   t1=6.0,\n   dt0=0.01,\n   y0=jnp.array([1.0, 0.0]),\n   saveat=diffrax.SaveAt(ts=train_ts),\n   stepsize_controller=diffrax.PIDController(rtol=1e-6, atol=1e-8),\n   max_steps=100000,\n)\n\n\nnoise_key = jr.PRNGKey(42)\ntrain_y = true_sol.ys + 0.01 * jr.normal(noise_key, true_sol.ys.shape)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We simulate a stochastic differential equation representing an Ornstein\u2013Uhlenbeck process. We construct a Brownian motion process and integrate it with the drift and diffusion terms to generate multiple stochastic trajectories. We then create a synthetic dataset by solving a physical dynamical system that will later be used to train a neural differential equation model.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">class ODEFunc(eqx.Module):\n   mlp: eqx.nn.MLP\n\n\n   def __init__(self, key, width=64, depth=2):\n       self.mlp = eqx.nn.MLP(\n           in_size=3,\n           out_size=2,\n           width_size=width,\n           depth=depth,\n           activation=jax.nn.tanh,\n           final_activation=lambda x: x,\n           key=key,\n       )\n\n\n   def __call__(self, t, y, args):\n       inp = jnp.concatenate([y, jnp.array([t])], axis=0)\n       return self.mlp(inp)\n\n\nclass NeuralODE(eqx.Module):\n   func: ODEFunc\n\n\n   def __init__(self, key):\n       self.func = 
ODEFunc(key)\n\n\n   def __call__(self, ts, y0):\n       sol = diffrax.diffeqsolve(\n           diffrax.ODETerm(self.func),\n           diffrax.Tsit5(),\n           t0=ts[0],\n           t1=ts[-1],\n           dt0=0.01,\n           y0=y0,\n           saveat=diffrax.SaveAt(ts=ts),\n           stepsize_controller=diffrax.PIDController(rtol=1e-4, atol=1e-6),\n           max_steps=100000,\n       )\n       return sol.ys\n\n\nmodel = NeuralODE(jr.PRNGKey(123))\noptim = optax.adam(1e-2)\nopt_state = optim.init(eqx.filter(model, eqx.is_array))\n\n\n@eqx.filter_value_and_grad\ndef loss_fn(model, ts, y0, target):\n   pred = model(ts, y0)\n   return jnp.mean((pred - target) ** 2)\n\n\n@eqx.filter_jit\ndef train_step(model, opt_state, ts, y0, target):\n   loss, grads = loss_fn(model, ts, y0, target)\n   updates, opt_state = optim.update(grads, opt_state, model)\n   model = eqx.apply_updates(model, updates)\n   return model, opt_state, loss\n\n\nprint(\"\\n=== Example 6: Neural ODE training ===\")\nlosses = []\nstart = time.time()\n\n\nfor step in range(200):\n   model, opt_state, loss = train_step(model, opt_state, train_ts, jnp.array([1.0, 0.0]), train_y)\n   losses.append(float(loss))\n   if step % 40 == 0 or step == 199:\n       print(f\"step={step:03d} loss={float(loss):.8f}\")\n\n\nelapsed = time.time() - start\npred_y = model(train_ts, jnp.array([1.0, 0.0]))\nprint(f\"Training time: {elapsed:.2f}s\")\n\n\njit_solver = jax.jit(solve_single)\n_ = jit_solver(batch_y0[0]).block_until_ready()\nbench_start = time.time()\n_ = jit_solver(batch_y0[0]).block_until_ready()\nbench_end = time.time()\nprint(\"\\n=== Example 7: JIT benchmark ===\")\nprint(f\"Single compiled solve latency: {(bench_end - bench_start) * 1000:.2f} ms\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We build a neural ordinary differential equation model using Equinox to represent the system dynamics with a neural network. 
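The shape of that training loop can be seen in miniature. The toy below is a hypothetical plain-Python analog, not the Equinox/Optax code used here: instead of a neural network it fits a single stiffness parameter b in x'' = -b x to trajectory data, by gradient descent on a mean-squared error, with the gradient estimated by central finite differences rather than JAX autodiff.

```python
def simulate(b, x0=1.0, v0=0.0, t1=2.0, n=200):
    # Fixed-step Euler integration of the two-state system (x, v);
    # this plays the role of diffeqsolve in the forward pass.
    h = t1 / n
    x, v, xs = x0, v0, []
    for _ in range(n):
        x, v = x + h * v, v - h * b * x
        xs.append(x)
    return xs

true_b = 2.2
target = simulate(true_b)  # synthetic "training data"

def loss(b):
    # Mean-squared error between simulated and target trajectories.
    pred = simulate(b)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

b, lr, eps = 1.0, 0.5, 1e-5
for _ in range(200):
    grad = (loss(b + eps) - loss(b - eps)) / (2 * eps)  # central difference
    b -= lr * grad  # plain gradient descent; Adam plays this role above
print(b)
```

Gradient descent recovers b close to the true value of 2.2. The real tutorial replaces the scalar parameter with an MLP's weights, the finite-difference gradient with `eqx.filter_value_and_grad`, and the plain update with Optax's Adam.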
We define a loss function and optimization procedure using Optax so that the model can learn the underlying dynamics from data. We then train the neural ODE using the differential equation solver and evaluate its performance, benchmarking the solver with JAX\u2019s JIT compilation.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">plt.figure(figsize=(8, 4))\nplt.plot(ts, sol_logistic.ys, label=\"solution\")\nplt.scatter(np.array(query_ts), np.array(query_ys), s=30, label=\"dense interpolation\")\nplt.title(\"Adaptive ODE + Dense Interpolation\")\nplt.xlabel(\"t\")\nplt.ylabel(\"y\")\nplt.legend()\nplt.tight_layout()\nplt.show()\n\n\nplt.figure(figsize=(8, 4))\nplt.plot(lv_ts, sol_lv.ys[:, 0], label=\"prey\")\nplt.plot(lv_ts, sol_lv.ys[:, 1], label=\"predator\")\nplt.title(\"Lotka-Volterra\")\nplt.xlabel(\"t\")\nplt.ylabel(\"population\")\nplt.legend()\nplt.tight_layout()\nplt.show()\n\n\nplt.figure(figsize=(8, 4))\nplt.plot(pytree_ts, sol_pytree.ys[\"x\"][:, 0], label=\"position\")\nplt.plot(pytree_ts, sol_pytree.ys[\"v\"][:, 0], label=\"velocity\")\nplt.title(\"PyTree State Solve\")\nplt.xlabel(\"t\")\nplt.legend()\nplt.tight_layout()\nplt.show()\n\n\nplt.figure(figsize=(8, 4))\nfor i in range(batched_ys.shape[0]):\n   plt.plot(batch_ts, batched_ys[i, :, 0], label=f\"x0={float(batch_y0[i,0]):.1f}\")\nplt.title(\"Batched Solves with 
vmap\")\nplt.xlabel(\"t\")\nplt.ylabel(\"x(t)\")\nplt.legend()\nplt.tight_layout()\nplt.show()\n\n\nplt.figure(figsize=(8, 4))\nfor i in range(sde_paths.shape[0]):\n   plt.plot(sde_ts, sde_paths[i], alpha=0.8)\nplt.title(\"SDE Sample Paths (Ornstein-Uhlenbeck)\")\nplt.xlabel(\"t\")\nplt.ylabel(\"state\")\nplt.tight_layout()\nplt.show()\n\n\nplt.figure(figsize=(8, 4))\nplt.plot(train_ts, train_y[:, 0], label=\"target x\")\nplt.plot(train_ts, pred_y[:, 0], \"--\", label=\"pred x\")\nplt.plot(train_ts, train_y[:, 1], label=\"target v\")\nplt.plot(train_ts, pred_y[:, 1], \"--\", label=\"pred v\")\nplt.title(\"Neural ODE Fit\")\nplt.xlabel(\"t\")\nplt.legend()\nplt.tight_layout()\nplt.show()\n\n\nplt.figure(figsize=(8, 4))\nplt.plot(losses)\nplt.yscale(\"log\")\nplt.title(\"Neural ODE Training Loss\")\nplt.xlabel(\"step\")\nplt.ylabel(\"MSE\")\nplt.tight_layout()\nplt.show()\n\n\nprint(\"\\n=== SUMMARY ===\")\nprint(\"1. Adaptive ODE solve with Tsit5\")\nprint(\"2. Dense interpolation using solution.evaluate\")\nprint(\"3. PyTree-valued states\")\nprint(\"4. Batched solves using jax.vmap\")\nprint(\"5. SDE simulation with VirtualBrownianTree\")\nprint(\"6. Neural ODE training with Equinox + Optax\")\nprint(\"7. JIT-compiled solve benchmark complete\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We visualize the results of the simulations and training process to understand the behavior of the systems we modeled. We plot the logistic growth solution, predator\u2013prey dynamics, PyTree system states, batched oscillator trajectories, and stochastic paths. Also, we compare the neural ODE predictions with the target data and display the training loss to summarize the model\u2019s overall performance.<\/p>\n<p>In conclusion, we implemented a complete workflow for scientific computing and machine learning using Diffrax and the JAX ecosystem. 
We solved deterministic and stochastic differential equations, performed batched simulations, and trained a neural ODE model that learns the underlying dynamics of a system from data. Throughout the process, we leveraged JAX\u2019s just-in-time compilation and automatic differentiation to achieve efficient computation and scalable experimentation. By combining Diffrax with Equinox and Optax, we demonstrated how differential equation solvers can seamlessly integrate with modern deep learning frameworks.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Deep%20Learning\/diffrax_differential_equations_neural_ode_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">Full Notebook here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">120k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">Now you can join us on Telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/03\/19\/a-coding-guide-to-implement-advanced-differential-equation-solvers-stochastic-simulations-and-neural-ordinary-differential-equations-using-diffrax-and-jax\/\">A Coding Guide to Implement Advanced Differential Equation Solvers, Stochastic Simulations, and Neural Ordinary Differential Equations Using Diffrax and JAX<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we explore h&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-579","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/579","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=579"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/579\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2F
wp%2Fv2%2Fmedia&parent=579"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=579"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=579"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}