{"id":676,"date":"2026-04-06T04:19:56","date_gmt":"2026-04-05T20:19:56","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=676"},"modified":"2026-04-06T04:19:56","modified_gmt":"2026-04-05T20:19:56","slug":"how-to-build-a-netflix-void-video-object-removal-and-inpainting-pipeline-with-cogvideox-custom-prompting-and-end-to-end-sample-inference","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=676","title":{"rendered":"How to Build a Netflix VOID Video Object Removal and Inpainting Pipeline with CogVideoX, Custom Prompting, and End-to-End Sample Inference"},"content":{"rendered":"<p>In this tutorial, we build and run an advanced pipeline for <a href=\"https:\/\/github.com\/Netflix\/void-model\"><strong>Netflix\u2019s VOID model<\/strong><\/a>. We set up the environment, install all required dependencies, clone the repository, download the official base model and VOID checkpoint, and prepare the sample inputs needed for video object removal. We also make the workflow more practical by allowing secure terminal-style secret input for tokens and optionally using an OpenAI model to generate a cleaner background prompt. As we move through the tutorial, we load the model components, configure the pipeline, run inference on a built-in sample, and visualize both the generated result and a side-by-side comparison, giving us a full hands-on understanding of how VOID works in practice. 
Check out\u00a0the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Computer%20Vision\/netflix_void_video_object_removal_inpainting_pipeline_with_cogvideox_and_sample_inference.py\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes<\/a><\/strong><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import os, sys, json, shutil, subprocess, textwrap, gc\nfrom pathlib import Path\nfrom getpass import getpass\n\n\ndef run(cmd, check=True):\n   print(f\"\\n[RUN] {cmd}\")\n   result = subprocess.run(cmd, shell=True, text=True)\n   if check and result.returncode != 0:\n       raise RuntimeError(f\"Command failed with exit code {result.returncode}: {cmd}\")\n\n\nprint(\"=\" * 100)\nprint(\"VOID \u2014 ADVANCED GOOGLE COLAB TUTORIAL\")\nprint(\"=\" * 100)\n\n\ntry:\n   import torch\n   gpu_name = torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"CPU\"\n   print(f\"PyTorch already available. CUDA: {torch.cuda.is_available()} | Device: {gpu_name}\")\nexcept Exception:\n   run(f\"{sys.executable} -m pip install -q torch torchvision torchaudio\")\n   import torch\n   gpu_name = torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"CPU\"\n   print(f\"CUDA: {torch.cuda.is_available()} | Device: {gpu_name}\")\n\n\nif not torch.cuda.is_available():\n   raise RuntimeError(\"This tutorial needs a GPU runtime. 
In Colab, go to Runtime &gt; Change runtime type &gt; GPU.\")\n\n\nprint(\"\\nThis repo is heavy. The official notebook notes 40GB+ VRAM is recommended.\")\nprint(\"A100 works best. T4\/L4 may fail or be extremely slow even with CPU offload.\\n\")\n\n\nHF_TOKEN = getpass(\"Enter your Hugging Face token (input hidden, press Enter if already logged in): \").strip()\nOPENAI_API_KEY = getpass(\"Enter your OpenAI API key for OPTIONAL prompt assistance (press Enter to skip): \").strip()\n\n\nrun(f\"{sys.executable} -m pip install -q --upgrade pip\")\nrun(f\"{sys.executable} -m pip install -q huggingface_hub hf_transfer\")\nrun(\"apt-get -qq update &amp;&amp; apt-get -qq install -y ffmpeg git\")\nrun(\"rm -rf \/content\/void-model\")\nrun(\"git clone https:\/\/github.com\/Netflix\/void-model.git \/content\/void-model\")\nos.chdir(\"\/content\/void-model\")\n\n\nif HF_TOKEN:\n   os.environ[\"HF_TOKEN\"] = HF_TOKEN\n   os.environ[\"HUGGINGFACE_HUB_TOKEN\"] = HF_TOKEN\n\n\nos.environ[\"HF_HUB_ENABLE_HF_TRANSFER\"] = \"1\"\n\n\nrun(f\"{sys.executable} -m pip install -q -r requirements.txt\")\n\n\nif OPENAI_API_KEY:\n   run(f\"{sys.executable} -m pip install -q openai\")\n   os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\n\n\nfrom huggingface_hub import snapshot_download, hf_hub_download<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the full Colab environment and prepare the system for running the VOID pipeline. We install the required tools, check whether GPU support is available, securely collect the Hugging Face and optional OpenAI API keys, and clone the official repository into the Colab workspace. 
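Since the official notebook recommends 40GB+ VRAM, it can help to check the available memory before committing to a long run. A minimal sketch in plain Python (`vram_report` is a hypothetical helper, not part of the repo; in Colab you would pass it `torch.cuda.get_device_properties(0).total_memory`, and the 40 GB threshold is just the repo's recommendation, not a hard limit):

```python
def vram_report(total_bytes: int, threshold_gb: float = 40.0):
    # Convert a device's total memory (bytes) to GB and compare it against the
    # 40 GB figure the VOID notebook recommends. In Colab you would pass
    # torch.cuda.get_device_properties(0).total_memory as total_bytes.
    vram_gb = total_bytes / 1024**3
    return round(vram_gb, 1), vram_gb >= threshold_gb

print(vram_report(40 * 1024**3))  # (40.0, True)  e.g. an A100 40GB
print(vram_report(16 * 1024**3))  # (16.0, False) e.g. a T4
```

If the check returns `False`, CPU offload may still let the pipeline run, but expect it to be very slow.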
We also configure environment variables and install project dependencies so the rest of the workflow can run smoothly without manual setup later.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\nDownloading base CogVideoX inpainting model...\")\nsnapshot_download(\n   repo_id=\"alibaba-pai\/CogVideoX-Fun-V1.5-5b-InP\",\n   local_dir=\".\/CogVideoX-Fun-V1.5-5b-InP\",\n   token=HF_TOKEN if HF_TOKEN else None,\n   local_dir_use_symlinks=False,\n   resume_download=True,\n)\n\n\nprint(\"\\nDownloading VOID Pass 1 checkpoint...\")\nhf_hub_download(\n   repo_id=\"netflix\/void-model\",\n   filename=\"void_pass1.safetensors\",\n   local_dir=\".\",\n   token=HF_TOKEN if HF_TOKEN else None,\n   local_dir_use_symlinks=False,\n)\n\n\nsample_options = [\"lime\", \"moving_ball\", \"pillow\"]\nprint(f\"\\nAvailable built-in samples: {sample_options}\")\nsample_name = input(\"Choose a sample [lime\/moving_ball\/pillow] (default: lime): \").strip() or \"lime\"\nif sample_name not in sample_options:\n   print(\"Invalid sample selected. Falling back to 'lime'.\")\n   sample_name = \"lime\"\n\n\nuse_openai_prompt_helper = False\ncustom_bg_prompt = None\n\n\nif OPENAI_API_KEY:\n   ans = input(\"\\nUse OpenAI to generate an alternative background prompt for the selected sample? 
[y\/N]: \").strip().lower()\n   use_openai_prompt_helper = ans == \"y\"<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We download the base CogVideoX inpainting model and the VOID Pass 1 checkpoint required for inference. We then present the available built-in sample options and choose which sample video we want to process. We also initialize the optional prompt-helper flow to decide whether to generate a refined background prompt with OpenAI.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">if use_openai_prompt_helper:\n   from openai import OpenAI\n   client = OpenAI(api_key=OPENAI_API_KEY)\n\n\n   sample_context = {\n       \"lime\": {\n           \"removed_object\": \"the glass\",\n           \"scene_hint\": \"A lime falls on the table.\"\n       },\n       \"moving_ball\": {\n           \"removed_object\": \"the rubber duckie\",\n           \"scene_hint\": \"A ball rolls off the table.\"\n       },\n       \"pillow\": {\n           \"removed_object\": \"the kettlebell being placed on the pillow\",\n           \"scene_hint\": \"Two pillows are on the table.\"\n       },\n   }\n\n\n   helper_prompt = f\"\"\"\nYou are helping prepare a clean background prompt for a video object removal model.\n\n\nRules:\n- Describe only what should remain in the scene after removing the target object\/action.\n- Do not mention removal, deletion, masks, editing, or inpainting.\n- Keep it 
short, concrete, and physically plausible.\n- Return only one sentence.\n\n\nSample name: {sample_name}\nTarget being removed: {sample_context[sample_name]['removed_object']}\nKnown scene hint from the repo: {sample_context[sample_name]['scene_hint']}\n\"\"\"\n   try:\n       response = client.chat.completions.create(\n           model=\"gpt-4o-mini\",\n           temperature=0.2,\n           messages=[\n               {\"role\": \"system\", \"content\": \"You write short, precise scene descriptions for video generation pipelines.\"},\n               {\"role\": \"user\", \"content\": helper_prompt},\n           ],\n       )\n       custom_bg_prompt = response.choices[0].message.content.strip()\n       print(f\"\\nOpenAI-generated background prompt:\\n{custom_bg_prompt}\\n\")\n   except Exception as e:\n       print(f\"OpenAI prompt helper failed: {e}\")\n       custom_bg_prompt = None\n\n\nprompt_json_path = Path(f\".\/sample\/{sample_name}\/prompt.json\")\nif custom_bg_prompt:\n   backup_path = prompt_json_path.with_suffix(\".json.bak\")\n   if not backup_path.exists():\n       shutil.copy(prompt_json_path, backup_path)\n   with open(prompt_json_path, \"w\") as f:\n       json.dump({\"bg\": custom_bg_prompt}, f)\n   print(f\"Updated prompt.json for sample '{sample_name}'.\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We use the optional OpenAI prompt helper to generate a cleaner and more focused background description for the selected sample. We define the scene context, send it to the model, capture the generated prompt, and then update the sample\u2019s prompt.json file when a custom prompt is available. 
This allows us to make the pipeline a bit more flexible while still keeping the original sample structure intact.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import numpy as np\nimport torch.nn.functional as F\nfrom safetensors.torch import load_file\nfrom diffusers import DDIMScheduler\nfrom IPython.display import Video, display\n\n\nfrom videox_fun.models import (\n   AutoencoderKLCogVideoX,\n   CogVideoXTransformer3DModel,\n   T5EncoderModel,\n   T5Tokenizer,\n)\nfrom videox_fun.pipeline import CogVideoXFunInpaintPipeline\nfrom videox_fun.utils.fp8_optimization import convert_weight_dtype_wrapper\nfrom videox_fun.utils.utils import get_video_mask_input, save_videos_grid, save_inout_row\n\n\nBASE_MODEL_PATH = \".\/CogVideoX-Fun-V1.5-5b-InP\"\nTRANSFORMER_CKPT = \".\/void_pass1.safetensors\"\nDATA_ROOTDIR = \".\/sample\"\nSAMPLE_NAME = sample_name\n\n\nSAMPLE_SIZE = (384, 672)\nMAX_VIDEO_LENGTH = 197\nTEMPORAL_WINDOW_SIZE = 85\nNUM_INFERENCE_STEPS = 50\nGUIDANCE_SCALE = 1.0\nSEED = 42\nDEVICE = \"cuda\"\nWEIGHT_DTYPE = torch.bfloat16\n\n\nprint(\"\\nLoading VAE...\")\nvae = AutoencoderKLCogVideoX.from_pretrained(\n   BASE_MODEL_PATH,\n   subfolder=\"vae\",\n).to(WEIGHT_DTYPE)\n\n\nvideo_length = int(\n   (MAX_VIDEO_LENGTH - 1) \/\/ vae.config.temporal_compression_ratio * vae.config.temporal_compression_ratio\n) + 1\nprint(f\"Effective video length: 
{video_length}\")\n\n\nprint(\"\\nLoading base transformer...\")\ntransformer = CogVideoXTransformer3DModel.from_pretrained(\n   BASE_MODEL_PATH,\n   subfolder=\"transformer\",\n   low_cpu_mem_usage=True,\n   use_vae_mask=True,\n).to(WEIGHT_DTYPE)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We import the deep learning, diffusion, video display, and VOID-specific modules required for inference. We define key configuration values, such as model paths, sample dimensions, video length, inference steps, seed, device, and data type, and then load the VAE and base transformer components. This section presents the core model objects that form the underlying inpainting pipeline.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(f\"Loading VOID checkpoint from {TRANSFORMER_CKPT} ...\")\nstate_dict = load_file(TRANSFORMER_CKPT)\n\n\nparam_name = \"patch_embed.proj.weight\"\nif state_dict[param_name].size(1) != transformer.state_dict()[param_name].size(1):\n   latent_ch, feat_scale = 16, 8\n   feat_dim = latent_ch * feat_scale\n   new_weight = transformer.state_dict()[param_name].clone()\n   new_weight[:, :feat_dim] = state_dict[param_name][:, :feat_dim]\n   new_weight[:, -feat_dim:] = state_dict[param_name][:, -feat_dim:]\n   state_dict[param_name] = new_weight\n   print(f\"Adapted {param_name} channels for VAE mask.\")\n\n\nmissing_keys, unexpected_keys = transformer.load_state_dict(state_dict, 
strict=False)\nprint(f\"Missing keys: {len(missing_keys)}, Unexpected keys: {len(unexpected_keys)}\")\n\n\nprint(\"\\nLoading tokenizer, text encoder, and scheduler...\")\ntokenizer = T5Tokenizer.from_pretrained(BASE_MODEL_PATH, subfolder=\"tokenizer\")\ntext_encoder = T5EncoderModel.from_pretrained(\n   BASE_MODEL_PATH,\n   subfolder=\"text_encoder\",\n   torch_dtype=WEIGHT_DTYPE,\n)\nscheduler = DDIMScheduler.from_pretrained(BASE_MODEL_PATH, subfolder=\"scheduler\")\n\n\nprint(\"\\nBuilding pipeline...\")\npipe = CogVideoXFunInpaintPipeline(\n   tokenizer=tokenizer,\n   text_encoder=text_encoder,\n   vae=vae,\n   transformer=transformer,\n   scheduler=scheduler,\n)\n\n\nconvert_weight_dtype_wrapper(pipe.transformer, WEIGHT_DTYPE)\npipe.enable_model_cpu_offload(device=DEVICE)\ngenerator = torch.Generator(device=DEVICE).manual_seed(SEED)\n\n\nprint(\"\\nPreparing sample input...\")\ninput_video, input_video_mask, prompt, _ = get_video_mask_input(\n   SAMPLE_NAME,\n   sample_size=SAMPLE_SIZE,\n   keep_fg_ids=[-1],\n   max_video_length=video_length,\n   temporal_window_size=TEMPORAL_WINDOW_SIZE,\n   data_rootdir=DATA_ROOTDIR,\n   use_quadmask=True,\n   dilate_width=11,\n)\n\n\nnegative_prompt = (\n   \"Watermark present in each frame. The background is solid. \"\n   \"Strange body and strange trajectory. Distortion.\"\n)\n\n\nprint(f\"\\nPrompt: {prompt}\")\nprint(f\"Input video tensor shape: {tuple(input_video.shape)}\")\nprint(f\"Mask video tensor shape: {tuple(input_video_mask.shape)}\")\n\n\nprint(\"\\nDisplaying input video...\")\ninput_video_path = os.path.join(DATA_ROOTDIR, SAMPLE_NAME, \"input_video.mp4\")\ndisplay(Video(input_video_path, embed=True, width=672))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We load the VOID checkpoint, align the transformer weights when needed, and initialize the tokenizer, text encoder, scheduler, and final inpainting pipeline. 
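The `video_length` computed earlier snaps the requested frame count to the form k * ratio + 1 that the temporally compressed VAE expects. A standalone sketch of that arithmetic (`effective_video_length` is a hypothetical helper; the ratio of 4 used in the examples is an assumption for this model family, as the real value comes from `vae.config.temporal_compression_ratio`):

```python
def effective_video_length(max_frames: int, compression_ratio: int) -> int:
    # Mirror the tutorial's computation: the largest count <= max_frames of
    # the form k * compression_ratio + 1, as required by the VAE's temporal
    # compression stride.
    return (max_frames - 1) // compression_ratio * compression_ratio + 1

print(effective_video_length(197, 4))  # 197 (already aligned: 49 * 4 + 1)
print(effective_video_length(200, 4))  # 197
print(effective_video_length(85, 4))   # 85, the tutorial's TEMPORAL_WINDOW_SIZE
```

This is why MAX_VIDEO_LENGTH = 197 and TEMPORAL_WINDOW_SIZE = 85 both pass through unchanged: each is already of the required k * 4 + 1 form under the assumed ratio.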
We then enable CPU offloading, seed the generator for reproducibility, and prepare the input video, mask video, and prompt from the selected sample. By the end of this section, we have everything ready for actual inference, including the negative prompt and the input video preview.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\nRunning VOID Pass 1 inference...\")\nwith torch.no_grad():\n   sample = pipe(\n       prompt,\n       num_frames=TEMPORAL_WINDOW_SIZE,\n       negative_prompt=negative_prompt,\n       height=SAMPLE_SIZE[0],\n       width=SAMPLE_SIZE[1],\n       generator=generator,\n       guidance_scale=GUIDANCE_SCALE,\n       num_inference_steps=NUM_INFERENCE_STEPS,\n       video=input_video,\n       mask_video=input_video_mask,\n       strength=1.0,\n       use_trimask=True,\n       use_vae_mask=True,\n   ).videos\n\n\nprint(f\"Output shape: {tuple(sample.shape)}\")\n\n\noutput_dir = Path(\"\/content\/void_outputs\")\noutput_dir.mkdir(parents=True, exist_ok=True)\n\n\noutput_path = str(output_dir \/ f\"{SAMPLE_NAME}_void_pass1.mp4\")\ncomparison_path = str(output_dir \/ f\"{SAMPLE_NAME}_comparison.mp4\")\n\n\nprint(\"\\nSaving output video...\")\nsave_videos_grid(sample, output_path, fps=12)\n\n\nprint(\"Saving side-by-side comparison...\")\nsave_inout_row(input_video, input_video_mask, sample, comparison_path, fps=12)\n\n\nprint(f\"\\nSaved output to: 
{output_path}\")\nprint(f\"Saved comparison to: {comparison_path}\")\n\n\nprint(\"\\nDisplaying generated result...\")\ndisplay(Video(output_path, embed=True, width=672))\n\n\nprint(\"\\nDisplaying comparison (input | mask | output)...\")\ndisplay(Video(comparison_path, embed=True, width=1344))\n\n\nprint(\"\\nDone.\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We run the actual VOID Pass 1 inference on the selected sample using the prepared prompt, mask, and model pipeline. We save the generated output video and also create a side-by-side comparison video so we can inspect the input, mask, and final result together. We display the generated videos directly in Colab, which helps us verify that the full video object-removal workflow works end to end.<\/p>\n<p>In conclusion, we created a complete, Colab-ready implementation of the VOID model and ran an end-to-end video inpainting workflow within a single, streamlined pipeline. We went beyond basic setup by handling model downloads, prompt preparation, checkpoint loading, mask-aware inference, and output visualization in a way that is practical for experimentation and adaptation. We also saw how the different model components come together to remove objects from video while preserving the surrounding scene as naturally as possible. At the end, we successfully ran the official sample and built a strong working foundation that helps us extend the pipeline for custom videos, prompts, and more advanced research use cases.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out\u00a0the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Computer%20Vision\/netflix_void_video_object_removal_inpainting_pipeline_with_cogvideox_and_sample_inference.py\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes<\/a>. 
\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">120k+ ML SubReddit<\/a><\/strong>\u00a0and subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">Now you can join us on Telegram as well.<\/a><\/strong><\/p>\n<p>Need to partner with us for promoting your GitHub Repo OR Hugging Face Page OR Product Release OR Webinar etc.?\u00a0<strong><a href=\"https:\/\/forms.gle\/MTNLpmJtsFA3VRVd9\" target=\"_blank\" rel=\"noreferrer noopener\">Connect with us<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/04\/05\/how-to-build-a-netflix-void-video-object-removal-and-inpainting-pipeline-with-cogvideox-custom-prompting-and-end-to-end-sample-inference\/\">How to Build a Netflix VOID Video Object Removal and Inpainting Pipeline with CogVideoX, Custom Prompting, and End-to-End Sample Inference<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build 
and&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-676","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/676","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=676"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/676\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=676"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=676"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=676"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}