{"id":701,"date":"2026-04-11T04:14:41","date_gmt":"2026-04-10T20:14:41","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=701"},"modified":"2026-04-11T04:14:41","modified_gmt":"2026-04-10T20:14:41","slug":"a-coding-guide-to-markerless-3d-human-kinematics-with-pose2sim-rtmpose-and-opensim","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=701","title":{"rendered":"A Coding Guide to Markerless 3D Human Kinematics with Pose2Sim, RTMPose, and OpenSim"},"content":{"rendered":"<p>In this tutorial, we build and run a complete <a href=\"https:\/\/github.com\/perfanalytics\/pose2sim\"><strong>Pose2Sim<\/strong><\/a> pipeline on Colab to understand how markerless 3D kinematics works in practice. We begin with environment setup, configure the project for Colab\u2019s headless runtime, and then walk through calibration, 2D pose estimation, synchronization, person association, triangulation, filtering, marker augmentation, and OpenSim-based kinematics. As we progress, we not only execute each stage of the workflow but also inspect the generated outputs, visualize trajectories and joint angles, and learn how each component contributes to converting raw multi-camera videos into meaningful biomechanical motion data.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import subprocess, sys, os\n\n\ndef run(cmd, desc=\"\"):\n   \"\"\"Run a shell command with live 
output.\"\"\"\n   print(f\"\\n{'='*60}\")\n   print(f\"  {desc}\" if desc else f\"  Running: {cmd}\")\n   print(f\"{'='*60}\")\n   result = subprocess.run(cmd, shell=True, capture_output=True, text=True)\n   if result.stdout:\n       lines = result.stdout.strip().split('\\n')\n       for line in lines[-15:]:\n           print(line)\n   if result.returncode != 0:\n       print(f\"\u26a0 Warning (exit code {result.returncode}): {result.stderr[-500:] if result.stderr else 'unknown error'}\")\n   return result.returncode\n\n\nrun(\"pip install -q pose2sim\", \"Installing Pose2Sim (includes RTMPose, filtering, etc.)\")\n\n\nopensim_available = False\ntry:\n   run(\"pip install -q opensim\", \"Installing OpenSim Python bindings\")\n   import opensim\n   opensim_available = True\n   print(f\"\u2705 OpenSim {opensim.__version__} installed successfully!\")\nexcept Exception as e:\n   print(f\"\u26a0 OpenSim could not be installed ({e}).\")\n   print(\"  The tutorial will run all steps EXCEPT kinematics.\")\n   print(\"  For full kinematics, use a local conda environment instead.\")\n\n\ngpu_returncode = run(\"nvidia-smi --query-gpu=name,memory.total --format=csv,noheader\", \"Checking GPU availability\")\nif gpu_returncode != 0:\n   print(\"\u26a0 No GPU detected. 
Pose estimation will run on CPU (slower).\")\n   print(\"  Tip: Runtime \u2192 Change runtime type \u2192 T4 GPU\")\nelse:\n   print(\"\u2705 GPU detected! Pose estimation will be accelerated.\")\n\n\ntry:\n   import torch\n   print(f\"\u2705 PyTorch {torch.__version__} | CUDA available: {torch.cuda.is_available()}\")\nexcept ImportError:\n   print(\"\u2139 PyTorch not found (pose estimation will use ONNX Runtime directly)\")\n\n\nprint(\"\\n\ud83c\udf89 Installation complete!\")\n\n\nimport shutil\nfrom pathlib import Path\n\n\nimport Pose2Sim\npkg_path = Path(Pose2Sim.__file__).parent\nprint(f\"\ud83d\udce6 Pose2Sim installed at: {pkg_path}\")\n\n\ndemo_src = pkg_path \/ \"Demo_SinglePerson\"\nwork_dir = Path(\"\/content\/Pose2Sim_Tutorial\")\n\n\nif work_dir.exists():\n   shutil.rmtree(work_dir)\n\n\nshutil.copytree(demo_src, work_dir)\nprint(f\"\ud83d\udcc2 Demo copied to: {work_dir}\")\n\n\nprint(\"\\n\ud83d\uddc2  PROJECT STRUCTURE:\")\nprint(\"=\" * 60)\nfor p in sorted(work_dir.rglob(\"*\")):\n   depth = len(p.relative_to(work_dir).parts)\n   indent = \"  \" * depth\n   if 
p.is_dir():\n       print(f\"{indent}\ud83d\udcc1 {p.name}\/\")\n   elif depth &lt;= 3:\n       size_kb = p.stat().st_size \/ 1024\n       print(f\"{indent}\ud83d\udcc4 {p.name} ({size_kb:.1f} KB)\")\n\n\nprint(\"\"\"\n\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557\n\u2551  KEY DIRECTORIES:                                       \u2551\n\u2551  calibration\/  \u2192 Camera calibration files               \u2551\n\u2551  videos\/       \u2192 Raw input videos from each camera      \u2551\n\u2551  pose\/         \u2192 2D pose estimation results (auto-gen)  \u2551\n\u2551  pose-3d\/      \u2192 3D triangulated coordinates (.trc)     \u2551\n\u2551  kinematics\/   \u2192 OpenSim joint angles (.mot)            \u2551\n\u2551  Config.toml   \u2192 ALL pipeline parameters                \u2551\n\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d\n\"\"\")\n\n\nimport toml\n\n\nconfig_path = work_dir \/ \"Config.toml\"\nconfig = toml.load(config_path)\n\n\nprint(\"\ud83d\udd27 KEY 
CONFIGURATION PARAMETERS:\")\nprint(\"=\" * 60)\n\n\nsections_of_interest = ['project', 'pose', 'calibration', 'synchronization',\n                       'triangulation', 'filtering', 'markerAugmentation']\nfor section in sections_of_interest:\n   if section in config:\n       print(f\"\\n[{section}]\")\n       for key, val in list(config[section].items())[:6]:\n           print(f\"  {key} = {val}\")\n       if len(config[section]) &gt; 6:\n           print(f\"  ... and {len(config[section]) - 6} more parameters\")\n\n\nprint(\"\\n\\n\ud83d\udd27 ADAPTING CONFIG FOR GOOGLE COLAB (headless)...\")\nprint(\"-\" * 60)\n\n\nmodifications = {}\n\n\nif 'synchronization' in config:\n   config['synchronization']['synchronization_gui'] = False\n   modifications['synchronization.synchronization_gui'] = False\n\n\nif 'pose' in config:\n   config['pose']['display_detection'] = False\n   modifications['pose.display_detection'] = False\n   config['pose']['mode'] = 'balanced'\n   modifications['pose.mode'] = 'balanced'\n   config['pose']['save_video'] = 'none'\n   modifications['pose.save_video'] = 'none'\n   config['pose']['det_frequency'] = 50\n   modifications['pose.det_frequency'] = 50\n\n\nif 'filtering' in config:\n   config['filtering']['display_figures'] = False\n   modifications['filtering.display_figures'] = False\n\n\nif 'triangulation' in config:\n   config['triangulation']['undistort_points'] = False\n   modifications['triangulation.undistort_points'] = False\n   config['triangulation']['handle_LR_swap'] = False\n   modifications['triangulation.handle_LR_swap'] = False\n\n\nif 'kinematics' in config:\n   config['kinematics']['use_simple_model'] = True\n   modifications['kinematics.use_simple_model'] = True\n\n\nwith open(config_path, 'w') as f:\n   toml.dump(config, f)\n\n\nfor k, v in modifications.items():\n   print(f\"  \u270f {k} \u2192 {v}\")\n\n\nprint(\"\\n\u2705 Config.toml updated for Colab execution!\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the full Pose2Sim environment in Google Colab and make sure the required dependencies are available for the pipeline. We install Pose2Sim, optionally test OpenSim support, check GPU availability, and then copy the demo project into a working directory so we can operate on a clean example dataset. We also inspect and modify the configuration file to ensure the workflow runs smoothly in Colab\u2019s headless environment, with settings that balance speed, stability, and usability.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import os\nimport time\nimport warnings\nimport matplotlib\nmatplotlib.use('Agg')\n\n\nos.chdir(work_dir)\nprint(f\"\ud83d\udcc2 Working directory: {os.getcwd()}\")\n\n\nfrom Pose2Sim import Pose2Sim\n\n\nstep_times = {}\n\n\nprint(\"\\n\" + \"\u2588\" * 60)\nprint(\"\u2588  STEP 1\/8: CAMERA CALIBRATION\")\nprint(\"\u2588\" * 60)\nprint(\"\"\"\n Calibration converts 
camera parameters (intrinsics + extrinsics)\n into a unified Pose2Sim format.\n\n\n The demo uses 'convert' mode (pre-existing Qualisys calibration).\n For your own data, you can:\n   - Convert from Qualisys, Vicon, OpenCap, AniPose, etc.\n   - Calculate from scratch using a checkerboard\n\"\"\")\n\n\nt0 = time.time()\ntry:\n   Pose2Sim.calibration()\n   step_times['calibration'] = time.time() - t0\n   print(f\"\\n\u2705 Calibration complete! ({step_times['calibration']:.1f}s)\")\n\n\n   calib_file = work_dir \/ \"calibration\" \/ \"Calib.toml\"\n   if calib_file.exists():\n       calib_data = toml.load(calib_file)\n       cam_names = [k for k in calib_data.keys() if k.startswith('cam')]\n       print(f\"  \ud83d\udcf8 Cameras found: {len(cam_names)} \u2192 {cam_names}\")\n       for cam in cam_names[:2]:\n           print(f\"  Camera '{cam}':\")\n           if 'matrix' in calib_data[cam]:\n               print(f\"    Intrinsic matrix (3x3): \u2713\")\n           if 'rotation' in calib_data[cam]:\n               print(f\"    Rotation vector: {calib_data[cam]['rotation']}\")\n           if 'translation' in calib_data[cam]:\n               print(f\"    Translation (m): {calib_data[cam]['translation']}\")\nexcept Exception as e:\n   print(f\"\u26a0 Calibration warning: {e}\")\n   step_times['calibration'] = time.time() - t0\n\n\nprint(\"\\n\" + \"\u2588\" * 60)\nprint(\"\u2588  STEP 2\/8: 2D POSE ESTIMATION (RTMPose)\")\nprint(\"\u2588\" * 60)\nprint(\"\"\"\n RTMPose detects 2D body keypoints in each video frame.\n - Model: RTMPose (balanced mode) with YOLOX detection\n - Body model: 
Body_with_feet (HALPE_26 keypoints)\n - Output: JSON files per frame, per camera\n - GPU acceleration used if available\n\n\n For custom models (animal, hand, face), set `mode` in Config.toml\n to a dictionary with custom model URLs (see README).\n\"\"\")\n\n\nt0 = time.time()\ntry:\n   Pose2Sim.poseEstimation()\n   step_times['poseEstimation'] = time.time() - t0\n   print(f\"\\n\u2705 Pose estimation complete! ({step_times['poseEstimation']:.1f}s)\")\n\n\n   pose_dirs = list((work_dir \/ \"pose\").glob(\"*_json\"))\n   for pd in pose_dirs:\n       n_files = len(list(pd.glob(\"*.json\")))\n       print(f\"  \ud83d\udcc1 {pd.name}: {n_files} JSON frames generated\")\nexcept Exception as e:\n   print(f\"\u26a0 Pose estimation error: {e}\")\n   step_times['poseEstimation'] = time.time() - t0\n\n\nprint(\"\\n\" + \"\u2588\" * 60)\nprint(\"\u2588  STEP 3\/8: CAMERA SYNCHRONIZATION\")\nprint(\"\u2588\" * 60)\nprint(\"\"\"\n Synchronization aligns frames across cameras by correlating\n vertical keypoint speeds. This finds the time offset between cameras.\n\n\n Skip this step if your cameras are hardware-synchronized\n (e.g., via timecode, trigger cable, or GoPro GPS sync).\n\"\"\")\n\n\nt0 = time.time()\ntry:\n   Pose2Sim.synchronization()\n   step_times['synchronization'] = time.time() - t0\n   print(f\"\\n\u2705 Synchronization complete! 
({step_times['synchronization']:.1f}s)\")\nexcept Exception as e:\n   print(f\"\u26a0 Synchronization note: {e}\")\n   step_times['synchronization'] = time.time() - t0\n\n\nprint(\"\\n\" + \"\u2588\" * 60)\nprint(\"\u2588  STEP 4\/8: PERSON ASSOCIATION\")\nprint(\"\u2588\" * 60)\nprint(\"\"\"\n Associates detected persons across different camera views.\n - Single-person mode: picks person with lowest reprojection error\n - Multi-person mode: uses epipolar geometry to match people\n\n\n For this single-person demo, it selects the best-matching person.\n\"\"\")\n\n\nt0 = time.time()\ntry:\n   Pose2Sim.personAssociation()\n   step_times['personAssociation'] = time.time() - t0\n   print(f\"\\n\u2705 Person association complete! ({step_times['personAssociation']:.1f}s)\")\nexcept Exception as e:\n   print(f\"\u26a0 Person association note: {e}\")\n   step_times['personAssociation'] = time.time() - t0<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We begin running the core Pose2Sim pipeline step by step to clearly understand what happens at each stage. We start with camera calibration, then perform 2D pose estimation with RTMPose, and continue into camera synchronization and person association to prepare the data for 3D reconstruction. 
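<\/p>
<p>Each *_json folder written by the pose step contains one file per frame. As a concrete illustration of the data these stages pass along, the sketch below parses an OpenPose-style frame file into an (N, 3) array of (x, y, confidence) triplets; the key names follow the OpenPose JSON convention and the numeric values are invented, so verify both against your own output files.<\/p>

```python
import numpy as np

# One OpenPose-style frame (assumed layout: a flat [x, y, conf, ...] list);
# the coordinates here are invented for illustration.
frame = {"people": [{"pose_keypoints_2d": [320.0, 180.0, 0.92,
                                           322.5, 240.0, 0.87,
                                           318.0, 300.0, 0.31]}]}

def load_keypoints(frame_dict, person=0):
    """Return an (N, 3) array of (x, y, confidence) for one detected person."""
    flat = np.array(frame_dict["people"][person]["pose_keypoints_2d"])
    return flat.reshape(-1, 3)

kpts = load_keypoints(frame)
confident = kpts[kpts[:, 2] > 0.5]   # drop low-confidence detections
print(kpts.shape, confident.shape)   # (3, 3) (2, 3)
```

<p>In the real pipeline there is one such file per frame and per camera, and the same reshape applies throughout. 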
As we execute these stages, we also inspect the outputs and timing information to connect the computation to the files and structures being generated.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\n\" + \"\u2588\" * 60)\nprint(\"\u2588  STEP 5\/8: 3D TRIANGULATION\")\nprint(\"\u2588\" * 60)\nprint(\"\"\"\n Triangulates 2D keypoints from multiple cameras into 3D coordinates.\n - Weighted by detection confidence scores\n - Cameras removed if reprojection error exceeds threshold\n - Right\/left swap correction (disabled per current recommendation)\n - Missing values interpolated (cubic, bezier, linear, etc.)\n - Output: .trc file (OpenSim-compatible 3D marker format)\n\"\"\")\n\n\nt0 = time.time()\ntry:\n   Pose2Sim.triangulation()\n   step_times['triangulation'] = time.time() - t0\n   print(f\"\\n\u2705 Triangulation complete! 
({step_times['triangulation']:.1f}s)\")\n\n\n   trc_files = list((work_dir \/ \"pose-3d\").glob(\"*.trc\"))\n   for trc in trc_files:\n       size_kb = trc.stat().st_size \/ 1024\n       print(f\"  \ud83d\udcc4 {trc.name} ({size_kb:.1f} KB)\")\nexcept Exception as e:\n   print(f\"\u26a0 Triangulation error: {e}\")\n   step_times['triangulation'] = time.time() - t0\n\n\nprint(\"\\n\" + \"\u2588\" * 60)\nprint(\"\u2588  STEP 6\/8: 3D COORDINATE FILTERING\")\nprint(\"\u2588\" * 60)\nprint(\"\"\"\n Smooths 3D trajectories to reduce noise from triangulation.\n Available filters:\n   - Butterworth (low-pass, default)\n   - Kalman (predictive smoothing)\n   - Butterworth on speed\n   - Gaussian, LOESS, Median\n\n\n Filter type and parameters are set in Config.toml under [filtering].\n\"\"\")\n\n\nt0 = time.time()\ntry:\n   Pose2Sim.filtering()\n   step_times['filtering'] = time.time() - t0\n   print(f\"\\n\u2705 Filtering complete! ({step_times['filtering']:.1f}s)\")\nexcept Exception as e:\n   print(f\"\u26a0 Filtering note: {e}\")\n   step_times['filtering'] = time.time() - t0\n\n\nprint(\"\\n\" + \"\u2588\" * 60)\nprint(\"\u2588  STEP 7\/8: MARKER AUGMENTATION\")\nprint(\"\u2588\" * 60)\nprint(\"\"\"\n Uses a Stanford-trained LSTM to predict 47 virtual markers from\n the triangulated keypoints. 
These additional markers can improve\n OpenSim model scaling and inverse kinematics.\n\n\n NOTE: Results are NOT always better with augmentation (~50\/50).\n The pipeline lets you compare with and without augmentation.\n\"\"\")\n\n\nt0 = time.time()\ntry:\n   Pose2Sim.markerAugmentation()\n   step_times['markerAugmentation'] = time.time() - t0\n   print(f\"\\n\u2705 Marker augmentation complete! ({step_times['markerAugmentation']:.1f}s)\")\n\n\n   aug_files = list((work_dir \/ \"pose-3d\").glob(\"*augmented*.trc\"))\n   for af in aug_files:\n       print(f\"  \ud83d\udcc4 {af.name} ({af.stat().st_size\/1024:.1f} KB)\")\nexcept Exception as e:\n   print(f\"\u26a0 Marker augmentation note: {e}\")\n   step_times['markerAugmentation'] = time.time() - t0\n\n\nprint(\"\\n\" + \"\u2588\" * 60)\nprint(\"\u2588  STEP 8\/8: OPENSIM KINEMATICS\")\nprint(\"\u2588\" * 60)\nprint(\"\"\"\n Performs automatic model scaling and inverse kinematics:\n   1. Scale: Adjusts a generic OpenSim model to participant dimensions\n      (based on segment lengths from the slowest, most stable frames)\n   2. IK: Computes 3D joint angles by fitting the model to markers\n Output:\n   - Scaled .osim model file\n   - Joint angles .mot file (can be loaded in OpenSim GUI)\n\"\"\")\n\n\nt0 = time.time()\nif opensim_available:\n   try:\n       Pose2Sim.kinematics()\n       step_times['kinematics'] = time.time() - t0\n       print(f\"\\n\u2705 Kinematics complete! 
({step_times['kinematics']:.1f}s)\")\n\n\n       kin_dir = work_dir \/ \"kinematics\"\n       if kin_dir.exists():\n           for f in sorted(kin_dir.glob(\"*\")):\n               print(f\"  \ud83d\udcc4 {f.name} ({f.stat().st_size\/1024:.1f} KB)\")\n   except Exception as e:\n       print(f\"\u26a0 Kinematics error: {e}\")\n       step_times['kinematics'] = time.time() - t0\nelse:\n   print(\"  \u23ed Skipped \u2014 OpenSim not installed on this Colab runtime.\")\n   print(\"  To run kinematics locally, install via:\")\n   print(\"    conda install -c opensim-org opensim\")\n   step_times['kinematics'] = 0\n\n\nprint(\"\\n\" + \"\u2550\" * 60)\nprint(\"  \u23f1  PIPELINE TIMING SUMMARY\")\nprint(\"\u2550\" * 60)\ntotal = 0\nfor step, t in step_times.items():\n   bar = \"\u2588\" * int(t \/ max(step_times.values(), default=1) * 30)\n   print(f\"  {step:&lt;22s} {t:6.1f}s  {bar}\")\n   total += t\nprint(f\"  {'TOTAL':&lt;22s} {total:6.1f}s\")\nprint(\"\u2550\" * 60)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We continue the main workflow by converting the associated 2D detections into 3D coordinates via triangulation, then improving the resulting trajectories via filtering. We also run marker augmentation to enrich the marker set for downstream biomechanical analysis and then attempt OpenSim-based kinematics when the environment supports it. 
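<\/p>
<p>To make the filtering stage more tangible, here is a self-contained sketch of the kind of zero-phase Butterworth low-pass applied to each marker coordinate. The 6 Hz cutoff, order 4, and 60 fps frame rate are illustrative values of the sort set under [filtering] in Config.toml, not Pose2Sim internals.<\/p>

```python
import numpy as np
from scipy.signal import butter, filtfilt

def butterworth_lowpass(signal, fs, cutoff=6.0, order=4):
    """Zero-phase low-pass, the usual choice for 3D marker trajectories."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")  # normalized cutoff
    return filtfilt(b, a, signal)  # forward-backward pass: no phase lag

fs = 60.0                                    # camera frame rate (Hz)
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 1.0 * t)          # 1 Hz "movement" signal
noisy = clean + 0.05 * np.random.default_rng(0).normal(size=t.size)
smoothed = butterworth_lowpass(noisy, fs)
print(np.abs(smoothed - clean).mean() < np.abs(noisy - clean).mean())  # True
```

<p>Because the 1 Hz motion sits well below the 6 Hz cutoff, it passes almost unchanged while the broadband noise is attenuated. 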
By the end of this section, we will summarize the runtime of each major step to evaluate the overall efficiency and flow of the full pipeline.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom pathlib import Path\nimport re\n\n\ndef parse_trc(trc_path):\n   \"\"\"Parse a .trc file and return marker names, frame data, and metadata.\"\"\"\n   with open(trc_path, 'r') as f:\n       lines = f.readlines()\n\n\n   meta_keys = lines[1].strip().split('\\t')\n   meta_vals = lines[2].strip().split('\\t')\n   metadata = dict(zip(meta_keys, meta_vals))\n\n\n   header_lines = 0\n   for i, line in enumerate(lines):\n       if line.strip() and not line.startswith(('PathFileType', 'DataRate',\n                                                 'Frame', '\\t')):\n           try:\n               float(line.strip().split('\\t')[0])\n               header_lines = i\n               break\n           except ValueError:\n               continue\n\n\n   raw_markers = lines[3].strip().split('\\t')\n   markers = [m for m in raw_markers if m and m not in ('Frame#', 'Time')]\n\n\n   marker_names = []\n   for m in markers:\n       if m and (not marker_names or m != marker_names[-1]):\n           marker_names.append(m)\n\n\n   data_lines = lines[header_lines:]\n   data = []\n  
 for line in data_lines:\n       vals = line.strip().split('\\t')\n       if len(vals) &gt; 2:\n           try:\n               row = [float(v) if v else np.nan for v in vals]\n               data.append(row)\n           except ValueError:\n               continue\n\n\n   data = np.array(data)\n   return marker_names, data, metadata\n\n\n\n\ntrc_files = sorted((work_dir \/ \"pose-3d\").glob(\"*.trc\"))\nprint(f\"\ud83d\udcca Found {len(trc_files)} TRC file(s):\")\nfor f in trc_files:\n   print(f\"   {f.name}\")\n\n\ntrc_file = None\nfor f in trc_files:\n   if 'filt' in f.name.lower() and 'augm' not in f.name.lower():\n       trc_file = f\n       break\nif trc_file is None and trc_files:\n   trc_file = trc_files[0]\n\n\nif trc_file:\n   print(f\"\\n\ud83d\udcc8 Visualizing: {trc_file.name}\")\n   marker_names, data, metadata = parse_trc(trc_file)\n   print(f\"   Markers: {len(marker_names)}\")\n   print(f\"   Frames:  {data.shape[0]}\")\n   print(f\"   Marker names: {marker_names[:10]}{'...' 
if len(marker_names) &gt; 10 else ''}\")\n\n\n   frames = data[:, 0].astype(int) if data.shape[1] &gt; 0 else []\n   times = data[:, 1] if data.shape[1] &gt; 1 else []\n   coords = data[:, 2:]\n\n\n   n_markers = len(marker_names)\n\n\n   mid_frame = len(data) \/\/ 2\n   fig = plt.figure(figsize=(16, 6))\n\n\n   ax1 = fig.add_subplot(131, projection='3d')\n   xs = coords[mid_frame, 0::3][:n_markers]\n   ys = coords[mid_frame, 1::3][:n_markers]\n   zs = coords[mid_frame, 2::3][:n_markers]\n\n\n   ax1.scatter(xs, ys, zs, c='dodgerblue', s=40, alpha=0.8, edgecolors='navy')\n   for i, name in enumerate(marker_names[:len(xs)]):\n       if i % 3 == 0:\n           ax1.text(xs[i], ys[i], zs[i], f' {name}', fontsize=6, alpha=0.7)\n\n\n   ax1.set_xlabel('X (m)')\n   ax1.set_ylabel('Y (m)')\n   ax1.set_zlabel('Z (m)')\n   ax1.set_title(f'3D Keypoints (Frame {int(frames[mid_frame])})', fontsize=10)\n\n\n   ax2 = fig.add_subplot(132)\n   key_markers = ['RAnkle', 'LAnkle', 'RWrist', 'LWrist', 'Nose']\n   colors_map = {'RAnkle': 'red', 'LAnkle': 'blue', 'RWrist': 'orange',\n                 'LWrist': 'green', 'Nose': 'purple'}\n\n\n   for mkr in key_markers:\n       if mkr in marker_names:\n           idx = marker_names.index(mkr)\n           z_col = idx * 3 + 2\n           if z_col &lt; coords.shape[1]:\n               ax2.plot(times, coords[:, z_col],\n                        label=mkr, color=colors_map.get(mkr, 'gray'),\n                        linewidth=1.2, alpha=0.8)\n\n\n   ax2.set_xlabel('Time (s)')\n   ax2.set_ylabel('Z position (m)')\n   ax2.set_title('Vertical Trajectories', fontsize=10)\n   ax2.legend(fontsize=8, loc='best')\n   ax2.grid(True, alpha=0.3)\n\n\n   ax3 = fig.add_subplot(133)\n   if len(times) &gt; 1:\n       dt = np.diff(times)\n       dt[dt == 0] = 1e-6\n       for mkr in ['RAnkle', 'RWrist']:\n           if mkr in marker_names:\n               idx = marker_names.index(mkr)\n               x_col, y_col, z_col = idx * 3, idx * 3 + 1, idx * 3 + 2\n        
       if z_col &lt; coords.shape[1]:\n                   dx = np.diff(coords[:, x_col])\n                   dy = np.diff(coords[:, y_col])\n                   dz = np.diff(coords[:, z_col])\n                   speed = np.sqrt(dx**2 + dy**2 + dz**2) \/ dt\n                   speed = np.clip(speed, 0, np.nanpercentile(speed, 99))\n                   ax3.plot(times[1:], speed, label=mkr,\n                            color=colors_map.get(mkr, 'gray'),\n                            linewidth=0.8, alpha=0.7)\n\n\n       ax3.set_xlabel('Time (s)')\n       ax3.set_ylabel('Speed (m\/s)')\n       ax3.set_title('Marker Speeds (Quality Check)', fontsize=10)\n       ax3.legend(fontsize=8)\n       ax3.grid(True, alpha=0.3)\n\n\n   plt.tight_layout()\n   plt.savefig(work_dir \/ 'trajectory_analysis.png', dpi=150, bbox_inches='tight')\n   plt.show()\n   print(\"\u2705 Trajectory plots saved to trajectory_analysis.png\")\n\n\nelse:\n   print(\"\u26a0 No TRC file found to visualize.\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We focus on interpreting the generated 3D motion data by parsing the TRC file and extracting marker trajectories in a structured way. We create visualizations of 3D keypoints, the vertical motion of selected body parts, and marker speeds, so we can inspect both movement patterns and signal quality. 
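<\/p>
<p>A joint angle is ultimately just vector geometry on three markers. As a sanity check independent of OpenSim, the sketch below computes the angle at the knee from hypothetical hip, knee, and ankle positions; the marker values are invented for illustration and this is plain trigonometry, not the Pose2Sim or OpenSim API.<\/p>

```python
import numpy as np

def segment_angle(a, b, c):
    """Angle at point b (degrees) between segments b->a and b->c."""
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical marker positions in metres, roughly a near-straight leg
hip, knee, ankle = [0.0, 0.0, 1.0], [0.0, 0.05, 0.55], [0.0, 0.0, 0.10]
print(round(segment_angle(hip, knee, ankle), 1))  # close to 180 = extended knee
```

<p>OpenSim's inverse kinematics is far more sophisticated, fitting a whole articulated model at once, but this is the geometric core of what a .mot angle column represents. 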
This helps us move beyond simply running the pipeline and start understanding what the reconstructed motion actually looks like over time.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">def parse_mot(mot_path):\n   \"\"\"Parse an OpenSim .mot file and return column names and data.\"\"\"\n   with open(mot_path, 'r') as f:\n       lines = f.readlines()\n\n\n   data_start = 0\n   for i, line in enumerate(lines):\n       if 'endheader' in line.lower():\n           data_start = i + 1\n           break\n\n\n   col_line = lines[data_start].strip().split('\\t')\n   if not col_line[0].replace('.', '').replace('-', '').replace('e', '').isdigit():\n       col_names = col_line\n       data_start += 1\n   else:\n       col_names = lines[data_start - 2].strip().split('\\t')\n\n\n   data = []\n   for line in lines[data_start:]:\n       vals = line.strip().split('\\t')\n       try:\n           row = [float(v) for v in vals]\n           data.append(row)\n       except ValueError:\n           continue\n\n\n   return col_names, np.array(data)\n\n\n\n\nmot_files = sorted((work_dir \/ \"kinematics\").glob(\"*.mot\")) if (work_dir \/ \"kinematics\").exists() else []\n\n\nif mot_files:\n   mot_file = mot_files[0]\n   print(f\"\ud83d\udcc8 Visualizing joint angles: 
{mot_file.name}\")\n\n\n   col_names, jdata = parse_mot(mot_file)\n   print(f\"   Columns: {len(col_names)}\")\n   print(f\"   Frames:  {jdata.shape[0]}\")\n   print(f\"   Joints:  {col_names[:8]}...\")\n\n\n   fig, axes = plt.subplots(2, 3, figsize=(16, 8))\n   fig.suptitle('OpenSim Joint Angles from Pose2Sim', fontsize=14, fontweight='bold')\n\n\n   joints_to_plot = [\n       ('hip_flexion_r', 'Right Hip Flexion'),\n       ('hip_flexion_l', 'Left Hip Flexion'),\n       ('knee_angle_r', 'Right Knee Angle'),\n       ('knee_angle_l', 'Left Knee Angle'),\n       ('ankle_angle_r', 'Right Ankle Angle'),\n       ('ankle_angle_l', 'Left Ankle Angle'),\n   ]\n\n\n   time_col = jdata[:, 0] if 'time' in col_names[0].lower() else np.arange(jdata.shape[0])\n\n\n   for ax, (joint_key, title) in zip(axes.flat, joints_to_plot):\n       col_idx = None\n       for i, cn in enumerate(col_names):\n           if joint_key.lower() in cn.lower():\n               col_idx = i\n               break\n\n\n       if col_idx is not None and col_idx &lt; jdata.shape[1]:\n           ax.plot(time_col, jdata[:, col_idx], 'b-', linewidth=1.2)\n           ax.set_title(title, fontsize=10)\n           ax.set_xlabel('Time (s)')\n           ax.set_ylabel('Angle (\u00b0)')\n           ax.grid(True, alpha=0.3)\n       else:\n           ax.text(0.5, 0.5, f'{title}\\n(not found)', ha='center',\n                   va='center', transform=ax.transAxes, fontsize=10, color='gray')\n           ax.set_title(title, fontsize=10, color='gray')\n\n\n   plt.tight_layout()\n   plt.savefig(work_dir \/ 'joint_angles.png', dpi=150, bbox_inches='tight')\n   plt.show()\n   print(\"\u2705 Joint angle plots saved to joint_angles.png\")\n\n\nelse:\n   print(\"\u2139 No .mot 
files found (kinematics step was skipped or OpenSim unavailable).\")\n   print(\"  Joint angle visualization will be available when running locally with OpenSim.\")\n\n\nprint(\"\ud83d\udcca DATA QUALITY ANALYSIS\")\nprint(\"=\" * 60)\n\n\nif trc_file:\n   marker_names, data, metadata = parse_trc(trc_file)\n   coords = data[:, 2:]\n\n\n   print(\"\\n\ud83d\udd0d Per-Marker Data Completeness:\")\n   print(\"-\" * 45)\n   n_frames = data.shape[0]\n   for i, mkr in enumerate(marker_names):\n       x_col = i * 3\n       if x_col &lt; coords.shape[1]:\n           valid = np.sum(~np.isnan(coords[:, x_col]))\n           pct = 100 * valid \/ n_frames\n           bar = \"\u2588\" * int(pct \/ 5) + \"\u2591\" * (20 - int(pct \/ 5))\n           status = \"\u2705\" if pct &gt; 90 else \"\u26a0\" if pct &gt; 50 else \"\u274c\"\n           print(f\"  {status} {mkr:&lt;20s} {bar} {pct:5.1f}% ({valid}\/{n_frames})\")\n\n\n   print(\"\\n\ud83d\udd0d Trajectory Smoothness (lower = better):\")\n   print(\"-\" * 45)\n   for mkr in marker_names[:10]:\n       idx = marker_names.index(mkr)\n       cols = [idx * 3, idx * 3 + 1, idx * 3 + 2]\n       if cols[2] &lt; coords.shape[1]:\n           xyz = coords[:, cols]\n           accel = 
np.diff(xyz, axis=0, n=2)\n           jitter = np.nanmean(np.abs(accel))\n           bar = \"\u2588\" * min(int(jitter * 500), 20)\n           print(f\"  {mkr:&lt;20s} jitter={jitter:.4f}  {bar}\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We further analyze the kinematic results by parsing the MOT file and plotting key joint angles after the OpenSim stage completes successfully. We then evaluate overall data quality by measuring marker completeness and estimating smoothness using inter-frame jitter, which gives us a sense of how stable and reliable the reconstructed trajectories are. This section helps us assess whether the motion output is ready for meaningful biomechanical interpretation or whether additional tuning is needed.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">print(\"\\n\ud83d\udd27 ADVANCED: PROGRAMMATIC CONFIGURATION\")\nprint(\"=\" * 60)\nprint(\"\"\"\nYou can modify ANY Config.toml parameter programmatically.\nThis is useful for batch experiments or parameter sweeps.\nBelow are common modifications for different scenarios.\n\"\"\")\n\n\nimport toml\n\n\nconfig = toml.load(work_dir \/ \"Config.toml\")\n\n\nprint(\"Example 1: Change filter type to Kalman\")\nexample_config_1 = {\n   'filtering': {\n       'type': 'kalman',\n   
}\n}\nprint(f\"  config['filtering']['type'] = 'kalman'\")\n\n\nprint(\"\\nExample 2: Tighten triangulation quality thresholds\")\nexample_config_2 = {\n   'triangulation': {\n       'likelihood_threshold': 0.4,\n       'reproj_error_threshold_triangulation': 15,\n       'min_cameras_for_triangulation': 3,\n   }\n}\nfor k, v in example_config_2['triangulation'].items():\n   print(f\"  config['triangulation']['{k}'] = {v}\")\n\n\nprint(\"\\nExample 3: Use lightweight mode for faster processing\")\nprint(\"  config['pose']['mode'] = 'lightweight'\")\nprint(\"  config['pose']['mode'] = 'balanced', 'performance', or custom dict\")\n\n\nprint(\"\\nExample 4: Enable multi-person analysis\")\nprint(\"  config['project']['multi_person'] = True\")\nprint(\"  config['project']['participant_height'] = [1.72, 1.65]\")\nprint(\"  config['project']['participant_mass'] = [70.0, 55.0]\")\n\n\nprint(\"\\nExample 5: Process only specific frames\")\nprint(\"  config['project']['frame_range'] = [50, 200]\")\n\n\nprint(\"\\n\ud83d\udca1 To apply changes, modify and save Config.toml, then re-run the pipeline:\")\nprint(\"\"\"\n   import toml\n   config = toml.load('Config.toml')\n   config['filtering']['type'] = 'kalman'\n   with open('Config.toml', 'w') as f:\n       toml.dump(config, f)\n  \n   from Pose2Sim import Pose2Sim\n   Pose2Sim.filtering()\n\"\"\")\n\n\nprint(\"\\n\ud83d\ude80 ALTERNATIVE: ONE-LINE PIPELINE EXECUTION\")\nprint(\"=\" * 60)\nprint(\"\"\"\nInstead of calling each step individually, use:\n\n\n   from Pose2Sim import Pose2Sim\n   Pose2Sim.runAll()\n\n\nOr with selective steps:\n\n\n   Pose2Sim.runAll(\n       do_calibration=True,\n       do_poseEstimation=True,\n       do_synchronization=True,\n       
do_personAssociation=True,\n       do_triangulation=True,\n       do_filtering=True,\n       do_markerAugmentation=False,\n       do_kinematics=True\n   )\n\n\nThis is equivalent to calling each step in sequence, but more concise\nfor production workflows.\n\"\"\")\n\n\nprint(\"\\n\ud83d\udce6 ALL GENERATED OUTPUT FILES\")\nprint(\"=\" * 60)\n\n\noutput_dirs = ['pose', 'pose-3d', 'kinematics']\nfor dirname in output_dirs:\n   d = work_dir \/ dirname\n   if d.exists():\n       files = list(d.rglob(\"*\"))\n       files = [f for f in files if f.is_file()]\n       total_size = sum(f.stat().st_size for f in files) \/ (1024 * 1024)\n       print(f\"\\n\ud83d\udcc1 {dirname}\/ ({len(files)} files, {total_size:.1f} MB total)\")\n       for f in sorted(files)[:15]:\n           rel = f.relative_to(d)\n           size = f.stat().st_size \/ 1024\n           print(f\"   \ud83d\udcc4 {rel} ({size:.1f} KB)\")\n       if len(files) &gt; 15:\n           print(f\"   ... 
and {len(files) - 15} more files\")\n\n\nprint(\"\\n\" + \"=\" * 60)\nprint(\"\ud83d\udce5 TO DOWNLOAD RESULTS IN GOOGLE COLAB:\")\nprint(\"=\" * 60)\nprint(\"\"\"\n from google.colab import files\n files.download('\/content\/Pose2Sim_Tutorial\/pose-3d\/YOUR_FILE.trc')\n\n\n !zip -r \/content\/pose2sim_results.zip \/content\/Pose2Sim_Tutorial\/pose-3d \/content\/Pose2Sim_Tutorial\/kinematics\n files.download('\/content\/pose2sim_results.zip')\n\"\"\")\n\n\nprint(\"\"\"\n\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557\n\u2551                         TUTORIAL COMPLETE! 
\ud83c\udf89                              \u2551\n\u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563\n\u2551                                                                            \u2551\n\u2551  WHAT YOU ACHIEVED:                                                        \u2551\n\u2551  \u2705 Installed Pose2Sim on Google Colab                                     \u2551\n\u2551  \u2705 Ran camera calibration (conversion mode)                               \u2551\n\u2551  \u2705 Performed 2D pose estimation with RTMPose                              \u2551\n\u2551  \u2705 Synchronized multi-camera views                                        \u2551\n\u2551  \u2705 Associated persons across cameras                                      \u2551\n\u2551  \u2705 Triangulated 2D\u21923D keypoints                                           \u2551\n\u2551  \u2705 Filtered 3D trajectories                                               \u2551\n\u2551  \u2705 Augmented markers with Stanford LSTM                                   \u2551\n\u2551  \u2705 Computed joint angles with OpenSim IK (if available)                   \u2551\n\u2551  \u2705 Visualized trajectories, joint angles, and data quality                \u2551\n\u2551                                                                            \u2551\n\u2551  NEXT STEPS:                                                               \u2551\n\u2551  \u2022 Try with YOUR OWN multi-camera videos                                   \u2551\n\u2551  \u2022 Experiment with 'performance' mode for higher accuracy                  \u2551\n\u2551  \u2022 Try multi-person mode (Demo_MultiPerson)                                \u2551\n\u2551  \u2022 Visualize in OpenSim GUI or Blender (Pose2Sim_Blender add-on)          \u2551\n\u2551  \u2022 Use custom DeepLabCut models for animal tracking                        \u2551\n\u2551  \u2022 Run batch processing over multiple trials                               \u2551\n\u2551                                                                            \u2551\n\u2551  CITATION:                                                             
    \u2551\n\u2551  Pagnon, D., Domalain, M., &amp; Reveret, L. (2022).                         \u2551\n\u2551                                                                            \u2551\n\u2551  GitHub:  https:\/\/github.com\/perfanalytics\/pose2sim                        \u2551\n\u2551  Discord: https:\/\/discord.com\/invite\/4mXUdSFjmt                           \u2551\n\u2551  Docs:    https:\/\/perfanalytics.github.io\/pose2sim\/                        \u2551\n\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d\n\"\"\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We explore more advanced usage patterns by demonstrating how to modify the Pose2Sim configuration programmatically for experiments and alternative processing setups. We also review how to run the entire pipeline in one call, inspect all generated output files, and prepare the results for download directly from Colab. Finally, we wrap up the tutorial with a summary of what we accomplished and highlight the next steps to extend the workflow to our own datasets and research goals.<\/p>\n<p>In conclusion, we gained a full hands-on understanding of how Pose2Sim transforms multi-view videos into 3D motion trajectories and biomechanical outputs within a practical Colab workflow. We saw how each stage of the pipeline connects to the next, from extracting reliable 2D keypoints to reconstructing filtered 3D coordinates and, when available, estimating joint kinematics through OpenSim. 
We also went beyond basic execution by visualizing results, checking data quality, and exploring programmatic configuration changes for more advanced experimentation. At the end, we have a reusable, adaptable pipeline that we can extend to our own datasets, refine for greater accuracy, and use as a foundation for deeper motion analysis and markerless biomechanics research.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out\u00a0the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Computer%20Vision\/pose2sim_markerless_3d_kinematics_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a>.<\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/04\/10\/a-coding-guide-to-markerless-3d-human-kinematics-with-pose2sim-rtmpose-and-opensim\/\">A Coding Guide to Markerless 3D Human Kinematics with Pose2Sim, RTMPose, and OpenSim<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build and&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-701","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/701","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=701"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/701\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.
org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=701"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=701"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=701"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}