{"id":630,"date":"2026-03-29T13:40:25","date_gmt":"2026-03-29T05:40:25","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=630"},"modified":"2026-03-29T13:40:25","modified_gmt":"2026-03-29T05:40:25","slug":"a-coding-guide-to-exploring-nanobots-full-agent-pipeline-from-wiring-up-tools-and-memory-to-skills-subagents-and-cron-scheduling","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=630","title":{"rendered":"A Coding Guide to Exploring nanobot\u2019s Full Agent Pipeline, from Wiring Up Tools and Memory to Skills, Subagents, and Cron Scheduling"},"content":{"rendered":"<p>In this tutorial, we take a deep dive into <a href=\"https:\/\/github.com\/HKUDS\/nanobot\"><strong>nanobot<\/strong><\/a>, the ultra-lightweight personal AI agent framework from HKUDS that packs full agent capabilities into roughly 4,000 lines of Python. Rather than simply installing and running it out of the box, we crack open the hood and manually recreate each of its core subsystems: the agent loop, tool execution, memory persistence, skills loading, session management, subagent spawning, and cron scheduling, so we understand exactly how they work. We wire everything up with OpenAI\u2019s gpt-4o-mini as our LLM provider, enter our API key securely through the terminal (never exposing it in notebook output), and progressively build from a single tool-calling loop all the way to a multi-step research pipeline that reads and writes files, stores long-term memories, and delegates tasks to concurrent background workers. 
By the end, we don\u2019t just know how to use nanobot, we understand how to extend it with custom tools, skills, and our own agent architectures.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import sys\nimport os\nimport subprocess\n\n\ndef section(title, emoji=\"\ud83d\udd39\"):\n   \"\"\"Pretty-print a section header.\"\"\"\n   width = 72\n   print(f\"\\n{'\u2550' * width}\")\n   print(f\"  {emoji}  {title}\")\n   print(f\"{'\u2550' * width}\\n\")\n\n\ndef info(msg):\n   print(f\"  \u2139  {msg}\")\n\n\ndef success(msg):\n   print(f\"  \u2705 {msg}\")\n\n\ndef code_block(code):\n   print(f\"  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\")\n   for line in code.strip().split(\"\\n\"):\n       print(f\"  \u2502 {line}\")\n   print(f\"  
\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\")\n\n\nsection(\"STEP 1 \u00b7 Installing nanobot-ai &amp; Dependencies\", \"\ud83d\udce6\")\n\n\ninfo(\"Installing nanobot-ai from PyPI (latest stable)...\")\nsubprocess.check_call([\n   sys.executable, \"-m\", \"pip\", \"install\", \"-q\",\n   \"nanobot-ai\", \"openai\", \"rich\", \"httpx\"\n])\nsuccess(\"nanobot-ai installed successfully!\")\n\n\nimport importlib.metadata\nnanobot_version = importlib.metadata.version(\"nanobot-ai\")\nprint(f\"  \ud83d\udccc nanobot-ai version: {nanobot_version}\")\n\n\nsection(\"STEP 2 \u00b7 Secure OpenAI API Key Input\", \"\ud83d\udd11\")\n\n\ninfo(\"Your API key will NOT be printed or stored in notebook output.\")\ninfo(\"It is held only in memory for this session.\\n\")\n\n\ntry:\n   from google.colab import userdata\n   OPENAI_API_KEY = userdata.get(\"OPENAI_API_KEY\")\n   if not OPENAI_API_KEY:\n       raise ValueError(\"Not set in Colab secrets\")\n   success(\"Loaded API key from Colab Secrets ('OPENAI_API_KEY').\")\n   info(\"Tip: You can set this in Colab \u2192 \ud83d\udd11 Secrets panel on the left sidebar.\")\nexcept Exception:\n   import getpass\n   OPENAI_API_KEY = getpass.getpass(\"Enter your OpenAI API key: 
\")\n   success(\"API key captured securely via terminal input.\")\n\n\nos.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\n\n\nimport openai\nclient = openai.OpenAI(api_key=OPENAI_API_KEY)\ntry:\n   client.models.list()\n   success(\"OpenAI API key validated \u2014 connection successful!\")\nexcept Exception as e:\n   print(f\"  \u274c API key validation failed: {e}\")\n   print(\"     Please restart and enter a valid key.\")\n   sys.exit(1)\n\n\nsection(\"STEP 3 \u00b7 Configuring nanobot for OpenAI\", \"\u2699\")\n\n\nimport json\nfrom pathlib import Path\n\n\nNANOBOT_HOME = Path.home() \/ \".nanobot\"\nNANOBOT_HOME.mkdir(parents=True, exist_ok=True)\n\n\nWORKSPACE = NANOBOT_HOME \/ \"workspace\"\nWORKSPACE.mkdir(parents=True, exist_ok=True)\n(WORKSPACE \/ \"memory\").mkdir(parents=True, exist_ok=True)\n\n\nconfig = {\n   \"providers\": {\n       \"openai\": {\n           \"apiKey\": OPENAI_API_KEY\n       }\n   },\n   \"agents\": {\n       \"defaults\": {\n           \"model\": \"openai\/gpt-4o-mini\",\n           \"maxTokens\": 4096,\n           \"workspace\": str(WORKSPACE)\n       }\n   },\n   \"tools\": {\n       \"restrictToWorkspace\": True\n   }\n}\n\n\nconfig_path = NANOBOT_HOME \/ \"config.json\"\nconfig_path.write_text(json.dumps(config, indent=2))\nsuccess(f\"Config written to {config_path}\")\n\n\nagents_md = WORKSPACE \/ \"AGENTS.md\"\nagents_md.write_text(\n   \"# Agent Instructions\\n\\n\"\n   \"You are nanobot \ud83d\udc08, an ultra-lightweight personal AI assistant.\\n\"\n   \"You are helpful, concise, and use tools when needed.\\n\"\n   \"Always explain your reasoning step by 
step.\\n\"\n)\n\n\nsoul_md = WORKSPACE \/ \"SOUL.md\"\nsoul_md.write_text(\n   \"# Personality\\n\\n\"\n   \"- Friendly and approachable\\n\"\n   \"- Technically precise\\n\"\n   \"- Uses emoji sparingly for warmth\\n\"\n)\n\n\nuser_md = WORKSPACE \/ \"USER.md\"\nuser_md.write_text(\n   \"# User Profile\\n\\n\"\n   \"- The user is exploring the nanobot framework.\\n\"\n   \"- They are interested in AI agent architectures.\\n\"\n)\n\n\nmemory_md = WORKSPACE \/ \"memory\" \/ \"MEMORY.md\"\nmemory_md.write_text(\"# Long-term Memory\\n\\n_No memories stored yet._\\n\")\n\n\nsuccess(\"Workspace bootstrap files created:\")\nfor f in [agents_md, soul_md, user_md, memory_md]:\n   print(f\"     \ud83d\udcc4 {f.relative_to(NANOBOT_HOME)}\")\n\n\nsection(\"STEP 4 \u00b7 nanobot Architecture Deep Dive\", \"\ud83c\udfd7\")\n\n\ninfo(\"\"\"nanobot is organized into 7 subsystems in ~4,000 lines of code:\n\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502                    USER INTERFACES                       \u2502\n \u2502         CLI  \u00b7  Telegram  \u00b7  WhatsApp  \u00b7  Discord        \u2502\n 
\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                    \u2502  InboundMessage \/ OutboundMessage\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25bc\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502                    MESSAGE BUS                           \u2502\n \u2502          publish_inbound() \/ publish_outbound()          \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                    \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25bc\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502                  AGENT LOOP (loop.py)                    \u2502\n \u2502    \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510  
\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510    \u2502\n \u2502    \u2502 Context  \u2502\u2192 \u2502   LLM    \u2502\u2192 \u2502  Tool Execution    \u2502    \u2502\n \u2502    \u2502 Builder  \u2502  \u2502  Call    \u2502  \u2502  (if tool_calls)   \u2502    \u2502\n \u2502    \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518    \u2502\n \u2502         \u25b2                              \u2502  loop back     \u2502\n \u2502         \u2502          \u25c4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518  until done    \u2502\n \u2502    \u250c\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2510  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510    \u2502\n \u2502    \u2502 Memory  \u2502  \u2502  Skills  \u2502  \u2502   Subagent Mgr     \u2502    \u2502\n \u2502    \u2502 Store   \u2502  \u2502  Loader  \u2502  \u2502   (spawn tasks)    \u2502    \u2502\n \u2502    \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518    \u2502\n 
\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                    \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25bc\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502               LLM PROVIDER LAYER                         \u2502\n \u2502     OpenAI \u00b7 Anthropic \u00b7 OpenRouter \u00b7 DeepSeek \u00b7 ...     \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n\n The Agent Loop iterates up to 40 times (configurable):\n   1. ContextBuilder assembles system prompt + memory + skills + history\n   2. LLM is called with tools definitions\n   3. If response has tool_calls \u2192 execute tools, append results, loop\n   4. If response is plain text \u2192 return as final answer\n\"\"\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the full foundation of the tutorial by importing the required modules, defining helper functions for clean section display, and installing the nanobot dependencies inside Google Colab. 
We then securely load and validate the OpenAI API key so the rest of the notebook can interact with the model without exposing credentials in the notebook output. After that, we configure the nanobot workspace and create the core bootstrap files, such as AGENTS.md, SOUL.md, USER.md, and MEMORY.md, and study the high-level architecture so we understand how the framework is organized before moving into implementation.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">section(\"STEP 5 \u00b7 The Agent Loop \u2014 Core Concept in Action\", \"\ud83d\udd04\")\n\n\ninfo(\"We'll manually recreate nanobot's agent loop pattern using OpenAI.\")\ninfo(\"This is exactly what loop.py does internally.\\n\")\n\n\nimport json as _json\nimport datetime\n\n\nTOOLS = [\n   {\n       \"type\": \"function\",\n       \"function\": {\n           \"name\": \"get_current_time\",\n           \"description\": \"Get the current date and time.\",\n           \"parameters\": {\"type\": \"object\", \"properties\": {}, \"required\": []}\n       }\n   },\n   {\n       \"type\": \"function\",\n       \"function\": {\n           \"name\": \"calculate\",\n           \"description\": \"Evaluate a mathematical expression.\",\n           \"parameters\": {\n               \"type\": 
\"object\",\n               \"properties\": {\n                   \"expression\": {\n                       \"type\": \"string\",\n                       \"description\": \"Math expression to evaluate, e.g. '2**10 + 42'\"\n                   }\n               },\n               \"required\": [\"expression\"]\n           }\n       }\n   },\n   {\n       \"type\": \"function\",\n       \"function\": {\n           \"name\": \"read_file\",\n           \"description\": \"Read the contents of a file in the workspace.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   \"path\": {\n                       \"type\": \"string\",\n                       \"description\": \"Relative file path within the workspace\"\n                   }\n               },\n               \"required\": [\"path\"]\n           }\n       }\n   },\n   {\n       \"type\": \"function\",\n       \"function\": {\n           \"name\": \"write_file\",\n           \"description\": \"Write content to a file in the workspace.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   \"path\": {\"type\": \"string\", \"description\": \"Relative file path\"},\n                   \"content\": {\"type\": \"string\", \"description\": \"Content to write\"}\n               },\n               \"required\": [\"path\", \"content\"]\n           }\n       }\n   },\n   {\n       \"type\": \"function\",\n       \"function\": {\n           \"name\": \"save_memory\",\n           \"description\": \"Save a fact to the agent's long-term memory.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   \"fact\": {\"type\": \"string\", \"description\": \"The fact to remember\"}\n               },\n               \"required\": [\"fact\"]\n           }\n       }\n   }\n]\n\n\ndef execute_tool(name: str, arguments: dict) -&gt; 
str:\n   \"\"\"Execute a tool call \u2014 mirrors nanobot's ToolRegistry.execute().\"\"\"\n   if name == \"get_current_time\":\n       return datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n\n\n   elif name == \"calculate\":\n       expr = arguments.get(\"expression\", \"\")\n       try:\n           result = eval(expr, {\"__builtins__\": {}}, {\"abs\": abs, \"round\": round, \"min\": min, \"max\": max})\n           return str(result)\n       except Exception as e:\n           return f\"Error: {e}\"\n\n\n   elif name == \"read_file\":\n       fpath = WORKSPACE \/ arguments.get(\"path\", \"\")\n       if fpath.exists():\n           return fpath.read_text()[:4000]\n       return f\"Error: File not found \u2014 {arguments.get('path')}\"\n\n\n   elif name == \"write_file\":\n       fpath = WORKSPACE \/ arguments.get(\"path\", \"\")\n       fpath.parent.mkdir(parents=True, exist_ok=True)\n       fpath.write_text(arguments.get(\"content\", \"\"))\n       return f\"Successfully wrote {len(arguments.get('content', ''))} chars to {arguments.get('path')}\"\n\n\n   elif name == \"save_memory\":\n       fact = arguments.get(\"fact\", \"\")\n       mem_file = WORKSPACE \/ \"memory\" \/ \"MEMORY.md\"\n       existing = mem_file.read_text()\n       timestamp = datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M\")\n       mem_file.write_text(existing + f\"\\n- [{timestamp}] {fact}\\n\")\n       return f\"Memory saved: {fact}\"\n\n\n   return f\"Unknown tool: {name}\"\n\n\n\n\ndef agent_loop(user_message: str, max_iterations: int = 10, verbose: bool = True):\n   \"\"\"\n   Recreates nanobot's AgentLoop._process_message() logic.\n\n\n   The loop:\n     1. Build context (system prompt + bootstrap files + memory)\n     2. Call LLM with tools\n     3. If tool_calls \u2192 execute \u2192 append results \u2192 loop\n     4. 
If text response \u2192 return final answer\n   \"\"\"\n   system_parts = []\n   for md_file in [\"AGENTS.md\", \"SOUL.md\", \"USER.md\"]:\n       fpath = WORKSPACE \/ md_file\n       if fpath.exists():\n           system_parts.append(fpath.read_text())\n\n\n   mem_file = WORKSPACE \/ \"memory\" \/ \"MEMORY.md\"\n   if mem_file.exists():\n       system_parts.append(f\"\\n## Your Memory\\n{mem_file.read_text()}\")\n\n\n   system_prompt = \"\\n\\n\".join(system_parts)\n\n\n   messages = [\n       {\"role\": \"system\", \"content\": system_prompt},\n       {\"role\": \"user\", \"content\": user_message}\n   ]\n\n\n   if verbose:\n       print(f\"  \ud83d\udce8 User: {user_message}\")\n       print(f\"  \ud83e\udde0 System prompt: {len(system_prompt)} chars \"\n             f\"(from {len(system_parts)} bootstrap files)\")\n       print()\n\n\n   for iteration in range(1, max_iterations + 1):\n       if verbose:\n           print(f\"  \u2500\u2500 Iteration {iteration}\/{max_iterations} \u2500\u2500\")\n\n\n       response = client.chat.completions.create(\n           model=\"gpt-4o-mini\",\n           messages=messages,\n           tools=TOOLS,\n           tool_choice=\"auto\",\n           max_tokens=2048\n       )\n\n\n       choice = response.choices[0]\n       message = choice.message\n\n\n       if message.tool_calls:\n           if verbose:\n               print(f\"  \ud83d\udd27 LLM requested {len(message.tool_calls)} tool call(s):\")\n\n\n           messages.append(message.model_dump())\n\n\n           for tc in message.tool_calls:\n               fname = tc.function.name\n               
args = _json.loads(tc.function.arguments) if tc.function.arguments else {}\n\n\n               if verbose:\n                   print(f\"     \u2192 {fname}({_json.dumps(args, ensure_ascii=False)[:80]})\")\n\n\n               result = execute_tool(fname, args)\n\n\n               if verbose:\n                   print(f\"     \u2190 {result[:100]}{'...' if len(result) &gt; 100 else ''}\")\n\n\n               messages.append({\n                   \"role\": \"tool\",\n                   \"tool_call_id\": tc.id,\n                   \"content\": result\n               })\n\n\n           if verbose:\n               print()\n\n\n       else:\n           final = message.content or \"\"\n           if verbose:\n               print(f\"  \ud83d\udcac Agent: {final}\\n\")\n           return final\n\n\n   return \"\u26a0 Max iterations reached without a final response.\"\n\n\n\n\nprint(\"\u2500\" * 60)\nprint(\"  DEMO 1: Time-aware calculation with tool chaining\")\nprint(\"\u2500\" * 60)\nresult1 = agent_loop(\n   \"What is the current time? Also, calculate 2^20 + 42 for me.\"\n)\n\n\nprint(\"\u2500\" * 60)\nprint(\"  DEMO 2: File creation + memory storage\")\nprint(\"\u2500\" * 60)\nresult2 = agent_loop(\n   \"Write a haiku about AI agents to a file called 'haiku.txt'. \"\n   \"Then remember that I enjoy poetry about technology.\"\n)\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We manually recreate the heart of nanobot by defining the tool schemas, implementing their execution logic, and building the iterative agent loop that connects the LLM to tools. 
We assemble the prompt from the workspace files and memory, send the conversation to the model, detect tool calls, execute them, append the results back into the conversation, and keep looping until the model returns a final answer. We then test this mechanism with practical examples that involve time lookups, calculations, file writing, and memory saving, so we can see the loop operate exactly like the internal nanobot flow.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">section(\"STEP 6 \u00b7 Memory System \u2014 Persistent Agent Memory\", \"\ud83e\udde0\")\n\n\ninfo(\"\"\"nanobot's memory system (memory.py) uses two storage mechanisms:\n\n\n 1. MEMORY.md  \u2014 Long-term facts (always loaded into context)\n 2. 
YYYY-MM-DD.md \u2014 Daily journal entries (loaded for recent days)\n\n\n Memory consolidation runs periodically to summarize and compress\n old entries, keeping the context window manageable.\n\"\"\")\n\n\nmem_content = (WORKSPACE \/ \"memory\" \/ \"MEMORY.md\").read_text()\nprint(\"  \ud83d\udcc2 Current MEMORY.md contents:\")\nprint(\"  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\")\nfor line in mem_content.strip().split(\"\\n\"):\n   print(f\"  \u2502 {line}\")\nprint(\"  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\\n\")\n\n\ntoday = datetime.datetime.now().strftime(\"%Y-%m-%d\")\ndaily_file = WORKSPACE \/ \"memory\" \/ f\"{today}.md\"\ndaily_file.write_text(\n   f\"# Daily Log \u2014 {today}\\n\\n\"\n   \"- User ran the nanobot advanced tutorial\\n\"\n   \"- Explored agent loop, tools, and memory\\n\"\n   \"- Created a haiku about AI agents\\n\"\n)\nsuccess(f\"Daily journal created: memory\/{today}.md\")\n\n\nprint(\"\\n  \ud83d\udcc1 Workspace contents:\")\nfor item in sorted(WORKSPACE.rglob(\"*\")):\n   if item.is_file():\n       rel = item.relative_to(WORKSPACE)\n       size = item.stat().st_size\n       print(f\"     {'\ud83d\udcc4' if item.suffix == '.md' else '\ud83d\udcdd'} {rel} ({size} bytes)\")\n\n\nsection(\"STEP 7 \u00b7 Skills System \u2014 Extending Agent Capabilities\", \"\ud83c\udfaf\")\n\n\ninfo(\"\"\"nanobot's SkillsLoader (skills.py) reads Markdown files from the\nskills\/ directory. Each skill has:\n - A name and description (for the LLM to decide when to use it)\n - Instructions the LLM follows when the skill is activated\n - Some skills are 'always loaded'; others are loaded on demand\n\n\nLet's create a custom skill and see how the agent uses it.\n\"\"\")\n\n\nskills_dir = WORKSPACE \/ \"skills\"\nskills_dir.mkdir(exist_ok=True)\n\n\ndata_skill = skills_dir \/ \"data_analyst.md\"\ndata_skill.write_text(\"\"\"# Data Analyst Skill\n\n\n## Description\nAnalyze data, compute statistics, and provide insights from numbers.\n\n\n## Instructions\nWhen asked to analyze data:\n1. Identify the data type and structure\n2. Compute relevant statistics (mean, median, range, std dev)\n3. Look for patterns and outliers\n4. Present findings in a clear, structured format\n5. Suggest follow-up questions\n\n\n## Always Available\nfalse\n\"\"\")\n\n\nreview_skill = skills_dir \/ \"code_reviewer.md\"\nreview_skill.write_text(\"\"\"# Code Reviewer Skill\n\n\n## Description\nReview code for bugs, security issues, and best practices.\n\n\n## Instructions\nWhen reviewing code:\n1. Check for common bugs and logic errors\n2. Identify security vulnerabilities\n3. Suggest performance improvements\n4. Evaluate code style and readability\n5. 
Rate the code quality on a 1-10 scale\n\n\n## Always Available\ntrue\n\"\"\")\n\n\nsuccess(\"Custom skills created:\")\nfor f in skills_dir.iterdir():\n   print(f\"     \ud83c\udfaf {f.name}\")\n\n\nprint(\"\\n  \ud83e\uddea Testing skill-aware agent interaction:\")\nprint(\"  \" + \"\u2500\" * 56)\n\n\nskills_context = \"\\n\\n## Available Skills\\n\"\nfor skill_file in skills_dir.glob(\"*.md\"):\n   content = skill_file.read_text()\n   skills_context += f\"\\n### {skill_file.stem}\\n{content}\\n\"\n\n\nresult3 = agent_loop(\n   skills_context + \"\\n\\nReview this Python code for issues:\\n\\n\"\n   \"```python\\n\"\n   \"def get_user(id):\\n\"\n   \"    query = f'SELECT * FROM users WHERE id = {id}'\\n\"\n   \"    result = db.execute(query)\\n\"\n   \"    return result\\n\"\n   \"```\"\n)\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We move into the persistent memory system by inspecting the long-term memory file, creating a daily journal entry, and reviewing how the workspace evolves after earlier interactions. We then extend the agent with a skills system by creating markdown-based skill files that describe specialized behaviors such as data analysis and code review. 
Finally, we simulate how skill-aware prompting works by exposing these skills to the agent and asking it to review a Python function, which helps us see how nanobot can be guided through modular capability descriptions.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">section(\"STEP 8 \u00b7 Custom Tool Creation \u2014 Extending the Agent\", \"\ud83d\udd27\")\n\n\ninfo(\"\"\"nanobot's tool system uses a ToolRegistry with a simple interface.\nEach tool needs:\n - A name and description\n - A JSON Schema for parameters\n - An execute() method\n\n\nLet's create custom tools and wire them into our agent loop.\n\"\"\")\n\n\nimport random\n\n\nCUSTOM_TOOLS = [\n   {\n       \"type\": \"function\",\n       \"function\": {\n           \"name\": \"roll_dice\",\n           \"description\": \"Roll one or more dice with a given number of sides.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   \"num_dice\": {\"type\": \"integer\", \"description\": \"Number of dice to roll\", \"default\": 1},\n                   \"sides\": {\"type\": \"integer\", \"description\": \"Number of sides per die\", \"default\": 6}\n               },\n               \"required\": []\n           }\n       }\n   },\n   
{\n       \"type\": \"function\",\n       \"function\": {\n           \"name\": \"text_stats\",\n           \"description\": \"Compute statistics about a text: word count, char count, sentence count, reading time.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   \"text\": {\"type\": \"string\", \"description\": \"The text to analyze\"}\n               },\n               \"required\": [\"text\"]\n           }\n       }\n   },\n   {\n       \"type\": \"function\",\n       \"function\": {\n           \"name\": \"generate_password\",\n           \"description\": \"Generate a random secure password.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   \"length\": {\"type\": \"integer\", \"description\": \"Password length\", \"default\": 16}\n               },\n               \"required\": []\n           }\n       }\n   }\n]\n\n\n_original_execute = execute_tool\n\n\ndef execute_tool_extended(name: str, arguments: dict) -&gt; str:\n   if name == \"roll_dice\":\n       n = arguments.get(\"num_dice\", 1)\n       s = arguments.get(\"sides\", 6)\n       rolls = [random.randint(1, s) for _ in range(n)]\n       return f\"Rolled {n}d{s}: {rolls} (total: {sum(rolls)})\"\n\n\n   elif name == \"text_stats\":\n       text = arguments.get(\"text\", \"\")\n       words = len(text.split())\n       chars = len(text)\n       sentences = text.count('.') + text.count('!') + text.count('?')\n       reading_time = round(words \/ 200, 1)\n       return _json.dumps({\n           \"words\": words,\n           \"characters\": chars,\n           \"sentences\": max(sentences, 1),\n           \"reading_time_minutes\": reading_time\n       })\n\n\n   elif name == \"generate_password\":\n       import string\n       length = arguments.get(\"length\", 16)\n       chars = string.ascii_letters + string.digits + \"!@#$%^&amp;*\"\n       pwd = 
''.join(random.choice(chars) for _ in range(length))\n       return f\"Generated password ({length} chars): {pwd}\"\n\n\n   return _original_execute(name, arguments)\n\n\nexecute_tool = execute_tool_extended\n\n\nALL_TOOLS = TOOLS + CUSTOM_TOOLS\n\n\ndef agent_loop_v2(user_message: str, max_iterations: int = 10, verbose: bool = True):\n   \"\"\"Agent loop with extended custom tools.\"\"\"\n   system_parts = []\n   for md_file in [\"AGENTS.md\", \"SOUL.md\", \"USER.md\"]:\n       fpath = WORKSPACE \/ md_file\n       if fpath.exists():\n           system_parts.append(fpath.read_text())\n   mem_file = WORKSPACE \/ \"memory\" \/ \"MEMORY.md\"\n   if mem_file.exists():\n       system_parts.append(f\"\\n## Your Memory\\n{mem_file.read_text()}\")\n   system_prompt = \"\\n\\n\".join(system_parts)\n\n\n   messages = [\n       {\"role\": \"system\", \"content\": system_prompt},\n       {\"role\": \"user\", \"content\": user_message}\n   ]\n\n\n   if verbose:\n       print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4e8.png\" alt=\"\ud83d\udce8\" class=\"wp-smiley\" \/> User: {user_message}\")\n       print()\n\n\n   for iteration in range(1, max_iterations + 1):\n       if verbose:\n           print(f\"  \u2500\u2500 Iteration {iteration}\/{max_iterations} \u2500\u2500\")\n\n\n       response = client.chat.completions.create(\n           model=\"gpt-4o-mini\",\n           messages=messages,\n           tools=ALL_TOOLS,\n           tool_choice=\"auto\",\n           max_tokens=2048\n       )\n       choice = response.choices[0]\n       message = choice.message\n\n\n       if message.tool_calls:\n           if verbose:\n               print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f527.png\" alt=\"\ud83d\udd27\" class=\"wp-smiley\" \/> {len(message.tool_calls)} tool call(s):\")\n           messages.append(message.model_dump())\n           for tc in message.tool_calls:\n               fname 
= tc.function.name\n               args = _json.loads(tc.function.arguments) if tc.function.arguments else {}\n               if verbose:\n                   print(f\"     \u2192 {fname}({_json.dumps(args, ensure_ascii=False)[:80]})\")\n               result = execute_tool(fname, args)\n               if verbose:\n                   print(f\"     \u2190 {result[:120]}{'...' if len(result) &gt; 120 else ''}\")\n               messages.append({\n                   \"role\": \"tool\",\n                   \"tool_call_id\": tc.id,\n                   \"content\": result\n               })\n           if verbose:\n               print()\n       else:\n           final = message.content or \"\"\n           if verbose:\n               print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ac.png\" alt=\"\ud83d\udcac\" class=\"wp-smiley\" \/> Agent: {final}\\n\")\n           return final\n\n\n   return \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/> Max iterations reached.\"\n\n\n\n\nprint(\"\u2500\" * 60)\nprint(\"  DEMO 3: Custom tools in action\")\nprint(\"\u2500\" * 60)\nresult4 = agent_loop_v2(\n   \"Roll 3 six-sided dice for me, then generate a 20-character password, \"\n   \"and finally analyze the text stats of this sentence: \"\n)\n\n\nsection(\"STEP 9 \u00b7 Multi-Turn Conversation \u2014 Session Management\", \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ac.png\" alt=\"\ud83d\udcac\" class=\"wp-smiley\" \/>\")\n\n\ninfo(\"\"\"nanobot's SessionManager (session\/manager.py) maintains conversation\nhistory per session_key (format: 'channel:chat_id'). 
History is stored\nin JSON files and loaded into context for each new message.\n\n\nLet's simulate a multi-turn conversation with persistent state.\n\"\"\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We expand the agent\u2019s capabilities by defining new custom tools such as dice rolling, text statistics, and password generation, and then wiring them into the tool execution pipeline. We update the executor, merge the built-in and custom tool definitions, and create a second version of the agent loop that can reason over this larger set of capabilities. We then run a demo task that forces the model to chain multiple tool invocations, demonstrating how easy it is to extend nanobot with our own functions while keeping the same overall interaction pattern.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">class SimpleSessionManager:\n   \"\"\"\n   Minimal recreation of nanobot's SessionManager.\n   Stores conversation history and provides context continuity.\n   \"\"\"\n   def __init__(self, workspace: Path):\n       self.workspace = workspace\n       self.sessions: dict[str, list[dict]] = {}\n\n\n   def get_history(self, session_key: str) -&gt; list[dict]:\n       return self.sessions.get(session_key, [])\n\n\n   def add_turn(self, session_key: str, role: str, content: str):\n       if session_key not in self.sessions:\n           self.sessions[session_key] = []\n       
self.sessions[session_key].append({\"role\": role, \"content\": content})\n\n\n   def save(self, session_key: str):\n       fpath = self.workspace \/ f\"session_{session_key.replace(':', '_')}.json\"\n       fpath.write_text(_json.dumps(self.sessions.get(session_key, []), indent=2))\n\n\n   def load(self, session_key: str):\n       fpath = self.workspace \/ f\"session_{session_key.replace(':', '_')}.json\"\n       if fpath.exists():\n           self.sessions[session_key] = _json.loads(fpath.read_text())\n\n\n\n\nsession_mgr = SimpleSessionManager(WORKSPACE)\nSESSION_KEY = \"cli:tutorial_user\"\n\n\n\n\ndef chat(user_message: str, verbose: bool = True):\n   \"\"\"Multi-turn chat with session persistence.\"\"\"\n   session_mgr.add_turn(SESSION_KEY, \"user\", user_message)\n\n\n   system_parts = []\n   for md_file in [\"AGENTS.md\", \"SOUL.md\"]:\n       fpath = WORKSPACE \/ md_file\n       if fpath.exists():\n           system_parts.append(fpath.read_text())\n   system_prompt = \"\\n\\n\".join(system_parts)\n\n\n   history = session_mgr.get_history(SESSION_KEY)\n   messages = [{\"role\": \"system\", \"content\": system_prompt}] + history\n\n\n   if verbose:\n       print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f464.png\" alt=\"\ud83d\udc64\" class=\"wp-smiley\" \/> You: {user_message}\")\n       print(f\"     (conversation history: {len(history)} messages)\")\n\n\n   response = client.chat.completions.create(\n       model=\"gpt-4o-mini\",\n       messages=messages,\n       max_tokens=1024\n   )\n   reply = response.choices[0].message.content or \"\"\n\n\n   session_mgr.add_turn(SESSION_KEY, \"assistant\", reply)\n   session_mgr.save(SESSION_KEY)\n\n\n   if verbose:\n       print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f408.png\" alt=\"\ud83d\udc08\" class=\"wp-smiley\" \/> nanobot: {reply}\\n\")\n   return reply\n\n\n\n\nprint(\"\u2500\" * 60)\nprint(\"  DEMO 4: 
Multi-turn conversation with memory\")\nprint(\"\u2500\" * 60)\n\n\nchat(\"Hi! My name is Alex and I'm building an AI agent.\")\nchat(\"What's my name? And what am I working on?\")\nchat(\"Can you suggest 3 features I should add to my agent?\")\n\n\nsuccess(\"Session persisted with full conversation history!\")\nsession_file = WORKSPACE \/ f\"session_{SESSION_KEY.replace(':', '_')}.json\"\nsession_data = _json.loads(session_file.read_text())\nprint(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c4.png\" alt=\"\ud83d\udcc4\" class=\"wp-smiley\" \/> Session file: {session_file.name} ({len(session_data)} messages)\")\n\n\nsection(\"STEP 10 \u00b7 Subagent Spawning \u2014 Background Task Delegation\", \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f680.png\" alt=\"\ud83d\ude80\" class=\"wp-smiley\" \/>\")\n\n\ninfo(\"\"\"nanobot's SubagentManager (agent\/subagent.py) allows the main agent\nto delegate tasks to independent background workers. Each subagent:\n - Gets its own tool registry (no SpawnTool to prevent recursion)\n - Runs up to 15 iterations independently\n - Reports results back via the MessageBus\n\n\nLet's simulate this pattern with concurrent tasks.\n\"\"\")\n\n\nimport asyncio\nimport uuid\n\n\n\n\nasync def run_subagent(task_id: str, goal: str, verbose: bool = True):\n   \"\"\"\n   Simulates nanobot's SubagentManager._run_subagent().\n   Runs an independent LLM loop for a specific goal.\n   \"\"\"\n   if verbose:\n       print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f539.png\" alt=\"\ud83d\udd39\" class=\"wp-smiley\" \/> Subagent [{task_id[:8]}] started: {goal[:60]}\")\n\n\n   response = client.chat.completions.create(\n       model=\"gpt-4o-mini\",\n       messages=[\n           {\"role\": \"system\", \"content\": \"You are a focused research assistant. 
\"\n            \"Complete the assigned task concisely in 2-3 sentences.\"},\n           {\"role\": \"user\", \"content\": goal}\n       ],\n       max_tokens=256\n   )\n\n\n   result = response.choices[0].message.content or \"\"\n   if verbose:\n       print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Subagent [{task_id[:8]}] done: {result[:80]}...\")\n   return {\"task_id\": task_id, \"goal\": goal, \"result\": result}\n\n\n\n\nasync def spawn_subagents(goals: list[str]):\n   \"\"\"Spawn multiple subagents concurrently \u2014 mirrors SubagentManager.spawn().\"\"\"\n   tasks = []\n   for goal in goals:\n       task_id = str(uuid.uuid4())\n       tasks.append(run_subagent(task_id, goal))\n\n\n   print(f\"\\n  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f680.png\" alt=\"\ud83d\ude80\" class=\"wp-smiley\" \/> Spawning {len(tasks)} subagents concurrently...\\n\")\n   results = await asyncio.gather(*tasks)\n   return results\n\n\n\n\ngoals = [\n   \"What are the 3 key components of a ReAct agent architecture?\",\n   \"Explain the difference between tool-calling and function-calling in LLMs.\",\n   \"What is MCP (Model Context Protocol) and why does it matter for AI agents?\",\n]\n\n\ntry:\n   loop = asyncio.get_running_loop()\n   import nest_asyncio\n   nest_asyncio.apply()\n   subagent_results = asyncio.get_event_loop().run_until_complete(spawn_subagents(goals))\nexcept RuntimeError:\n   subagent_results = asyncio.run(spawn_subagents(goals))\nexcept ModuleNotFoundError:\n   print(\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2139.png\" alt=\"\u2139\" class=\"wp-smiley\" \/>  Running subagents sequentially (install nest_asyncio for async)...\\n\")\n   subagent_results = []\n   for goal in goals:\n       task_id = str(uuid.uuid4())\n       response = client.chat.completions.create(\n           
model=\"gpt-4o-mini\",\n           messages=[\n               {\"role\": \"system\", \"content\": \"Complete the task concisely in 2-3 sentences.\"},\n               {\"role\": \"user\", \"content\": goal}\n           ],\n           max_tokens=256\n       )\n       r = response.choices[0].message.content or \"\"\n       print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Subagent [{task_id[:8]}] done: {r[:80]}...\")\n       subagent_results.append({\"task_id\": task_id, \"goal\": goal, \"result\": r})\n\n\nprint(f\"\\n  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4cb.png\" alt=\"\ud83d\udccb\" class=\"wp-smiley\" \/> All {len(subagent_results)} subagent results collected!\")\nfor i, r in enumerate(subagent_results, 1):\n   print(f\"\\n  \u2500\u2500 Result {i} \u2500\u2500\")\n   print(f\"  Goal: {r['goal'][:60]}\")\n   print(f\"  Answer: {r['result'][:200]}\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We simulate multi-turn conversation management by building a lightweight session manager that stores, retrieves, and persists conversation history across turns. We use that history to maintain continuity in the chat, allowing the agent to remember details from earlier in the interaction and respond more coherently and statefully. 
After that, we model subagent spawning by launching concurrent background tasks that each handle a focused objective, which helps us understand how nanobot can delegate parallel work to independent agent workers.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">section(\"STEP 11 \u00b7 Scheduled Tasks \u2014 The Cron Pattern\", \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/23f0.png\" alt=\"\u23f0\" class=\"wp-smiley\" \/>\")\n\n\ninfo(\"\"\"nanobot's CronService (cron\/service.py) uses APScheduler to trigger\nagent actions on a schedule. 
When a job fires, it creates an\nInboundMessage and publishes it to the MessageBus.\n\n\nLet's demonstrate the pattern with a simulated scheduler.\n\"\"\")\n\n\nfrom datetime import timedelta\n\n\n\n\nclass SimpleCronJob:\n   \"\"\"Mirrors nanobot's cron job structure.\"\"\"\n   def __init__(self, name: str, message: str, interval_seconds: int):\n       self.id = str(uuid.uuid4())[:8]\n       self.name = name\n       self.message = message\n       self.interval = interval_seconds\n       self.enabled = True\n       self.last_run = None\n       self.next_run = datetime.datetime.now() + timedelta(seconds=interval_seconds)\n\n\n\n\njobs = [\n   SimpleCronJob(\"morning_briefing\", \"Give me a brief morning status update.\", 86400),\n   SimpleCronJob(\"memory_cleanup\", \"Review and consolidate my memories.\", 43200),\n   SimpleCronJob(\"health_check\", \"Run a system health check.\", 3600),\n]\n\n\nprint(\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4cb.png\" alt=\"\ud83d\udccb\" class=\"wp-smiley\" \/> Registered Cron Jobs:\")\nprint(\"  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\")\nprint(\"  \u2502 ID     \u2502 Name               \u2502 Interval \u2502 Next Run             \u2502\")\nprint(\"  
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\")\nfor job in jobs:\n   interval_str = f\"{job.interval \/\/ 3600}h\" if job.interval &gt;= 3600 else f\"{job.interval}s\"\n   print(f\"  \u2502 {job.id} \u2502 {job.name:&lt;18} \u2502 {interval_str:&gt;8} \u2502 {job.next_run.strftime('%Y-%m-%d %H:%M')} \u2502\")\nprint(\"  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\")\n\n\nprint(f\"\\n  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/23f0.png\" alt=\"\u23f0\" class=\"wp-smiley\" \/> Simulating cron trigger for '{jobs[2].name}'...\")\ncron_result = agent_loop_v2(jobs[2].message, verbose=True)\n\n\nsection(\"STEP 12 \u00b7 Full Agent Pipeline \u2014 End-to-End Demo\", \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f3ac.png\" alt=\"\ud83c\udfac\" class=\"wp-smiley\" \/>\")\n\n\ninfo(\"\"\"Now let's run a complex, multi-step task that exercises the full\nnanobot pipeline: context building \u2192 tool use \u2192 memory \u2192 file I\/O.\n\"\"\")\n\n\nprint(\"\u2500\" * 60)\nprint(\"  DEMO 5: Complex multi-step research task\")\nprint(\"\u2500\" * 60)\n\n\ncomplex_result = agent_loop_v2(\n   \"I need you to help me with a small project:\\n\"\n   \"1. First, check the current time\\n\"\n   \"2. 
Write a short project plan to 'project_plan.txt' about building \"\n   \"a personal AI assistant (3-4 bullet points)\\n\"\n   \"3. Remember that my current project is 'building a personal AI assistant'\\n\"\n   \"4. Read back the project plan file to confirm it was saved correctly\\n\"\n   \"Then summarize everything you did.\",\n   max_iterations=15\n)\n\n\nsection(\"STEP 13 \u00b7 Final Workspace Summary\", \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/>\")\n\n\nprint(\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c1.png\" alt=\"\ud83d\udcc1\" class=\"wp-smiley\" \/> Complete workspace state after tutorial:\\n\")\ntotal_files = 0\ntotal_bytes = 0\nfor item in sorted(WORKSPACE.rglob(\"*\")):\n   if item.is_file():\n       rel = item.relative_to(WORKSPACE)\n       size = item.stat().st_size\n       total_files += 1\n       total_bytes += size\n       icon = {\"md\": \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c4.png\" alt=\"\ud83d\udcc4\" class=\"wp-smiley\" \/>\", \"txt\": \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/>\", \"json\": \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4cb.png\" alt=\"\ud83d\udccb\" class=\"wp-smiley\" \/>\"}.get(item.suffix.lstrip(\".\"), \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ce.png\" alt=\"\ud83d\udcce\" class=\"wp-smiley\" \/>\")\n       print(f\"     {icon} {rel} ({size:,} bytes)\")\n\n\nprint(f\"\\n  \u2500\u2500 Summary \u2500\u2500\")\nprint(f\"  Total files: {total_files}\")\nprint(f\"  Total size:  {total_bytes:,} bytes\")\nprint(f\"  Config:      {config_path}\")\nprint(f\"  Workspace:   {WORKSPACE}\")\n\n\nprint(\"\\n  <img decoding=\"async\" 
src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f9e0.png\" alt=\"\ud83e\udde0\" class=\"wp-smiley\" \/> Final Memory State:\")\nmem_content = (WORKSPACE \/ \"memory\" \/ \"MEMORY.md\").read_text()\nprint(\"  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\")\nfor line in mem_content.strip().split(\"\\n\"):\n   print(f\"  \u2502 {line}\")\nprint(\"  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\")\n\n\nsection(\"COMPLETE \u00b7 What's Next?\", \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f389.png\" alt=\"\ud83c\udf89\" class=\"wp-smiley\" \/>\")\n\n\nprint(\"\"\"  You've explored the core internals of nanobot! 
Here's what to try next:\n\n\n <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f539.png\" alt=\"\ud83d\udd39\" class=\"wp-smiley\" \/> Run the real CLI agent:\n    nanobot onboard &amp;&amp; nanobot agent\n\n\n <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f539.png\" alt=\"\ud83d\udd39\" class=\"wp-smiley\" \/> Connect to Telegram:\n    Add a bot token to config.json and run `nanobot gateway`\n\n\n <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f539.png\" alt=\"\ud83d\udd39\" class=\"wp-smiley\" \/> Enable web search:\n    Add a Brave Search API key under tools.web.search.apiKey\n\n\n <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f539.png\" alt=\"\ud83d\udd39\" class=\"wp-smiley\" \/> Try MCP integration:\n    nanobot supports Model Context Protocol servers for external tools\n\n\n <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f539.png\" alt=\"\ud83d\udd39\" class=\"wp-smiley\" \/> Explore the source (~4K lines):\n    https:\/\/github.com\/HKUDS\/nanobot\n\n\n <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f539.png\" alt=\"\ud83d\udd39\" class=\"wp-smiley\" \/> Key files to read:\n    \u2022 agent\/loop.py    \u2014 The agent iteration loop\n    \u2022 agent\/context.py \u2014 Prompt assembly pipeline\n    \u2022 agent\/memory.py  \u2014 Persistent memory system\n    \u2022 agent\/tools\/     \u2014 Built-in tool implementations\n    \u2022 agent\/subagent.py \u2014 Background task delegation\n\n\n\"\"\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We demonstrate the cron-style scheduling pattern by defining simple scheduled jobs, listing their intervals and next run times, and simulating the triggering of an automated agent task. 
We then run a larger end-to-end example that combines context building, tool use, memory updates, and file operations into a single multi-step workflow, so we can see the full pipeline working together in a realistic task. At the end, we inspect the final workspace state, review the stored memory, and close the tutorial with clear next steps that connect this notebook implementation to the real nanobot project and its source code.<\/p>\n<p>In conclusion, we walked through every major layer of nanobot\u2019s architecture, from the iterative LLM-tool loop at its core to the session manager that gives our agent conversational memory across turns. We built five built-in tools, three custom tools, two skills, a session persistence layer, a subagent spawner, and a cron simulator, all while keeping everything in a single runnable script. What stands out is how nanobot proves that a production-grade agent framework doesn\u2019t need hundreds of thousands of lines of code; the patterns we implemented here (context assembly, tool dispatch, memory consolidation, and background task delegation) are the same patterns that power far larger systems, just stripped down to their essence. 
We now have a working mental model of agentic AI internals and a codebase small enough to read in one sitting, which makes nanobot an ideal choice for anyone looking to build, customize, or research AI agents from the ground up.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out\u00a0the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/AI%20Agents%20Codes\/nanobot_deep_dive_build_ai_agent_from_inside_out_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">120k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/03\/28\/a-coding-guide-to-exploring-nanobots-full-agent-pipeline-from-wiring-up-tools-and-memory-to-skills-subagents-and-cron-scheduling\/\">A Coding Guide to Exploring nanobot\u2019s Full Agent Pipeline, from Wiring Up Tools and Memory to Skills, Subagents, and Cron Scheduling<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we take a de&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-630","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/630","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=630"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/630\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=630"}],"wp:term":[{"ta
xonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=630"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=630"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}