{"id":608,"date":"2026-03-25T05:31:11","date_gmt":"2026-03-24T21:31:11","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=608"},"modified":"2026-03-25T05:31:11","modified_gmt":"2026-03-24T21:31:11","slug":"a-coding-implementation-to-design-self-evolving-skill-engine-with-openspace-for-skill-learning-token-efficiency-and-collective-intelligence","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=608","title":{"rendered":"A Coding Implementation to Design Self-Evolving Skill Engine with OpenSpace for Skill Learning, Token Efficiency, and Collective Intelligence"},"content":{"rendered":"<p>In this tutorial, we explore <a href=\"https:\/\/github.com\/HKUDS\/OpenSpace\"><strong>OpenSpace<\/strong><\/a>, a self-evolving skill engine developed by HKUDS that makes AI agents smarter, more cost-efficient, and capable of learning from every task they perform. We walk through the complete lifecycle of OpenSpace: from installing and configuring an OpenAI model, to executing cold-start tasks where no prior skills exist, watching the evolution engine capture reusable patterns, and then re-running similar tasks to observe real token savings through skill reuse. Along the way, we create custom skills manually, inspect the SQLite skill database, run multi-task pipelines that accumulate intelligence over time, and demonstrate how the cloud community at open-space.cloud enables agents to share evolved skills. By the end, we have a hands-on understanding of the three evolution modes (FIX, DERIVED, and CAPTURED), the three automatic triggers that keep skills healthy, and the measurable economic impact that OpenSpace delivers, including the 4.2x income improvement and 46% token reduction demonstrated in the GDPVal benchmark across 50 real-world professional tasks.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">import subprocess, sys, os\n\n\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f527.png\" alt=\"\ud83d\udd27\" class=\"wp-smiley\" \/> Installing OpenSpace from GitHub (this may take 2-3 minutes)...\")\nsubprocess.check_call([\n   sys.executable, \"-m\", \"pip\", \"install\", \"-q\",\n   \"git+https:\/\/github.com\/HKUDS\/OpenSpace.git\"\n])\n\n\nsubprocess.check_call([\n   sys.executable, \"-m\", \"pip\", \"install\", \"-q\", \"openai\"\n])\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Installation complete!\")\n\n\ntry:\n   from openspace import OpenSpace\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> OpenSpace imported successfully\")\nexcept ImportError as e:\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/>  Import issue: {e}\")\n   print(\"Trying alternative import path...\")\n   import openspace\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> openspace package found at: {openspace.__file__}\")\n\n\nimport getpass\n\n\nprint(\"Enter your OpenAI API key (input is hidden):\")\napi_key = getpass.getpass(\"OpenAI API Key: \")\nos.environ[\"OPENAI_API_KEY\"] = api_key\n\n\nprint(\"n[Optional] Enter your OpenSpace Cloud API key\")\nprint(\"(Get one free at https:\/\/open-space.cloud \u2014 press Enter to skip):\")\ncloud_key = getpass.getpass(\"OpenSpace Cloud Key: \")\nif cloud_key.strip():\n   os.environ[\"OPENSPACE_API_KEY\"] = cloud_key.strip()\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Cloud API key set\")\nelse:\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/23ed.png\" alt=\"\u23ed\" class=\"wp-smiley\" \/>  Skipping cloud features (local mode only)\")\n\n\nMODEL_NAME = \"openai\/gpt-4o-mini\"\nos.environ[\"OPENSPACE_MODEL\"] = MODEL_NAME\n\n\nprint(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Configuration complete!\")\nprint(f\"   Model: {MODEL_NAME}\")\nprint(f\"   OpenAI Key: {'*' * 8}...{api_key[-4:]}\")\nprint(f\"   Cloud: {'Enabled' if cloud_key.strip() else 'Disabled (local only)'}\")\n\n\nfrom openai import OpenAI\n\n\nclient = OpenAI(api_key=os.environ[\"OPENAI_API_KEY\"])\n\n\ntry:\n   test_resp = client.chat.completions.create(\n       model=\"gpt-4o-mini\",\n       messages=[{\"role\": \"user\", \"content\": \"Say 'OpenSpace ready!' in 3 words or less.\"}],\n       max_tokens=10\n   )\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> OpenAI API working: {test_resp.choices[0].message.content}\")\nexcept Exception as e:\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" \/> OpenAI API error: {e}\")\n   print(\"Please check your API key and try again.\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We begin by installing OpenSpace directly from its GitHub repository along with the OpenAI SDK, ensuring all dependencies, including LiteLLM, the skill engine, and the MCP server, are pulled in automatically. We then securely input our OpenAI API key through the terminal using getpass so it never appears in plain text, and optionally provide an OpenSpace Cloud key for community features. We verify that everything works by making a quick test call to the OpenAI API and confirming that our environment is fully configured and ready for the tutorial.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">import os\nimport json\nimport shutil\nimport sqlite3\nimport glob\nimport asyncio\nimport time\nfrom pathlib import Path\n\n\nWORKSPACE = Path(\"\/content\/openspace_tutorial\")\nSKILLS_DIR = WORKSPACE \/ \"skills\"\nOUTPUT_DIR = WORKSPACE \/ \"outputs\"\nDB_DIR = WORKSPACE \/ \".openspace\"\n\n\nif WORKSPACE.exists():\n   shutil.rmtree(WORKSPACE)\n\n\nWORKSPACE.mkdir(parents=True)\nSKILLS_DIR.mkdir(parents=True)\nOUTPUT_DIR.mkdir(parents=True)\nDB_DIR.mkdir(parents=True)\n\n\nos.environ[\"OPENSPACE_WORKSPACE\"] = str(WORKSPACE)\nos.environ[\"OPENSPACE_HOST_SKILL_DIRS\"] = str(SKILLS_DIR)\n\n\nenv_content = f\"\"\"OPENAI_API_KEY={os.environ['OPENAI_API_KEY']}\nOPENSPACE_MODEL={MODEL_NAME}\nOPENSPACE_WORKSPACE={WORKSPACE}\n\"\"\"\n\n\nenv_path = WORKSPACE \/ \".env\"\nenv_path.write_text(env_content)\n\n\nprint(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c1.png\" alt=\"\ud83d\udcc1\" class=\"wp-smiley\" \/> Workspace: {WORKSPACE}\")\nprint(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c1.png\" alt=\"\ud83d\udcc1\" class=\"wp-smiley\" \/> Skills:    {SKILLS_DIR}\")\nprint(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c1.png\" alt=\"\ud83d\udcc1\" class=\"wp-smiley\" \/> Outputs:   {OUTPUT_DIR}\")\nprint(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c1.png\" alt=\"\ud83d\udcc1\" class=\"wp-smiley\" \/> Database:  {DB_DIR}\")\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Workspace ready for cold start execution!\")\n\n\nasync def run_cold_start_task():\n   print(\"=\"*60)\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f9ca.png\" alt=\"\ud83e\uddca\" class=\"wp-smiley\" \/> COLD START: No skills exist yet\")\n   print(\"=\"*60)\n  \n   task = (\n       \"Create a Python script that analyzes a CSV file containing \"\n       \"sales data with columns: date, product, quantity, price. \"\n       \"The script should compute monthly revenue, identify the top \"\n       \"3 best-selling products, and generate a summary report as \"\n       \"a formatted text file.\"\n   )\n  \n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Task: {task[:100]}...n\")\n  \n   start_time = time.time()\n  \n   try:\n       from openspace import OpenSpace\n      \n       async with OpenSpace() as cs:\n           result = await cs.execute(task)\n          \n           elapsed = time.time() - start_time\n          \n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/23f1.png\" alt=\"\u23f1\" class=\"wp-smiley\" \/>  Execution time: {elapsed:.1f}s\")\n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c4.png\" alt=\"\ud83d\udcc4\" class=\"wp-smiley\" \/> Response (first 500 chars):\")\n           print(\"-\" * 40)\n           response_text = result.get(\"response\", str(result))\n           print(response_text[:500])\n          \n           evolved = result.get(\"evolved_skills\", [])\n           if evolved:\n               print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f9ec.png\" alt=\"\ud83e\uddec\" class=\"wp-smiley\" \/> Skills Evolved: {len(evolved)}\")\n               for skill in evolved:\n                   origin = skill.get('origin', 'unknown')\n                   name = skill.get('name', 'unnamed')\n                   print(f\"   \u2022 {name} (origin: {origin})\")\n           else:\n               print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> No skills evolved yet (may happen post-analysis)\")\n          \n           return result\n          \n   except Exception as e:\n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/>  Execution error: {type(e).__name__}: {e}\")\n       print(\"nThis is expected if OpenSpace requires additional setup.\")\n       print(\"We'll demonstrate the concepts with direct API calls below.\")\n       return None\n\n\ncold_start_result = await run_cold_start_task()\n\n\ndef inspect_skill_database():\n   db_patterns = [\n       str(WORKSPACE \/ \".openspace\" \/ \"*.db\"),\n       str(WORKSPACE \/ \"*.db\"),\n       str(WORKSPACE \/ \".openspace\" \/ \"openspace.db\"),\n       \"\/content\/.openspace\/*.db\",\n   ]\n  \n   db_files = []\n   for pattern in db_patterns:\n       db_files.extend(glob.glob(pattern))\n  \n   if not db_files:\n       print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ed.png\" alt=\"\ud83d\udced\" class=\"wp-smiley\" \/> No skill database found yet.\")\n       print(\"   This is normal if the cold start task hasn't completed\")\n       print(\"   or if OpenSpace stores skills elsewhere.\")\n       print(\"n   We'll create a demonstration database below.\")\n       return None\n  \n   db_path = db_files[0]\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c2.png\" alt=\"\ud83d\udcc2\" class=\"wp-smiley\" \/> Found database: {db_path}\")\n  \n   conn = sqlite3.connect(db_path)\n   cursor = conn.cursor()\n  \n   cursor.execute(\"SELECT name FROM sqlite_master WHERE type='table'\")\n   tables = cursor.fetchall()\n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Tables: {[t[0] for t in tables]}\")\n  \n   for table in tables:\n       table_name = table[0]\n       cursor.execute(f\"SELECT COUNT(*) FROM {table_name}\")\n       count = cursor.fetchone()[0]\n       print(f\"   {table_name}: {count} records\")\n      \n       if count &gt; 0:\n           cursor.execute(f\"PRAGMA table_info({table_name})\")\n           columns = [col[1] for col in cursor.fetchall()]\n           print(f\"   Columns: {columns[:8]}{'...' if len(columns) &gt; 8 else ''}\")\n          \n           cursor.execute(f\"SELECT * FROM {table_name} LIMIT 3\")\n           rows = cursor.fetchall()\n           for row in rows:\n               print(f\"   \u2192 {str(row)[:120]}...\")\n  \n   conn.close()\n   return db_path\n\n\ndb_path = inspect_skill_database()\n\n\ndef inspect_skill_files():\n   skill_files = []\n   search_dirs = [SKILLS_DIR, WORKSPACE \/ \".openspace\", WORKSPACE]\n  \n   for search_dir in search_dirs:\n       for root, dirs, files in os.walk(search_dir):\n           for f in files:\n               if f.upper() == \"SKILL.MD\" or f.endswith(\".py\"):\n                   full_path = os.path.join(root, f)\n                   skill_files.append(full_path)\n  \n   if not skill_files:\n       print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ed.png\" alt=\"\ud83d\udced\" class=\"wp-smiley\" \/> No skill files found on disk yet.\")\n       print(\"   Skills are created after the evolution engine runs.\")\n       return\n  \n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c1.png\" alt=\"\ud83d\udcc1\" class=\"wp-smiley\" \/> Found {len(skill_files)} skill-related files:n\")\n   for sf in skill_files[:20]:\n       rel_path = os.path.relpath(sf, WORKSPACE)\n       size = os.path.getsize(sf)\n       print(f\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c4.png\" alt=\"\ud83d\udcc4\" class=\"wp-smiley\" \/> {rel_path} ({size} bytes)\")\n      \n       if sf.endswith(\".md\"):\n           with open(sf, 'r') as fh:\n               content = fh.read(300)\n               print(f\"      Preview: {content[:150]}...n\")\n\n\ninspect_skill_files()<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up a clean workspace with dedicated directories for skills, outputs, and the OpenSpace database, and write a .env file so OpenSpace knows where to find our credentials and model configuration. We then execute our first task in cold-start mode, a CSV sales data analyzer, where no prior skills exist, and we observe how the evolution engine processes the execution to capture reusable patterns. We finish by inspecting both the SQLite skill database and the on-disk SKILL.md files to see exactly what OpenSpace has stored and how it structures evolved skill metadata.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">async def run_warm_start_task():\n   print(\"=\"*60)\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f525.png\" alt=\"\ud83d\udd25\" class=\"wp-smiley\" \/> WARM START: Reusing previously evolved skills\")\n   print(\"=\"*60)\n  \n   task = (\n       \"Create a Python script that analyzes a CSV file containing \"\n       \"inventory data with columns: date, item, quantity, cost. \"\n       \"The script should compute monthly expenditures, identify the top \"\n       \"5 most purchased items, and output a formatted summary report.\"\n   )\n  \n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Task: {task[:100]}...\")\n   print(\"   (Similar to cold start task \u2014 skills should be reused)n\")\n  \n   start_time = time.time()\n  \n   try:\n       from openspace import OpenSpace\n      \n       async with OpenSpace() as cs:\n           result = await cs.execute(task)\n          \n           elapsed = time.time() - start_time\n          \n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/23f1.png\" alt=\"\u23f1\" class=\"wp-smiley\" \/>  Execution time: {elapsed:.1f}s\")\n          \n           response_text = result.get(\"response\", str(result))\n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c4.png\" alt=\"\ud83d\udcc4\" class=\"wp-smiley\" \/> Response (first 500 chars):\")\n           print(\"-\" * 40)\n           print(response_text[:500])\n          \n           evolved = result.get(\"evolved_skills\", [])\n           reused = result.get(\"reused_skills\", [])\n          \n           if reused:\n               print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/267b.png\" alt=\"\u267b\" class=\"wp-smiley\" \/>  Skills Reused: {len(reused)}\")\n               for skill in reused:\n                   print(f\"   \u2022 {skill.get('name', 'unnamed')}\")\n          \n           if evolved:\n               print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f9ec.png\" alt=\"\ud83e\uddec\" class=\"wp-smiley\" \/> New Skills Evolved: {len(evolved)}\")\n               for skill in evolved:\n                   print(f\"   \u2022 {skill.get('name', 'unnamed')} ({skill.get('origin', '')})\")\n          \n           return result\n          \n   except Exception as e:\n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/>  Execution error: {type(e).__name__}: {e}\")\n       print(\"We'll simulate the comparison below.\")\n       return None\n\n\nwarm_start_result = await run_warm_start_task()\n\n\nasync def demo_skill_search():\n   print(\"=\"*60)\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f50e.png\" alt=\"\ud83d\udd0e\" class=\"wp-smiley\" \/> SKILL SEARCH &amp; DISCOVERY\")\n   print(\"=\"*60)\n  \n   try:\n       from openspace import OpenSpace\n      \n       async with OpenSpace() as cs:\n           queries = [\n               \"CSV data analysis with pandas\",\n               \"PDF report generation\",\n               \"web scraping with error handling\",\n           ]\n          \n           for query in queries:\n               print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f50d.png\" alt=\"\ud83d\udd0d\" class=\"wp-smiley\" \/> Query: '{query}'\")\n              \n               if hasattr(cs, 'skill_engine') and cs.skill_engine:\n                   results = await cs.skill_engine.search(query)\n                   if results:\n                       for r in results[:3]:\n                           print(f\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4cb.png\" alt=\"\ud83d\udccb\" class=\"wp-smiley\" \/> {r.get('name', 'unnamed')} \"\n                                 f\"(score: {r.get('score', 'N\/A')})\")\n                   else:\n                       print(\"   (no matching skills found)\")\n               else:\n                   print(\"   (skill engine not initialized \u2014 \"\n                         \"skills accumulate after task executions)\")\n                  \n   except Exception as e:\n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/>  Search demo: {e}\")\n       print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4a1.png\" alt=\"\ud83d\udca1\" class=\"wp-smiley\" \/> Skill search becomes available after skills are evolved.\")\n       print(\"   In production, run several tasks first to build up the skill database.\")\n\n\nawait demo_skill_search()\n\n\ndef create_custom_skill(skill_name, description, instructions, triggers):\n   skill_dir = SKILLS_DIR \/ skill_name\n   skill_dir.mkdir(parents=True, exist_ok=True)\n  \n   skill_md = f\"\"\"---\nname: {skill_name}\ndescription: {description}\nversion: 1.0.0\norigin: manual\ntriggers: {json.dumps(triggers)}\n---\n\n\n# {skill_name}\n\n\n{description}\n\n\n## Instructions\n\n\n{instructions}\n\n\n## Quality Metrics\n\n\n- Applied Rate: 0% (new skill)\n- Completion Rate: N\/A\n- Effective Rate: N\/A\n\"\"\"\n  \n   skill_path = skill_dir \/ \"SKILL.md\"\n   skill_path.write_text(skill_md)\n  \n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Created skill: {skill_name}\")\n   print(f\"   Path: {skill_path}\")\n   return skill_path\n\n\n\n\ncreate_custom_skill(\n   skill_name=\"data-validation-csv\",\n   description=\"Validate CSV files for common issues before processing: check encoding, detect delimiter, handle missing values, verify column types.\",\n   instructions=\"\"\"When working with CSV data:\n\n\n1. **Encoding Detection**: Try UTF-8 first, then fall back to latin-1, cp1252\n2. **Delimiter Detection**: Use csv.Sniffer() to auto-detect delimiter\n3. **Missing Values**: Count NaN\/null per column, report percentage\n4. **Type Inference**: Check if numeric columns are actually numeric\n5. **Duplicate Check**: Identify duplicate rows\n\n\n```python\nimport pandas as pd\nimport csv\nimport chardet\n\n\ndef validate_csv(filepath):\n   with open(filepath, 'rb') as f:\n       result = chardet.detect(f.read(10000))\n   encoding = result['encoding']\n  \n   df = pd.read_csv(filepath, encoding=encoding)\n  \n   report = {\n       'rows': len(df),\n       'columns': list(df.columns),\n       'missing': df.isnull().sum().to_dict(),\n       'duplicates': df.duplicated().sum(),\n       'dtypes': df.dtypes.astype(str).to_dict()\n   }\n   return report\n```\"\"\",\n   triggers=[\"csv\", \"data validation\", \"data quality\", \"pandas\"]\n)\n\n\nprint()\n\n\ncreate_custom_skill(\n   skill_name=\"report-gen-fallback\",\n   description=\"Generate reports with multiple fallback strategies: try reportlab PDF first, fall back to HTML, then plain text.\",\n   instructions=\"\"\"When generating reports:\n\n\n1. **Try reportlab PDF** first for professional output\n2. **Fall back to HTML** if reportlab fails (common in sandboxed envs)\n3. **Final fallback: plain text** with formatted tables\n\n\nAlways verify the output file exists and has non-zero size after generation.\n\n\n```python\ndef generate_report(data, output_path):\n   try:\n       from reportlab.lib.pagesizes import letter\n       from reportlab.platypus import SimpleDocTemplate\n       return output_path\n   except ImportError:\n       pass\n  \n   try:\n       html_path = output_path.replace('.pdf', '.html')\n       return html_path\n   except Exception:\n       pass\n  \n   txt_path = output_path.replace('.pdf', '.txt')\n   return txt_path\n```\"\"\",\n   triggers=[\"report\", \"PDF\", \"document generation\", \"reportlab\"]\n)\n\n\nprint()\n\n\ncreate_custom_skill(\n   skill_name=\"execution-recovery\",\n   description=\"Multi-layer execution recovery: handle sandbox failures, shell errors, and file write issues with progressive fallbacks.\",\n   instructions=\"\"\"When code execution fails:\n\n\n1. **Capture the full error** including traceback\n2. **Identify the failure type**: ImportError, PermissionError, TimeoutError, etc.\n3. **Apply targeted fix**:\n  - ImportError \u2192 pip install the missing package\n  - PermissionError \u2192 change output directory to \/tmp\n  - TimeoutError \u2192 reduce data size or add chunking\n  - MemoryError \u2192 process in batches\n4. **Retry with fix applied**\n5. **Log the fix** for future skill evolution\n\n\nThis skill was captured from 28 real execution failures in the GDPVal benchmark.\"\"\",\n   triggers=[\"error\", \"failure\", \"recovery\", \"fallback\", \"retry\"]\n)\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4cb.png\" alt=\"\ud83d\udccb\" class=\"wp-smiley\" \/> All registered skills:\")\nprint(\"=\"*60)\nfor skill_dir in sorted(SKILLS_DIR.iterdir()):\n   if skill_dir.is_dir():\n       skill_md = skill_dir \/ \"SKILL.md\"\n       if skill_md.exists():\n           content = skill_md.read_text()\n           for content line.split('n'):\n               if line.startswith('name:'):\n                   name = line.split(':', 1)[1].strip()\n                   print(f\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f9e9.png\" alt=\"\ud83e\udde9\" class=\"wp-smiley\" \/> {name}\")\n                   break<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We run a second task deliberately similar to the cold-start task, allowing OpenSpace to discover and reuse previously evolved skills, and we compare execution time and token usage with the first run. We demonstrate the hybrid skill search system that combines BM25 and embedding-based ranking to find the most relevant skills for any given task description. We then manually create three production-quality skills: data validation, report generation with fallbacks, and execution recovery, following the SKILL.md convention, showing how we seed the system with domain knowledge before the evolution engine takes over.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">async def demo_cloud_community():\n   print(\"=\"*60)\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f310.png\" alt=\"\ud83c\udf10\" class=\"wp-smiley\" \/> CLOUD COMMUNITY INTEGRATION\")\n   print(\"=\"*60)\n  \n   cloud_key = os.environ.get(\"OPENSPACE_API_KEY\", \"\")\n  \n   if not cloud_key:\n       print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/23ed.png\" alt=\"\u23ed\" class=\"wp-smiley\" \/>  Cloud API key not set. Showing what's possible:\")\n       print(\"n   With a cloud key, you can:\")\n       print(\"   \u2022 <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f50d.png\" alt=\"\ud83d\udd0d\" class=\"wp-smiley\" \/> Search community skills by keyword or task description\")\n       print(\"   \u2022 <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2b07.png\" alt=\"\u2b07\" class=\"wp-smiley\" \/>  Download evolved skills from other agents\")\n       print(\"   \u2022 <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2b06.png\" alt=\"\u2b06\" class=\"wp-smiley\" \/>  Upload your evolved skills to share\")\n       print(\"   \u2022 <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f465.png\" alt=\"\ud83d\udc65\" class=\"wp-smiley\" \/> Create teams with shared skill repositories\")\n       print(\"   \u2022 <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Track skill lineage and evolution history\")\n       print(\"n   Get a free key at: https:\/\/open-space.cloud\")\n       print(\"n   CLI commands (outside Colab):\")\n       print(\"   $ openspace-download-skill &lt;skill_id&gt;\")\n       print(\"   $ openspace-upload-skill \/path\/to\/skill\/dir\")\n       return\n  \n   try:\n       from openspace.cloud.client import CloudClient\n      \n       client = CloudClient(api_key=cloud_key)\n      \n       search_queries = [\n           \"data analysis CSV\",\n           \"PDF generation\",\n           \"web scraping\",\n       ]\n      \n       for query in search_queries:\n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f50d.png\" alt=\"\ud83d\udd0d\" class=\"wp-smiley\" \/> Searching cloud for: '{query}'\")\n           results = await client.search(query)\n          \n           if results:\n               for r in results[:3]:\n                   print(f\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4e6.png\" alt=\"\ud83d\udce6\" class=\"wp-smiley\" \/> {r.get('name', 'unnamed')}\")\n                   print(f\"      Author: {r.get('author', 'unknown')}\")\n                   print(f\"      Downloads: {r.get('downloads', 0)}\")\n                   print(f\"      Version: {r.get('version', '1.0')}\")\n           else:\n               print(\"   (no results)\")\n              \n   except ImportError:\n       print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/>  Cloud client not available in this installation.\")\n       print(\"   Install the full package for cloud features.\")\n   except Exception as e:\n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/>  Cloud error: {e}\")\n\n\nawait demo_cloud_community()\n\n\ngdpval_metrics = {\n   \"categories\": [\n       {\n           \"name\": \"Documents &amp; Correspondence\",\n           \"tasks\": 7,\n           \"phase1_income\": 71, \"phase2_income\": 74,\n           \"token_reduction\": 56,\n           \"top_skill\": \"document-gen-fallback (13 versions)\"\n       },\n       {\n           \"name\": \"Compliance &amp; Forms\",\n           \"tasks\": 11,\n           \"phase1_income\": 51, \"phase2_income\": 70,\n           \"token_reduction\": 51,\n           \"top_skill\": \"PDF checklist pipeline\"\n       },\n       {\n           \"name\": \"Media Production\",\n           \"tasks\": 3,\n           \"phase1_income\": 53, \"phase2_income\": 58,\n           \"token_reduction\": 46,\n           \"top_skill\": \"ffmpeg codec fallbacks\"\n       },\n       {\n           \"name\": \"Engineering\",\n           \"tasks\": 4,\n           \"phase1_income\": 70, \"phase2_income\": 78,\n           \"token_reduction\": 43,\n           \"top_skill\": \"multi-deliverable coordination\"\n       },\n       {\n           \"name\": \"Spreadsheets\",\n           \"tasks\": 15,\n           \"phase1_income\": 63, \"phase2_income\": 70,\n           \"token_reduction\": 37,\n           \"top_skill\": \"xlsx formula patterns\"\n       },\n       {\n           \"name\": \"Strategy &amp; Analysis\",\n           \"tasks\": 10,\n           \"phase1_income\": 88, \"phase2_income\": 89,\n           \"token_reduction\": 32,\n           \"top_skill\": \"document structure reuse\"\n       }\n   ],\n   \"overall\": {\n       \"total_skills_evolved\": 165,\n       \"income_multiplier\": \"4.2x vs ClawWork baseline\",\n       \"value_capture\": \"72.8% ($11,484 \/ $15,764)\",\n       \"avg_quality\": \"70.8%\",\n       \"avg_token_reduction\": \"45.9%\"\n   },\n   \"skill_taxonomy\": {\n       \"File Format I\/O\": 44,\n       \"Execution Recovery\": 29,\n       \"Document Generation\": 26,\n       \"Quality Assurance\": 23,\n       \"Task Orchestration\": 17,\n       \"Domain Workflow\": 13,\n       \"Web &amp; Research\": 11\n   }\n}\n\n\nprint(\"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> GDPVal BENCHMARK RESULTS (OpenSpace vs Baseline)\")\nprint(\"=\"*60)\n\n\nprint(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f3c6.png\" alt=\"\ud83c\udfc6\" class=\"wp-smiley\" \/> Overall Performance:\")\nfor k, v in gdpval_metrics[\"overall\"].items():\n   label = k.replace('_', ' ').title()\n   print(f\"   {label}: {v}\")\n\n\nprint(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c8.png\" alt=\"\ud83d\udcc8\" class=\"wp-smiley\" \/> Performance by Category:\")\nprint(f\"   {'Category':&lt;28} {'P1\u2192P2 Income':&lt;16} {'Token \u2193':&lt;10} {'Tasks'}\")\nprint(f\"   {'-'*70}\")\nfor cat in gdpval_metrics[\"categories\"]:\n   income_str = f\"{cat['phase1_income']}% \u2192 {cat['phase2_income']}%\"\n   token_str = f\"-{cat['token_reduction']}%\"\n   print(f\"   {cat['name']:&lt;28} {income_str:&lt;16} {token_str:&lt;10} {cat['tasks']}\")\n\n\nprint(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f9ec.png\" alt=\"\ud83e\uddec\" class=\"wp-smiley\" \/> Evolved Skill Taxonomy ({sum(gdpval_metrics['skill_taxonomy'].values())} total):\")\nfor purpose, count in gdpval_metrics[\"skill_taxonomy\"].items():\n   bar = \"\u2588\" * (count \/\/ 2)\n   print(f\"   {purpose:&lt;24} {count:&gt;3} {bar}\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We explore cloud community integration at openspace.cloud, demonstrating how agents search for, download, and upload evolved skills to share collective intelligence across teams. We display the full GDPVal benchmark results across six professional task categories, showing exactly how OpenSpace achieves its 4.2x income improvement and 46% average token reduction compared to the ClawWork baseline. We visualize the taxonomy of all 165 skills that were autonomously evolved during the benchmark, revealing that the majority focus on execution recovery and file format handling rather than domain-specific knowledge.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">PIPELINE_TASKS = [\n   {\n       \"name\": \"CSV Analyzer\",\n       \"query\": (\n           \"'employee_id, name, department, salary, start_date', \"\n           \"calculates average salary by department, finds employees \"\n           \"with tenure &gt; 5 years, and saves results to a new CSV.\"\n       ),\n       \"category\": \"Spreadsheets\"\n   },\n   {\n       \"name\": \"Text Report Generator\",\n       \"query\": (\n           \"Create a Python script that generates a formatted text \"\n           \"report from a dictionary of financial data including \"\n           \"revenue, expenses, and profit margins. Include headers, \"\n           \"separators, and a summary section with key insights.\"\n       ),\n       \"category\": \"Documents\"\n   },\n   {\n       \"name\": \"Data Quality Checker\",\n       \"query\": (\n           \"Write a Python data quality checking tool that validates a \"\n           \"pandas DataFrame: check for nulls, duplicates, outliers \"\n           \"(using IQR method), type mismatches, and generates a \"\n           \"data quality score from 0-100 with detailed breakdown.\"\n       ),\n       \"category\": \"Quality Assurance\"\n   },\n]\n\n\nasync def run_pipeline():\n   print(\"=\"*60)\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f504.png\" alt=\"\ud83d\udd04\" class=\"wp-smiley\" \/> MULTI-TASK PIPELINE WITH EVOLUTION TRACKING\")\n   print(\"=\"*60)\n  \n   results = []\n   total_skills_before = 0\n  \n   for i, task_info in enumerate(PIPELINE_TASKS, 1):\n       print(f\"n{'\u2500'*60}\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4cb.png\" alt=\"\ud83d\udccb\" class=\"wp-smiley\" \/> Task {i}\/{len(PIPELINE_TASKS)}: {task_info['name']}\")\n       print(f\"   Category: {task_info['category']}\")\n       print(f\"{'\u2500'*60}\")\n      \n       start_time = time.time()\n      \n       try:\n           from openspace import OpenSpace\n          \n           async with OpenSpace() as cs:\n               result = await cs.execute(task_info[\"query\"])\n               elapsed = time.time() - start_time\n              \n               evolved = result.get(\"evolved_skills\", [])\n               reused = result.get(\"reused_skills\", [])\n              \n               task_result = {\n                   \"name\": task_info[\"name\"],\n                   \"time\": elapsed,\n                   \"evolved_count\": len(evolved),\n                   \"reused_count\": len(reused),\n                   \"success\": True\n               }\n               results.append(task_result)\n              \n               print(f\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Completed in {elapsed:.1f}s\")\n               print(f\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f9ec.png\" alt=\"\ud83e\uddec\" class=\"wp-smiley\" \/> Skills evolved: {len(evolved)}\")\n               print(f\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/267b.png\" alt=\"\u267b\" class=\"wp-smiley\" \/>  Skills reused: {len(reused)}\")\n              \n       except Exception as e:\n           elapsed = time.time() - start_time\n           results.append({\n               \"name\": task_info[\"name\"],\n               \"time\": elapsed,\n               \"evolved_count\": 0,\n               \"reused_count\": 0,\n               \"success\": False,\n               \"error\": str(e)\n           })\n           print(f\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/>  Error: {e}\")\n  \n   print(f\"n{'\u2550'*60}\")\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> PIPELINE SUMMARY\")\n   print(f\"{'\u2550'*60}\")\n  \n   total_time = sum(r[\"time\"] for r in results)\n   total_evolved = sum(r[\"evolved_count\"] for r in results)\n   total_reused = sum(r[\"reused_count\"] for r in results)\n   successes = sum(1 for r in results if r[\"success\"])\n  \n   print(f\"   Tasks completed: {successes}\/{len(results)}\")\n   print(f\"   Total time: {total_time:.1f}s\")\n   print(f\"   Total skills evolved: {total_evolved}\")\n   print(f\"   Total skills reused: {total_reused}\")\n  \n   if total_reused &gt; 0:\n       print(f\"n   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4a1.png\" alt=\"\ud83d\udca1\" class=\"wp-smiley\" \/> Skill reuse increased over the pipeline,\")\n       print(f\"      demonstrating the self-evolution loop!\")\n  \n   return results\n\n\npipeline_results = await run_pipeline()\n\n\ndef analyze_evolution_with_openai():\n   print(\"=\"*60)\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f916.png\" alt=\"\ud83e\udd16\" class=\"wp-smiley\" \/> AI-POWERED EVOLUTION ANALYSIS\")\n   print(\"=\"*60)\n  \n   skill_contents = {}\n   for skill_dir in SKILLS_DIR.iterdir():\n       if skill_dir.is_dir():\n           skill_md = skill_dir \/ \"SKILL.md\"\n           if skill_md.exists():\n               skill_contents[skill_dir.name] = skill_md.read_text()\n  \n   if not skill_contents:\n       print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ed.png\" alt=\"\ud83d\udced\" class=\"wp-smiley\" \/> No skills to analyze yet.\")\n       return\n  \n   skills_summary = \"nn\".join([\n       f\"### Skill: {name}n{content[:500]}\"\n       for name, content in skill_contents.items()\n   ])\n  \n   from openai import OpenAI\n   client = OpenAI(api_key=os.environ[\"OPENAI_API_KEY\"])\n  \n   response = client.chat.completions.create(\n       model=\"gpt-4o-mini\",\n       messages=[\n           {\n               \"role\": \"system\",\n               \"content\": (\n                   \"You are a skill evolution analyst for OpenSpace. \"\n                   \"Analyze the given skills and provide insights on: \"\n                   \"1) Skill coverage gaps, 2) Potential evolution paths, \"\n                   \"3) Skill interaction opportunities, 4) Recommended \"\n                   \"new skills to create. Be concise and actionable.\"\n               )\n           },\n           {\n               \"role\": \"user\",\n               \"content\": f\"Analyze these OpenSpace skills:nn{skills_summary}\"\n           }\n       ],\n       max_tokens=800\n   )\n  \n   analysis = response.choices[0].message.content\n   print(f\"n{analysis}\")\n  \n   usage = response.usage\n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Analysis token cost:\")\n   print(f\"   Input:  {usage.prompt_tokens} tokens\")\n   print(f\"   Output: {usage.completion_tokens} tokens\")\n   print(f\"   Total:  {usage.total_tokens} tokens\")\n\n\nanalyze_evolution_with_openai()\n\n\ndef demonstrate_token_savings():\n   print(\"=\"*60)\n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4b0.png\" alt=\"\ud83d\udcb0\" class=\"wp-smiley\" \/> TOKEN SAVINGS DEMONSTRATION\")\n   print(\"=\"*60)\n  \n   from openai import OpenAI\n   client = OpenAI(api_key=os.environ[\"OPENAI_API_KEY\"])\n  \n   task = (\n       \"computes monthly revenue, and generates a text report.\"\n   )\n  \n   cold_messages = [\n       {\"role\": \"system\", \"content\": \"You are a coding assistant. Write complete, working Python code.\"},\n       {\"role\": \"user\", \"content\": task}\n   ]\n  \n   cold_response = client.chat.completions.create(\n       model=\"gpt-4o-mini\", messages=cold_messages, max_tokens=1500\n   )\n   cold_tokens = cold_response.usage.total_tokens\n  \n   skill_context = \"\"\"\n## Available Skill: csv-data-analysis (v3, evolved from 28 executions)\n\n\nPattern: Use pandas read_csv with encoding detection. Group by\nmonth using pd.Grouper(key='date', freq='ME'). Sum revenue column.\nUse tabulate for text report formatting. Fallback: plain text with\nf-strings if tabulate unavailable.\n\n\nTemplate:\n```python\nimport pandas as pd\ndf = pd.read_csv(filepath)\ndf['date'] = pd.to_datetime(df['date'])\nmonthly = df.groupby(pd.Grouper(key='date', freq='ME'))['revenue'].sum()\n```\n\"\"\"\n  \n   warm_messages = [\n       {\n           \"role\": \"system\",\n           \"content\": (\n               \"You are a coding assistant with access to pre-evolved skills. \"\n               \"Reuse the provided skill patterns to write efficient code. \"\n               \"Only add what's missing \u2014 don't re-derive what the skill provides.nn\"\n               f\"{skill_context}\"\n           )\n       },\n       {\"role\": \"user\", \"content\": task}\n   ]\n  \n   warm_response = client.chat.completions.create(\n       model=\"gpt-4o-mini\", messages=warm_messages, max_tokens=1000\n   )\n   warm_tokens = warm_response.usage.total_tokens\n  \n   savings_pct = ((cold_tokens - warm_tokens) \/ cold_tokens) * 100\n  \n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f9ca.png\" alt=\"\ud83e\uddca\" class=\"wp-smiley\" \/> Cold Start (no skills):  {cold_tokens:&gt;6} tokens\")\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f525.png\" alt=\"\ud83d\udd25\" class=\"wp-smiley\" \/> Warm Start (with skill): {warm_tokens:&gt;6} tokens\")\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4b0.png\" alt=\"\ud83d\udcb0\" class=\"wp-smiley\" \/> Savings:                 {cold_tokens - warm_tokens:&gt;6} tokens ({savings_pct:.1f}%)\")\n  \n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Cold response length: {len(cold_response.choices[0].message.content)} chars\")\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Warm response length: {len(warm_response.choices[0].message.content)} chars\")\n  \n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4a1.png\" alt=\"\ud83d\udca1\" class=\"wp-smiley\" \/> In OpenSpace's GDPVal benchmark, skill reuse achieved\")\n   print(f\"   an average 45.9% token reduction across 50 professional tasks.\")\n   print(f\"   The warm start also produces higher quality output because\")\n   print(f\"   skills encode battle-tested patterns from real executions.\")\n\n\ndemonstrate_token_savings()\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f389.png\" alt=\"\ud83c\udf89\" class=\"wp-smiley\" \/> Tutorial complete! Star the repo if this helped:\")\nprint(\"   <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2b50.png\" alt=\"\u2b50\" class=\"wp-smiley\" \/> https:\/\/github.com\/HKUDS\/OpenSpace\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We run a three-task pipeline sequentially: a CSV analyzer, a text report generator, and a data quality checker, tracking how skills accumulate and reuse increases with each successive task, mirroring the GDPVal benchmark\u2019s Phase 1 design. We use the OpenAI API to perform an AI-powered analysis of our evolved skill library, identifying coverage gaps, potential evolution paths, and recommended new skills to create. We close with a direct cold-versus-warm token comparison that measures real savings by sending the same task with and without skill context, demonstrating concretely how pre-evolved patterns reduce both token cost and response length.<\/p>\n<p>In conclusion, we saw firsthand how OpenSpace transforms the way AI agents operate, shifting them from stateless tools that reason from scratch with every task into self-improving systems that accumulate expertise with each task. We observed the cold-to-warm transition, in which skills learned from earlier executions reduce both cost and latency in subsequent runs. We built and registered our own custom skills to seed domain knowledge, and we use OpenAI\u2019s API to analyze evolution patterns across our skill library. The key insight we take away is that OpenSpace treats skills not as static configuration files but as living entities that auto-repair when tools break, auto-improve when better patterns emerge, and auto-propagate when connected to the cloud community. Whether we integrate OpenSpace into an existing agent like Claude Code or Codex via its MCP server, or use it standalone as an AI co-worker, we now have the foundation to build agents that genuinely get better and cheaper over time.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out\u00a0the\u00a0<a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Agentic%20AI%20Codes\/openspace_self_evolving_skill_evolution_engine_token_efficiency_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Full Notebook here<\/strong><\/a><strong>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">120k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/03\/24\/a-coding-implementation-to-design-self-evolving-skill-engine-with-openspace-for-skill-learning-token-efficiency-and-collective-intelligence\/\">A Coding Implementation to Design Self-Evolving Skill Engine with OpenSpace for Skill Learning, Token Efficiency, and Collective Intelligence<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we explore O&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-608","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/608","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=608"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/608\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=608"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=608"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=608"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}