{"id":525,"date":"2026-03-08T14:46:10","date_gmt":"2026-03-08T06:46:10","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=525"},"modified":"2026-03-08T14:46:10","modified_gmt":"2026-03-08T06:46:10","slug":"building-next-gen-agentic-ai-a-complete-framework-for-cognitive-blueprint-driven-runtime-agents-with-memory-tools-and-validation","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=525","title":{"rendered":"Building Next-Gen Agentic AI: A Complete Framework for Cognitive Blueprint Driven Runtime Agents with Memory Tools and Validation"},"content":{"rendered":"<p>In this tutorial, we build a complete cognitive blueprint and runtime agent framework. We define structured blueprints for identity, goals, planning, memory, validation, and tool access, and use them to create agents that not only respond but also plan, execute, validate, and systematically improve their outputs. Along the tutorial, we show how the same runtime engine can support multiple agent personalities and behaviors through blueprint portability, making the overall design modular, extensible, and practical for advanced agentic AI experimentation.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">import json, yaml, time, math, textwrap, datetime, getpass, os\nfrom typing import Any, Callable, Dict, List, Optional\nfrom dataclasses import dataclass, field\nfrom enum import 
Enum\n\n\nfrom openai import OpenAI\nfrom pydantic import BaseModel\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.table import Table\nfrom rich.tree import Tree\n\n\ntry:\n   from google.colab import userdata\n   OPENAI_API_KEY = userdata.get('OPENAI_API_KEY')\nexcept Exception:\n   OPENAI_API_KEY = getpass.getpass(\"\ud83d\udd11 Enter your OpenAI API key: \")\n\n\nos.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\nclient = OpenAI(api_key=OPENAI_API_KEY)\nconsole = Console()\n\n\nclass PlanningStrategy(str, Enum):\n   SEQUENTIAL   = \"sequential\"\n   HIERARCHICAL = \"hierarchical\"\n   REACTIVE     = \"reactive\"\n\n\nclass MemoryType(str, Enum):\n   SHORT_TERM = \"short_term\"\n   EPISODIC   = \"episodic\"\n   PERSISTENT = \"persistent\"\n\n\nclass BlueprintIdentity(BaseModel):\n   name: str\n   version: str = \"1.0.0\"\n   description: str\n   author: str = \"unknown\"\n\n\nclass BlueprintMemory(BaseModel):\n   type: MemoryType = MemoryType.SHORT_TERM\n   window_size: int = 10\n   summarize_after: int = 20\n\n\nclass BlueprintPlanning(BaseModel):\n   strategy: PlanningStrategy = PlanningStrategy.SEQUENTIAL\n   max_steps: int = 8\n   max_retries: int = 2\n   think_before_acting: bool = True\n\n\nclass BlueprintValidation(BaseModel):\n   require_reasoning: bool = True\n   min_response_length: int = 10\n   forbidden_phrases: List[str] = []\n\n\nclass CognitiveBlueprint(BaseModel):\n   identity: BlueprintIdentity\n   goals: List[str]\n   constraints: List[str] = []\n   tools: List[str] = []\n   memory: BlueprintMemory = BlueprintMemory()\n   planning: BlueprintPlanning = BlueprintPlanning()\n   validation: BlueprintValidation = BlueprintValidation()\n   system_prompt_extra: str = \"\"\n\n\ndef load_blueprint_from_yaml(yaml_str: str) -&gt; CognitiveBlueprint:\n   return 
CognitiveBlueprint(**yaml.safe_load(yaml_str))\n\n\nRESEARCH_AGENT_YAML = \"\"\"\nidentity:\n name: ResearchBot\n version: 1.2.0\n description: Answers research questions using calculation and reasoning\n author: Auton Framework Demo\ngoals:\n - Answer user questions accurately using available tools\n - Show step-by-step reasoning for all answers\n - Cite the method used for each calculation\nconstraints:\n - Never fabricate numbers or statistics\n - Always validate mathematical results before reporting\n - Do not answer questions outside your tool capabilities\ntools:\n - calculator\n - unit_converter\n - date_calculator\n - search_wikipedia_stub\nmemory:\n type: episodic\n window_size: 12\n summarize_after: 30\nplanning:\n strategy: sequential\n max_steps: 6\n max_retries: 2\n think_before_acting: true\nvalidation:\n require_reasoning: true\n min_response_length: 20\n forbidden_phrases:\n   - \"I don't know\"\n   - \"I cannot determine\"\n\"\"\"\n\n\nDATA_ANALYST_YAML = \"\"\"\nidentity:\n name: DataAnalystBot\n version: 2.0.0\n description: Performs statistical analysis and data summarization\n author: Auton Framework Demo\ngoals:\n - Compute descriptive statistics for given data\n - Identify trends and anomalies\n - Present findings clearly with numbers\nconstraints:\n - Only work with numerical data\n - Always report uncertainty when sample size is small (&lt; 5 items)\ntools:\n - calculator\n - statistics_engine\n - list_sorter\nmemory:\n type: short_term\n window_size: 6\nplanning:\n strategy: hierarchical\n max_steps: 10\n max_retries: 3\n think_before_acting: true\nvalidation:\n require_reasoning: true\n min_response_length: 30\n forbidden_phrases: []\n\"\"\"\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the core environment and define the cognitive blueprint, which structures how an agent thinks and behaves. We create strongly typed models for identity, memory configuration, planning strategy, and validation rules using Pydantic and enums. 
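To see the typed-blueprint idea in isolation, here is a standard-library-only sketch of the same pattern (the tutorial itself relies on Pydantic for validation); `MiniBlueprint` and its field names are illustrative, not part of the framework:

```python
# Stdlib-only sketch of a typed, validated agent config. The real framework
# uses Pydantic models; here validation is a manual method that collects
# issues into a list, mirroring the Validator style used later.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class PlanningStrategy(str, Enum):
    SEQUENTIAL = "sequential"
    HIERARCHICAL = "hierarchical"
    REACTIVE = "reactive"

@dataclass
class MiniBlueprint:
    name: str
    goals: List[str]
    strategy: PlanningStrategy = PlanningStrategy.SEQUENTIAL
    constraints: List[str] = field(default_factory=list)

    def validate(self) -> List[str]:
        # Return a list of problems instead of raising.
        issues = []
        if not self.name:
            issues.append("name must be non-empty")
        if not self.goals:
            issues.append("at least one goal is required")
        return issues

bp = MiniBlueprint(name="ResearchBot", goals=["answer accurately"])
print(bp.strategy.value)   # sequential
print(bp.validate())       # []
print(MiniBlueprint(name="", goals=[]).validate())
```

The payoff of the pattern is the same as with Pydantic: a malformed config is rejected up front, before any runtime component touches it.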
We also define two YAML-based blueprints, allowing us to configure different agent personalities and capabilities without changing the underlying runtime system.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">@dataclass\nclass ToolSpec:\n   name: str\n   description: str\n   parameters: Dict[str, str]\n   function: Callable\n   returns: str\n\n\nclass ToolRegistry:\n   def __init__(self):\n       self._tools: Dict[str, ToolSpec] = {}\n\n\n   def register(self, name: str, description: str,\n                parameters: Dict[str, str], returns: str):\n       def decorator(fn: Callable) -&gt; Callable:\n           self._tools[name] = ToolSpec(name, description, parameters, fn, returns)\n           return fn\n       return decorator\n\n\n   def get(self, name: str) -&gt; Optional[ToolSpec]:\n       return self._tools.get(name)\n\n\n   def call(self, name: str, **kwargs) -&gt; Any:\n       spec = self._tools.get(name)\n       if not spec:\n           raise ValueError(f\"Tool '{name}' not found in registry\")\n       return spec.function(**kwargs)\n\n\n   def get_tool_descriptions(self, allowed: List[str]) -&gt; str:\n       lines = []\n       for name in allowed:\n           spec = self._tools.get(name)\n           if spec:\n               params = \", \".join(f\"{k}: {v}\" for k, v in spec.parameters.items())\n               lines.append(\n                   f\"\u2022 
{spec.name}({params})\\n\"\n                   f\"  \u2192 {spec.description}\\n\"\n                   f\"  Returns: {spec.returns}\"\n               )\n       return \"\\n\".join(lines)\n\n\n   def list_tools(self) -&gt; List[str]:\n       return list(self._tools.keys())\n\n\nregistry = ToolRegistry()\n\n\n@registry.register(\n   name=\"calculator\",\n   description=\"Evaluates a safe mathematical expression\",\n   parameters={\"expression\": \"A math expression string, e.g. '2 ** 10 + 5 * 3'\"},\n   returns=\"Numeric result as a string\"\n)\ndef calculator(expression: str) -&gt; str:\n   try:\n       allowed = {k: v for k, v in math.__dict__.items() if not k.startswith(\"_\")}\n       allowed.update({\"abs\": abs, \"round\": round, \"pow\": pow})\n       return str(eval(expression, {\"__builtins__\": {}}, allowed))\n   except Exception as e:\n       return f\"Error: {e}\"\n\n\n@registry.register(\n   name=\"unit_converter\",\n   description=\"Converts between common units of measurement\",\n   parameters={\n       \"value\": \"Numeric value to convert\",\n       \"from_unit\": \"Source unit (km, miles, kg, lbs, celsius, fahrenheit, liters, gallons, meters, feet)\",\n       \"to_unit\": \"Target unit\"\n   },\n   returns=\"Converted value as string with units\"\n)\ndef unit_converter(value: float, from_unit: str, to_unit: str) -&gt; str:\n   conversions = {\n       (\"km\", \"miles\"): lambda x: x * 0.621371,\n       (\"miles\", \"km\"): lambda x: x * 1.60934,\n       (\"kg\", \"lbs\"):   lambda x: x * 2.20462,\n       (\"lbs\", \"kg\"):   lambda x: x \/ 2.20462,\n       (\"celsius\", \"fahrenheit\"): lambda x: x * 9\/5 + 32,\n       (\"fahrenheit\", \"celsius\"): lambda x: (x - 32) * 5\/9,\n       (\"liters\", \"gallons\"): lambda x: x * 0.264172,\n       (\"gallons\", \"liters\"): lambda x: x * 3.78541,\n       (\"meters\", \"feet\"): lambda x: x * 3.28084,\n       (\"feet\", \"meters\"): lambda x: x \/ 3.28084,\n   }\n   key = (from_unit.lower(), to_unit.lower())\n   if 
key in conversions:\n       return f\"{conversions[key](float(value)):.4f} {to_unit}\"\n   return f\"Conversion from {from_unit} to {to_unit} not supported\"\n\n\n@registry.register(\n   name=\"date_calculator\",\n   description=\"Calculates days between two dates, or adds\/subtracts days from a date\",\n   parameters={\n       \"operation\": \"'days_between' or 'add_days'\",\n       \"date1\": \"Date string in YYYY-MM-DD format\",\n       \"date2\": \"Second date for days_between (YYYY-MM-DD), or number of days for add_days\"\n   },\n   returns=\"Result as string\"\n)\ndef date_calculator(operation: str, date1: str, date2: str) -&gt; str:\n   try:\n       d1 = datetime.datetime.strptime(date1, \"%Y-%m-%d\")\n       if operation == \"days_between\":\n           d2 = datetime.datetime.strptime(date2, \"%Y-%m-%d\")\n           return f\"{abs((d2 - d1).days)} days between {date1} and {date2}\"\n       elif operation == \"add_days\":\n           result = d1 + datetime.timedelta(days=int(date2))\n           return f\"{result.strftime('%Y-%m-%d')} (added {date2} days to {date1})\"\n       return f\"Unknown operation: {operation}\"\n   except Exception as e:\n       return f\"Error: {e}\"\n\n\n@registry.register(\n   name=\"search_wikipedia_stub\",\n   description=\"Returns a stub summary for well-known topics (demo \u2014 no live internet)\",\n   parameters={\"topic\": \"Topic to look up\"},\n   returns=\"Short text summary\"\n)\ndef search_wikipedia_stub(topic: str) -&gt; str:\n   stubs = {\n       \"openai\": \"OpenAI is an AI research company founded in 2015. It created GPT-4 and the ChatGPT product.\",\n   }\n   for key, val in stubs.items():\n       if key in topic.lower():\n           return val\n   return f\"No stub found for '{topic}'. In production, this would query Wikipedia's API.\"<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We implement the tool registry that allows agents to discover and use external capabilities dynamically. 
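The registration mechanism itself reduces to a decorator that stores each function in a dict keyed by name. A minimal stand-alone sketch (`MiniRegistry` and `adder` are illustrative names, not part of the framework):

```python
# Condensed sketch of the decorator-registration pattern behind ToolRegistry:
# register() returns a decorator, so plain functions become discoverable tools.
from typing import Any, Callable, Dict

class MiniRegistry:
    def __init__(self):
        self._tools: Dict[str, Callable] = {}

    def register(self, name: str, description: str):
        def decorator(fn: Callable) -> Callable:
            fn.description = description   # attach metadata to the function
            self._tools[name] = fn
            return fn                      # the function stays usable directly
        return decorator

    def call(self, name: str, **kwargs) -> Any:
        if name not in self._tools:
            raise ValueError(f"Tool '{name}' not found")
        return self._tools[name](**kwargs)

reg = MiniRegistry()

@reg.register("adder", "Adds two numbers")
def adder(a: float, b: float) -> float:
    return a + b

print(reg.call("adder", a=2, b=3))   # 5
```

Because the decorator returns the original function, tools remain ordinary callables for testing while also being invocable by name at runtime.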
We design a structured system in which tools are registered with metadata, including parameters, descriptions, and return values. We also implement several practical tools, such as a calculator, unit converter, date calculator, and a Wikipedia search stub that the agents can invoke during execution.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">@registry.register(\n   name=\"statistics_engine\",\n   description=\"Computes descriptive statistics on a list of numbers\",\n   parameters={\"numbers\": \"Comma-separated list of numbers, e.g. 
'4,8,15,16,23,42'\"},\n   returns=\"JSON with mean, median, std_dev, min, max, count\"\n)\ndef statistics_engine(numbers: str) -&gt; str:\n   try:\n       nums = [float(x.strip()) for x in numbers.split(\",\")]\n       n = len(nums)\n       mean = sum(nums) \/ n\n       sorted_nums = sorted(nums)\n       mid = n \/\/ 2\n       median = sorted_nums[mid] if n % 2 else (sorted_nums[mid-1] + sorted_nums[mid]) \/ 2\n       std_dev = math.sqrt(sum((x - mean) ** 2 for x in nums) \/ n)\n       return json.dumps({\n           \"count\": n, \"mean\": round(mean, 4), \"median\": round(median, 4),\n           \"std_dev\": round(std_dev, 4), \"min\": min(nums),\n           \"max\": max(nums), \"range\": max(nums) - min(nums)\n       }, indent=2)\n   except Exception as e:\n       return f\"Error: {e}\"\n\n\n@registry.register(\n   name=\"list_sorter\",\n   description=\"Sorts a comma-separated list of numbers\",\n   parameters={\"numbers\": \"Comma-separated numbers\", \"order\": \"'asc' or 'desc'\"},\n   returns=\"Sorted comma-separated list\"\n)\ndef list_sorter(numbers: str, order: str = \"asc\") -&gt; str:\n   nums = [float(x.strip()) for x in numbers.split(\",\")]\n   nums.sort(reverse=(order == \"desc\"))\n   return \", \".join(str(n) for n in nums)\n\n\n@dataclass\nclass MemoryEntry:\n   role: str\n   content: str\n   timestamp: float = field(default_factory=time.time)\n   metadata: Dict = field(default_factory=dict)\n\n\nclass MemoryManager:\n   def __init__(self, config: BlueprintMemory, llm_client: OpenAI):\n       self.config = config\n       self.client = llm_client\n       self._history: List[MemoryEntry] = []\n       self._summary: str = \"\"\n\n\n   def add(self, role: str, content: str, metadata: Dict = None):\n       self._history.append(MemoryEntry(role=role, content=content, metadata=metadata or {}))\n       if (self.config.type == MemoryType.EPISODIC and\n               len(self._history) &gt; self.config.summarize_after):\n           
self._compress_memory()\n\n\n   def _compress_memory(self):\n       to_compress = self._history[:-self.config.window_size]\n       self._history = self._history[-self.config.window_size:]\n       text = \"\\n\".join(f\"{e.role}: {e.content[:200]}\" for e in to_compress)\n       try:\n           resp = self.client.chat.completions.create(\n               model=\"gpt-4o-mini\",\n               messages=[{\"role\": \"user\", \"content\":\n                   f\"Summarize this conversation history in 3 sentences:\\n{text}\"}],\n               max_tokens=150\n           )\n           self._summary += \" \" + resp.choices[0].message.content.strip()\n       except Exception:\n           self._summary += f\" [compressed {len(to_compress)} messages]\"\n\n\n   def get_messages(self, system_prompt: str) -&gt; List[Dict]:\n       messages = [{\"role\": \"system\", \"content\": system_prompt}]\n       if self._summary:\n           messages.append({\"role\": \"system\",\n               \"content\": f\"[Memory Summary]: {self._summary.strip()}\"})\n       for entry in self._history[-self.config.window_size:]:\n           messages.append({\n               \"role\": entry.role if entry.role != \"tool\" else \"assistant\",\n               \"content\": entry.content\n           })\n       return messages\n\n\n   def clear(self):\n       self._history = []\n       self._summary = \"\"\n\n\n   @property\n   def message_count(self) -&gt; int:\n       return len(self._history)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We extend the tool ecosystem and introduce the memory management layer that stores conversation history and compresses it when necessary. We implement statistical tools and sorting utilities that enable the data analysis agent to perform structured numerical operations. 
At the same time, we design a memory system that tracks interactions, summarizes long histories, and provides contextual messages to the language model.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">@dataclass\nclass PlanStep:\n   step_id: int\n   description: str\n   tool: Optional[str]\n   tool_args: Dict[str, Any]\n   reasoning: str\n\n\n@dataclass\nclass Plan:\n   task: str\n   steps: List[PlanStep]\n   strategy: PlanningStrategy\n\n\nclass Planner:\n   def __init__(self, blueprint: CognitiveBlueprint,\n                registry: ToolRegistry, llm_client: OpenAI):\n       self.blueprint = blueprint\n       self.registry  = registry\n       self.client    = llm_client\n\n\n   def _build_planner_prompt(self) -&gt; str:\n       bp = self.blueprint\n       return textwrap.dedent(f\"\"\"\n           You are {bp.identity.name}, version {bp.identity.version}.\n           {bp.identity.description}\n\n\n           ## Your Goals:\n           {chr(10).join(f'  - {g}' for g in bp.goals)}\n\n\n           ## Your Constraints:\n           {chr(10).join(f'  - {c}' for c in bp.constraints)}\n\n\n           ## Available Tools:\n           {self.registry.get_tool_descriptions(bp.tools)}\n\n\n           ## Planning Strategy: {bp.planning.strategy}\n           ## Max Steps: {bp.planning.max_steps}\n\n\n           Given a user task, produce a JSON execution plan with this exact structure:\n       
    {{\n             \"steps\": [\n               {{\n                 \"step_id\": 1,\n                 \"description\": \"What this step does\",\n                 \"tool\": \"tool_name or null if no tool needed\",\n                 \"tool_args\": {{\"arg1\": \"value1\"}},\n                 \"reasoning\": \"Why this step is needed\"\n               }}\n             ]\n           }}\n\n\n           Rules:\n           - Only use tools listed above\n           - Set tool to null for pure reasoning steps\n           - Keep steps &lt;= {bp.planning.max_steps}\n           - Return ONLY valid JSON, no markdown fences\n           {bp.system_prompt_extra}\n       \"\"\").strip()\n\n\n   def plan(self, task: str, memory: MemoryManager) -&gt; Plan:\n       system_prompt = self._build_planner_prompt()\n       messages = memory.get_messages(system_prompt)\n       messages.append({\"role\": \"user\", \"content\":\n           f\"Create a plan to complete this task: {task}\"})\n       resp = self.client.chat.completions.create(\n           model=\"gpt-4o-mini\", messages=messages,\n           max_tokens=1200, temperature=0.2\n       )\n       raw = resp.choices[0].message.content.strip()\n       raw = raw.replace(\"```json\", \"\").replace(\"```\", \"\").strip()\n       data = json.loads(raw)\n       steps = [\n           PlanStep(\n               step_id=s[\"step_id\"], description=s[\"description\"],\n               tool=s.get(\"tool\"), tool_args=s.get(\"tool_args\", {}),\n               reasoning=s.get(\"reasoning\", \"\")\n           )\n           for s in data[\"steps\"]\n       ]\n       return Plan(task=task, steps=steps, strategy=self.blueprint.planning.strategy)\n\n\n@dataclass\nclass StepResult:\n   step_id: int\n   success: bool\n   output: str\n   tool_used: Optional[str]\n   error: Optional[str] = None\n\n\n@dataclass\nclass ExecutionTrace:\n   plan: Plan\n   results: List[StepResult]\n   final_answer: str\n\n\nclass Executor:\n   def __init__(self, blueprint: 
CognitiveBlueprint,\n                registry: ToolRegistry, llm_client: OpenAI):\n       self.blueprint = blueprint\n       self.registry  = registry\n       self.client    = llm_client<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We implement the planning system that transforms a user task into a structured execution plan composed of multiple steps. We design a planner that instructs the language model to produce a JSON plan containing reasoning, tool selection, and arguments for each step. This planning layer allows the agent to break complex problems into smaller executable actions before performing them.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">  def execute_plan(self, plan: Plan, memory: MemoryManager,\n                    verbose: bool = True) -&gt; ExecutionTrace:\n       results: List[StepResult] = []\n       if verbose:\n           console.print(f\"\\n[bold yellow]\u26a1 Executing:[\/] {plan.task}\")\n           console.print(f\"   Strategy: {plan.strategy} | Steps: {len(plan.steps)}\")\n\n\n       for step in plan.steps:\n           if verbose:\n               console.print(f\"\\n  [cyan]Step {step.step_id}:[\/] {step.description}\")\n           try:\n               if step.tool and step.tool != \"null\":\n                   if verbose:\n           
            console.print(f\"   \ud83d\udd27 Tool: [green]{step.tool}[\/] | Args: {step.tool_args}\")\n                   output = self.registry.call(step.tool, **step.tool_args)\n                   result = StepResult(step.step_id, True, str(output), step.tool)\n                   if verbose:\n                       console.print(f\"   \u2705 Result: {output}\")\n               else:\n                   context_text = \"\\n\".join(\n                       f\"Step {r.step_id} result: {r.output}\" for r in results)\n                   prompt = (\n                       f\"Previous results:\\n{context_text}\\n\\n\"\n                       f\"Now complete this step: {step.description}\\n\"\n                       f\"Reasoning hint: {step.reasoning}\"\n                   ) if context_text else (\n                       f\"Complete this step: {step.description}\\n\"\n                       f\"Reasoning hint: {step.reasoning}\"\n                   )\n                   sys_prompt = (\n                       f\"You are {self.blueprint.identity.name}. \"\n                       f\"{self.blueprint.identity.description}. 
\"\n                       f\"Constraints: {'; '.join(self.blueprint.constraints)}\"\n                   )\n                   resp = self.client.chat.completions.create(\n                       model=\"gpt-4o-mini\",\n                       messages=[\n                           {\"role\": \"system\", \"content\": sys_prompt},\n                           {\"role\": \"user\",   \"content\": prompt}\n                       ],\n                       max_tokens=500, temperature=0.3\n                   )\n                   output = resp.choices[0].message.content.strip()\n                   result = StepResult(step.step_id, True, output, None)\n                   if verbose:\n                       preview = output[:120] + \"...\" if len(output) &gt; 120 else output\n                       console.print(f\"   \ud83e\udd14 Reasoning: {preview}\")\n           except Exception as e:\n               result = StepResult(step.step_id, False, \"\", step.tool, str(e))\n               if verbose:\n                   console.print(f\"   \u274c Error: {e}\")\n           results.append(result)\n\n\n       final_answer = self._synthesize(plan, results, memory)\n       return ExecutionTrace(plan=plan, results=results, final_answer=final_answer)\n\n\n   def _synthesize(self, plan: Plan, results: List[StepResult],\n                   memory: MemoryManager) -&gt; str:\n       steps_summary = \"\\n\".join(\n           f\"Step {r.step_id} ({'\u2705' if r.success else '\u274c'}): 
{r.output[:300]}\"\n           for r in results\n       )\n       synthesis_prompt = (\n           f\"Original task: {plan.task}\\n\\n\"\n           f\"Step results:\\n{steps_summary}\\n\\n\"\n           f\"Provide a clear, complete final answer. Integrate all step results.\"\n       )\n       sys_prompt = (\n           f\"You are {self.blueprint.identity.name}. \"\n           + (\"Always show your reasoning. \" if self.blueprint.validation.require_reasoning else \"\")\n           + f\"Goals: {'; '.join(self.blueprint.goals)}\"\n       )\n       messages = memory.get_messages(sys_prompt)\n       messages.append({\"role\": \"user\", \"content\": synthesis_prompt})\n       resp = self.client.chat.completions.create(\n           model=\"gpt-4o-mini\", messages=messages,\n           max_tokens=600, temperature=0.3\n       )\n       return resp.choices[0].message.content.strip()\n\n\n@dataclass\nclass ValidationResult:\n   passed: bool\n   issues: List[str]\n   score: float\n\n\nclass Validator:\n   def __init__(self, blueprint: CognitiveBlueprint, llm_client: OpenAI):\n       self.blueprint = blueprint\n       self.client    = llm_client\n\n\n   def validate(self, answer: str, task: str,\n                use_llm_check: bool = False) -&gt; ValidationResult:\n       issues = []\n       v = self.blueprint.validation\n\n\n       if len(answer) &lt; v.min_response_length:\n           issues.append(f\"Response too short: {len(answer)} chars (min: {v.min_response_length})\")\n\n\n       answer_lower = answer.lower()\n       for phrase in v.forbidden_phrases:\n           if phrase.lower() in answer_lower:\n               issues.append(f\"Forbidden phrase detected: '{phrase}'\")\n\n\n       if v.require_reasoning:\n           indicators = [\"because\", \"therefore\", \"since\", \"step\", \"first\",\n                         \"result\", \"calculated\", \"computed\", \"found that\"]\n           if not any(ind in answer_lower for ind in indicators):\n               issues.append(\"Response 
lacks visible reasoning or explanation\")\n\n\n       if use_llm_check:\n           issues.extend(self._llm_quality_check(answer, task))\n\n\n       return ValidationResult(passed=len(issues) == 0,\n                               issues=issues,\n                               score=max(0.0, 1.0 - len(issues) * 0.25))\n\n\n   def _llm_quality_check(self, answer: str, task: str) -&gt; List[str]:\n       prompt = (\n           f\"Task: {task}\\n\\nAnswer: {answer[:500]}\\n\\n\"\n           f'Does this answer address the task? Reply JSON: {{\"on_topic\": true\/false, \"issue\": \"...\"}}'\n       )\n       try:\n           resp = self.client.chat.completions.create(\n               model=\"gpt-4o-mini\",\n               messages=[{\"role\": \"user\", \"content\": prompt}],\n               max_tokens=100\n           )\n           raw = resp.choices[0].message.content.strip().replace(\"```json\",\"\").replace(\"```\",\"\")\n           data = json.loads(raw)\n           if not data.get(\"on_topic\", True):\n               return [f\"LLM quality check: {data.get('issue', 'off-topic')}\"]\n       except Exception:\n           pass\n       return []<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We build the executor and validation logic that actually performs the steps generated by the planner. We implement a system that can either call registered tools or perform reasoning through the language model, depending on the step definition. 
We also add a validator that checks the final response against blueprint constraints such as minimum length, reasoning requirements, and forbidden phrases.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">@dataclass\nclass AgentResponse:\n   agent_name: str\n   task: str\n   final_answer: str\n   trace: ExecutionTrace\n   validation: ValidationResult\n   retries: int\n   total_steps: int\n\n\nclass RuntimeEngine:\n   def __init__(self, blueprint: CognitiveBlueprint,\n                registry: ToolRegistry, llm_client: OpenAI):\n       self.blueprint = blueprint\n       self.memory    = MemoryManager(blueprint.memory, llm_client)\n       self.planner   = Planner(blueprint, registry, llm_client)\n       self.executor  = Executor(blueprint, registry, llm_client)\n       self.validator = Validator(blueprint, llm_client)\n\n\n   def run(self, task: str, verbose: bool = True) -&gt; AgentResponse:\n       bp = self.blueprint\n       if verbose:\n           console.print(Panel(\n               f\"[bold]Agent:[\/] {bp.identity.name} v{bp.identity.version}\\n\"\n               f\"[bold]Task:[\/] {task}\\n\"\n               f\"[bold]Strategy:[\/] {bp.planning.strategy} | \"\n               f\"Max Steps: {bp.planning.max_steps} | \"\n               f\"Max Retries: {bp.planning.max_retries}\",\n               title=\"\ud83d\ude80 Runtime Engine Starting\", border_style=\"blue\"\n           ))\n\n\n       self.memory.add(\"user\", task)\n       retries, trace, validation = 0, None, None\n\n\n       for attempt in range(bp.planning.max_retries + 1):\n           if attempt &gt; 0 and verbose:\n               console.print(f\"\\n[yellow]\u27f3 Retry {attempt}\/{bp.planning.max_retries}[\/]\")\n               console.print(f\"  Issues: {', '.join(validation.issues)}\")\n\n\n           if verbose:\n               console.print(\"\\n[bold magenta]\ud83d\udccb Phase 1: Planning...[\/]\")\n           try:\n               plan = self.planner.plan(task, self.memory)\n               if verbose:\n                   tree = Tree(f\"[bold]Plan ({len(plan.steps)} steps)[\/]\")\n                   for s in plan.steps:\n                       icon = \"\ud83d\udd27\" if s.tool else \"\ud83e\udd14\"\n                       branch = tree.add(f\"{icon} Step {s.step_id}: {s.description}\")\n                       if s.tool:\n                           branch.add(f\"[green]Tool:[\/] {s.tool}\")\n                           branch.add(f\"[yellow]Args:[\/] {s.tool_args}\")\n                   console.print(tree)\n           except Exception as e:\n               if verbose: console.print(f\"[red]Planning failed:[\/] {e}\")\n               break\n\n\n           if verbose:\n               console.print(\"\\n[bold magenta]\u26a1 Phase 2: Executing...[\/]\")\n           trace = self.executor.execute_plan(plan, self.memory, verbose=verbose)\n\n\n           if verbose:\n               console.print(\"\\n[bold magenta]\u2705 Phase 3: Validating...[\/]\")\n           validation = self.validator.validate(trace.final_answer, task)\n\n\n           if verbose:\n               status = \"[green]PASSED[\/]\" if validation.passed else \"[red]FAILED[\/]\"\n               console.print(f\"  Validation: {status} | Score: {validation.score:.2f}\")\n               for issue in validation.issues:\n                   console.print(f\"  \u26a0  {issue}\")\n\n\n           if validation.passed:\n               break\n\n\n           retries += 1\n           self.memory.add(\"assistant\", trace.final_answer)\n           self.memory.add(\"user\",\n               f\"Your previous answer had issues: {'; '.join(validation.issues)}. 
\"\n               f\"Please improve.\"\n           )\n\n\n       if trace:\n           self.memory.add(\"assistant\", trace.final_answer)\n\n\n       if verbose:\n           console.print(Panel(\n               trace.final_answer if trace else \"No answer generated\",\n               title=f\"\ud83c\udfaf Final Answer \u2014 {bp.identity.name}\",\n               border_style=\"green\"\n           ))\n\n\n       return AgentResponse(\n           agent_name=bp.identity.name, task=task,\n           final_answer=trace.final_answer if trace else \"\",\n           trace=trace, validation=validation,\n           retries=retries,\n           total_steps=len(trace.results) if trace else 0\n       )\n\n\n   def reset_memory(self):\n       self.memory.clear()\n\n\ndef build_engine(blueprint_yaml: str, registry: ToolRegistry,\n                llm_client: OpenAI) -&gt; RuntimeEngine:\n   return RuntimeEngine(load_blueprint_from_yaml(blueprint_yaml), registry, llm_client)\n\n\nif __name__ == \"__main__\":\n\n\n   print(\"\\n\" + \"=\"*60)\n   print(\"DEMO 1: ResearchBot\")\n   print(\"=\"*60)\n   research_engine = build_engine(RESEARCH_AGENT_YAML, registry, client)\n   research_engine.run(\n       task=(\n           \"how many steps of 20cm height would that be? Also, if I burn 0.15 \"\n           \"calories per step, what's the total calorie burn? Show all calculations.\"\n       )\n   )\n\n\n   print(\"\\n\" + \"=\"*60)\n   print(\"DEMO 2: DataAnalystBot\")\n   print(\"=\"*60)\n   analyst_engine = build_engine(DATA_ANALYST_YAML, registry, client)\n   analyst_engine.run(\n       task=(\n           \"Analyze this dataset of monthly sales figures (in thousands): \"\n           \"142, 198, 173, 155, 221, 189, 203, 167, 244, 198, 212, 231. 
\"\n           \"Compute key statistics, identify the best and worst months, \"\n           \"and calculate growth from first to last month.\"\n       )\n   )\n\n\n   print(\"\\n\" + \"=\"*60)\n   print(\"PORTABILITY DEMO: Same task \u2192 2 different blueprints\")\n   print(\"=\"*60)\n   SHARED_TASK = \"Calculate 15% of 2,500 and tell me the result.\"\n\n\n   responses = {}\n   for name, yaml_str in [\n       (\"ResearchBot\",    RESEARCH_AGENT_YAML),\n       (\"DataAnalystBot\", DATA_ANALYST_YAML),\n   ]:\n       eng = build_engine(yaml_str, registry, client)\n       responses[name] = eng.run(SHARED_TASK, verbose=False)\n\n\n   table = Table(title=\"\ud83d\udd04 Blueprint Portability\", show_header=True, show_lines=True)\n   table.add_column(\"Agent\",   style=\"cyan\",   width=18)\n   table.add_column(\"Steps\",   style=\"yellow\", width=6)\n   table.add_column(\"Valid?\",  width=7)\n   table.add_column(\"Score\",   width=6)\n   table.add_column(\"Answer Preview\", width=55)\n\n\n   for name, r in responses.items():\n       table.add_row(\n           name, str(r.total_steps),\n           \"\u2705\" if r.validation.passed else \"\u274c\",\n           f\"{r.validation.score:.2f}\",\n           r.final_answer[:140] + \"...\"\n       )\n   console.print(table)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We assemble the runtime engine that orchestrates planning, execution, memory updates, and validation into a complete autonomous workflow. We run multiple demonstrations showing how different blueprints produce different behaviors while using the same core architecture. 
Finally, we illustrate blueprint portability by running the same task across two agents and comparing their results.<\/p>\n<p>In conclusion, we created a fully functional Auton-style runtime system that integrates cognitive blueprints, tool registries, memory management, planning, execution, and validation into a cohesive framework. We demonstrated how different agents can share the same underlying architecture while behaving differently through customized blueprints, highlighting the design\u2019s flexibility and power. Through this implementation, we not only explored how modern runtime agents operate but also built a strong foundation that we can extend further with richer tools, stronger memory systems, and more advanced autonomous behaviors.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the<a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/AI%20Agents%20Codes\/cognitive_blueprint_runtime_agents_auton_framework_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">\u00a0<strong>Full Codes here<\/strong><\/a><strong> <\/strong>and<strong> <a href=\"https:\/\/arxiv.org\/abs\/2602.23720\" target=\"_blank\" rel=\"noreferrer noopener\">Related Paper<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">120k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">You can now join us on Telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/03\/07\/building-next-gen-agentic-ai-a-complete-framework-for-cognitive-blueprint-driven-runtime-agents-with-memory-tools-and-validation\/\">Building Next-Gen Agentic AI: A Complete Framework for Cognitive Blueprint Driven Runtime Agents with Memory Tools and Validation<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build a c&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-525","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/525","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=525"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/525\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=525"}],"wp:term":[{"taxonomy":
"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=525"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=525"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}