{"id":655,"date":"2026-04-02T13:34:13","date_gmt":"2026-04-02T05:34:13","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=655"},"modified":"2026-04-02T13:34:13","modified_gmt":"2026-04-02T05:34:13","slug":"how-to-build-production-ready-agentscope-workflows-with-react-agents-custom-tools-multi-agent-debate-structured-output-and-concurrent-pipelines","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=655","title":{"rendered":"How to Build Production Ready AgentScope Workflows with ReAct Agents, Custom Tools, Multi-Agent Debate, Structured Output and Concurrent Pipelines"},"content":{"rendered":"<p>In this tutorial, we build a complete <a href=\"https:\/\/github.com\/agentscope-ai\/agentscope\"><strong>AgentScope<\/strong><\/a> workflow from the ground up and run everything in Colab. We start by wiring OpenAI through AgentScope and validating a basic model call to understand how messages and responses are handled. From there, we define custom tool functions, register them in a toolkit, and inspect the auto-generated schemas to see how tools are exposed to the agent. We then move into a ReAct-based agent that dynamically decides when to call tools, followed by a multi-agent debate setup using MsgHub to simulate structured interaction between agents. 
Finally, we enforce structured outputs with Pydantic and execute a concurrent multi-agent pipeline in which multiple specialists analyze a problem in parallel, and a synthesiser combines their insights.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">import subprocess, sys\n\n\nsubprocess.check_call([\n   sys.executable, \"-m\", \"pip\", \"install\", \"-q\",\n   \"agentscope\", \"openai\", \"pydantic\", \"nest_asyncio\",\n])\n\n\nprint(\"\u2705 All packages installed.\\n\")\n\n\nimport nest_asyncio\nnest_asyncio.apply()\n\n\nimport asyncio\nimport json\nimport getpass\nimport math\nimport datetime\nfrom typing import Any\n\n\nfrom pydantic import BaseModel, Field\n\n\nfrom agentscope.agent import ReActAgent\nfrom agentscope.formatter import OpenAIChatFormatter, OpenAIMultiAgentFormatter\nfrom agentscope.memory import InMemoryMemory\nfrom agentscope.message import Msg, TextBlock, ToolUseBlock\nfrom agentscope.model import OpenAIChatModel\nfrom agentscope.pipeline import MsgHub, sequential_pipeline\nfrom agentscope.tool import Toolkit, ToolResponse\n\n\nOPENAI_API_KEY = getpass.getpass(\"\ud83d\udd11 Enter your OpenAI API key: \")\nMODEL_NAME = \"gpt-4o-mini\"\n\n\nprint(f\"\\n\u2705 API key captured. Using model: {MODEL_NAME}\\n\")\nprint(\"=\" * 72)\n\n\ndef make_model(stream: bool = False) -&gt; OpenAIChatModel:\n   return OpenAIChatModel(\n       model_name=MODEL_NAME,\n       api_key=OPENAI_API_KEY,\n       stream=stream,\n       generate_kwargs={\"temperature\": 0.7, \"max_tokens\": 1024},\n   )\n\n\nprint(\"\\n\" + \"\u2550\" * 72)\nprint(\"  PART 1: Basic Model Call\")\nprint(\"\u2550\" * 72)\n\n\nasync def part1_basic_model_call():\n   model = make_model()\n   response = await model(\n       messages=[{\"role\": \"user\", \"content\": \"What is AgentScope in one sentence?\"}],\n   )\n   text = response.content[0][\"text\"]\n   print(f\"\\n\ud83e\udd16 Model says: {text}\")\n   print(f\"\ud83d\udcca Tokens used: {response.usage}\")\n\n\nasyncio.run(part1_basic_model_call())<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We install all required dependencies and patch the event loop to ensure asynchronous code runs smoothly in Colab. We securely capture the OpenAI API key and configure the model through a helper function for reuse. 
We then run a basic model call to verify the setup and inspect the response and token usage.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\n\" + \"\u2550\" * 72)\nprint(\"  PART 2: Custom Tool Functions &amp; Toolkit\")\nprint(\"\u2550\" * 72)\n\n\nasync def calculate_expression(expression: str) -&gt; ToolResponse:\n   allowed = {\n       \"abs\": abs, \"round\": round, \"min\": min, \"max\": max,\n       \"sum\": sum, \"pow\": pow, \"int\": int, \"float\": float,\n       \"sqrt\": math.sqrt, \"pi\": math.pi, \"e\": math.e,\n       \"log\": math.log, \"sin\": math.sin, \"cos\": math.cos,\n       \"tan\": math.tan, \"factorial\": math.factorial,\n   }\n   try:\n       result = eval(expression, {\"__builtins__\": {}}, allowed)\n       return ToolResponse(content=[TextBlock(type=\"text\", text=str(result))])\n   except Exception as exc:\n       return ToolResponse(content=[TextBlock(type=\"text\", text=f\"Error: {exc}\")])\n\n\nasync def get_current_datetime(timezone_offset: int = 0) -&gt; ToolResponse:\n   now = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=timezone_offset)))\n   return ToolResponse(\n       content=[TextBlock(type=\"text\", text=now.strftime(\"%Y-%m-%d %H:%M:%S %Z\"))],\n   )\n\n\ntoolkit = Toolkit()\ntoolkit.register_tool_function(calculate_expression)\ntoolkit.register_tool_function(get_current_datetime)\n\n\nschemas = toolkit.get_json_schemas()\nprint(\"\\n\ud83d\udccb Auto-generated tool schemas:\")\nprint(json.dumps(schemas, indent=2))\n\n\nasync def part2_test_tool():\n   result_gen = await toolkit.call_tool_function(\n       ToolUseBlock(\n           type=\"tool_use\", id=\"test-1\",\n           name=\"calculate_expression\",\n           input={\"expression\": \"factorial(10)\"},\n       ),\n   )\n   async for resp in result_gen:\n       print(f\"\\n\ud83d\udd27 Tool result for factorial(10): {resp.content[0]['text']}\")\n\n\nasyncio.run(part2_test_tool())<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We define custom tool functions for mathematical evaluation and datetime retrieval using controlled execution. We register these tools into a toolkit and inspect their auto-generated JSON schemas to understand how AgentScope exposes them. 
We then simulate a direct tool call to validate that the tool execution pipeline works correctly.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\n\" + \"\u2550\" * 72)\nprint(\"  PART 3: ReAct Agent with Tools\")\nprint(\"\u2550\" * 72)\n\n\nasync def part3_react_agent():\n   agent = ReActAgent(\n       name=\"MathBot\",\n       sys_prompt=(\n           \"You are MathBot, a helpful assistant that solves math problems. \"\n           \"Use the calculate_expression tool for any computation. \"\n           \"Use get_current_datetime when asked about the time.\"\n       ),\n       model=make_model(),\n       memory=InMemoryMemory(),\n       formatter=OpenAIChatFormatter(),\n       toolkit=toolkit,\n       max_iters=5,\n   )\n\n\n   queries = [\n       \"What's the current time in UTC+5?\",\n   ]\n   for q in queries:\n       print(f\"\\n\ud83d\udc64 User: {q}\")\n       msg = Msg(\"user\", q, \"user\")\n       response = await agent(msg)\n       print(f\"\ud83e\udd16 MathBot: {response.get_text_content()}\")\n       agent.memory.clear()\n\n\nasyncio.run(part3_react_agent())\n\n\nprint(\"\\n\" + \"\u2550\" * 72)\nprint(\"  PART 4: Multi-Agent Debate (MsgHub)\")\nprint(\"\u2550\" * 72)\n\n\nDEBATE_TOPIC = (\n   \"Should artificial general intelligence (AGI) research be open-sourced, \"\n   \"or should it remain behind closed doors at major labs?\"\n)\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We construct a ReAct agent that reasons about when to use tools and dynamically executes them. We pass user queries and observe how the agent combines reasoning with tool usage to produce answers. 
We also reset memory between queries to ensure independent and clean interactions.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">async def part4_debate():\n   proponent = ReActAgent(\n       name=\"Proponent\",\n       sys_prompt=(\n           f\"You are the Proponent in a debate. You argue IN FAVOR of open-sourcing AGI research. \"\n           f\"Topic: {DEBATE_TOPIC}\\n\"\n           \"Keep each response to 2-3 concise paragraphs. Address the other side's points directly.\"\n       ),\n       model=make_model(),\n       memory=InMemoryMemory(),\n       formatter=OpenAIMultiAgentFormatter(),\n   )\n\n\n   opponent = ReActAgent(\n       name=\"Opponent\",\n       sys_prompt=(\n           f\"You are the Opponent in a debate. You argue AGAINST open-sourcing AGI research. \"\n           f\"Topic: {DEBATE_TOPIC}\\n\"\n           \"Keep each response to 2-3 concise paragraphs. Address the other side's points directly.\"\n       ),\n       model=make_model(),\n       memory=InMemoryMemory(),\n       formatter=OpenAIMultiAgentFormatter(),\n   )\n\n\n   num_rounds = 2\n   for rnd in range(1, num_rounds + 1):\n       print(f\"\\n{'\u2500' * 60}\")\n       print(f\"  ROUND {rnd}\")\n       print(f\"{'\u2500' * 60}\")\n\n\n       async with MsgHub(\n           participants=[proponent, opponent],\n           announcement=Msg(\"Moderator\", f\"Round {rnd} \u2014 begin. Topic: {DEBATE_TOPIC}\", \"assistant\"),\n       ):\n           pro_msg = await proponent(\n               Msg(\"Moderator\", \"Proponent, please present your argument.\", \"user\"),\n           )\n           print(f\"\\n\u2705 Proponent:\\n{pro_msg.get_text_content()}\")\n\n\n           opp_msg = await opponent(\n               Msg(\"Moderator\", \"Opponent, please respond and present your counter-argument.\", \"user\"),\n           )\n           print(f\"\\n\u274c Opponent:\\n{opp_msg.get_text_content()}\")\n\n\n   print(f\"\\n{'\u2500' * 60}\")\n   print(\"  DEBATE COMPLETE\")\n   print(f\"{'\u2500' * 60}\")\n\n\nasyncio.run(part4_debate())\n\n\nprint(\"\\n\" + \"\u2550\" * 72)\nprint(\"  PART 5: Structured Output with Pydantic\")\nprint(\"\u2550\" * 72)\n\n\nclass MovieReview(BaseModel):\n   title: str = Field(description=\"The movie title.\")\n   year: int = Field(description=\"The release year.\")\n   genre: str = Field(description=\"Primary genre of the movie.\")\n   rating: float = Field(description=\"Rating from 0.0 to 10.0.\")\n   pros: list[str] = Field(description=\"List of 2-3 strengths of the movie.\")\n   cons: list[str] = Field(description=\"List of 1-2 weaknesses of the movie.\")\n   verdict: str = Field(description=\"A one-sentence final verdict.\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We create two agents with opposing roles and connect them using MsgHub for a structured multi-agent debate. We simulate multiple rounds in which each agent responds to the others while maintaining context through shared communication. 
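The essential idea behind a message hub is simple: every message produced inside the hub is broadcast to every participant's memory, so each agent sees the full conversation. The classes below (`ToyAgent`, `ToyHub` are our own illustrative names, not AgentScope's implementation) sketch that mechanic without any model calls:

```python
# A minimal sketch (not AgentScope's implementation) of the hub idea:
# every message an agent produces is appended to every participant's memory.
class ToyAgent:
    def __init__(self, name):
        self.name, self.memory = name, []
    def observe(self, msg):
        self.memory.append(msg)
    def reply(self, prompt):
        # A real agent would call the model here with its accumulated memory.
        return f"{self.name} (saw {len(self.memory)} msgs): reply to '{prompt}'"

class ToyHub:
    def __init__(self, participants, announcement=None):
        self.participants = participants
        if announcement:
            self.broadcast(announcement)
    def broadcast(self, msg):
        for p in self.participants:
            p.observe(msg)

hub = ToyHub([ToyAgent("Proponent"), ToyAgent("Opponent")],
             announcement="Round 1 begins")
for agent in hub.participants:
    hub.broadcast(agent.reply("present your argument"))

print(hub.participants[1].memory[-1])
```

By the time the Opponent replies it has observed both the announcement and the Proponent's turn, which is exactly why the debate above can reference the other side's points without any explicit message passing in the loop body.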
We observe how agent coordination enables coherent argument exchange across turns.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">async def part5_structured_output():\n   agent = ReActAgent(\n       name=\"Critic\",\n       sys_prompt=\"You are a professional movie critic. When asked to review a movie, provide a thorough analysis.\",\n       model=make_model(),\n       memory=InMemoryMemory(),\n       formatter=OpenAIChatFormatter(),\n   )\n\n\n   msg = Msg(\"user\", \"Review the movie 'Inception' (2010) by Christopher Nolan.\", \"user\")\n   response = await agent(msg, structured_model=MovieReview)\n\n\n   print(\"\\n\ud83c\udfac Structured Movie Review:\")\n   print(f\"    Title   : {response.metadata.get('title', 'N\/A')}\")\n   print(f\"    Year    : {response.metadata.get('year', 'N\/A')}\")\n   print(f\"    Genre   : {response.metadata.get('genre', 'N\/A')}\")\n   print(f\"    Rating  : {response.metadata.get('rating', 'N\/A')}\/10\")\n   pros = response.metadata.get('pros', [])\n   cons = response.metadata.get('cons', [])\n   if pros:\n       print(f\"    Pros    : {', '.join(str(p) for p in pros)}\")\n   if cons:\n       print(f\"    Cons    : {', '.join(str(c) for c in cons)}\")\n   print(f\"    Verdict : {response.metadata.get('verdict', 'N\/A')}\")\n\n\n   print(f\"\\n\ud83d\udcdd Full text response:\\n{response.get_text_content()}\")\n\n\nasyncio.run(part5_structured_output())\n\n\nprint(\"\\n\" + \"\u2550\" * 72)\nprint(\"  PART 6: Concurrent Multi-Agent Pipeline\")\nprint(\"\u2550\" * 72)\n\n\nasync def part6_concurrent_agents():\n   specialists = {\n       \"Economist\": \"You are an economist. Analyze the given topic from an economic perspective in 2-3 sentences.\",\n       \"Ethicist\": \"You are an ethicist. Analyze the given topic from an ethical perspective in 2-3 sentences.\",\n       \"Technologist\": \"You are a technologist. Analyze the given topic from a technology perspective in 2-3 sentences.\",\n   }\n\n\n   agents = []\n   for name, prompt in specialists.items():\n       agents.append(\n           ReActAgent(\n               name=name,\n               sys_prompt=prompt,\n               model=make_model(),\n               memory=InMemoryMemory(),\n               formatter=OpenAIChatFormatter(),\n           )\n       )\n\n\n   topic_msg = Msg(\n       \"user\",\n       \"Analyze the impact of large language models on the global workforce.\",\n       \"user\",\n   )\n\n\n   print(\"\\n\u23f3 Running 3 specialist agents concurrently...\")\n   results = await asyncio.gather(*(agent(topic_msg) for agent in agents))\n\n\n   for agent, result in zip(agents, results):\n       print(f\"\\n\ud83e\udde0 {agent.name}:\\n{result.get_text_content()}\")\n\n\n   synthesiser = ReActAgent(\n       name=\"Synthesiser\",\n       sys_prompt=(\n           \"You are a synthesiser. You receive analyses from an Economist, \"\n           \"an Ethicist, and a Technologist. Combine their perspectives into \"\n           \"a single coherent summary of 3-4 sentences.\"\n       ),\n       model=make_model(),\n       memory=InMemoryMemory(),\n       formatter=OpenAIMultiAgentFormatter(),\n   )\n\n\n   combined_text = \"\\n\\n\".join(\n       f\"[{agent.name}]: {r.get_text_content()}\" for agent, r in zip(agents, results)\n   )\n   synthesis = await synthesiser(\n       Msg(\"user\", f\"Here are the specialist analyses:\\n\\n{combined_text}\\n\\nPlease synthesise.\", \"user\"),\n   )\n   print(f\"\\n\ud83d\udd17 Synthesised Summary:\\n{synthesis.get_text_content()}\")\n\n\nasyncio.run(part6_concurrent_agents())\n\n\nprint(\"\\n\" + \"\u2550\" * 72)\nprint(\"  \ud83c\udf89 TUTORIAL COMPLETE!\")\nprint(\"  You have covered:\")\nprint(\"    1. Basic model calls with OpenAIChatModel\")\nprint(\"    2. Custom tool functions &amp; auto-generated JSON schemas\")\nprint(\"    3. ReAct Agent with tool use\")\nprint(\"    4. Multi-agent debate with MsgHub\")\nprint(\"    5. Structured output with Pydantic models\")\nprint(\"    6. Concurrent multi-agent pipelines\")\nprint(\"\u2550\" * 72)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We enforce structured outputs using a Pydantic schema to extract consistent fields from model responses. We then build a concurrent multi-agent pipeline where multiple specialist agents analyze a topic in parallel. Finally, we aggregate their outputs using a synthesiser agent to produce a unified and coherent summary.<\/p>\n<p>In conclusion, we have implemented a full-stack agentic system that goes beyond simple prompting and into orchestrated reasoning, tool usage, and collaboration. 
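The fan-out/fan-in shape of Part 6 is worth isolating: `asyncio.gather` runs all specialists concurrently, and synthesis happens only after every result is in. The sketch below (model-free; `specialist` and `pipeline` are our own stand-in names) shows that skeleton on its own:

```python
import asyncio

# Model-free sketch of the fan-out/fan-in pattern from Part 6: run the
# "specialists" concurrently with asyncio.gather, then synthesize.
async def specialist(name, topic):
    await asyncio.sleep(0.01)            # stands in for a model call
    return f"[{name}]: view on {topic}"

async def pipeline(topic):
    names = ["Economist", "Ethicist", "Technologist"]
    # gather preserves input order, so analyses line up with names.
    analyses = await asyncio.gather(*(specialist(n, topic) for n in names))
    return " | ".join(analyses)          # stands in for the synthesiser agent

print(asyncio.run(pipeline("LLMs and the workforce")))
```

Because `gather` preserves argument order, zipping agents with results (as the tutorial does) is safe even though the underlying calls finish in arbitrary order.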
We now understand how AgentScope manages memory, formatting, and tool execution under the hood, and how ReAct agents bridge reasoning with action. We also saw how multi-agent systems can be coordinated both sequentially and concurrently, and how structured outputs ensure reliability in downstream applications. With these building blocks, we are in a position to design more advanced agent architectures, extend tool ecosystems, and deploy scalable, production-ready AI systems.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out\u00a0the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Agentic%20Workflows\/agentscope_production_agent_workflows_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">Full Notebook here<\/a>.<\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/04\/01\/how-to-build-production-ready-agentscope-workflows-with-react-agents-custom-tools-multi-agent-debate-structured-output-and-concurrent-pipelines\/\">How to Build Production Ready AgentScope Workflows with ReAct Agents, Custom Tools, Multi-Agent Debate, Structured Output and Concurrent Pipelines<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build a c&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-655","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/655","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=655"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/655\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&paren
t=655"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=655"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=655"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}