{"id":445,"date":"2026-02-21T06:05:04","date_gmt":"2026-02-20T22:05:04","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=445"},"modified":"2026-02-21T06:05:04","modified_gmt":"2026-02-20T22:05:04","slug":"how-to-design-a-swiss-army-knife-research-agent-with-tool-using-ai-web-search-pdf-analysis-vision-and-automated-reporting","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=445","title":{"rendered":"How to Design a Swiss Army Knife Research Agent with Tool-Using AI, Web Search, PDF Analysis, Vision, and Automated Reporting"},"content":{"rendered":"<p>In this tutorial, we build a \u201cSwiss Army Knife\u201d research agent that goes far beyond simple chat interactions and actively solves multi-step research problems end-to-end. We combine a tool-using agent architecture with live web search, local PDF ingestion, vision-based chart analysis, and automated report generation to demonstrate how modern agents can reason, verify, and produce structured outputs. 
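At the heart of such an architecture is a simple loop: the model emits a tool call, and the runtime dispatches it to a registered function. The sketch below illustrates only that dispatch idea; the registry, `register`, and `run_step` names are hypothetical, and the stubbed `web_search` stands in for a real backend:

```python
# Minimal sketch of tool dispatch: the agent emits a tool call, the runtime executes it.
from typing import Any, Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def register(fn: Callable[..., str]) -> Callable[..., str]:
    """Add a function to the tool registry under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@register
def web_search(query: str) -> str:
    # Stub standing in for a real search backend.
    return f"results for {query!r}"

def run_step(action: Dict[str, Any]) -> str:
    """Dispatch one agent action of the form {'tool': name, 'args': {...}}."""
    return TOOLS[action["tool"]](**action["args"])
```

The real agent frameworks used below wrap exactly this pattern with planning, retries, and observation handling.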
By wiring together small agents, OpenAI models, and practical data-extraction utilities, we show how a single agent can explore sources, cross-check claims, and synthesize findings into professional-grade Markdown and DOCX reports.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">%pip -q install -U smolagents openai trafilatura duckduckgo-search pypdf pymupdf python-docx pillow tqdm\n\n\nimport os, re, json, getpass\nfrom typing import List, Dict, Any\nimport requests\nimport trafilatura\nfrom duckduckgo_search import DDGS\nfrom pypdf import PdfReader\nimport fitz\nfrom docx import Document\nfrom docx.shared import Pt\nfrom datetime import datetime\n\n\nfrom openai import OpenAI\nfrom smolagents import CodeAgent, OpenAIModel, tool\n\n\nif not os.environ.get(\"OPENAI_API_KEY\"):\n   os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Paste your OpenAI API key (hidden): \").strip()\nprint(\"OPENAI_API_KEY set:\", \"YES\" if os.environ.get(\"OPENAI_API_KEY\") else \"NO\")\n\n\nif not os.environ.get(\"SERPER_API_KEY\"):\n   serper = getpass.getpass(\"Optional: Paste SERPER_API_KEY for Google results (press Enter to skip): \").strip()\n   if serper:\n       os.environ[\"SERPER_API_KEY\"] = serper\nprint(\"SERPER_API_KEY set:\", \"YES\" if os.environ.get(\"SERPER_API_KEY\") else \"NO\")\n\n\nclient = OpenAI()\n\n\ndef _now():\n   return datetime.utcnow().strftime(\"%Y-%m-%d 
%H:%M:%SZ\")\n\n\ndef _safe_filename(s: str) -&gt; str:\n   s = re.sub(r\"[^a-zA-Z0-9._-]+\", \"_\", s).strip(\"_\")\n   return s[:180] if s else \"file\"<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the full execution environment and securely load all required credentials without hardcoding secrets. We import all dependencies required for web search, document parsing, vision analysis, and agent orchestration. We also initialize shared utilities to standardize timestamps and file naming throughout the workflow.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">try:\n   from google.colab import files\n   os.makedirs(\"\/content\/pdfs\", exist_ok=True)\n   uploaded = files.upload()\n   for name, data in uploaded.items():\n       if name.lower().endswith(\".pdf\"):\n           with open(f\"\/content\/pdfs\/{name}\", \"wb\") as f:\n               f.write(data)\n   print(\"PDFs in \/content\/pdfs:\", os.listdir(\"\/content\/pdfs\"))\nexcept Exception as e:\n   print(\"Upload skipped:\", str(e))\n\n\ndef web_search(query: str, k: int = 6) -&gt; List[Dict[str, str]]:\n   serper_key = os.environ.get(\"SERPER_API_KEY\", \"\").strip()\n   if serper_key:\n       resp = requests.post(\n           \"https:\/\/google.serper.dev\/search\",\n           headers={\"X-API-KEY\": serper_key, \"Content-Type\": \"application\/json\"},\n           json={\"q\": query, \"num\": k},\n           
timeout=30,\n       )\n       resp.raise_for_status()\n       data = resp.json()\n       out = []\n       for item in (data.get(\"organic\") or [])[:k]:\n           out.append({\n               \"title\": item.get(\"title\",\"\"),\n               \"url\": item.get(\"link\",\"\"),\n               \"snippet\": item.get(\"snippet\",\"\"),\n           })\n       return out\n\n\n   out = []\n   with DDGS() as ddgs:\n       for r in ddgs.text(query, max_results=k):\n           out.append({\n               \"title\": r.get(\"title\",\"\"),\n               \"url\": r.get(\"href\",\"\"),\n               \"snippet\": r.get(\"body\",\"\"),\n           })\n   return out\n\n\ndef fetch_url_text(url: str) -&gt; Dict[str, Any]:\n   try:\n       downloaded = trafilatura.fetch_url(url, timeout=30)\n       if not downloaded:\n           return {\"url\": url, \"ok\": False, \"error\": \"fetch_failed\", \"text\": \"\"}\n       text = trafilatura.extract(downloaded, include_comments=False, include_tables=True)\n       if not text:\n           return {\"url\": url, \"ok\": False, \"error\": \"extract_failed\", \"text\": \"\"}\n       title_guess = next((ln.strip() for ln in text.splitlines() if ln.strip()), \"\")[:120]\n       return {\"url\": url, \"ok\": True, \"title_guess\": title_guess, \"text\": text}\n   except Exception as e:\n       return {\"url\": url, \"ok\": False, \"error\": str(e), \"text\": \"\"}<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We enable local PDF ingestion and establish a flexible web search pipeline that works with or without a paid search API. We show how we gracefully handle optional inputs while maintaining a reliable research flow. 
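The search routine above follows a paid-first, free-fallback pattern: use Serper when a key is present, otherwise fall back to DuckDuckGo. Stripped of the network details, the pattern looks like this minimal sketch (the `search_with_fallback` helper and the stub backends are illustrative, not part of the tutorial code):

```python
from typing import Callable, Dict, List

Result = List[Dict[str, str]]

def search_with_fallback(primary: Callable[[str, int], Result],
                         fallback: Callable[[str, int], Result],
                         query: str, k: int = 6) -> Result:
    """Try the primary backend first; fall back when it errors or returns nothing."""
    try:
        results = primary(query, k)
        if results:
            return results
    except Exception:
        pass  # e.g. missing API key, network error, rate limit
    return fallback(query, k)
```

Keeping the fallback at the call-graph level like this means the rest of the pipeline never needs to know which backend actually answered.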
We also implement robust URL fetching and text extraction to prepare clean source material for downstream reasoning.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">def read_pdf_text(pdf_path: str, max_pages: int = 30) -&gt; Dict[str, Any]:\n   reader = PdfReader(pdf_path)\n   pages = min(len(reader.pages), max_pages)\n   chunks = []\n   for i in range(pages):\n       try:\n           chunks.append(reader.pages[i].extract_text() or \"\")\n       except Exception:\n           chunks.append(\"\")\n   return {\"pdf_path\": pdf_path, \"pages_read\": pages, \"text\": \"\\n\\n\".join(chunks).strip()}\n\n\ndef extract_pdf_images(pdf_path: str, out_dir: str = \"\/content\/extracted_images\", max_pages: int = 10) -&gt; List[str]:\n   os.makedirs(out_dir, exist_ok=True)\n   doc = fitz.open(pdf_path)\n   saved = []\n   pages = min(len(doc), max_pages)\n   base = _safe_filename(os.path.basename(pdf_path).rsplit(\".\", 1)[0])\n\n\n   for p in range(pages):\n       page = doc[p]\n       img_list = page.get_images(full=True)\n       for img_i, img in enumerate(img_list):\n           xref = img[0]\n           pix = fitz.Pixmap(doc, xref)\n           if pix.n - pix.alpha &gt;= 4:\n               pix = fitz.Pixmap(fitz.csRGB, pix)\n           img_path = os.path.join(out_dir, f\"{base}_p{p+1}_img{img_i+1}.png\")\n           pix.save(img_path)\n           saved.append(img_path)\n\n\n   doc.close()\n   return saved\n\n\ndef vision_analyze_image(image_path: str, question: str, model: str = \"gpt-4.1-mini\") -&gt; Dict[str, Any]:\n   import base64\n   with open(image_path, \"rb\") as f:\n       b64 = base64.b64encode(f.read()).decode(\"ascii\")\n\n\n   resp = client.responses.create(\n       model=model,\n       input=[{\n           \"role\": \"user\",\n           \"content\": [\n               {\"type\": \"input_text\", \"text\": f\"Answer concisely and accurately.\\n\\nQuestion: {question}\"},\n               {\"type\": \"input_image\", \"image_url\": f\"data:image\/png;base64,{b64}\"},\n           ],\n       }],\n   )\n   return {\"image_path\": image_path, \"answer\": resp.output_text}<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We focus on deep document understanding by extracting structured text and visual artifacts from PDFs. We integrate a vision-capable model to interpret charts and figures instead of treating them as opaque images. We ensure that numerical trends and visual insights can be converted into explicit, text-based evidence.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">def write_markdown(path: str, content: str) -&gt; str:\n   os.makedirs(os.path.dirname(path), exist_ok=True)\n   with open(path, \"w\", encoding=\"utf-8\") as f:\n       f.write(content)\n   return path\n\n\ndef write_docx_from_markdown(docx_path: str, md: str, title: str = \"Research Report\") -&gt; str:\n   os.makedirs(os.path.dirname(docx_path), exist_ok=True)\n   doc = 
Document()\n   t = doc.add_paragraph()\n   run = t.add_run(title)\n   run.bold = True\n   run.font.size = Pt(18)\n   meta = doc.add_paragraph()\n   meta.add_run(f\"Generated: {_now()}\").italic = True\n   doc.add_paragraph(\"\")\n   for line in md.splitlines():\n       line = line.rstrip()\n       if not line:\n           doc.add_paragraph(\"\")\n           continue\n       if line.startswith(\"# \"):\n           doc.add_heading(line[2:].strip(), level=1)\n       elif line.startswith(\"## \"):\n           doc.add_heading(line[3:].strip(), level=2)\n       elif line.startswith(\"### \"):\n           doc.add_heading(line[4:].strip(), level=3)\n       elif re.match(r\"^\\s*[-*]\\s+\", line):\n           p = doc.add_paragraph(style=\"List Bullet\")\n           p.add_run(re.sub(r\"^\\s*[-*]\\s+\", \"\", line).strip())\n       else:\n           doc.add_paragraph(line)\n   doc.save(docx_path)\n   return docx_path\n\n\n@tool\ndef t_web_search(query: str, k: int = 6) -&gt; str:\n   \"\"\"Search the web and return a JSON list of result dicts.\n\n   Args:\n       query: The search query string.\n       k: Maximum number of results to return.\n   \"\"\"\n   return json.dumps(web_search(query, k), ensure_ascii=False)\n\n\n@tool\ndef t_fetch_url_text(url: str) -&gt; str:\n   \"\"\"Download a web page and return its extracted text as JSON.\n\n   Args:\n       url: The URL to fetch and extract.\n   \"\"\"\n   return json.dumps(fetch_url_text(url), ensure_ascii=False)\n\n\n@tool\ndef t_list_pdfs() -&gt; str:\n   \"\"\"Return a JSON list of PDF paths available in \/content\/pdfs.\"\"\"\n   pdf_dir = \"\/content\/pdfs\"\n   if not os.path.isdir(pdf_dir):\n       return json.dumps([])\n   paths = [os.path.join(pdf_dir, f) for f in os.listdir(pdf_dir) if f.lower().endswith(\".pdf\")]\n   return json.dumps(sorted(paths), ensure_ascii=False)\n\n\n@tool\ndef t_read_pdf_text(pdf_path: str, max_pages: int = 30) -&gt; str:\n   \"\"\"Extract text from a PDF and return it as JSON.\n\n   Args:\n       pdf_path: Path to the PDF file.\n       max_pages: Maximum number of pages to read.\n   \"\"\"\n   return json.dumps(read_pdf_text(pdf_path, max_pages=max_pages), ensure_ascii=False)\n\n\n@tool\ndef t_extract_pdf_images(pdf_path: str, max_pages: int = 10) -&gt; str:\n   \"\"\"Extract embedded images from a PDF and return their paths as JSON.\n\n   Args:\n       pdf_path: Path to the PDF file.\n       max_pages: Maximum number of pages to scan for images.\n   \"\"\"\n   imgs = extract_pdf_images(pdf_path, max_pages=max_pages)\n   return json.dumps(imgs, ensure_ascii=False)\n\n\n@tool\ndef t_vision_analyze_image(image_path: str, question: str) -&gt; str:\n   \"\"\"Answer a question about an image using a vision model.\n\n   Args:\n       image_path: Path to the image file.\n       question: Question to answer about the image.\n   \"\"\"\n   return json.dumps(vision_analyze_image(image_path, question), ensure_ascii=False)\n\n\n@tool\ndef t_write_markdown(path: str, content: str) -&gt; str:\n   \"\"\"Write Markdown content to a file and return its path.\n\n   Args:\n       path: Destination file path.\n       content: Markdown content to write.\n   \"\"\"\n   return write_markdown(path, content)\n\n\n@tool\ndef t_write_docx_from_markdown(docx_path: str, md_path: str, title: str = \"Research Report\") -&gt; str:\n   \"\"\"Convert a Markdown file to a DOCX report and return its path.\n\n   Args:\n       docx_path: Destination DOCX path.\n       md_path: Path of the Markdown file to convert.\n       title: Title placed at the top of the document.\n   \"\"\"\n   with open(md_path, \"r\", encoding=\"utf-8\") as f:\n       md = f.read()\n   return write_docx_from_markdown(docx_path, md, title=title)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We implement the full output layer by generating Markdown reports and converting them into polished DOCX documents. We expose all core capabilities as explicit, documented tools that the agent can reason about and invoke step by step. We ensure that every transformation from raw data to final report remains deterministic and inspectable.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">model = OpenAIModel(model_id=\"gpt-5\")\n\n\nagent = CodeAgent(\n   tools=[\n       t_web_search,\n       t_fetch_url_text,\n       t_list_pdfs,\n       t_read_pdf_text,\n       t_extract_pdf_images,\n       t_vision_analyze_image,\n       t_write_markdown,\n       t_write_docx_from_markdown,\n   ],\n   model=model,\n   add_base_tools=False,\n   additional_authorized_imports=[\"json\",\"re\",\"os\",\"math\",\"datetime\",\"time\",\"textwrap\"],\n)\n\n\nSYSTEM_INSTRUCTIONS = \"\"\"\nYou are a Swiss Army Knife Research Agent.\nWork step by step with the available tools, cross-check important claims across sources,\nand cite the URL or file behind every factual statement in your report.\n\"\"\"\n\n\ndef run_research(topic: str):\n   os.makedirs(\"\/content\/report\", exist_ok=True)\n   prompt = f\"\"\"{SYSTEM_INSTRUCTIONS.strip()}\n\n\nResearch question:\n{topic}\n\n\nSteps:\n1) List available PDFs (if any) and decide which are relevant.\n2) Run a web search on the topic.\n3) Fetch and extract the text of the best sources.\n4) If PDFs exist, extract their text and images.\n5) Visually analyze figures.\n6) Write a Markdown report to \/content\/report\/report.md and convert it to \/content\/report\/report.docx.\n\"\"\"\n   return agent.run(prompt)\n\n\ntopic = \"Build a research brief on the most reliable design patterns for tool-using agents (2024-2026), focusing on evaluation, citations, and failure modes.\"\nout = run_research(topic)\nprint(out[:1500] if isinstance(out, str) else out)\n\n\ntry:\n   from google.colab import files\n   files.download(\"\/content\/report\/report.md\")\n   files.download(\"\/content\/report\/report.docx\")\nexcept Exception as e:\n   print(\"Download skipped:\", str(e))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We assemble the complete research agent and define a structured execution plan for multi-step reasoning. We guide the agent to search, analyze, synthesize, and write using a single coherent prompt. We demonstrate how the agent produces a finished research artifact that can be reviewed, shared, and reused immediately.<\/p>\n<p>In conclusion, we demonstrated how a well-designed tool-using agent can function as a reliable research assistant rather than a conversational toy. We showcased how explicit tools, disciplined prompting, and step-by-step execution allow the agent to search the web, analyze documents and visuals, and generate traceable, citation-aware reports. 
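What "traceable, citation-aware" means in practice can be shown with a small deterministic helper; `render_report` below is an illustrative sketch (not part of the tutorial code) that numbers each distinct source on first use and appends a Sources section to the Markdown:

```python
from typing import List, Tuple

def render_report(title: str, findings: List[Tuple[str, str]]) -> str:
    """Render (claim, url) pairs as Markdown with numbered inline citations."""
    lines = [f"# {title}", ""]
    sources: List[str] = []
    for claim, url in findings:
        if url not in sources:
            sources.append(url)  # first mention of a source gets the next number
        lines.append(f"- {claim} [{sources.index(url) + 1}]")
    lines += ["", "## Sources"] + [f"{i + 1}. {u}" for i, u in enumerate(sources)]
    return "\n".join(lines)
```

Because every claim carries an index into an explicit source list, a reviewer can audit the report line by line instead of trusting the model's synthesis.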
This approach offers a practical blueprint for building trustworthy research agents that emphasize evaluation, evidence, and failure awareness, capabilities increasingly essential for real-world AI systems.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/AI%20Agents%20Codes\/swiss_army_knife_research_agent_tool_using_ai_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">You can now join us on Telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/02\/20\/how-to-design-a-swiss-army-knife-research-agent-with-tool-using-ai-web-search-pdf-analysis-vision-and-automated-reporting\/\">How to Design a Swiss Army Knife Research Agent with Tool-Using AI, Web Search, PDF Analysis, Vision, and Automated Reporting<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build a \u201c&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-445","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/445","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=445"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/445\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=445"}],"wp:term":[{"taxonomy":"categ
ory","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=445"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=445"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}