{"id":726,"date":"2026-04-15T08:39:12","date_gmt":"2026-04-15T00:39:12","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=726"},"modified":"2026-04-15T08:39:12","modified_gmt":"2026-04-15T00:39:12","slug":"a-coding-implementation-of-crawl4ai-for-web-crawling-markdown-generation-javascript-execution-and-llm-based-structured-extraction","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=726","title":{"rendered":"A Coding Implementation of Crawl4AI for Web Crawling, Markdown Generation, JavaScript Execution, and LLM-Based Structured Extraction"},"content":{"rendered":"<p>In this tutorial, we build a complete and practical <a href=\"https:\/\/github.com\/unclecode\/crawl4ai\"><strong>Crawl4AI<\/strong><\/a> workflow and explore how modern web crawling goes far beyond simply downloading page HTML. We set up the full environment, configure browser behavior, and work through essential capabilities such as basic crawling, markdown generation, structured CSS-based extraction, JavaScript execution, session handling, screenshots, link analysis, concurrent crawling, and deep multi-page exploration. We also examine how Crawl4AI can be extended with LLM-based extraction to transform raw web content into structured, usable data. Throughout the tutorial, we focus on hands-on implementation to understand the major features of Crawl4AI v0.8.x and learn how to apply them to realistic data extraction and web automation tasks.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">import subprocess\nimport sys\n\n\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4e6.png\" alt=\"\ud83d\udce6\" class=\"wp-smiley\" \/> Installing system dependencies...\")\nsubprocess.run(['apt-get', 'update', '-qq'], capture_output=True)\nsubprocess.run(['apt-get', 'install', '-y', '-qq',\n               'libnss3', 'libnspr4', 'libatk1.0-0', 'libatk-bridge2.0-0',\n               'libcups2', 'libdrm2', 'libxkbcommon0', 'libxcomposite1',\n               'libxdamage1', 'libxfixes3', 'libxrandr2', 'libgbm1',\n               'libasound2', 'libpango-1.0-0', 'libcairo2'], capture_output=True)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> System dependencies installed!\")\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4e6.png\" alt=\"\ud83d\udce6\" class=\"wp-smiley\" \/> Installing Python packages...\")\nsubprocess.run([sys.executable, '-m', 'pip', 'install', '-U', 'crawl4ai', 'nest_asyncio', 'pydantic', '-q'])\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Python packages installed!\")\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4e6.png\" alt=\"\ud83d\udce6\" class=\"wp-smiley\" \/> Installing Playwright browsers (this may take a minute)...\")\nsubprocess.run([sys.executable, '-m', 'playwright', 'install', 'chromium'], capture_output=True)\nsubprocess.run([sys.executable, '-m', 'playwright', 'install-deps', 'chromium'], capture_output=True)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Playwright browsers installed!\")\n\n\nimport nest_asyncio\nnest_asyncio.apply()\n\n\nimport asyncio\nimport json\nfrom typing import List, Optional\nfrom pydantic import BaseModel, Field\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> INSTALLATION COMPLETE! Ready to crawl!\")\nprint(\"=\"*60)\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4d6.png\" alt=\"\ud83d\udcd6\" class=\"wp-smiley\" \/> PART 2: BASIC CRAWLING\")\nprint(\"=\"*60)\n\n\nfrom crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode\n\n\nasync def basic_crawl():\n   \"\"\"The simplest possible crawl - fetch a webpage and get markdown.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f50d.png\" alt=\"\ud83d\udd0d\" class=\"wp-smiley\" \/> Running basic crawl on example.com...\")\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(url=\"https:\/\/example.com\")\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Crawl successful: {result.success}\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c4.png\" alt=\"\ud83d\udcc4\" class=\"wp-smiley\" \/> Title: {result.metadata.get('title', 'N\/A')}\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Markdown length: {len(result.markdown.raw_markdown)} characters\")\n       print(f\"n--- First 500 chars of markdown ---\")\n       print(result.markdown.raw_markdown[:500])\n      \n   return result\n\n\nresult = asyncio.run(basic_crawl())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2699.png\" alt=\"\u2699\" class=\"wp-smiley\" \/> PART 3: CONFIGURED CRAWLING\")\nprint(\"=\"*60)\n\n\nasync def configured_crawl():\n   \"\"\"Crawling with custom browser and crawler configurations.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f527.png\" alt=\"\ud83d\udd27\" class=\"wp-smiley\" \/> Running configured crawl with custom settings...\")\n  \n   browser_config = BrowserConfig(\n       headless=True,\n       verbose=True,\n       viewport_width=1920,\n       viewport_height=1080,\n       user_agent=\"Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36\"\n   )\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       word_count_threshold=10,\n       page_timeout=30000,\n       wait_until=\"networkidle\",\n       verbose=True\n   )\n  \n   async with AsyncWebCrawler(config=browser_config) as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/httpbin.org\/html\",\n           config=run_config\n       )\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Success: {result.success}\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Status code: {result.status_code}\")\n       print(f\"n--- Content Preview ---\")\n       print(result.markdown.raw_markdown[:400])\n      \n   return result\n\n\nresult = asyncio.run(configured_crawl())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> PART 4: MARKDOWN GENERATION\")\nprint(\"=\"*60)\n\n\nfrom crawl4ai.content_filter_strategy import PruningContentFilter, BM25ContentFilter\nfrom crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator\n\n\nasync def markdown_generation_demo():\n   \"\"\"Demonstrates raw vs fit markdown with content filtering.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f3af.png\" alt=\"\ud83c\udfaf\" class=\"wp-smiley\" \/> Demonstrating markdown generation strategies...\")\n  \n   browser_config = BrowserConfig(headless=True, verbose=False)\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       markdown_generator=DefaultMarkdownGenerator(\n           content_filter=PruningContentFilter(\n               threshold=0.4,\n               threshold_type=\"fixed\",\n               min_word_threshold=20\n           )\n       )\n   )\n  \n   async with AsyncWebCrawler(config=browser_config) as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/en.wikipedia.org\/wiki\/Web_scraping\",\n           config=run_config\n       )\n      \n       raw_len = len(result.markdown.raw_markdown)\n       fit_len = len(result.markdown.fit_markdown) if result.markdown.fit_markdown else 0\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Markdown Comparison:\")\n       print(f\"   Raw Markdown:  {raw_len:,} characters\")\n       print(f\"   Fit Markdown:  {fit_len:,} characters\")\n       print(f\"   Reduction:     {((raw_len - fit_len) \/ raw_len * 100):.1f}%\")\n      \n       print(f\"n--- Fit Markdown Preview (first 600 chars) ---\")\n       print(result.markdown.fit_markdown[:600] if result.markdown.fit_markdown else \"N\/A\")\n      \n   return result\n\n\nresult = asyncio.run(markdown_generation_demo())<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We prepare the complete Google Colab environment required to run Crawl4AI smoothly, including system packages, Python dependencies, and the Playwright browser setup. We initialize the async-friendly notebook workflow with nest_asyncio, import the core libraries, and confirm that the environment is ready for crawling tasks. We then begin with foundational examples: a simple crawl, followed by a more configurable crawl that demonstrates how browser settings and runtime options affect page retrieval.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">print(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f50e.png\" alt=\"\ud83d\udd0e\" class=\"wp-smiley\" \/> PART 5: BM25 QUERY-BASED FILTERING\")\nprint(\"=\"*60)\n\n\nasync def bm25_filtering_demo():\n   \"\"\"Using BM25 algorithm to extract content relevant to a specific query.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f3af.png\" alt=\"\ud83c\udfaf\" class=\"wp-smiley\" \/> Extracting content relevant to a specific query...\")\n  \n   query = \"legal aspects privacy data protection\"\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       markdown_generator=DefaultMarkdownGenerator(\n           content_filter=BM25ContentFilter(\n               user_query=query,\n               bm25_threshold=1.2\n           )\n       )\n   )\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/en.wikipedia.org\/wiki\/Web_scraping\",\n           config=run_config\n       )\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Query: '{query}'\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Fit markdown length: {len(result.markdown.fit_markdown or '')} chars\")\n       print(f\"n--- Query-Relevant Content Preview ---\")\n       print(result.markdown.fit_markdown[:800] if result.markdown.fit_markdown else \"No relevant content found\")\n      \n   return result\n\n\nresult = asyncio.run(bm25_filtering_demo())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f3d7.png\" alt=\"\ud83c\udfd7\" class=\"wp-smiley\" \/> PART 6: CSS-BASED EXTRACTION (No LLM)\")\nprint(\"=\"*60)\n\n\nfrom crawl4ai import JsonCssExtractionStrategy\n\n\nasync def css_extraction_demo():\n   \"\"\"Extract structured data using CSS selectors - fast and reliable.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f527.png\" alt=\"\ud83d\udd27\" class=\"wp-smiley\" \/> Extracting data using CSS selectors...\")\n  \n   schema = {\n       \"name\": \"Wikipedia Headings\",\n       \"baseSelector\": \"div.mw-parser-output h2\",\n       \"fields\": [\n           {\n               \"name\": \"heading_text\",\n               \"selector\": \"span.mw-headline\",\n               \"type\": \"text\"\n           },\n           {\n               \"name\": \"heading_id\",\n               \"selector\": \"span.mw-headline\",\n               \"type\": \"attribute\",\n               \"attribute\": \"id\"\n           }\n       ]\n   }\n  \n   extraction_strategy = JsonCssExtractionStrategy(schema, verbose=False)\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       extraction_strategy=extraction_strategy\n   )\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/en.wikipedia.org\/wiki\/Python_(programming_language)\",\n           config=run_config\n       )\n      \n       if result.extracted_content:\n           data = json.loads(result.extracted_content)\n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Extracted {len(data)} section headings\")\n           print(f\"n--- Extracted Headings ---\")\n           for item in data[:10]:\n               heading = item.get('heading_text', 'N\/A')\n               heading_id = item.get('heading_id', 'N\/A')\n               if heading:\n                   print(f\"  \u2022 {heading} (#{heading_id})\")\n       else:\n           print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" \/> No data extracted\")\n          \n   return result\n\n\nresult = asyncio.run(css_extraction_demo())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f6d2.png\" alt=\"\ud83d\uded2\" class=\"wp-smiley\" \/> PART 7: ADVANCED CSS EXTRACTION - Hacker News\")\nprint(\"=\"*60)\n\n\nasync def advanced_css_extraction():\n   \"\"\"Extract stories from Hacker News with nested selectors.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f6cd.png\" alt=\"\ud83d\udecd\" class=\"wp-smiley\" \/> Extracting stories from Hacker News...\")\n  \n   schema = {\n       \"name\": \"Hacker News Stories\",\n       \"baseSelector\": \"tr.athing\",\n       \"fields\": [\n           {\n               \"name\": \"rank\",\n               \"selector\": \"span.rank\",\n               \"type\": \"text\"\n           },\n           {\n               \"name\": \"title\",\n               \"selector\": \"span.titleline &gt; a\",\n               \"type\": \"text\"\n           },\n           {\n               \"name\": \"url\",\n               \"selector\": \"span.titleline &gt; a\",\n               \"type\": \"attribute\",\n               \"attribute\": \"href\"\n           },\n           {\n               \"name\": \"site\",\n               \"selector\": \"span.sitestr\",\n               \"type\": \"text\"\n           }\n       ]\n   }\n  \n   extraction_strategy = JsonCssExtractionStrategy(schema)\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       extraction_strategy=extraction_strategy\n   )\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/news.ycombinator.com\",\n           config=run_config\n       )\n      \n       if result.extracted_content:\n           stories = json.loads(result.extracted_content)\n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Extracted {len(stories)} stories from Hacker News\")\n           print(f\"n--- Top 10 Stories ---\")\n           for story in stories[:10]:\n               rank = story.get('rank', '?').strip('.') if story.get('rank') else '?'\n               title = story.get('title', 'N\/A')[:55]\n               site = story.get('site', 'N\/A')\n               print(f\"  #{rank:&lt;3} {title:&lt;55} ({site})\")\n              \n   return result\n\n\nresult = asyncio.run(advanced_css_extraction())<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We focus on improving the quality and relevance of extracted content by exploring markdown generation and query-aware filtering. We compare raw markdown with fit markdown to see how pruning reduces noise, and we use BM25-based filtering to keep only the parts of a page that align with a specific query. We then move into CSS-based extraction, where we define a structured schema and use selectors to pull clean heading data from a Wikipedia page without relying on an LLM.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">print(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a1.png\" alt=\"\u26a1\" class=\"wp-smiley\" \/> PART 8: JAVASCRIPT EXECUTION\")\nprint(\"=\"*60)\n\n\nasync def javascript_execution_demo():\n   \"\"\"Execute JavaScript on pages before extraction.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f3ad.png\" alt=\"\ud83c\udfad\" class=\"wp-smiley\" \/> Executing JavaScript before crawling...\")\n  \n   js_code = \"\"\"\n   \/\/ Scroll down to trigger lazy loading\n   window.scrollTo(0, document.body.scrollHeight);\n  \n   \/\/ Wait for content to load\n   await new Promise(r =&gt; setTimeout(r, 1000));\n  \n   \/\/ Scroll back up\n   window.scrollTo(0, 0);\n  \n   \/\/ Add a marker to verify JS ran\n   document.body.setAttribute('data-crawl4ai', 'executed');\n   \"\"\"\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       js_code=[js_code],\n       wait_for=\"css:body\",\n       delay_before_return_html=1.0\n   )\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/httpbin.org\/html\",\n           config=run_config\n       )\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Page crawled with JS execution\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Status: {result.status_code}\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Content length: {len(result.markdown.raw_markdown)} chars\")\n      \n   return result\n\n\nresult = asyncio.run(javascript_execution_demo())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f916.png\" alt=\"\ud83e\udd16\" class=\"wp-smiley\" \/> PART 9: LLM-BASED EXTRACTION\")\nprint(\"=\"*60)\n\n\nfrom crawl4ai import LLMExtractionStrategy, LLMConfig\n\n\nclass Article(BaseModel):\n   title: str = Field(description=\"The article title\")\n   summary: str = Field(description=\"A brief summary\")\n   topics: List[str] = Field(description=\"Main topics covered\")\n\n\nasync def llm_extraction_demo():\n   \"\"\"Use LLM to intelligently extract and structure data.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f916.png\" alt=\"\ud83e\udd16\" class=\"wp-smiley\" \/> LLM-based extraction setup...\")\n  \n   import os\n   api_key = os.getenv('OPENAI_API_KEY')\n  \n   if not api_key:\n       print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/> No OPENAI_API_KEY found. Showing setup code only.\")\n       print(\"nTo enable LLM extraction, run:\")\n       print(\"   import os\")\n       print(\"   os.environ['OPENAI_API_KEY'] = 'sk-your-key-here'\")\n       print(\"n--- Example Code ---\")\n       example_code = '''\nfrom crawl4ai import LLMExtractionStrategy, LLMConfig\nfrom pydantic import BaseModel, Field\n\n\nclass Product(BaseModel):\n   name: str = Field(description=\"Product name\")\n   price: str = Field(description=\"Product price\")\n\n\nllm_strategy = LLMExtractionStrategy(\n   llm_config=LLMConfig(\n       provider=\"openai\/gpt-4o-mini\",  # or \"ollama\/llama3\"\n       api_token=os.getenv('OPENAI_API_KEY')\n   ),\n   schema=Product.model_json_schema(),\n   extraction_type=\"schema\",\n   instruction=\"Extract all products with prices.\"\n)\n\n\nrun_config = CrawlerRunConfig(\n   extraction_strategy=llm_strategy,\n   cache_mode=CacheMode.BYPASS\n)\n\n\nasync with AsyncWebCrawler() as crawler:\n   result = await crawler.arun(url=\"https:\/\/example.com\", config=run_config)\n   products = json.loads(result.extracted_content)\n'''\n       print(example_code)\n       return None\n  \n   llm_strategy = LLMExtractionStrategy(\n       llm_config=LLMConfig(\n           provider=\"openai\/gpt-4o-mini\",\n           api_token=api_key\n       ),\n       schema=Article.model_json_schema(),\n       extraction_type=\"schema\",\n       instruction=\"Extract article titles and summaries.\"\n   )\n  \n   run_config = CrawlerRunConfig(\n       extraction_strategy=llm_strategy,\n       cache_mode=CacheMode.BYPASS\n   )\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/news.ycombinator.com\",\n           config=run_config\n       )\n      \n       if result.extracted_content:\n           data = json.loads(result.extracted_content)\n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> LLM extracted:\")\n           print(json.dumps(data, indent=2)[:1000])\n          \n   return result\n\n\nresult = asyncio.run(llm_extraction_demo())<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We continue structured extraction by applying nested CSS selectors to collect ranked story information from Hacker News in a clean JSON-like format. We then demonstrate JavaScript execution before extraction, which helps us interact with dynamic pages by scrolling, waiting for content, and modifying the DOM before processing. Finally, we introduce LLM-based extraction, define a schema with Pydantic, and show how Crawl4AI can convert unstructured web content into structured outputs using a language model.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">print(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f578.png\" alt=\"\ud83d\udd78\" class=\"wp-smiley\" \/> PART 10: DEEP CRAWLING\")\nprint(\"=\"*60)\n\n\nfrom crawl4ai.deep_crawling import BFSDeepCrawlStrategy\nfrom crawl4ai.deep_crawling.filters import FilterChain, URLPatternFilter, DomainFilter\n\n\nasync def deep_crawl_demo():\n   \"\"\"Crawl multiple pages starting from a seed URL using BFS.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f577.png\" alt=\"\ud83d\udd77\" class=\"wp-smiley\" \/> Starting deep crawl with BFS strategy...\")\n  \n   filter_chain = FilterChain([\n       DomainFilter(\n           allowed_domains=[\"docs.crawl4ai.com\"],\n           blocked_domains=[]\n       ),\n       URLPatternFilter(\n           patterns=[\"*quickstart*\", \"*installation*\", \"*examples*\"]\n       )\n   ])\n  \n   deep_crawl_strategy = BFSDeepCrawlStrategy(\n       max_depth=2,\n       max_pages=5,\n       filter_chain=filter_chain,\n       include_external=False\n   )\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       deep_crawl_strategy=deep_crawl_strategy\n   )\n  \n   pages_crawled = []\n  \n   async with AsyncWebCrawler() as crawler:\n       results = await crawler.arun(\n           url=\"https:\/\/docs.crawl4ai.com\/\",\n           config=run_config\n       )\n      \n       if isinstance(results, list):\n           for result in results:\n               pages_crawled.append(result.url)\n               print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Crawled: {result.url}\")\n               print(f\"     <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c4.png\" alt=\"\ud83d\udcc4\" class=\"wp-smiley\" \/> Content: {len(result.markdown.raw_markdown)} chars\")\n       else:\n           pages_crawled.append(results.url)\n           print(f\"  <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Crawled: {results.url}\")\n           print(f\"     <img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4c4.png\" alt=\"\ud83d\udcc4\" class=\"wp-smiley\" \/> Content: {len(results.markdown.raw_markdown)} chars\")\n  \n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Total pages crawled: {len(pages_crawled)}\")\n   return pages_crawled\n\n\npages = asyncio.run(deep_crawl_demo())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f680.png\" alt=\"\ud83d\ude80\" class=\"wp-smiley\" \/> PART 11: MULTI-URL CONCURRENT CRAWLING\")\nprint(\"=\"*60)\n\n\nasync def multi_url_crawl():\n   \"\"\"Crawl multiple URLs concurrently for maximum efficiency.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a1.png\" alt=\"\u26a1\" class=\"wp-smiley\" \/> Crawling multiple URLs concurrently...\")\n  \n   urls = [\n       \"https:\/\/httpbin.org\/html\",\n       \"https:\/\/httpbin.org\/robots.txt\",\n       \"https:\/\/httpbin.org\/json\",\n       \"https:\/\/example.com\",\n       \"https:\/\/httpbin.org\/headers\"\n   ]\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       verbose=False\n   )\n  \n   async with AsyncWebCrawler() as crawler:\n       results = await crawler.arun_many(\n           urls=urls,\n           config=run_config\n       )\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Results Summary:\")\n       print(f\"{'URL':&lt;40} {'Status':&lt;10} {'Content':&lt;15}\")\n       print(\"-\" * 65)\n      \n       for result in results:\n           url_short = result.url[:38] + \"..\" if len(result.url) &gt; 40 else result.url\n           status = \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/>\" if result.success else \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" \/>\"\n           content_len = f\"{len(result.markdown.raw_markdown):,} chars\" if result.success else \"N\/A\"\n           print(f\"{url_short:&lt;40} {status:&lt;10} {content_len:&lt;15}\")\n          \n   return results\n\n\nresults = asyncio.run(multi_url_crawl())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4f8.png\" alt=\"\ud83d\udcf8\" class=\"wp-smiley\" \/> PART 12: SCREENSHOTS &amp; MEDIA\")\nprint(\"=\"*60)\n\n\nasync def screenshot_demo():\n   \"\"\"Capture screenshots and extract media from pages.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4f7.png\" alt=\"\ud83d\udcf7\" class=\"wp-smiley\" \/> Capturing screenshot and extracting media...\")\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       screenshot=True,\n       pdf=False,\n   )\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/en.wikipedia.org\/wiki\/Web_scraping\",\n           config=run_config\n       )\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Crawl complete!\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4f8.png\" alt=\"\ud83d\udcf8\" class=\"wp-smiley\" \/> Screenshot captured: {result.screenshot is not None}\")\n      \n       if result.screenshot:\n           print(f\"   Screenshot size: {len(result.screenshot)} bytes (base64)\")\n          \n       if result.media and 'images' in result.media:\n           images = result.media['images']\n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f5bc.png\" alt=\"\ud83d\uddbc\" class=\"wp-smiley\" \/> Found {len(images)} images:\")\n           for img in images[:5]:\n               print(f\"   \u2022 {img.get('src', 'N\/A')[:60]}...\")\n              \n   return result\n\n\nresult = asyncio.run(screenshot_demo())<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We expand from single-page crawling to deeper and broader workflows by introducing BFS-based deep crawling across multiple related pages. We configure a filter chain to control which domains and URL patterns are allowed, making the crawl targeted and efficient rather than uncontrolled. We also demonstrate concurrent multi-URL crawling and screenshot\/media extraction, showing how Crawl4AI can scale across several pages while also collecting visual and embedded content.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">print(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f517.png\" alt=\"\ud83d\udd17\" class=\"wp-smiley\" \/> PART 13: LINK EXTRACTION\")\nprint(\"=\"*60)\n\n\nasync def link_extraction_demo():\n   \"\"\"Extract and analyze all links from a page.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f517.png\" alt=\"\ud83d\udd17\" class=\"wp-smiley\" \/> Extracting and analyzing links...\")\n  \n   run_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/docs.crawl4ai.com\/\",\n           config=run_config\n       )\n      \n       internal_links = result.links.get('internal', [])\n       external_links = result.links.get('external', [])\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Link Analysis:\")\n       print(f\"   Internal links: {len(internal_links)}\")\n       print(f\"   External links: {len(external_links)}\")\n      \n       print(f\"n--- Sample Internal Links (first 5) ---\")\n       for link in internal_links[:5]:\n           print(f\"   \u2022 {link.get('href', 'N\/A')[:60]}\")\n          \n       print(f\"n--- Sample External Links (first 5) ---\")\n       for link in external_links[:5]:\n           print(f\"   \u2022 {link.get('href', 'N\/A')[:60]}\")\n          \n   return result\n\n\nresult = asyncio.run(link_extraction_demo())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f3af.png\" alt=\"\ud83c\udfaf\" class=\"wp-smiley\" \/> PART 14: CONTENT SELECTION\")\nprint(\"=\"*60)\n\n\nasync def content_selection_demo():\n   \"\"\"Target specific content using CSS selectors.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f3af.png\" alt=\"\ud83c\udfaf\" class=\"wp-smiley\" \/> Targeting specific content with CSS selectors...\")\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       css_selector=\"article, main, .content, #content, #mw-content-text\",\n       excluded_tags=[\"nav\", \"footer\", \"header\", \"aside\"],\n       remove_overlay_elements=True\n   )\n  \n   async with AsyncWebCrawler() as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/en.wikipedia.org\/wiki\/Web_scraping\",\n           config=run_config\n       )\n      \n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Content extracted with targeting\")\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Markdown length: {len(result.markdown.raw_markdown):,} chars\")\n       print(f\"n--- Preview (first 500 chars) ---\")\n       print(result.markdown.raw_markdown[:500])\n      \n   return result\n\n\nresult = asyncio.run(content_selection_demo())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f510.png\" alt=\"\ud83d\udd10\" class=\"wp-smiley\" \/> PART 15: SESSION MANAGEMENT\")\nprint(\"=\"*60)\n\n\nasync def session_management_demo():\n   \"\"\"Maintain browser sessions across multiple requests.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f510.png\" alt=\"\ud83d\udd10\" class=\"wp-smiley\" \/> Demonstrating session management...\")\n  \n   browser_config = BrowserConfig(headless=True)\n  \n   async with AsyncWebCrawler(config=browser_config) as crawler:\n       session_id = \"my_session\"\n      \n       result1 = await crawler.arun(\n           url=\"https:\/\/httpbin.org\/cookies\/set?session=demo123\",\n           config=CrawlerRunConfig(\n               cache_mode=CacheMode.BYPASS,\n               session_id=session_id\n           )\n       )\n       print(f\"  Step 1: Set cookies - Success: {result1.success}\")\n      \n       result2 = await crawler.arun(\n           url=\"https:\/\/httpbin.org\/cookies\",\n           config=CrawlerRunConfig(\n               cache_mode=CacheMode.BYPASS,\n               session_id=session_id\n           )\n       )\n       print(f\"  Step 2: Read cookies - Success: {result2.success}\")\n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Cookie Response:\")\n       print(result2.markdown.raw_markdown[:300])\n      \n   return result2\n\n\nresult = asyncio.run(session_management_demo())<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We analyze the structure and navigability of a site by extracting both internal and external links from a page and summarizing them for inspection. We then demonstrate content targeting with CSS selectors and excluded tags, focusing extraction on the most meaningful sections of a page while avoiding navigation or layout noise. After that, we show session management, where we preserve browser state across requests and verify that cookies persist between sequential crawls.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">print(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f31f.png\" alt=\"\ud83c\udf1f\" class=\"wp-smiley\" \/> PART 16: COMPLETE REAL-WORLD EXAMPLE\")\nprint(\"=\"*60)\n\n\nasync def complete_example():\n   \"\"\"Complete example combining CSS extraction with content filtering.\"\"\"\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f31f.png\" alt=\"\ud83c\udf1f\" class=\"wp-smiley\" \/> Running complete example: Hacker News scraper with filtering\")\n  \n   schema = {\n       \"name\": \"HN Stories\",\n       \"baseSelector\": \"tr.athing\",\n       \"fields\": [\n           {\"name\": \"rank\", \"selector\": \"span.rank\", \"type\": \"text\"},\n           {\"name\": \"title\", \"selector\": \"span.titleline &gt; a\", \"type\": \"text\"},\n           {\"name\": \"url\", \"selector\": \"span.titleline &gt; a\", \"type\": \"attribute\", \"attribute\": \"href\"},\n           {\"name\": \"site\", \"selector\": \"span.sitestr\", \"type\": \"text\"}\n       ]\n   }\n  \n   browser_config = BrowserConfig(\n       headless=True,\n       viewport_width=1920,\n       viewport_height=1080\n   )\n  \n   run_config = CrawlerRunConfig(\n       cache_mode=CacheMode.BYPASS,\n       extraction_strategy=JsonCssExtractionStrategy(schema),\n       markdown_generator=DefaultMarkdownGenerator(\n           content_filter=PruningContentFilter(threshold=0.4)\n       )\n   )\n  \n   async with AsyncWebCrawler(config=browser_config) as crawler:\n       result = await crawler.arun(\n           url=\"https:\/\/news.ycombinator.com\",\n           config=run_config\n       )\n      \n       if result.extracted_content:\n           stories = json.loads(result.extracted_content)\n          \n           print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Successfully extracted {len(stories)} stories!\")\n           print(f\"n{'='*70}\")\n           print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4f0.png\" alt=\"\ud83d\udcf0\" class=\"wp-smiley\" \/> TOP HACKER NEWS STORIES\")\n           print(\"=\"*70)\n          \n           for story in stories[:15]:\n               rank = story.get('rank', '?').strip('.') if story.get('rank') else '?'\n               title = story.get('title', 'No title')[:50]\n               site = story.get('site', 'N\/A')\n               url = story.get('url', '')[:30]\n               print(f\"  #{rank:&lt;3} {title:&lt;50} ({site})\")\n              \n           print(\"=\"*70)\n          \n           return stories\n  \n   return []\n\n\nstories = asyncio.run(complete_example())\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4be.png\" alt=\"\ud83d\udcbe\" class=\"wp-smiley\" \/> BONUS: SAVING RESULTS\")\nprint(\"=\"*60)\n\n\nif stories:\n   with open('hacker_news_stories.json', 'w') as f:\n       json.dump(stories, f, indent=2)\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Saved {len(stories)} stories to 'hacker_news_stories.json'\")\n   print(\"nTo download in Colab:\")\n   print(\"   from google.colab import files\")\n   print(\"   files.download('hacker_news_stories.json')\")\n\n\nprint(\"n\" + \"=\"*60)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4da.png\" alt=\"\ud83d\udcda\" class=\"wp-smiley\" \/> TUTORIAL COMPLETE!\")\nprint(\"=\"*60)\n\n\nprint(\"\"\"\n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> What you learned:\n\n\n1. Basic crawling with AsyncWebCrawler\n2. Browser &amp; crawler configuration\n3. Markdown generation (raw vs fit)\n4. BM25 query-based content filtering\n5. CSS-based structured data extraction\n6. Advanced CSS extraction (Hacker News)\n7. JavaScript execution for dynamic content\n8. LLM-based extraction setup\n9. Deep crawling with BFS strategy\n10. Multi-URL concurrent crawling\n11. Screenshots &amp; media extraction\n12. Link extraction &amp; analysis\n13. Content targeting with CSS selectors\n14. Session management\n15. Complete real-world scraping example\n\n\n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4d6.png\" alt=\"\ud83d\udcd6\" class=\"wp-smiley\" \/> RESOURCES:\n \u2022 Docs: https:\/\/docs.crawl4ai.com\/\n \u2022 GitHub: https:\/\/github.com\/unclecode\/crawl4ai\n \u2022 Discord: https:\/\/discord.gg\/jP8KfhDhyN\n\n\n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f680.png\" alt=\"\ud83d\ude80\" class=\"wp-smiley\" \/> Happy Crawling with Crawl4AI!\n\"\"\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We combine several ideas from the tutorial into a complete real-world example that extracts and filters Hacker News stories using structured CSS extraction and Markdown pruning. We format the results into a readable output, demonstrating how Crawl4AI can support a practical scraping workflow from collection to presentation. Finally, we save the extracted stories to a JSON file and close the tutorial with a clear summary of the major concepts and capabilities we have implemented throughout the notebook.<\/p>\n<p>In conclusion, we developed a strong end-to-end understanding of how to use Crawl4AI for both simple and advanced crawling tasks. We moved from straightforward page extraction to more refined workflows involving content filtering, targeted element selection, structured data extraction, dynamic-page interaction, multi-URL concurrency, and deep crawling across linked pages. We also saw how the framework supports richer automation through media capture, persistent sessions, and optional LLM-powered schema extraction. As a result, we finished with a practical foundation for building reliable, efficient, and flexible scraping and crawling pipelines that are ready to support real-world research, monitoring, and intelligent data processing workflows.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out\u00a0the<strong><a href=\"https:\/\/arxiv.org\/pdf\/2604.06425\" target=\"_blank\" rel=\"noreferrer noopener\">\u00a0<\/a><a href=\"https:\/\/github.com\/Marktechpost\/AI-Agents-Projects-Tutorials\/blob\/main\/LLM%20Projects\/crawl4ai_web_crawling_extraction_deepcrawl_llm.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">Full Implementation Codes here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">130k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>Need to partner with us for promoting your GitHub Repo OR Hugging Face Page OR Product Release OR Webinar etc.?\u00a0<strong><a href=\"https:\/\/forms.gle\/MTNLpmJtsFA3VRVd9\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Connect with us<\/mark><\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/04\/14\/a-coding-implementation-of-crawl4ai-for-web-crawling-markdown-generation-javascript-execution-and-llm-based-structured-extraction\/\">A Coding Implementation of Crawl4AI for Web Crawling, Markdown Generation, JavaScript Execution, and LLM-Based Structured Extraction<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build a c&hellip;<\/p>\n","protected":false},"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-726","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/726","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=726"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/726\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=726"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=726"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=726"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}