{"id":745,"date":"2026-04-18T11:39:46","date_gmt":"2026-04-18T03:39:46","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=745"},"modified":"2026-04-18T11:39:46","modified_gmt":"2026-04-18T03:39:46","slug":"a-end-to-end-coding-guide-to-running-openai-gpt-oss-open-weight-models-with-advanced-inference-workflows","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=745","title":{"rendered":"A End-to-End Coding Guide to Running OpenAI GPT-OSS Open-Weight Models with Advanced Inference Workflows"},"content":{"rendered":"<p>In this tutorial, we explore how to run OpenAI\u2019s open-weight <a href=\"http:\/\/github.com\/openai\/gpt-oss\"><strong>GPT-OSS<\/strong><\/a> models in Google Colab with a strong focus on their technical behavior, deployment requirements, and practical inference workflows. We begin by setting up the exact dependencies needed for Transformers-based execution, verifying GPU availability, and loading openai\/gpt-oss-20b with the correct configuration using native MXFP4 quantization, torch.bfloat16 activations. As we move through the tutorial, we work directly with core capabilities such as structured generation, streaming, multi-turn dialogue handling, tool execution patterns, and batch inference, while keeping in mind how open-weight models differ from closed-hosted APIs in terms of transparency, controllability, memory constraints, and local execution trade-offs. Also, we treat GPT-OSS not just as a chatbot, but as a technically inspectable open-weight LLM stack that we can configure, prompt, and extend inside a reproducible workflow.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-php\">print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f527.png\" alt=\"\ud83d\udd27\" class=\"wp-smiley\" \/> Step 1: Installing required packages...\")\nprint(\"=\" * 70)\n\n\n!pip install -q --upgrade pip\n!pip install -q transformers&gt;=4.51.0 accelerate sentencepiece protobuf\n!pip install -q huggingface_hub gradio ipywidgets\n!pip install -q openai-harmony\n\n\nimport transformers\nprint(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Transformers version: {transformers.__version__}\")\n\n\nimport torch\nprint(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f5a5.png\" alt=\"\ud83d\udda5\" class=\"wp-smiley\" \/> System Information:\")\nprint(f\"   PyTorch version: {torch.__version__}\")\nprint(f\"   CUDA available: {torch.cuda.is_available()}\")\n\n\nif torch.cuda.is_available():\n   gpu_name = torch.cuda.get_device_name(0)\n   gpu_memory = torch.cuda.get_device_properties(0).total_memory \/ 1e9\n   print(f\"   GPU: {gpu_name}\")\n   print(f\"   GPU Memory: {gpu_memory:.2f} GB\")\n  \n   if gpu_memory &lt; 15:\n       print(f\"n<img decoding=\"async\" 
src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/> WARNING: gpt-oss-20b requires ~16GB VRAM.\")\n       print(f\"   Your GPU has {gpu_memory:.1f}GB. Consider using Colab Pro for T4\/A100.\")\n   else:\n       print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> GPU memory sufficient for gpt-oss-20b\")\nelse:\n   print(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/274c.png\" alt=\"\u274c\" class=\"wp-smiley\" \/> No GPU detected!\")\n   print(\"   Go to: Runtime \u2192 Change runtime type \u2192 Select 'T4 GPU'\")\n   raise RuntimeError(\"GPU required for this tutorial\")\n\n\nprint(\"n\" + \"=\" * 70)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4da.png\" alt=\"\ud83d\udcda\" class=\"wp-smiley\" \/> PART 2: Loading GPT-OSS Model (Correct Method)\")\nprint(\"=\" * 70)\n\n\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport torch\n\n\nMODEL_ID = \"openai\/gpt-oss-20b\"\n\n\nprint(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f504.png\" alt=\"\ud83d\udd04\" class=\"wp-smiley\" \/> Loading model: {MODEL_ID}\")\nprint(\"   This may take several minutes on first run...\")\nprint(\"   (Model size: ~40GB download, uses native MXFP4 quantization)\")\n\n\ntokenizer = AutoTokenizer.from_pretrained(\n   MODEL_ID,\n   trust_remote_code=True\n)\n\n\nmodel = AutoModelForCausalLM.from_pretrained(\n   MODEL_ID,\n   torch_dtype=torch.bfloat16,\n   device_map=\"auto\",\n   trust_remote_code=True,\n)\n\n\npipe = pipeline(\n   \"text-generation\",\n   model=model,\n   tokenizer=tokenizer,\n)\n\n\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Model loaded successfully!\")\nprint(f\"   Model dtype: {model.dtype}\")\nprint(f\"   Device: {model.device}\")\n\n\nif torch.cuda.is_available():\n   allocated = torch.cuda.memory_allocated() \/ 1e9\n   reserved = torch.cuda.memory_reserved() \/ 1e9\n   print(f\"   GPU Memory Allocated: {allocated:.2f} GB\")\n   print(f\"   GPU Memory Reserved: {reserved:.2f} GB\")\n\n\nprint(\"n\" + \"=\" * 70)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ac.png\" alt=\"\ud83d\udcac\" class=\"wp-smiley\" \/> PART 3: Basic Inference Examples\")\nprint(\"=\" * 70)\n\n\ndef generate_response(messages, max_new_tokens=256, temperature=0.8, top_p=1.0):\n   \"\"\"\n   Generate a response using gpt-oss with recommended parameters.\n  \n   OpenAI recommends: temperature=1.0, top_p=1.0 for gpt-oss\n   \"\"\"\n   output = pipe(\n       messages,\n       max_new_tokens=max_new_tokens,\n       do_sample=True,\n       temperature=temperature,\n       top_p=top_p,\n       pad_token_id=tokenizer.eos_token_id,\n   )\n   return output[0][\"generated_text\"][-1][\"content\"]\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Example 1: Simple Question Answering\")\nprint(\"-\" * 50)\n\n\nmessages = [\n   {\"role\": \"user\", \"content\": \"What is the Pythagorean theorem? 
\n\n\nresponse = generate_response(messages, max_new_tokens=150)\nprint(f\"User: {messages[0]['content']}\")\nprint(f\"\\nAssistant: {response}\")\n\n\nprint(\"\\n\\n\ud83d\udcdd Example 2: Code Generation\")\nprint(\"-\" * 50)\n\n\nmessages = [\n   {\"role\": \"user\", \"content\": \"Write a Python function that checks whether a number is prime.\"}\n]\n\n\nresponse = generate_response(messages, max_new_tokens=300)\nprint(f\"User: {messages[0]['content']}\")\nprint(f\"\\nAssistant: {response}\")\n\n\nprint(\"\\n\\n\ud83d\udcdd Example 3: Creative Writing\")\nprint(\"-\" * 50)\n\n\nmessages = [\n   {\"role\": \"user\", \"content\": \"Write a haiku about artificial intelligence.\"}\n]\n\n\nresponse = generate_response(messages, max_new_tokens=100, temperature=1.0)\nprint(f\"User: {messages[0]['content']}\")\nprint(f\"\\nAssistant: {response}\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the full Colab environment required to run GPT-OSS properly and verify that the system has a compatible GPU with enough VRAM. We install the core libraries, check the PyTorch and Transformers versions, and confirm that the runtime is suitable for loading an open-weight model like gpt-oss-20b. We then load the tokenizer, initialize the model with the correct technical configuration, and run a few basic inference examples to confirm that the open-weight pipeline is working end to end.<\/p>
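<p>Before moving on, it helps to look at what the chat template actually feeds the model. The short snippet below is an optional aside we add for inspection (it is not part of the original notebook, and it assumes the tokenizer and messages variables from the cells above); it renders the Harmony-format prompt string so we can see the role headers and special tokens that gpt-oss expects.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\"># Optional aside: inspect the raw Harmony-format prompt the pipeline builds.\n# Assumes `tokenizer` and `messages` from the cells above are in scope.\nprompt_text = tokenizer.apply_chat_template(\n   messages,\n   add_generation_prompt=True,\n   tokenize=False,   # return the rendered prompt string instead of token ids\n)\nprint(prompt_text[:500])   # peek at role headers and special tokens\nprint(f\"Prompt length in tokens: {len(tokenizer(prompt_text)['input_ids'])}\")<\/code><\/pre>\n<\/div>\n<\/div>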
<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\n\" + \"=\" * 70)\nprint(\"\ud83e\udde0 PART 4: Configurable Reasoning Effort\")\nprint(\"=\" * 70)\n\n\nprint(\"\"\"\nGPT-OSS supports different reasoning effort levels:\n \u2022 LOW    - Quick, concise answers (fewer tokens, faster)\n \u2022 MEDIUM - Balanced reasoning and response\n \u2022 HIGH   - Deep thinking with full chain-of-thought\n\n\nThe reasoning effort is controlled through system prompts and generation parameters.\n\"\"\")\n\n\nclass ReasoningEffortController:\n   \"\"\"\n   Controls reasoning effort levels for gpt-oss generations.\n   \"\"\"\n  \n   EFFORT_CONFIGS = {\n       \"low\": {\n           \"system_prompt\": \"You are a helpful assistant. Be concise and direct.\",\n           \"max_tokens\": 200,\n           \"temperature\": 0.7,\n           \"description\": \"Quick, concise answers\"\n       },\n       \"medium\": {\n           \"system_prompt\": \"You are a helpful assistant. Think through problems step by step and provide clear, well-reasoned answers.\",\n           \"max_tokens\": 400,\n           \"temperature\": 0.8,\n           \"description\": \"Balanced reasoning\"\n       },\n       \"high\": {\n           \"system_prompt\": \"\"\"You are a helpful assistant with advanced reasoning capabilities.\nFor complex problems:\n1. First, analyze the problem thoroughly\n2. Consider multiple approaches\n3. Show your complete chain of thought\n4. Provide a comprehensive, well-reasoned answer\n\n\nTake your time to think deeply before responding.\"\"\",\n           \"max_tokens\": 800,\n           \"temperature\": 1.0,\n           \"description\": \"Deep chain-of-thought reasoning\"\n       }\n   }\n  \n   def __init__(self, pipeline, tokenizer):\n       self.pipe = pipeline\n       self.tokenizer = tokenizer\n  \n   def generate(self, user_message: str, effort: str = \"medium\") -&gt; dict:\n       \"\"\"Generate response with specified reasoning effort.\"\"\"\n       if effort not in self.EFFORT_CONFIGS:\n           raise ValueError(f\"Effort must be one of: {list(self.EFFORT_CONFIGS.keys())}\")\n      \n       config = self.EFFORT_CONFIGS[effort]\n      \n       messages = [\n           {\"role\": \"system\", \"content\": config[\"system_prompt\"]},\n           {\"role\": \"user\", \"content\": user_message}\n       ]\n      \n       output = self.pipe(\n           messages,\n           max_new_tokens=config[\"max_tokens\"],\n           do_sample=True,\n           temperature=config[\"temperature\"],\n           top_p=1.0,\n           pad_token_id=self.tokenizer.eos_token_id,\n       )\n      \n       return {\n           \"effort\": effort,\n           \"description\": config[\"description\"],\n           \"response\": output[0][\"generated_text\"][-1][\"content\"],\n           \"max_tokens_used\": config[\"max_tokens\"]\n       }\n\n\nreasoning_controller = ReasoningEffortController(pipe, tokenizer)\n\n\ntest_question = \"If all Bloops are Razzies and all Razzies are Lazzies, are all Bloops definitely Lazzies? Explain.\"\n\n\nprint(f\"\\n\ud83e\udde9 Logic Puzzle: {test_question}\\n\")\n\n\nfor effort in [\"low\", \"medium\", \"high\"]:\n   result = reasoning_controller.generate(test_question, effort)\n   print(f\"\u2501\u2501\u2501 {effort.upper()} ({result['description']}) \u2501\u2501\u2501\")\n   print(f\"{result['response'][:500]}...\")\n   print()\n\n\nprint(\"\\n\" + \"=\" * 70)\nprint(\"\ud83d\udccb PART 5: Structured Output Generation (JSON Mode)\")\nprint(\"=\" * 70)\n\n\nimport json\nimport re\n\n\nclass StructuredOutputGenerator:\n   \"\"\"\n   Generate structured JSON outputs with schema validation.\n   \"\"\"\n  \n   def __init__(self, pipeline, tokenizer):\n       self.pipe = pipeline\n       self.tokenizer = tokenizer\n  \n   def generate_json(self, prompt: str, schema: dict, max_retries: int = 2) -&gt; dict:\n       \"\"\"\n       Generate JSON output in accordance with a specified schema.\n      \n       Args:\n           prompt: The user's request\n           schema: JSON schema description\n           max_retries: Number of retries on parse failure\n       \"\"\"\n       schema_str = json.dumps(schema, indent=2)\n      \n       system_prompt = f\"\"\"You are a helpful assistant that ONLY outputs valid JSON.\nYour response must exactly match this JSON schema:\n{schema_str}\n\n\nRULES:\n- Output ONLY the JSON object, nothing else\n- No markdown code blocks (no ```)\n- No explanations before or after\n- Ensure all required fields are present\n- Use correct data types as specified\"\"\"
\n\n\n       messages = [\n           {\"role\": \"system\", \"content\": system_prompt},\n           {\"role\": \"user\", \"content\": prompt}\n       ]\n      \n       for attempt in range(max_retries + 1):\n           output = self.pipe(\n               messages,\n               max_new_tokens=500,\n               do_sample=True,\n               temperature=0.3,\n               top_p=1.0,\n               pad_token_id=self.tokenizer.eos_token_id,\n           )\n          \n           response_text = output[0][\"generated_text\"][-1][\"content\"]\n          \n           cleaned = self._clean_json_response(response_text)\n          \n           try:\n               parsed = json.loads(cleaned)\n               return {\"success\": True, \"data\": parsed, \"attempts\": attempt + 1}\n           except json.JSONDecodeError as e:\n               if attempt == max_retries:\n                   return {\n                       \"success\": False,\n                       \"error\": str(e),\n                       \"raw_response\": response_text,\n                       \"attempts\": attempt + 1\n                   }\n               messages.append({\"role\": \"assistant\", \"content\": response_text})\n               messages.append({\"role\": \"user\", \"content\": f\"That wasn't valid JSON. Error: {e}. Please try again with ONLY valid JSON.\"})\n  \n   def _clean_json_response(self, text: str) -&gt; str:\n       \"\"\"Remove markdown code blocks and extra whitespace.\"\"\"\n       text = re.sub(r'^```(?:json)?\\s*', '', text.strip())\n       text = re.sub(r'\\s*```$', '', text)\n       return text.strip()\n\n\njson_generator = StructuredOutputGenerator(pipe, tokenizer)\n\n\nprint(\"\\n\ud83d\udcdd Example 1: Entity Extraction\")\nprint(\"-\" * 50)\n\n\nentity_schema = {\n   \"name\": \"string\",\n   \"type\": \"string (person\/company\/place)\",\n   \"description\": \"string (1-2 sentences)\",\n   \"key_facts\": [\"list of strings\"]\n}\n\n\nentity_result = json_generator.generate_json(\n   \"Extract information about: Tesla, Inc.\",\n   entity_schema\n)\n\n\nif entity_result[\"success\"]:\n   print(json.dumps(entity_result[\"data\"], indent=2))\nelse:\n   print(f\"Error: {entity_result['error']}\")\n\n\nprint(\"\\n\\n\ud83d\udcdd Example 2: Recipe Generation\")\nprint(\"-\" * 50)\n\n\nrecipe_schema = {\n   \"name\": \"string\",\n   \"prep_time_minutes\": \"integer\",\n   \"cook_time_minutes\": \"integer\",\n   \"servings\": \"integer\",\n   \"difficulty\": \"string (easy\/medium\/hard)\",\n   \"ingredients\": [{\"item\": \"string\", \"amount\": \"string\"}],\n   \"steps\": [\"string\"]\n}\n\n\nrecipe_result = json_generator.generate_json(\n   \"Create a simple recipe for chocolate chip cookies\",\n   recipe_schema\n)\n\n\nif recipe_result[\"success\"]:\n   print(json.dumps(recipe_result[\"data\"], indent=2))\nelse:\n   print(f\"Error: {recipe_result['error']}\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We build more advanced generation controls by introducing configurable reasoning effort and a structured JSON output workflow. We define different effort modes to vary how deeply the model reasons, how many tokens it uses, and how detailed its answers are during inference. We also create a JSON generation utility that guides the open-weight model toward schema-conforming outputs, cleans the returned text, and retries when the response is not valid JSON.<\/p>
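<p>The JSON utility above only checks that the output parses; it does not enforce types or required fields. As an optional extension (our addition, not part of the original notebook), we can validate the parsed result against a formal JSON Schema using the jsonschema package, which the setup cell does not install, so it would need a quick pip install first.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\"># Optional stricter check: validate parsed output against a formal JSON Schema.\n# Our addition: assumes `entity_result` from above and the jsonschema package\n# (pip install jsonschema), which the setup cell does not install.\nfrom jsonschema import validate, ValidationError\n\n\nentity_json_schema = {\n   \"type\": \"object\",\n   \"properties\": {\n       \"name\": {\"type\": \"string\"},\n       \"type\": {\"type\": \"string\"},\n       \"description\": {\"type\": \"string\"},\n       \"key_facts\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}},\n   },\n   \"required\": [\"name\", \"type\", \"description\", \"key_facts\"],\n}\n\n\nif entity_result[\"success\"]:\n   try:\n       validate(instance=entity_result[\"data\"], schema=entity_json_schema)\n       print(\"Schema validation passed\")\n   except ValidationError as e:\n       print(f\"Schema validation failed: {e.message}\")<\/code><\/pre>\n<\/div>\n<\/div>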
<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\n\" + \"=\" * 70)\nprint(\"\ud83d\udcac PART 6: Multi-turn Conversations with Memory\")\nprint(\"=\" * 70)\n\n\nclass ConversationManager:\n   \"\"\"\n   Manages multi-turn conversations with context memory.\n   Implements the Harmony format pattern used by gpt-oss.\n   \"\"\"\n  \n   def __init__(self, pipeline, tokenizer, system_message: str = None):\n       self.pipe = pipeline\n       self.tokenizer = tokenizer\n       self.history = []\n      \n       if system_message:\n           self.system_message = system_message\n       else:\n           self.system_message = \"You are a helpful, friendly AI assistant. Remember the context of our conversation.\"\n  \n   def chat(self, user_message: str, max_new_tokens: int = 300) -&gt; str:\n       \"\"\"Send a message and get a response, maintaining conversation history.\"\"\"\n      \n       messages = [{\"role\": \"system\", \"content\": self.system_message}]\n       messages.extend(self.history)\n       messages.append({\"role\": \"user\", \"content\": user_message})\n      \n       output = self.pipe(\n           messages,\n           max_new_tokens=max_new_tokens,\n           do_sample=True,\n           temperature=0.8,\n           top_p=1.0,\n           pad_token_id=self.tokenizer.eos_token_id,\n       )\n      \n       assistant_response = output[0][\"generated_text\"][-1][\"content\"]\n      \n       self.history.append({\"role\": \"user\", \"content\": user_message})\n       self.history.append({\"role\": \"assistant\", \"content\": assistant_response})\n      \n       return assistant_response\n  \n   def get_history_length(self) -&gt; int:\n       \"\"\"Get number of turns in conversation.\"\"\"\n       return len(self.history) \/\/ 2\n  \n   def clear_history(self):\n       \"\"\"Clear conversation history.\"\"\"\n       self.history = []\n       print(\"\ud83d\uddd1 Conversation history cleared.\")\n  \n   def get_context_summary(self) -&gt; str:\n       \"\"\"Get a summary of the conversation context.\"\"\"\n       if not self.history:\n           return \"No conversation history yet.\"\n      \n       summary = f\"Conversation has {self.get_history_length()} turns:\\n\"\n       for i, msg in enumerate(self.history):\n           role = \"\ud83d\udc64 User\" if msg[\"role\"] == \"user\" else \"\ud83e\udd16 Assistant\"
src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f464.png\" alt=\"\ud83d\udc64\" class=\"wp-smiley\" \/> User\" if msg[\"role\"] == \"user\" else \"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f916.png\" alt=\"\ud83e\udd16\" class=\"wp-smiley\" \/> Assistant\"\n           preview = msg[\"content\"][:50] + \"...\" if len(msg[\"content\"]) &gt; 50 else msg[\"content\"]\n           summary += f\"  {i+1}. {role}: {preview}n\"\n       return summary\n\n\nconvo = ConversationManager(pipe, tokenizer)\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f5e3.png\" alt=\"\ud83d\udde3\" class=\"wp-smiley\" \/> Multi-turn Conversation Demo:\")\nprint(\"-\" * 50)\n\n\nconversation_turns = [\n   \"Hi! My name is Alex and I'm a software engineer.\",\n   \"I'm working on a machine learning project. What framework would you recommend?\",\n   \"Good suggestion! What's my name, by the way?\",\n   \"Can you remember what field I work in?\"\n]\n\n\nfor turn in conversation_turns:\n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f464.png\" alt=\"\ud83d\udc64\" class=\"wp-smiley\" \/> User: {turn}\")\n   response = convo.chat(turn)\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f916.png\" alt=\"\ud83e\udd16\" class=\"wp-smiley\" \/> Assistant: {response}\")\n\n\nprint(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> {convo.get_context_summary()}\")\n\n\nprint(\"n\" + \"=\" * 70)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a1.png\" alt=\"\u26a1\" class=\"wp-smiley\" \/> PART 7: Streaming Token Generation\")\nprint(\"=\" * 70)\n\n\nfrom transformers import TextIteratorStreamer\nfrom threading import Thread\nimport time\n\n\ndef stream_response(prompt: str, max_tokens: int = 200):\n   \"\"\"\n   Stream tokens as they're generated for real-time output.\n   \"\"\"\n   messages = [{\"role\": \"user\", \"content\": prompt}]\n  \n   inputs = tokenizer.apply_chat_template(\n       messages,\n       add_generation_prompt=True,\n       return_tensors=\"pt\"\n   ).to(model.device)\n  \n   streamer = TextIteratorStreamer(\n       tokenizer,\n       skip_prompt=True,\n       skip_special_tokens=True\n   )\n  \n   generation_kwargs = {\n       \"input_ids\": inputs,\n       \"streamer\": streamer,\n       \"max_new_tokens\": max_tokens,\n       \"do_sample\": True,\n       \"temperature\": 0.8,\n       \"top_p\": 1.0,\n       \"pad_token_id\": tokenizer.eos_token_id,\n   }\n  \n   thread = Thread(target=model.generate, kwargs=generation_kwargs)\n   thread.start()\n  \n   print(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Streaming: \", end=\"\", flush=True)\n   full_response = \"\"\n  \n   for token in streamer:\n       print(token, end=\"\", flush=True)\n       full_response += token\n       time.sleep(0.01)\n  \n   thread.join()\n   print(\"n\")\n  \n   return full_response\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f504.png\" alt=\"\ud83d\udd04\" class=\"wp-smiley\" \/> Streaming Demo:\")\nprint(\"-\" * 50)\n\n\nstreamed = stream_response(\n   \"Count from 1 to 10, with a brief comment about each number.\",\n   
\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We move from single prompts to stateful interactions by creating a conversation manager that stores multi-turn chat history and reuses that context in future responses. We demonstrate how we maintain memory across turns, summarize prior context, and make the interaction feel like a persistent assistant rather than a one-off generation call. We also implement streaming generation so we can watch tokens arrive in real time, which helps us understand the model\u2019s live decoding behavior more clearly.<\/p>
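<p>One practical caveat: the conversation manager appends history without bound, so a long session will eventually overflow the model\u2019s context window. Below is a minimal sketch of token-budgeted trimming (our addition; it reuses the tokenizer and the convo object from above, and the budget value and helper names are illustrative, not part of the original notebook).<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\"># Sketch: keep conversation history inside a token budget before each call.\n# Our addition: reuses `tokenizer` and `convo` from above; MAX_CONTEXT_TOKENS\n# and the helper names are illustrative, not part of the original notebook.\nMAX_CONTEXT_TOKENS = 4096\n\n\ndef count_tokens(messages) -&gt; int:\n   \"\"\"Count the tokens the chat template would produce for these messages.\"\"\"\n   ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)\n   return len(ids)\n\n\ndef trim_history(system_message: str, history: list) -&gt; list:\n   \"\"\"Drop the oldest user\/assistant pairs until the prompt fits the budget.\"\"\"\n   trimmed = list(history)\n   while trimmed:\n       messages = [{\"role\": \"system\", \"content\": system_message}] + trimmed\n       if count_tokens(messages) &lt;= MAX_CONTEXT_TOKENS:\n           break\n       trimmed = trimmed[2:]   # remove the oldest (user, assistant) pair\n   return trimmed\n\n\nconvo.history = trim_history(convo.system_message, convo.history)<\/code><\/pre>\n<\/div>\n<\/div>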
<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\n\" + \"=\" * 70)\nprint(\"\ud83d\udd27 PART 8: Function Calling \/ Tool Use\")\nprint(\"=\" * 70)\n\n\nimport math\nfrom datetime import datetime\n\n\nclass ToolExecutor:\n   \"\"\"\n   Manages tool definitions and execution for gpt-oss.\n   \"\"\"\n  \n   def __init__(self):\n       self.tools = {}\n       self._register_default_tools()\n  \n   def _register_default_tools(self):\n       \"\"\"Register built-in tools.\"\"\"\n      \n       @self.register(\"calculator\", \"Perform mathematical calculations\")\n       def calculator(expression: str) -&gt; str:\n           \"\"\"Evaluate a mathematical expression.\"\"\"\n           try:\n               allowed_names = {\n                   k: v for k, v in math.__dict__.items()\n                   if not k.startswith(\"_\")\n               }\n               allowed_names.update({\"abs\": abs, \"round\": round})\n               result = eval(expression, {\"__builtins__\": {}}, allowed_names)\n               return f\"Result: {result}\"\n           except Exception as e:\n               return f\"Error: {str(e)}\"\n      \n       @self.register(\"get_time\", \"Get current date and time\")\n       def get_time() -&gt; str:\n           \"\"\"Get the current date and time.\"\"\"\n           now = datetime.now()\n           return f\"Current time: {now.strftime('%Y-%m-%d %H:%M:%S')}\"\n      \n       @self.register(\"weather\", \"Get weather for a city (simulated)\")\n       def weather(city: str) -&gt; str:\n           \"\"\"Get weather information (simulated).\"\"\"\n           import random\n           temp = random.randint(60, 85)\n           conditions = random.choice([\"sunny\", \"partly cloudy\", \"cloudy\", \"rainy\"])\n           return f\"Weather in {city}: {temp}\u00b0F, {conditions}\"\n      \n       @self.register(\"search\", \"Search for information (simulated)\")\n       def search(query: str) -&gt; str:\n           \"\"\"Search the web (simulated).\"\"\"\n           return f\"Search results for '{query}': [Simulated results - in production, connect to a real search API]\"\n  \n   def register(self, name: str, description: str):\n       \"\"\"Decorator to register a tool.\"\"\"\n       def decorator(func):\n           self.tools[name] = {\n               \"function\": func,\n               \"description\": description,\n               \"name\": name\n           }\n           return func\n       return decorator\n  \n   def get_tools_prompt(self) -&gt; str:\n       \"\"\"Generate tools description for the system prompt.\"\"\"\n       tools_desc = \"You have access to the following tools:\\n\\n\"\n       for name, tool in self.tools.items():\n           tools_desc += f\"- {name}: {tool['description']}\\n\"\n      \n       tools_desc += \"\"\"\nTo use a tool, respond with:\nTOOL: &lt;tool_name&gt;\nARGS: &lt;arguments as JSON&gt;\n\n\nAfter receiving the tool result, provide your final answer to the user.\"\"\"\n       return tools_desc\n  \n   def execute(self, tool_name: str, args: dict) -&gt; str:\n       \"\"\"Execute a tool with given arguments.\"\"\"\n       if tool_name not in self.tools:\n           return f\"Error: Unknown tool '{tool_name}'\"\n      \n       try:\n           func = self.tools[tool_name][\"function\"]\n           if args:\n               result = func(**args)\n           else:\n               result = func()\n           return result\n       except Exception as e:\n           return f\"Error executing {tool_name}: {str(e)}\"\n  \n   def parse_tool_call(self, response: str) -&gt; tuple:\n       \"\"\"Parse a tool call from model response.\"\"\"\n       if \"TOOL:\" not in response:\n           return None, None\n      \n       lines = response.split(\"\\n\")\n       tool_name = None\n       args = {}\n      \n       for line in lines:\n           if line.startswith(\"TOOL:\"):\n               tool_name = line.replace(\"TOOL:\", \"\").strip()\n           elif line.startswith(\"ARGS:\"):\n               try:\n                   args_str = line.replace(\"ARGS:\", \"\").strip()\n                   args = json.loads(args_str) if args_str else {}\n               except json.JSONDecodeError:\n                   args = {\"expression\": args_str} if tool_name == \"calculator\" else {\"query\": args_str}\n      \n       return tool_name, args\n\n\ntools = ToolExecutor()\n\n\ndef chat_with_tools(user_message: str) -&gt; str:\n   \"\"\"\n   Chat with tool use capability.\n   \"\"\"\n   system_prompt = f\"\"\"You are a helpful assistant with access to tools.\n{tools.get_tools_prompt()}\n\n\nIf the user's request can be answered directly, do so.\nIf you need to use a tool, indicate which tool and with what arguments.\"\"\"\n\n\n   messages = [\n       {\"role\": \"system\", \"content\": system_prompt},\n       {\"role\": \"user\", \"content\": user_message}\n   ]\n  \n   output = pipe(\n       messages,\n       max_new_tokens=200,\n       do_sample=True,\n       temperature=0.7,\n       pad_token_id=tokenizer.eos_token_id,\n   )\n  \n   response = output[0][\"generated_text\"][-1][\"content\"]\n  \n   tool_name, args = tools.parse_tool_call(response)\n  \n   if tool_name:\n       tool_result = tools.execute(tool_name, args)\n      \n       messages.append({\"role\": \"assistant\", \"content\": response})\n       messages.append({\"role\": \"user\", \"content\": f\"Tool result: {tool_result}\\n\\nNow provide your final answer.\"})\n      \n       final_output = pipe(\n           messages,\n           max_new_tokens=200,\n           do_sample=True,\n           temperature=0.7,\n           pad_token_id=tokenizer.eos_token_id,\n       )\n      \n       return final_output[0][\"generated_text\"][-1][\"content\"]\n  \n   return response\n\n\nprint(\"\\n\ud83d\udd27 Tool Use Examples:\")\nprint(\"-\" * 50)
src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f527.png\" alt=\"\ud83d\udd27\" class=\"wp-smiley\" \/> Tool Use Examples:\")\nprint(\"-\" * 50)\n\n\ntool_queries = [\n   \"What is 15 * 23 + 7?\",\n   \"What time is it right now?\",\n   \"What's the weather like in Tokyo?\",\n]\n\n\nfor query in tool_queries:\n   print(f\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f464.png\" alt=\"\ud83d\udc64\" class=\"wp-smiley\" \/> User: {query}\")\n   response = chat_with_tools(query)\n   print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f916.png\" alt=\"\ud83e\udd16\" class=\"wp-smiley\" \/> Assistant: {response}\")\n\n\nprint(\"n\" + \"=\" * 70)\nprint(\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4e6.png\" alt=\"\ud83d\udce6\" class=\"wp-smiley\" \/> PART 9: Batch Processing for Efficiency\")\nprint(\"=\" * 70)\n\n\ndef batch_generate(prompts: list, batch_size: int = 2, max_new_tokens: int = 100) -&gt; list:\n   \"\"\"\n   Process multiple prompts in batches for efficiency.\n  \n   Args:\n       prompts: List of prompts to process\n       batch_size: Number of prompts per batch\n       max_new_tokens: Maximum tokens per response\n      \n   Returns:\n       List of responses\n   \"\"\"\n   results = []\n   total_batches = (len(prompts) + batch_size - 1) \/\/ batch_size\n  \n   for i in range(0, len(prompts), batch_size):\n       batch = prompts[i:i + batch_size]\n       batch_num = i \/\/ batch_size + 1\n       print(f\"   Processing batch {batch_num}\/{total_batches}...\")\n      \n       batch_messages = [\n           [{\"role\": \"user\", \"content\": prompt}]\n           for prompt in batch\n       ]\n      \n       for messages in batch_messages:\n           output = pipe(\n               messages,\n               max_new_tokens=max_new_tokens,\n               do_sample=True,\n               temperature=0.7,\n               pad_token_id=tokenizer.eos_token_id,\n           )\n           results.append(output[0][\"generated_text\"][-1][\"content\"])\n  \n   return results\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/1f4dd.png\" alt=\"\ud83d\udcdd\" class=\"wp-smiley\" \/> Batch Processing Example:\")\nprint(\"-\" * 50)\n\n\nbatch_prompts = [\n   \"What is the capital of France?\",\n   \"What is 7 * 8?\",\n   \"Name a primary color.\",\n   \"What season comes after summer?\",\n   \"What is H2O commonly called?\",\n]\n\n\nprint(f\"Processing {len(batch_prompts)} prompts...n\")\nbatch_results = batch_generate(batch_prompts, batch_size=2)\n\n\nfor prompt, result in zip(batch_prompts, batch_results):\n   print(f\"Q: {prompt}\")\n   print(f\"A: {result[:100]}...n\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We extend the tutorial to include tool use and batch inference, enabling the open-weight model to support more realistic application patterns. We define a lightweight tool execution framework, let the model choose tools through a structured text pattern, and then feed the tool results back into the generation loop to produce a final answer. 
 We also add batch processing to handle multiple prompts efficiently, which is useful for testing throughput and reusing the same inference pipeline across several tasks.<\/p>
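<p>Note that batch_generate above batches only logically: it still calls the pipeline once per prompt inside each batch. As a rough sketch of true GPU batching (our addition, assuming the same model and tokenizer from earlier cells; the function name and left-padding setup are ours), we can tokenize several chat prompts together and call model.generate once per batch.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\"># Sketch: true GPU batching with padded inputs (our addition, not from the original post).\n# Assumes `model` and `tokenizer` from above; decoder-only models should pad on the left.\ndef padded_batch_generate(prompts, max_new_tokens=100):\n   tokenizer.padding_side = \"left\"\n   if tokenizer.pad_token is None:\n       tokenizer.pad_token = tokenizer.eos_token\n   chats = [\n       tokenizer.apply_chat_template(\n           [{\"role\": \"user\", \"content\": p}],\n           add_generation_prompt=True,\n           tokenize=False,\n       )\n       for p in prompts\n   ]\n   # The chat template already adds special tokens, so do not add them again\n   enc = tokenizer(chats, return_tensors=\"pt\", padding=True, add_special_tokens=False).to(model.device)\n   out = model.generate(\n       **enc,\n       max_new_tokens=max_new_tokens,\n       do_sample=True,\n       temperature=0.7,\n       pad_token_id=tokenizer.eos_token_id,\n   )\n   new_tokens = out[:, enc[\"input_ids\"].shape[1]:]   # strip the prompt tokens\n   return tokenizer.batch_decode(new_tokens, skip_special_tokens=True)\n\n\n# Example: batch_results = padded_batch_generate(batch_prompts)<\/code><\/pre>\n<\/div>\n<\/div>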
<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\">print(\"\\n\" + \"=\" * 70)\nprint(\"\ud83e\udd16 PART 10: Interactive Chatbot Interface\")\nprint(\"=\" * 70)\n\n\nimport gradio as gr\n\n\ndef create_chatbot():\n   \"\"\"Create a Gradio chatbot interface for gpt-oss.\"\"\"\n  \n   def respond(message, history):\n       \"\"\"Generate chatbot response.\"\"\"\n       messages = []\n       for user_msg, assistant_msg in history:\n           messages.append({\"role\": \"user\", \"content\": user_msg})\n           if assistant_msg:\n               messages.append({\"role\": \"assistant\", \"content\": assistant_msg})\n      \n       messages.append({\"role\": \"user\", \"content\": message})\n      \n       output = pipe(\n           messages,\n           max_new_tokens=400,\n           do_sample=True,\n           temperature=0.8,\n           top_p=1.0,\n           pad_token_id=tokenizer.eos_token_id,\n       )\n      \n       return output[0][\"generated_text\"][-1][\"content\"]\n  \n   demo = gr.ChatInterface(\n       fn=respond,\n       title=\"\ud83d\ude80 GPT-OSS Chatbot\",\n       description=\"Chat with OpenAI's open-weight GPT-OSS model!\",\n       examples=[\n           \"Explain quantum computing in simple terms.\",\n           \"What are the benefits of open-source AI?\",\n           \"Tell me a fun fact about space.\",\n       ],\n       theme=gr.themes.Soft(),\n   )\n  \n   return demo\n\n\nprint(\"\\n\ud83d\ude80 Creating Gradio chatbot interface...\")\nchatbot = create_chatbot()\n\n\nprint(\"\\n\" + \"=\" * 70)\nprint(\"\ud83c\udf81 PART 11: Utility Helpers\")\nprint(\"=\" * 70)\n\n\nclass GptOssHelpers:\n   \"\"\"Collection of utility functions for common tasks.\"\"\"\n  \n   def __init__(self, pipeline, tokenizer):\n       self.pipe = pipeline\n       self.tokenizer = tokenizer\n  \n   def summarize(self, text: str, max_words: int = 50) -&gt; str:\n       \"\"\"Summarize text to specified length.\"\"\"\n       messages = [\n           {\"role\": \"system\", \"content\": f\"Summarize the following text in {max_words} words or less. Be concise.\"},\n           {\"role\": \"user\", \"content\": text}\n       ]\n       output = self.pipe(messages, max_new_tokens=150, temperature=0.5, pad_token_id=self.tokenizer.eos_token_id)\n       return output[0][\"generated_text\"][-1][\"content\"]\n  \n   def translate(self, text: str, target_language: str) -&gt; str:\n       \"\"\"Translate text to target language.\"\"\"\n       messages = [\n           {\"role\": \"user\", \"content\": f\"Translate to {target_language}: {text}\"}\n       ]\n       output = self.pipe(messages, max_new_tokens=200, temperature=0.3, pad_token_id=self.tokenizer.eos_token_id)\n       return output[0][\"generated_text\"][-1][\"content\"]\n  \n   def explain_simply(self, concept: str) -&gt; str:\n       \"\"\"Explain a concept in simple terms.\"\"\"\n       messages = [\n           {\"role\": \"system\", \"content\": \"Explain concepts simply, as if to a curious 10-year-old. Use analogies and examples.\"},\n           {\"role\": \"user\", \"content\": f\"Explain: {concept}\"}\n       ]\n       output = self.pipe(messages, max_new_tokens=200, temperature=0.8, pad_token_id=self.tokenizer.eos_token_id)\n       return output[0][\"generated_text\"][-1][\"content\"]\n  \n   def extract_keywords(self, text: str, num_keywords: int = 5) -&gt; list:\n       \"\"\"Extract key topics from text.\"\"\"\n       messages = [\n           {\"role\": \"user\", \"content\": f\"Extract exactly {num_keywords} keywords from this text. Return only the keywords, comma-separated:\\n\\n{text}\"}\n       ]\n       output = self.pipe(messages, max_new_tokens=50, temperature=0.3, pad_token_id=self.tokenizer.eos_token_id)\n       keywords = output[0][\"generated_text\"][-1][\"content\"]\n       return [k.strip() for k in keywords.split(\",\")]\n\n\nhelpers = GptOssHelpers(pipe, tokenizer)\n\n\nprint(\"\\n\ud83d\udcdd Helper Functions Demo:\")\nprint(\"-\" * 50)\n\n\nsample_text = \"\"\"\nArtificial intelligence has transformed many industries in recent years.\nFrom healthcare diagnostics to autonomous vehicles, AI systems are becoming\nan everyday part of how products are built and decisions are made.\n\"\"\"\n\n\nprint(\"\\n1\u20e3 Summarization:\")\nsummary = helpers.summarize(sample_text, max_words=20)\nprint(f\"   {summary}\")\n\n\nprint(\"\\n2\u20e3 Simple Explanation:\")\nexplanation = helpers.explain_simply(\"neural networks\")\nprint(f\"   {explanation[:200]}...\")\n\n\nprint(\"\\n\" + \"=\" * 70)\nprint(\"\u2705 TUTORIAL COMPLETE!\")\nprint(\"=\" * 70)\n\n\nprint(\"\"\"\n\ud83c\udf89 You've learned how to use GPT-OSS on Google Colab!\n\n\nWHAT YOU LEARNED:\n \u2713 Correct model loading (no load_in_4bit - uses native MXFP4)\n \u2713 Basic inference with proper parameters\n \u2713 Configurable reasoning effort (low\/medium\/high)\n \u2713 Structured JSON output generation\n \u2713 Multi-turn conversations with memory\n \u2713 Streaming token generation\n \u2713 Function calling and tool use\n \u2713 Batch processing for efficiency\n \u2713 Interactive Gradio chatbot\n\n\nKEY TAKEAWAYS:\n \u2022 GPT-OSS uses native MXFP4 quantization (don't use bitsandbytes)\n \u2022 Recommended: temperature=1.0, top_p=1.0\n \u2022 gpt-oss-20b fits on a T4 GPU (~16GB VRAM)\n \u2022 gpt-oss-120b requires H100\/A100 (~80GB VRAM)
\n \u2022 Always use trust_remote_code=True\n\n\nRESOURCES:\n \ud83d\udcda GitHub: https:\/\/github.com\/openai\/gpt-oss\n \ud83d\udcda Hugging Face: https:\/\/huggingface.co\/openai\/gpt-oss-20b\n \ud83d\udcda Model Card: https:\/\/arxiv.org\/abs\/2508.10925\n \ud83d\udcda Harmony Format: https:\/\/github.com\/openai\/harmony\n \ud83d\udcda Cookbook: https:\/\/cookbook.openai.com\/topic\/gpt-oss\n\n\nALTERNATIVE INFERENCE OPTIONS (for better performance):\n \u2022 vLLM: Production-ready, OpenAI-compatible server\n \u2022 Ollama: Easy local deployment\n \u2022 LM Studio: Desktop GUI application\n\"\"\")\n\n\nif torch.cuda.is_available():\n   print(f\"\\n\ud83d\udcca Final GPU Memory Usage:\")\n   print(f\"   Allocated: {torch.cuda.memory_allocated() \/ 1e9:.2f} GB\")\n   print(f\"   Reserved: {torch.cuda.memory_reserved() \/ 1e9:.2f} GB\")\n\n\nprint(\"\\n\" + \"=\" * 70)\nprint(\"\ud83d\ude80 Launch the chatbot by running: chatbot.launch(share=True)\")\nprint(\"=\" * 70)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We turn the model pipeline into a usable application by building a Gradio chatbot interface and then adding helper utilities for summarization, translation, simplified explanation, and keyword extraction. We show how the same open-weight model can support both interactive chat and reusable task-specific functions inside a single Colab workflow. We end by summarizing the tutorial, reviewing the key technical takeaways, and reinforcing how GPT-OSS can be loaded, controlled, and extended as a practical open-weight system.<\/p>
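<p>The demo cell exercises summarize and explain_simply; the remaining helpers work the same way. Here is a brief usage sketch (our addition, reusing sample_text and helpers from the cell above):<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\" no-line-numbers\"><code class=\" no-wrap language-python\"># Quick usage of the two helpers not shown in the demo above (our addition).\ntranslation = helpers.translate(\"Open-weight models are inspectable.\", \"French\")\nprint(f\"Translation: {translation}\")\n\n\nkeywords = helpers.extract_keywords(sample_text, num_keywords=5)\nprint(f\"Keywords: {keywords}\")<\/code><\/pre>\n<\/div>\n<\/div>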
\n<p>In conclusion, we built a comprehensive, hands-on understanding of how to use GPT-OSS as an open-weight language model rather than a black-box endpoint. We loaded the model with the correct inference path, avoiding incorrect low-bit loading approaches, and worked through important implementation patterns, including configurable reasoning effort, JSON-constrained outputs, Harmony-style conversational formatting, token streaming, lightweight tool-use orchestration, and Gradio-based interaction. In doing so, we saw the real advantage of open-weight models: we can directly control model loading, inspect runtime behavior, shape generation flows, and design custom utilities on top of the base model without depending entirely on managed infrastructure.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out\u00a0the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Agents-Projects-Tutorials\/blob\/main\/LLM%20Projects\/gpt_oss_open_weight_advanced_inference_tutorial_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">Full Code Implementation<\/a><\/strong>.\u00a0Also, feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>, join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">130k+ ML SubReddit<\/a><\/strong>, subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>, and\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">join us on Telegram<\/a><\/strong>.<\/p>\n<p>Need to partner with us to promote your GitHub repo, Hugging Face page, product release, or webinar?\u00a0<strong><a href=\"https:\/\/forms.gle\/MTNLpmJtsFA3VRVd9\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Connect with us<\/mark><\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/04\/17\/a-end-to-end-coding-guide-to-running-openai-gpt-oss-open-weight-models-with-advanced-inference-workflows\/\">An End-to-End Coding Guide to Running OpenAI GPT-OSS Open-Weight Models with Advanced Inference Workflows<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we explore how to run OpenAI\u2019s open-weight GPT-OSS models in Google Colab
&hellip;<\/p>\n","protected":false,"author":1,"featured_media":29,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-745","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/745","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=745"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/745\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=745"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=745"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=745"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}