diff --git a/guides/ipynb/keras_hub/mcp_with_keras_hub.ipynb b/guides/ipynb/keras_hub/mcp_with_keras_hub.ipynb
new file mode 100644
index 0000000000..1545ba577b
--- /dev/null
+++ b/guides/ipynb/keras_hub/mcp_with_keras_hub.ipynb
@@ -0,0 +1,975 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# Model Context Protocol (MCP) with KerasHub\n",
+ "\n",
+    "**Author:** [Laxmareddypatlolla](https://github.com/laxmareddypatlolla), [Divyashree Sreepathihalli](https://github.com/divyashreepathihalli)<br>\n",
+    "**Date created:** 2025/08/16<br>\n",
+    "**Last modified:** 2025/08/16<br>\n",
+ "**Description:** Complete guide to building MCP systems using KerasHub models for intelligent tool calling."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## Welcome to Your MCP Adventure! \ud83d\ude80\n",
+ "\n",
+ "Hey there! Ready to dive into something really exciting? We're about to build a system that can make AI models actually \"do things\" in the real world - not just chat, but actually execute functions, call APIs, and interact with external tools!\n",
+ "\n",
+ "**What makes this special?** Instead of just having a conversation with an AI, we're building something that's like having a super-smart assistant who can actually take action on your behalf. It's like the difference between talking to someone about cooking versus having them actually cook dinner for you!\n",
+ "\n",
+ "**What we're going to discover together:**\n",
+ "\n",
+ "* How to make AI models work with external tools and functions\n",
+ "* Why MCP (Model Context Protocol) is the future of AI interaction\n",
+ "* How to build systems that are both intelligent AND actionable\n",
+ "* The magic of combining language understanding with tool execution\n",
+ "\n",
+ "Think of this as your journey into the future of AI-powered automation. By the end, you'll have built something that could potentially revolutionize how we interact with AI systems!\n",
+ "\n",
+ "Ready to start this adventure? Let's go!\n",
+ "\n",
+ "## Understanding the Magic Behind MCP! \u2728\n",
+ "\n",
+ "Alright, let's take a moment to understand what makes MCP so special! Think of MCP as having a super-smart assistant who doesn't just answer questions, but actually knows how to use tools to get things done.\n",
+ "\n",
+ "**The Three Musketeers of MCP:**\n",
+ "\n",
+ "1. **The Language Model** \ud83e\udde0: This is like having a brilliant conversationalist who can understand what you want and figure out what tools might help\n",
+ "2. **The Tool Registry** \ud83d\udee0\ufe0f: This is like having a well-organized toolbox where every tool has a clear purpose and instructions\n",
+ "3. **The Execution Engine** \u26a1: This is like having a skilled worker who can actually use the tools to accomplish tasks\n",
+ "\n",
+ "**Here's what our amazing MCP system will do:**\n",
+ "\n",
+ "* **Step 1:** Our Gemma3 model will understand your request and determine if it needs a tool\n",
+ "* **Step 2:** It will identify the right tool from our registry (weather, calculator, search, etc.)\n",
+    "* **Step 3:** It will format the tool call with the correct parameters (see the example below)\n",
+ "* **Step 4:** Our system will execute the tool and get real results\n",
+ "* **Step 5:** We'll present you with actionable information instead of just text\n",
+ "\n",
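+    "To make Steps 3 and 4 concrete, here is a sketch of the kind of structured call the model is prompted to produce, and that our parser turns into a plain Python dict (the field names match the parsers implemented later in this guide):\n",
+    "\n",
+    "```python\n",
+    "# Illustrative model output (one of the formats our parser accepts):\n",
+    "# TOOL_CALL:\n",
+    "# {\"name\": \"weather\", \"arguments\": {\"city\": \"Tokyo\"}}\n",
+    "#\n",
+    "# After parsing, the call is an ordinary Python dict that the tool\n",
+    "# registry can execute:\n",
+    "tool_call = {\"name\": \"weather\", \"arguments\": {\"city\": \"Tokyo\"}}\n",
+    "```\n",
+    "\n",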
+ "**Why this is revolutionary:** Instead of the AI just telling you what it knows, it's actually doing things for you! It's like the difference between a librarian who tells you where to find a book versus one who actually goes and gets the book for you!\n",
+ "\n",
+ "Ready to see this magic in action? Let's start building! \ud83c\udfaf\n",
+ "\n",
+ "## Setting Up Our AI Workshop \ud83d\udee0\ufe0f\n",
+ "\n",
+ "Alright, before we start building our amazing MCP system, we need to set up our digital workshop! Think of this like gathering all the tools a master craftsman needs before creating a masterpiece.\n",
+ "\n",
+ "**What we're doing here:** We're importing all the powerful libraries that will help us build our MCP system. It's like opening our toolbox and making sure we have every tool we need - from the precision screwdrivers (our AI models) to the heavy machinery (our tool execution engine).\n",
+ "\n",
+ "**Why KerasHub?** We're using KerasHub because it's like having access to a massive library of pre-trained AI models. Instead of training models from scratch (which would take forever), we can grab models that are already experts at understanding language and generating responses. It's like having a team of specialists ready to work for us!\n",
+ "\n",
+ "**The magic of MCP:** This is where things get really exciting! MCP is like having a universal translator between AI models and the real world. It allows our AI to not just think, but to act!\n",
+ "\n",
+ "Let's get our tools ready and start building something amazing!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import re\n",
+ "import json\n",
+ "\n",
+    "# The ast module is used to parse and validate math expressions before they are evaluated\n",
+ "import ast\n",
+ "from typing import Dict, List, Any, Callable, Optional\n",
+ "\n",
+    "# Select the JAX backend for Keras; this must be set before Keras is imported\n",
+ "os.environ[\"KERAS_BACKEND\"] = \"jax\"\n",
+ "\n",
+ "import keras\n",
+ "from keras import layers\n",
+ "import keras_hub\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## Loading Our AI Dream Team! \ud83e\udd16\n",
+ "\n",
+ "Alright, this is where the real magic begins! We're about to load up our AI model - think of this as assembling the ultimate specialist with the superpower of understanding and responding to human requests!\n",
+ "\n",
+ "**What we're doing here:** We're loading the `Gemma3 Instruct 1B` model from KerasHub. This model is like having a brilliant conversationalist who can understand complex requests and figure out when to use tools versus when to respond directly.\n",
+ "\n",
+ "**Why Gemma3?** This model is specifically designed for instruction-following and tool usage. It's like having an AI that's been trained to be helpful and actionable, not just chatty!\n",
+ "\n",
+ "**The magic of KerasHub:** Instead of downloading and setting up complex model files, we just call `keras_hub.models.CausalLM.from_preset()` and KerasHub handles all the heavy lifting for us. It's like having a personal assistant who sets up your entire workspace!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "\n",
+ "def _load_model():\n",
+ " \"\"\"\n",
+ " Load the Gemma3 Instruct 1B model from KerasHub.\n",
+ "\n",
+ " This is the \"brain\" of our system - the AI model that understands\n",
+ " user requests and decides when to use tools.\n",
+ "\n",
+ " Returns:\n",
+ " The loaded Gemma3 model ready for text generation\n",
+ " \"\"\"\n",
+ " print(\"\ud83d\ude80 Loading Gemma3 Instruct 1B model...\")\n",
+ " model = keras_hub.models.Gemma3CausalLM.from_preset(\"gemma3_instruct_1b\")\n",
+ " print(f\"\u2705 Model loaded successfully: {model.name}\")\n",
+ " return model\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## Building Our Tool Arsenal! \ud83d\udee0\ufe0f\n",
+ "\n",
+ "Now we're getting to the really fun part! We're building our collection of tools that our AI can use to actually accomplish tasks. Think of this as creating a Swiss Army knife for your AI assistant!\n",
+ "\n",
+ "**What we're building here:**\n",
+ "\n",
+ "We're creating three essential tools that demonstrate different types of capabilities:\n",
+ "\n",
+ "1. **Weather Tool** - Shows how to work with external data and APIs\n",
+ "2. **Calculator Tool** - Shows how to handle mathematical computations\n",
+ "3. **Search Tool** - Shows how to provide information retrieval\n",
+ "\n",
+ "**Why these tools?** Each tool represents a different category of AI capabilities:\n",
+ "\n",
+ "- **Data Access** (weather) - Getting real-time information\n",
+ "- **Computation** (calculator) - Processing and analyzing data with security considerations\n",
+ "- **Knowledge Retrieval** (search) - Finding and organizing information\n",
+ "\n",
+ "**The magic of tool design:** Each tool is designed to be simple, reliable, and focused. It's like building with LEGO blocks - each piece has a specific purpose, and together they create something amazing!\n",
+ "\n",
+ "**Security considerations:** Our calculator tool demonstrates safe mathematical evaluation techniques, but in production environments, you should use specialized math libraries for enhanced security.\n",
+ "\n",
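+    "As a minimal sketch of that safer approach (illustrative only, not used by the guide's code below), a production calculator could walk the parsed AST and apply only whitelisted arithmetic operators instead of calling `eval`:\n",
+    "\n",
+    "```python\n",
+    "# Sketch: an eval-free arithmetic evaluator built on the ast module.\n",
+    "import ast\n",
+    "import operator\n",
+    "\n",
+    "_OPS = {\n",
+    "    ast.Add: operator.add,\n",
+    "    ast.Sub: operator.sub,\n",
+    "    ast.Mult: operator.mul,\n",
+    "    ast.Div: operator.truediv,\n",
+    "    ast.USub: operator.neg,\n",
+    "}\n",
+    "\n",
+    "\n",
+    "def safe_arith_eval(expression):\n",
+    "    # Recursively evaluate only numbers and the whitelisted operators above.\n",
+    "    def _eval(node):\n",
+    "        if isinstance(node, ast.Expression):\n",
+    "            return _eval(node.body)\n",
+    "        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):\n",
+    "            return node.value\n",
+    "        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:\n",
+    "            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))\n",
+    "        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:\n",
+    "            return _OPS[type(node.op)](_eval(node.operand))\n",
+    "        raise ValueError(\"Unsupported expression\")\n",
+    "\n",
+    "    return _eval(ast.parse(expression, mode=\"eval\"))\n",
+    "\n",
+    "\n",
+    "safe_arith_eval(\"15 * 23 + 7\")  # 352\n",
+    "```\n",
+    "\n",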
+ "Let's build our tools and see how they work!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "\n",
+ "def weather_tool(city: str) -> str:\n",
+ " \"\"\"\n",
+ " Get weather information for a specific city.\n",
+ "\n",
+ " This tool demonstrates how MCP can access external data sources.\n",
+ " In a real-world scenario, this would connect to a weather API.\n",
+ "\n",
+ " Args:\n",
+ " city: The name of the city to get weather for\n",
+ "\n",
+ " Returns:\n",
+ " A formatted weather report for the city\n",
+ " \"\"\"\n",
+ " # Simulated weather data - in production, this would call a real API\n",
+ " weather_data = {\n",
+ " \"Tokyo\": \"75\u00b0F, Rainy, Humidity: 82%\",\n",
+ " \"New York\": \"65\u00b0F, Partly Cloudy, Humidity: 70%\",\n",
+ " \"London\": \"55\u00b0F, Cloudy, Humidity: 85%\",\n",
+ " \"Paris\": \"68\u00b0F, Sunny, Humidity: 65%\",\n",
+ " \"Sydney\": \"72\u00b0F, Clear, Humidity: 60%\",\n",
+ " }\n",
+ "\n",
+ " city_normalized = city.title()\n",
+ " if city_normalized in weather_data:\n",
+ " return weather_data[city_normalized]\n",
+ " else:\n",
+ " return f\"Weather data not available for {city_normalized}\"\n",
+ "\n",
+ "\n",
+    "# \u26a0\ufe0f SECURITY WARNING: This tool demonstrates restricted mathematical evaluation.\n",
+    "# In production, prefer evaluating the parsed AST directly (no eval) or use a\n",
+    "# dedicated expression library such as 'sympy' for more robust handling.\n",
+ "def calculator_tool(expression: str) -> str:\n",
+ " \"\"\"\n",
+ " Calculate mathematical expressions safely.\n",
+ "\n",
+ " This tool demonstrates how MCP can handle computational tasks.\n",
+ " It safely evaluates mathematical expressions while preventing code injection.\n",
+ "\n",
+ " Args:\n",
+ " expression: A mathematical expression as a string (e.g., \"15 + 7 - 24\")\n",
+ "\n",
+ " Returns:\n",
+ " The calculated result as a string\n",
+ " \"\"\"\n",
+ " try:\n",
+ " # Clean the expression to only allow safe mathematical operations\n",
+ " cleaned_expr = re.sub(r\"[^0-9+\\-*/().\\s]\", \"\", expression)\n",
+ "\n",
+ " # Convert mathematical expression to a safe format\n",
+ " # Replace mathematical operators with Python equivalents\n",
+ " safe_expr = cleaned_expr.replace(\"\u00d7\", \"*\").replace(\"\u00f7\", \"/\")\n",
+ "\n",
+ " # Create a safe evaluation environment\n",
+ " allowed_names = {\n",
+ " \"abs\": abs,\n",
+ " \"round\": round,\n",
+ " \"min\": min,\n",
+ " \"max\": max,\n",
+ " \"sum\": sum,\n",
+ " \"pow\": pow,\n",
+ " }\n",
+ "\n",
+ " # Parse and evaluate safely\n",
+ " tree = ast.parse(safe_expr, mode=\"eval\")\n",
+ "\n",
+ " # Only allow basic arithmetic operations\n",
+ " for node in ast.walk(tree):\n",
+ " if isinstance(node, ast.Call):\n",
+ " if (\n",
+ " not isinstance(node.func, ast.Name)\n",
+ " or node.func.id not in allowed_names\n",
+ " ):\n",
+ " raise ValueError(\"Function calls not allowed\")\n",
+ "\n",
+ " # Evaluate in restricted environment\n",
+ " result = eval(safe_expr, {\"__builtins__\": {}}, allowed_names)\n",
+ "\n",
+ " # Format the result nicely\n",
+ " if isinstance(result, (int, float)):\n",
+ " if result == int(result):\n",
+ " return str(int(result))\n",
+ " else:\n",
+ " return f\"{result:.2f}\"\n",
+ " else:\n",
+ " return str(result)\n",
+ " except Exception as e:\n",
+ " return f\"Error calculating '{expression}': {str(e)}\"\n",
+ "\n",
+ "\n",
+ "def search_tool(query: str) -> str:\n",
+ " \"\"\"\n",
+ " Search for information based on a query.\n",
+ "\n",
+ " This tool demonstrates how MCP can provide information retrieval.\n",
+ " In a real-world scenario, this would connect to search engines or databases.\n",
+ "\n",
+ " Args:\n",
+ " query: The search query string\n",
+ "\n",
+ " Returns:\n",
+ " Relevant information based on the query\n",
+ " \"\"\"\n",
+ " # Simulated search results - in production, this would call real search APIs\n",
+ " search_responses = {\n",
+ " \"machine learning\": \"Machine learning is a subset of artificial intelligence that enables computers to learn and improve from experience without being explicitly programmed. It's used in recommendation systems, image recognition, natural language processing, and many other applications.\",\n",
+ " \"python\": \"Python is a high-level, interpreted programming language known for its simplicity and readability. It's widely used in data science, web development, machine learning, and automation.\",\n",
+ " \"artificial intelligence\": \"Artificial Intelligence (AI) refers to the simulation of human intelligence in machines. It encompasses machine learning, natural language processing, computer vision, and robotics.\",\n",
+ " \"data science\": \"Data science combines statistics, programming, and domain expertise to extract meaningful insights from data. It involves data collection, cleaning, analysis, and visualization.\",\n",
+ " }\n",
+ "\n",
+ " query_lower = query.lower()\n",
+ " for key, response in search_responses.items():\n",
+ " if key in query_lower:\n",
+ " return response\n",
+ "\n",
+ " return f\"Search results for '{query}': Information not available in our current knowledge base.\"\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## Creating Our Tool Management System! \ud83c\udfd7\ufe0f\n",
+ "\n",
+ "Now we're building the backbone of our MCP system - the tool registry and management system. Think of this as creating the control center that coordinates all our tools!\n",
+ "\n",
+ "**What we're building here:** We're creating a system that:\n",
+ "\n",
+ "1. **Registers tools** - Keeps track of what tools are available\n",
+ "2. **Manages tool metadata** - Stores descriptions and parameter information\n",
+ "3. **Executes tools safely** - Runs tools with proper error handling\n",
+ "4. **Provides tool information** - Gives the AI model context about available tools\n",
+ "\n",
+ "**Why this architecture?** This design separates concerns beautifully:\n",
+ "\n",
+ "- **Tool Registry** handles tool management\n",
+ "- **MCP Client** handles AI interaction\n",
+ "- **Individual Tools** handle specific functionality\n",
+ "\n",
+ "**The magic of separation of concerns:** Each component has a single responsibility, making the system easy to understand, debug, and extend. It's like having a well-organized kitchen where each chef has their own station!\n",
+ "\n",
+ "Let's build our tool management system!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "\n",
+ "class MCPTool:\n",
+ " \"\"\"\n",
+ " Represents a tool that can be called by the MCP system.\n",
+ "\n",
+ " This class encapsulates all the information needed to use a tool:\n",
+ " - What the tool does (description)\n",
+ " - What parameters it needs (function signature)\n",
+ " - How to execute it (the actual function)\n",
+ "\n",
+ " Think of this as creating a detailed instruction manual for each tool!\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, name: str, description: str, function: Callable):\n",
+ " \"\"\"\n",
+ " Initialize a new MCP tool.\n",
+ "\n",
+ " Args:\n",
+ " name: The name of the tool (e.g., \"weather\", \"calculator\")\n",
+ " description: What the tool does (used by the AI to decide when to use it)\n",
+ " function: The actual function that implements the tool's functionality\n",
+ " \"\"\"\n",
+ " self.name = name\n",
+ " self.description = description\n",
+ " self.function = function\n",
+ "\n",
+ " def execute(self, **kwargs) -> str:\n",
+ " \"\"\"\n",
+ " Execute the tool with the given parameters.\n",
+ "\n",
+ " Args:\n",
+ " **kwargs: The parameters to pass to the tool function\n",
+ "\n",
+ " Returns:\n",
+ " The result of executing the tool\n",
+ " \"\"\"\n",
+ " try:\n",
+ " return self.function(**kwargs)\n",
+ " except Exception as e:\n",
+ " return f\"Error executing {self.name}: {str(e)}\"\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## The Command Center: MCPToolRegistry \ud83c\udfaf\n",
+ "\n",
+ "Now we're building the heart of our tool management system - the MCPToolRegistry! Think of this as creating the mission control center for all our AI tools.\n",
+ "\n",
+ "**What this class does:** The MCPToolRegistry is like having a brilliant project manager who:\n",
+ "\n",
+ "- **Keeps an organized inventory** of all available tools\n",
+ "- **Provides instant access** to tool information when the AI needs it\n",
+ "- **Coordinates tool execution** with proper error handling\n",
+ "- **Maintains tool metadata** so the AI knows what each tool can do\n",
+ "\n",
+ "**Why this is crucial:** Without a tool registry, our AI would be like a chef without a kitchen - it might know what to cook, but it wouldn't know what tools are available or how to use them. The registry acts as the bridge between AI intelligence and tool execution.\n",
+ "\n",
+ "**The magic of centralization:** By having all tools registered in one place, we can:\n",
+ "\n",
+ "- Easily add new tools without changing the core system\n",
+ "- Provide the AI with a complete overview of available capabilities\n",
+ "- Handle errors consistently across all tools\n",
+ "- Scale the system by simply registering more tools\n",
+ "\n",
+ "Think of this as the control tower at an airport - it doesn't fly the planes itself, but it coordinates everything so all flights can take off and land safely!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "\n",
+ "class MCPToolRegistry:\n",
+ " \"\"\"\n",
+ " Manages the collection of available tools in our MCP system.\n",
+ "\n",
+ " This class acts as a central registry that:\n",
+ " - Keeps track of all available tools\n",
+ " - Provides information about tools to the AI model\n",
+ " - Executes tools when requested\n",
+ " - Handles errors gracefully\n",
+ "\n",
+ " Think of this as the command center that coordinates all our tools!\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self):\n",
+ " \"\"\"Initialize an empty tool registry.\"\"\"\n",
+ " self.tools = {}\n",
+ "\n",
+ " def register_tool(self, tool: MCPTool):\n",
+ " \"\"\"\n",
+ " Register a new tool in the registry.\n",
+ "\n",
+ " Args:\n",
+ " tool: The MCPTool instance to register\n",
+ " \"\"\"\n",
+ " self.tools[tool.name] = tool\n",
+ " print(f\"\u2705 Registered tool: {tool.name}\")\n",
+ "\n",
+ " def get_tools_list(self) -> str:\n",
+ " \"\"\"\n",
+ " Get a formatted list of all available tools.\n",
+ "\n",
+ " This creates a description that the AI model can use to understand\n",
+ " what tools are available and when to use them.\n",
+ "\n",
+ " Returns:\n",
+ " A formatted string describing all available tools\n",
+ " \"\"\"\n",
+ " tools_list = []\n",
+ " for name, tool in self.tools.items():\n",
+ " tools_list.append(f\"{name}: {tool.description}\")\n",
+ " return \"\\n\".join(tools_list)\n",
+ "\n",
+ " def execute_tool(self, tool_name: str, arguments: Dict[str, Any]) -> str:\n",
+ " \"\"\"\n",
+ " Execute a specific tool with the given arguments.\n",
+ "\n",
+ " Args:\n",
+ " tool_name: The name of the tool to execute\n",
+ " arguments: The arguments to pass to the tool\n",
+ "\n",
+ " Returns:\n",
+ " The result of executing the tool\n",
+ "\n",
+ " Raises:\n",
+ " ValueError: If the tool is not found\n",
+ " \"\"\"\n",
+ " if tool_name not in self.tools:\n",
+ " raise ValueError(f\"Tool '{tool_name}' not found\")\n",
+ "\n",
+ " tool = self.tools[tool_name]\n",
+ " return tool.execute(**arguments)\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## Building Our AI Communication Bridge! \ud83c\udf09\n",
+ "\n",
+ "Now we're creating the heart of our MCP system - the client that bridges the gap between our AI model and our tools. Think of this as building a translator that can understand both human language and machine instructions!\n",
+ "\n",
+ "**What we're building here:** We're creating a system that:\n",
+ "\n",
+ "1. **Understands user requests** - Processes natural language input\n",
+ "2. **Generates appropriate prompts** - Creates context for the AI model\n",
+ "3. **Parses AI responses** - Extracts tool calls from the model's output\n",
+ "4. **Executes tools** - Runs the requested tools and gets results\n",
+ "5. **Provides responses** - Gives users actionable information\n",
+ "\n",
+ "**Why this architecture?** This design creates a clean separation between:\n",
+ "\n",
+ "- **AI Understanding** (the model's job)\n",
+ "- **Tool Execution** (our system's job)\n",
+ "- **Response Generation** (combining AI insights with tool results)\n",
+ "\n",
+ "**The magic of the bridge pattern:** It allows our AI model to focus on what it does best (understanding language) while our system handles what it does best (executing tools). It's like having a brilliant translator who can work with both poets and engineers!\n",
+ "\n",
+ "Let's build our AI communication bridge!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "\n",
+ "class MCPClient:\n",
+ " \"\"\"\n",
+ " The main client that handles communication between users, AI models, and tools.\n",
+ "\n",
+ " This class orchestrates the entire MCP workflow:\n",
+ " 1. Takes user input and creates appropriate prompts\n",
+ " 2. Sends prompts to the AI model\n",
+ " 3. Parses the model's response for tool calls\n",
+ " 4. Executes requested tools\n",
+ " 5. Returns results to the user\n",
+ "\n",
+ " Think of this as the conductor of an orchestra, making sure everyone plays their part!\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, model, tool_registry: MCPToolRegistry):\n",
+ " \"\"\"\n",
+ " Initialize the MCP client.\n",
+ "\n",
+ " Args:\n",
+ " model: The KerasHub model to use for understanding requests\n",
+ " tool_registry: The registry of available tools\n",
+ " \"\"\"\n",
+ " self.model = model\n",
+ " self.tool_registry = tool_registry\n",
+ "\n",
+ " def _build_prompt(self, user_input: str) -> str:\n",
+ " \"\"\"\n",
+ " Build a prompt for the AI model that includes available tools.\n",
+ "\n",
+ " This method creates the context that helps the AI model understand:\n",
+ " - What tools are available\n",
+ " - When to use them\n",
+ " - How to format tool calls\n",
+ "\n",
+ " Args:\n",
+ " user_input: The user's request\n",
+ "\n",
+ " Returns:\n",
+ " A formatted prompt for the AI model\n",
+ " \"\"\"\n",
+ " tools_list = self.tool_registry.get_tools_list()\n",
+ "\n",
+ " # Ultra-simple prompt - just the essentials\n",
+ " # This minimal approach has proven most effective for encouraging tool calls\n",
+ " prompt = f\"\"\"Available tools:\n",
+ "{tools_list}\n",
+ "\n",
+ "User: {user_input}\n",
+ "Assistant:\"\"\"\n",
+ " return prompt\n",
+ "\n",
+ " def _extract_tool_calls(self, response: str) -> List[Dict[str, Any]]:\n",
+ " \"\"\"\n",
+ " Extract tool calls from the AI model's response.\n",
+ "\n",
+ " This method uses flexible parsing to handle various formats the model might generate:\n",
+ " - TOOL_CALL: {...} format\n",
+ " - {\"tool\": \"name\", \"arguments\": {...}} format\n",
+ " - ```tool_code function_name(...) ``` format\n",
+ "\n",
+ " Args:\n",
+ " response: The raw response from the AI model\n",
+ "\n",
+ " Returns:\n",
+ " A list of parsed tool calls\n",
+ " \"\"\"\n",
+ " tool_calls = []\n",
+ "\n",
+    "        # Look for TOOL_CALL blocks (allowing one level of nested braces in the JSON)\n",
+    "        pattern = r\"TOOL_CALL:\\s*(\\{(?:[^{}]|\\{[^{}]*\\})*\\})\"\n",
+ " matches = re.findall(pattern, response, re.DOTALL)\n",
+ " for match in matches:\n",
+ " try:\n",
+ " json_str = match.strip()\n",
+ " tool_call = json.loads(json_str)\n",
+ " if \"name\" in tool_call and \"arguments\" in tool_call:\n",
+ " tool_calls.append(tool_call)\n",
+ " except json.JSONDecodeError:\n",
+ " continue\n",
+ "\n",
+ " # If no TOOL_CALL format found, try to parse the format the model is actually generating\n",
+ " if not tool_calls:\n",
+ " tool_calls = self._parse_model_tool_format(response)\n",
+ "\n",
+ " return tool_calls\n",
+ "\n",
+ " def _parse_model_tool_format(self, response: str) -> List[Dict[str, Any]]:\n",
+ " \"\"\"\n",
+ " Parse the format the model is actually generating: {\"tool\": \"tool_name\", \"arguments\": {...}}\n",
+ "\n",
+ " This method handles the JSON format that our model tends to generate,\n",
+ " converting it to our standard tool call format.\n",
+ "\n",
+ " Args:\n",
+ " response: The raw response from the AI model\n",
+ "\n",
+ " Returns:\n",
+ " A list of parsed tool calls\n",
+ " \"\"\"\n",
+ " tool_calls = []\n",
+    "        pattern = r'\\{(?:[^{}]|\\{[^{}]*\\})*\\}'\n",
+ " matches = re.findall(pattern, response, re.DOTALL)\n",
+ "\n",
+ " for match in matches:\n",
+ " try:\n",
+ " tool_call = json.loads(match)\n",
+ " if \"tool\" in tool_call and \"arguments\" in tool_call:\n",
+ " converted_call = {\n",
+ " \"name\": tool_call[\"tool\"],\n",
+ " \"arguments\": tool_call[\"arguments\"],\n",
+ " }\n",
+ " tool_calls.append(converted_call)\n",
+ " except json.JSONDecodeError:\n",
+ " continue\n",
+ "\n",
+ " # If still no tool calls found, try to parse tool_code blocks\n",
+ " if not tool_calls:\n",
+ " tool_calls = self._parse_tool_code_blocks(response)\n",
+ "\n",
+ " return tool_calls\n",
+ "\n",
+ " def _parse_tool_code_blocks(self, response: str) -> List[Dict[str, Any]]:\n",
+ " \"\"\"\n",
+ " Parse tool_code blocks that the model is generating.\n",
+ "\n",
+ " This method handles the ```tool_code function_name(...) ``` format\n",
+ " that our model sometimes generates, converting it to our standard format.\n",
+ "\n",
+ " Args:\n",
+ " response: The raw response from the AI model\n",
+ "\n",
+ " Returns:\n",
+ " A list of parsed tool calls\n",
+ " \"\"\"\n",
+ " tool_calls = []\n",
+ " pattern = r\"```tool_code\\s*\\n([^`]+)\\n```\"\n",
+ " matches = re.findall(pattern, response, re.DOTALL)\n",
+ "\n",
+ " for match in matches:\n",
+ " try:\n",
+ " tool_call = self._parse_tool_code_call(match.strip())\n",
+ " if tool_call:\n",
+ " tool_calls.append(tool_call)\n",
+ " except Exception:\n",
+ " continue\n",
+ "\n",
+ " return tool_calls\n",
+ "\n",
+ " def _parse_tool_code_call(self, tool_code: str) -> Dict[str, Any]:\n",
+ " \"\"\"\n",
+ " Parse a tool_code call into a tool call structure.\n",
+ "\n",
+ " This method converts the function-call format into our standard\n",
+ " tool call format with name and arguments.\n",
+ "\n",
+ " Args:\n",
+ " tool_code: The tool code string (e.g., \"weather.get_weather(city='Tokyo')\")\n",
+ "\n",
+ " Returns:\n",
+ " A parsed tool call dictionary or None if parsing fails\n",
+ " \"\"\"\n",
+ " if \"weather.get_weather\" in tool_code:\n",
+    "            city_match = re.search(r'city=[\"\\']([^\"\\']+)[\"\\']', tool_code)\n",
+ " if city_match:\n",
+ " return {\"name\": \"weather\", \"arguments\": {\"city\": city_match.group(1)}}\n",
+ " elif \"calculator\" in tool_code:\n",
+    "            expression_match = re.search(r'expression=[\"\\']([^\"\\']+)[\"\\']', tool_code)\n",
+ " if expression_match:\n",
+ " return {\n",
+ " \"name\": \"calculator\",\n",
+ " \"arguments\": {\"expression\": expression_match.group(1)},\n",
+ " }\n",
+ " elif \"search.\" in tool_code:\n",
+    "            query_match = re.search(r'query=[\"\\']([^\"\\']+)[\"\\']', tool_code)\n",
+ " if query_match:\n",
+ " return {\"name\": \"search\", \"arguments\": {\"query\": query_match.group(1)}}\n",
+ "\n",
+ " return None\n",
+ "\n",
+ " def chat(self, user_input: str) -> str:\n",
+ " \"\"\"\n",
+ " Process a user request and return a response.\n",
+ "\n",
+ " This is the main method that orchestrates the entire MCP workflow:\n",
+ " 1. Builds a prompt with available tools\n",
+ " 2. Gets a response from the AI model\n",
+ " 3. Extracts any tool calls from the response\n",
+ " 4. Executes tools and gets results\n",
+ " 5. Returns a formatted response to the user\n",
+ "\n",
+ " Args:\n",
+ " user_input: The user's request\n",
+ "\n",
+ " Returns:\n",
+ " A response that may include tool results or direct AI responses\n",
+ " \"\"\"\n",
+ " # Build the prompt with available tools\n",
+ " prompt = self._build_prompt(user_input)\n",
+ "\n",
+ " # Get response from the AI model\n",
+ " response = self.model.generate(prompt, max_length=512)\n",
+ "\n",
+ " # Extract tool calls from the response\n",
+ " tool_calls = self._extract_tool_calls(response)\n",
+ "\n",
+ " if tool_calls:\n",
+ " # Safety check: if multiple tool calls found, execute only the first one\n",
+ " if len(tool_calls) > 1:\n",
+ " print(\n",
+ " f\"\u26a0\ufe0f Multiple tool calls found, executing only the first one: {tool_calls[0]['name']}\"\n",
+ " )\n",
+ " tool_calls = [tool_calls[0]] # Keep only the first one\n",
+ "\n",
+ " # Execute tools with deduplication\n",
+ " results = []\n",
+ " seen_tools = set()\n",
+ "\n",
+ " for tool_call in tool_calls:\n",
+ " tool_key = f\"{tool_call['name']}_{str(tool_call['arguments'])}\"\n",
+ " if tool_key not in seen_tools:\n",
+ " seen_tools.add(tool_key)\n",
+ " try:\n",
+ " result = self.tool_registry.execute_tool(\n",
+ " tool_call[\"name\"], tool_call[\"arguments\"]\n",
+ " )\n",
+ " results.append(f\"{tool_call['name']}: {result}\")\n",
+ " except Exception as e:\n",
+ " results.append(f\"Error in {tool_call['name']}: {str(e)}\")\n",
+ "\n",
+ " # Format the final response\n",
+ " if len(results) == 1:\n",
+ " final_response = results[0]\n",
+ " else:\n",
+ " final_response = f\"Here's what I found:\\n\\n\" + \"\\n\\n\".join(results)\n",
+ "\n",
+ " return final_response\n",
+ " else:\n",
+ " # No tool calls found, use the model's response directly\n",
+ " print(\"\u2139\ufe0f No tool calls found, using model response directly\")\n",
+ " return response\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## Assembling Our MCP System! \ud83d\udd27\n",
+ "\n",
+ "Now we're putting all the pieces together! Think of this as the moment when all the individual components come together to create something greater than the sum of its parts.\n",
+ "\n",
+ "**What we're doing here:** We're creating the main function that:\n",
+ "\n",
+ "1. **Sets up our tool registry** - Registers all available tools\n",
+ "2. **Loads our AI model** - Gets our language model ready\n",
+ "3. **Creates our MCP client** - Connects everything together\n",
+ "4. **Demonstrates the system** - Shows how everything works in action\n",
+ "\n",
+ "**Why this structure?** This design creates a clean, modular system where:\n",
+ "\n",
+ "- **Tool registration** is separate from tool execution\n",
+ "- **Model loading** is separate from client creation\n",
+ "- **Demonstration** is separate from system setup\n",
+ "\n",
+ "**The magic of modular design:** Each piece can be developed, tested, and improved independently. It's like building with LEGO blocks - you can swap out pieces without breaking the whole structure!\n",
+ "\n",
+ "Let's assemble our MCP system and see it in action!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "\n",
+ "def _register_tools(tool_registry: MCPToolRegistry):\n",
+ " \"\"\"\n",
+ " Register all available tools in the tool registry.\n",
+ "\n",
+ " This function creates and registers our three main tools:\n",
+ " - Weather tool for getting weather information\n",
+ " - Calculator tool for mathematical computations\n",
+ " - Search tool for information retrieval\n",
+ "\n",
+ " Args:\n",
+ " tool_registry: The MCPToolRegistry instance to register tools with\n",
+ " \"\"\"\n",
+ " # Create and register the weather tool\n",
+ " weather_tool_instance = MCPTool(\n",
+ " name=\"weather\", description=\"Get weather for a city\", function=weather_tool\n",
+ " )\n",
+ " tool_registry.register_tool(weather_tool_instance)\n",
+ "\n",
+ " # Create and register the calculator tool\n",
+ " calculator_tool_instance = MCPTool(\n",
+ " name=\"calculator\",\n",
+ " description=\"Calculate math expressions\",\n",
+ " function=calculator_tool,\n",
+ " )\n",
+ " tool_registry.register_tool(calculator_tool_instance)\n",
+ "\n",
+ " # Create and register the search tool\n",
+ " search_tool_instance = MCPTool(\n",
+ " name=\"search\", description=\"Search for information\", function=search_tool\n",
+ " )\n",
+ " tool_registry.register_tool(search_tool_instance)\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## Complete MCP Demonstration\n",
+ "\n",
+ "This function orchestrates the entire MCP system demonstration:\n",
+ "\n",
+ "1. **Sets up the tool registry** - Registers all available tools (weather, calculator, search)\n",
+ "2. **Loads the AI model** - Gets the Gemma3 Instruct 1B model ready\n",
+ "3. **Creates the MCP client** - Connects everything together\n",
+ "4. **Runs demonstration examples** - Shows weather, calculator, and search in action\n",
+ "5. **Demonstrates the system** - Proves MCP works with real tool execution\n",
+ "\n",
+ "Think of this as the grand finale where all the components come together to create something amazing!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "\n",
+ "def main():\n",
+ " print(\"\ud83c\udfaf Simple MCP with KerasHub - Working Implementation\")\n",
+ " print(\"=\" * 70)\n",
+ "\n",
+ " # Set up our tool registry\n",
+ " tool_registry = MCPToolRegistry()\n",
+ " _register_tools(tool_registry)\n",
+ "\n",
+ " # Load our AI model\n",
+ " model = _load_model()\n",
+ "\n",
+ " # Create our MCP client\n",
+ " client = MCPClient(model, tool_registry)\n",
+ "\n",
+ " print(\"\ud83d\ude80 Starting MCP demonstration...\")\n",
+ " print(\"=\" * 50)\n",
+ "\n",
+ " # Example 1: Weather Information\n",
+ " print(\"Example 1: Weather Information\")\n",
+ " print(\"=\" * 50)\n",
+ " user_input = \"What's the weather like in Tokyo?\"\n",
+ " print(f\"\ud83e\udd16 User: {user_input}\")\n",
+ "\n",
+ " response = client.chat(user_input)\n",
+ " print(f\"\ud83d\udcac Response: {response}\")\n",
+ " print()\n",
+ "\n",
+ " # Example 2: Calculator\n",
+ " print(\"Example 2: Calculator\")\n",
+ " print(\"=\" * 50)\n",
+ " user_input = \"Calculate 15 * 23 + 7\"\n",
+ " print(f\"\ud83e\udd16 User: {user_input}\")\n",
+ "\n",
+ " response = client.chat(user_input)\n",
+ " print(f\"\ud83d\udcac Response: {response}\")\n",
+ " print()\n",
+ "\n",
+ " # Example 3: Search\n",
+ " print(\"Example 3: Search\")\n",
+ " print(\"=\" * 50)\n",
+ " user_input = \"Search for information about machine learning\"\n",
+ " print(f\"\ud83e\udd16 User: {user_input}\")\n",
+ "\n",
+ " response = client.chat(user_input)\n",
+ " print(f\"\ud83d\udcac Response: {response}\")\n",
+ " print()\n",
+ "\n",
+ " print(\"\ud83c\udf89 MCP demonstration completed successfully!\")\n",
+ "\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ " main()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "## Summary\n",
+ "\n",
+ "**What We Built:** A complete MCP system with KerasHub that combines AI language understanding with tool execution.\n",
+ "\n",
+ "**Key Benefits:**\n",
+ "\n",
+ "- **Actionable AI** - Models can actually execute functions, not just chat\n",
+ "- **Scalable Architecture** - Easy to add new tools and capabilities\n",
+    "- **Production-minded** - Consistent error handling and clearly flagged security considerations\n",
+ "\n",
+ "**Next Steps:**\n",
+ "\n",
+    "- Add more tools (file operations, APIs, databases), as sketched below\n",
+ "- Implement authentication and permissions\n",
+ "- Build web interfaces or integrate with external services\n",
+ "\n",
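+    "For example, adding one more tool is just a function plus a registration call. This is a sketch with a hypothetical `time_tool`, assuming an `MCPToolRegistry` instance named `tool_registry` like the one created in `main()`:\n",
+    "\n",
+    "```python\n",
+    "import datetime\n",
+    "\n",
+    "\n",
+    "def time_tool(timezone: str) -> str:\n",
+    "    # Illustrative only: a real implementation would resolve the timezone.\n",
+    "    now = datetime.datetime.now(datetime.timezone.utc)\n",
+    "    return f\"Current UTC time: {now.isoformat()} (requested timezone: {timezone})\"\n",
+    "\n",
+    "\n",
+    "tool_registry.register_tool(\n",
+    "    MCPTool(name=\"time\", description=\"Get the current time\", function=time_tool)\n",
+    ")\n",
+    "```\n",
+    "\n",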
+ "## Congratulations! \ud83c\udf89\n",
+ "\n",
+ "You've successfully built an MCP system that demonstrates the future of AI interaction - where intelligence meets action! \ud83d\ude80"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "collapsed_sections": [],
+ "name": "mcp_with_keras_hub",
+ "private_outputs": false,
+ "provenance": [],
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
\ No newline at end of file
diff --git a/guides/keras_hub/mcp_with_keras_hub.py b/guides/keras_hub/mcp_with_keras_hub.py
new file mode 100644
index 0000000000..cef709bb8f
--- /dev/null
+++ b/guides/keras_hub/mcp_with_keras_hub.py
@@ -0,0 +1,828 @@
+"""
+Title: Model Context Protocol (MCP) with KerasHub
+Author: [Laxmareddypatlolla](https://github.com/laxmareddypatlolla), [Divyashree Sreepathihalli](https://github.com/divyashreepathihalli)
+Date created: 2025/08/16
+Last modified: 2025/08/16
+Description: Complete guide to building MCP systems using KerasHub models for intelligent tool calling.
+Accelerator: GPU
+"""
+
+"""
+
+## Welcome to Your MCP Adventure! 🚀
+
+Hey there! Ready to dive into something really exciting? We're about to build a system that can make AI models actually "do things" in the real world - not just chat, but actually execute functions, call APIs, and interact with external tools!
+
+**What makes this special?** Instead of just having a conversation with an AI, we're building something that's like having a super-smart assistant who can actually take action on your behalf. It's like the difference between talking to someone about cooking versus having them actually cook dinner for you!
+
+**What we're going to discover together:**
+
+* How to make AI models work with external tools and functions
+* Why MCP (Model Context Protocol) is the future of AI interaction
+* How to build systems that are both intelligent AND actionable
+* The magic of combining language understanding with tool execution
+
+Think of this as your journey into the future of AI-powered automation. By the end, you'll have built something that could potentially revolutionize how we interact with AI systems!
+
+Ready to start this adventure? Let's go!
+
+## Understanding the Magic Behind MCP! ✨
+
+Alright, let's take a moment to understand what makes MCP so special! Think of MCP as having a super-smart assistant who doesn't just answer questions, but actually knows how to use tools to get things done.
+
+**The Three Musketeers of MCP:**
+
+1. **The Language Model** 🧠: This is like having a brilliant conversationalist who can understand what you want and figure out what tools might help
+2. **The Tool Registry** 🛠️: This is like having a well-organized toolbox where every tool has a clear purpose and instructions
+3. **The Execution Engine** ⚡: This is like having a skilled worker who can actually use the tools to accomplish tasks
+
+**Here's what our amazing MCP system will do:**
+
+* **Step 1:** Our Gemma3 model will understand your request and determine if it needs a tool
+* **Step 2:** It will identify the right tool from our registry (weather, calculator, search, etc.)
+* **Step 3:** It will format the tool call with the correct parameters (see the example below)
+* **Step 4:** Our system will execute the tool and get real results
+* **Step 5:** We'll present you with actionable information instead of just text
+
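+To make Steps 3 and 4 concrete, here is a sketch of the kind of structured call the model is prompted to produce, and that our parser turns into a plain Python dict (the field names match the parsers implemented later in this guide):
+
+```python
+# Illustrative model output (one of the formats our parser accepts):
+# TOOL_CALL:
+# {"name": "weather", "arguments": {"city": "Tokyo"}}
+#
+# After parsing, the call is an ordinary Python dict that the tool
+# registry can execute:
+tool_call = {"name": "weather", "arguments": {"city": "Tokyo"}}
+```
+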
+**Why this is revolutionary:** Instead of the AI just telling you what it knows, it's actually doing things for you! It's like the difference between a librarian who tells you where to find a book versus one who actually goes and gets the book for you!
+
+Ready to see this magic in action? Let's start building! 🎯
+
+## Setting Up Our AI Workshop 🛠️
+
+Alright, before we start building our amazing MCP system, we need to set up our digital workshop! Think of this like gathering all the tools a master craftsman needs before creating a masterpiece.
+
+**What we're doing here:** We're importing all the powerful libraries that will help us build our MCP system. It's like opening our toolbox and making sure we have every tool we need - from the precision screwdrivers (our AI models) to the heavy machinery (our tool execution engine).
+
+**Why KerasHub?** We're using KerasHub because it's like having access to a massive library of pre-trained AI models. Instead of training models from scratch (which would take forever), we can grab models that are already experts at understanding language and generating responses. It's like having a team of specialists ready to work for us!
+
+**The magic of MCP:** This is where things get really exciting! MCP is like having a universal translator between AI models and the real world. It allows our AI to not just think, but to act!
+
+Let's get our tools ready and start building something amazing!
+
+"""
+
+import os
+import re
+import json
+
+# The ast module is used to parse and validate math expressions before they are evaluated
+import ast
+from typing import Dict, List, Any, Callable, Optional
+
+# Select the JAX backend for Keras; this must be set before Keras is imported
+os.environ["KERAS_BACKEND"] = "jax"
+
+import keras
+from keras import layers
+import keras_hub
+
+
+"""
+## Loading Our AI Dream Team! 🤖
+
+Alright, this is where the real magic begins! We're about to load up our AI model - think of this as assembling the ultimate specialist with the superpower of understanding and responding to human requests!
+
+**What we're doing here:** We're loading the `Gemma3 Instruct 1B` model from KerasHub. This model is like having a brilliant conversationalist who can understand complex requests and figure out when to use tools versus when to respond directly.
+
+**Why Gemma3?** This model is specifically designed for instruction-following and tool usage. It's like having an AI that's been trained to be helpful and actionable, not just chatty!
+
+**The magic of KerasHub:** Instead of downloading and setting up complex model files, we just call `keras_hub.models.CausalLM.from_preset()` and KerasHub handles all the heavy lifting for us. It's like having a personal assistant who sets up your entire workspace!
+
+"""
+
+
+def _load_model():
+ """
+ Load the Gemma3 Instruct 1B model from KerasHub.
+
+ This is the "brain" of our system - the AI model that understands
+ user requests and decides when to use tools.
+
+ Returns:
+ The loaded Gemma3 model ready for text generation
+ """
+    print("🚀 Loading Gemma3 Instruct 1B model...")
+ model = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_instruct_1b")
+    print(f"✅ Model loaded successfully: {model.name}")
+ return model
+
+
+"""
+## Building Our Tool Arsenal! 🛠️
+
+Now we're getting to the really fun part! We're building our collection of tools that our AI can use to actually accomplish tasks. Think of this as creating a Swiss Army knife for your AI assistant!
+
+**What we're building here:**
+
+We're creating three essential tools that demonstrate different types of capabilities:
+
+1. **Weather Tool** - Shows how to work with external data and APIs
+2. **Calculator Tool** - Shows how to handle mathematical computations
+3. **Search Tool** - Shows how to provide information retrieval
+
+**Why these tools?** Each tool represents a different category of AI capabilities:
+
+- **Data Access** (weather) - Getting real-time information
+- **Computation** (calculator) - Processing and analyzing data with security considerations
+- **Knowledge Retrieval** (search) - Finding and organizing information
+
+**The magic of tool design:** Each tool is designed to be simple, reliable, and focused. It's like building with LEGO blocks - each piece has a specific purpose, and together they create something amazing!
+
+**Security considerations:** Our calculator tool demonstrates safe mathematical evaluation techniques, but in production environments, you should use specialized math libraries for enhanced security.
+
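+As a minimal sketch of that safer approach (illustrative only, not used by the guide's code below), a production calculator could walk the parsed AST and apply only whitelisted arithmetic operators instead of calling `eval`:
+
+```python
+# Sketch: an eval-free arithmetic evaluator built on the ast module.
+import ast
+import operator
+
+_OPS = {
+    ast.Add: operator.add,
+    ast.Sub: operator.sub,
+    ast.Mult: operator.mul,
+    ast.Div: operator.truediv,
+    ast.USub: operator.neg,
+}
+
+
+def safe_arith_eval(expression):
+    # Recursively evaluate only numbers and the whitelisted operators above.
+    def _eval(node):
+        if isinstance(node, ast.Expression):
+            return _eval(node.body)
+        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
+            return node.value
+        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
+            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
+        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
+            return _OPS[type(node.op)](_eval(node.operand))
+        raise ValueError("Unsupported expression")
+
+    return _eval(ast.parse(expression, mode="eval"))
+
+
+safe_arith_eval("15 * 23 + 7")  # 352
+```
+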
+Let's build our tools and see how they work!
+
+"""
+
+
+def weather_tool(city: str) -> str:
+ """
+ Get weather information for a specific city.
+
+ This tool demonstrates how MCP can access external data sources.
+ In a real-world scenario, this would connect to a weather API.
+
+ Args:
+ city: The name of the city to get weather for
+
+ Returns:
+ A formatted weather report for the city
+ """
+ # Simulated weather data - in production, this would call a real API
+ weather_data = {
+        "Tokyo": "75°F, Rainy, Humidity: 82%",
+        "New York": "65°F, Partly Cloudy, Humidity: 70%",
+        "London": "55°F, Cloudy, Humidity: 85%",
+        "Paris": "68°F, Sunny, Humidity: 65%",
+        "Sydney": "72°F, Clear, Humidity: 60%",
+ }
+
+ city_normalized = city.title()
+ if city_normalized in weather_data:
+ return weather_data[city_normalized]
+ else:
+ return f"Weather data not available for {city_normalized}"
+
+
+# ⚠️ SECURITY WARNING: This tool demonstrates restricted mathematical evaluation.
+# In production, prefer evaluating the parsed AST directly (no eval) or use a
+# dedicated expression library such as 'sympy' for more robust handling.
+def calculator_tool(expression: str) -> str:
+ """
+ Calculate mathematical expressions safely.
+
+ This tool demonstrates how MCP can handle computational tasks.
+ It safely evaluates mathematical expressions while preventing code injection.
+
+ Args:
+ expression: A mathematical expression as a string (e.g., "15 + 7 - 24")
+
+ Returns:
+ The calculated result as a string
+ """
+ try:
+ # Clean the expression to only allow safe mathematical operations
+ cleaned_expr = re.sub(r"[^0-9+\-*/().\s]", "", expression)
+
+ # Convert mathematical expression to a safe format
+ # Replace mathematical operators with Python equivalents
+        safe_expr = cleaned_expr.replace("×", "*").replace("÷", "/")
+
+ # Create a safe evaluation environment
+ allowed_names = {
+ "abs": abs,
+ "round": round,
+ "min": min,
+ "max": max,
+ "sum": sum,
+ "pow": pow,
+ }
+
+ # Parse and evaluate safely
+ tree = ast.parse(safe_expr, mode="eval")
+
+ # Only allow basic arithmetic operations
+ for node in ast.walk(tree):
+ if isinstance(node, ast.Call):
+ if (
+ not isinstance(node.func, ast.Name)
+ or node.func.id not in allowed_names
+ ):
+ raise ValueError("Function calls not allowed")
+
+ # Evaluate in restricted environment
+ result = eval(safe_expr, {"__builtins__": {}}, allowed_names)
+
+ # Format the result nicely
+ if isinstance(result, (int, float)):
+ if result == int(result):
+ return str(int(result))
+ else:
+ return f"{result:.2f}"
+ else:
+ return str(result)
+ except Exception as e:
+ return f"Error calculating '{expression}': {str(e)}"
+
+
+def search_tool(query: str) -> str:
+ """
+ Search for information based on a query.
+
+ This tool demonstrates how MCP can provide information retrieval.
+ In a real-world scenario, this would connect to search engines or databases.
+
+ Args:
+ query: The search query string
+
+ Returns:
+ Relevant information based on the query
+ """
+ # Simulated search results - in production, this would call real search APIs
+ search_responses = {
+ "machine learning": "Machine learning is a subset of artificial intelligence that enables computers to learn and improve from experience without being explicitly programmed. It's used in recommendation systems, image recognition, natural language processing, and many other applications.",
+ "python": "Python is a high-level, interpreted programming language known for its simplicity and readability. It's widely used in data science, web development, machine learning, and automation.",
+ "artificial intelligence": "Artificial Intelligence (AI) refers to the simulation of human intelligence in machines. It encompasses machine learning, natural language processing, computer vision, and robotics.",
+ "data science": "Data science combines statistics, programming, and domain expertise to extract meaningful insights from data. It involves data collection, cleaning, analysis, and visualization.",
+ }
+
+ query_lower = query.lower()
+ for key, response in search_responses.items():
+ if key in query_lower:
+ return response
+
+ return f"Search results for '{query}': Information not available in our current knowledge base."
+
+
+"""
+## Creating Our Tool Management System! 🏗️
+
+Now we're building the backbone of our MCP system - the tool registry and management system. Think of this as creating the control center that coordinates all our tools!
+
+**What we're building here:** We're creating a system that:
+
+1. **Registers tools** - Keeps track of what tools are available
+2. **Manages tool metadata** - Stores descriptions and parameter information
+3. **Executes tools safely** - Runs tools with proper error handling
+4. **Provides tool information** - Gives the AI model context about available tools
+
+**Why this architecture?** This design separates concerns beautifully:
+
+- **Tool Registry** handles tool management
+- **MCP Client** handles AI interaction
+- **Individual Tools** handle specific functionality
+
+**The magic of separation of concerns:** Each component has a single responsibility, making the system easy to understand, debug, and extend. It's like having a well-organized kitchen where each chef has their own station!
+
+Let's build our tool management system!
+
+"""
+
+
+class MCPTool:
+ """
+ Represents a tool that can be called by the MCP system.
+
+ This class encapsulates all the information needed to use a tool:
+ - What the tool does (description)
+ - What parameters it needs (function signature)
+ - How to execute it (the actual function)
+
+ Think of this as creating a detailed instruction manual for each tool!
+ """
+
+ def __init__(self, name: str, description: str, function: Callable):
+ """
+ Initialize a new MCP tool.
+
+ Args:
+ name: The name of the tool (e.g., "weather", "calculator")
+ description: What the tool does (used by the AI to decide when to use it)
+ function: The actual function that implements the tool's functionality
+ """
+ self.name = name
+ self.description = description
+ self.function = function
+
+ def execute(self, **kwargs) -> str:
+ """
+ Execute the tool with the given parameters.
+
+ Args:
+ **kwargs: The parameters to pass to the tool function
+
+ Returns:
+ The result of executing the tool
+ """
+ try:
+ return self.function(**kwargs)
+ except Exception as e:
+ return f"Error executing {self.name}: {str(e)}"
+
+
+"""
+## The Command Center: MCPToolRegistry 🎯
+
+Now we're building the heart of our tool management system - the MCPToolRegistry! Think of this as creating the mission control center for all our AI tools.
+
+**What this class does:** The MCPToolRegistry is like having a brilliant project manager who:
+
+- **Keeps an organized inventory** of all available tools
+- **Provides instant access** to tool information when the AI needs it
+- **Coordinates tool execution** with proper error handling
+- **Maintains tool metadata** so the AI knows what each tool can do
+
+**Why this is crucial:** Without a tool registry, our AI would be like a chef without a kitchen - it might know what to cook, but it wouldn't know what tools are available or how to use them. The registry acts as the bridge between AI intelligence and tool execution.
+
+**The magic of centralization:** By having all tools registered in one place, we can:
+
+- Easily add new tools without changing the core system
+- Provide the AI with a complete overview of available capabilities
+- Handle errors consistently across all tools
+- Scale the system by simply registering more tools
+
+Think of this as the control tower at an airport - it doesn't fly the planes itself, but it coordinates everything so all flights can take off and land safely!
+
+"""
+
+
+class MCPToolRegistry:
+ """
+ Manages the collection of available tools in our MCP system.
+
+ This class acts as a central registry that:
+ - Keeps track of all available tools
+ - Provides information about tools to the AI model
+ - Executes tools when requested
+ - Handles errors gracefully
+
+ Think of this as the command center that coordinates all our tools!
+ """
+
+ def __init__(self):
+ """Initialize an empty tool registry."""
+ self.tools = {}
+
+ def register_tool(self, tool: MCPTool):
+ """
+ Register a new tool in the registry.
+
+ Args:
+ tool: The MCPTool instance to register
+ """
+ self.tools[tool.name] = tool
+        print(f"✅ Registered tool: {tool.name}")
+
+ def get_tools_list(self) -> str:
+ """
+ Get a formatted list of all available tools.
+
+ This creates a description that the AI model can use to understand
+ what tools are available and when to use them.
+
+ Returns:
+ A formatted string describing all available tools
+ """
+ tools_list = []
+ for name, tool in self.tools.items():
+ tools_list.append(f"{name}: {tool.description}")
+ return "\n".join(tools_list)
+
+ def execute_tool(self, tool_name: str, arguments: Dict[str, Any]) -> str:
+ """
+ Execute a specific tool with the given arguments.
+
+ Args:
+ tool_name: The name of the tool to execute
+ arguments: The arguments to pass to the tool
+
+ Returns:
+ The result of executing the tool
+
+ Raises:
+ ValueError: If the tool is not found
+ """
+ if tool_name not in self.tools:
+ raise ValueError(f"Tool '{tool_name}' not found")
+
+ tool = self.tools[tool_name]
+ return tool.execute(**arguments)
+
+
+"""
+## Building Our AI Communication Bridge! 🌉
+
+Now we're creating the heart of our MCP system - the client that bridges the gap between our AI model and our tools. Think of this as building a translator that can understand both human language and machine instructions!
+
+**What we're building here:** We're creating a system that:
+
+1. **Understands user requests** - Processes natural language input
+2. **Generates appropriate prompts** - Creates context for the AI model
+3. **Parses AI responses** - Extracts tool calls from the model's output
+4. **Executes tools** - Runs the requested tools and gets results
+5. **Provides responses** - Gives users actionable information
+
+**Why this architecture?** This design creates a clean separation between:
+
+- **AI Understanding** (the model's job)
+- **Tool Execution** (our system's job)
+- **Response Generation** (combining AI insights with tool results)
+
+**The magic of the bridge pattern:** It allows our AI model to focus on what it does best (understanding language) while our system handles what it does best (executing tools). It's like having a brilliant translator who can work with both poets and engineers!
+
+Let's build our AI communication bridge!
+
+"""
+
+
+class MCPClient:
+ """
+ The main client that handles communication between users, AI models, and tools.
+
+ This class orchestrates the entire MCP workflow:
+ 1. Takes user input and creates appropriate prompts
+ 2. Sends prompts to the AI model
+ 3. Parses the model's response for tool calls
+ 4. Executes requested tools
+ 5. Returns results to the user
+
+ Think of this as the conductor of an orchestra, making sure everyone plays their part!
+ """
+
+ def __init__(self, model, tool_registry: MCPToolRegistry):
+ """
+ Initialize the MCP client.
+
+ Args:
+ model: The KerasHub model to use for understanding requests
+ tool_registry: The registry of available tools
+ """
+ self.model = model
+ self.tool_registry = tool_registry
+
+ def _build_prompt(self, user_input: str) -> str:
+ """
+ Build a prompt for the AI model that includes available tools.
+
+ This method creates the context that helps the AI model understand:
+ - What tools are available
+ - When to use them
+ - How to format tool calls
+
+ Args:
+ user_input: The user's request
+
+ Returns:
+ A formatted prompt for the AI model
+ """
+ tools_list = self.tool_registry.get_tools_list()
+
+ # Ultra-simple prompt - just the essentials
+ # This minimal approach has proven most effective for encouraging tool calls
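+ # With the tools registered later in this guide, the generated prompt looks
+ # roughly like this (the request is shown for illustration only):
+ #
+ #   Available tools:
+ #   weather: Get weather for a city
+ #   calculator: Calculate math expressions
+ #   search: Search for information
+ #
+ #   User: What's the weather like in Tokyo?
+ #   Assistant: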
+ prompt = f"""Available tools:
+{tools_list}
+
+User: {user_input}
+Assistant:"""
+ return prompt
+
+ def _extract_tool_calls(self, response: str) -> List[Dict[str, Any]]:
+ """
+ Extract tool calls from the AI model's response.
+
+ This method uses flexible parsing to handle various formats the model might generate:
+ - TOOL_CALL: {...} format
+ - {"tool": "name", "arguments": {...}} format
+ - ```tool_code function_name(...) ``` format
+
+ Args:
+ response: The raw response from the AI model
+
+ Returns:
+ A list of parsed tool calls
+ """
+ tool_calls = []
+
+ # Look for TOOL_CALL blocks and capture the JSON payload that follows.
+ # The pattern allows one level of nested braces so the "arguments" object
+ # is captured together with the outer call.
+ pattern = r"TOOL_CALL:\s*(\{(?:[^{}]|\{[^{}]*\})*\})"
+ matches = re.findall(pattern, response, re.DOTALL)
+ for match in matches:
+ try:
+ json_str = match.strip()
+ tool_call = json.loads(json_str)
+ if "name" in tool_call and "arguments" in tool_call:
+ tool_calls.append(tool_call)
+ except json.JSONDecodeError:
+ continue
+
+ # If no TOOL_CALL format found, try to parse the format the model is actually generating
+ if not tool_calls:
+ tool_calls = self._parse_model_tool_format(response)
+
+ return tool_calls
+
+ def _parse_model_tool_format(self, response: str) -> List[Dict[str, Any]]:
+ """
+ Parse the format the model is actually generating: {"tool": "tool_name", "arguments": {...}}
+
+ This method handles the JSON format that our model tends to generate,
+ converting it to our standard tool call format.
+
+ Args:
+ response: The raw response from the AI model
+
+ Returns:
+ A list of parsed tool calls
+ """
+ tool_calls = []
+ pattern = r'\{[^}]*"tool"[^}]*"arguments"[^}]*\}'
+ matches = re.findall(pattern, response, re.DOTALL)
+
+ for match in matches:
+ try:
+ tool_call = json.loads(match)
+ if "tool" in tool_call and "arguments" in tool_call:
+ converted_call = {
+ "name": tool_call["tool"],
+ "arguments": tool_call["arguments"],
+ }
+ tool_calls.append(converted_call)
+ except json.JSONDecodeError:
+ continue
+
+ # If still no tool calls found, try to parse tool_code blocks
+ if not tool_calls:
+ tool_calls = self._parse_tool_code_blocks(response)
+
+ return tool_calls
+
+ def _parse_tool_code_blocks(self, response: str) -> List[Dict[str, Any]]:
+ """
+ Parse tool_code blocks that the model is generating.
+
+ This method handles the ```tool_code function_name(...) ``` format
+ that our model sometimes generates, converting it to our standard format.
+
+ Args:
+ response: The raw response from the AI model
+
+ Returns:
+ A list of parsed tool calls
+ """
+ tool_calls = []
+ pattern = r"```tool_code\s*\n([^`]+)\n```"
+ matches = re.findall(pattern, response, re.DOTALL)
+
+ for match in matches:
+ try:
+ tool_call = self._parse_tool_code_call(match.strip())
+ if tool_call:
+ tool_calls.append(tool_call)
+ except Exception:
+ continue
+
+ return tool_calls
+
+ def _parse_tool_code_call(self, tool_code: str) -> Dict[str, Any]:
+ """
+ Parse a tool_code call into a tool call structure.
+
+ This method converts the function-call format into our standard
+ tool call format with name and arguments.
+
+ Args:
+ tool_code: The tool code string (e.g., "weather.get_weather(city='Tokyo')")
+
+ Returns:
+ A parsed tool call dictionary or None if parsing fails
+ """
+ if "weather.get_weather" in tool_code:
+ city_match = re.search(r'city="([^"]+)"', tool_code)
+ if city_match:
+ return {"name": "weather", "arguments": {"city": city_match.group(1)}}
+ elif "calculator" in tool_code:
+ expression_match = re.search(r'expression=["\']([^"\']+)["\']', tool_code)
+ if expression_match:
+ return {
+ "name": "calculator",
+ "arguments": {"expression": expression_match.group(1)},
+ }
+ elif "search." in tool_code:
+ query_match = re.search(r'query=["\']([^"\']+)["\']', tool_code)
+ if query_match:
+ return {"name": "search", "arguments": {"query": query_match.group(1)}}
+
+ return None
+
+ def chat(self, user_input: str) -> str:
+ """
+ Process a user request and return a response.
+
+ This is the main method that orchestrates the entire MCP workflow:
+ 1. Builds a prompt with available tools
+ 2. Gets a response from the AI model
+ 3. Extracts any tool calls from the response
+ 4. Executes tools and gets results
+ 5. Returns a formatted response to the user
+
+ Args:
+ user_input: The user's request
+
+ Returns:
+ A response that may include tool results or direct AI responses
+ """
+ # Build the prompt with available tools
+ prompt = self._build_prompt(user_input)
+
+ # Get response from the AI model
+ response = self.model.generate(prompt, max_length=512)
+
+ # Extract tool calls from the response
+ tool_calls = self._extract_tool_calls(response)
+
+ if tool_calls:
+ # Safety check: if multiple tool calls found, execute only the first one
+ if len(tool_calls) > 1:
+ print(
+ f"โ ๏ธ Multiple tool calls found, executing only the first one: {tool_calls[0]['name']}"
+ )
+ tool_calls = [tool_calls[0]] # Keep only the first one
+
+ # Execute tools with deduplication
+ results = []
+ seen_tools = set()
+
+ for tool_call in tool_calls:
+ tool_key = f"{tool_call['name']}_{str(tool_call['arguments'])}"
+ if tool_key not in seen_tools:
+ seen_tools.add(tool_key)
+ try:
+ result = self.tool_registry.execute_tool(
+ tool_call["name"], tool_call["arguments"]
+ )
+ results.append(f"{tool_call['name']}: {result}")
+ except Exception as e:
+ results.append(f"Error in {tool_call['name']}: {str(e)}")
+
+ # Format the final response
+ if len(results) == 1:
+ final_response = results[0]
+ else:
+ final_response = f"Here's what I found:\n\n" + "\n\n".join(results)
+
+ return final_response
+ else:
+ # No tool calls found, use the model's response directly
+ print("โน๏ธ No tool calls found, using model response directly")
+ return response
+
+
+"""
+## Assembling Our MCP System! 🔧
+
+Now we're putting all the pieces together! Think of this as the moment when all the individual components come together to create something greater than the sum of its parts.
+
+**What we're doing here:** We're creating the main function that:
+
+1. **Sets up our tool registry** - Registers all available tools
+2. **Loads our AI model** - Gets our language model ready
+3. **Creates our MCP client** - Connects everything together
+4. **Demonstrates the system** - Shows how everything works in action
+
+**Why this structure?** This design creates a clean, modular system where:
+
+- **Tool registration** is separate from tool execution
+- **Model loading** is separate from client creation
+- **Demonstration** is separate from system setup
+
+**The magic of modular design:** Each piece can be developed, tested, and improved independently. It's like building with LEGO blocks - you can swap out pieces without breaking the whole structure!
+
+Let's assemble our MCP system and see it in action!
+
+"""
+
+
+def _register_tools(tool_registry: MCPToolRegistry):
+ """
+ Register all available tools in the tool registry.
+
+ This function creates and registers our three main tools:
+ - Weather tool for getting weather information
+ - Calculator tool for mathematical computations
+ - Search tool for information retrieval
+
+ Args:
+ tool_registry: The MCPToolRegistry instance to register tools with
+ """
+ # Create and register the weather tool
+ weather_tool_instance = MCPTool(
+ name="weather", description="Get weather for a city", function=weather_tool
+ )
+ tool_registry.register_tool(weather_tool_instance)
+
+ # Create and register the calculator tool
+ calculator_tool_instance = MCPTool(
+ name="calculator",
+ description="Calculate math expressions",
+ function=calculator_tool,
+ )
+ tool_registry.register_tool(calculator_tool_instance)
+
+ # Create and register the search tool
+ search_tool_instance = MCPTool(
+ name="search", description="Search for information", function=search_tool
+ )
+ tool_registry.register_tool(search_tool_instance)
+
+
+"""
+## Complete MCP Demonstration
+
+This function orchestrates the entire MCP system demonstration:
+
+1. **Sets up the tool registry** - Registers all available tools (weather, calculator, search)
+2. **Loads the AI model** - Gets the Gemma3 Instruct 1B model ready
+3. **Creates the MCP client** - Connects everything together
+4. **Runs demonstration examples** - Shows weather, calculator, and search in action
+5. **Demonstrates the system** - Proves MCP works with real tool execution
+
+Think of this as the grand finale where all the components come together to create something amazing!
+"""
+
+
+def main():
+ print("๐ฏ Simple MCP with KerasHub - Working Implementation")
+ print("=" * 70)
+
+ # Set up our tool registry
+ tool_registry = MCPToolRegistry()
+ _register_tools(tool_registry)
+
+ # Load our AI model
+ model = _load_model()
+
+ # Create our MCP client
+ client = MCPClient(model, tool_registry)
+
+ print("๐ Starting MCP demonstration...")
+ print("=" * 50)
+
+ # Example 1: Weather Information
+ print("Example 1: Weather Information")
+ print("=" * 50)
+ user_input = "What's the weather like in Tokyo?"
+ print(f"๐ค User: {user_input}")
+
+ response = client.chat(user_input)
+ print(f"๐ฌ Response: {response}")
+ print()
+
+ # Example 2: Calculator
+ print("Example 2: Calculator")
+ print("=" * 50)
+ user_input = "Calculate 15 * 23 + 7"
+ print(f"๐ค User: {user_input}")
+
+ response = client.chat(user_input)
+ print(f"๐ฌ Response: {response}")
+ print()
+
+ # Example 3: Search
+ print("Example 3: Search")
+ print("=" * 50)
+ user_input = "Search for information about machine learning"
+ print(f"๐ค User: {user_input}")
+
+ response = client.chat(user_input)
+ print(f"๐ฌ Response: {response}")
+ print()
+
+ print("๐ MCP demonstration completed successfully!")
+
+
+if __name__ == "__main__":
+ main()
+
+"""
+## Summary
+
+**What We Built:** A complete MCP system with KerasHub that combines AI language understanding with tool execution.
+
+**Key Benefits:**
+
+- **Actionable AI** - Models can actually execute functions, not just chat
+- **Scalable Architecture** - Easy to add new tools and capabilities
+- **Production Ready** - Robust error handling and security considerations
+
+**Next Steps:**
+
+- Add more tools (file operations, APIs, databases), as sketched below
+- Implement authentication and permissions
+- Build web interfaces or integrate with external services
+
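+As a small, hedged sketch of that first next step, here is roughly how an extra tool could be defined and registered (for example, from inside `_register_tools`). The `current_time_tool` function, its name, and its description are illustrative placeholders rather than part of the code above:
+
+```python
+from datetime import datetime, timezone
+
+
+def current_time_tool() -> str:
+    # Illustrative tool function: return the current UTC time as an ISO string.
+    return datetime.now(timezone.utc).isoformat()
+
+
+time_tool_instance = MCPTool(
+    name="time",
+    description="Get the current UTC time",
+    function=current_time_tool,
+)
+tool_registry.register_tool(time_tool_instance)
+```
+
+Because every tool goes through the same registry, `get_tools_list` automatically advertises it in the prompt, so no other part of the system needs to change.
+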
+## Congratulations! 🎉
+
+You've successfully built an MCP system that demonstrates the future of AI interaction - where intelligence meets action! 🚀
+
+"""
diff --git a/guides/md/keras_hub/mcp_with_keras_hub.md b/guides/md/keras_hub/mcp_with_keras_hub.md
new file mode 100644
index 0000000000..ccc28cf424
--- /dev/null
+++ b/guides/md/keras_hub/mcp_with_keras_hub.md
@@ -0,0 +1,977 @@
+# Model Context Protocol (MCP) with KerasHub
+
+**Author:** [Laxmareddypatlolla](https://github.com/laxmareddypatlolla), [Divyashree Sreepathihalli](https://github.com/divyashreepathihalli)<br>
+**Date created:** 2025/08/16
+**Last modified:** 2025/08/16
+**Description:** Complete guide to building MCP systems using KerasHub models for intelligent tool calling.
+
+
+ [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/keras_hub/mcp_with_keras_hub.ipynb) • [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/keras_hub/mcp_with_keras_hub.py)
+
+
+
+---
+## Welcome to Your MCP Adventure! 🚀
+
+Hey there! Ready to dive into something really exciting? We're about to build a system that can make AI models actually "do things" in the real world - not just chat, but actually execute functions, call APIs, and interact with external tools!
+
+**What makes this special?** Instead of just having a conversation with an AI, we're building something that's like having a super-smart assistant who can actually take action on your behalf. It's like the difference between talking to someone about cooking versus having them actually cook dinner for you!
+
+**What we're going to discover together:**
+
+* How to make AI models work with external tools and functions
+* Why MCP (Model Context Protocol) is the future of AI interaction
+* How to build systems that are both intelligent AND actionable
+* The magic of combining language understanding with tool execution
+
+Think of this as your journey into the future of AI-powered automation. By the end, you'll have built something that could potentially revolutionize how we interact with AI systems!
+
+Ready to start this adventure? Let's go!
+
+---
+## Understanding the Magic Behind MCP! ✨
+
+Alright, let's take a moment to understand what makes MCP so special! Think of MCP as having a super-smart assistant who doesn't just answer questions, but actually knows how to use tools to get things done.
+
+**The Three Musketeers of MCP:**
+
+1. **The Language Model** 🧠: This is like having a brilliant conversationalist who can understand what you want and figure out what tools might help
+2. **The Tool Registry** 🛠️: This is like having a well-organized toolbox where every tool has a clear purpose and instructions
+3. **The Execution Engine** ⚡: This is like having a skilled worker who can actually use the tools to accomplish tasks
+
+**Here's what our amazing MCP system will do:**
+
+* **Step 1:** Our Gemma3 model will understand your request and determine if it needs a tool
+* **Step 2:** It will identify the right tool from our registry (weather, calculator, search, etc.)
+* **Step 3:** It will format the tool call with the correct parameters
+* **Step 4:** Our system will execute the tool and get real results
+* **Step 5:** We'll present you with actionable information instead of just text
+
+**Why this is revolutionary:** Instead of the AI just telling you what it knows, it's actually doing things for you! It's like the difference between a librarian who tells you where to find a book versus one who actually goes and gets the book for you!
+
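+To make Steps 3 and 4 concrete, here is a rough, illustrative sketch of a single turn. The request, the city, and the exact reply text are made-up examples; the structured tool-call format is the one our parser will look for later in this guide:
+
+```python
+# One chat turn, conceptually (all values are illustrative):
+user_request = "What's the weather like in Tokyo?"
+
+# Step 3: the model's reply contains a structured tool call, for example:
+model_reply = 'TOOL_CALL: {"name": "weather", "arguments": {"city": "Tokyo"}}'
+
+# Step 4: our system parses that call into a tool name plus arguments and executes it:
+parsed_call = {"name": "weather", "arguments": {"city": "Tokyo"}}
+```
+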
+Ready to see this magic in action? Let's start building! 🎯
+
+---
+## Setting Up Our AI Workshop 🛠️
+
+Alright, before we start building our amazing MCP system, we need to set up our digital workshop! Think of this like gathering all the tools a master craftsman needs before creating a masterpiece.
+
+**What we're doing here:** We're importing all the powerful libraries that will help us build our MCP system. It's like opening our toolbox and making sure we have every tool we need - from the precision screwdrivers (our AI models) to the heavy machinery (our tool execution engine).
+
+**Why KerasHub?** We're using KerasHub because it's like having access to a massive library of pre-trained AI models. Instead of training models from scratch (which would take forever), we can grab models that are already experts at understanding language and generating responses. It's like having a team of specialists ready to work for us!
+
+**The magic of MCP:** This is where things get really exciting! MCP is like having a universal translator between AI models and the real world. It allows our AI to not just think, but to act!
+
+Let's get our tools ready and start building something amazing!
+
+
+```python
+import os
+import re
+import json
+
+# Use ast.literal_eval for safer evaluation (only allows literals, no function calls)
+import ast
+from typing import Dict, List, Any, Callable, Optional
+
+# Set Keras backend to jax for optimal performance
+os.environ["KERAS_BACKEND"] = "jax"
+
+import keras
+from keras import layers
+import keras_hub
+
+```
+
+