69 commits
04612c3
Add resource management to MCPAggregator and Agent classes
StreetLamb May 27, 2025
b2ad462
Update MCPAggregator to fetch resources using URI
StreetLamb May 28, 2025
c995ec5
Add utility modules for content and resource handling
StreetLamb May 28, 2025
2d4f3bd
Add resource URI parameter to LLM generation methods in AugmentedLLM …
StreetLamb May 28, 2025
f9161bf
Fix type conversion for resource URI in MCPAggregator
StreetLamb May 28, 2025
6967174
Add basic example to showcase using mcp resources
StreetLamb May 28, 2025
6bbb18d
Refactor resource URI handling in AugmentedLLM and OpenAIAugmentedLLM…
StreetLamb May 29, 2025
d5265a5
Enhance MCP Primitives example to demonstrate resource usage and serv…
StreetLamb May 29, 2025
3b6be04
Add OpenAIConverter class for converting MCP message types to OpenAI …
StreetLamb May 29, 2025
fec2a5a
Add AnthropicConverter for converting MCP message types to Anthropic …
StreetLamb May 29, 2025
fbda14a
Add PromptMessageMultipart class for handling multiple content parts …
StreetLamb May 29, 2025
f8c844a
Add resource_uris parameter to generate_str and related methods in An…
StreetLamb May 29, 2025
efb8108
Add resource_uris parameter to generate methods in BedrockAugmentedLL…
StreetLamb May 29, 2025
6d4d6c6
Add resource handling in AzureAugmentedLLM and implement AzureConvert…
StreetLamb May 29, 2025
38f2b60
Add resource handling and GoogleConverter for multipart message conve…
StreetLamb May 29, 2025
b6db823
Add resource_uris parameter to generate_structured method in OllamaAu…
StreetLamb May 29, 2025
1e0f411
Merge branch 'main' of https://github.com/lastmile-ai/mcp-agent into …
StreetLamb May 29, 2025
551a075
Refactor resource handling in LLM classes to use attached resources i…
StreetLamb May 30, 2025
874537e
Add prompt attachment functionality to LLM classes and update message…
StreetLamb May 30, 2025
9c908aa
Add demo server implementation with resource and prompt handling
StreetLamb May 30, 2025
345fecf
Update README to include prompts in MCP primitives example
StreetLamb May 30, 2025
b034621
Refactor settings in main.py
StreetLamb May 30, 2025
7d9120f
Refactor LLM message handling to integrate PromptMessage support and …
StreetLamb May 31, 2025
0d81b9a
Remove unused settings and health status resources from demo server; …
StreetLamb May 31, 2025
effa673
Refactor and add comments in example
StreetLamb May 31, 2025
c27b588
Refactor assertion in TestAnthropicAugmentedLLM to improve readability
StreetLamb May 31, 2025
66550c1
Add create_prompt method to generate prompt messages from names and r…
StreetLamb Jun 1, 2025
aa23e59
Update README and main.py to reflect changes in resource and prompt r…
StreetLamb Jun 1, 2025
fced30c
Enhance MCPAggregator to return resources alongside tools and prompts…
StreetLamb Jun 1, 2025
e4a256b
Add comprehensive tests for MIME utilities and multipart converters
StreetLamb Jun 1, 2025
332d71d
Refactor resource URI handling to use AnyUrl for improved type safety…
StreetLamb Jun 1, 2025
b096b6e
Fix exception class docstring and update file tags in OpenAIConverter…
StreetLamb Jun 1, 2025
ccdd310
Add comprehensive tests for Azure, Bedrock, and Google multipart conv…
StreetLamb Jun 1, 2025
5da0ec4
Minor code formatting
StreetLamb Jun 1, 2025
d6bc44c
Add tests for generating responses with various input types in Augmen…
StreetLamb Jun 1, 2025
fa96d30
Refactor message conversion methods to use unified mixed message hand…
StreetLamb Jun 1, 2025
11956c5
Refactor message tracing logic in AugmentedLLM to simplify attribute …
StreetLamb Jun 1, 2025
bd65320
Minor formatting
StreetLamb Jun 1, 2025
cbbbd9c
Refactor AzureConverter tests to assert list content structure for te…
StreetLamb Jun 1, 2025
2ba325f
Fix potential issues raised by coderabbitai
StreetLamb Jun 1, 2025
7a8ece7
Refactor URI handling to use str() for better compatibility and clari…
StreetLamb Jun 1, 2025
5b47509
Refactor URI handling in Azure and Google converters to use str() for…
StreetLamb Jun 2, 2025
6242777
Remove unnecessary import of AnyUrl in test_create_fallback_text_with…
StreetLamb Jun 2, 2025
a1f43e0
Add async get_poem tool to retrieve poems based on a topic; fix loggi…
StreetLamb Jun 3, 2025
22242c9
Store active LLM instance in context for MCP sampling callbacks; upda…
StreetLamb Jun 4, 2025
ad3e04a
Implement SamplingHandler for human-in-the-loop sampling requests; re…
StreetLamb Jun 4, 2025
e1eee3a
Refactor human approval workflow in SamplingHandler to include reject…
StreetLamb Jun 4, 2025
cf6f205
Update requirements and lock files to include fastmcp dependency and …
StreetLamb Jun 4, 2025
e81f8b2
Refactor get_poem tool to get_haiku; update sampling logic for haiku …
StreetLamb Jun 4, 2025
26eb541
Refactor demo server to streamline user data retrieval; update main a…
StreetLamb Jun 4, 2025
3601bd0
Merge branch 'main' of https://github.com/lastmile-ai/mcp-agent into …
StreetLamb Jun 24, 2025
8dcbf8b
Remove external FastMCP dependency, update example to use native Fast…
StreetLamb Jun 24, 2025
d17b2c0
Merge branch 'main' into feat/sampling
roman-van-der-krogt Aug 18, 2025
5134ee0
sampling updates
roman-van-der-krogt Aug 22, 2025
1901dba
remove temp files that shouldn't have been checked in
roman-van-der-krogt Aug 27, 2025
2fa0347
fix linting issues
roman-van-der-krogt Aug 28, 2025
4bd0a3d
test fixes & review feedback
roman-van-der-krogt Aug 28, 2025
d39b979
fix linting issue after test updates
roman-van-der-krogt Aug 28, 2025
4f8bf92
Merge branch 'main' into rvdk/sampling
roman-van-der-krogt Aug 29, 2025
882dc29
review comments
roman-van-der-krogt Sep 1, 2025
7176f5d
Merge branch 'main' into rvdk/sampling
roman-van-der-krogt Sep 1, 2025
50470a8
update code comment
roman-van-der-krogt Sep 1, 2025
dfdd558
fix lint errors
roman-van-der-krogt Sep 1, 2025
fd2f1c1
add proxy for temporal based flows
roman-van-der-krogt Sep 2, 2025
553a567
Merge branch 'main' into rvdk/sampling
roman-van-der-krogt Sep 8, 2025
98b1d7e
be consistent with notification handling
roman-van-der-krogt Sep 8, 2025
83ce967
Merge branch 'main' into rvdk/sampling
roman-van-der-krogt Sep 11, 2025
39751c2
cleanups
roman-van-der-krogt Sep 11, 2025
ec3b314
more cleanups
roman-van-der-krogt Sep 11, 2025
4 changes: 1 addition & 3 deletions examples/mcp/mcp_prompts_and_resources/README.md
@@ -21,9 +21,7 @@ This example demonstrates **both resources and prompts**.

- **Resources:**
- `demo://docs/readme`: A sample README file (Markdown)
- `demo://config/settings`: Example configuration settings (JSON)
- `demo://data/users`: Example user data (JSON)
- `demo://status/health`: Dynamic server health/status info (JSON)
- `demo://data/friends`: Example user data (JSON)
- **Prompt:**
- `echo`: A simple prompt that echoes back the provided message

91 changes: 49 additions & 42 deletions examples/mcp/mcp_prompts_and_resources/demo_server.py
@@ -1,54 +1,14 @@
from mcp.server.fastmcp import FastMCP
import datetime
from mcp.types import ModelPreferences, ModelHint, SamplingMessage, TextContent
import json

# Store server start time
SERVER_START_TIME = datetime.datetime.utcnow()

mcp = FastMCP("Resource Demo MCP Server")

# Define some static resources
STATIC_RESOURCES = {
"demo://docs/readme": {
"name": "README",
"description": "A sample README file.",
"content_type": "text/markdown",
"content": "# Demo Resource Server\n\nThis is a sample README resource provided by the demo MCP server.",
},
"demo://data/users": {
"name": "User Data",
"description": "Sample user data in JSON format.",
"content_type": "application/json",
"content": json.dumps(
[
{"id": 1, "name": "Alice"},
{"id": 2, "name": "Bob"},
{"id": 3, "name": "Charlie"},
],
indent=2,
),
},
}


@mcp.resource("demo://docs/readme")
def get_readme():
"""Provide the README file content."""
meta = STATIC_RESOURCES["demo://docs/readme"]
return meta["content"]


@mcp.resource("demo://data/users")
def get_users():
"""Provide user data."""
meta = STATIC_RESOURCES["demo://data/users"]
return meta["content"]


@mcp.resource("demo://{city}/weather")
def get_weather(city: str) -> str:
"""Provide a simple weather report for a given city."""
return f"It is sunny in {city} today!"
return "# Demo Resource Server\n\nThis is a sample README resource provided by the demo MCP server."


@mcp.prompt()
@@ -60,6 +20,53 @@ def echo(message: str) -> str:
return f"Prompt: {message}"


@mcp.resource("demo://data/friends")
def get_users():
Review comment (Contributor):
The function name get_users() doesn't match its purpose or registration. It's registered as a resource for demo://data/friends and returns friend data, but the function name suggests it retrieves user data. Consider renaming this function to get_friends() to maintain semantic consistency between the function name, its registration path, and the data it returns.

Suggested change:

```diff
-def get_users():
+def get_friends():
```

Spotted by Diamond


"""Provide my friend list."""
return (
json.dumps(
[
{"id": 1, "friend": "Alice"},
]
)
)


@mcp.prompt()
def get_haiku_prompt(topic: str) -> str:
"""Get a haiku prompt about a given topic."""
return f"I am fascinated about {topic}. Can you generate a haiku combining {topic} + my friend name?"


@mcp.tool()
async def get_haiku(topic: str) -> str:
"""Get a haiku about a given topic."""
haiku = await mcp.get_context().session.create_message(
messages=[
SamplingMessage(
role="user",
content=TextContent(
type="text", text=f"Generate a haiku about {topic}."
),
)
],
system_prompt="You are a poet.",
max_tokens=100,
temperature=0.7,
model_preferences=ModelPreferences(
hints=[ModelHint(name="gpt-4o-mini")],
costPriority=0.1,
speedPriority=0.8,
intelligencePriority=0.1,
),
)

if isinstance(haiku.content, TextContent):
return haiku.content.text
else:
return "Haiku generation failed, unexpected content type."


def main():
"""Main entry point for the MCP server."""
mcp.run()
13 changes: 10 additions & 3 deletions examples/mcp/mcp_prompts_and_resources/main.py
@@ -11,6 +11,8 @@
)
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.human_input.handler import console_input_callback


settings = Settings(
execution_engine="asyncio",
@@ -30,7 +32,9 @@

# Settings can either be specified programmatically,
# or loaded from mcp_agent.config.yaml/mcp_agent.secrets.yaml
app = MCPApp(name="mcp_basic_agent") # settings=settings)
app = MCPApp(
name="mcp_basic_agent", human_input_callback=console_input_callback
) # settings=settings)


async def example_usage():
@@ -71,13 +75,16 @@ async def example_usage():
)

llm = await agent.attach_llm(OpenAIAugmentedLLM)
res = await llm.generate_str(
summary = await llm.generate_str(
[
"Summarise what are my prompts and resources?",
*combined_messages,
]
)
logger.info(f"Summary: {res}")
logger.info(f"Summary: {summary}")

haiku = await llm.generate_str("Write me a haiku")
logger.info(f"Haiku: {haiku}")


if __name__ == "__main__":
101 changes: 101 additions & 0 deletions examples/mcp/mcp_sampling/README.md
@@ -0,0 +1,101 @@
# MCP Sampling Example

This example demonstrates how to use **MCP sampling** in an agent application.
It shows how to connect to an MCP server that exposes a tool that uses a sampling request to generate a response.

---

## What is MCP sampling?
Sampling in MCP allows servers to implement agentic behaviors by letting LLM calls occur nested inside other MCP server features.
Following the MCP recommendations, users are prompted to approve both the sampling request itself and the output the LLM produces for it.
More details can be found in the [MCP documentation](https://modelcontextprotocol.io/specification/2025-06-18/client/sampling).
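As a rough mental model, the round-trip works like this: the server emits a sampling request, and the client asks the user for approval before forwarding it to its own LLM. The sketch below uses plain-Python stand-ins — the names and types are illustrative, not the SDK's actual API:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-in for an MCP sampling request; the real SDK
# types carry more fields (model preferences, stop reason, etc.).
@dataclass
class SamplingRequest:
    prompt: str
    system_prompt: str
    max_tokens: int


def handle_sampling(
    request: SamplingRequest,
    approve: Callable[[SamplingRequest], bool],
    call_llm: Callable[[SamplingRequest], str],
) -> str:
    # Per the MCP spec's human-in-the-loop recommendation, the client
    # asks the user before forwarding the request to its LLM.
    if not approve(request):
        return "Sampling request denied by user."
    return call_llm(request)
```

The key point is that the LLM call happens on the *client* side, so the server never needs its own model credentials.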

This example demonstrates sampling using [MCP agent servers](https://github.com/lastmile-ai/mcp-agent/blob/main/examples/mcp_agent_server/README.md).
It is also possible to use sampling when explicitly creating an MCP client. The code for that would look like the following:

```python
settings = ... # MCP agent configuration
registry = ServerRegistry(settings)

@mcp.tool()
async def my_tool(input: str, ctx: Context) -> str:
async with gen_client("my_server", registry, upstream_session=ctx.session) as my_client:
result = await my_client.call_tool("some_tool", {"input": input})
... # etc
```

---

## Example Overview

- **nested_server.py** implements a simple MCP server that uses sampling to generate a haiku about a given topic
- **demo_server.py** implements a simple MCP server exposing an agent that generates haikus using the tool from `nested_server.py`
- **main.py** shows how to:
1. Connect an agent to the demo MCP server, and then
2. Invoke the agent implemented by the demo MCP server, thereby triggering a sampling request.

---

## Architecture

```plaintext
┌────────────────────┐
│ nested_server │──────┐
│ MCP Server │ │
└─────────┬──────────┘ │
│ │
▼ │
┌────────────────────┐ │
│ demo_server │ │
│ MCP Server │ │
└─────────┬──────────┘ │
│ sampling, via user approval
▼ │
┌────────────────────┐ │
│ Agent (Python) │ │
│ + LLM (OpenAI) │◀─────┘
└─────────┬──────────┘
[User/Developer]
```

---

## 1. Setup

Clone the repo and navigate to this example:

```bash
git clone https://github.com/lastmile-ai/mcp-agent.git
cd mcp-agent/examples/mcp/mcp_sampling
```

---

## 2. Run the Agent Example

Run the agent script, which should auto-install all necessary dependencies:

```bash
uv run main.py
```

You should see logs showing:

- The agent connecting to the demo server, and calling the tool
- A request to approve the sampling request; type `approve` to approve (anything else will deny the request)
- A request to approve the result of the sampling request
- The final result of the tool call
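The approval convention described above can be sketched with a small helper (a hypothetical function for illustration — the example's actual prompt handling lives in its `SamplingHandler`):

```python
def resolve_approval(user_reply: str) -> bool:
    # Only the word "approve" approves; anything else denies the
    # sampling request. Surrounding whitespace is ignored, and this
    # sketch assumes case is ignored too.
    return user_reply.strip().lower() == "approve"
```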

---

## References

- [Model Context Protocol (MCP) Introduction](https://modelcontextprotocol.io/introduction)
- [MCP Agent Framework](https://github.com/lastmile-ai/mcp-agent)
- [MCP Server Sampling](https://modelcontextprotocol.io/specification/2025-06-18/client/sampling)

---

This example is a minimal, practical demonstration of how to use **MCP sampling** in agent applications.
120 changes: 120 additions & 0 deletions examples/mcp/mcp_sampling/asyncio/demo_server.py
@@ -0,0 +1,120 @@
"""
A simple workflow server which generates haikus on request using a tool.
"""

import asyncio
import logging

import yaml
from mcp.server.fastmcp import FastMCP

from mcp_agent.app import MCPApp
from mcp_agent.config import Settings, LoggerSettings, MCPSettings, MCPServerSettings, LogPathSettings
from mcp_agent.server.app_server import create_mcp_server_for_app
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.executor.workflow import Workflow, WorkflowResult

# Initialize logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Note: This is purely optional:
# if not provided, a default FastMCP server will be created by MCPApp using create_mcp_server_for_app()
mcp = FastMCP(name="haiku_generation_server", description="Server to generate haikus")


# Create settings explicitly, as we want to use a different configuration from the main app
secrets_file = Settings.find_secrets()
if secrets_file and secrets_file.exists():
with open(secrets_file, "r", encoding="utf-8") as f:
yaml_secrets = yaml.safe_load(f) or {}
openai_secret = yaml_secrets["openai"]


settings = Settings(
Comment on lines +27 to +35

⚠️ Potential issue

Avoid NameError/KeyError when secrets are absent.

openai_secret may be undefined or key missing.

```diff
-secrets_file = Settings.find_secrets()
-if secrets_file and secrets_file.exists():
-    with open(secrets_file, "r", encoding="utf-8") as f:
-        yaml_secrets = yaml.safe_load(f) or {}
-        openai_secret = yaml_secrets["openai"]
+secrets_file = Settings.find_secrets()
+openai_secret = None
+if secrets_file and secrets_file.exists():
+    with open(secrets_file, "r", encoding="utf-8") as f:
+        yaml_secrets = yaml.safe_load(f) or {}
+        openai_secret = yaml_secrets.get("openai")
```

execution_engine="asyncio",
logger=LoggerSettings(
type="file",
level="debug",
path_settings=LogPathSettings(
path_pattern="logs/demo_server-{unique_id}.jsonl",
unique_id="timestamp",
timestamp_format="%Y%m%d_%H%M%S"),
),
mcp=MCPSettings(
servers={
"haiku_server": MCPServerSettings(
command="uv",
args=["run", "nested_server.py"],
description="nested server providing a haiku generator"
)
}
),
openai=openai_secret
)

# Define the MCPApp instance
app = MCPApp(
name="haiku_server",
description="Haiku server",
mcp=mcp,
settings=settings
)

@app.workflow
class HaikuWorkflow(Workflow[str]):
"""
A workflow that generates haikus on request.
"""

@app.workflow_run
async def run(self, input: str) -> WorkflowResult[str]:
"""
Run the haiku agent workflow.

Args:
input: The topic to create a haiku about

Returns:
WorkflowResult containing the processed data.
"""

logger = app.logger

haiku_agent = Agent(
name="poet",
instruction="""You are an agent with access to a tool that helps you write haikus.""",
server_names=["haiku_server"],
)

async with haiku_agent:
llm = await haiku_agent.attach_llm(OpenAIAugmentedLLM)

result = await llm.generate_str(
message=f"Write a haiku about {input} using the tool at your disposal",
)
logger.info(f"Input: {input}, Result: {result}")

return WorkflowResult(value=result)


async def main():
async with app.run() as agent_app:
# Log registered workflows and agent configurations
logger.info(f"Creating MCP server for {agent_app.name}")

logger.info("Registered workflows:")
for workflow_id in agent_app.workflows:
logger.info(f" - {workflow_id}")

# Create the MCP server that exposes both workflows and agent configurations
mcp_server = create_mcp_server_for_app(agent_app, **({}))
logger.info(f"MCP Server settings: {mcp_server.settings}")

# Run the server
await mcp_server.run_stdio_async()


if __name__ == "__main__":
asyncio.run(main())