
Conversation

@saqadri (Collaborator) commented Aug 9, 2025

Leading up to a wider revamp that removes the need to use both Agent and AugmentedLLM classes (stay tuned)

Summary by CodeRabbit

  • New Features

    • Added new asynchronous methods for easier access to tool listing and connection management in LLM workflows.
    • Enabled routing components to support augmented LLM interfaces, allowing for more flexible workflow composition.
    • Introduced new asynchronous methods for generating structured and string-based routing results.
  • Chores

    • Updated router creation methods to accept an optional LLM parameter for future extensibility.

coderabbitai bot commented Aug 9, 2025

Walkthrough

The changes introduce new asynchronous proxy methods to the AugmentedLLM class for direct agent access, refactor LLMRouter to inherit from both AugmentedLLM and Router with new async methods, and update router factory methods in Anthropic and OpenAI router classes to accept an optional llm parameter.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| **AugmentedLLM Proxy Methods**<br>`src/mcp_agent/workflows/llm/augmented_llm.py` | Added async proxy methods (`list_tools`, `close`, `__aenter__`, `__aexit__`) that delegate to the underlying agent, enabling direct access to agent functionality and async context management (see the sketch after this table). |
| **LLMRouter Refactor and AugmentedLLM Integration**<br>`src/mcp_agent/workflows/router/router_llm.py` | Refactored `LLMRouter` to inherit from both `AugmentedLLM` and `Router`. Added a `classifier_llm` attribute, updated internal references, unified tracing, and implemented async `generate`, `generate_str`, and `generate_structured` methods to conform to the `AugmentedLLM` interface. |
| **Anthropic Router Factory Signature Update**<br>`src/mcp_agent/workflows/router/router_llm_anthropic.py` | Added an optional `llm` parameter to the `AnthropicLLMRouter.create` async class method and imported `AugmentedLLM`. No internal logic changes. |
| **OpenAI Router Factory Signature Update**<br>`src/mcp_agent/workflows/router/router_llm_openai.py` | Added an optional `llm` parameter to the `OpenAILLMRouter.create` async class method and imported `AugmentedLLM`. No internal logic changes. |
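As a rough illustration of the proxy-method shape described in the first row above, here is a hedged sketch; only the method names and the delegation to the underlying agent come from this PR, and the actual signatures in `augmented_llm.py` may differ:

```python
# Illustrative sketch only, not the actual project code. It assumes an
# AugmentedLLM that holds a reference to its agent as self.agent.
class AugmentedLLMSketch:
    def __init__(self, agent):
        self.agent = agent

    async def list_tools(self):
        # Proxy tool listing to the underlying agent.
        return await self.agent.list_tools()

    async def close(self):
        # Proxy connection teardown to the underlying agent.
        await self.agent.close()

    async def __aenter__(self):
        # Enter the agent's async context and hand back the LLM wrapper.
        await self.agent.__aenter__()
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        # Return the agent's result so exception suppression propagates
        # (this detail comes up again in the review comments below).
        return await self.agent.__aexit__(exc_type, exc_val, exc_tb)
```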

Sequence Diagram(s)

LLMRouter as AugmentedLLM: Routing Request Flow

```mermaid
sequenceDiagram
    participant User
    participant LLMRouter
    participant classifier_llm
    participant Router

    User->>LLMRouter: generate / generate_str / generate_structured(message)
    LLMRouter->>Router: _route_with_llm(message)
    Router->>classifier_llm: generate_structured(message)
    classifier_llm-->>Router: StructuredResponse
    Router-->>LLMRouter: Routing result
    LLMRouter-->>User: Result (dict, str, or StructuredResponse)
```
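A hedged sketch of the flow the diagram describes; only the method names (`generate`, `generate_str`, `generate_structured`, `_route_with_llm`) and the `classifier_llm` attribute come from this PR, while the bodies and call signatures below are assumptions:

```python
# Illustrative sketch only: an LLMRouter-style class that satisfies an
# AugmentedLLM-like interface by delegating to its internal routing call.
from typing import Any, List

class LLMRouterSketch:  # the real class inherits from AugmentedLLM and Router
    def __init__(self, classifier_llm):
        self.classifier_llm = classifier_llm  # LLM used to classify requests

    async def _route_with_llm(self, request: str, top_k: int = 5) -> List[Any]:
        # Ask the classifier LLM for a structured routing decision, then
        # keep the top_k candidates. The call shape here is assumed.
        structured = await self.classifier_llm.generate_structured(request)
        return list(structured)[:top_k]

    async def generate(self, message: Any) -> List[Any]:
        return await self._route_with_llm(str(message), top_k=5)

    async def generate_str(self, message: Any) -> str:
        results = await self._route_with_llm(str(message), top_k=5)
        return str(results)
```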

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~18 minutes

Poem

In fields of code, a rabbit hops—
New async doors, the agent unlocks.
Routers now wear two clever hats,
Passing messages in structured chats.
Anthropic and OpenAI, too,
Await a friendly LLM view.
🐇✨ The workflow garden grows anew!



@coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
src/mcp_agent/workflows/router/router_llm.py (2)

398-425: Expose top_k instead of hard-coding 5

generate() always calls _route_with_llm(..., top_k=5), but callers have no way to tune the breadth of routing. Making top_k a parameter (or deriving it from request_params) would give the API necessary flexibility.

```diff
-    async def generate(
-        self,
-        message: str | Any | List[Any],
-        request_params: RequestParams | None = None,
-    ) -> List[Any]:
+    async def generate(
+        self,
+        message: str | Any | List[Any],
+        *,
+        top_k: int = 5,
+        request_params: RequestParams | None = None,
+    ) -> List[Any]:

-        results = await self._route_with_llm(str(message), top_k=5)
+        results = await self._route_with_llm(str(message), top_k=top_k)
```

336-337: Span attribute key "llm" may collide

Using the generic key "llm" for the classifier name could overwrite or conflict with other tracing attributes. A more specific key (e.g., "router.classifier_llm") avoids ambiguity, as in the one-liner below.
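For illustration, a one-line sketch of the more specific key, assuming an OpenTelemetry-style span object and a hypothetical `classifier_name` variable:

```python
# Namespaced attribute key avoids clashing with other "llm" attributes.
span.set_attribute("router.classifier_llm", classifier_name)
```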

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e66bd0a and 0edd193.

📒 Files selected for processing (4)
  • src/mcp_agent/workflows/llm/augmented_llm.py (2 hunks)
  • src/mcp_agent/workflows/router/router_llm.py (7 hunks)
  • src/mcp_agent/workflows/router/router_llm_anthropic.py (2 hunks)
  • src/mcp_agent/workflows/router/router_llm_openai.py (2 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
PR: lastmile-ai/mcp-agent#0
File: examples/usecases/reliable_conversation/CLAUDE.md:0-0
Timestamp: 2025-07-22T18:59:49.368Z
Learning: Applies to examples/usecases/reliable_conversation/examples/reliable_conversation/src/**/*.py : Use mcp-agent's Agent abstraction for ALL LLM interactions, including quality evaluation, to ensure consistent tool access, logging, and error handling.
📚 Learning: 2025-07-22T18:59:49.368Z
Learnt from: CR
PR: lastmile-ai/mcp-agent#0
File: examples/usecases/reliable_conversation/CLAUDE.md:0-0
Timestamp: 2025-07-22T18:59:49.368Z
Learning: Applies to examples/usecases/reliable_conversation/examples/reliable_conversation/src/**/*.py : Use mcp-agent's Agent abstraction for ALL LLM interactions, including quality evaluation, to ensure consistent tool access, logging, and error handling.

Applied to files:

  • src/mcp_agent/workflows/router/router_llm_anthropic.py
  • src/mcp_agent/workflows/router/router_llm_openai.py
  • src/mcp_agent/workflows/llm/augmented_llm.py
  • src/mcp_agent/workflows/router/router_llm.py
📚 Learning: 2025-07-22T18:59:49.368Z
Learnt from: CR
PR: lastmile-ai/mcp-agent#0
File: examples/usecases/reliable_conversation/CLAUDE.md:0-0
Timestamp: 2025-07-22T18:59:49.368Z
Learning: Applies to examples/usecases/reliable_conversation/examples/reliable_conversation/src/utils/config.py : Configuration values such as quality_threshold, max_refinement_attempts, consolidation_interval, and evaluator_model_provider must be loaded from mcp_agent.config.yaml.

Applied to files:

  • src/mcp_agent/workflows/router/router_llm_anthropic.py
  • src/mcp_agent/workflows/router/router_llm_openai.py
  • src/mcp_agent/workflows/router/router_llm.py
📚 Learning: 2025-05-27T19:10:22.911Z
Learnt from: rholinshead
PR: lastmile-ai/mcp-agent#241
File: src/mcp_agent/mcp/mcp_agent_client_session.py:174-187
Timestamp: 2025-05-27T19:10:22.911Z
Learning: In the MCP Agent codebase, all request parameter types inherit from RequestParams or NotificationParams with consistent Meta class structures, so trace context injection using the base Meta classes doesn't cause type-mismatch issues.

Applied to files:

  • src/mcp_agent/workflows/router/router_llm.py
📚 Learning: 2025-05-27T19:10:22.911Z
Learnt from: rholinshead
PR: lastmile-ai/mcp-agent#241
File: src/mcp_agent/mcp/mcp_agent_client_session.py:174-187
Timestamp: 2025-05-27T19:10:22.911Z
Learning: In MCP Agent trace context injection, creating params objects when trace context exists is justified because param types either require other non-meta parameters (so params must exist) or are optional (so creating them for trace context is appropriate).

Applied to files:

  • src/mcp_agent/workflows/router/router_llm.py
🧬 Code Graph Analysis (3)
src/mcp_agent/workflows/router/router_llm_anthropic.py (1)
src/mcp_agent/workflows/llm/augmented_llm.py (1)
  • AugmentedLLM (220-698)
src/mcp_agent/workflows/router/router_llm_openai.py (1)
src/mcp_agent/workflows/llm/augmented_llm.py (1)
  • AugmentedLLM (220-698)
src/mcp_agent/workflows/llm/augmented_llm.py (4)
src/mcp_agent/agents/agent.py (2)
  • list_tools (394-485)
  • close (302-307)
src/mcp_agent/mcp/mcp_aggregator.py (2)
  • list_tools (611-654)
  • close (198-279)
tests/mcp/test_mcp_aggregator.py (1)
  • list_tools (786-793)
src/mcp_agent/logging/transport.py (1)
  • close (186-188)
🔇 Additional comments (1)
src/mcp_agent/workflows/router/router_llm.py (1)

114-122: Double-passing **kwargs risks collision

__init__ forwards the same **kwargs to both Router.__init__ and AugmentedLLM.__init__.
If either base mutates or consumes a key, the second call may receive unexpected or duplicate parameters, raising TypeError. Consider splitting the kwargs or explicitly listing the shared ones, as in the sketch below.
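A minimal sketch of one way to split the kwargs between the two bases; the key names here are purely illustrative, not the actual constructor parameters:

```python
# Hypothetical sketch: route each keyword argument to exactly one base class
# instead of forwarding the same **kwargs to both __init__ calls.
_ROUTER_KEYS = {"server_names", "agents", "functions", "routing_instruction"}

class LLMRouterSketch(AugmentedLLM, Router):
    def __init__(self, **kwargs):
        router_kwargs = {k: v for k, v in kwargs.items() if k in _ROUTER_KEYS}
        llm_kwargs = {k: v for k, v in kwargs.items() if k not in _ROUTER_KEYS}
        Router.__init__(self, **router_kwargs)
        AugmentedLLM.__init__(self, **llm_kwargs)
```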

Comment on lines +697 to +698
```python
async def __aexit__(self, exc_type, exc_val, exc_tb):
    await self.agent.__aexit__(exc_type, exc_val, exc_tb)
```

⚠️ Potential issue

Propagate the underlying __aexit__ return value

Agent.__aexit__() may return True to suppress an exception; dropping that value changes behaviour to “always re-raise”. Forward the result explicitly:

```diff
-    async def __aexit__(self, exc_type, exc_val, exc_tb):
-        await self.agent.__aexit__(exc_type, exc_val, exc_tb)
+    async def __aexit__(self, exc_type, exc_val, exc_tb):
+        return await self.agent.__aexit__(exc_type, exc_val, exc_tb)
```
🤖 Prompt for AI Agents
In src/mcp_agent/workflows/llm/augmented_llm.py around lines 697 to 698, the
async __aexit__ method calls self.agent.__aexit__ but does not return its
result. Modify the method to return the awaited result of self.agent.__aexit__
so that any True value indicating exception suppression is properly propagated.
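For context, a self-contained sketch (plain Python, no project code) showing why the return value matters: a truthy return from `__aexit__` suppresses the in-flight exception, so dropping it silently changes behaviour:

```python
import asyncio

class Suppressing:
    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        # Returning True tells Python to suppress the exception.
        return exc_type is ValueError

async def main():
    async with Suppressing():
        raise ValueError("swallowed by __aexit__")
    # A wrapper that awaited __aexit__ but returned None would re-raise
    # instead of reaching this line.
    print("execution continues past the with-block")

asyncio.run(main())
```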

Comment on lines +53 to 54
```python
llm: AugmentedLLM | None = None,
server_names: List[str] | None = None,
```

🛠️ Refactor suggestion

llm parameter is unused

create() now accepts llm: AugmentedLLM | None but doesn’t pass it to cls(...) or store it, so callers cannot inject a custom classifier LLM. Either:

  1. Wire it through (cls(llm=llm, …)) and add llm to __init__, or
  2. Drop the parameter until it’s needed.

Leaving it unused will trigger linters and confuse API consumers. A sketch of option 1 follows the AI-agent prompt below.

🤖 Prompt for AI Agents
In src/mcp_agent/workflows/router/router_llm_anthropic.py around lines 53 to 54,
the create() method has an llm parameter that is not used or passed to the class
constructor, causing confusion and linter warnings. To fix this, either pass llm
to cls(...) in create() and add llm as an __init__ parameter to store it
properly, or remove the llm parameter from create() until it is actually needed.
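A hedged sketch of option 1; the `__init__` signature below is an assumption for illustration, since the actual constructor isn't shown in this diff:

```python
# Hypothetical sketch: thread the optional llm from create() into the instance.
class AnthropicLLMRouterSketch:
    def __init__(self, llm=None, **kwargs):
        # Store the injected classifier LLM so routing can use it later.
        self.llm = llm

    @classmethod
    async def create(cls, llm=None, **kwargs):
        return cls(llm=llm, **kwargs)
```

The same change would apply to the OpenAI router comment below.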

Comment on lines +53 to 54
```python
llm: AugmentedLLM | None = None,
server_names: List[str] | None = None,
```

🛠️ Refactor suggestion

llm parameter currently serves no purpose

The optional llm argument is accepted but ignored. Align implementation with intent (thread it into the instance or remove it) to avoid dead parameters and static-analysis warnings.

🤖 Prompt for AI Agents
In src/mcp_agent/workflows/router/router_llm_openai.py around lines 53 to 54,
the llm parameter is accepted but not used anywhere in the code, causing dead
code and static-analysis warnings. To fix this, either remove the llm parameter
entirely if it is not needed, or if it is intended to be used, assign it to an
instance attribute or otherwise integrate it into the class or function logic so
it serves a purpose.
