MCP Architecture #87
Description
Is this project open to a rearchitecture of the MCP server?
Currently, the Plane MCP server appears to be a direct replica of the underlying REST APIs. This forces the AI to act as a web browser making sequential CRUD requests, which is an anti-pattern for LLM tooling. It leads to high latency, increased surface area for reasoning errors, and severe context window exhaustion due to returning massive, non-useful JSON payloads.
The MCP server should be architected for expected AI interactions—abstracting multi-step workflows into single, intent-driven tool calls.
Problem Context: Current API-Replica Approach
Here are two examples of how the current 1:1 mapping breaks down in practice:
Scenario 1: Creating an Issue
Current workflow:
1. `list_projects`: AI reads, parses, and locates the project ID.
2. `create_issue` using `project_id` and the new issue name.

Result: the AI receives a massive JSON dump containing all the information it just submitted, burning context for no reason.
Proposed "AI Journey" workflow:
1. `create_issue("WEBUI", "name of issue")`

Result: the AI receives a concise acknowledgment: `{"id": "WEBUI-1", "success": true}`
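As a minimal sketch of the proposed behavior (this is hypothetical, not the real Plane MCP implementation; the project table and counter below are stand-in stub data), the server resolves the project key internally and returns only the acknowledgment:

```python
# Hypothetical intent-driven create_issue tool. The project-key -> project-ID
# lookup happens server-side, so the AI never has to call list_projects first.

_PROJECTS = {"WEBUI": "proj-uuid-123"}  # stand-in for a real project lookup
_COUNTERS = {"WEBUI": 0}                # stand-in issue sequence numbers

def create_issue(project_key: str, title: str) -> dict:
    """Create an issue and return only its new key and a success flag."""
    project_id = _PROJECTS[project_key]           # hidden from the AI
    _COUNTERS[project_key] += 1
    issue_key = f"{project_key}-{_COUNTERS[project_key]}"
    # ...a real server would now call the underlying Plane REST API
    # with project_id and title...
    return {"id": issue_key, "success": True}     # no echo of the full payload
```

The key design choice is that the tool never returns the full issue object; the AI already knows what it submitted, so only the generated identifier carries new information.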
Scenario 2: Beginning Work on an Issue
Current workflow (high failure rate):
1. Attempt `retrieve_work_item_by_identifier` (currently fails due to an MCP bug).
2. Fall back to `list_all_issues` (massive context burden).
3. AI parses the list to locate the issue ID.
4. `list_projects` -> locate the project ID.
5. `get_me` -> locate the user ID for assignment.
6. `list_states` -> locate the "in-progress" state ID.
7. `update_work_item(project_id, work_item_id, state, assignee)`

Result: another massive context dump of the entire issue.
Proposed "AI Journey" workflow:
1. `get_item("WEBUI-1")` -> returns essential context: `{"id": "WEBUI-1", "title": "...", "details": "..."}`
2. `start_work_on("WEBUI-1")` -> the server automatically handles the state transition, assignment to the active AI/user, and background lookups.

Result: `{"id": "WEBUI-1", "success": true}`
Proposed Architectural Solution
I recommend reworking the MCP to drastically reduce the tool surface area and optimize for semantic compression. Tools should be intent-based:
- `project_info(project_metadata: true, workspaces: true, cycles: true)`: delivers all project and cycle metadata contextually in one call, eliminating the need for multiple metadata lookups.
- `create_issue(project_key, title, ...)`: creates an issue and returns only the newly generated ID and success status.
- `list_issues(type: [basic | status | extended | details], filter: {...})`: provides token-optimized views of issues to prevent context flooding.
- `get_issue(type: [basic | extended])`: provides only what is strictly required to work on the issue, or the full JSON if explicitly requested.
- `update_issue(issue_key, ...)`: intent-driven updates.
- `add_comment(issue_key, comment)`: simple append operation.
- `manage_cycle(cycle_id, add_items: [], remove_items: [], name, description)`: batch operations for cycle management in a single tool call.
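To make the view levels concrete, here is a sketch of how `list_issues` could project fields per detail level (the field sets are an illustrative assumption, not a proposal-final schema):

```python
# Hypothetical field projections for the list_issues view levels. Context
# cost scales with the level of detail the AI actually requested.

_FIELDS = {
    "basic":    ["id", "title"],
    "status":   ["id", "title", "state", "assignee"],
    "extended": ["id", "title", "state", "assignee", "priority", "cycle"],
}

def list_issues(issues: list, type: str = "basic") -> list:
    """Return a token-optimized view of the given issues."""
    if type == "details":
        return issues                       # full payload only on explicit request
    fields = _FIELDS[type]
    return [{k: issue.get(k) for k in fields} for issue in issues]
```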
High-Friction Destructive Actions
To prevent AI hallucinations from causing data loss, destructive actions should be implemented as a 2-stage, high-friction process:
1. `request_delete(key/cycle/project)`: declares intent to delete and returns a warning with a `request_id`.
2. `execute_delete(request_id, understanding_declaration, full_content_hash)`: the AI must explicitly echo back what it is deleting, making unintentional deletions virtually impossible.