This is a proof-of-concept implementation demonstrating how to use Model Context Protocol (MCP) with OpenAI function calling to build real-time LLM-native workflows.
- Connects GPT-4 to backend tools via the MCP protocol
- Lets GPT choose the right tool dynamically
- Extracts required arguments automatically
- Calls backend logic securely using `StdioClientTransport`
With just a natural language prompt:
“Please block my card 1234567890123456 because it was stolen.”
✅ GPT:
- Selects the `blockCreditCard` tool
- Extracts `cardNumber` and `reason`
- Calls the backend tool via MCP
- Returns a real-time response
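The steps above can be sketched in plain TypeScript. This sketch assumes the shape of an OpenAI `tool_call` response; the handler here is a local stand-in for the real MCP round trip, and the names (`handlers`, `ToolHandler`) are illustrative, not from the project.

```typescript
// Registry mapping tool names (as advertised to GPT) to backend handlers.
type ToolHandler = (args: Record<string, string>) => string;

const handlers: Record<string, ToolHandler> = {
  blockCreditCard: ({ cardNumber, reason }) =>
    `Card ${cardNumber} blocked (reason: ${reason})`,
};

// What a tool_call from the model looks like once GPT has picked a tool:
// the arguments arrive as a JSON string that the client must parse.
const toolCall = {
  function: {
    name: "blockCreditCard",
    arguments: '{"cardNumber":"1234567890123456","reason":"stolen"}',
  },
};

// Extract the arguments GPT produced and dispatch to the matching handler.
const args = JSON.parse(toolCall.function.arguments);
const result = handlers[toolCall.function.name](args);
console.log(result);
// → Card 1234567890123456 blocked (reason: stolen)
```

In the real client, the handler body would forward the call to the MCP server and return its response to GPT.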
- 🔗 Model Context Protocol (MCP)
- 🧠 OpenAI function calling (tool_choice: "auto")
- ⚙️ TypeScript + Node.js
- 📦 Zod schema for input validation
- 🔄 Stdio transport for client/server connection
- 🌱 Lightweight and easy to extend
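The Zod schema mentioned above guards the tool's inputs before any backend logic runs. As a dependency-free sketch, here is the same constraint set in plain TypeScript; the actual project would express it as something like `z.object({ cardNumber: z.string().length(16), reason: z.string().min(1) })` (field names taken from the example above, exact constraints assumed).

```typescript
interface BlockCardInput {
  cardNumber: string;
  reason: string;
}

// Validate untrusted input (e.g. arguments extracted by GPT) before
// it reaches the backend tool; throws on any violation.
function validateBlockCardInput(input: unknown): BlockCardInput {
  const { cardNumber, reason } = input as Partial<BlockCardInput>;
  if (typeof cardNumber !== "string" || !/^\d{16}$/.test(cardNumber)) {
    throw new Error("cardNumber must be a 16-digit string");
  }
  if (typeof reason !== "string" || reason.length === 0) {
    throw new Error("reason must be a non-empty string");
  }
  return { cardNumber, reason };
}

const checked = validateBlockCardInput({
  cardNumber: "1234567890123456",
  reason: "stolen",
});
console.log(checked);
```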
```
src/
├── client/          # Smart OpenAI-powered client
│   └── client.ts
├── server/          # MCP server exposing tools
│   └── index.ts
.env                 # Your OpenAI API key
package.json
tsconfig.json
```
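To let GPT choose a tool, the client describes the server's tools to OpenAI in the function-calling format. A sketch of what that declaration could look like for `blockCreditCard` (the JSON Schema is hand-written here for illustration; in practice it could be derived from the MCP server's tool listing):

```typescript
// The tool definition passed to chat.completions.create. With
// tool_choice: "auto", GPT decides whether and how to call it.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "blockCreditCard",
      description: "Block a credit card by number, with a reason",
      parameters: {
        type: "object",
        properties: {
          cardNumber: { type: "string", description: "16-digit card number" },
          reason: { type: "string", description: "Why the card is blocked" },
        },
        required: ["cardNumber", "reason"],
      },
    },
  },
];

// Passed along with the user's message, e.g.:
// openai.chat.completions.create({ model, messages, tools, tool_choice: "auto" })
console.log(tools[0].function.name);
```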
- Install dependencies

  ```
  npm install
  ```

- Set your OpenAI key

  Create a `.env` file:

  ```
  OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  ```

- Run the project

  In one terminal:

  ```
  npm run start:server
  ```

  In another terminal:

  ```
  npm run start:client
  ```

- Try a natural prompt

  ```
  Please block my credit card 1234567890123456 due to fraud
  ```
- ✅ No hardcoded logic or prompts
- ✅ Tool selection + argument extraction are automatic
- ✅ Works with `.ts`, `.js`, or `.py` server tools
- ✅ Protocol-driven, agent-ready foundation
Built by Venkata Sairam Gollamudi
📬 vsairamtech@gmail.com