This document explains how to run tests, write new tests, and use the testing utilities in this project.
- Quick Start
- Test Structure
- Running Tests
- Writing Tests
- Mock Patterns
- Custom Assertions
- Test Fixtures
- E2E Testing
- Coverage
- Troubleshooting
## Quick Start

```bash
# Run all tests
npm test

# Run unit tests only
npm run test:unit

# Run integration tests
npm run test:integration

# Run E2E tests
npm run test:e2e

# Run tests with coverage
npm run test:coverage

# Run tests in watch mode
npm run test:watch
```

## Test Structure

```
tests/
├── setup.ts                     # Global test setup
├── e2e/                         # End-to-end tests
│   ├── setup.ts                 # E2E test setup and utilities
│   ├── defi-tools.e2e.test.ts
│   ├── evm-tools.e2e.test.ts
│   ├── market-data.e2e.test.ts
│   ├── multichain.e2e.test.ts
│   └── error-recovery.e2e.test.ts
├── integration/                 # Integration tests
│   ├── evm-tools.test.ts
│   ├── multichain.test.ts
│   └── server.test.ts
├── mocks/                       # Shared mock utilities
│   ├── mcp.ts                   # MCP server mocks
│   └── viem.ts                  # Viem client mocks
└── utils/                       # Test utilities
    ├── assertions.ts            # Custom vitest matchers
    └── fixtures.ts              # Reusable test data
```
## Running Tests

```bash
npm test

# Run a specific test file
npx vitest run tests/e2e/defi-tools.e2e.test.ts

# Run tests matching a name pattern
npx vitest run -t "DeFi"

# Re-run tests on file changes
npm run test:watch

# Watch specific files
npx vitest watch tests/e2e/

# Generate a coverage report
npm run test:coverage
```

Coverage reports are generated in the `coverage/` directory.
## Writing Tests

### Unit Tests

Unit tests are co-located with source files (e.g., `src/utils/helper.test.ts`):

```typescript
import { describe, it, expect } from "vitest"
import { myFunction } from "./helper"

describe("myFunction", () => {
  it("should return expected result", () => {
    expect(myFunction("input")).toBe("expected")
  })
})
```

### Integration Tests

Integration tests exercise multiple components together:
```typescript
import { describe, it, expect, beforeEach } from "vitest"
import { MockMcpServer, createMockMcpServer } from "../mocks/mcp"

describe("Tool Integration", () => {
  let mockServer: MockMcpServer

  beforeEach(() => {
    mockServer = createMockMcpServer()
  })

  it("should register and execute tool", async () => {
    const result = await mockServer.executeTool("tool_name", { param: "value" })
    expect(result).toBeDefined()
  })
})
```

### E2E Tests

E2E tests run against the actual MCP server:
```typescript
import { describe, it, expect, beforeAll, afterAll } from "vitest"
import { Client } from "@modelcontextprotocol/sdk/client/index.js"
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js"
import { assertToolSuccess, parseToolResult } from "./setup"

describe("E2E Tests", () => {
  let client: Client
  let transport: StdioClientTransport

  beforeAll(async () => {
    transport = new StdioClientTransport({
      command: "npx",
      args: ["tsx", "src/index.ts"],
      env: { ...process.env, NODE_ENV: "test" }
    })
    client = new Client({ name: "test", version: "1.0.0" })
    await client.connect(transport)
  }, 30000)

  afterAll(async () => {
    await client.close()
    await transport.close()
  })

  it("should call tool successfully", async () => {
    const result = await client.callTool({
      name: "tool_name",
      arguments: { param: "value" }
    })
    assertToolSuccess(result)
    const data = parseToolResult<{ key: string }>(result)
    expect(data.key).toBeDefined()
  })
})
```

## Mock Patterns

### Mocking Fetch and APIs

```typescript
import { vi } from "vitest"

// Mock fetch globally
vi.spyOn(global, "fetch").mockResolvedValue({
  ok: true,
  json: () => Promise.resolve({ data: "mocked" })
} as Response)

// Mock specific API
vi.mock("@/utils/api", () => ({
  fetchData: vi.fn().mockResolvedValue({ result: "mocked" })
}))
```

### Mocking Viem Clients

```typescript
import { vi } from "vitest"
import { createMockViemClient } from "./mocks/viem"

vi.mock("@/evm/services/clients", () => ({
  getPublicClient: vi.fn(() => createMockViemClient())
}))
```

### Mock MCP Server

```typescript
import { createMockMcpServer } from "../mocks/mcp"

const mockServer = createMockMcpServer()
// registerTools is this project's tool-registration entry point
registerTools(mockServer as any)

// Execute tool
const result = await mockServer.executeTool("tool_name", args)
```

### Mocking Chains

```typescript
vi.mock("viem/chains", () => ({
  mainnet: { id: 1, name: "Ethereum" },
  sepolia: { id: 11155111, name: "Sepolia" }
}))
```

## Custom Assertions

Import custom matchers in your test file:
```typescript
import "../utils/assertions"
```

```typescript
// Check for successful response
expect(result).toBeSuccessfulToolResponse()

// Check for error response
expect(result).toBeErrorToolResponse()

// Check for valid JSON content
expect(result).toHaveValidJsonContent()

// Check for specific JSON properties
expect(result).toHaveJsonContent({ key: "value" })
expect(result).toHaveJsonProperty("nested.path", expectedValue)

// Check for error messages
expect(result).toContainToolError(/pattern/i)

// Check for specific text
expect(result).toContainText("expected text")

// Check for content type
expect(result).toHaveContentType("text")

// Check for array content
expect(result).toHaveArrayContent(5) // minimum length

// Check for valid addresses/hashes
expect(result).toContainValidAddress()
expect(result).toContainValidTxHash()

// Check for valid numeric fields
expect(result).toContainValidNumericField("balance")
```

Helper functions are also exported:

```typescript
import { assertSuccessAndParse, assertError, getTextOrThrow } from "../utils/assertions"

// Parse successful response
const data = assertSuccessAndParse<{ name: string }>(result)

// Assert error with pattern
assertError(result, /invalid address/i)

// Get text content
const text = getTextOrThrow(result)
```

## Test Fixtures

Import fixtures for consistent test data:
```typescript
import {
  ETH_MAINNET_ADDRESSES,
  ETH_SEPOLIA_ADDRESSES,
  BSC_MAINNET_ADDRESSES,
  MOCK_TOKEN_DATA,
  MOCK_DEFI_PROTOCOLS,
  MOCK_MARKET_DATA,
  NETWORK_CONFIGS,
  ERROR_SCENARIOS,
  generateRandomAddress,
  generateRandomTxHash,
  createMockBalanceResponse,
  createMockTokenInfo,
  createMockBlockResponse
} from "../utils/fixtures"
```

```typescript
// Well-known addresses
const address = ETH_MAINNET_ADDRESSES.VITALIK

// Mock token data
const tokenInfo = MOCK_TOKEN_DATA.USDC

// Generate random test data
const randomAddr = generateRandomAddress()
const randomTxHash = generateRandomTxHash()

// Create mock responses
const balanceResponse = createMockBalanceResponse("1.5", 18)
const blockResponse = createMockBlockResponse(18000000)
```

## E2E Testing

E2E tests use `tests/e2e/setup.ts`, which provides:

- `startMCPServer()` / `stopMCPServer()` - server lifecycle
- `assertToolSuccess()` - validate a successful response
- `parseToolResult<T>()` - parse a JSON response
- `retryWithBackoff()` - retry failed requests
- `TEST_NETWORKS` / `TEST_ADDRESSES` - test data
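As a rough sketch of what `parseToolResult` might do (the actual implementation in `tests/e2e/setup.ts` may differ), it extracts the text content of an MCP tool result and parses it as JSON. The `ToolResult` shape below is an assumption based on the MCP result format:

```typescript
// Sketch only: the real helper lives in tests/e2e/setup.ts and may differ.
// Assumes the MCP tool result shape { content: [{ type: "text", text: string }] }.
interface ToolResult {
  content: Array<{ type: string; text?: string }>
}

function parseToolResult<T>(result: ToolResult): T {
  const textItem = result.content.find(
    (c) => c.type === "text" && typeof c.text === "string"
  )
  if (!textItem || textItem.text === undefined) {
    throw new Error("Tool result contains no text content")
  }
  return JSON.parse(textItem.text) as T
}
```

Typed parsing like this keeps E2E assertions on `data.key` and similar fields type-safe without casting at every call site.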
- Use retry with backoff for network requests:

  ```typescript
  const result = await retryWithBackoff(async () => {
    return await client.callTool({ name: "tool", arguments: {} })
  }, 3, 2000)
  ```

- Set appropriate timeouts:

  ```typescript
  it("test name", async () => {
    // test code
  }, 30000) // 30 second timeout
  ```

- Clean up resources:

  ```typescript
  afterAll(async () => {
    await client.close()
    await transport.close()
  })
  ```

- Use test networks:
  - Prefer testnets (Sepolia, BSC Testnet) over mainnets
  - Avoid rate limiting by using appropriate delays
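The `retryWithBackoff(fn, retries, delayMs)` call shape used above could be implemented roughly like this (a sketch only; the project's version in `tests/e2e/setup.ts` is authoritative):

```typescript
// Sketch only: retries a failing async operation with exponential backoff.
// Matches the (fn, retries, delayMs) call shape used in the examples above.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      // Wait delayMs * 2^attempt before the next attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** attempt))
    }
  }
  throw lastError
}
```

Exponential backoff doubles the wait on each failure, which plays well with RPC providers that throttle bursts of requests.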
## Coverage

```bash
npm run test:coverage
```

Reports are written to:

- HTML report: `coverage/index.html`
- LCOV: `coverage/lcov.info`
- JSON: `coverage/coverage-final.json`

Coverage thresholds are configured in `vitest.config.ts`:

```typescript
coverage: {
  thresholds: {
    statements: 80,
    branches: 75,
    functions: 80,
    lines: 80
  }
}
```

## Troubleshooting

**Tests timing out:**
- Increase the timeout: `it("test", async () => {...}, 60000)`
- Check network connectivity
- Use `retryWithBackoff()` for flaky tests

**Mock not working:**

- Ensure the mock is defined before imports (`vi.mock` calls are hoisted)
- Check that the mock path matches the import path
- Use `vi.resetAllMocks()` in `beforeEach`

**E2E server not starting:**

- Check that the server build is up to date
- Verify environment variables
- Check port availability

**Rate limiting:**

- Add delays between requests
- Use testnets instead of mainnet
- Implement retry logic
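A simple delay helper (hypothetical; not part of the shipped test utilities) is often enough to space out sequential requests and stay under provider rate limits:

```typescript
// Hypothetical helper: pause between sequential requests to avoid rate limits.
const delay = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms))

// Run async tasks one at a time with a fixed gap between them.
async function callSequentially<T>(
  tasks: Array<() => Promise<T>>,
  gapMs: number
): Promise<T[]> {
  const results: T[] = []
  for (const task of tasks) {
    results.push(await task())
    await delay(gapMs) // space out requests
  }
  return results
}
```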
**Debugging:**

```bash
# Run with verbose output
DEBUG=* npm test

# Run a single test in isolation
npx vitest run tests/path/to/test.ts --reporter=verbose

# Clear test cache
npx vitest --clearCache

# Update snapshots
npx vitest -u

# Run only failed tests
npx vitest --failed

# Type-check tests
npx tsc --noEmit
```

When adding new tests:
- Follow existing patterns and naming conventions
- Add tests for both success and failure cases
- Use appropriate timeouts for async operations
- Mock external dependencies where possible
- Update this README if adding new patterns/utilities