
yongggquannn/api-test-generation-llm


API Test Generation LLM

An LLM-powered API test generation tool that automatically generates comprehensive Jest test cases from OpenAPI 3.1 specifications. It uses Azure OpenAI models (GPT-4.1, GPT-5, Grok, O3-Mini) to create production-ready test suites covering functional behavior, authentication, error handling, validation, security, and performance.

Project Overview

This project provides a complete solution for automated API test generation:

  • LLM-Powered Test Generation: Uses advanced language models to generate comprehensive Jest test suites from OpenAPI specifications
  • Wizard-Based Frontend: Angular application with a step-by-step wizard interface for uploading specs, validating, generating tests, reviewing cases, running tests, and viewing dashboards
  • Mock Server Generation: Automatically generates companion mock servers for testing
  • OpenAPI Analysis: CLI and API tools for analyzing OpenAPI 3.1 specifications with comprehensive path detail extraction
  • Multiple LLM Support: Supports GPT-4.1, GPT-5, Grok, and O3-Mini models
  • Complete Test Coverage: Generates tests for functional endpoints, authentication, error handling, validation rules, security, performance, and integration scenarios

Getting Started

Prerequisites

  • Node.js (v18 or higher recommended)
  • npm (comes with Node.js)
  • Azure OpenAI credentials (API keys and endpoints for your chosen models)

Installation

  1. Clone the repository:

     ```
     git clone <repository-url>
     cd api-test-generation-llm
     ```

  2. Install dependencies:

     ```
     npm install
     ```

  3. Build the project:

     ```
     npm run build
     ```

Environment Configuration

Create a .env file in the project root with your Azure OpenAI credentials. The exact variables depend on which models you want to use:

For GPT-4.1:

```
GPT41_API_KEY=your_gpt41_api_key
GPT41_ENDPOINT=https://your-resource.openai.azure.com
GPT41_DEPLOYMENT=your_deployment_name
GPT41_API_VERSION=2024-02-15-preview
```

For GPT-5:

```
GPT5_API_KEY=your_gpt5_api_key
GPT5_ENDPOINT=https://your-resource.openai.azure.com
GPT5_DEPLOYMENT=your_deployment_name
GPT5_API_VERSION=2024-02-15-preview
```

For Grok:

```
GROK_API_KEY=your_grok_api_key
GROK_ENDPOINT=https://your-resource.openai.azure.com
GROK_DEPLOYMENT=your_deployment_name
```

For O3-Mini:

```
MINI_API_KEY=your_mini_api_key
MINI_ENDPOINT=https://your-resource.openai.azure.com
MINI_DEPLOYMENT=your_deployment_name
MINI_API_VERSION=2024-02-15-preview
```

Server Configuration:

```
API_PORT=3000
MOCK_PORT=3001
```
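
The server can then resolve the per-model settings from `process.env` at startup. A minimal sketch of how that lookup might work is shown below; the function name `modelConfig` and the model keys are illustrative assumptions, not the project's actual API:

```javascript
// Hypothetical helper mapping a model name to its Azure OpenAI settings,
// based on the environment variable prefixes documented above.
const PREFIXES = { 'gpt-4.1': 'GPT41', 'gpt-5': 'GPT5', grok: 'GROK', 'o3-mini': 'MINI' };

function modelConfig(model, env = process.env) {
  const p = PREFIXES[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  const cfg = {
    apiKey: env[`${p}_API_KEY`],
    endpoint: env[`${p}_ENDPOINT`],
    deployment: env[`${p}_DEPLOYMENT`],
    apiVersion: env[`${p}_API_VERSION`], // Grok defines no API-version variable above
  };
  if (!cfg.apiKey || !cfg.endpoint || !cfg.deployment) {
    throw new Error(`Missing environment variables for ${model}`);
  }
  return cfg;
}
```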

Running the Project Locally

To run the complete application, you need to start both the backend API server and the frontend Angular application.

Step 1: Start the Backend API Server

The backend API server provides LLM-powered test generation endpoints for the Angular application.

```
npm run api:server
```

The API server will start on http://localhost:3000 (or the port specified in your .env file via API_PORT).

Available Endpoints:

  • GET /health - Health check
  • GET /api/models - List available LLM models
  • POST /api/generate-tests - Generate Jest tests from OpenAPI spec
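
A client calls the generation endpoint with the spec and a model name. The sketch below shows one plausible request shape; the body fields (`spec`, `model`) are assumptions, so check the server code for the actual contract:

```javascript
// Hypothetical helper that builds a request to POST /api/generate-tests.
// The payload field names are assumed, not taken from the server's source.
function buildGenerateTestsRequest(specString, model, baseUrl = 'http://localhost:3000') {
  return {
    url: `${baseUrl}/api/generate-tests`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ spec: specString, model }),
    },
  };
}

// Usage (requires the API server to be running):
// const { url, options } = buildGenerateTestsRequest(specYaml, 'gpt-4.1');
// const res = await fetch(url, options);
```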

Step 2: Start the Frontend Angular Application

In a new terminal, run the Angular app in development mode:

```
npm run ng:serve
```

The application will be available at http://localhost:4200/

Quick Start

For full functionality, run both the backend and the frontend:

```
# Terminal 1 - Backend API Server
npm run api:server

# Terminal 2 - Frontend Angular App
npm run ng:serve
```

Then open http://localhost:4200/ in your browser to access the wizard interface.

Other Build Options

Production Build (Angular Only):

```
npm run ng:build
```

Build artifacts will be stored in the dist/angular-app/ directory.

Watch Mode (Development):

```
npm run ng:watch
```

Builds the Angular app in watch mode for continuous compilation.

Electron Desktop Application:

```
npm run ng:start
```

Builds and launches the Angular app as an Electron desktop application.


Features

Comprehensive Path Analysis

  • Path Information: Name, summary, description
  • Operations: All HTTP methods (GET, POST, PUT, DELETE, etc.)
  • Parameters: Complete parameter details including:
    • Name, location (query, header, path, cookie)
    • Required/optional status
    • Schema information (type, constraints, examples)
    • Deprecation status
  • Request Body: Content types, schemas, required status
  • Responses: Status codes, content types, schemas, headers
  • Security: Authentication requirements per operation

Utility Functions

  • getOperationsByTag() - Get operations by tag name
  • getOperationsByMethod() - Get operations by HTTP method
  • getSchemaByName() - Get schema by name
  • getSecuritySchemeByName() - Get security scheme by name
  • getPathByOperationId() - Get path by operation ID
  • getAllOperationIds() - Get all operation IDs
  • getRequiredSchemas() - Get schemas with required fields
  • getDeprecatedOperations() - Get deprecated operations
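
As an illustration of what one of these helpers does, here is a minimal sketch of `getOperationsByMethod()`, assuming the `uiModel` shape matches the Example Output section further down (that shape is an assumption, not the library's documented internals):

```javascript
// Sketch of getOperationsByMethod: collect every operation across all paths
// whose HTTP method matches, tagging each result with its parent path.
function getOperationsByMethod(uiModel, method) {
  const wanted = method.toUpperCase();
  return uiModel.paths.flatMap((p) =>
    p.operations
      .filter((op) => op.method.toUpperCase() === wanted)
      .map((op) => ({ path: p.path, ...op }))
  );
}
```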

Generated Test Coverage

The LLM generates comprehensive test suites covering:

  • Functional & endpoint coverage (all CRUD operations)
  • Authentication & authorization scenarios
  • Negative & error handling cases
  • Validation rules and schema compliance
  • Response validation against schemas
  • Integration & workflow scenarios
  • Performance & rate limiting tests
  • Security & edge cases (SQL injection, XSS protection)
  • Test isolation and maintainability

CLI Usage

OpenAPI Spec Analysis

Analyze OpenAPI specifications using the CLI:

```
xapi-analyze path/to/spec.yaml --format json
```

Generate Test Categories

```
node generateCategories.js <openapi-spec-file> <test-file> [output-file]
```

Example:

```
node generateCategories.js sample-openapi.yaml llm-feature/4.1.test.js 4.1categorizedTests.json
```

Run Jest Test Runner

```
node runner.js <test-file> <categorization-file> [test-name-pattern]
```

Examples:

```
# Run entire test suite
node runner.js llm-feature/4.1.test.js 4.1categorizedTests.json

# Run specific test cases
node runner.js llm-feature/4.1.test.js 4.1categorizedTests.json "should create a new pet|should get pet by ID|should update an existing pet"
```

Run Test Cases

Run Jest tests directly:

```
npm test
```

API Usage

Basic Analysis

```javascript
import { analyzeSpec } from 'xapi-new';

const res = await analyzeSpec(specString);
```

Detailed Path Extraction

```javascript
import {
  analyzeSpec,
  getDetailedPathInfo,
  getPathDetailsByPath,
  getAllPathDetails
} from 'xapi-new';

// Get all path details
const result = await analyzeSpec(specString);
const allPathDetails = getAllPathDetails(result.uiModel);

// Get specific path details
const petsPathDetails = getPathDetailsByPath(result.uiModel, '/pets');
```

Example Output

```javascript
// Detailed path information
{
  path: "/pets",
  operations: [
    {
      method: "GET",
      operationId: "listPets",
      summary: "List all pets",
      parameters: [
        {
          name: "limit",
          in: "query",
          required: false,
          schema: { type: "integer", minimum: 1, maximum: 100 }
        }
      ],
      responses: [
        {
          statusCode: "200",
          description: "A list of pets",
          content: {
            "application/json": {
              schema: { type: "array", items: { "$ref": "#/components/schemas/Pet" } }
            }
          }
        }
      ]
    }
  ]
}
```

How It Works

  • Parses YAML or JSON with js-yaml
  • Detects OpenAPI 3.1.x specifications by inspecting the `openapi` version field
  • Validates basic structure and required fields
  • Returns comprehensive UI-friendly model with detailed path information
  • Extracts all parameters, responses, and security details
  • Provides utility functions for easy data access
  • Uses LLM models to generate comprehensive Jest test suites following industry best practices
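
The detect-and-validate steps can be sketched roughly as follows. The real tool accepts YAML or JSON via js-yaml; this dependency-free version handles JSON only, and the exact required-field checks are an assumption:

```javascript
// Simplified sketch of spec detection and basic validation.
// Accepts a JSON string; the actual tool also parses YAML with js-yaml.
function validateSpec(specString) {
  const doc = JSON.parse(specString);
  // Version detection: only OpenAPI 3.1.x is supported.
  if (typeof doc.openapi !== 'string' || !doc.openapi.startsWith('3.1')) {
    throw new Error(`Unsupported OpenAPI version: ${doc.openapi}`);
  }
  // Basic structural checks (illustrative; the tool's checks may differ).
  for (const field of ['info', 'paths']) {
    if (!(field in doc)) throw new Error(`Missing required field: ${field}`);
  }
  return doc;
}
```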

License

MIT
