A comprehensive visualization and analysis tool for Triton IR files, designed to help developers analyze, debug, and understand Triton kernel compilation processes.
- Interactive Kernel Explorer: Display detailed kernel information and stack traces
- Multi-format IR Support: View and explore multiple Triton IR formats:
  - TTGIR (Triton GPU IR)
  - TTIR (Triton IR)
  - LLIR (LLVM IR)
  - PTX (Parallel Thread Execution)
  - AMDGCN (AMD GPU IR)
- Side-by-side Comparison: Compare TTGIR and PTX code with synchronized highlighting
- Interactive Code Views: Click-to-highlight corresponding lines across different formats
- Source Mapping: Trace relationships between different compilation stages
- Compilation Tracing: Capture detailed Triton compilation events
- Stack Trace Integration: Full Python stack traces for compilation events
- Metadata Extraction: Comprehensive kernel metadata and compilation statistics
- NDJSON Output: Structured logging format for easy processing
- GitHub Pages: Automatic deployment with GitHub Actions
- Standalone Build: Single HTML file with all assets inlined
- Local Development: Full development environment setup
Frontend:
- React 19 with TypeScript
- Vite for build tooling
- Tailwind CSS for styling
- Monaco Editor for code display
- React Syntax Highlighter for syntax highlighting
- React Resizable Panels for layout
Backend/Processing:
- Python with Triton integration
- Structured logging and event tracing
- Source mapping extraction utilities
Prerequisites:
- Python >= 3.8
- Triton >= 3.3.1
Quick Start:
# Clone the repository
git clone https://github.com/pytorch-labs/tritonparse.git
cd tritonparse
# Install Python dependencies
pip install -e .
Additional Prerequisites:
- Node.js >= 18.0.0
- npm
Website Setup:
# Install website dependencies
cd website
npm install
First, integrate TritonParse with your Triton/PyTorch code to generate trace files:
import torch
import tritonparse.structured_logging
# Initialize structured logging to capture Triton compilation events
tritonparse.structured_logging.init("./logs/")
# Example: Using with torch.compile
def your_kernel():
    # Your PyTorch/Triton kernel code
    pass
compiled_kernel = torch.compile(your_kernel)
result = compiled_kernel() # This will generate trace logs in ./logs/
# The trace files can then be analyzed using the web interface
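Because the trace files are NDJSON (one JSON object per line), they can also be inspected programmatically before loading them into the web interface. A minimal sketch, assuming each non-empty line is a valid JSON object; the `event_type` key used below is illustrative, not the actual trace schema:

```python
import json
from collections import Counter

def read_ndjson(path):
    """Yield one parsed event per line of an NDJSON trace file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines
                yield json.loads(line)

def count_events(path):
    # "event_type" is a hypothetical key for illustration only
    return Counter(e.get("event_type", "unknown") for e in read_ndjson(path))
```

This is handy for quick sanity checks, e.g. confirming that compilation events were actually captured before opening the trace in the browser.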
Visit https://pytorch-labs.github.io/tritonparse/ to use the tool directly in your browser:
- Open your local trace file directly in the browser
- Explore the visualization using the Overview and Code Comparison tabs
For contributors working on the website:
cd website
npm run dev
Access the application at http://localhost:5173
Available Scripts:
- npm run build - Standard build
- npm run build:single - Standalone HTML file
- npm run preview - Preview production build
tritonparse/
├── tritonparse/                    # Python package
│   ├── structured_logging.py       # Main logging infrastructure
│   ├── extract_source_mappings.py  # Source mapping utilities
│   ├── source_type.py              # Source type definitions
│   ├── utils.py                    # Helper utilities
│   ├── common.py                   # Common functions
│   └── tp_logger.py                # Logger configuration
├── website/                        # React web application
│   ├── src/                        # React source code
│   ├── public/                     # Static assets and example files
│   ├── scripts/                    # Build utilities (inline-html.js)
│   ├── node_modules/               # Dependencies
│   ├── package.json                # Node.js dependencies
│   ├── vite.config.ts              # Vite configuration
│   └── dist/                       # Built application (after build)
├── tests/                          # Test files and example traces
│   ├── test_add.py                 # Example Triton kernel test
│   ├── unit_tests.py               # Unit tests
│   └── *.ndjson                    # Example trace files
├── run.py                          # Main runner script
├── pyproject.toml                  # Python package configuration
├── LICENSE                         # BSD-3 license
├── CONTRIBUTING.md                 # Contribution guidelines
└── CODE_OF_CONDUCT.md              # Code of conduct
Install in development mode:
pip install -e .
Example test:
cd tests
python test_add.py
Start development server:
cd website
npm run dev
Available Scripts:
- npm run dev - Start development server
- npm run build - Production build
- npm run build:single - Standalone HTML build
- npm run lint - Run ESLint
- npm run preview - Preview production build
The TritonParse visualization tool is automatically deployed and available at: https://pytorch-labs.github.io/tritonparse/
Build standalone version:
cd website
npm run build:single
The dist/standalone.html file contains the entire application and can be deployed anywhere.
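Because the file is self-contained, any static file server can host it. For example, using Python's built-in server (the port is arbitrary):

```shell
# Serve the built standalone file locally
cd website/dist
python -m http.server 8000
# then open http://localhost:8000/standalone.html
```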
TritonParse helps visualize the Triton compilation pipeline:
- Python Source → Triton kernel functions
- TTIR → Triton's high-level IR
- TTGIR → GPU-specific Triton IR
- LLIR → LLVM IR representation
- PTX → NVIDIA PTX assembly
- AMDGCN → AMD GPU IR
Each stage can be inspected and compared to understand optimization transformations.
- TRITONPARSE_DEBUG=1 - Enable debug logging
- TRITONPARSE_NDJSON=1 - Output in NDJSON format (default)
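The same switches can be set from Python before logging is initialized. A minimal sketch, using only the variables listed above (the commented-out init call mirrors the Quick Start example):

```python
import os

# Enable debug logging; NDJSON output is the default format anyway.
# Set these before initializing tritonparse's structured logging.
os.environ["TRITONPARSE_DEBUG"] = "1"
os.environ["TRITONPARSE_NDJSON"] = "1"

# import tritonparse.structured_logging
# tritonparse.structured_logging.init("./logs/")
```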
tritonparse.structured_logging.init("/custom/log/path/")
python tritonparse/extract_source_mappings.py input.ndjson output_mapped.ndjson
- Fork the repository
- Create a feature branch: git checkout -b feature-name
- Make your changes
- Run tests: npm test (website) and python -m pytest (Python)
- Submit a pull request
This project is licensed under the BSD-3-Clause License - see the LICENSE file for details.
- OpenAI Triton - The Triton compiler and language
- PyTorch - Deep learning framework with Triton integration
- Issues: GitHub Issues
- Live Tool: https://pytorch-labs.github.io/tritonparse/
- Examples: Check the tests/ directory for usage examples
Note: This tool is designed for developers working with Triton kernels and GPU computing. Basic familiarity with CUDA, GPU programming concepts, and the Triton language is recommended for effective use.