Welcome to the SmartBrain documentation. This comprehensive guide covers all aspects of the SmartBrain AI/ML Engine for the CyberAi Ecosystem.
Quick Navigation:
- New to SmartBrain? Start with the Quick Start Guide
- Looking for a specific feature? Check the Features Quick Reference
- Comparing tools? See Feature Comparison
- Need help? Visit Troubleshooting or Support
- Introduction
- Architecture Overview
- Model Lifecycle
- Model Versioning
- Dataset Requirements
- Inference Usage Guide
- Training Pipeline Guide
- Terminal Command Integration
- Ecosystem Integration
- API Reference
- Best Practices
- Smart Functions — Auto-analyze, auto-fix, auto-test, auto-sync, smart-suggest
- Orval DB Virtual Memory System — AI brain memory layer
- Self-Updating Documentation — Docs engine and freshness scoring
- Troubleshooting Guide — Common issues and quick fixes
- FAQ — Frequently asked questions
SmartBrain is a comprehensive AI/ML engine and automation platform designed specifically for smart contract automation and blockchain development within the CyberAi ecosystem. It combines powerful machine learning capabilities with intelligent automation features to streamline the entire development lifecycle.
SmartBrain serves as the central intelligence hub for smart contract development, providing automated analysis, testing, synchronization, and deployment capabilities. Whether you're developing DeFi protocols, NFT marketplaces, or complex multi-chain applications, SmartBrain offers the tools and automation you need to build secure, efficient, and reliable smart contracts.
- Model Registry and Versioning: Comprehensive model management with semantic versioning
- Training Pipeline Infrastructure: Complete ML pipeline for model development and training
- Inference Engine: Robust inference system with CLI and API access
- Dataset Validation and Management: Automated dataset validation and quality assurance
- Intelligent Automation: Auto Sync, Auto Test, Auto Analysis, and Auto Fix capabilities
- Integration with GitHub Copilot: Seamless integration with GitHub development workflows
- Multi-Chain Support: Built-in support for Ethereum, Solana, Polygon, and more
- Time Savings: Automated workflows reduce manual development time by 40-60%
- Quality Assurance: Continuous testing and analysis catch issues before deployment
- Cost Efficiency: Gas optimization and early bug detection save significant costs
- Developer Experience: Intuitive CLI and API make complex tasks simple
- Security First: Built-in security analysis and vulnerability detection
This quick start guide will get you up and running with SmartBrain in under 10 minutes.
Before you begin, ensure you have:
- Node.js v16.0.0 or higher
- npm v8.0.0 or higher
- Git installed on your system
- A GitHub account (for GitHub Copilot integration)
1. Clone the Repository

   ```bash
   git clone https://github.com/SolanaRemix/SmartBrain.git
   cd SmartBrain
   ```

2. Install Dependencies

   ```bash
   npm install
   ```

3. Set Up Environment

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

4. Run Bootstrap Script

   ```bash
   ./scripts/bootstrap.sh
   ```

5. Verify Installation

   ```bash
   npm test
   npm run lint
   ./scripts/audit.sh
   ```
Let's train a simple model to get familiar with SmartBrain:

```bash
# NOTE: The training CLI is currently a placeholder and not yet implemented.
# The commands below will be updated once training and configuration
# subcommands (config/train) and flags like --output/--epochs are available.
# Current behavior:

# - Running the training CLI with no arguments shows the help/usage output.
node training/cli/index.js
# => (prints help/usage information)

# - Running a specific command prints a placeholder "not yet implemented" message.
node training/cli/index.js train
# => "Training CLI - train command (not yet implemented)"

# For model training and inference, please refer to the dedicated
# training and inference documentation or higher-level scripts.
```

- Read the Architecture Overview to understand SmartBrain's structure
- Explore Automation Features to leverage intelligent automation
- Check out Best Practices for optimal usage
- Join our community for support and updates
This section provides detailed installation and configuration instructions for production environments.
Minimum Requirements:
- CPU: 2 cores
- RAM: 4 GB
- Storage: 10 GB free space
- OS: Linux, macOS, or Windows (with WSL2)
Recommended Requirements:
- CPU: 4+ cores
- RAM: 8+ GB
- Storage: 50+ GB SSD
- OS: Linux (Ubuntu 20.04+ or similar)
```bash
git clone https://github.com/SolanaRemix/SmartBrain.git
cd SmartBrain
```

Install production dependencies:

```bash
npm install --production
```

For development with all dev dependencies:

```bash
npm install
```

Copy the example environment file:

```bash
cp .env.example .env
```

Edit `.env` and configure the following variables:
```bash
# GitHub Integration
GITHUB_TOKEN=your_github_personal_access_token

# Stripe Configuration (Optional, for bot features)
STRIPE_SECRET_KEY=your_stripe_secret_key
STRIPE_PUBLISHABLE_KEY=your_stripe_publishable_key
STRIPE_WEBHOOK_SECRET=your_webhook_secret

# Model Configuration
MODEL_DIR=./models
DATASET_DIR=./datasets

# Inference Configuration
INFERENCE_BATCH_SIZE=32
INFERENCE_TIMEOUT=30000

# Training Configuration
TRAINING_CHECKPOINT_FREQ=5
TRAINING_LOG_LEVEL=info
```

Run the bootstrap script to set up directories and permissions:
```bash
chmod +x ./scripts/bootstrap.sh
./scripts/bootstrap.sh
```

This script will:
- Check Node.js and npm availability
- Create necessary directories
- Install project dependencies
- Copy the `.env` file from the template if needed
- Verify `models/metadata/schema.json` and workflow files exist
- Set appropriate script permissions
Run the comprehensive audit script:

```bash
./scripts/audit.sh
```

This checks:
- ✅ Directory structure
- ✅ Required files and dependencies
- ✅ Configuration validity
- ✅ Permissions
- ✅ Model registry
- ✅ Workflow files
Use the available automated tests and validation commands to verify your installation:
```bash
# Run Jest unit tests
npm test

# Or run unit tests only
npm run test:unit

# Validate model configurations and registry
npm run validate:models

# Validate dataset configurations
npm run validate:datasets

# Verify inference CLI is working (shows top-level help)
node inference/cli/index.js help

# Or verify model info for a specific model
node inference/cli/index.js info --model path/to/model.onnx
```

Edit `models/config.json` to configure model defaults:
```json
{
  "default_framework": "tensorflow",
  "versioning": {
    "strategy": "semantic",
    "auto_increment": true
  },
  "validation": {
    "required_metadata": ["name", "version", "framework", "task"],
    "check_integrity": true
  }
}
```

Global training settings in `training/config.json`:
```json
{
  "defaults": {
    "batch_size": 32,
    "learning_rate": 2e-5,
    "optimizer": "adamw",
    "checkpoint_frequency": 5
  },
  "hardware": {
    "gpu_enabled": true,
    "mixed_precision": true
  }
}
```

Configure inference behavior in `inference/config.json`:
```json
{
  "engine": {
    "batch_size": 32,
    "timeout": 30000,
    "caching": true
  },
  "api": {
    "port": 3000,
    "rate_limit": 100
  }
}
```

**Issue: npm install fails**
```bash
# Clear npm cache
npm cache clean --force

# Try again
npm install
```

**Issue: Permission denied on scripts**

```bash
# Make scripts executable
chmod +x ./scripts/*.sh
```

**Issue: Bootstrap script fails**

```bash
# Check Node.js version
node --version  # Should be >= 16.0.0

# Check npm version
npm --version   # Should be >= 8.0.0
```

**Issue: Tests fail**

```bash
# Ensure all dependencies are installed
npm install

# Run specific test suite
npm run test:unit
```

SmartBrain is built on a modular architecture with clearly separated concerns:
┌─────────────────────────────────────────────────────────────┐
│ SmartBrain Core │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌──────────────────┐ │
│ │ Models │ │ Training │ │ Inference │ │
│ │ │ │ │ │ │ │
│ │ • Registry │ │ • Pipeline │ │ • Engine │ │
│ │ • Metadata │ │ • Configs │ │ • CLI │ │
│ │ • Versions │ │ • Jobs │ │ • API │ │
│ └─────────────┘ └─────────────┘ └──────────────────┘ │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌──────────────────┐ │
│ │ Datasets │ │ Tools │ │ Automation │ │
│ │ │ │ │ │ │ │
│ │ • Validation│ │ • ML Helpers│ │ • Auto Sync │ │
│ │ • Schemas │ │ • Utilities │ │ • Auto Test │ │
│ │ • Storage │ │ • Debuggers │ │ • Auto Analysis │ │
│ └─────────────┘ └─────────────┘ │ • Auto Fix │ │
│ └──────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Models Registry
- Centralized model storage and versioning
- Semantic versioning support (MAJOR.MINOR.PATCH)
- Metadata management and validation
- Multi-framework support (TensorFlow, PyTorch, ONNX)
- Automatic integrity checking
Training Pipeline
- Configuration-driven training workflows
- Checkpoint management and resumption
- Distributed training support
- Real-time metrics tracking
- Automatic hyperparameter validation
Inference Engine
- High-performance model serving
- Batch processing optimization
- Caching and optimization
- CLI and REST API interfaces
- Real-time and batch modes
Dataset Management
- Schema-based validation
- Quality assurance checks
- Version control integration
- Split management (train/val/test)
- Automatic preprocessing
Automation Suite
- Auto Sync: Automated repository synchronization
- Auto Test: Intelligent test execution
- Auto Analysis: Continuous code and model analysis
- Auto Fix: Automated issue resolution
Tools & Utilities
- ML helper functions
- Data preprocessing utilities
- Model debugging tools
- Performance profilers
- Validation scripts
- GitHub Copilot: Agent integration via `.github/copilot/agent.yaml`
- GitHub Actions: CI/CD workflows for training, validation, and deployment
- SmartContract Bots: Integration with deployment and audit bots
- CyberAi Ecosystem: Part of the broader CyberAi infrastructure
Note: the training CLI `config` and `train` subcommands are placeholders until the training CLI is implemented (see the Quick Start note).

```bash
# Generate training configuration
node training/cli/index.js config --output training/configs/my-model.json

# Edit configuration
vim training/configs/my-model.json

# Train model
node training/cli/index.js train \
  --config training/configs/my-model.json \
  --output models/my-model \
  --epochs 10

# Validate model files and metadata
./scripts/validate-model.sh models/my-model
```

Place the model in the `/models` directory with proper metadata:
```json
{
  "name": "smart-contract-classifier",
  "version": "1.0.0",
  "framework": "tensorflow",
  "task": "classification",
  "description": "Classifies smart contract vulnerabilities",
  "author": "SmartBrain Team",
  "created_at": "2025-01-11T00:00:00Z",
  "metrics": {
    "accuracy": 0.95,
    "precision": 0.93,
    "recall": 0.94,
    "f1_score": 0.935
  }
}
```

```bash
# Run inference
node inference/cli/index.js predict \
  --model models/my-model \
  --input data/input.json
```

SmartBrain uses semantic versioning (SemVer) for models:
- MAJOR.MINOR.PATCH (e.g., 2.1.3)
- MAJOR: Incompatible API changes
- MINOR: Backward-compatible functionality additions
- PATCH: Backward-compatible bug fixes
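The ordering implied by MAJOR.MINOR.PATCH can be sketched as a component-by-component numeric comparison. The helpers below are illustrative only and not part of the SmartBrain API:

```javascript
// Illustrative helper (not part of the SmartBrain API): compare two
// MAJOR.MINOR.PATCH strings numerically, component by component.
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] < pb[i] ? -1 : 1;
  }
  return 0; // versions are identical
}

// Resolving "latest" then amounts to sorting the available versions.
function latestVersion(versions) {
  return versions.slice().sort(compareVersions).pop();
}
```

Note that a plain string sort would rank `1.10.0` before `1.2.0`, which is why the comparison must be numeric per component.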
/models
/smart-contract-classifier
/1.0.0
model.h5
metadata.json
README.md
/1.1.0
model.h5
metadata.json
README.md
/2.0.0
model.pb
metadata.json
README.md
```javascript
// Load a specific version
const pinnedModel = loadModel('smart-contract-classifier', '1.1.0');

// Load the latest version
const latestModel = loadModel('smart-contract-classifier', 'latest');
```

Datasets should follow this structure:
```json
{
  "metadata": {
    "name": "smart-contract-vulnerabilities",
    "version": "1.0.0",
    "description": "Dataset of smart contract code samples",
    "size": 10000,
    "split": {
      "train": 0.7,
      "validation": 0.15,
      "test": 0.15
    }
  },
  "data": [
    {
      "id": "sample-001",
      "input": "contract code here",
      "label": "reentrancy",
      "metadata": {
        "source": "etherscan",
        "date": "2024-01-01"
      }
    }
  ]
}
```

```bash
# Validate dataset
node datasets/validation/validate.js \
  --dataset data/my-dataset.json \
  --schema models/metadata/schema.json \
  --verbose
```

- Quality: Ensure high-quality, clean data
- Balance: Balance class distributions
- Splits: Maintain consistent train/val/test splits
- Documentation: Document data sources and processing
- Versioning: Version datasets alongside models
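One concrete split check of the kind a validator might run: verify that the declared train/validation/test ratios sum to 1.0. This is a hypothetical sketch assuming the `metadata.split` field names from the structure above, not the actual validator implementation:

```javascript
// Hypothetical sketch: check that a dataset's declared split ratios sum
// to 1.0. Field names follow the dataset structure documented above.
function validateSplit(metadata) {
  const { train, validation, test } = metadata.split;
  const total = train + validation + test;
  // Allow a small tolerance for floating-point rounding.
  return Math.abs(total - 1.0) < 1e-9;
}
```

For example, `{ train: 0.7, validation: 0.15, test: 0.15 }` passes, while `{ train: 0.8, validation: 0.15, test: 0.15 }` does not.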
```bash
# Single prediction
node inference/cli/index.js predict \
  --model models/vulnerability-detector \
  --input contract.json \
  --output result.json

# Batch inference
node inference/cli/index.js batch \
  --model models/vulnerability-detector \
  --input contracts.json \
  --output results.json \
  --batch-size 32

# Model information
node inference/cli/index.js info \
  --model models/vulnerability-detector
```

```javascript
const express = require('express');
const { InferenceEngine } = require('./inference/engine');

const app = express();
app.use(express.json()); // parse JSON request bodies so req.body is populated
const engine = new InferenceEngine('models/my-model');

app.post('/predict', async (req, res) => {
  const prediction = await engine.predict(req.body);
  res.json(prediction);
});

app.listen(3000);
```

```bash
# Process large dataset
node inference/cli/index.js batch \
  --model models/my-model \
  --input large-dataset.json \
  --output predictions.json \
  --batch-size 64
```

Create a training configuration:
```yaml
model:
  name: vulnerability-detector
  architecture: transformer
  parameters:
    layers: 12
    hidden_size: 768
    num_heads: 12
    dropout: 0.1

training:
  batch_size: 32
  learning_rate: 2e-5
  epochs: 10
  optimizer: adamw
  scheduler: linear_warmup

data:
  train_path: datasets/train.json
  validation_path: datasets/validation.json
  test_path: datasets/test.json
  max_length: 512

output:
  model_dir: models/vulnerability-detector
  checkpoint_dir: models/vulnerability-detector/checkpoints
  save_frequency: 5
```

```bash
# Start training
node training/cli/index.js train \
  --config training/configs/vulnerability-detector.yaml \
  --output models/vulnerability-detector

# Resume from checkpoint
node training/cli/index.js resume \
  --checkpoint models/vulnerability-detector/checkpoints/epoch-5 \
  --config training/configs/vulnerability-detector.yaml
```

Training logs and metrics are saved to the model directory:
models/vulnerability-detector/
├── training.log
├── metrics.json
├── checkpoints/
│ ├── epoch-1/
│ ├── epoch-5/
│ └── epoch-10/
└── metadata.json
Auto Sync is SmartBrain's intelligent repository synchronization feature that automatically keeps your models, datasets, and configurations synchronized across different environments and team members.
Auto Sync monitors your SmartBrain workspace for changes and automatically synchronizes them with remote repositories, ensuring that all team members have access to the latest models, datasets, and configurations. It eliminates manual synchronization tasks and reduces version conflicts.
- Automatic Detection: Monitors file system for changes to models, datasets, and configs
- Smart Synchronization: Only syncs changed files to minimize bandwidth usage
- Conflict Resolution: Intelligent conflict resolution with customizable strategies
- Multi-Repository Support: Sync to multiple Git repositories simultaneously
- Selective Sync: Configure which files and directories to sync
- Real-Time Updates: Near real-time synchronization with configurable intervals
- Audit Trail: Complete history of all synchronization operations
Create or edit .smartbrain/sync.json:
```json
{
  "enabled": true,
  "interval": 300,
  "repositories": [
    {
      "name": "origin",
      "url": "https://github.com/your-org/models.git",
      "branch": "main",
      "paths": ["models/", "datasets/"]
    },
    {
      "name": "backup",
      "url": "https://github.com/your-org/backup.git",
      "branch": "main",
      "paths": ["models/"]
    }
  ],
  "conflict_resolution": "prefer_remote",
  "ignore_patterns": [
    "*.tmp",
    "*/checkpoints/*",
    "*/logs/*"
  ]
}
```

```bash
# Enable Auto Sync
node tools/sync/enable.js

# Check sync status
node tools/sync/status.js

# Manual sync trigger
node tools/sync/trigger.js

# View sync history
node tools/sync/history.js

# Disable Auto Sync
node tools/sync/disable.js
```

```bash
# Enable and configure Auto Sync
/terminal SmartBrain.autoSync --enable

# Check sync status
/terminal SmartBrain.autoSync --status

# Force immediate sync
/terminal SmartBrain.autoSync --now

# Configure sync interval (seconds)
/terminal SmartBrain.autoSync --interval 600
```

1. Automatic (Default)
- Syncs changes every N seconds (configurable)
- Best for: Active development with frequent changes
2. On-Commit
- Syncs only when you commit changes
- Best for: Controlled synchronization
```javascript
const vulnerabilityDetector = new InferenceEngine('models/vulnerability-detector');

async function auditContract(code) {
  const prediction = await vulnerabilityDetector.predict({ code: code });
  return {
    vulnerabilities: prediction.vulnerabilities,
    confidence: prediction.confidence,
    recommendations: prediction.recommendations
  };
}
```
**prefer_remote**: Keep remote changes, discard local changes

```json
{
  "conflict_resolution": "prefer_remote"
}
```

**merge**: Attempt intelligent merge (for compatible file types)

```json
{
  "conflict_resolution": "merge"
}
```

**prompt**: Ask user to resolve conflicts manually

```json
{
  "conflict_resolution": "prompt"
}
```

- Start with Manual Mode: Test your sync configuration before enabling automatic sync
- Use .gitignore Patterns: Exclude temporary files, logs, and checkpoints
- Monitor Initial Sync: Watch the first few sync operations to ensure correct behavior
- Regular Backups: Configure a backup repository for critical models
- Selective Sync: Only sync necessary files to reduce bandwidth and storage
View real-time sync status:
```bash
# Watch sync activity
node tools/sync/watch.js

# View sync logs
tail -f logs/sync.log

# Check for sync errors
node tools/sync/errors.js
```

**Sync conflicts occurring frequently**

```bash
# Review conflict history
node tools/sync/conflicts.js

# Adjust conflict resolution strategy
node tools/sync/config.js --conflict-resolution merge
```

**Sync is too slow**

```bash
# Check what's being synced
node tools/sync/status.js --verbose

# Add ignore patterns for large files
node tools/sync/config.js --ignore "*.h5,*.pb"
```

**Sync not triggering**

```bash
# Check sync service status
node tools/sync/status.js

# Restart sync service
node tools/sync/restart.js
```

Auto Test is SmartBrain's intelligent testing framework that automatically runs appropriate tests when code or models change, ensuring continuous quality assurance throughout the development lifecycle.
Auto Test monitors your workspace for changes and automatically executes relevant test suites. It intelligently determines which tests to run based on what changed, provides detailed reports, and can even suggest fixes for failing tests.
- Intelligent Test Selection: Runs only tests affected by your changes
- Continuous Testing: Automatically runs tests on file changes
- Parallel Execution: Runs multiple test suites simultaneously
- Coverage Tracking: Monitors and reports test coverage metrics
- Failure Analysis: Provides detailed failure reports with suggestions
- Integration Testing: Supports model, dataset, and code testing
- Performance Testing: Tracks model performance over time
- Test History: Maintains history of test results for trend analysis
Auto Test supports multiple test categories:
1. Unit Tests
- Individual function and component testing
- Fast execution for rapid feedback
- Isolated from external dependencies
2. Integration Tests
- Model inference accuracy tests
- Dataset validation tests
- API endpoint tests
3. Performance Tests
- Model inference speed benchmarks
- Memory usage monitoring
- Training performance tests
4. Model Tests
- Accuracy validation against benchmarks
- Inference output consistency
- Model file integrity checks
Create or edit .smartbrain/test.json:
```json
{
  "enabled": true,
  "mode": "smart",
  "test_suites": {
    "unit": {
      "enabled": true,
      "pattern": "tests/**/*.test.js",
      "timeout": 5000
    },
    "integration": {
      "enabled": true,
      "pattern": "tests/integration/**/*.test.js",
      "timeout": 30000
    },
    "model": {
      "enabled": true,
      "pattern": "tests/models/**/*.test.js",
      "timeout": 60000,
      "benchmark": {
        "accuracy_threshold": 0.85,
        "speed_threshold_ms": 100
      }
    }
  },
  "coverage": {
    "enabled": true,
    "threshold": 80,
    "report_formats": ["html", "json", "text"]
  },
  "on_failure": {
    "notify": true,
    "auto_fix": false,
    "create_issue": true
  }
}
```

```bash
# Run all tests (uses the default npm test script)
npm test

# Run specific test suites
npm run test:unit
npm run test:integration

# View test coverage
npm run test:coverage
```

> Note: The `/terminal SmartBrain.autoTest` integration is planned and is not yet available in the current `.github/copilot/agent.yaml` configuration.
```bash
# (Planned) Enable Auto Test
/terminal SmartBrain.autoTest --enable

# (Planned) Run tests
/terminal SmartBrain.autoTest --run

# (Planned) Check test status
/terminal SmartBrain.autoTest --status

# (Planned) View coverage report
/terminal SmartBrain.autoTest --coverage
```

Smart Mode (Recommended)
```json
{
  "mode": "smart"
}
```

- Analyzes changes and runs only affected tests
- Provides fastest feedback
- Automatically escalates to full suite if needed

Full Mode

```json
{
  "mode": "full"
}
```

- Runs complete test suite on every change
- Thorough but slower
- Best for critical changes or pre-deployment

Fast Mode

```json
{
  "mode": "fast"
}
```

- Runs only unit tests for quick feedback
- Skips integration and performance tests
- Best for rapid iteration

Scheduled Mode

```json
{
  "mode": "scheduled",
  "schedule": "0 */4 * * *"
}
```

- Runs tests on a schedule (cron format)
- Best for continuous monitoring
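The "runs only affected tests" behavior of smart mode can be pictured as a mapping from changed paths to test suites, escalating to the full suite when a change falls outside the known rules. The rule table and function below are a hypothetical sketch, not the actual Auto Test implementation:

```javascript
// Hypothetical sketch of smart test selection: map each changed path to
// the suites it can affect; escalate to the full suite for anything that
// no rule covers.
const RULES = [
  { pattern: /^models\//,    suites: ['model'] },
  { pattern: /^datasets\//,  suites: ['integration'] },
  { pattern: /^inference\//, suites: ['unit', 'integration'] },
];
const ALL_SUITES = ['unit', 'integration', 'model'];

function suitesFor(changedPaths) {
  const selected = new Set();
  for (const path of changedPaths) {
    const rule = RULES.find(r => r.pattern.test(path));
    if (!rule) return ALL_SUITES; // unknown change: run everything
    rule.suites.forEach(s => selected.add(s));
  }
  return [...selected];
}
```

A change under `models/` would then trigger only the model suite, while an edit to an unmapped file such as the README falls back to the full suite.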
Auto Test includes specialized model testing capabilities:
```javascript
// tests/models/accuracy.test.js
const { ModelTester } = require('@smartbrain/test');

describe('Model Accuracy Tests', () => {
  it('should maintain accuracy above threshold', async () => {
    const tester = new ModelTester('models/my-model');
    const accuracy = await tester.testAccuracy('datasets/test.json');
    expect(accuracy).toBeGreaterThan(0.85);
  });
});
```

```javascript
// tests/models/performance.test.js
const { ModelTester } = require('@smartbrain/test');

describe('Model Performance Tests', () => {
  it('should complete inference within time limit', async () => {
    const tester = new ModelTester('models/my-model');
    const duration = await tester.testInferenceSpeed({
      samples: 100,
      iterations: 10
    });
    expect(duration).toBeLessThan(100); // milliseconds
  });
});
```

```javascript
// tests/models/regression.test.js
const { ModelTester } = require('@smartbrain/test');

describe('Model Regression Tests', () => {
  it('should produce consistent outputs', async () => {
    const tester = new ModelTester('models/my-model');
    const consistent = await tester.testConsistency({
      input: 'test-input.json',
      iterations: 5
    });
    expect(consistent).toBe(true);
  });
});
```

Auto Test generates comprehensive reports:
```bash
# View latest test report (example via npm script)
npm run test:report

# Generate HTML report
npm run test:report -- --format html --output reports/

# Compare test runs
npm run test:compare -- --runs 5
```

Report Contents:
- Test execution summary
- Pass/fail statistics
- Coverage metrics
- Performance benchmarks
- Failure details with stack traces
- Historical trends
- Suggested fixes
Auto Test integrates seamlessly with GitHub Actions:
```yaml
# .github/workflows/test.yml
name: Auto Test

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Auto Test
        run: node tools/test/run.js --all --ci
      - name: Upload Coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage/coverage.json
```

- Start with Smart Mode: Provides best balance of speed and coverage
- Set Realistic Thresholds: Don't set coverage thresholds too high initially
- Write Fast Unit Tests: Keep unit tests under 5 seconds for quick feedback
- Use Tags: Tag tests by category for selective execution
- Monitor Trends: Watch test performance over time
- Fix Flaky Tests: Address intermittent failures immediately
**Tests running too slowly**

```bash
# Analyze slow tests
node tools/test/analyze.js --slow

# Enable parallel execution
node tools/test/config.js --parallel true
```

**Tests failing intermittently**

```bash
# Identify flaky tests
node tools/test/flaky.js

# Run specific test repeatedly
node tools/test/run.js --test "test-name" --repeat 10
```

Auto Analysis is SmartBrain's continuous code and model analysis system that automatically examines your code, models, and datasets to identify issues, suggest improvements, and provide insights.
Auto Analysis continuously monitors your SmartBrain workspace and automatically performs various types of analysis including code quality checks, model performance analysis, dataset quality assessment, and security vulnerability scanning.
- Code Quality Analysis: Identifies code smells, complexity issues, and style violations
- Model Performance Analysis: Tracks model metrics and performance trends
- Dataset Quality Analysis: Validates dataset quality and identifies issues
- Security Scanning: Detects potential security vulnerabilities
- Dependency Analysis: Monitors dependency health and updates
- Gas Optimization: Analyzes and suggests smart contract gas optimizations
- Complexity Metrics: Tracks code and model complexity
- Trend Analysis: Identifies performance trends over time
Examines code quality, style, and potential issues:
```bash
# Planned: Run code analysis (implementation coming in a future release)
# The final command will be exposed via a dedicated analysis entrypoint,
# such as an npm script or CLI wrapper. For example, it may look like:
#
# npm run smart:analyze -- --files "src/**/*.js" --score
#
# Refer to the latest README or release notes for the actual command
# once the analysis tooling has been implemented.
```

Checks:
- Code complexity (cyclomatic complexity)
- Code duplication
- Style violations
- Potential bugs
- Security vulnerabilities
- Documentation coverage
Analyzes model performance and characteristics:
```bash
# Analyze model
npm run smart:analyze -- --mode model --model models/my-model

# Compare model versions
npm run smart:analyze -- --mode model --compare v1.0.0 v1.1.0

# Performance trends
npm run smart:analyze -- --mode model --trends --days 30
```

Metrics:
- Accuracy, precision, recall, F1 score
- Inference latency and throughput
- Model size and complexity
- Resource utilization
- Prediction confidence distribution
Examines dataset quality and characteristics:
```bash
# Analyze dataset
node tools/analysis/dataset.js --dataset datasets/my-data.json

# Quality report
node tools/analysis/dataset.js --quality

# Distribution analysis
node tools/analysis/dataset.js --distribution
```

Checks:
- Data distribution and balance
- Missing values and outliers
- Feature correlations
- Data quality metrics
- Schema compliance
Scans for security vulnerabilities:
```bash
# Security scan
node tools/analysis/security.js

# Check dependencies
node tools/analysis/security.js --dependencies

# Smart contract analysis
node tools/analysis/security.js --contracts
```

Scans For:
- Known CVE vulnerabilities
- Private key exposure
- Insecure configurations
- Smart contract vulnerabilities
- Dependency issues
Create or edit .smartbrain/analysis.json:
```json
{
  "enabled": true,
  "schedule": "0 */6 * * *",
  "analyses": {
    "code": {
      "enabled": true,
      "complexity_threshold": 10,
      "duplication_threshold": 5,
      "exclude_patterns": ["node_modules/", "tests/"]
    },
    "model": {
      "enabled": true,
      "performance_threshold": {
        "accuracy": 0.80,
        "latency_ms": 200
      },
      "track_trends": true
    },
    "dataset": {
      "enabled": true,
      "quality_threshold": 0.90,
      "check_distribution": true
    },
    "security": {
      "enabled": true,
      "severity_threshold": "medium",
      "auto_update_dependencies": false
    }
  },
  "reporting": {
    "format": "json",
    "output_dir": "reports/analysis",
    "notify_on_issues": true
  }
}
```

```bash
# Enable Auto Analysis
node tools/analysis/enable.js

# Configure analysis types
node tools/analysis/config.js --enable code,model,dataset,security

# Set analysis schedule
node tools/analysis/schedule.js --cron "0 */6 * * *"

# Run analysis now
node tools/analysis/run.js --all
```

Auto Analysis generates comprehensive reports with actionable insights:
```bash
# View latest analysis report
node tools/analysis/report.js

# Generate detailed report
node tools/analysis/report.js --detailed --format html

# View specific analysis
node tools/analysis/report.js --type model

# Compare reports
node tools/analysis/compare.js --dates 2025-01-01 2025-01-15
```

Report Sections:
- Executive Summary: High-level overview of findings
- Critical Issues: Urgent problems requiring immediate attention
- Warnings: Potential issues to investigate
- Recommendations: Suggested improvements
- Metrics: Quantitative measurements and trends
- Historical Comparison: Changes over time
Auto Analysis tracks various code quality metrics:
Complexity Score: Measures code complexity (0-100, lower is better)

- Score < 10: Excellent
- Score 10-20: Good
- Score 20-40: Moderate
- Score > 40: High complexity (refactoring recommended)

Maintainability Index: Overall maintainability (0-100, higher is better)

- Score > 80: Highly maintainable
- Score 60-80: Maintainable
- Score 40-60: Moderate maintainability
- Score < 40: Low maintainability

Duplication Percentage: Amount of duplicated code

- < 3%: Excellent
- 3-5%: Good
- 5-10%: Acceptable
- > 10%: Too much duplication
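The complexity bands above amount to a simple threshold lookup; as an illustration (the function name is hypothetical, not part of the analysis CLI):

```javascript
// Illustrative mapping of the complexity bands listed above to labels.
// Hypothetical helper, not part of the SmartBrain analysis tooling.
function complexityRating(score) {
  if (score < 10) return 'Excellent';
  if (score <= 20) return 'Good';
  if (score <= 40) return 'Moderate';
  return 'High complexity (refactoring recommended)';
}
```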
Track model performance over time:
```bash
# View performance trends
node tools/analysis/trends.js --model my-model --metric accuracy --days 30

# Compare model versions
node tools/analysis/compare-models.js v1.0.0 v2.0.0

# Performance regression detection
node tools/analysis/regression.js --model my-model
```

Tracked Metrics:
- Accuracy over time
- Inference latency trends
- Resource utilization
- Error rates
- Prediction confidence
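One common shape for regression detection over such metric histories is comparing the mean of the most recent window against the preceding window. The sketch below is hypothetical, not the logic of `tools/analysis/regression.js`:

```javascript
// Hypothetical sketch of regression detection: flag a regression when the
// mean of the most recent window drops noticeably below the prior window.
function detectRegression(accuracyHistory, window = 3, tolerance = 0.01) {
  if (accuracyHistory.length < window * 2) return false; // not enough data
  const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
  const recent = mean(accuracyHistory.slice(-window));
  const previous = mean(accuracyHistory.slice(-window * 2, -window));
  return recent < previous - tolerance;
}
```

The tolerance keeps ordinary run-to-run noise from being reported as a regression.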
```yaml
# .github/workflows/analysis.yml
name: Auto Analysis

on:
  schedule:
    - cron: '0 */6 * * *'
  push:
    branches: [main]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Auto Analysis
        run: node tools/analysis/run.js --all --ci
      - name: Upload Report
        uses: actions/upload-artifact@v3
        with:
          name: analysis-report
          path: reports/analysis/
```

- Regular Schedule: Run analysis at least daily for active projects
- Act on Findings: Review and address critical issues promptly
- Track Trends: Monitor metrics over time to identify patterns
- Set Thresholds: Configure appropriate thresholds for your project
- Integrate with CI: Make analysis part of your CI/CD pipeline
**Analysis taking too long**

```bash
# Run specific analysis types
node tools/analysis/run.js --type code

# Exclude unnecessary files
node tools/analysis/config.js --exclude "tests/,docs/"
```

**Too many false positives**

```bash
# Adjust sensitivity
node tools/analysis/config.js --sensitivity medium

# Configure ignore patterns
node tools/analysis/config.js --ignore-pattern "*.generated.js"
```

Auto Fix is SmartBrain's intelligent automated issue resolution system that can automatically fix common problems in code, models, and configurations based on analysis findings.
Auto Fix analyzes issues detected by Auto Analysis and other tools, and automatically applies fixes for common problems. It can fix code style issues, optimize configurations, update dependencies, and even suggest model improvements.
- Automatic Code Fixes: Fixes style violations, simple bugs, and code smells
- Configuration Optimization: Optimizes training and inference configurations
- Dependency Updates: Safely updates outdated dependencies
- Model Optimization: Applies model optimization techniques
- Smart Contract Gas Optimization: Reduces gas costs automatically
- Safe Execution: Creates backups before applying fixes
- Preview Mode: Review fixes before applying them
- Rollback Support: Undo fixes if needed
Automatically fixes code issues:
```bash
# Run default auto-fix on the current project
/terminal SmartBrain.fix

# Fix configuration-related issues
/terminal SmartBrain.fix configs

# Fix permission-related issues
/terminal SmartBrain.fix permissions

# Run all available auto-fixes
/terminal SmartBrain.fix all
```

Fixable Issues:
- Style violations (indentation, spacing, quotes)
- Unused variables and imports
- Simple logic errors
- Missing semicolons
- Inconsistent naming
- Missing documentation
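As a toy illustration of the mechanical end of this list, a trailing-whitespace fixer is a one-liner; real auto-fixers for bugs and naming work on a parsed AST rather than regexes. Hypothetical sketch, not SmartBrain's fixer:

```javascript
// Illustrative example of a mechanical style fix (trailing-whitespace
// removal). Hypothetical helper, not part of the Auto Fix tooling.
function stripTrailingWhitespace(source) {
  return source
    .split('\n')
    .map(line => line.replace(/[ \t]+$/, ''))
    .join('\n');
}
```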
Optimizes and fixes model issues:
```bash
# NOTE: Placeholder commands — the model fix CLI is not yet exposed as
# a public entrypoint in this repository. These examples are illustrative
# only and will be updated in a future release once the actual CLI/API
# for model auto-fix is finalized.
#
# Example (placeholder) usage:
#
# Optimize model
# node tools/fix/model.js --model models/my-model --optimize
#
# Fix model metadata
# node tools/fix/model.js --model models/my-model --metadata
#
# Convert model format
# node tools/fix/model.js --model models/my-model --convert onnx
```

Fixable Issues:
- Incorrect metadata
- Missing required fields
- Suboptimal model format
- Inefficient model structure
- Missing documentation
Optimizes configuration files:
# Fix training config
node tools/fix/config.js --file training/configs/my-config.json
# Optimize for performance
node tools/fix/config.js --optimize performance
# Optimize for accuracy
node tools/fix/config.js --optimize accuracy
Fixable Issues:
- Suboptimal hyperparameters
- Incorrect paths
- Missing required fields
- Inefficient batch sizes
- Incorrect data types
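The kinds of configuration checks listed above can be sketched as plain validation. The field names here mirror the training config examples elsewhere in this guide but are assumptions, not a fixed schema:

```javascript
// Illustrative only: sanity checks of the sort the config fixer performs.
function checkTrainingConfig(config) {
  const issues = [];
  if (!config.model_path) {
    issues.push('missing required field: model_path');
  }
  // Power-of-two batch sizes tend to use hardware more efficiently.
  if (config.batch_size && (config.batch_size & (config.batch_size - 1)) !== 0) {
    issues.push('batch_size is not a power of two');
  }
  if (config.learning_rate && config.learning_rate > 0.1) {
    issues.push('learning_rate looks unusually high');
  }
  return issues;
}

console.log(checkTrainingConfig({ batch_size: 48, learning_rate: 0.001 }));
// → [ 'missing required field: model_path', 'batch_size is not a power of two' ]
```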
Updates and fixes dependencies:
# Update outdated dependencies
node tools/fix/dependencies.js --update
# Fix security vulnerabilities
node tools/fix/dependencies.js --security
# Remove unused dependencies
node tools/fix/dependencies.js --unused
Fixable Issues:
- Outdated dependencies
- Security vulnerabilities
- Unused dependencies
- Version conflicts
- Missing dependencies
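The "safely updates" behavior above usually means applying only same-major (minor/patch) bumps automatically and leaving major bumps for review. A sketch of that decision, using data shaped loosely like `npm outdated --json` output:

```javascript
// Illustrative only: pick the dependency updates that are "safe" to apply
// automatically (same major version). Major bumps need manual review.
function safeUpdates(outdated) {
  const major = (v) => parseInt(v.split('.')[0], 10);
  return Object.entries(outdated)
    .filter(([, info]) => major(info.latest) === major(info.current))
    .map(([name, info]) => `${name}: ${info.current} -> ${info.latest}`);
}

const report = {
  lodash: { current: '4.17.20', latest: '4.17.21' },  // patch bump: safe
  express: { current: '4.18.2', latest: '5.0.0' },    // major bump: review
};
console.log(safeUpdates(report)); // → [ 'lodash: 4.17.20 -> 4.17.21' ]
```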
Create or edit .smartbrain/fix.json:
{
"enabled": true,
"mode": "preview",
"categories": {
"code": {
"enabled": true,
"auto_apply": false,
"types": ["style", "bugs", "naming"]
},
"model": {
"enabled": true,
"auto_apply": false,
"optimize": true
},
"config": {
"enabled": true,
"auto_apply": true,
"optimize_for": "balanced"
},
"dependencies": {
"enabled": true,
"auto_apply": false,
"security_only": true
}
},
"safety": {
"create_backup": true,
"require_confirmation": true,
"max_fixes_per_run": 50
},
"rollback": {
"enabled": true,
"keep_backups": 5
}
}
# Enable Auto Fix
node tools/fix/enable.js
# Set to preview mode (safe)
node tools/fix/mode.js --preview
# Set to auto mode (applies fixes automatically)
node tools/fix/mode.js --auto
# Enable specific fix categories
node tools/fix/config.js --enable code,config,dependencies
# Enable Auto Fix
/terminal SmartBrain.fix --enable
# Preview fixes
/terminal SmartBrain.fix --preview
# Apply fixes
/terminal SmartBrain.fix --apply
# Rollback last fixes
/terminal SmartBrain.fix --rollback
Preview Mode (Default)
{
"mode": "preview"
}
- Shows what would be fixed without applying changes
- Safest option for testing
- Generates detailed fix reports
Interactive Mode
{
"mode": "interactive"
}
- Shows each fix and asks for confirmation
- Good balance of automation and control
- Allows selective application
Auto Mode
{
"mode": "auto"
}
- Automatically applies fixes
- Fastest but requires trust in the system
- Always creates backups
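The "always creates backups" guarantee boils down to a backup-before-apply pattern. A minimal in-memory sketch (the real tool writes backup files to disk and tracks them by session):

```javascript
// Illustrative only: snapshot the target before applying a fix, and hand
// back a rollback function that restores the snapshot.
function applyWithBackup(files, path, transform) {
  const backup = files[path];            // snapshot before touching the file
  files[path] = transform(files[path]);  // apply the fix
  return () => { files[path] = backup; }; // rollback handle
}

const project = { 'a.js': 'var x = 1' };
const rollback = applyWithBackup(project, 'a.js', (s) => s.replace('var', 'let'));
console.log(project['a.js']); // → let x = 1
rollback();
console.log(project['a.js']); // → var x = 1
```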
Auto Fix includes multiple safety mechanisms:
1. Automatic Backups
# List backups
node tools/fix/backups.js --list
# Restore from backup
node tools/fix/backups.js --restore <backup-id>
# Clean old backups
node tools/fix/backups.js --clean --older-than 30d
2. Fix Validation
- Validates syntax after code fixes
- Tests configuration after config fixes
- Verifies model integrity after model fixes
- Runs tests after applying fixes
3. Rollback Support
# Rollback last fix
node tools/fix/rollback.js
# Rollback specific fix session
node tools/fix/rollback.js --session <session-id>
# View rollback history
node tools/fix/rollback.js --history
Auto Fix generates detailed reports for all operations:
# View latest fix report
node tools/fix/report.js
# View specific fix session
node tools/fix/report.js --session <session-id>
# Generate detailed HTML report
node tools/fix/report.js --format html --output reports/
Report Contents:
- Fixes applied
- Files modified
- Validation results
- Before/after comparisons
- Rollback information
- Recommendations
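Consuming a report programmatically might look like the sketch below. The report shape here is a hypothetical simplification, not the exact format emitted by tools/fix/report.js:

```javascript
// Illustrative only: summarize a fix report (assumed shape) into counts
// and the list of files that were actually modified.
function summarize(report) {
  const applied = report.fixes.filter((f) => f.applied);
  return {
    total: report.fixes.length,
    applied: applied.length,
    files: [...new Set(applied.map((f) => f.file))].sort(),
  };
}

const report = {
  fixes: [
    { file: 'src/a.js', type: 'style', applied: true },
    { file: 'src/a.js', type: 'bugs', applied: true },
    { file: 'src/b.js', type: 'naming', applied: false },
  ],
};
console.log(summarize(report));
// → { total: 3, applied: 2, files: [ 'src/a.js' ] }
```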
Auto Fix includes specialized gas optimization for smart contracts:
# Analyze and fix gas issues
node tools/fix/gas.js --contract contracts/MyContract.sol
# Preview gas optimizations
node tools/fix/gas.js --contract contracts/MyContract.sol --preview
# Apply specific optimizations
node tools/fix/gas.js --contract contracts/MyContract.sol --types storage,loops
Optimization Types:
- Storage layout optimization
- Loop unrolling
- Function visibility optimization
- Variable packing
- Short-circuit evaluation
- Batch operations
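To see why variable packing saves gas: Solidity packs consecutive state variables into 32-byte storage slots, so ordering small types together reduces the slot count. The sketch below simulates that layout rule (a simplification of the actual Solidity storage layout):

```javascript
// Illustrative only: greedy 32-byte slot assignment, mirroring how Solidity
// packs consecutive state variables. Sizes are in bytes.
function slotCount(sizes) {
  let slots = 0;
  let used = 32; // force a new slot for the first variable
  for (const size of sizes) {
    if (used + size > 32) { slots += 1; used = 0; } // start a fresh slot
    used += size;
  }
  return slots;
}

// uint128, uint256, uint128 in a poor order vs. a packed order:
console.log(slotCount([16, 32, 16])); // → 3 slots (nothing shares a slot)
console.log(slotCount([16, 16, 32])); // → 2 slots (the two uint128 share one)
```

Reordering fields this way is exactly the "storage layout optimization" the gas fixer targets: one fewer slot means one fewer cold SSTORE when both small fields are written.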
# .github/workflows/autofix.yml
name: Auto Fix
on:
schedule:
- cron: '0 2 * * *'
jobs:
fix:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run Auto Fix
run: node tools/fix/run.js --preview --ci
- name: Create PR with fixes
if: success()
uses: peter-evans/create-pull-request@v5
with:
title: 'Auto Fix: Automated fixes'
body: 'Automated fixes applied by SmartBrain Auto Fix'
branch: autofix/automated-fixes
- Start with Preview Mode: Always test fixes before auto-applying
- Enable Backups: Always create backups before applying fixes
- Selective Fixing: Start with safe categories (style, config)
- Review Changes: Review auto-applied fixes regularly
- Test After Fixing: Run tests after applying fixes
- Keep Backups: Maintain recent backup history
Create custom fix rules in .smartbrain/fix-rules.json:
{
"custom_rules": [
{
"name": "enforce-naming-convention",
"pattern": "function\\s+([a-z])",
"replacement": "function $1",
"description": "Enforce camelCase for functions"
}
]
}
Create custom fix plugins:
// tools/fix/plugins/my-fixer.js
module.exports = {
name: 'my-fixer',
description: 'Custom fix logic',
async analyze(files) {
// Analyze files and return issues
return issues;
},
async fix(issue) {
// Apply fix for the issue
return fixResult;
},
async validate(fixResult) {
// Validate the fix was successful
return isValid;
}
};
Fixes not being applied
# Check fix status
node tools/fix/status.js
# View fix logs
tail -f logs/fix.log
# Test fix manually
node tools/fix/test.js --issue <issue-id>
Rollback not working
# Check backup existence
node tools/fix/backups.js --list
# Force rollback
node tools/fix/rollback.js --force --session <session-id>
Too many fixes proposed
# Adjust sensitivity
node tools/fix/config.js --sensitivity low
# Limit fix types
node tools/fix/config.js --types style,bugs
# Set max fixes
node tools/fix/config.js --max-fixes 20
SmartBrain integrates with GitHub Copilot terminal commands:
# Check SmartBrain status
/terminal SmartBrain.status
# Validate models and configurations
/terminal SmartBrain.validate
# Run inference
/terminal SmartBrain.inference --model my-model --input data.json
# Run training
/terminal SmartBrain.train --config training/configs/my-config.json
# List and manage models
/terminal SmartBrain.models
# Auto-fix common issues
/terminal SmartBrain.fix
# Get system status
$ /terminal SmartBrain.status
✓ Models: 5 registered
✓ Training jobs: 2 running
✓ Inference engine: Ready
✓ Datasets: 10 validated
# Validate everything
$ /terminal SmartBrain.validate
Validating models... ✓
Validating datasets... ✓
Validating configurations... ✓
# List models
$ /terminal SmartBrain.models
Available models:
- vulnerability-detector (v2.1.0)
- gas-optimizer (v1.5.0)
- code-classifier (v3.0.0)
SmartBrain is part of the CyberAi ecosystem:
CyberAi Ecosystem
├── SmartBrain (AI/ML Engine)
├── SmartContractDeploy Bot
├── SmartContractAudit Bot
└── Additional Components
SmartBrain provides ML capabilities to other bots:
Conceptual example – illustrative only; not using the current SmartBrain public API.
// Conceptual example: how a SmartContractAudit bot might call an inference engine
const { InferenceEngine } = require('@smartbrain/inference');
const vulnerabilityDetector = new InferenceEngine(
'models/vulnerability-detector'
);
async function auditContract(code) {
const prediction = await vulnerabilityDetector.predict({
code: code
});
return {
vulnerabilities: prediction.vulnerabilities,
confidence: prediction.confidence,
recommendations: prediction.recommendations
};
}
GitHub Actions workflows can trigger SmartBrain operations:
- name: Run Model Validation
run: |
./scripts/validate-model.sh models/my-model
- name: Run Inference
run: |
node inference/cli/index.js predict \
--model models/my-model \
--input data/input.json
// Load model
const engine = new InferenceEngine('models/my-model');
// Single prediction
const result = await engine.predict(inputData);
// Batch prediction
const results = await engine.predictBatch(inputDataArray);
// Get model info
const info = engine.getModelInfo();
// Create trainer
const trainer = new ModelTrainer(config);
// Start training
await trainer.train();
// Resume from checkpoint
await trainer.resume(checkpointPath);
// Evaluate model
const metrics = await trainer.evaluate(testData);
// Validate dataset
const validator = new DatasetValidator(schema);
const isValid = validator.validate(dataset);
// Get validation errors
const errors = validator.getErrors();
// Calculate statistics
const stats = validator.getStatistics();
- Version Control: Always version models using semantic versioning
- Metadata: Include comprehensive metadata with every model
- Documentation: Document model architecture, training, and usage
- Validation: Validate models before deployment
- Testing: Test models on diverse datasets
- Configuration: Use configuration files for reproducibility
- Checkpoints: Save checkpoints regularly
- Monitoring: Monitor training metrics continuously
- Validation: Validate on held-out data during training
- Experimentation: Track experiments with metadata
- Input Validation: Validate all inputs before inference
- Error Handling: Handle inference errors gracefully
- Performance: Optimize for latency and throughput
- Monitoring: Monitor inference performance
- Versioning: Use specific model versions in production
- Model Integrity: Validate model checksums
- Access Control: Restrict access to sensitive models
- Input Sanitization: Sanitize all user inputs
- Secrets: Never commit secrets or credentials
- Updates: Keep dependencies updated
- Testing: Test thoroughly before deployment
- Rollback: Have rollback procedures ready
- Monitoring: Set up monitoring and alerts
- Documentation: Update documentation
- Communication: Communicate changes to users
- Enable Auto Sync: Keep models and datasets synchronized
- Use Auto Test: Ensure continuous quality assurance
- Leverage Auto Analysis: Monitor code and model quality
- Configure Auto Fix: Automate routine maintenance tasks
- Review Automation Reports: Regularly check automation outputs
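Several of the inference best practices above (input validation, graceful error handling) can be sketched together in one wrapper. The `engine` here is a stand-in with the same `predict` shape as the conceptual InferenceEngine examples in this guide; the validation rules are assumptions for illustration:

```javascript
// Illustrative only: validate input before inference, and never let raw
// inference errors propagate to callers.
async function safePredict(engine, input) {
  if (!input || typeof input.code !== 'string' || input.code.length === 0) {
    return { ok: false, error: 'invalid input: expected a non-empty code string' };
  }
  try {
    const prediction = await engine.predict(input);
    return { ok: true, prediction };
  } catch (err) {
    // Surface a structured error instead of crashing the caller.
    return { ok: false, error: err.message };
  }
}

// Stub engine for demonstration only.
const engine = { predict: async () => ({ label: 'safe', confidence: 0.97 }) };
safePredict(engine, { code: 'contract C {}' }).then((r) => console.log(r.ok)); // → true
safePredict(engine, {}).then((r) => console.log(r.ok));                        // → false
```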
SmartBrain seamlessly integrates with CI/CD pipelines, particularly GitHub Actions, to automate your ML and smart contract development workflows.
Create .github/workflows/smartbrain.yml:
name: SmartBrain CI/CD Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
schedule:
- cron: '0 2 * * *' # Daily at 2 AM
jobs:
validation:
name: Validate Infrastructure
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
cache: 'npm'
- name: Install Dependencies
run: npm ci
- name: Run Bootstrap Script
run: ./scripts/bootstrap.sh
- name: Run Audit Script
run: ./scripts/audit.sh
- name: Validate Models
run: npm run validate:models
- name: Validate Datasets
run: npm run validate:datasets
lint-and-format:
name: Lint and Format
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
- name: Install Dependencies
run: npm ci
- name: Run Linter
run: npm run lint
- name: Check Formatting
run: npm run format:check
auto-test:
name: Run Auto Test
runs-on: ubuntu-latest
needs: [validation, lint-and-format]
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
- name: Install Dependencies
run: npm ci
- name: Run Unit Tests
run: npm run test:unit
- name: Run Integration Tests
run: npm run test:integration
- name: Generate Coverage Report
run: npm run test:coverage
- name: Upload Coverage
uses: codecov/codecov-action@v3
with:
files: ./coverage/coverage.json
auto-analysis:
name: Run Auto Analysis
runs-on: ubuntu-latest
needs: [validation]
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
- name: Install Dependencies
run: npm ci
- name: Run Code Analysis
run: node tools/analysis/code.js --ci
- name: Run Model Analysis
run: node tools/analysis/model.js --all --ci
- name: Run Security Analysis
run: node tools/analysis/security.js --ci
- name: Upload Analysis Report
uses: actions/upload-artifact@v3
with:
name: analysis-report
path: reports/analysis/
auto-fix:
name: Auto Fix Issues
runs-on: ubuntu-latest
needs: [auto-test, auto-analysis]
if: github.event_name == 'schedule'
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
- name: Install Dependencies
run: npm ci
- name: Run Auto Fix (illustrative)
run: echo "Run your auto-fix tooling here (e.g., npm run lint -- --fix)"
- name: Create Pull Request
if: success()
uses: peter-evans/create-pull-request@v5
with:
token: ${{ secrets.GITHUB_TOKEN }}
commit-message: 'chore: Auto Fix - Automated fixes'
title: 'Auto Fix: Automated fixes from SmartBrain'
body: |
Automated fixes applied by SmartBrain Auto Fix.
Please review the changes carefully before merging.
branch: autofix/automated-fixes
delete-branch: true
model-training:
name: Train Models
runs-on: ubuntu-latest
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
needs: [auto-test]
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
- name: Install Dependencies
run: npm ci
- name: Train Model
run: node training/cli/index.js train --config "$TRAINING_CONFIG_PATH"
timeout-minutes: 120
- name: Validate Trained Model
run: ./scripts/validate-model.sh models/production-model
- name: Upload Model Artifacts
uses: actions/upload-artifact@v3
with:
name: trained-model
path: models/production-model/
auto-sync:
name: Sync Models and Datasets
runs-on: ubuntu-latest
needs: [model-training]
if: success()
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
- name: Install Dependencies
run: npm ci
- name: Run Auto Sync (placeholder)
run: |
echo "Auto Sync tooling is not yet implemented."
echo "Replace this step with your sync implementation when available."
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
deploy:
name: Deploy to Production
runs-on: ubuntu-latest
needs: [model-training, auto-sync]
if: github.ref == 'refs/heads/main'
environment:
name: production
url: https://smartbrain.example.com
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
- name: Install Dependencies
run: npm ci
- name: Deploy Application
run: npm run deploy
env:
DEPLOYMENT_KEY: ${{ secrets.DEPLOYMENT_KEY }}
.gitlab-ci.yml:
stages:
- validate
- test
- analyze
- deploy
variables:
NODE_VERSION: "16"
validate:
stage: validate
image: node:${NODE_VERSION}
script:
- npm ci
- ./scripts/bootstrap.sh
- ./scripts/audit.sh
- npm run validate:models
test:
stage: test
image: node:${NODE_VERSION}
script:
- npm ci
- npm run test
- npm run lint
coverage: '/Coverage: \d+\.\d+/'
analyze:
stage: analyze
image: node:${NODE_VERSION}
script:
- npm ci
- node tools/analysis/run.js --all --ci
artifacts:
paths:
- reports/analysis/
deploy:
stage: deploy
image: node:${NODE_VERSION}
script:
- npm ci
- npm run deploy
only:
- main
Jenkinsfile:
pipeline {
agent any
environment {
NODE_VERSION = '16'
}
stages {
stage('Setup') {
steps {
sh 'npm ci'
sh './scripts/bootstrap.sh'
}
}
stage('Validate') {
steps {
sh './scripts/audit.sh'
sh 'npm run validate:models'
sh 'npm run validate:datasets'
}
}
stage('Test') {
steps {
sh 'npm test'
sh 'npm run lint'
}
}
stage('Analyze') {
steps {
sh 'node tools/analysis/run.js --all --ci'
}
}
stage('Deploy') {
when {
branch 'main'
}
steps {
sh 'npm run deploy'
}
}
}
post {
always {
junit 'reports/test-results.xml'
publishHTML([
reportDir: 'reports/analysis',
reportFiles: 'index.html',
reportName: 'Analysis Report'
])
}
}
}
Set up automated model retraining:
# .github/workflows/retrain-models.yml
name: Retrain Models
on:
schedule:
- cron: '0 0 * * 0' # Weekly on Sunday
workflow_dispatch: # Manual trigger
jobs:
retrain:
runs-on: ubuntu-latest
strategy:
matrix:
model: [classifier, detector, optimizer]
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '16'
- name: Install Dependencies
run: npm ci
- name: Download Latest Dataset
run: node scripts/download-dataset.js ${{ matrix.model }}
- name: Train Model
run: |
node training/cli/index.js train \
--config training/configs/${{ matrix.model }}.json \
--output models/${{ matrix.model }}-new
- name: Validate Model
run: ./scripts/validate-model.sh models/${{ matrix.model }}-new
- name: Compare Performance
run: node scripts/compare-models.js models/${{ matrix.model }} models/${{ matrix.model }}-new
- name: Upload Model
if: success()
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.model }}-model
path: models/${{ matrix.model }}-new/
jobs:
deploy-blue-green:
runs-on: ubuntu-latest
steps:
- name: Deploy to Green Environment
run: |
npm run deploy:green
npm run health-check:green
- name: Switch Traffic to Green
run: npm run switch-traffic:green
- name: Monitor Green Environment
run: npm run monitor:green --duration 300
- name: Rollback if Issues Detected
if: failure()
run: npm run switch-traffic:blue
jobs:
deploy-canary:
runs-on: ubuntu-latest
steps:
- name: Deploy Canary Version
run: npm run deploy:canary --percentage 10
- name: Monitor Canary Metrics
run: npm run monitor:canary --duration 600
- name: Increase Canary Traffic
if: success()
run: |
npm run deploy:canary --percentage 50
sleep 600
npm run deploy:canary --percentage 100
This section covers common issues and their solutions.
Symptoms:
npm ERR! code EACCES
npm ERR! syscall access
Solution:
# Fix npm permissions
sudo chown -R $USER ~/.npm
sudo chown -R $USER /usr/local/lib/node_modules
# Clear cache and retry
npm cache clean --force
npm install
Symptoms:
Error: Node.js version 14.x is not supported
Solution:
# Check Node.js version
node --version
# Update Node.js to v16 or higher
# Using nvm:
nvm install 16
nvm use 16
# Using apt (Ubuntu/Debian):
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt-get install -y nodejs
Symptoms:
Error: JavaScript heap out of memory
Solution:
# Increase Node.js memory limit
export NODE_OPTIONS="--max-old-space-size=4096"
# Reduce batch size in config
node tools/fix/config.js --batch-size 16
# Enable gradient accumulation
node training/cli/index.js train \
--config training/configs/my-model.json \
--accumulation-steps 2
Symptoms: Training stops without an error message, or checkpoint creation fails
Solution:
# Check available disk space
df -h
# Verify write permissions
ls -la models/
# Enable verbose logging
node training/cli/index.js train \
--config training/configs/my-model.json \
--log-level debug
Symptoms: Training accuracy is high but validation accuracy is low
Solution:
// Adjust training configuration
{
"training": {
"dropout": 0.3, // Increase dropout
"l2_regularization": 0.01, // Add regularization
"early_stopping": {
"enabled": true,
"patience": 5,
"monitor": "val_loss"
}
}
}
Symptoms: Single prediction takes several seconds
Solution:
# Enable caching
node tools/fix/config.js --inference-cache true
# Use batch inference
node inference/cli/index.js batch \
--model models/my-model \
--input data.json \
--batch-size 32
# Convert model to optimized format
node tools/model/optimize.js --model models/my-model
Symptoms:
Error: Invalid prediction output: NaN
Solution:
# Validate model integrity
./scripts/validate-model.sh models/my-model
# Check input data format
node tools/validation/validate-input.js --input data.json
# Re-run with debug mode
node inference/cli/index.js predict \
--model models/my-model \
--input data.json \
--debug
Symptoms:
Error: Sync conflict detected in models/my-model/metadata.json
Solution:
# View conflict details
node tools/sync/conflicts.js
# Resolve manually
node tools/sync/resolve.js --conflict <conflict-id> --strategy prefer_local
# Or use automatic resolution
node tools/sync/config.js --conflict-resolution merge
Symptoms: Files not syncing despite changes
Solution:
# Check sync status
node tools/sync/status.js
# Verify configuration
cat .smartbrain/sync.json
# Restart sync service
node tools/sync/restart.js
# Force manual sync
node tools/sync/trigger.js --force
Symptoms:
Error: Test timeout after 30000ms
Solution:
// Increase timeout in .smartbrain/test.json
{
"test_suites": {
"integration": {
"timeout": 60000
}
}
}
Symptoms: Tests pass sometimes and fail other times
Solution:
# Identify flaky tests
node tools/test/flaky.js
# Run test multiple times
node tools/test/run.js --test "test-name" --repeat 10
# Fix identified flaky tests
# Common causes: timing issues, external dependencies, random data
Symptoms: Analysis runs for hours without completing
Solution:
# Run selective analysis
node tools/analysis/run.js --type code
# Exclude large directories
node tools/analysis/config.js --exclude "node_modules/,dist/,*.log"
# Reduce analysis depth
node tools/analysis/config.js --depth 2
Symptoms: Tests pass before the fix but fail after it
Solution:
# Rollback the fix
node tools/fix/rollback.js
# Review what was changed
node tools/fix/report.js --session <session-id>
# Adjust fix configuration
node tools/fix/config.js --types style # Only fix style issues
# Re-run with preview mode
node tools/fix/run.js --preview
# Set debug environment variable
export DEBUG=smartbrain:*
# Or use debug flag
node <command> --debug
# View logs
tail -f logs/smartbrain.log
# Comprehensive status check
/terminal SmartBrain.status
# Check specific components
node tools/status/models.js
node tools/status/datasets.js
node tools/status/training.js
node tools/status/inference.js
# Command help
node <command> --help
# View documentation
cat docs/index.md | less
# Check version
node --version
npm --version
Optimize SmartBrain for maximum performance in your environment.
Reduce model size and improve inference speed:
# Example (conceptual) workflow:
#
# 1. Use your framework's quantization tooling to convert the model to INT8.
# - For example, with:
# - PyTorch: torch.ao.quantization / torch.quantization APIs
# - ONNX: onnxruntime.quantization (e.g., quantize_dynamic)
# Save the result as: models/my-model-quantized
#
# 2. Evaluate the baseline vs. quantized model on your validation dataset.
# - Run inference with models/my-model and models/my-model-quantized
# - Compare accuracy and latency to ensure quantization meets your targets.
Expected Benefits:
- 4x smaller model size
- 2-4x faster inference
- Minimal accuracy loss (< 1%)
Remove unnecessary connections:
# Prune model (future tooling - CLI not yet included in this repo)
# The pruning CLI will be provided in a future release. For now, use your
# preferred framework's pruning utilities or a custom script.
<pruning-command> \
--model models/my-model \
--sparsity 0.3 \
--output models/my-model-pruned
Convert to optimized format:
# The model conversion CLI will be provided in a future release. For now,
# use your preferred framework's conversion utilities (e.g., ONNX converters
# or the TensorFlow Lite Converter).
# Convert to ONNX
<model-convert-command> \
--model models/my-model \
--format onnx \
--optimize
# Convert to TensorFlow Lite
<model-convert-command> \
--model models/my-model \
--format tflite \
--optimize
Use batching for multiple predictions:
// Instead of individual predictions
for (const input of inputs) {
await engine.predict(input); // Slow
}
// Use batch prediction
const results = await engine.predictBatch(inputs); // Fast
Enable prediction caching:
{
"inference": {
"cache": {
"enabled": true,
"max_size": 1000,
"ttl": 3600
}
}
}
For API-based inference:
const engine = new InferenceEngine('models/my-model', {
pool_size: 10,
max_queue: 100
});
Enable mixed precision for faster training:
{
"training": {
"mixed_precision": true,
"loss_scale": "dynamic"
}
}
Benefits:
- 2-3x faster training
- Reduced memory usage
- Minimal accuracy impact
Use multiple GPUs or machines:
# Multi-GPU training
node training/cli/index.js train \
--config training/configs/my-model.json \
--distributed \
--gpus 4
Simulate larger batch sizes:
{
"training": {
"batch_size": 16,
"accumulation_steps": 4 // Effective batch size: 64
}
}
Preprocess datasets once:
# Preprocess and cache
node datasets/preprocess.js \
--input datasets/raw/data.json \
--output datasets/processed/data.json \
--cache
Optimize data loading:
{
"data": {
"num_workers": 4,
"prefetch_factor": 2,
"pin_memory": true
}
}
# Monitor memory usage
node tools/monitor/memory.js
# Set memory limits
export NODE_OPTIONS="--max-old-space-size=4096"
# Enable garbage collection logging
node --expose-gc <command>
# Set CPU affinity
taskset -c 0-3 node inference/cli/index.js predict ...
# Enable threading
export UV_THREADPOOL_SIZE=8
# Profile inference
node --prof inference/cli/index.js predict \
--model models/my-model \
--input data.json
# Process profile
node --prof-process isolate-*.log > profile.txt
# Real-time metrics
node tools/monitor/metrics.js --live
# Historical analysis
node tools/monitor/analyze.js --days 7
# Inference benchmark
node tools/benchmark/inference.js \
--model models/my-model \
--iterations 1000
# Training benchmark
node tools/benchmark/training.js \
--config training/configs/benchmark.json
# End-to-end benchmark
node tools/benchmark/e2e.js --all
Typical performance metrics:
| Operation | Latency | Throughput |
|---|---|---|
| Single Inference | 10-50ms | 20-100 req/s |
| Batch Inference (32) | 100-300ms | 100-320 req/s |
| Model Loading | 100-500ms | N/A |
| Training (per epoch) | 5-30 min | N/A |
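As a sanity check on the table above, theoretical single-worker throughput is simply 1000 / latency_ms requests per second (real throughput depends on concurrency and batching):

```javascript
// Latency-to-throughput arithmetic behind the benchmark table.
const throughput = (latencyMs) => Math.round(1000 / latencyMs);

console.log(throughput(50)); // → 20  (low end of 20-100 req/s)
console.log(throughput(10)); // → 100 (high end)

// Batch of 32 inputs at 100-300ms per batch:
console.log(Math.round((32 / 300) * 1000)); // → 107 req/s (near the 100 low end)
console.log(Math.round((32 / 100) * 1000)); // → 320 req/s (high end)
```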
- Enable model quantization or pruning
- Use batch inference for multiple predictions
- Enable caching for repeated predictions
- Use mixed precision training
- Preprocess datasets once and cache
- Configure appropriate batch sizes
- Monitor resource usage
- Profile critical paths
- Use connection pooling for APIs
- Enable gradient accumulation if needed
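The "enable caching for repeated predictions" item above can be sketched as a small TTL cache mirroring the `max_size`/`ttl` inference cache settings shown earlier. This is a minimal illustration, not SmartBrain's cache implementation:

```javascript
// Illustrative only: a bounded TTL cache for prediction results.
class PredictionCache {
  constructor(maxSize, ttlMs) {
    this.maxSize = maxSize;
    this.ttlMs = ttlMs;
    this.map = new Map(); // key -> { value, expires }; insertion-ordered
  }
  get(key, now = Date.now()) {
    const entry = this.map.get(key);
    if (!entry || entry.expires <= now) return undefined; // miss or expired
    return entry.value;
  }
  set(key, value, now = Date.now()) {
    if (this.map.size >= this.maxSize && !this.map.has(key)) {
      // Evict the oldest entry (Map iteration follows insertion order).
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, { value, expires: now + this.ttlMs });
  }
}

const cache = new PredictionCache(1000, 3600 * 1000); // matches max_size/ttl above
cache.set('input-hash', { label: 'safe' });
console.log(cache.get('input-hash')); // → { label: 'safe' }
```

Keyed on a hash of the input, a cache like this turns repeated identical predictions into constant-time lookups at the cost of bounded memory.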
- Features Quick Reference: Quick reference to all SmartBrain features
- GitHub Issues: Report bugs and request features
- GitHub Discussions: Join community discussions
- Documentation: This guide and inline code documentation
- Examples: See the Quick Start Guide for usage examples
- Feature Comparison: SmartBrain vs Alternatives
See CONTRIBUTING.md for guidelines on contributing to SmartBrain.
SmartBrain is licensed under the Apache License 2.0. See LICENSE for details.