Harbormaster is a lightweight, API-based AI model security tool designed for deployment in Hugging Face environments. It provides comprehensive protection for AI models by addressing OWASP vulnerabilities and implementing robust security measures.
Harbormaster builds upon the security foundations established by LLMGuardian, adapting them for seamless integration with Hugging Face's infrastructure. The project aims to democratize AI security by making enterprise-grade protection accessible to the broader AI community.
CLICK HERE FOR THE FULL PROJECT
Comprehensive Large Language Model (LLM) and Artificial Intelligence (AI) protection toolset aligned to addressing OWASP idenitfied top 10 vulnerabilities
Author: DeWitt Gibson https://www.linkedin.com/in/dewitt-gibson/
- Model Security Validation: Comprehensive scanning and validation of AI models
- Threat Detection: Real-time monitoring and detection of potential security threats
- Data Protection: Safeguards against sensitive data exposure and poisoning attempts
- Vector Store Security: Protection for embedding operations and supply chain vulnerabilities
- Agency Control: Management of model permissions and execution boundaries
- Hugging Face Integration: Native support for Hugging Face's model ecosystem
harbormaster/
├── .github/ # GitHub specific configurations
│ ├── workflows/ # GitHub Actions workflows
│ ├── CODEOWNERS # Repository ownership rules
│ └── ISSUE_TEMPLATE/ # Issue reporting templates
│
├── src/ # Source code
│ └── harbormaster/ # Main package directory
│ ├── api/ # API endpoints and integration
│ │ ├── routes/ # API route definitions
│ │ └── middleware/ # API middleware components
│ │
│ ├── core/ # Core functionality
│ │ ├── config/ # Configuration management
│ │ ├── logging/ # Logging system
│ │ └── utils/ # Utility functions
│ │
│ ├── security/ # Security components
│ │ ├── scanners/ # Security scanning modules
│ │ ├── defenders/ # Defense mechanisms
│ │ └── monitors/ # Security monitoring
│ │
│ ├── huggingface/ # Hugging Face specific integration
│ │ ├── adapters/ # Model adapters
│ │ └── endpoints/ # Hugging Face endpoints
│ │
│ ├── dashboard/ # Web interface
│ │ ├── components/ # UI components
│ │ └── pages/ # Dashboard pages
│ │
│ └── cli/ # Command-line interface
│
├── tests/ # Test suite
│ ├── unit/ # Unit tests
│ ├── integration/ # Integration tests
│ └── security/ # Security-specific tests
│
├── docs/ # Documentation
│ ├── api/ # API documentation
│ ├── guides/ # User guides
│ └── security/ # Security documentation
│
├── examples/ # Example implementations
├── scripts/ # Utility scripts
└── docker/ # Docker configurations
- Prompt injection detection
- Data leakage prevention
- Model validation checks
- Output sanitization
- Input sanitization
- Rate limiting
- Access control
- Token validation
- Usage tracking
- Threat detection
- Performance monitoring
- Security logging
- Python 3.8+
- Hugging Face account
- Docker (optional)
pip install harbormaster
from harbormaster import HarborMaster
# Initialize Harbormaster
harbor = HarborMaster(api_key="your-api-key")
# Secure your model
secured_model = harbor.secure_model("your-model-id")
# Make secure predictions
result = secured_model.predict("Your input here")
Comprehensive documentation is available at docs/. This includes:
- API Reference
- Security Best Practices
- Implementation Guides
- Example Use Cases
We welcome contributions! Please see our Contributing Guidelines for details.
For security concerns, please review our Security Policy. Report vulnerabilities to [email protected]
This project is licensed under the MIT License - see the LICENSE file for details.
@misc{harbormaster,
title={Harbormaster: Lightweight API-based AI Model Security},
author={Safe-Harbor-Cybersecurity, DeWitt Gibson},
year={2025},
url={https://github.com/Safe-Harbor-Cybersecurity/harbormaster}
}
This project builds upon the security foundations established by LLMGuardian, adapting and extending them for the Hugging Face ecosystem.