Security is a top priority for LLM-Latency-Lens. This document outlines our security policies, vulnerability reporting process, and best practices.
We provide security updates for the following versions:
| Version | Supported | End of Support |
|---|---|---|
| 0.1.x | ✅ | TBD |
| < 0.1 | ❌ | - |
DO NOT open public GitHub issues for security vulnerabilities.
Instead, report security vulnerabilities to:
Email: security@llm-devops.com
PGP Key: Available at https://llm-latency-lens.dev/pgp-key.asc
Please include the following information:
- Description: Clear description of the vulnerability
- Impact: Potential security impact
- Reproduction: Step-by-step instructions to reproduce
- Environment: Version, OS, configuration details
- Proof of Concept: Code or commands demonstrating the issue (if applicable)
- Suggested Fix: Your thoughts on how to fix it (optional)
- Acknowledgment: Within 24 hours
- Initial Assessment: Within 72 hours
- Status Update: Weekly until resolved
- Fix Timeline: Depends on severity (see below)
| Severity | Description | Fix Timeline | Example |
|---|---|---|---|
| Critical | Remote code execution, data breach | 24-48 hours | API key exposure in logs |
| High | Privilege escalation, DoS | 1-2 weeks | Authentication bypass |
| Medium | Information disclosure | 2-4 weeks | Timing attack vulnerability |
| Low | Minor issues with limited impact | 4-8 weeks | Verbose error messages |
// ❌ BAD: Hardcoded API keys
let api_key = "sk-1234567890abcdef";
let provider = OpenAIProvider::new(api_key);// ❌ BAD: Keys in version control
// config.toml
api_key = "sk-1234567890abcdef"// ❌ BAD: Keys in logs
println!("Using API key: {}", api_key);// ✅ GOOD: Environment variables
use std::env;
let api_key = env::var("OPENAI_API_KEY")
.expect("OPENAI_API_KEY not set");
let provider = OpenAIProvider::new(api_key);// ✅ GOOD: .env file (not in git)
// .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
// .gitignore
.env// ✅ GOOD: Redacted logging
println!("Using API key: {}***", &api_key[..8]);# Restrict config file permissions
chmod 600 config.yaml
chmod 600 .env
# Verify permissions
ls -la config.yaml
# Should show: -rw------- (600)// Validate configuration before use
pub fn validate_config(config: &Config) -> Result<(), ConfigError> {
// Check for suspicious values
if config.providers.is_empty() {
return Err(ConfigError::NoProviders);
}
// Validate URLs
for provider in &config.providers {
if !provider.endpoint.starts_with("https://") {
return Err(ConfigError::InsecureEndpoint(
provider.endpoint.clone()
));
}
}
Ok(())
}// Always use HTTPS
let provider = OpenAIProvider::builder()
.api_key(api_key)
.base_url("https://api.openai.com/v1") // ✅ HTTPS
.verify_ssl(true) // ✅ Verify certificates
.build();# config.yaml
execution:
http:
verify_ssl: true # Always verify SSL certificates
ca_bundle: /etc/ssl/certs/ca-certificates.crt # Custom CA bundle if needed# Check for security vulnerabilities
cargo audit
# Update dependencies
cargo update
# Check for outdated dependencies
cargo outdated# Cargo.toml - Pin major versions
[dependencies]
tokio = "1.41" # Lock to 1.x
reqwest = "0.12" # Lock to 0.12.x// Example: AWS Secrets Manager
use aws_sdk_secretsmanager::Client;
async fn get_api_key() -> Result<String, Box<dyn std::error::Error>> {
let config = aws_config::load_from_env().await;
let client = Client::new(&config);
let secret = client
.get_secret_value()
.secret_id("llm-api-keys/openai")
.send()
.await?;
Ok(secret.secret_string().unwrap().to_string())
}// Example: HashiCorp Vault
use vaultrs::client::{VaultClient, VaultClientSettingsBuilder};
async fn get_api_key() -> Result<String, Box<dyn std::error::Error>> {
let client = VaultClient::new(
VaultClientSettingsBuilder::default()
.address("https://vault.example.com")
.token(env::var("VAULT_TOKEN")?)
.build()?
)?;
let secret = client
.read("secret/data/llm-api-keys")
.await?;
Ok(secret["data"]["openai_key"].as_str().unwrap().to_string())
}// Validate user inputs
pub fn validate_prompt(prompt: &str) -> Result<(), ValidationError> {
// Check length
if prompt.is_empty() {
return Err(ValidationError::EmptyPrompt);
}
if prompt.len() > MAX_PROMPT_LENGTH {
return Err(ValidationError::PromptTooLong);
}
// Check for suspicious content
if contains_injection_patterns(prompt) {
return Err(ValidationError::SuspiciousContent);
}
Ok(())
}// Don't expose sensitive information in errors
pub enum Error {
// ❌ BAD: Exposes API key
AuthError(String),
// ✅ GOOD: Generic error
AuthenticationFailed,
}
// Error messages
impl Display for Error {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
match self {
// ❌ BAD: Exposes details
Error::AuthError(key) => write!(f, "Auth failed with key: {}", key),
// ✅ GOOD: Generic message
Error::AuthenticationFailed => write!(f, "Authentication failed"),
}
}
}Built-in rate limiting prevents API abuse:
use llm_latency_lens::RateLimiter;
let limiter = RateLimiter::new()
.requests_per_second(10)
.burst(20)
.build();Prevent resource exhaustion with timeouts:
let request = StreamingRequest::builder()
.model("gpt-4")
.prompt("Hello")
.timeout(Duration::from_secs(30)) // 30 second timeout
.build();Secure connection management:
execution:
http:
pool_size: 100
pool_idle_timeout: 90 # Close idle connections
max_idle_per_host: 10use tracing_subscriber::fmt::format::FmtSpan;
tracing_subscriber::fmt()
.with_env_filter("llm_latency_lens=info")
.with_span_events(FmtSpan::CLOSE)
.json() // Structured logging
.init();DO Log:
- Authentication attempts (success/failure)
- API calls (timestamp, provider, model)
- Errors and exceptions
- Configuration changes
- Rate limit violations
DO NOT Log:
- API keys or credentials
- Sensitive prompt content
- User PII
- Full request/response bodies
{
"timestamp": "2024-11-07T18:30:00Z",
"event": "api_call",
"provider": "openai",
"model": "gpt-4",
"status": "success",
"duration_ms": 1234,
"user_id": "user_abc123",
"request_id": "req_xyz789"
}- Data Minimization: Only collect necessary data
- Right to Erasure: Provide data deletion capabilities
- Data Portability: Export data in standard formats
- Privacy by Design: Built-in privacy features
- Security: Secure credential management
- Availability: High availability architecture
- Processing Integrity: Data integrity checks
- Confidentiality: Encryption at rest and in transit
- Privacy: Privacy controls and consent management
- Encryption: All data encrypted
- Audit Logs: Comprehensive audit trails
- Access Controls: Role-based access
- Data Retention: Configurable retention policies
- No hardcoded credentials
- Input validation on all inputs
- Secure error messages
- Dependencies up to date
- Security lints pass (
cargo clippy) - No vulnerable dependencies (
cargo audit)
- HTTPS only (no HTTP)
- SSL certificate verification enabled
- API keys in environment variables or secret store
- File permissions restricted (600 for sensitive files)
- Audit logging enabled
- Rate limiting configured
- Timeouts configured
- Resource limits set
- Regular security updates
- Monitor audit logs
- Rotate credentials periodically
- Review access logs
- Test disaster recovery
- Security scanning enabled
Risk: API keys could be exposed in logs, error messages, or version control.
Mitigation:
- Never hardcode API keys
- Use environment variables or secret stores
- Implement key redaction in logs
- Add .env to .gitignore
Risk: Malicious users could bypass rate limits.
Mitigation:
- Implement both client-side and server-side rate limiting
- Use token bucket algorithm
- Monitor for suspicious patterns
Risk: User prompts could contain malicious content.
Mitigation:
- Validate and sanitize inputs
- Implement content filtering
- Use provider safety features
Risk: Resource exhaustion from excessive requests.
Mitigation:
- Configure request timeouts
- Implement rate limiting
- Set connection pool limits
- Monitor resource usage
For security issues:
- Email: security@llm-devops.com
- PGP Key: https://llm-latency-lens.dev/pgp-key.asc
- Bug Bounty: Contact us for details
We thank the following security researchers for responsibly disclosing vulnerabilities:
(List will be updated as vulnerabilities are reported and fixed)
Subscribe to security updates:
- GitHub Watch: Watch the repository for security advisories
- Mailing List: security-announce@llm-devops.com
- RSS Feed: https://llm-latency-lens.dev/security.rss
Version: 1.0 Last Updated: 2025-11-07
For questions about this security policy, contact security@llm-devops.com.