OWASP Artificial Intelligence Security Verification Standard (AISVS)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

What is AISVS?

The Artificial Intelligence Security Verification Standard (AISVS) is a community-driven catalogue of testable security requirements for AI-enabled systems. It gives developers, architects, security engineers, and auditors a structured framework to design, build, test, and verify the security of AI applications throughout their lifecycle, from data collection and model training to deployment, monitoring, and retirement.

AISVS is modeled after the OWASP Application Security Verification Standard (ASVS) and follows the same philosophy: every requirement should be verifiable, testable, and implementable.

What AISVS is NOT

  • Not a governance framework. Governance is well covered by NIST AI RMF, ISO/IEC 42001, and EU AI Act compliance guides.
  • Not a risk management framework. AISVS provides the technical controls that risk frameworks point to, but does not define risk assessment methodology.
  • Not a tool recommendation list. AISVS is vendor-neutral and does not endorse specific products or frameworks.

How AISVS complements other standards

| Standard | Focus | AISVS relationship |
| --- | --- | --- |
| OWASP ASVS | Web application security | AISVS extends ASVS concepts to AI-specific threats |
| OWASP Top 10 for LLMs | Awareness of top LLM risks | AISVS provides the detailed controls to mitigate those risks |
| NIST AI RMF | AI risk governance | AISVS supplies the testable technical controls that AI RMF references |
| ISO/IEC 42001 | AI management systems | AISVS complements with implementation-level security verification |

Verification Levels

Each AISVS requirement is assigned a verification level (1, 2, or 3) indicating the depth of security assurance:

| Level | Description | When to use |
| --- | --- | --- |
| 1 | Essential baseline controls that every AI system should implement. | All AI applications, including internal tools and low-risk systems. |
| 2 | Standard controls for systems handling sensitive data or making consequential decisions. | Production systems, customer-facing AI, systems processing personal data. |
| 3 | Advanced controls for high-assurance environments requiring defense against sophisticated attacks. | Critical infrastructure, safety-critical AI, high-value targets, regulated industries. |

Organizations should select a target level based on the risk profile of their AI system. Most production systems should aim for at least Level 2.

How to use AISVS

  • During design. Use requirements as a security checklist when architecting AI systems.
  • During development. Integrate requirements into CI/CD pipelines, code reviews, and testing.
  • During security assessments. Use as a verification framework for penetration testing and audits.
  • For procurement. Reference specific requirements when evaluating AI vendors and third-party models.
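One way to integrate requirements into a CI/CD pipeline, as suggested above, is to tag automated checks with the requirement IDs they verify and report pass/fail per requirement. The sketch below is illustrative only: the requirement IDs, limits, and configuration keys are hypothetical stand-ins, not taken from the standard.

```python
# Hypothetical sketch: registering automated checks under AISVS requirement
# IDs so a CI job can report verification coverage per requirement.
# The IDs ("2.1.1", "13.2.3"), the character limit, and the config dict
# are all illustrative assumptions, not part of AISVS itself.

CHECKS = {}

def aisvs(requirement_id):
    """Decorator that registers a check function under a requirement ID."""
    def decorator(fn):
        CHECKS[requirement_id] = fn
        return fn
    return decorator

@aisvs("2.1.1")  # hypothetical ID: user input length is bounded
def check_input_length_limit():
    MAX_PROMPT_CHARS = 8000          # illustrative limit
    sample_prompt = "x" * 100        # stand-in for a real request fixture
    return len(sample_prompt) <= MAX_PROMPT_CHARS

@aisvs("13.2.3")  # hypothetical ID: prompt logging is enabled for audit
def check_prompt_logging_enabled():
    config = {"log_prompts": True}   # stand-in for real deployment config
    return config.get("log_prompts", False)

def run_checks():
    """Run every registered check and return {requirement_id: passed}."""
    return {req_id: fn() for req_id, fn in CHECKS.items()}

if __name__ == "__main__":
    for req_id, passed in run_checks().items():
        print(f"AISVS {req_id}: {'PASS' if passed else 'FAIL'}")
```

A CI job could fail the build whenever any registered check returns `False`, giving the pipeline a per-requirement verification report.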

Requirement Chapters

  1. Training Data Integrity & Traceability
  2. User Input Validation
  3. Model Lifecycle Management & Change Control
  4. Infrastructure, Configuration & Deployment Security
  5. Access Control & Identity
  6. Supply Chain Security for Models, Frameworks & Data
  7. Model Behavior, Output Control & Safety Assurance
  8. Memory, Embeddings & Vector Database Security
  9. Autonomous Orchestration & Agentic Action Security
  10. Model Context Protocol (MCP) Security
  11. Adversarial Robustness & Attack Resistance
  12. Privacy Protection & Personal Data Management
  13. Monitoring, Logging & Anomaly Detection
  14. Human Oversight and Trust

Appendices

Contributing

We welcome contributions from the community. Please open an issue to report bugs or suggest improvements. We may ask you to submit a pull request based on the discussion.

Project Leaders

This project was founded by Jim Manico. Current project leadership includes Jim Manico, Otto Sulin, and Russ Memisyazici.

License

The entire project content is licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.
