This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The Artificial Intelligence Security Verification Standard (AISVS) is a community-driven catalogue of testable security requirements for AI-enabled systems. It gives developers, architects, security engineers, and auditors a structured framework to design, build, test, and verify the security of AI applications throughout their lifecycle, from data collection and model training to deployment, monitoring, and retirement.
AISVS is modeled after the OWASP Application Security Verification Standard (ASVS) and follows the same philosophy: every requirement should be verifiable, testable, and implementable.
- Not a governance framework. Governance is well-covered by NIST AI RMF, ISO/IEC 42001, and EU AI Act compliance guides.
- Not a risk management framework. AISVS provides the technical controls that risk frameworks point to, but does not define risk assessment methodology.
- Not a tool recommendation list. AISVS is vendor-neutral and does not endorse specific products or frameworks.
| Standard | Focus | AISVS relationship |
|---|---|---|
| OWASP ASVS | Web application security | AISVS extends ASVS concepts to AI-specific threats |
| OWASP Top 10 for LLM Applications | Awareness of top LLM risks | AISVS provides the detailed controls to mitigate those risks |
| NIST AI RMF | AI risk governance | AISVS supplies the testable technical controls that AI RMF references |
| ISO/IEC 42001 | AI management systems | AISVS complements with implementation-level security verification |
Each AISVS requirement is assigned a verification level (1, 2, or 3) indicating the depth of security assurance:
| Level | Description | When to use |
|---|---|---|
| 1 | Essential baseline controls that every AI system should implement. | All AI applications, including internal tools and low-risk systems. |
| 2 | Standard controls for systems handling sensitive data or making consequential decisions. | Production systems, customer-facing AI, systems processing personal data. |
| 3 | Advanced controls for high-assurance environments requiring defense against sophisticated attacks. | Critical infrastructure, safety-critical AI, high-value targets, regulated industries. |
Organizations should select a target level based on the risk profile of their AI system. Most production systems should aim for at least Level 2.
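The level-selection guidance above can be sketched as a simple decision rule. This is an illustrative heuristic only; AISVS does not prescribe an algorithm for choosing a level, and the risk-attribute names below are assumptions made for the example.

```python
# Illustrative heuristic only -- AISVS does not prescribe an algorithm
# for level selection; these risk-attribute names are hypothetical.
def recommended_level(handles_personal_data: bool,
                      consequential_decisions: bool,
                      safety_critical: bool,
                      regulated_industry: bool) -> int:
    """Map a coarse risk profile to an AISVS target level (1, 2, or 3)."""
    if safety_critical or regulated_industry:
        return 3  # advanced controls for high-assurance environments
    if handles_personal_data or consequential_decisions:
        return 2  # standard controls for sensitive or consequential systems
    return 1      # essential baseline for all AI systems

# Example: a customer-facing chatbot processing personal data.
level = recommended_level(handles_personal_data=True,
                          consequential_decisions=False,
                          safety_critical=False,
                          regulated_industry=False)
```

In this example the chatbot lands at Level 2, consistent with the guidance that most production systems should target at least that level.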
- During design. Use requirements as a security checklist when architecting AI systems.
- During development. Integrate requirements into CI/CD pipelines, code reviews, and testing.
- During security assessments. Use as a verification framework for penetration testing and audits.
- For procurement. Reference specific requirements when evaluating AI vendors and third-party models.
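As one way to integrate requirements into a CI/CD pipeline, a build step can track requirement status in a small inventory file and fail the build when anything is unverified. The inventory format and filename below are hypothetical; AISVS does not define a machine-readable tracking file.

```python
# Sketch of a CI gate that fails the build when any tracked AISVS
# requirement is unverified. The inventory schema and filename are
# hypothetical assumptions, not part of the standard.
import json

def unverified_requirements(path: str) -> list[str]:
    """Return IDs of tracked requirements whose status is not 'verified'."""
    with open(path) as f:
        inventory = json.load(f)
    return [req["id"] for req in inventory["requirements"]
            if req.get("status") != "verified"]

# In a CI step, exit non-zero to fail the pipeline, e.g.:
#   failing = unverified_requirements("aisvs-inventory.json")
#   if failing:
#       raise SystemExit(f"Unverified AISVS requirements: {failing}")
```

The same inventory file can double as audit evidence, since it records which requirements were checked and when the check last passed in CI.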
- Training Data Integrity & Traceability
- User Input Validation
- Model Lifecycle Management & Change Control
- Infrastructure, Configuration & Deployment Security
- Access Control & Identity
- Supply Chain Security for Models, Frameworks & Data
- Model Behavior, Output Control & Safety Assurance
- Memory, Embeddings & Vector Database Security
- Autonomous Orchestration & Agentic Action Security
- Model Context Protocol (MCP) Security
- Adversarial Robustness & Attack Resistance
- Privacy Protection & Personal Data Management
- Monitoring, Logging & Anomaly Detection
- Human Oversight and Trust
- Appendix A: Glossary
- Appendix B: References
- Appendix C: AI-Assisted Secure Coding
- Appendix D: AI Security Controls Inventory
We welcome contributions from the community. Please open an issue to report bugs or suggest improvements. We may ask you to submit a pull request based on the discussion.
This project was founded by Jim Manico. Current project leadership includes Jim Manico, Otto Sulin, and Russ Memisyazici.
The entire project content is licensed under the Creative Commons Attribution-ShareAlike 4.0 International license.
