OWASP AI Exchange

Computer and Network Security

owaspai.org: the go-to resource for AI security, forming the core of international standards. 300 pages.

About us

The OWASP AI Exchange at owaspai.org is a collaborative working document to advance the development of global AI security standards and regulations. It provides a comprehensive overview of AI threats, vulnerabilities, and controls to foster alignment among different standardization initiatives, including the EU AI Act, ISO/IEC 27090 (AI security), the OWASP LLM Top 10, and OpenCRE, which we want to use to provide the AI Exchange content through the security chatbot OpenCRE-Chat. Our mission is to be the authoritative source for consensus, foster alignment, and drive collaboration among initiatives. By doing so, the Exchange provides a safe, open, and independent place for everyone to find and share insights. The Exchange is a Flagship Project at the OWASP Foundation: the largest community on system security.

Industry
Computer and Network Security
Company size
51-200 employees
Type
Nonprofit
Founded
2022


Updates

  • AI Exchange founder Rob van der Veer highlights a possible misconception about Mythos and AI vulnerability discovery.

    Mythos is not what many people think. AI vulnerability discovery hasn't suddenly made all systems transparent. Its strength lies mostly where it has visibility: when it has access to source code and binaries. In practice, that often means the external components in your system are much more of a target than your proprietary software.

    We are clearly seeing a leap in how fast vulnerabilities can be discovered. But an important detail is often missed: this progress is largely driven by analysing how software works internally, through code review and reverse engineering. The recently published examples demonstrate this. What we do not see strong evidence of is a similar leap in external attack techniques, such as fuzzing. That doesn't mean AI cannot do this — it can — but the step change appears to come from internal understanding rather than black-box probing.

    This has an important implication: 👉 If your proprietary code or binaries are not publicly accessible, AI-driven discovery threats mostly come from what IS accessible — such as open source components and third-party binaries — rather than the parts you have built yourself. This suggests that many internal systems and SaaS platforms may be less exposed than people fear in this specific sense, but at the same time more exposed through the components they rely on. That is where the attack surface is expanding fastest, and where attention is often most needed.

    That said, this is not a reason to ignore your own code. Strong defence in depth remains essential: 1️⃣ harden your own code and architecture by applying zero-trust thinking to components 2️⃣ strengthen the overall system against AI-enabled attack capabilities

    Two caveats: - This view is based on current evidence. The contrary could theoretically be true: AI could be making a similar leap in external testing as it has in internal understanding. If I find contradicting evidence, you'll be the first to know. Opinions are my own, and not the views of my employer. That sort of thing. - Next week in DC I will be speaking with people directly involved in the Mythos effort. My goal is not to downplay the importance of AI in security, but to help focus effort where it has the biggest impact.

    What a time to be alive. #ai #security #appsec

  • We hope to see you in DC.

    Come meet the OWASP AI Exchange in Washington DC on April 20th and 21st for exciting workshops and the timely AI Security Policy Forum, co-hosted with SANS during the SANS Institute AI Summit. Humans don't download skills (except in The Matrix). We learn them by doing. And humans don't coordinate through MCP - we sit down in a room. That's why we're bringing folks to DC to fulfil the OWASP® Foundation's mission: to be the global open community that powers secure software through education, tools, and collaboration.

    Here's what we organize:

    👉 Workshop: Hands-On Threat Modelling with the OWASP AI Exchange 🗓️ April 20 — 2:30PM-4:00PM. Learn from Disesdi Shoshana Cox and yours truly, through practical exercises, how to quickly identify the key threats to AI systems so you know how to secure them.

    👉 Workshop: Hacking a Smart Pizza Place with the OWASP AI Exchange—PwnzzAI! 🗓️ April 20 — 4:00PM-5:30PM. Gain insight through hands-on AI hacking labs on your laptop from Maryam Mouzarani, Spyros Gasteratos, and AI Exchange co-lead Aruneesh Salhotra. More details soon.

    👉 Policy Forum on AI Security Standardization 🗓️ April 21 — alongside the SANS AI Summit at a special location overlooking Washington. A closed, invitation-only gathering of selected policy stakeholders and standardization leaders, convened by the OWASP AI Exchange in partnership with SANS Institute. The goal: coordinate standardization efforts to increase collaboration and alignment, brief policy stakeholders on the AI security landscape, and work with them to provide support in AI oversight.

    The workshops are for conference visitors attending in person. The awesome conference keynotes and panels can be attended online. The Policy Forum is invitation-only. Hope to see you there! Thank you so much Rob T. Lee for our ongoing fruitful collaboration, and to our sponsors Straiker, Casco (YC X25), and AI Security Academy for supporting our events.

    Next event: the OWASP conference in Vienna with an AI Exchange showcase, AI Exchange training, and a book signing. Stay tuned! (Links will be in the comments.) #AI #AISecurity #Cybersecurity #AIgovernance OWASP® Foundation Software Improvement Group

  • OWASP AI Exchange reposted this

    This week I joined the OWASP AI Exchange, and I am genuinely honoured to sit at this table. The group brings together builders, red teamers, and policy architects who are doing the actual work of defining what AI security looks like in practice. Not in whitepapers. In production environments, under real regulatory pressure. The signal-to-noise ratio in that room is rare.

    My focus within the Exchange will sit at three intersections that I believe define the next frontier of enterprise security:

    1. Project Agentic AI (Threats, Controls & Testing). Autonomous agents operating across banking infrastructure introduce threat vectors that traditional assurance frameworks were never designed to catch. This goes far beyond just testing. Model exploitation, decision-flow hijacking, and agent abuse are not theoretical - they are already appearing in the environments I validate. We must define hard technical controls and operational risk boundaries for autonomous AI.

    2. Framework Harmonization. The industry does not need more standalone guidelines; it needs execution. Mapping standards across NIST and OWASP into a unified, actionable baseline is critical. Closing this gap is the foundational work that determines whether AI governance has teeth or just terminology.

    3. Regulatory Alignment (e.g. the EU AI Act). I have seen what happens when governance frameworks lag behind technology adoption. With AI, the cost of repeating that pattern is unacceptable. We must ensure our controls align directly with the EU AI Act.

    But even if we map every control and build perfect taxonomies, we can still miss the moment a human organization quietly stops making its own decisions. That decision drift is the ultimate systemic risk. That is where I intend to press.

    Looking forward to contributing alongside Rob van der Veer, Aruneesh Salhotra, Behnaz Karimi, and the broader Exchange community. #OWASP #AgenticAI #AIGovernance #AIAct #DORA #CyberRisk #TechRisk #RedTeaming #OWASP OWASP AI Exchange

  • The new OWASP Impact Report opens with the success of the AI Exchange, explaining its history and mission. What an honour!

    Just out! Learn about the gems that the OWASP® Foundation is bringing to you in the impressive OWASP Impact Report, celebrating 25 years. The report talks about the growing strategic role of OWASP in the security landscape. Of the many great OWASP projects, the report highlights: OWASP ASVS, OWASP CRS, the OWASP CycloneDX SBOM/xBOM Standard, OWASP Dependency-Track, the OWASP GenAI Security Project, OWASP SAMM, and the good old Top 10. And the very first mention of OWASP success goes to our darling, the OWASP AI Exchange, with these very kind words:

    "In 2025, OWASP effectively set the standard for AI security, through the AI Exchange. The Exchange was founded in 2022 by Rob van der Veer, for writing down what he had learned about the security and privacy of AI systems as an AI engineer, hacker, and entrepreneur since the beginning of the nineties. The goal: to help security practitioners with this important new topic, trying to make it comprehensive, but simple. Through the OWASP network, he quickly gathered a growing group of experts to continue building the body of knowledge, and co-leaders Aruneesh Salhotra and Behnaz Karimi joined the project. Then Rob got involved in ISO/IEC 27090, the global standard for AI security, and was elected as co-editor of prEN 18282, the security standard for the AI Act. These working groups had a hard time finding the right expertise, so Rob forged a unique liaison partnership between international standardization and the OWASP AI Exchange, allowing the material from the Exchange to be donated directly to these new standards - effectively becoming the main source. Next, the Exchange was adopted by SANS Institute, ISACA, and EXIN as a key resource for training. The material is open source, free of copyright and attribution requirements, and aligns with standards - making it the perfect material for training and certification. So what started as a personal notebook of experience turned into an OWASP flagship project with a framework of AI security threats, controls, and best practices that has effectively become the standard, and the go-to bookmark for practitioners to rely on."

    Wow. #ai #aisecurity #OWASP #appsec #security

  • Sometimes it's the little things that make you happy: cool new AI Exchange stickers are on their way! These stickers will pop up at, for example: 🗓️ April 20: our two workshops at the SANS AI Summit in DC (AI threat modelling and AI hacking with the Exchange) 🗓️ April 21: the AI policy forum we organize with SANS Institute, also in DC 🗓️ May 19: the NIST Supply Chain Assurance Forum at MITRE 🗓️ June 24: the masteraisecurity dot com training in Vienna 🗓️ June 25-26: the OWASP Global AppSec conference in Vienna. Hope to see you there and personally hand you a sticker. #ai #aisecurity

  • Check out our updated learning guide. #ai #aisecurity

    How to master AI security? Check out the just-updated learning guide at the OWASP AI Exchange. It is our mission to enable practitioners to make sense of it all. Just go to the Exchange website owaspai dot org, press ‘Get started’, and you will be guided, depending on your needs: 👉 Ask any question to the AI Exchange Agent 👉 Learn what the Exchange is 👉 How to start as an organization 👉 How to secure an AI system 👉 How to learn AI security

    To learn AI security: 1️⃣ First study the brief AI security essentials for the big picture. 2️⃣ Do high-level threat modelling according to the risk analysis section - or let AI interview you to find out - or skip this step if you want to learn the complete threat picture. 3️⃣ If you’re involved in agentic AI, see the section on how agentic threats are covered. 4️⃣ If you run a ready-made model, have a look at the threat model on ready-made models. 5️⃣ See your threats in their context in our AI threat model. 6️⃣ Click on your threats to get more information. 7️⃣ Check the Controls section of that threat, or the periodic table, which lists the controls for every threat. 8️⃣ To learn about the bigger picture of controls, study the controls overview. 9️⃣ If privacy is in scope for you: see the privacy section. 🔟 If you’re involved in testing: see the testing section.

    We have collected a large table of further training resources in our references section. I will put links in the comments, but you’ll find it anyhow. There is another way: come join the threat modelling workshop in Washington DC on April 20th, where I'll teach together with Disesdi Shoshana Cox, or join my full-day ‘Master AI security’ training, in person or remote, during the OWASP AppSec conference in Vienna on June 24th. We'll go through the learning steps together, in depth and hands-on, featuring yours truly and my Software Improvement Group co-trainers. #ai #aisecurity

  • Learn about the FOUNDATION mission of the Exchange, its history, and how to secure agentic AI in this revealing interview. #ai #aisecurity

    Learn the latest and greatest on AI security from the wonderful Vandana Verma interviewing me on a wide range of topics. We talk about how to be successful with agentic AI, common misconceptions, alignment between standards, and the history and mission of the OWASP AI Exchange. Key takeaways:

    🔹 Apply zero model trust - making ‘blast radius control’ the key approach to agentic security, through least privilege, oversight, and transparency.
    🔹 Create a culture where it is okay to be an ‘AI fool’ sometimes. We don’t know everything about AI, and we should appreciate that and be open about it, instead of making things up. There is too much AI noise already.
    🔹 Be aware of the speed/care paradox: ignoring security risks will NOT let you win the race. Instead, take a bit of time to anticipate and prevent problems along the way. It feels slower, but it will save you expensive rework and cleanup, and teams will move quicker and carefree.
    🔹 Give teams the mandate to share security issues that may delay the release. If you don’t, you’ll create a ‘culture of hush’. No one wants to spoil the release party. The result: going live misinformed, with an accident waiting to happen.
    🔹 Carefully consider every application of human-in-the-loop. We see too often that it is used as a quick fix, but people need to be capable, motivated, and alert for it to work. Avoid HITL cargo cult.
    🔹 Cross-user indirect prompt injection is an emerging threat many organizations still underestimate.
    🔹 If you leave hardening of agents to end users and admins, you had better make sure they understand the risks and what to do.

    And we talk about the key role of the OWASP® Foundation and the OWASP AI Exchange mission - using the FOUNDATION acronym. The recording will be mentioned in the comments. What gives hope? Collaboration. Seeing organizations like SANS Institute and the OWASP AI Exchange align on shared controls is exactly the kind of signal the industry needs. On April 21st, SANS and the Exchange are bringing together standard makers and US policy stakeholders to work on coordination. Our shared goal: break down the silos and create clarity for practitioners. Thank you, Snyk, for organising. #ai #aisecurity

  • Our Aruneesh has been nominated for Global Cybersecurity Visionary 2026.

    𝗛𝘂𝗺𝗯𝗹𝗲𝗱 𝘁𝗼 𝗯𝗲 𝗡𝗼𝗺𝗶𝗻𝗮𝘁𝗲𝗱: 𝗚𝗹𝗼𝗯𝗮𝗹 𝗖𝘆𝗯𝗲𝗿𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗩𝗶𝘀𝗶𝗼𝗻𝗮𝗿𝘆 𝟮𝟬𝟮𝟲

    I am honored to share that I have been nominated for the 𝗚𝗹𝗼𝗯𝗮𝗹 𝗖𝘆𝗯𝗲𝗿𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗩𝗶𝘀𝗶𝗼𝗻𝗮𝗿𝘆 𝗮𝘄𝗮𝗿𝗱 in the 2026 Cybersecurity Excellence Awards. True vision in this field isn’t just about predicting the next threat ... 𝗶𝘁’𝘀 𝗮𝗯𝗼𝘂𝘁 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝘁𝗵𝗲 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗮𝗻𝗱 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆 𝘁𝗼 𝗺𝗲𝗲𝘁 𝗶𝘁. For me, this nomination is a reflection of the collective effort across the initiatives I am most passionate about:

    • 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆 & 𝗦𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝗶𝘇𝗶𝗻𝗴 𝗔𝗜: My work as Project Lead for OWASP AIBOM and Project Co-Lead for the OWASP AI Exchange is fueled by the belief that we are stronger when we collaborate on open-source frameworks.
    • 𝗘𝗱𝘂𝗰𝗮𝘁𝗶𝗻𝗴 𝘁𝗵𝗲 𝗠𝗼𝘀𝘁 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗹𝗲: Security is a human right. Beyond the boardroom, my mission is to ensure that the most vulnerable among us, particularly seniors and children, are equipped with the knowledge to navigate our hyper-connected world safely.
    • 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗦𝗵𝗮𝗿𝗶𝗻𝗴: Whether through my O’Reilly publications, my blog, or the podcast, my goal is to demystify complex security shifts for the masses.
    • 𝗚𝗹𝗼𝗯𝗮𝗹 𝗔𝗱𝘃𝗼𝗰𝗮𝗰𝘆: From speaking at international conferences like RSA, Black Hat, IAPP, All Day DevOps, Machines Can See, Palo Alto Ignite, ISACA, OWASP, and SANS, to serving on advisory boards, I am committed to shaping the future of agentic AI and autonomous system security.

    𝗛𝗼𝘄 𝘆𝗼𝘂 𝗰𝗮𝗻 𝘀𝘂𝗽𝗽𝗼𝗿𝘁: Recognition in these awards is unique ... it’s based on 𝘀𝗼𝗰𝗶𝗮𝗹 𝗿𝗲𝘀𝗵𝗮𝗿𝗲𝘀 𝗱𝗶𝗿𝗲𝗰𝘁𝗹𝘆 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗻𝗼𝗺𝗶𝗻𝗮𝘁𝗶𝗼𝗻 𝗽𝗮𝗴𝗲. If you feel inclined to support my journey and these missions, you can do so here: 👉 https://lnkd.in/eP2a8HBW

    Regardless of the outcome, I am incredibly grateful to stand alongside the innovators and practitioners who make this community so resilient. #Cybersecurity #AI #OWASP #GenAI #Infosec #CommunityBuilding #CybersecurityExcellenceAwards #Visionary

  • OWASP AI Exchange reposted this

    "Red teaming your AI systems without informed AI threat modeling is a waste of everyone's time, effort, and money." That's Disesdi Shoshana Cox. She and Rob van der Veer, who founded the OWASP AI Exchange, are co-leading Hands-On Threat Modeling with the OWASP AI Exchange at the SANS AI Cybersecurity Summit in April. (When the person who built the framework offers to teach you how to use it, you say yes.)

    Rob built the Exchange over two years with 70+ contributors, producing hundreds of pages of guidance now embedded in ISO/IEC standards and the EU AI Act. Disesdi is a core author on the Exchange, lead contributor to the CSA Agentic Red Teaming Guide on behalf of the OWASP AI Exchange, a patent holder in distributed model training security, and has 10+ years in mission-critical AI systems. Between the two of them, they've mapped more of what can go wrong in AI systems than almost anyone in this field. (I mean that literally.)

    Their 90-minute workshop is small groups, real AI architectures, and systematic threat modeling using the Exchange as the framework. You'll work through the steps, present findings to the room, and leave with something you can use on Monday. (This is exactly the session this summit exists for.) SANS AI Cybersecurity Summit, April 20–21 in Arlington. Workshops are available to in-person attendees, or join virtually, at no charge, for the keynotes and lightning talks: go.sans.org/YjGCXr

  • OWASP AI Exchange reposted this

    AI security is no longer just a technical or standards challenge. I think it is increasingly a policy problem. Around the world, governments are recognizing AI as a national security priority. At the same time, new frameworks, standards, and initiatives are rapidly emerging. But the real issue isn’t the lack of activity; it’s that the needs of policy makers are not being adequately addressed.

    Policy makers are expected to make critical decisions in an environment shaped by multiple standards, overlapping initiatives, and evolving technical guidance, often without clear alignment or translation into actionable policy. Part of the challenge is that AI security does not lend itself to simple, one-size-fits-all solutions, making it difficult to define and regulate what good security actually looks like in practice. What’s missing is not more frameworks, but clearer coordination and better mapping of what already exists, in a way that is usable for decision-makers. Without this, we risk creating a landscape that is technically rich but practically difficult to navigate, especially for those responsible for shaping policy at the national and international level.

    This is why I think it is essential to bring policy makers into the center of these conversations, not as observers, but as key stakeholders whose needs should actively shape how this space evolves. If we want AI security to be effective at scale, it has to work for policy makers, not just for technical communities.

    At the OWASP AI Exchange, we’ve spent years working to bring clarity and direction to AI security, contributing to global standards and policy discussions. The SANS Institute brings deep, globally recognized expertise and decades of leadership in training and empowering practitioners across the field. Together, this creates a unique opportunity to bring policy makers, technical experts, and standards leaders into the same conversation, not in parallel but in alignment, because what’s needed now isn’t more activity; it’s connection, clarity, and coordination.

    👉 𝐏𝐨𝐥𝐢𝐜𝐲 𝐅𝐨𝐫𝐮𝐦 𝐨𝐧 𝐀𝐈 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐒𝐭𝐚𝐧𝐝𝐚𝐫𝐝𝐢𝐳𝐚𝐭𝐢𝐨𝐧. 𝐀𝐩𝐫𝐢𝐥 21 — 𝐖𝐚𝐬𝐡𝐢𝐧𝐠𝐭𝐨𝐧, 𝐃𝐂 - 𝐀𝐥𝐨𝐧𝐠𝐬𝐢𝐝𝐞 𝐭𝐡𝐞 𝐒𝐀𝐍𝐒 𝐀𝐈 𝐒𝐮𝐦𝐦𝐢𝐭. 𝐀 𝐜𝐥𝐨𝐬𝐞𝐝, 𝐢𝐧𝐯𝐢𝐭𝐞-𝐨𝐧𝐥𝐲 𝐠𝐚𝐭𝐡𝐞𝐫𝐢𝐧𝐠 𝐨𝐟 𝐬𝐞𝐥𝐞𝐜𝐭𝐞𝐝 𝐩𝐨𝐥𝐢𝐜𝐲 𝐬𝐭𝐚𝐤𝐞𝐡𝐨𝐥𝐝𝐞𝐫𝐬 𝐚𝐧𝐝 𝐬𝐭𝐚𝐧𝐝𝐚𝐫𝐝𝐢𝐳𝐚𝐭𝐢𝐨𝐧 𝐥𝐞𝐚𝐝𝐞𝐫𝐬, 𝐜𝐨𝐧𝐯𝐞𝐧𝐞𝐝 𝐛𝐲 𝐭𝐡𝐞 𝐎𝐖𝐀𝐒𝐏 𝐀𝐈 𝐄𝐱𝐜𝐡𝐚𝐧𝐠𝐞 𝐢𝐧 𝐩𝐚𝐫𝐭𝐧𝐞𝐫𝐬𝐡𝐢𝐩 𝐰𝐢𝐭𝐡 𝐒𝐀𝐍𝐒 𝐈𝐧𝐬𝐭𝐢𝐭𝐮𝐭𝐞.

    With policy makers and standards leaders from the National Institute of Standards and Technology (NIST), MITRE, IEEE, Cloud Security Alliance, and more. If you have ideas or suggestions, please share them in the comments. Thanks. Rob van der Veer Aruneesh Salhotra Disesdi Shoshana Cox

