
Working Group

AI Technology and Risk

Explore the latest AI tech, predict risks, and ensure innovation meets security in the realm of AI.
View Current Projects
AI Technology and Risk
The AI Technology and Risk Committee is focused on staying abreast of the latest technological advancements in AI while simultaneously identifying, understanding, and forecasting associated risks, threats, and vulnerabilities. This technical committee aims to act as both a knowledge hub and a proactive risk management entity, bridging the gap between innovation and security in the realm of AI.

Working Group Leadership

Josh Buker

Research Analyst, CSA

Working Group Co-Chairs

Mark Yanalitis

Chris Kirschke
Cloud Portfolio Information Security Officer at Albertsons Companies

Security leader with 20+ years of experience across financial services, streaming, retail, and IT services, with a heavy focus on cloud, DevSecOps, and threat modeling. Advises multiple security startups on product strategy, alliances, and integrations. Sits on multiple customer advisory boards, helping to drive security product roadmaps, integrations, and feature development. Avid hockey player, backpacker, and wine collector in his spare time.

Publications in Review | Open Until
Fully Homomorphic Encryption to CCM v4.0.1 Mapping | Jul 10, 2025
AICM to ISO 42001 Mapping | Jul 10, 2025
AICM Implementation Guidelines | Aug 06, 2025
Who can join?

Anyone can join a working group, whether you have years of experience or simply want to observe.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a publication that's nearly finished, or help author a publication from start to finish.

Virtual Meetings

Attend our next meeting. You can just listen in to decide if this group is a good fit for you, or you can choose to actively participate. During these calls we discuss current projects, as well as share ideas for new projects. This is a good way to meet the other members of the group. You can view all research meetings here.

Upcoming meetings of the CSA AI Technology & Risk Working Group (a biweekly recurring meeting):

Mon, July 21, 9:15am - 10:00am PDT
Mon, August 4, 9:15am - 10:00am PDT
Mon, August 18, 9:15am - 10:00am PDT
Mon, September 1, 9:15am - 10:00am PDT

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review here.

Fully Homomorphic Encryption to CCM v4.0.1 Mapping

Open Until: 07/10/2025

We are seeking input from industry and legal professionals with experience in cloud security and policy, and comment from F...

AICM to ISO 42001 Mapping

Open Until: 07/10/2025

The Cloud Security Alliance (CSA) invites public peer review of its draft mapping between the AI Controls Matrix (AICM) and ISO 42001.

AICM Implementation Guidelines

Open Until: 08/06/2025

The Cloud Security Alliance (CSA) invites public peer review of its draft Implementation Guidelines of the AI Controls Matrix (AICM).

Premier AI Safety Ambassadors

Premier AI Safety Ambassadors play a leading role in promoting AI safety within their organizations, advocating for responsible AI practices and pragmatic solutions to manage AI risks. Contact sales@cloudsecurityalliance.org to learn how your organization can participate and take a seat at the forefront of AI safety best practices.