Your AI Governance Is a Fire Alarm. Where Are the Sprinklers?

Six critical AI vulnerabilities disclosed in the past year reveal three distinct gaps most organizations haven't closed. Join Kiteworks and BigID to learn what they are and what to build next.

May 12 @ 8 AM PDT | 11 AM EDT

Presenter


Craig Pfister

VP of Sales Engineering, Kiteworks

Presenter


Chris Hoesly

Field CTO, BigID

Moderator


Patrick Spencer, Ph.D.

SVP Industry Research, Kiteworks

What You Will Learn

1. Why six AI data exfiltration vulnerabilities across Microsoft Copilot, Salesforce, Google Gemini, Grafana, and the OpenAI plugin ecosystem reveal three distinct security gaps — not one — and why most organizations are addressing only one of them

2. Why model-level AI guardrails failed in every documented case, and what data-layer governance looks like as an alternative — including per-operation access control, credential isolation, and tamper-evident audit trails

3. How to discover and classify the data your AI agents are actually touching — and why data discovery is the prerequisite for any AI governance program

4. What a joint data discovery and data-layer governance architecture looks like in practice — and three concrete steps to close the gaps this quarter, before the EU AI Act's high-risk provisions take effect in August 2026


A Few of Kiteworks' Thousands of Customers

© 2026 Kiteworks. All Rights Reserved.