Stefano Bennati: dealing with privacy threats in generative AI
Dr. Stefano Bennati has over a decade of experience developing tools and processes to ensure data-centric organizations comply with privacy, licensing, and responsible AI standards.
Our guest has led privacy engineering and responsible AI teams with a global mandate. His work includes building and deploying compliance tools such as code scanners and AI-powered algorithms for personal data detection and anonymization, as well as designing processes for compliance with the ISO 27701 Privacy IMS and ISO 42001 AI IMS standards. His latest work is a book, co-authored with Dr. Engin Bozdağ, titled “AI Governance: Secure, privacy-preserving, ethical systems”, a practical and accessible guide to governing AI and mitigating security, privacy, ethics, and regulatory risks.
References:
* Dr. Stefano Bennati on LinkedIn
* AI Governance: Secure, privacy-preserving, ethical systems (Engin Bozdağ, Stefano Bennati) - Use this code to get a 50% discount between November 25 and December 9, 2025: MLBozdag.
* Lokke Moerel: using personal data in the development and deployment of AI models (Masters of Privacy, December 2024)
* An overview of machine unlearning (Chunxiao Li et al., 2025)
* Discussion Paper: Large Language Models and Personal Data (Hamburgische Beauftragte für Datenschutz und Informationsfreiheit)
* EDPB Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe