Security Analytics - Podcast 05 - Adversarial Machine Learning
About this listen
These sources examine the security of deep neural networks, focusing on the identification and mitigation of adversarial attacks. The research highlights how evasion attacks exploit model vulnerabilities at deployment time, using subtle perturbations, imperceptible to humans, to cause misclassifications. To counter these threats, the authors propose formal verification frameworks that use mathematical optimization and reachability analysis to prove model robustness. Additionally, defensive strategies such as adversarial training and defensive distillation are shown to reduce a model's sensitivity to input variations. The literature emphasizes a critical trade-off between a system's computational scalability, its mathematical completeness, and its overall accuracy. Ultimately, these works organize existing defense methodologies into a structured taxonomy to guide future developments in AI security.
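To make the idea of an evasion attack concrete, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM), applied to a toy logistic classifier. The model, weights, and epsilon value are illustrative assumptions, not taken from the episode: the attacker perturbs the input in the direction that increases the model's loss, with each feature shifted by at most epsilon.

```python
import numpy as np

# Toy logistic classifier (assumed model for illustration only).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # Gradient of the loss w.r.t. the INPUT x (not the weights);
    # for logistic regression this is (p - y) * w.
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    # Step in the direction that increases the loss, bounded by
    # eps per feature (an L-infinity constraint).
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # assumed trained weights
x = np.array([0.2, -0.4, 0.1])   # clean input, true label y = 1
y = 1.0

x_adv = fgsm(w, x, y, eps=0.25)
# The per-feature change is small, yet the loss on the
# perturbed input is strictly higher than on the original.
print(loss(w, x_adv, y) > loss(w, x, y))  # True
```

Adversarial training, one of the defenses mentioned above, essentially folds such perturbed examples back into the training set so the model learns to classify them correctly.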