
Practical DevSecOps


By: Varun Kumar

About this listen

Practical DevSecOps (a Hysn Technologies Inc. company) offers vendor-neutral and hands-on DevSecOps and Product Security training and certification programs for IT Professionals. Our online training and certifications are focused on modern areas of information security, including DevOps Security, AI Security, Cloud-Native Security, API Security, Container Security, Threat Modeling, and more.



© 2025 Practical DevSecOps
Education
Episodes
  • MITRE ATLAS Framework - Securing AI Systems
    Jul 10 2025

    Welcome to a crucial episode where we delve into the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework, a comprehensive knowledge base designed to help secure our increasingly AI-dependent world.

    As AI and machine learning become foundational across healthcare, finance, and cybersecurity, protecting these systems from unique threats is paramount.

    Unlike MITRE ATT&CK, which focuses on traditional IT systems, MITRE ATLAS is specifically tailored for AI-specific risks, such as adversarial inputs and model theft. It provides a vital resource for understanding and defending against the unique vulnerabilities of AI systems.

    In this episode, we'll break down the core components of MITRE ATLAS:

    Tactics: These are the high-level objectives of attackers – the "why" behind their actions.

    MITRE ATLAS outlines 14 distinct tactics that attackers use to compromise AI systems, including Reconnaissance (gathering information on the AI system), Initial Access (gaining entry into the AI environment), ML Model Access (gaining some level of access to the machine learning model itself), Persistence (maintaining access over time), Privilege Escalation (obtaining higher-level permissions), and Defense Evasion (bypassing security controls).

    Other tactics include Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, Impact, and ML Attack Staging.

    Techniques: These are the specific methods and actions adversaries use to carry out their tactics – the "how". We'll explore critical techniques like Data Poisoning, where malicious data is introduced into training sets to alter model behaviour; Prompt Injection, where crafted inputs manipulate language models into producing harmful outputs; and Model Inversion, which reconstructs sensitive training data from a model's outputs.

    Other key techniques to watch out for include Model Extraction (reverse-engineering or stealing proprietary AI models) and Adversarial Examples (subtly altered inputs that trick AI models into making errors).
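    To make techniques like Adversarial Examples concrete, here is a minimal, hypothetical sketch against a toy logistic-regression classifier. The weights, input, and (deliberately exaggerated) perturbation size are illustrative assumptions, not material from the episode or from MITRE ATLAS.

    ```python
    import numpy as np

    # Toy "trained model": a logistic regression with hypothetical weights.
    w = np.array([2.0, -1.0])
    b = 0.1

    def predict_proba(x):
        """Probability that input x belongs to class 1."""
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    x = np.array([0.5, 0.2])      # a benign input, classified as class 1

    # For this linear model, the gradient of the class-1 score w.r.t. the
    # input is simply w. An FGSM-style attack steps against that gradient.
    epsilon = 0.6                  # exaggerated here so the flip is visible
    x_adv = x - epsilon * np.sign(w)

    print(predict_proba(x))        # ≈ 0.71: confidently class 1
    print(predict_proba(x_adv))    # ≈ 0.29: flipped to class 0
    ```

    In practice the perturbation is kept small enough to be imperceptible; the point is that the gradient tells the attacker exactly which direction to nudge each input feature.
    
    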

    We'll also examine real-world case studies, such as the Evasion of a Machine Learning Malware Scanner (Cylance Bypass), where attackers used reconnaissance and adversarial input crafting to bypass detection by studying public documentation and model APIs.

    Another notable example is the OpenAI vs. DeepSeek Model Distillation Controversy, highlighting the risks of model extraction and intellectual property theft by extensively querying the target model.

    To safeguard AI systems, MITRE ATLAS emphasizes robust security controls and best practices. Key mitigation strategies include:

    Securing Training Pipelines to protect data integrity and restrict access to prevent poisoning or extraction attempts.

    Continuously Monitoring Model Outputs for anomalies indicating adversarial manipulation or extraction attempts.

    Validating Data Integrity through regular audits of datasets and model behaviour to detect unexpected changes or suspicious activity.
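    As a hedged illustration of the second point – continuously monitoring model outputs – the sketch below flags a shift in average prediction confidence between a baseline window and recent traffic. The threshold, window sizes, and numbers are invented for illustration; real monitoring would use proper drift statistics.

    ```python
    import numpy as np

    # Toy output-monitoring sketch: alert when recent confidence scores
    # drift away from a baseline window. Threshold is an assumption.
    def drift_alert(baseline, recent, max_shift=0.15):
        """True when mean confidence shifts more than max_shift from baseline."""
        return bool(abs(np.mean(recent) - np.mean(baseline)) > max_shift)

    baseline = np.array([0.91, 0.88, 0.93, 0.90, 0.89])  # normal operation
    recent = np.array([0.55, 0.62, 0.58, 0.60, 0.57])    # possible manipulation

    print(drift_alert(baseline, recent))  # True: investigate these outputs
    ```
    
    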

    Join us as we discuss how the MITRE ATLAS Framework transforms AI security, providing practical guidance to defend against the evolving threat landscape.

    You'll learn why it's crucial for every organization to embrace this framework, contribute to threat intelligence, and engage with the wider AI security community to secure AI as a tool of innovation, not exploitation.

    The Certified AI Security Professional Course comprehensively covers the MITRE ATLAS Framework, offering practical experience to implement these defences effectively.

    17 mins
  • Best AI Security Books in 2025
    Jun 20 2025

    Are you ready to face the escalating threat of AI attacks? AI system attacks are hitting companies every single day. Hackers use AI tools to break into major banks and steal millions. It's a critical time for anyone in tech or cybersecurity to understand how to fight back.

    In this episode, we delve into why AI security is more crucial than ever in 2025. We reveal that 74% of IT security professionals say AI-powered threats are seriously hurting their companies, and a staggering 93% of businesses expect to face AI attacks daily this year.

    These aren't just minor incidents; last year, 73% of organizations were hit by AI-related security breaches, costing an average of $4.8 million each time, with attacks taking an alarming 290 days to even detect.

    The good news? Companies are desperately seeking individuals with AI security expertise, offering excellent opportunities for those who are prepared. We discuss how AI security books serve as your secret weapon, providing proven strategies directly from real security experts who have battled actual AI attacks.

    We'll touch upon some top resources available, covering everything from:

    • Understanding and protecting against Large Language Model (LLM) security threats.
    • Practical applications of LLMs for building smart systems.
    • Developing your own LLMs from scratch.
    • Defending against sophisticated adversarial AI attacks, including prompt injection and model poisoning.
    • Navigating AI data privacy, ethics, and regulatory compliance.
    • Advanced techniques like AI red teaming to systematically assess and enhance security.

    Whether you're a beginner looking to understand the basics or an expert aiming for cutting-edge strategies, finding the right learning path in AI cybersecurity is essential. Don't wait – AI threats are growing stronger every day. Tune in to discover how to upskill and become an AI security expert, building solid skills step by step for career development success.

    Ready to go further? Our Certified AI Security Professional Course offers an in-depth exploration of AI risks. It combines the best book knowledge with hands-on practice, allowing you to work on real AI security system attacks and learn directly from industry experts.

    Enroll today and upskill with the Certified AI Security Professional certification. Plus, for a limited time, you can save 15% on this course – buy it now and start whenever you're ready!

    13 mins
  • Threat Modeling for Medtech Industry
    Jun 18 2025

    Join us for an insightful episode as we delve into the critical realm of product security within the Medtech industry. The digital revolution is transforming patient care, but it also introduces significant security risks to medical devices.

    We'll explore the complex security environment where devices like pacemakers and diagnostic systems are increasingly connected, making them targets for unauthorised access, data theft, and operational manipulation.

    Discover how breaches can lead to dire consequences, from endangering patient health and damaging manufacturers' reputations, to incurring financial losses and navigating stricter regulatory hurdles.

    Learn about the types of medical devices most susceptible to cyber threats, including those with connectivity, remote access features, legacy systems, sensitive data storage (PHI), and life-sustaining equipment.

    Our focus shifts to threat modelling – a crucial, proactive process for enhancing medical device security.

    We'll uncover its immense benefits, such as identifying and addressing risks, boosting device resilience against cyberattacks, and ensuring regulatory adherence.

    We'll also touch upon the FDA's recent policy update, transitioning from the Quality System Regulation (QSR) to the Quality Management System Regulation (QMSR), which now incorporates ISO 13485:2016 standards, highlighting a greater emphasis on risk management throughout the device lifecycle.

    Dive deep into various threat modelling techniques that help manufacturers fortify their products:

    Agile Threat Modeling: Integrating security with rapid development cycles, ensuring continuous assessments aligned with ongoing development.

    Goal-Centric Threat Modeling: Prioritizing protection for critical assets and business objectives based on impact on functionalities and compliance requirements.

    Library-Centric Threat Modeling: Utilizing pre-compiled lists of known threats and vulnerabilities pertinent to medical devices for standardized risk assessment, enhancing scalability and efficiency.
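    The library-centric approach can be sketched as a lookup from device features into a pre-compiled threat list. The feature names and threat entries below are hypothetical examples, not an authoritative medical-device threat catalogue.

    ```python
    # Hypothetical pre-compiled threat library mapping device features to
    # known threats, in the spirit of library-centric threat modelling.
    THREAT_LIBRARY = {
        "wireless_connectivity": ["unauthorised remote access", "traffic interception"],
        "phi_storage": ["data theft", "privacy-regulation violations"],
        "legacy_os": ["unpatched known vulnerabilities"],
    }

    def threats_for(device_features):
        """Return the library threats relevant to a device's feature set."""
        return sorted(
            threat
            for feature in device_features
            for threat in THREAT_LIBRARY.get(feature, [])
        )

    # e.g. a connected infusion pump that also stores patient records:
    print(threats_for(["wireless_connectivity", "phi_storage"]))
    # ['data theft', 'privacy-regulation violations', 'traffic interception',
    #  'unauthorised remote access']
    ```

    Because the library is pre-compiled, every device is assessed against the same standardized list, which is what gives this technique its scalability.
    
    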

    Finally, we'll discuss how specialized training, such as the Practical DevSecOps Certified Threat Modeling Professional (CTMP) course, equips Medtech manufacturers with the essential skills to proactively identify and address security vulnerabilities.

    This training focuses on real-world applications and scenarios, ensuring continuous security assessment and compliance with stringent regulatory standards from design to deployment.

    Tune in to understand why threat modelling is not just a best practice, but an essential component for safeguarding patient well-being and maintaining integrity in the digital healthcare landscape.

    5 mins
