• Helping Seniors Avoid Digital Scams, One Click at a Time
    Jul 24 2025

    Alexandria “Lexi” Lutz is a privacy attorney and the Founder of Opt-Inspire, Inc., a nonprofit dedicated to helping seniors and youth build digital confidence and avoid online scams. By day, she serves as Senior Corporate Counsel at Nordstrom, advising on privacy, cybersecurity, and AI across the retail and technology landscape.

    In this episode…

    Online scams are becoming more sophisticated, targeting older adults with devastating financial consequences that often reach tens of thousands of dollars with little recourse. From tech support fraud to AI-driven deepfakes that mimic loved ones’ voices, these scams prey on isolation, fear, and digital inexperience. Many families struggle to protect their aging parents and grandparents, especially when conversations about digital risks are met with resistance from loved ones. How can we bridge the digital literacy gap across generations and empower seniors to navigate these evolving threats?

    The urgency is real. In 2024, seniors lost nearly $5 billion to scams, a 43 percent increase from the previous year. Scammers are using voice cloning, fake emergencies, and fear-based messaging to pressure people into giving up money or sensitive personal information. Education can be a powerful defense, and that's why Opt-Inspire delivers engaging, in-person, volunteer-led workshops tailored to senior living communities, teaching practical skills like recognizing fake emails and enabling two-factor authentication while addressing both technical literacy and emotional manipulation tactics. Through scripts, visuals, and a "Make It Personal" toolkit with conversation starters, Opt-Inspire also equips families with resources to discuss digital safety with loved ones in a constructive and relatable way.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Alexandria (Lexi) Lutz, Senior Corporate Counsel at Nordstrom and Founder of Opt-Inspire, about building digital confidence among seniors. Lexi shares how a personal family experience inspired her to launch a nonprofit focused on preventing elder fraud. She delves into the most common scams targeting older adults today, including government impersonation, romance cons, and AI-generated deepfakes. Lexi emphasizes the importance of proactive education, enabling two-factor authentication, and weekly family check-ins. She also offers practical advice and resources for privacy professionals and family members alike who want to make a positive impact.

    40 mins
  • Real AI Risks No One Wants To Talk About And What Companies Can Do About Them
    Jul 17 2025

    Anne Bradley is the Chief Customer Officer at Luminos. Anne helps in-house legal, tech, and data science teams use the Luminos platform to automate and manage AI risk, compliance, and approval processes, statistical testing, and legal documentation. Anne also serves on the Board of Directors of the Future of Privacy Forum, a nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies.

    In this episode…

    AI is being integrated into everyday business functions, from diagnosing cancer to translating conversations and powering customer service chatbots and autonomous vehicles. While these tools deliver value, they also bring privacy, security, and ethical risks. As organizations dive into adopting AI tools, they often do so before performing risk assessments, establishing governance, and implementing privacy and security guardrails. Without safeguards and internal processes in place, companies may not fully understand how the tools function, what data they collect, or the risk they carry. So, how can companies efficiently assess and manage AI risk as they rush to deploy new tools?

    Managing AI risk requires governance and the ability to test AI tools before deploying them. That’s why companies like Luminos provide a platform that helps organizations manage and automate AI risk, compliance, and approval processes, model testing, and legal documentation. The platform allows teams to check for toxicity, hallucinations, and AI bias, even when an organization uses high-risk tools like customer-facing chatbots. Embedding practical controls, like pre-deployment testing and assessing vendor risk early, can also help organizations implement AI tools safely and ethically.
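
    To make the idea of pre-deployment testing concrete, here is a minimal sketch of what an output-screening harness can look like. It is illustrative only and assumes a hypothetical model client and banned-phrase policy; it is not Luminos's platform or API.

        # Hypothetical pre-deployment test harness for a customer-facing chatbot.
        # Illustrative only: the model function and banned-phrase list are
        # stand-ins for a real model client and real review policies.
        from typing import Callable, Dict, List

        def run_pre_deployment_suite(
            model_fn: Callable[[str], str],
            prompts: List[str],
            banned_phrases: List[str],
        ) -> List[Dict[str, str]]:
            """Run each test prompt through the model and flag risky outputs."""
            findings = []
            for prompt in prompts:
                output = model_fn(prompt)
                for phrase in banned_phrases:
                    if phrase.lower() in output.lower():
                        findings.append({"prompt": prompt, "output": output, "flag": phrase})
            return findings

        # Stub model for demonstration; a real harness would call the chatbot under test.
        stub_model = lambda prompt: "You should cancel your account."
        for finding in run_pre_deployment_suite(
            stub_model,
            prompts=["How do I reset my password?"],
            banned_phrases=["cancel your account"],
        ):
            print(f"FLAGGED: {finding['prompt']!r} -> {finding['flag']!r}")

    A real suite would swap the banned-phrase check for statistical tests of toxicity, hallucination, and bias across many prompts, but the shape of the loop is the same: fixed inputs, recorded outputs, and flags routed to human review.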

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Anne Bradley, Chief Customer Officer at Luminos, about how companies can assess and mitigate AI risk. Anne explains the impact of deepfakes on public trust and the need for a regulatory framework to reduce harm. She shares why AI governance, AI use-case risk assessments, and statistical tools are essential for helping companies monitor outputs, reduce unintended consequences, and make informed decisions about high-risk AI deployments. Anne also highlights why it’s important for legal and compliance teams to understand business objectives driving an AI tool request before evaluating its risk.

    37 mins
  • Privacy in the Loop: Why Human Training Is AI’s Greatest Weakness and Strength
    Jul 10 2025

    Nick Oldham is the Chief Operations Officer, USIS, and Global Chief Risk, Privacy and Compliance Officer at Equifax Inc. A forward-thinking legal and operations executive, Nick has a proven track record of driving large-scale transformations by integrating legal expertise with strategic operational leadership. He oversees all enterprise-wide second-line functions, leading initiatives to embed AI, enable data-driven decision-making, and deliver innovative, compliant solutions across a $1.9B business unit. His focus is on building efficient, scalable systems that align with both compliance standards and long-term strategic goals.

    In this episode…

    Many companies are rushing to adopt AI tools without adequately training their workforce on how to use them responsibly. As AI becomes embedded in daily business operations, the biggest risk isn’t the technology itself, but the lack of human understanding around how AI works and what it can do. When teams struggle to understand the differences between machine learning and generative AI, it creates risks and makes it harder to establish appropriate privacy and security guardrails. Human training is AI's greatest weakness and strength, and closing that gap involves rethinking how companies educate and train employees at every level.

    The responsible use of AI depends on human judgment. Companies need to embed privacy education, critical thinking, and AI risk awareness into training programs from the start. Employees should be taught how to ask questions, evaluate model behavior, and recognize when personal information is being misused. AI literacy should also extend beyond the workplace. Introducing it in high school or even earlier helps prepare future professionals to navigate complex AI tools and make thoughtful, responsible decisions.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Nick Oldham, Chief Operations Officer, USIS, and Global Chief Risk, Privacy and Compliance Officer at Equifax, about the role of human training in AI literacy. Nick breaks down the components of AI literacy, explains why everyone needs a foundational understanding, and emphasizes the importance of prioritizing privacy awareness when using AI tools. He also highlights ways to embed privacy and security into AI governance programs and provides actionable steps organizations can take to strengthen AI literacy across teams.

    28 mins
  • Where Strategy Meets Reality in AI Governance
    Jul 3 2025

    Andrew Clearwater is a Partner on Dentons’ Privacy and Cybersecurity Team and a recognized authority in privacy and AI governance. Formerly a founding leader at OneTrust, he oversaw privacy and AI initiatives, contributed to key data protection standards, and holds over 20 patents. Andrew advises businesses on responsible tech implementation, helping them navigate global regulations in AI, data privacy, and cybersecurity. A frequent speaker, he offers insight into emerging compliance challenges and ethical technology use.

    In this episode…

    Many companies are diving into AI without first putting governance in place. They often move forward without defined goals, leadership, or alignment across privacy, security, and legal teams. This leads to confusion about how AI is being used, what risks it creates, and how to manage those risks. Without coordination and structure, programs lose momentum, transactions are delayed, and expectations become harder to meet. So how can companies build a responsible AI governance program?

    Building an effective AI governance program starts with knowing what’s in use, why it’s in use, what data AI tools and systems collect, what risks they create, and how to manage them. Standards like ISO 42001 and the NIST AI Risk Management Framework help guide this process. ISO 42001 offers the benefit of certification and supports cross-functional consistency, while the NIST framework may be better suited for organizations already using NIST frameworks in related areas. Both help companies define the scope of AI use cases, understand the risks, and inform policies before jumping into controls. Conducting data inventories and utilizing existing risk management processes are also essential for identifying shadow AI introduced by employees or third-party vendors.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Andrew Clearwater, Partner at Dentons, about how companies can build responsible AI governance programs. Andrew explains how standards and legal frameworks support consistent AI governance implementation and how to encourage alignment between privacy, security, legal, and ethics teams. He also outlines the importance of monitoring shadow AI across third-party vendors and practical steps companies can take to effectively structure their AI governance programs.

    29 mins
  • Endpoints-on-Wheels: Protecting Company and Employee Data in Cars
    Jun 26 2025

    Merry Marwig is the VP Global Communications & Advocacy at Privacy4Cars. Merry is a pro-consumer, pro-business privacy advocate who is optimistic about what data privacy rights mean for everyday people — and for the companies they do business with. At Privacy4Cars, she helps protect drivers’ and passengers’ personal data while creating business opportunities for automotive companies.

    In this episode…

    Modern cars are like computers on wheels, collecting and storing data just like smartphones or laptops. Unlike those devices, however, vehicle data is often left unencrypted and persists long after a car is sold, rented, or reassigned. This is especially problematic for businesses that use corporate cars, rental vehicles, fleet vehicles, or personal vehicles for work purposes. Sensitive information such as contact lists, text messages, navigation history, and even security credentials can remain stored in vehicles long after they change hands, posing significant privacy, security, and even physical safety risks.

    To take control of sensitive data, companies need to establish data deletion policies for all vehicles used in a business context. This includes requiring rental agencies and fleet management providers to delete stored data and issue certificates of deletion when cars are returned or decommissioned. Companies should also require automotive providers to supply VIN-specific data disclosures so drivers understand what data a vehicle collects and how it's used and shared. Additionally, companies need to consider how privacy regulations like the GDPR and CCPA apply to vehicle data collection, and use that analysis to inform their internal policies and third-party contracts.

    In today’s episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Merry Marwig, VP Global Communications & Advocacy at Privacy4Cars, about the privacy and security risks of data collected and stored in vehicles. Merry explains how cars used for work, whether rental, fleet, or personal, retain unencrypted personal and company data that can be exploited when vehicles change ownership or are decommissioned. She shares real-world case studies involving sensitive information left behind in cars, including banking credentials, contact lists, and patient health records. Merry also outlines how data deletion policies and VIN-specific disclosures, required through contracts with automotive providers, help companies reduce privacy and security risks.

    39 mins
  • Agentic AI for Software Security: Eliminate More Vulnerabilities, Triage Less
    Jun 18 2025

    Ian Riopel is the CEO and Co-founder of Root, applying agentic AI to fix vulnerabilities instantly. A US Army veteran and former Counterintelligence Agent, he’s held roles at Cisco, CloudLock, and Rapid7. Ian brings military-grade security expertise to software supply chains.

    John Amaral is the CTO and Co-founder of Root. Previously, he scaled Cisco Cloud Security to $500M in revenue and led CloudLock to a $300M acquisition. With five exits behind him, John specializes in building cybersecurity startups with strong technical vision.

    In this episode…

    Patching software vulnerabilities remains one of the biggest security challenges for many organizations. Security teams are often stretched thin as they try to keep up with vulnerabilities that can quickly be exploited. Open-source components and containerized deployments add even more complexity, especially when updates risk breaking production systems. As compliance requirements tighten and the volume of vulnerabilities grows, how can businesses eliminate software security risks without sacrificing productivity?

    Companies like Root are transforming software vulnerability remediation by applying agentic AI to streamline the process. Rather than relying on engineers to triage and prioritize thousands of issues, Root’s AI-driven platform scans container images, applies safe patches where available, and generates custom patches for outdated components that lack official fixes. Root's AI automation resolves 95 percent or more of vulnerabilities without breaking production systems, allowing organizations to meet compliance requirements while developers stay focused on building and delivering software.
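
    As a rough illustration of the triage logic described above (a sketch of the general pattern, not Root's actual implementation), a remediation pass over scan results might split findings into safe version bumps and candidates for custom patches:

        # Illustrative remediation triage: hard-coded findings stand in for the
        # output of a real container image scanner.
        from dataclasses import dataclass
        from typing import Dict, List, Optional

        @dataclass
        class Vulnerability:
            package: str
            installed: str
            fixed_version: Optional[str]  # None when no official fix exists

        def plan_remediation(vulns: List[Vulnerability]) -> Dict[str, List[str]]:
            """Split findings into safe version bumps and custom-patch candidates."""
            plan: Dict[str, List[str]] = {"bump": [], "custom_patch": []}
            for v in vulns:
                if v.fixed_version:
                    plan["bump"].append(f"{v.package}: {v.installed} -> {v.fixed_version}")
                else:
                    plan["custom_patch"].append(f"{v.package} {v.installed} (no upstream fix)")
            return plan

        scan = [
            Vulnerability("openssl", "1.1.1k", "1.1.1l"),
            Vulnerability("legacy-lib", "0.9.2", None),
        ]
        for action, items in plan_remediation(scan).items():
            for item in items:
                print(action, "->", item)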

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Ian Riopel and John Amaral, Co-founders of Root, about how agentic AI streamlines software vulnerability remediation. Together, they explain how Root’s platform uses specialized agents to automate patching while maintaining software stability. John and Ian also discuss how regulations and compliance pressures are driving the need for faster remediation, how Root differs from threat detection solutions, and how AI can reduce security workloads without replacing human expertise.

    29 mins
  • Operationalizing Privacy Across Teams, Tools, and Tech
    Jun 12 2025

    Sarah Stalnecker is the Global Privacy Director at New Balance Athletics, Inc., where she leads the integration of privacy principles across the organization, driving awareness and compliance through education, streamlined processes, and technology solutions.

    In this episode…

    Operationalizing privacy programs starts with translating legal requirements into actions that work across teams. This means aligning privacy with existing tools and workflows while meeting evolving privacy regulations and adapting to new technologies. Today’s consumers also demand both personalization and privacy, and building trust means fulfilling these expectations without crossing the line. So, how can companies build a privacy program that meets regulatory requirements, integrates into daily operations, and earns consumer trust?

    Embedding privacy into business operations involves more than meeting regulatory requirements. It requires cultural change, leadership buy-in, and teamwork. Rather than forcing company teams to adapt to new privacy processes, organizations need to embed privacy requirements into the existing workflows and systems that departments already use. Leading with consumer expectations instead of legal mandates helps shift mindsets and encourages collaborative dialogue about responsible data use. Documenting AI use cases and establishing an AI governance program also help teams assess risks proactively rather than scrambling to react. Finally, teams should leverage privacy technology to scale processes and streamline compliance, ensuring privacy becomes an embedded, organization-wide function rather than a siloed concern.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Sarah Stalnecker, Global Privacy Director at New Balance Athletics, about operationalizing privacy programs. Sarah shares how her team approaches data collection, embeds privacy into existing workflows, and uses consumer expectations to drive internal engagement. She also highlights the importance of documenting AI use cases and establishing AI governance to assess risk. Sarah provides tips on selecting and evaluating privacy technology and how to measure privacy program success beyond traditional metrics.

    28 mins
  • Outsmarting Threats: How AI is Changing the Cyber Game
    Jun 5 2025

    Brett Ewing is the Founder and CEO of AXE.AI, a cutting-edge cybersecurity SaaS start-up, and the Chief Information Security Officer at 3DCloud. He has built a career in offensive cybersecurity, focusing on driving exponential improvement. Brett progressed from a Junior Penetration Tester to Chief Operating Officer at Strong Crypto, a provider of cybersecurity solutions.

    He brings over 15 years of experience in information technology, with the past six years focused on penetration testing, incident response, advanced persistent threat simulation, and business development. He holds degrees in secure systems administration and cybersecurity, and is currently completing a master's degree in cybersecurity, with a focus on AI/ML security, at the SANS Technology Institute. Brett also holds more than a dozen certifications in IT, coding, and security from the SANS Institute, CompTIA, AWS, and other industry vendors.

    In this episode…

    Penetration testing plays a vital role in cybersecurity, but the traditional manual process is often slow and resource-heavy. Traditional testing cycles can take weeks, creating gaps that leave organizations vulnerable to fast-moving threats. With growing interest in more efficient approaches, organizations are exploring new AI tools to automate tasks like tool configuration, project management, and data analysis. How can cybersecurity teams use AI to test environments faster without increasing risk?

    AXE.AI offers an AI-powered platform that supports ethical hackers and red teamers by automating key components of the penetration testing process. The platform reduces overhead by configuring tools, analyzing output, and building task lists during live engagements. This allows teams to complete high-quality tests in days instead of weeks. AXE.AI’s approach supports complex environments, improves data visibility for testers, and scales efficiently across enterprise networks. The company emphasizes a human-centered approach and advocates for workforce education and training as a foundation for secure AI adoption.
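
    To make the "analyzing output and building task lists" idea concrete, here is a small hypothetical sketch (not AXE.AI's platform) that turns scanner findings, in an invented host/port/service format, into a per-host task list for testers:

        # Illustrative only: the input format and follow-up mapping are invented
        # stand-ins for real penetration testing tool output and methodology.
        import csv
        from collections import defaultdict

        ROWS = [
            "host,port,service",
            "10.0.0.5,22,ssh",
            "10.0.0.5,80,http",
            "10.0.0.9,3389,rdp",
        ]

        # Hypothetical mapping from a discovered service to a follow-up task.
        FOLLOW_UPS = {
            "ssh": "check for weak credentials and outdated server versions",
            "http": "enumerate directories and test authentication flows",
            "rdp": "verify network-level authentication is enforced",
        }

        def build_task_list(rows):
            """Group follow-up tasks by host from parsed scanner findings."""
            tasks = defaultdict(list)
            for row in csv.DictReader(rows):
                note = FOLLOW_UPS.get(row["service"], "manually review this service")
                tasks[row["host"]].append(f"port {row['port']} ({row['service']}): {note}")
            return tasks

        for host, items in build_task_list(ROWS).items():
            print(host)
            for item in items:
                print("  -", item)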

    In today’s episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Brett Ewing, Founder and CEO of AXE.AI, about leveraging AI for offensive cybersecurity. Brett explains how AXE.AI’s platform enhances penetration testing and improves speed and coverage for large-scale networks. He also shares how AI is changing both attack and defense strategies, highlighting the risks posed by large language models (LLMs) and deepfakes, and explains why investing in continuous workforce training remains the most important cyber defense for companies today.

    22 mins