• An Interview with Mert Çuhadaroğlu
    Dec 22 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Mert Çuhadaroğlu, Program Manager of BABL AI’s AI & Algorithm Auditor Certification Program, for an in-depth conversation about careers in AI governance, responsible AI, and what it really takes to become an AI auditor. Mert shares his unique professional journey, from banking and finance, to career coaching and publishing, to becoming a leading figure in AI ethics and auditing. Now based in Istanbul, Mert plays a critical role in guiding and evaluating BABL AI certification students, including reviewing capstone projects and supporting professionals from a wide range of backgrounds.
    Together, Shea and Mert discuss:
    – What makes BABL AI’s AI & Algorithm Auditor Certification different from other AI governance programs
    – Whether you need a technical background to succeed in AI auditing
    – The real-world demand for AI auditors and AI governance professionals
    – Common career paths for certification graduates
    – What students actually do in the capstone project (including LLM and generative AI use cases)
    – How BABL AI’s certifications compare to other industry credentials
    – An overview of BABL AI’s additional certification programs, including EU AI Act Quality Management Systems, AI Governance for Business Professionals, and AI for Legal Professionals
    This episode is both a behind-the-scenes look at BABL AI’s training philosophy and a practical guide for anyone considering a career in AI assurance, audit, or governance. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    35 mins
  • Diving into the AI Compliance Officer
    Dec 8 2025
    What does a Chief AI Compliance Officer actually do, and does your organization secretly need one already? 🤔 In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by co-hosts Jeffery Recker and Bryan Ilg to unpack what it really takes to own AI risk, compliance, and governance inside a modern organization. Drawing on BABL AI’s AI Compliance Officer Program and years of audit work, they break down the real pain points leaders are facing and how to move from confusion to a concrete plan. Whether you’ve just been handed “AI compliance” on top of your day job, or you’re building AI products and worried about regulations, this one’s for you.
    In this episode, they discuss:
    – What a Chief AI Compliance Officer role looks like in practice: why it often lands on general counsel, chief compliance officers, or chief AI officers, and why this work can’t be owned by one person alone
    – The 3-part structure of BABL AI’s AI Compliance Officer Program: AI foundations (governance, AI management systems, policies, procedures, and documentation); fractional AI Compliance Officer support (ongoing access to BABL’s research and audit team); and continuous monitoring & measurement (keeping up with self-learning, changing AI systems over time)
    – How to build an AI system inventory and triage risk: a simple rubric for identifying high-, medium-, and low-risk AI systems; when to treat a system as “high risk” by default; and why simplicity is the antidote to feeling overwhelmed
    – Key AI risks every organization should know about: data poisoning and how malicious instructions can sneak into your systems; shadow AI (employees using unapproved tools like personal ChatGPT accounts); model and data drift, and why “it worked when we launched it” isn’t good enough; and how these risks connect to reputation, regulatory exposure, and business strategy
    – Why governance, risk & compliance (GRC) is not a “brake” on innovation: how good governance actually lets you move faster and more confidently, and the value of a “SWAT team” style AI compliance function vs. going it alone
    Who should watch or listen? General counsel, chief compliance officers, and chief risk officers; chief AI, data, and technology leaders; product owners building AI-powered tools; and anyone who’s just been told: “You’re now responsible for AI compliance.” 🫠 Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
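    The inventory-and-triage idea discussed in this episode can be sketched as a small scoring function. This is an illustrative sketch only: the risk factors, thresholds, and tier names below are assumptions for demonstration, not BABL AI’s actual rubric.

    ```python
    # Hypothetical AI-system risk triage rubric (illustrative factors and thresholds).
    from dataclasses import dataclass

    @dataclass
    class AISystem:
        name: str
        affects_people: bool     # output influences decisions about individuals
        regulated_domain: bool   # e.g., hiring, credit, health
        self_learning: bool      # model changes after deployment (drift risk)
        shadow_deployment: bool  # unapproved tool outside governance ("shadow AI")

    def triage(system: AISystem) -> str:
        """Return a 'high', 'medium', or 'low' risk tier for an inventoried system."""
        # Treat systems in regulated domains as high risk by default,
        # echoing the episode's "high risk by default" point.
        if system.regulated_domain:
            return "high"
        score = sum([system.affects_people, system.self_learning, system.shadow_deployment])
        if score >= 2:
            return "high"
        return "medium" if score == 1 else "low"

    # A tiny example inventory.
    inventory = [
        AISystem("resume screener", True, True, False, False),
        AISystem("marketing copy assistant", False, False, False, True),
        AISystem("internal document search", False, False, False, False),
    ]
    for s in inventory:
        print(f"{s.name}: {triage(s)}")
    ```

    The point of keeping the rubric this simple is the one made in the episode: a crude but consistent triage beats feeling too overwhelmed to start.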
    43 mins
  • Implementing AI into Your Career
    Nov 24 2025
    In this follow-up to our episode on AI, training, and the job market, BABL AI CEO Dr. Shea Brown is joined again by COO Jeffery Recker and Chief of Staff Emily Brown to get practical about one big question: how do you actually implement AI into your career without losing yourself (or your job) in the process? Whether you’re secure in your role, worried about layoffs, or actively changing careers, this episode focuses on tactical, realistic steps you can start taking this week.
    🎧 In this episode, we cover:
    – How to start using large language models (LLMs) and agents in your day-to-day work
    – Concrete examples for roles like lawyers, accountants, marketers, operations, HR, teachers, and journalists
    – What to do if your manager or organization is afraid of AI (data leaks, reputation risk, etc.)
    – How to avoid “AI slop” and become the person who provides clear, minimal, high-value outputs
    – A practical plan if you’ve been laid off or see layoffs coming: a dual-track job search plus an AI pivot
    – Using AI ethically for resumes, ATS filters, and video interviews, without fabricating experience
    – Why you should make an “AI inventory” of tools already in your life (spoiler: it’s more than you think)
    – How to set boundaries with AI so it augments your work, not your identity or mental health
    – Mindset shifts for people who don’t feel “technical” but still need to adapt
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    47 mins
  • AI, Training & the Job Market
    Nov 10 2025
    In this latest episode of Lunchtime BABLing, hosted by BABL AI CEO Dr. Shea Brown with COO Jeffery Recker and, making her first appearance, Chief of Staff Emily Brown, we dig into what today’s AI-shaped job market really means for knowledge workers, how to build durable skills, and why “human in the loop” still matters, especially in marketing, ops, and hiring.
    🎧 What you’ll learn:
    – Why AI anxiety is spiking, and how to respond with deliberate upskilling
    – The #1 meta-skill: building a strong filter (concise, expert-informed outputs over AI slop)
    – How AI literacy translates to any role (marketing, people ops, compliance, product)
    – Practical ways to pivot toward Responsible AI, AI assurance, and AI auditing
    – Why specialization beats chasing every trend (go narrow, go deep, then pivot)
    – The value of community: mentorship, peer feedback, and portfolio/capstone work
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    45 mins
  • AI and Scheduling Optimization with Leon Ingelse
    Jul 14 2025
    From lesson-planning to long-haul trucking, good schedules make the world run, literally. In this episode, BABL AI CEO Dr. Shea Brown sits down with Leon Ingelse, writer-researcher at the Croatian optimization studio Dots & Lines, to unpack the hidden math, ethics, and human stories behind modern scheduling and routing.
    🔑 What we cover:
    – Hard vs. soft constraints: why “can’t” and “prefer not to” need different math
    – Digital twins: building a virtual copy of a business before you touch the real one
    – Fairness & “karma” scheduling: balancing preferences over weeks, months, and years
    – Transparency & compliance: explaining a timetable (and the laws baked into it)
    – Human-in-the-loop vs. full automation: when you still want a person pressing “publish”
    – Optimization ≠ LLMs: where stochastic AI falls short and formal models shine
    – The future of Dots & Lines, and why bespoke solutions often beat off-the-shelf products
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
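    The hard-vs-soft constraint distinction from this episode can be illustrated with a tiny scoring sketch: a hard constraint makes an assignment infeasible outright, while a soft constraint only adds a penalty, and a “karma” term can reward workers whose preferences were overridden recently. The data shapes, field names, and weights below are hypothetical, not how Dots & Lines actually models schedules.

    ```python
    # Minimal sketch of hard vs. soft constraints in schedule scoring (hypothetical model).
    def score_assignment(shift: dict, worker: dict):
        """Return None if a hard constraint is violated, else a penalty (lower is better)."""
        # Hard constraint ("can't"): the worker is simply unavailable that day.
        if shift["day"] in worker["unavailable_days"]:
            return None  # infeasible, no penalty can fix it
        penalty = 0
        # Soft constraint ("prefer not to"): disliked days cost points but stay feasible.
        if shift["day"] in worker["disliked_days"]:
            penalty += 10
        # "Karma" term: workers shortchanged in recent weeks get a scoring boost.
        penalty -= worker["karma"]
        return penalty

    worker = {"unavailable_days": {"sun"}, "disliked_days": {"sat"}, "karma": 3}
    print(score_assignment({"day": "sun"}, worker))  # None: hard constraint violated
    print(score_assignment({"day": "sat"}, worker))  # 7: soft penalty minus karma
    print(score_assignment({"day": "mon"}, worker))  # -3: preferred, karma-boosted
    ```

    A real solver would search over all assignments to minimize total penalty subject to every hard constraint, but the split itself, infeasibility vs. penalty, is the “different math” the episode refers to.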
    41 mins
  • How to Break Into AI Governance?
    Jun 30 2025
    Ever wondered how to start a career in AI Governance, Responsible AI, or AI Risk Management? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a no-nonsense, practical conversation about how to actually break into this fast-growing, high-demand field.
    🌟 What you'll learn in this episode:
    ✅ What AI governance really is (and why it matters in every business using AI)
    ✅ The 3 main career paths into AI governance: dedicated governance roles; expanding your current role to include AI oversight; building something new as an entrepreneur or intrapreneur
    ✅ Do you need to be technical? How much?
    ✅ The real skills hiring managers want
    ✅ How to transition from zero experience to credible candidate
    ✅ Why governance is essential for scaling AI safely and responsibly
    🧭 Key themes:
    – Hands-on learning: you have to use AI to govern AI
    – Systems thinking: understanding how decisions get made at scale
    – Risk awareness: the #1 thing employers want
    – Building your profile: projects, credentials, volunteering, networking
    – Niche strategy: why specializing beats general buzzwords
    – Marathon mindset: this is not a quick certification cash-in
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    48 mins
  • AI Ethicist Reacts to Different Uses of AI
    Jun 16 2025
    In this fun and thought-provoking episode of Lunchtime BABLing, BABL AI CEO and AI ethicist Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a rapid-fire discussion of some of the most surprising, bizarre, and controversial uses of AI circulating online. From jailbreaking legal loopholes with ChatGPT, to AI-generated testimony from the deceased, to digital therapy bots and AI relationships, no use case is off-limits. The trio explores the ethical, legal, and emotional implications of everyday AI encounters, reacting in real time with humor, insight, and a healthy dose of skepticism.
    🎧 Topics include:
    – Can AI help someone get out of jail?
    – Is it ethical to use AI-generated avatars in court?
    – Talking to an AI version of a dead loved one: grief or avoidance?
    – Should AI replace your therapist?
    – Professors using ChatGPT to grade student essays
    – AI as your relationship coach (or third wheel)
    – Confirmation bias and the future of learning in the AI age
    💬 This episode steps away from regulation and compliance to explore how AI is quietly reshaping human behavior, and whether we’re ready for it. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    38 mins
  • What is ISO 42001?
    Jun 2 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to break down ISO/IEC 42001, the first international standard for AI management systems. Whether you're leading an AI team, navigating AI risk, or just starting your Responsible AI journey, this high-level introduction will help you understand:
    – What ISO 42001 is and why it matters
    – How it fits into global AI governance (including the EU AI Act and U.S. regulations)
    – Key components of the standard, from leadership, risk assessments, and operations to monitoring and continual improvement
    – Common challenges organizations face when adopting it
    – Practical first steps for implementation, even for startups and resource-limited teams
    💡 ISO 42001 is quickly becoming the North Star for organizations aiming to demonstrate trustworthy and responsible AI practices, especially in today’s fast-moving regulatory environment. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
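    For a first pass at the standard’s components mentioned in this episode, a simple gap-check over its high-level clause areas can be a practical starting point. The clause areas below follow ISO/IEC 42001’s harmonized management-system structure, but the readiness flags and scoring are a hypothetical self-assessment sketch, not an audit or certification method.

    ```python
    # Illustrative gap-check against ISO/IEC 42001's high-level clause areas.
    # The readiness dict is a hypothetical self-assessment, not an audit result.
    CLAUSE_AREAS = [
        "Context of the organization",
        "Leadership",
        "Planning (incl. AI risk assessment)",
        "Support",
        "Operation",
        "Performance evaluation (monitoring)",
        "Improvement (continual improvement)",
    ]

    def gap_report(readiness: dict) -> list:
        """Return clause areas not yet addressed (missing or marked False)."""
        return [area for area in CLAUSE_AREAS if not readiness.get(area)]

    # A resource-limited startup that has only tackled two areas so far.
    startup_status = {"Leadership": True, "Operation": True}
    for gap in gap_report(startup_status):
        print("Gap:", gap)
    ```

    Even a checklist this crude makes the episode’s point about practical first steps: you can start mapping where you stand long before committing to a full certification effort.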
    27 mins