• The Scary Truth About AI in the ER and Why Clinical Judgment Still Wins | Dr. Natasha Dole | The Signal Room
    Mar 13 2026

    Artificial intelligence is entering emergency departments, acute care settings, and clinical workflows at speed. But when seconds matter, who do clinicians trust — the algorithm or their own judgment?

    In this episode of The Signal Room, Chris Hutchins sits down with Natasha Dole, Emergency Medicine Consultant and Digital Health & AI Lead, to explore how credibility is established in high-pressure clinical environments — and what that means for AI adoption.

    They discuss:

    • How trust is built in the resuscitation room before anyone speaks
    • What clinicians need to see before relying on AI recommendations
    • When AI supports credibility — and when it undermines it
    • The real drivers behind AI resistance in healthcare
    • What “earned trust” should look like for AI at the bedside
    • The responsibilities that remain uniquely human in clinical care

    This conversation moves beyond hype to examine authority, bias, professional responsibility, and the hidden assumptions embedded in healthcare technology.

    If you care about responsible AI, clinician trust, and the future of decision-making in acute care — this episode is for you.

    Subscribe for more conversations at the intersection of leadership, ethics, and healthcare innovation.

    44 mins
  • The Enterprise AI Journey: From Data Foundations to Generative and Agentic AI | Gary Cao
    Mar 4 2026

    Enterprise AI is not a tool decision. It is an operating model decision.

    In this episode of The Signal Room, Chris Hutchins sits down with Gary Cao, Chief Data & Analytics / AI Officer, to explore the enterprise AI journey from an executive perspective.

    This conversation moves beyond hype and definitions. Instead, it focuses on what actually changes inside an organization when AI becomes strategic:

    • Moving from AI experimentation to enterprise maturity
    • Integrating generative AI into structured data environments
    • Deterministic systems vs. probabilistic reasoning
    • The role of semantic layers and data management bottlenecks
    • Automation vs. agentic AI systems
    • Measuring enterprise ROI in an era of high abandonment rates

    Gary shares practical insight into AI maturity models, governance design, risk tolerance tiers, and the evolving role of the CDAO in coordinating strategy, technology, and accountability.

    If you are a board member, C-suite executive, data leader, or digital transformation officer navigating AI at scale, this episode provides a grounded view of what it takes to move from ambition to enterprise execution.

    Connect with Gary Cao on LinkedIn:
    https://www.linkedin.com/in/garycao/

    Subscribe to The Signal Room for conversations at the intersection of leadership, governance, and AI innovation.

    42 mins
  • From AI Strategy to Execution: Trust, Leadership, and the Operational Reality of Healthcare AI | Brian Sutherland
    Feb 25 2026

    AI ambition isn’t the problem in healthcare. Execution is.

    In this episode of The Signal Room, Chris Hutchins sits down with Brian Sutherland, Lead AI Product Manager and advisor specializing in customer-facing AI for high-consequence healthcare environments.

    Brian built Humana’s first member-facing Intelligent Virtual Assistant — generating $7M+ in annual savings while improving patient experience and task completion. In this conversation, we move beyond AI hype and examine what actually breaks between executive strategy and operational reality.

    We explore:

    • Why AI pilots succeed but enterprise adoption stalls
    • Trust as infrastructure — not philosophy
    • The leadership shift required as AI embeds into clinical workflows
    • Where hype is outrunning evidence in healthcare AI
    • What responsible scale actually looks like

    If you are a healthcare executive, board member, digital health leader, or AI product owner, this episode is a grounded discussion on what it takes to move from ambition to accountable execution.

    Connect with Brian Sutherland on LinkedIn:
    https://www.linkedin.com/in/briandsutherland/

    Subscribe for practical conversations at the intersection of leadership, ethics, and healthcare innovation.

    41 mins
  • Why AI Verification, Not Speed or Model Accuracy, Is the Real Bottleneck in Pharmaceutical Drug Discovery
    Feb 18 2026

    AI is transforming drug discovery—but faster models alone do not get drugs approved.

    In this episode of The Signal Room, host Chris Hutchins speaks with David Finkelshteyn, CEO of Pivotal AI, about why verification—not speed or model accuracy—is the real bottleneck in pharmaceutical AI.

    David explains why generating AI-designed molecules without rigorous validation creates more risk than value, especially in regulated environments like pharma and healthcare. The conversation breaks down where AI outputs most often fail between discovery and regulatory acceptance, why black-box models struggle under scrutiny, and what it actually means to verify an AI insight in drug development.

    They also explore practical challenges around data integrity, auditability, missing context, hallucinations, and the growing use of consumer AI tools in health decisions. Rather than chasing hype, this episode focuses on how AI can responsibly accelerate drug development by failing faster, tightening verification loops, and building systems that can be defended to regulators, auditors, and clinicians.

    This episode is essential listening for leaders working in pharmaceutical R&D, healthcare AI, data science, AI governance, and regulated technology environments.

    Guest: David Finkelshteyn, CEO, Pivotal AI
    LinkedIn: https://www.linkedin.com/in/david-finkelshteyn-03191a130/

    38 mins
  • No Alerts, Still Breached: Understanding Cybersecurity Risks and Ethical Leadership in Healthcare AI
    Feb 11 2026

    This episode explores ethical leadership and AI governance challenges in healthcare cybersecurity, emphasizing the risks of undetected breaches.

    In this episode of The Signal Room, Chris Hutchins speaks with Guman Chauhan, a cybersecurity and risk leader, about one of the most dangerous conditions in modern organizations: being breached and not knowing it. While dashboards stay green and alerts stay quiet, attackers increasingly operate using valid credentials, normal behavior patterns, and long dwell times—remaining invisible for weeks or months.

    Guman explains why “no alerts” is often mistaken for “no breach,” and why silence is one of the most misleading signals in cybersecurity. The conversation unpacks how attackers deliberately avoid detection, why security tools alone do not equal security outcomes, and where organizations create blind spots through untested assumptions, alert fatigue, and fragmented processes.

    They explore why undetected breaches are more damaging than known ones, how time compounds risk once attackers are inside, and what separates organizations that mature after incidents from those that repeat the same failures. Guman emphasizes that proven security is not built on policies, certifications, or dashboards—but on continuous testing, validated detection, and teams that know how to act under pressure.

    This episode is a practical guide for executives, security leaders, healthcare organizations, and regulated enterprises that need to move from assumed security to proven breach readiness.

    Guest: Guman Chauhan
    LinkedIn: https://www.linkedin.com/in/guman-chauhan-m-s-cissp-cism-600824103/

    Topics Covered

    • Why undetected breaches are more dangerous than known breaches
    • How attackers use valid credentials to avoid detection
    • Why “no alerts” does not mean “no breach”
    • Alert fatigue and the signal-to-noise problem
    • Security tools vs security outcomes
    • Visibility gaps, unknown assets, and logging failures
    • External penetration testing and real-world validation
    • Cultural and leadership factors in breach response
    • Assumed security vs proven security

    Key Takeaways

    • Silence is not security; it often means you are not seeing the right signals.
    • Most breaches go undetected because attackers behave like legitimate users.
    • Security tools do not fail—untested assumptions do.
    • Alert fatigue hides real risk by normalizing noise.
    • Proven security requires testing detection and response end to end.
    • Mature organizations treat breaches as learning moments, not events to hide.
    • Confidence without validation creates the most dangerous blind spots.

    Chapters / Timestamps

    00:00 – Why undetected breaches are the real risk
    02:30 – Being breached vs being breached and not knowing
    06:00 – How attackers stay invisible using valid credentials
    08:30 – Why dashboards and alerts create false confidence
    10:00 – Common reasons breaches go undetected for months
    13:30 – Security tools vs security outcomes
    16:00 – Technology, process, and people failures
    19:30 – Alert fatigue and finding real signals
    22:30 – Why external penetration testing still matters
    26:30 – What mature organizations do after a breach
    31:00 – One action to improve breach readiness this year
    32:45 – The uncomfortable question every leader should ask
    34:30 – Assumed security vs proven security
    36:30 – How to connect with Guman & closing

    34 mins
  • Scaling Care with AI: Balancing Human Judgment and Clinical Trust in Healthcare
    Feb 4 2026

    What does it truly mean to scale care with AI inside a real hospital environment? In this episode of The Signal Room, host Chris Hutchins talks with Mark Gendreau, emergency physician and Chief Medical Officer, about the intersection of healthcare AI, ethical leadership, and AI strategy. Together, they discuss how AI is transforming clinical workflows by amplifying human judgment rather than replacing it.

    They explore real-world applications in healthcare AI such as radiology co-pilots, ambient clinical documentation, and workflow intelligence designed to relieve clinician burnout. Dr. Gendreau highlights the need for responsible AI and human oversight in high-reliability healthcare settings.

    The conversation also covers critical topics like AI governance, clinical trust, alert fatigue, and leadership accountability. Listeners will gain insights into why successful AI adoption in healthcare depends on culture and ethical leadership, not just technology.

    This episode is essential for healthcare leaders, clinicians, informaticists, and policymakers seeking practical guidance on AI readiness, ethical AI practices, and driving AI strategies that improve patient care while maintaining human judgment at the core.

    Key Takeaways

    • AI delivers the most value when it amplifies clinicians, not when it attempts to replace them
    • Human judgment is essential in high-risk clinical decisions, even with advanced AI support
    • Ambient documentation can dramatically reduce after-hours EHR work (“pajama time”)
    • Alert fatigue is a governance problem, not just a technical one
    • Trust in AI is built through reliability, transparency, and clear ethical intent
    • Successful AI adoption depends more on leadership and culture than IT execution
    • Interoperability and governance are the biggest barriers to scaling AI across health systems
    • Emotional intelligence, empathy, and shared decision-making remain human responsibilities

    Guest Info

    Mark Gendreau, MD, MS, CPE
    Emergency Medicine Physician | Chief Medical Officer

    Dr. Gendreau is an experienced emergency physician and healthcare executive with deep expertise in clinical operations, patient safety, and responsible AI adoption. He focuses on using technology to improve access, quality, and clinician experience while preserving the human core of medicine.

    🔗 LinkedIn: https://www.linkedin.com/in/markgendreaumd/

    Chapters (YouTube & Spotify)

    00:00 – Introduction and framing the AI scaling challenge
    01:18 – Workforce scarcity and why AI must amplify clinicians
    02:10 – AI in radiology: co-pilots, fatigue reduction, and safety
    05:26 – Ambient documentation and eliminating “pajama time”
    07:17 – Using AI to improve clinician communication and empathy
    09:33 – Where AI falls short and why humans must stay in the loop
    12:44 – Guardrails, trust, and human-AI partnership
    13:44 – Trust in AI vs trust in human relationships
    16:07 – Adoption curves and clinician buy-in
    18:05 – Why AI fails when treated as an IT project
    20:41 – Leadership’s role in shaping AI culture
    22:07 – Interoperability, governance, and scaling challenges
    26:04 – Signals that an organization is truly AI-ready
    29:26 – Emotional intelligence and where AI should never lead
    33:59 – Alert fatigue and governance accountability
    37:27 – Measuring success: outcomes, equity, and pajama time
    38:36 – How to connect with Dr. Gendreau
    39:31 – Episode close

    34 mins
  • From AI Hype to Real Value: Crafting AI Strategy That Delivers Real Business Impact
    Jan 28 2026

    In this insightful episode of The Signal Room, host Chris Hutchins and guest Parth Gargish dive deep into building effective AI strategies that go beyond the hype to deliver real business value. With extensive experience in SaaS and AI-driven product development, Parth shares practical insights on developing AI-first approaches that prioritize ethical leadership, responsible AI adoption, and workforce readiness.

    Listeners will learn why successful AI in healthcare and other industries depends on strong leadership accountability, transparent communication, and trust built throughout the AI transformation process. The discussion highlights how targeted AI use cases can maximize ROI by focusing on solving business problems rather than chasing flashy technology demos.

    Key themes include AI governance, ethical AI practices, upskilling teams, and balancing human decision-making with AI capabilities. This episode is essential for healthcare leaders and AI experts looking to implement AI strategies that are both impactful and ethically sound.

    Join us as we explore how ethical leadership and responsible AI practices drive real value in AI adoption and help organizations navigate the complex landscape of AI in business strategy and healthcare.


    Key Takeaways

    • AI success starts with people and process, not tools
    • Small, targeted AI use cases often deliver the highest ROI
    • AI should enable teams, not replace human decision-making
    • Leadership transparency is critical during AI transitions
    • Real value comes from solving business problems, not showcasing technology

    Discussion Themes

    • AI-first strategy versus AI experimentation
    • Separating hype from real enterprise use cases
    • Workforce trust, upskilling, and change management
    • SaaS, customer support automation, and operational efficiency
    • Leadership accountability in AI adoption

    Guest Contact & Links

    LinkedIn: https://www.linkedin.com/in/parth-gargish-0803b897/

    Community: SaaS NXT (North American SaaS founder community)

    26 mins
  • Why Healthcare AI Fails Without Complete Medical Records: Interoperability, Transparency & Patient Access
    Jan 21 2026

    Healthcare AI cannot deliver precision medicine without complete, interoperable medical records, which are also the foundation of responsible AI implementation. In this episode, recorded live at the Data First Conference in Las Vegas, Aleida Lanza, founder and CEO of Casedok, draws on her 35 years as a medical malpractice paralegal to explain why fragmented records and inaccessible data continue to undermine care quality, safety, and trust in healthcare AI.

    We dive deep into why interoperability must extend beyond the core clinical record to include the full spectrum of healthcare data—images, itemized bills, claims history, and even records trapped in paper or PDFs. Aleida argues that patient ownership and transparency of their health information, a critical element of healthcare ethics, are key to overcoming these challenges and enabling ethical leadership in healthcare AI.

    This episode also highlights the significant risks posed by missing data bias in healthcare AI, explaining how incomplete records prevent AI systems from accurately detecting patient needs. Aleida outlines how complete medical record transparency and safe AI collaboration can transform healthcare from static averages to truly personalized, informed care, aligning with principles of ethical AI and responsible AI deployment.

    If you're involved in healthcare leadership, AI strategy, data governance, or healthcare ethics, this episode offers valuable perspectives on AI readiness, healthcare AI regulation, and the urgent need to improve interoperability for better patient outcomes.

    Key topics covered

    • Why interoperability must include the entire medical record
    • Patient ownership, transparency, and access to health data
    • The hidden cost of fragmented records and repeated history-taking
    • Why static averages fail patients and clinicians
    • Precision medicine vs static medicine
    • Safe AI deployment without hallucination or data leakage
    • Missing data as the most dangerous bias in healthcare AI
    • Emergency access to complete history as a patient safety issue
    • Medicare, payer integration, and large-scale access challenges

    Chapters

    00:00 Live from Data First Conference
    01:20 Why interoperability is more than clinical data
    03:40 Fragmentation, static medicine, and broken incentives
    05:55 Why AI needs complete patient history
    08:10 Missing data as invisible bias
    10:55 Emergency care and inaccessible records
    12:40 Patient ownership and transparency
    14:30 Precision medicine and AI safety
    16:10 Why patients should own what they paid for
    18:30 How to connect with Aleida Lanza

    Stay tuned. Stay curious. Stay human.

    #HealthcareAI #Interoperability #PatientData

    16 mins