Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)

By: Brian T. O’Neill from Designing for Analytics

About this listen

Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right...but user adoption of your dashboards and UI isn’t what you hoped it would be?

While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be?

If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?

My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.

Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI and analytics—work that you need to hear about and from whom I hope you can borrow strategies.

I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.

Hashtag: #ExperiencingData.

JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS
https://designingforanalytics.com/ed

ABOUT THE HOST, BRIAN T. O’NEILL:
https://designingforanalytics.com/bio/
© 2019 Designing for Analytics, LLC
Episodes
  • 177 - Designing Effective Commercial AI Data Products for the Cold Chain with the CEO of Paxafe
    Sep 3 2025

    In this episode, I talk with Ilya Preston, co-founder and CEO of PAXAFE, a logistics orchestration and decision intelligence platform for temperature-controlled supply chains (aka “cold chain”). Ilya explains how PAXAFE helps companies shipping sensitive products, like pharmaceuticals, vaccines, food, and produce, by delivering end-to-end visibility and actionable insights powered by analytics and AI that reduce product loss, improve efficiency, and support smarter real-time decisions.

    Ilya shares the challenges of building a configurable system that works for transportation, planning, and quality teams across industries. We also discuss their product development philosophy, team structure, and use of AI for document processing, diagnostics, and workflow automation.

    Highlights / Skip to:

    • Intro to PAXAFE (2:13)
    • How PAXAFE brings tons of cold chain data together in one user experience (2:33)
    • Innovation in cold chain analytics is up, but so is cold chain product loss (4:42)
    • The product challenge of getting sufficient telemetry data at the right level of specificity to derive useful analytical insights (7:14)
    • Why and how PAXAFE pivoted away from providing IoT hardware to collect telemetry (10:23)
    • How PAXAFE supports complex customer workflows, cold chain logistics, and complex supply chains (13:57)
    • Who the end users of PAXAFE are, and how the product team designs for these users (20:00)
    • Lessons learned when Ilya’s team fell in love with its own product and didn’t listen to the market (23:57)
    • Pharma loses around $40 billion a year relying on ‘Bob’s intuition’ in the warehouse. How PAXAFE balances institutional user knowledge with the cold hard facts of analytics (42:43)
    Quotes from Today’s Episode

    "Our initial vision for what PAXAFE would become was 99.9% spot on. The only thing we misjudged was market readiness—we built a product that was a few years ahead of its time." - Ilya

    "As an industry, pharma is losing $40 billion worth of product every year because decisions are still based on warehouse intuition about what works and what doesn’t. In production, the problem is even more extreme, with roughly $800 billion lost annually due to temperature issues and excursions." - Ilya

    "With our own design, our initial hypothesis and vision for what PAXAFE could be really shaped where we are today. Early on, we had a strong perspective on what our customers needed—and along the way, we fell in love with our own product and design." - Ilya

    "We spent months perfecting risk scores… only to hear from customers, ‘I don’t care about a 71 versus a 62—just tell me what to do.’ That single insight changed everything." - Ilya

    "If you’re not talking to customers or building a product that supports those conversations, you’re literally wasting time. In the zero-to-product-market-fit phase, nothing else matters; you need to focus entirely on understanding your customers and iterating your product around their needs." - Ilya

    "Don’t build anything on day one, probably not on day two, three, or four either. Go out and talk to customers. Focus not on what they think they need, but on their real pain points. Understand their existing workflows and the constraints they face while trying to solve those problems." - Ilya

    Links
    • PAXAFE: https://www.paxafe.com/
    • LinkedIn for Ilya Preston: https://www.linkedin.com/in/ilyapreston/
    • LinkedIn for company: https://www.linkedin.com/company/paxafe/
    49 mins
  • 176 - (Part 2) The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications
    Aug 19 2025

    This is part two of the framework; if you missed part one, head to episode 175 and start there so you're all caught up.

    In this episode of Experiencing Data, I continue my deep dive into the MIRRR UX Framework for designing trustworthy agentic AI applications. Building on Part 1’s “Monitor” and “Interrupt,” I unpack the three R’s: Redirect, Rerun, and Rollback—and share practical strategies for data product managers and leaders tasked with creating AI systems people will actually trust and use. I explain human-centered approaches to thinking about automation and how to handle unexpected outcomes in agentic AI applications without losing user confidence. I am hoping this control framework will help you get more value out of your data while simultaneously creating value for the human stakeholders, users, and customers.

    Highlights / Skip to:

    • Introducing the MIRRR UX Framework (1:08)
    • Designing for trust and user adoption, plus the perspectives you should include when designing agentic systems (2:31)
    • Monitor and interrupt controls let humans pause anything from a single AI task to the entire agent (3:17)
    • Explaining “redirection” in the example context of use cases for claims adjusters working on insurance claims—so adjusters (users) can focus on important decisions (4:35)
    • Rerun controls let humans redo an agentic task after unexpected results, preventing errors and building trust in early AI rollouts (11:12)
    • Rerun vs. Redirect: what the difference is in the context of AI, using additional use cases from the insurance claim processing domain (12:07)
    • Empathy and user experience in AI adoption, and how the most useful insights come from directly observing users—not from analytics (18:28)
    • Thinking about agentic AI as glue for existing applications and workflows, or as a worker (27:35)

    Quotes from Today’s Episode

    "The value of AI isn’t just about technical capability; it’s based in large part on whether the end-users will actually trust and adopt it. If we don’t design for trust from the start, even the most advanced AI can fail to deliver value."

    "In agentic AI, knowing when to automate is just as important as knowing what to automate. Smart product and design decisions mean sometimes holding back on full automation until the people, processes, and culture are ready for it."

    "Sometimes the most valuable thing you can do is slow down, create checkpoints, and give people a chance to course-correct before the work goes too far in the wrong direction."

    "Reruns and rollbacks shouldn’t be seen as failures; they’re essential safety mechanisms that protect both the integrity of the work and the trust of the humans in the loop. They give people the confidence to keep using the system, even when mistakes happen."

    "You can’t measure trust in an AI system by counting logins or tracking clicks. True adoption comes from understanding the people using it, listening to them, observing their workflows, and learning what really builds or breaks their confidence."

    "You’ll never learn the real reasons behind a team’s choices by only looking at analytics; you have to actually talk to them and watch them work."

    "Labels matter: what you call a button or an action can shape how people interpret and trust what will happen when they click it."

    Links
    • Part 1: The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications
    30 mins
  • 175 - The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications (Part 1)
    Aug 6 2025

    In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications—you know, the kind that might actually get used and have the opportunity to create the desired business value everyone seeks! One of the biggest challenges with traditional analytics and ML, and now with LLM-driven AI agents, is getting end users and stakeholders to trust and utilize these data products—especially if we’re asking humans in the loop to make changes to their behavior or ways of working.

    In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is based on the idea that AI agents should be “in the human loop,” and a control surface (user interface) may in many situations be essential to ensure any automated workers engender trust with their human overlords.

    By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers, stakeholders, and application users seek from agentic AI.

    Using use cases from insurance claims processing, in this episode, I introduce the first two of five control points in the MIRRR framework—Monitor and Interrupt. These control points represent core actions that define how AI agents often should operate and interact within human systems:

    • Monitor – enabling appropriate transparency into AI agent behavior and performance
    • Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed

    …and in a couple weeks, stay tuned for part 2 where I’ll wrap up this first version of my MIRRR framework.

    Highlights / Skip to:
    • 00:34 Introducing the MIRRR UX Framework for designing trustworthy agentic AI applications
    • 01:27 The importance of trust in AI systems and how it is linked to user adoption
    • 03:06 Cultural shifts, AI hype, and growing AI skepticism
    • 04:13 Human centered design practices for agentic AI
    • 06:48 I discuss how understanding your users’ needs does not change with agentic AI, and that trust in agentic applications has direct ties to user adoption and value creation
    • 11:32 Measuring success of agentic applications with UX outcomes
    • 15:26 Introducing the first two of five MIRRR framework control points:
      • 16:29 M is for Monitor; understanding the agent’s “performance,” and the right level of transparency end users need, from individual tasks to aggregate views
      • 20:29 I is for Interrupt; when and why users may need to stop the agent—and what happens next
    • 28:02 Conclusion and next steps
    29 mins