• Intent Is Not Enough
    May 7 2026

    Agreeing on an idea doesn't mean you both understood the same thing. Dave Sharrock and Peter Maddison dig into why shared context breaks down in practice, and how AI makes that problem harder to ignore.

    This week's takeaways:

    • Intent is always imperfect. Define how you'll validate it, not just what it is.
    • Ambiguity in context isn't a bug. It's necessary. Validation is how you confirm you're aligned.
    • Drive down the cost of validation, not just the cost of building.

    If this landed, share it with someone navigating the same tension. And reach out at feedback@definitelymaybeagile.com - we read everything.

    14 mins
  • Why AI and PowerPoints Are Quietly Killing Your Product Intent
    Apr 30 2026

    It doesn't happen all at once. A great idea comes out of a strategy session. Someone turns it into a PowerPoint. Another person summarizes that PowerPoint with AI. By the time it reaches the team building it, the sharp edges are gone and nobody quite remembers what made the idea worth pursuing in the first place.

    Peter and Dave dig into a problem that's older than AI but getting harder to ignore. How does intent get lost as it travels through layers of people, tools, and artifacts? What does a shared context document do that a business case can't? And what can the architectural world teach the product world about keeping the thread from unraveling?

    Key takeaways:

    • Moving artifacts backwards and forwards through an organization strips out nuance at every step. A single central context document is a more honest way to carry intent from strategy to delivery.
    • AI is being actively encouraged in most organizations right now, and in using it, teams may be quietly eroding the ideas behind what they're building without realizing it.
    • If your outcomes don't match your original intent, the handoff chain is usually where things went wrong. That's worth looking at before blaming the team.

    Try this: Trace one idea from your last strategy session all the way to what actually got built. See if you can find where it changed. Then come tell us what you found at feedback@definitelymaybeagile.com.

    17 mins
  • Do You Actually Have a Capacity Problem?
    Apr 23 2026

    Most organizations think they have a capacity problem. They usually don't.

    What they have is a work-in-progress problem. And those two things call for very different solutions.

    In this episode, Peter Maddison and Dave Sharrock dig into one of the most persistent headaches in organizational management: capacity tracking. Why does the instinct to measure utilization backfire? Why does loading people up to 100% actually slow things down? And what should leaders be asking instead?

    The conversation covers the real cost of context switching, why that "nearly done" project is probably further away than it looks, and how AI is making all of this more urgent, not easier.

    Three things to take away from this episode:

    1. 100% utilization is not a goal. It's a warning sign.
2. The right question isn't "how much capacity do we have?" It's "how much work in progress can we actually sustain?"
    3. AI accelerates your breaking points.

    If this conversation resonated, there's more where it came from. Peter Maddison and Dave Sharrock explore these kinds of organizational challenges every week on Definitely Maybe Agile - the podcast that gets into the real complexity of modern ways of working, without the buzzwords.

    Listen wherever you get your podcasts, or visit definitelymaybeagile.com to catch up on past episodes and reach out with your own questions.

    20 mins
  • Context Engineering and the Roles AI Is Rewriting
    Apr 16 2026

    AI is changing how products get built. That part isn't news. But it's also changing who needs to do what - and that's a conversation most organizations haven't had yet.

    In this episode, Peter and Dave dig into one of the more interesting tensions emerging in 2026: as coding agents take on more of the actual development work, the thing that drives quality output isn't just better tooling. It's better context. Clear, structured, well-owned context that tells agents what you're actually trying to build, who it's for, and what can't be compromised.

    Which raises a real question. Who owns that? Where does it live? And what happens when it's missing - which, let's be honest, it usually is?

    They get into the rise of "context engineering" as a role, why the name creates its own problems, and what this shift means for product owners, product managers, and the long-standing gap between business and technology teams.

    Key takeaways from this episode:

    • Most organizations have never truly written down their product intent in a structured, usable way. AI is making that gap impossible to ignore.
    • Good context drives better outcomes from agents - and the work of capturing, structuring, and maintaining that context needs a clear owner.
    • Start asking: what context exists to guide your products? Where is it stored? Who creates it? Who picks it up and moves it through the system?
    • The business and technology divide matters more now, not less. You can't afford to throw things over the wall anymore. The two groups need to work closely together, not in parallel.
    • What's new here isn't the idea. It's the urgency. These are transformations organizations have been attempting for years. AI is just forcing the issue.

    Want to continue the conversation?

    If this episode brought up questions about how your teams are navigating the shift to agentic development - or where context ownership actually sits in your organization - reach out at feedback@definitelymaybeagile.com. We'd love to hear what you're seeing.

    21 mins
  • AI Won't Fix a Structural Problem with AJ Bubb
    Apr 9 2026

    A lot of organizations are betting that AI will make their teams faster. Some of them are right. Most are solving the wrong problem.

    AJ Bubb, founder of MxP Studio and host of Facing Disruption, joins Peter and Dave to talk about what actually happens when AI lands in a development team without fixing the system around it. If engineers can't get approvals, can't get access, and spend half their day in meetings, AI just means they produce more output the organization still can't handle. That's not a tooling problem. It's a structural one.

    They also get into velocity without direction, what ownership really looks like when a ticket gets blocked, and why synthetic user testing might be the most polite way to avoid talking to actual customers.

    This Week's Takeaways

    • Own the problem from the customer all the way down. When something is blocked, it's still yours until it moves.
• When an outcome surprises you in either direction, ask whether your model was wrong. Most teams take the win and move on. The teams that improve stop and ask why.
    • Before reaching for a technical solution, ask why five times. The problem someone walks in with is usually the invitation to a conversation, not the actual problem.

    If this episode got you thinking, we'd love to hear from you. Drop us a note at feedback@definitelymaybeagile.com or leave a review on your podcast app. And if you know someone navigating AI adoption right now, send this one their way.

    41 mins
  • Project vs. Product: Finding the Operating Model That Actually Fits
    Apr 2 2026

    Most organizations are running some version of a project operating model or a product operating model - or, more honestly, an uncomfortable mix of both. In this episode, Peter Maddison and Dave Sharrock get into what actually separates these two approaches, where the tensions show up, and why copying what works somewhere else rarely lands the way you expect.

    They dig into how the nature of your work - ordered versus unordered, stable versus volatile - should shape how you plan, who holds decision rights, and how closely your experts need to stay involved. They also talk honestly about the hybrid trap: why trying to be all things to all teams usually ends up serving nobody, and what a smarter version of "borrowing from both" can actually look like.

    Real examples from large organizations, including a couple of banks, show just how messy it gets when the model is mandated from the top without enough room for context.

    Key takeaways from this episode:

    • There is no universal operating model. The right fit depends on your context right now, not what worked somewhere else.
    • If your plan is constantly changing, lean toward the product side. If it's stable and predictable, the project side probably serves you better.
    • Be intentional about your choices. Ask why you're organizing work the way you are, and how you'll know if it's working.
    • Getting an outside perspective matters. It's easy to stay stuck in familiar patterns without someone who can see the system clearly and name what isn't working.
    • Get your operating model working before you add AI into the mix. Throwing new tools at a system that isn't working yet just breaks things faster.

    Which end of the spectrum does your organization sit on right now - and is it actually working for you? Leave a comment below. We read everything.

    20 mins
  • Who Decides? Sorting Out Product Managers, Project Managers, and Product Owners
    Mar 26 2026

    Product manager. Product owner. Project manager. Three roles that often exist in the same organization, sometimes in the same meeting, and frequently stepping on each other's toes. In this episode, Dave and Peter break down what actually separates these roles, why the confusion happens, and what it costs when the lines blur in the wrong ways.

    They dig into the difference between a project-centric operating model and a product operating model, and why that distinction matters more than most organizations realize. They also get into a concept Peter uses with clients: product owners reduce decision latency, project managers reduce reporting latency. It sounds simple, but the implications reach into how teams are funded, how authority is distributed, and why some transformations stall halfway.

    The conversation covers real patterns from the field, including what happens when a technical project manager spends most of his time coordinating 14 dependency groups just so a product owner can get a decision made, and what it looks like when a project-centric funding model quietly undermines a product operating model that was never quite finished.

    They also touch on where AI fits into all of this, and where it currently falls short as a bridge between these two worlds.

    Three key takeaways from this episode:

    1. It's not either-or. Both project management and product management are necessary. The goal is to use each skill set in the right place, not to eliminate one in favor of the other.
    2. The relationship between product managers and project managers works best as a true peer-to-peer dynamic. Hierarchy between the two tends to break things down quickly.
    3. Be clear about decision-making authority. If your product owners don't actually have the autonomy to make decisions, the role isn't working. And if your project managers exist primarily to satisfy a funding model that doesn't match your operating model, that's a signal to look at finishing what you started.

    If this is a conversation your team needs to have, share this episode with them. And if you're finding value in Definitely Maybe Agile, follow the show on your favorite podcast platform so you never miss an episode. New conversations drop every week.

    22 mins
  • AI Agent Governance in Production with Logan Kelly
    Mar 19 2026

    Most organizations are somewhere between experimenting with AI agents and quietly hoping nothing breaks in production. Logan Kelly, CEO of Waxle AI, has spent a lot of time in that gap, and he thinks governance is the piece most teams are walking past too quickly.

    In this episode, Logan joins Peter and Dave to talk about what agentic governance actually looks like in practice, why a single consistent layer beats a pile of point solutions, and how to keep developers moving fast without letting things go sideways when it counts.

    This week's takeaways:

    • Let your teams experiment. That's how you learn what agents can actually do. Just don't skip governance on the way to production.
    • Governance doesn't have to be a gate. The best version layers in without friction, and gives everyone in the organization visibility, not just the dev team.
    • If a developer has to do extra work to implement a governance feature, that's a design problem. Good governance should work for the developer, not the other way around.
    28 mins