AI Change Desk

By: Michael Hanna-Butros Meyering

About this listen

AI Change Desk helps leaders, managers, and operators make sense of AI changes and run adoption without hype. Every episode follows one format: context, impact, and action.

MHBM 2026
Episodes
  • AI Brief: what changed this week
    Feb 25 2026

    Two operator-relevant signals from this week, translated into concrete controls teams can execute immediately.

    WHAT YOU WILL GET
    • Distillation attacks moved from model-lab concern to enterprise operations risk.
    • NIST's AI Agent Standards Initiative reinforced near-term interoperability and accountability expectations.
    • A 25-minute weekly governance desk loop you can run every Monday.

    MONDAY MORNING ACTIONS
    1. Treat provider security bulletins as workflow events, not background reading.
    2. Classify AI usage into open-assist, controlled-assist, and restricted classes.
    3. Add interoperability and control portability checks to AI procurement intake.
    4. Require a human accountability map for every agent-like workflow.
    5. Ship a one-page operator update: what changed, what to do, what not to do.
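Action 2's three-class split can be made concrete as a small control table. A minimal Python sketch: the class names come from the checklist above, while the specific control fields, values, and the restrict-by-default rule are illustrative assumptions, not controls from the episode.

```python
# Illustrative usage-class register: class names from the episode,
# control fields and defaults are assumptions.
CONTROLS = {
    "open-assist":       {"human_review": False, "logging": True, "approved_data": "public"},
    "controlled-assist": {"human_review": True,  "logging": True, "approved_data": "internal"},
    "restricted":        {"human_review": True,  "logging": True, "approved_data": "none"},
}

def controls_for(usage_class: str) -> dict:
    """Return the control set for a usage class; unknown classes fall back to restricted."""
    return CONTROLS.get(usage_class, CONTROLS["restricted"])
```

The fallback choice encodes a fail-closed posture: any workflow not yet classified inherits the restricted controls until someone explicitly assigns it a class.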

    TIMESTAMPS
    • 00:00 Cold open: policy that cannot survive Monday is policy theater
    • 01:00 Theme intro
    • 01:16 Framing and disclosure
    • 01:57 Signal 1: distillation attacks and model-control hardening
    • 04:30 Signal 2: standards momentum as procurement and controls signal
    • 06:57 Monday checklist: 25-minute governance desk
    • 08:06 Close
    • 08:18 Final reminder: one owner, one decision, one due date
    • 08:27 Brand outro

    SOURCES
    • https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks
    • https://www.businessinsider.com/anthropic-deepseek-distillation-minimax-moonshot-ai-2026-2
    • https://www.nist.gov/caisi/ai-agent-standards-initiative
    • https://www.ansi.org/standards-news/all-news/2-18-26-nist-launches-ai-agent-standards-initiative
    • https://www.nist.gov/news-events/news/2026/02/nist-seeks-public-input-advance-ai-agent-interoperability-and-efficiency

    LISTEN
    • Website episode page: https://www.michaelhbm.com/AIChangeDesk/episodes/brief-2026-02-25-ai-brief.html
    • Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-change-desk/id1876677295
    • Spotify: https://open.spotify.com/show/5X1sLLTeULqFCdt7aaisGD

    AI-assisted tools were used in research and production support. Final editorial judgment and release approval remained human-led.

    9 mins
  • AI governance implementation for operators: turning policy into weekly execution
    Feb 23 2026
    EP003: AI GOVERNANCE IMPLEMENTATION FOR OPERATORS

    AI governance breaks when it lives as a policy document and not as a weekly operating loop. In this main episode, we use current market signals (model updates, AI security tooling, regional deployment strategy, and standards activity) to show how leaders and operators can run governance as execution instead of theory.

    WHAT YOU WILL GET
    • A practical model-change governance workflow you can run every week.
    • Security workflow controls for AI-assisted code review.
    • Procurement and data-governance actions triggered by regional/partner deployment signals.
    • A reusable weekly AI Governance Desk format with owner, controls, and communication outputs.
    • A late-update block on alignment-research funding and regulated-industry deployment signals.

    TIMESTAMPS
    • 00:00 Cold open — governance is a workflow, not a PDF
    • 00:59 Intro music + disclosure
    • 01:20 Why this episode now (EP001/EP002 bridge)
    • 03:20 Story 1 — Claude Sonnet 4.6 and model-change governance
    • 07:50 Story 2 — Claude Code Security and human-in-the-loop controls
    • 12:20 Story 3 — OpenAI for India + Tata and procurement reality
    • 16:00 Story 4 — NIST AI agent interoperability signal
    • 18:10 Late updates — alignment funding + regulated-industry collaboration
    • 19:00 Weekly AI Governance Desk (25-minute operating loop)
    • 22:05 Postscript — chat-code controls + workflow-class policy mapping
    • 23:25 Monday morning actions
    • 24:25 Outro + listener question

    MONDAY MORNING ACTIONS
    1. Name one owner for weekly AI governance desk operations.
    2. Run a model-change regression check on your top workflows.
    3. Require human approval for AI-generated security patches/findings.
    4. Update procurement clauses (data handling, change notifications, sub-processors).
    5. Publish a one-page internal update: what changed, what to do, what not to do.

    SOURCES
    • https://www.anthropic.com/news/claude-sonnet-4-6
    • https://docs.anthropic.com/en/release-notes/api#feb-17th-2026
    • https://www.anthropic.com/news/claude-code-security
    • https://docs.anthropic.com/en/docs/claude-code/security
    • https://openai.com/index/openai-for-india/
    • https://www.tata.com/newsroom/openai-and-tata-group-announce-strategic-collaboration
    • https://www.nist.gov/news-events/news/2026/02/nist-seeks-public-input-advance-ai-agent-interoperability-and-efficiency
    • https://www.federalregister.gov/documents/2026/02/20/2026-02979/ai-agent-interoperability-and-efficiency-standards-request-for-information
    • https://openai.com/index/advancing-independent-research-ai-alignment/
    • https://alignmentproject.aisi.gov.uk/
    • https://www.anthropic.com/news/anthropic-infosys
    • https://www.infosys.com/newsroom/press-releases/2026/advanced-enterprise-ai-solutions-industries.html

    LISTEN
    • Spotify: https://open.spotify.com/show/5X1sLLTeULqFCdt7aaisGD
    • Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-change-desk/id1876677295

    DISCLOSURE
    AI-assisted tools were used in parts of drafting, synthesis, and production support. Final editorial judgment and release approval remained with the host.
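The model-change regression check from the Monday morning actions above can be sketched as a simple score gate over your top workflows. The workflow names, scores, and 0.95 threshold below are illustrative assumptions, not values from the episode.

```python
# Hypothetical regression gate: flag workflows whose evaluation score
# dropped below a fraction of their pre-upgrade baseline.
def regression_gate(baseline: dict, current: dict, floor: float = 0.95) -> list:
    """Return workflows whose post-upgrade score fell below floor * baseline.

    Workflows missing from the current run score 0.0 and are always flagged.
    """
    return [name for name, base in baseline.items()
            if current.get(name, 0.0) < floor * base]

baseline = {"support-triage": 0.90, "contract-summary": 0.85}
current = {"support-triage": 0.91, "contract-summary": 0.70}
flagged = regression_gate(baseline, current)  # ["contract-summary"]
```

A flagged workflow is the trigger for the rest of the loop: one owner, one decision (hold or ship), one due date.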
    25 mins
  • AI policy basics for operators: what this week changed
    Feb 19 2026

    EP002: AI policy basics for operators.

    This episode translates AI policy concepts into practical operating decisions for leaders, managers, and delivery teams.

    • Episode: 002
    • Title: AI policy basics for operators
    • Runtime: 10m 30s
    • Host: Michael Hanna-Butros Meyering

    AI policy works only when it is written as operational guidance people can apply in daily workflows.

    TIMESTAMPS
    • 00:00 Why AI policy fails in real teams
    • 01:20 Story 1: Claude Sonnet 4.6 and model-change governance
    • 04:40 Story 2: AI infrastructure cost signals and procurement controls
    • 07:40 Action block: policy + change management implementation
    • 09:40 Monday-morning actions + outro

    WHAT CHANGED
    • Anthropic launched Claude Sonnet 4.6 (February 17, 2026), which reinforces the need for model-upgrade controls and evaluation gates in internal policy.
    • Anthropic announced it will cover electricity price increases tied to data-center growth (February 17, 2026), making infrastructure impact a practical procurement and governance issue.

    CORE POLICY COMPONENTS
    • Scope: which AI use cases are allowed, restricted, or prohibited.
    • Data: which data classes may be used with which tools.
    • Controls: review, logging, exception handling, and escalation.
    • Accountability: who owns policy updates and incident response.
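The four components above can be captured as one reviewable record, which makes drift audits (action 4 below) a diff rather than a document hunt. A minimal Python sketch; every field value is an illustrative placeholder, not a policy recommendation.

```python
# Illustrative policy record mirroring the four components:
# scope, data, controls, accountability. All defaults are placeholders.
from dataclasses import dataclass, field

@dataclass
class AIPolicy:
    scope: dict = field(default_factory=lambda: {
        "allowed": ["drafting", "summarization"],
        "restricted": ["code-to-production"],
        "prohibited": ["customer-pii-processing"],
    })
    data: dict = field(default_factory=lambda: {
        "public": ["any approved tool"],
        "internal": ["approved enterprise tools only"],
        "confidential": [],  # no tools approved yet
    })
    controls: list = field(default_factory=lambda: [
        "review", "logging", "exception handling", "escalation"])
    accountability: dict = field(default_factory=lambda: {
        "policy_owner": "unassigned", "incident_response": "unassigned"})

policy = AIPolicy()
```

An "unassigned" accountability field is itself an audit finding: the record forces the ownership question the episode argues most policies dodge.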

    MONDAY MORNING ACTIONS
    • Add a model-change trigger section to your AI policy (when re-evaluation is mandatory).
    • Add three infrastructure-risk questions to AI vendor intake.
    • Run one manager briefing with a clear script for allowed/restricted use.
    • Audit one active AI workflow for drift between policy and real usage.

    SOURCES
    • Anthropic, “Announcing Claude Sonnet 4.6”: https://www.anthropic.com/news/claude-sonnet-4-6
    • TechCrunch coverage, “Anthropic releases Claude Sonnet 4.6”: https://techcrunch.com/2026/02/17/anthropic-releases-claude-sonnet-4-6/
    • Anthropic, “Covering electricity price increases from AI data centers”: https://www.anthropic.com/news/covering-electricity-price-increases
    • Reuters coverage (via Investing.com): https://www.investing.com/news/stock-market-news/anthropic-to-cover-electricity-price-increases-in-areas-where-it-builds-data-centers-3894580
    • NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
    • NIST Generative AI Profile: https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-generative-artificial-intelligence
    • OECD AI Principles: https://oecd.ai/en/ai-principles
    • ISO/IEC 42001 overview: https://www.iso.org/standard/81230.html

    This episode uses AI-assisted production tools (voice rendering, editing support, and publishing automation). Final editorial and risk decisions are human-led.

    10 mins