Artificial Developer Intelligence

By: Shimin Zhang & Dan Lasky

About this listen

The Artificial Developer Intelligence (ADI) podcast is a weekly talk show where hosts Dan Lasky and Shimin Zhang (two AI Filthy Casuals) discuss the latest news, tools, and techniques in AI-enabled software development. The show is for the 99% of software engineers who need to ship features, not fine-tune large language models. We cut through the hype to find the tools and techniques that actually work for us, and discuss the latest "LLM wars" and "vibe coding" trends with a healthy dose of skepticism and humor. We also discuss any AI papers that catch our eye, no math background required. ADI will either document our journey to survive and thrive in the age of AI, or our descent into AI madness.
Episodes
  • Episode 6: GPT 5.2, Claude Skills, and Hacker Hall of Fame
    Dec 19 2025

    In this episode of "Artificial Developer Intelligence," hosts Shimin Zhang and Dan explore the latest advancements in AI, including the release of GPT 5.2 and its implications for the industry. They discuss the integration of Claude Code into Slack, Mistral AI's new coding model, and the new MindEval framework for assessing AI's clinical competence. The episode also features a deep dive into AI-generated user interfaces and a lively discussion on the evolving role of hackers in the tech industry.


    Takeaways
    GPT 5.2 offers incremental improvements and new modes for AI applications.

    Claude Code's integration into Slack aims to streamline coding workflows.

    Mistral AI's new model targets the coding space with open-weight strategies.

    OpenAI's enterprise products show significant adoption, especially in non-coding sectors.


    Resources Mentioned
    Introducing GPT-5.2
    Claude Code is coming to Slack, and that’s a bigger deal than it sounds
    Mistral AI surfs vibe-coding tailwinds with new coding models
    Introducing MindEval: a new framework to measure LLM clinical competence
    AI should only run as fast as we can catch up
    Useful patterns for building HTML tools
    Ask HN: How can I get better at using AI for programming?
    Claude Agent Skills: A First Principles Deep Dive
    Generative UI: A rich, custom, visual interactive user experience for any prompt
    CoreWeave CEO defends AI circular deals as ‘working together’
    OpenAI boasts enterprise win days after internal ‘code red’ on Google threat

    Chapters

    • (00:00) - Introduction to AI in Software Engineering
    • (02:40) - Latest Developments in AI Models
    • (09:12) - Innovations in AI Coding Assistants
    • (12:11) - Benchmarking AI Clinical Competence
    • (12:59) - Techniques for Effective AI Utilization
    • (17:48) - Exploring AI Tools for Web Development
    • (22:01) - Personal Experiences with AI Models
    • (26:30) - Deep Dive into Claude's Agent Skills
    • (27:40) - Exploring Skill Invocation in AI Tools
    • (31:38) - Generative UI: The Future of Interactive Experiences
    • (36:36) - Ranting About Context Management in AI
    • (44:21) - The Hacker Ethos in Software Development
    • (50:37) - Two Minutes to Midnight: AI Bubble Watch
    • (51:40) - ADI Outro

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai


    52 mins
  • Episode 5: How Anthropic Engineers use AI, Spec Driven Development, and LLM Psychological Profiles
    Dec 12 2025

    In this episode, Shimin and Dan explore the evolving landscape of AI in software engineering, discussing the implications of the Claude Opus 4.5 soul document, the ethical considerations of AI models, and the impact of AI on developer productivity. They delve into spec-driven development, the latest advancements in AI models like DeepSeek v3.2, and the intersection of AI and mental health. The conversation also touches on the potential AI bubble and the challenges faced by developers in integrating AI tools effectively.


    Takeaways
    The Claude Opus 4.5 soul document reveals insights into AI model training.
    Spec-driven development is a promising approach for AI-assisted coding.
    DeepSeek v3.2 showcases advancements in reasoning models.
    AI models can exhibit traits similar to human emotions and traumas.
    Skills in AI may not always resolve context issues effectively.

    Resources Mentioned
    How AI is transforming work at Anthropic
    Claude 4.5 Opus Soul Document
    12 Factor Agents
    Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl
    From DeepSeek V3 to V3.2: Architecture, Sparse Attention, and RL Updates
    When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models
    Are we really repeating the telecoms crash with AI datacenters?
    Anthropic CEO weighs in on AI bubble talk and risk-taking among competitors
    Time until the AI bubble bursts
    Microsoft’s Attempts to Sell AI Agents Are Turning Into a Disaster

    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai
    57 mins
  • Episode 4: OpenAI Code Red, TPU vs GPU, and More Autonomous Coding Agents
    Dec 5 2025

    In this episode of Artificial Developer Intelligence, hosts Shimin and Dan discuss the evolving landscape of AI in software engineering, touching on topics such as OpenAI's recent challenges, the significance of Google TPUs, and effective techniques for working with large language models. They also delve into a deep dive on general agentic memory, share insights on code quality, and assess the current state of the AI bubble.


    Takeaways

    • Google's TPUs are designed specifically for AI inference, offering advantages over traditional GPUs.
    • Effective use of large language models requires avoiding common anti-patterns.
    • AI adoption rates are showing signs of flattening out, particularly among larger firms.
    • General agentic memory can enhance the performance of AI models by improving context management.
    • Code quality remains crucial, even as AI tools make coding easier and faster.
    • Smaller, more frequent code reviews can enhance team communication and project understanding.
    • AI models are not infallible; they require careful oversight and validation of generated code.
    • The future of AI may hinge on research rather than mere scaling of existing models.


    Resources Mentioned
    OpenAI Code Red
    The chip made for the AI inference era – the Google TPU
    Anti-patterns while working with LLMs
    Writing a good claude md
    Effective harnesses for long-running agents
    General Agentic Memory Via Deep Research
    AI Adoption Rates Starting to Flatten Out
    A trillion dollars is a terrible thing to waste

    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai
    1 hr and 4 mins