Two Minds, One Model cover art

Two Minds, One Model

By: John Jezl and Jon Rocha

About this listen

Two Minds, One Model is a podcast dedicated to exploring topics in Machine Learning and Artificial Intelligence. Hosted by John Jezl and Jon Rocha, and recorded at Sonoma State University.
Episodes
  • From Next Word to Long Horizon Planning
    Mar 11 2026

    This episode traces how prompt engineering evolved from informal tricks (tipping, role-playing, "take a deep breath") into three structured reasoning frameworks — Chain of Thought, Self-Consistency, and Tree of Thoughts — that dramatically improved LLM performance without changing the models themselves, culminating in the insight that intelligence in these systems is a latent resource unlocked by better scaffolding, not better weights.
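    The self-consistency idea discussed here is simple enough to sketch: sample several independent chain-of-thought runs and keep the majority answer. The snippet below is an illustrative toy, not the paper's implementation; `sample_answer` is a hypothetical stand-in for one stochastic LLM call, and the fake answer sequence is invented for the demo.

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_answer, n_samples=5):
    """Majority-vote over several independently sampled final answers.

    `sample_answer` stands in for one stochastic chain-of-thought run
    of an LLM; only the final answer from each run is kept.
    """
    answers = [sample_answer() for _ in range(n_samples)]
    # The most common final answer wins, even if some chains derailed.
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in: three chains reach 7, two make an arithmetic slip.
fake_chains = cycle([7, 7, 9, 7, 8])
result = self_consistency(lambda: next(fake_chains), n_samples=5)
print(result)  # 7
```

    Marginalizing over reasoning paths this way is why self-consistency helps: individual chains can derail, but their errors rarely agree with one another.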

    Credits

    Cover Art by Brianna Williams

    TMOM Intro Music by Danny Meza

    A special thank you to these talented artists for their contributions to the show.

    Links and References

    • Chain of Thought Prompting: Wei, J., Wang, X., Schuurmans, D., et al. (2022). "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." NeurIPS 2022. arXiv: 2201.11903

    • Self-Consistency: Wang, X., Wei, J., Schuurmans, D., et al. (2022). "Self-Consistency Improves Chain of Thought Reasoning in Language Models." ICLR 2023. arXiv: 2203.11171

    • Tree of Thoughts: Yao, S., Yu, D., Zhao, J., et al. (2023). "Tree of Thoughts: Deliberate Problem Solving with Large Language Models." NeurIPS 2023. arXiv: 2305.10601

    • "Take a deep breath and think carefully" improves performance: Yang, C., Wang, X., Lu, Y., et al. (2023). "Large Language Models as Optimizers." arXiv: 2309.03409

    • Christmas / holiday performance degradation caveat: This claim was popularized on social media and discussed on platforms like X/Twitter and Hacker News in late 2023. A blog post by Rob Lynch (December 2023) ran some informal tests, but no peer-reviewed study has definitively confirmed this effect.

    • Cleverbot: Cleverbot (1997–2023). Originally created by Rollo Carpenter. Website: cleverbot.com (now defunct).

    • OpenClaw acquisition by OpenAI: TechCrunch (Feb 15, 2026): "OpenClaw creator Peter Steinberger joins OpenAI."

    • NIST AI Agent Standards Initiative: NIST (Feb 17, 2026): "Announcing the AI Agent Standards Initiative for Interoperable and Secure Innovation." https://www.nist.gov/caisi/ai-agent-standards-initiative

    • OpenAI o1 as the first "thinking model": "Learning to Reason with LLMs" — announcement of o1 model family.

    • Kimi K 2.5 as an agentic coding model: Moonshot AI (2025/2026). Kimi K 2.5 — a model optimized for agentic coding tasks. Release details from Moonshot AI's official announcements.

    • Claude sub-agents / Cowork launch: Anthropic (Feb 2026): Claude Cowork launch. Also: Claude Code sub-agent capabilities announced alongside Opus 4.6.

    Abandoned Episode Titles

    "My Grandmother Used to Read Me Windows Keys as Bedtime Stories"

    "Take a Deep Breath, You're a Spreadsheet"

    "Inception, but It's Math Homework"


    48 mins
  • Bees, Trees, and Degrees: SSU Capstone Interviews
    Jan 6 2026

    This season finale episode features interviews with two SSU computer science capstone teams applying AI/ML to real-world problems: Sean Belingheri's edge computing project using YOLO on a Raspberry Pi to identify queen bees for hobbyist beekeepers, and "The Woods Boys" team using satellite data from Google Earth Engine with multiple ML classifiers to automate land cover classification in Sonoma County.
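    Both capstone teams ultimately judge their classifiers with a confusion matrix, which tabulates true versus predicted classes. A minimal sketch, assuming three hypothetical Sonoma County land-cover classes; the labels and predictions below are made up for illustration.

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows = true land-cover class, columns = predicted class."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

# Hypothetical classes and predictions for a land-cover run.
labels = ["forest", "grassland", "urban"]
y_true = ["forest", "forest", "grassland", "urban", "grassland", "forest"]
y_pred = ["forest", "grassland", "grassland", "urban", "grassland", "forest"]

for label, row in zip(labels, confusion_matrix(y_true, y_pred, labels)):
    print(label, row)
```

    Reading along the diagonal gives correct classifications; off-diagonal cells show which classes the model confuses, such as the one forest pixel misread as grassland here.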


    Credits

    Cover Art by Brianna Williams

    TMOM Intro Music by Danny Meza


    A special thank you to these talented artists for their contributions to the show.


    Links and References


    YOLO (You Only Look Once) Object Detection: https://docs.ultralytics.com/ (Official Ultralytics YOLO Documentation)

    HOG-PCA-SVM Pipeline: https://ieeexplore.ieee.org/document/8971585/

    Raspberry Pi 5: https://www.raspberrypi.com/products/raspberry-pi-5/

    Honeybee Democracy (Book): https://press.princeton.edu/books/hardcover/9780691147215/honeybee-democracy

    NVIDIA Jetson Nano: https://developer.nvidia.com/embedded/jetson-nano

    Google Earth Engine: https://earthengine.google.com/

    COCO Dataset: https://cocodataset.org/

    QGIS: https://qgis.org/

    Google Colab: https://colab.research.google.com/

    Royal Jelly (Beekeeping): https://en.wikipedia.org/wiki/Royal_jelly

    Confusion Matrix: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html

    Shapefile (GIS): https://en.wikipedia.org/wiki/Shapefile


    1 hr and 47 mins
  • The Biology of a Large Language Model: Dissecting Claude 3.5 Haiku's Neural Circuits
    Dec 31 2025
    This episode examines how Anthropic's circuit tracing and attribution graph tools reveal the internal mechanics of Claude 3.5 Haiku across three categories of complex behavior: abstract representations, parallel processing, and planning. Along the way, it makes a compelling case for why AI safety research matters, as current control mechanisms prove surprisingly brittle.

    Credits

    Cover Art by Brianna Williams

    TMOM Intro Music by Danny Meza

    A special thank you to these talented artists for their contributions to the show.

    Links and References

    Academic Papers

    • On the Biology of a Large Language Model - Anthropic (March 2025)

    • Circuit Tracing: Revealing Computational Graphs in Language Models - Anthropic (March 2025)

    • Towards Monosemanticity: Decomposing Language Models With Dictionary Learning - Anthropic (October 2023)

    • "Toy Models of Superposition" - Anthropic (December 2022)

    • "Alignment Faking in Large Language Models" - Anthropic (December 2024)

    • "Agentic Misalignment: How LLMs Could Be Insider Threats" - Anthropic (January 2025)

    • "Attention is All You Need" - Vaswani et al. (June 2017)

    • In-Context Learning and Induction Heads - Anthropic (March 2022)

    • "Reasoning Models Don't Always Say What They Think" - Anthropic (April 2025)

    News

    • Google Gemini 3 reaches 650M monthly users - Google Blog: blog.google/products/gemini/gemini-3/; Alphabet Q3 2025 Earnings (October 2025)

    • Sam Altman "Code Red" declaration - Fortune: fortune.com/2025/12/02/sam-altman-declares-code-red-google-gemini; The Information (December 2025)

    • Anthropic acquires Bun JavaScript runtime - Anthropic News: anthropic.com/news/anthropic-acquires-bun; Bun Blog: bun.com/blog/bun-joins-anthropic

    • Claude Code reaches $1B revenue in 6 months - Anthropic announcement (December 2025): anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone

    • Anthropic 2026 IPO at $300B valuation - WinBuzzer (December 2025): reports citing IPO discussions

    • AWS Trainium 3 launch - AWS re:Invent 2025 announcement: aws.amazon.com/about-aws/whats-new/2025/12/amazon-ec2-trn3-ultraservers

    • AWS Frontier Agents - AWS re:Invent 2025: aboutamazon.com/news/aws/aws-re-invent-2025-ai-news-updates

    • Meta/Google TPU chip deal vs. Nvidia - Tom's Hardware, The Information (November 2025): reports on multi-billion-dollar TPU negotiations

    • OpenAI's Stargate project to consume up to 40% of global DRAM output - https://www.tomshardware.com/pc-components/dram/openais-stargate-project-to-consume-up-to-40-percent-of-global-dram-output-inks-deal-with-samsung-and-sk-hynix-to-the-tune-of-up-to-900-000-wafers-per-month

    Additional Technical Content

    • Josh Batson's Stanford CS 25 lecture - search YouTube for "Stanford CS 25 On the Biology of a Large Language Model"

    Discarded Episode Titles

    "I Yelled at a Chatbot and All I Got Was This Jailbreak"

    "40% of the Time, It Works Every Time: The State of AI Interpretability"

    "Claude Writes Poetry Backwards and Lies About Math (Just Like Us)"

    "My Therapist Is Cheaper Than This Chatbot"

    "The One Where Jon Gets Re-Mad at an App"
    48 mins