In this episode of Artificial Thought, we unpack the 2025 AI Index report—one of the most comprehensive snapshots of where the field sees itself heading. From benchmark breakthroughs to global regulation, the report tracks acceleration in every direction. But what does that tell us about the systems we’re building—and what does it leave out?

We’ll walk through some of the major findings, raise a few behavioural questions, and reflect on what’s really being measured when progress moves this fast.

This episode sets the stage for a deeper conversation in the next post, where we explore the blind spots in AI evaluation—and why behavioural insight might be key to filling them.

A closer look at the AI Index 2025

The AI Index is one of the most influential annual snapshots of how artificial intelligence is evolving—tracking everything from technical performance and economic trends to regulation, education, and public opinion. Produced by the Stanford Institute for Human-Centered AI, it’s the most data-rich public record we have of where the field sees itself going. But with hundreds of charts, complex benchmarks, and deep dives across disciplines, it’s not exactly designed for a quick read.

Still, understanding where the field is heading matters—especially for those of us thinking about the behavioural consequences of system design. This guide offers a high-level overview of the major findings, with just enough context to orient you and just enough interpretation to make you pause. It pairs with the podcast episode, which walks through some of the trends conversationally. A deeper companion essay is coming soon.

Technical leaps are happening fast, but unevenly

AI systems are getting dramatically better at coding, answering expert-level questions, and understanding complex inputs like charts and diagrams. On one coding benchmark, SWE-bench, models jumped from solving 4% of problems to over 70% in a single year. That’s a massive leap in software automation.

At the same time, models still struggle with tasks that involve deep reasoning, logical planning, or understanding cause and effect. So while AI can now mimic expertise in many areas, it still stumbles on the kind of reasoning that requires steps, context, and reflection.

The report also notes that smaller, more efficient models are rapidly closing the gap with their larger, more resource-intensive predecessors. This suggests AI might soon become more accessible—embedded in everyday tools, not just the big platforms.

AI is spreading through business, but impact is still limited

Corporate investment in AI reached $252.3 billion in 2024, a 26% increase from the previous year. Adoption is rising too: 78% of organisations say they’re using AI in some form, and use of generative AI more than doubled in just one year.

But most companies report modest results so far. Fewer than 10% say they’re seeing major cost savings or new revenue directly from AI.
In other words: the hype is real, but the business value is still emerging—and often depends on how well the tools are integrated into human workflows.

Global optimism is rising, but opinions are polarised

More people believe AI will change their daily lives in the next three to five years—and optimism about AI’s potential is growing in countries that were previously more cautious, like Germany and Canada. But there’s a clear regional divide:

* Countries like China, Indonesia, and Mexico report high excitement and trust.
* The US, UK, and Canada are more sceptical—especially about fairness, privacy, and the intentions of AI companies.

Trust in the ethical conduct of AI developers is declining globally, even as enthusiasm grows. This gap—between excitement and concern—is a key behavioural dynamic to watch.

Responsible AI is talked about more than it’s practised

Although companies are increasingly aware of the risks—like bias, misinformation, and security—few are taking consistent steps to mitigate them. Barriers include:

* A lack of training or understanding among teams
* Unclear regulations
* Competing priorities, like speed to market

Even in healthcare, where the stakes are high, studies show that adding AI doesn’t always improve outcomes unless workflows and roles are carefully redesigned.

Collaboration is assumed, but not guaranteed

Many parts of the report reflect an implicit belief that AI will support, not replace, human roles. Research shows that in many settings, AI complements human work—handling detail or recall while people focus on judgement.

But this collaboration only works well when the system fits the task and the user understands how to work with it. Some studies find that using AI tools alongside traditional resources doesn’t always improve results; the interaction needs to be well designed.

AI is deeply embedded in science, medicine, and education

From helping win Nobel prizes to predicting protein structures and assisting in climate modelling, AI is becoming a core part of scientific ...