Neurosymbolic AI And Why Reasoning Matters More Than Scale

About this listen

Why do today's most powerful AI systems still struggle to explain their decisions, keep repeating the same mistakes, and undermine trust at the very moment we are asking them to take on more responsibility?

In this episode of Tech Talks Daily, I'm joined by Artur d'Avila Garcez, Professor of Computer Science at City St George's, University of London, and one of the early pioneers of neurosymbolic AI.

Our conversation cuts through the noise around ever-larger language models and focuses on a deeper question many leaders are now grappling with: if scale alone cannot deliver reliability, accountability, or genuine reasoning, what is missing from today's AI systems?

Artur explains neurosymbolic AI in clear, practical terms as the integration of neural learning with symbolic reasoning. Deep learning excels at pattern recognition across language, images, and sensor data, but it struggles with planning, causality, and guarantees. Symbolic AI, by contrast, offers logic, rules, and explanations, yet falters when faced with messy, unstructured data. Neurosymbolic AI aims to bring these two worlds together, allowing systems to learn from data while reasoning with knowledge, producing AI that can justify decisions and avoid repeating known errors.
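
To make that division of labour concrete, here is a minimal sketch of the pattern Artur describes: a neural model proposes labels from raw data, and a symbolic rule layer vetoes combinations that violate explicit knowledge, leaving a trace of why. The function names, the `RULES` table, and the traffic-scene example are hypothetical stand-ins, not anything from the episode.

```python
from typing import Dict

# Pairs of labels that explicit domain knowledge says cannot both hold
# for the same scene (hypothetical example).
RULES = [
    ("green_light", "stop_sign_active"),
]

def score_labels(image) -> Dict[str, float]:
    # Stand-in for a trained neural network's soft predictions.
    return {"green_light": 0.81, "stop_sign_active": 0.74, "pedestrian": 0.10}

def apply_rules(scores: Dict[str, float], threshold: float = 0.5) -> Dict[str, bool]:
    # Threshold the neural scores, then let the symbolic rules correct them.
    decisions = {k: v >= threshold for k, v in scores.items()}
    for a, b in RULES:
        if decisions.get(a) and decisions.get(b):
            # Keep the higher-confidence label, drop the weaker one, and say why.
            weaker = a if scores[a] < scores[b] else b
            decisions[weaker] = False
            print(f"rule fired: '{a}' and '{b}' conflict; dropped '{weaker}'")
    return decisions

print(apply_rules(score_labels(image=None)))
```

The point of the sketch is the justification trace: when a rule fires, the system can state which piece of knowledge overrode the learned prediction, rather than silently emitting an inconsistent answer.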

We explore why simply adding more parameters and data has failed to solve hallucinations, brittleness, and trust issues. Artur shares how neurosymbolic approaches introduce what he describes as software assurances: ways to reduce the chance of critical errors by design rather than by trial and error. From self-driving cars to finance and healthcare, he explains why combining learned behavior with explicit rules mirrors how high-stakes systems already operate in the real world.

A major part of our discussion centers on explainability and accountability. Artur introduces the neurosymbolic cycle, sometimes called the NeSy cycle, which translates knowledge into neural networks and extracts knowledge back out again. This two-way process opens the door to inspection, validation, and responsibility, shifting AI away from opaque black boxes toward systems that can be questioned, audited, and trusted. We also discuss why scaling neurosymbolic AI looks very different from scaling deep learning, with an emphasis on knowledge reuse, efficiency, and model compression rather than ever-growing compute demands.
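
As a rough illustration of that two-way cycle, the toy below compiles propositional rules into the weights of a tiny network (KBANN-style initialisation) and then reads rules back out by thresholding the weights. The feature names, rule set, and thresholds are hypothetical, and this is a sketch of the general idea rather than Garcez's actual algorithms; in a real system, training on data would sit between the encode and extract steps.

```python
import numpy as np

FEATURES = ["rain", "sprinkler", "cloudy"]

def encode_rules(rules, n_inputs, strength=5.0):
    """Knowledge in: turn 'IF a AND b THEN target' rules into one hidden unit
    each, with large weights on the antecedents and a bias chosen so the unit
    fires only when all antecedents are true (AND semantics)."""
    W = np.zeros((len(rules), n_inputs))
    b = np.zeros(len(rules))
    for h, antecedents in enumerate(rules):
        for name in antecedents:
            W[h, FEATURES.index(name)] = strength
        b[h] = -strength * (len(antecedents) - 0.5)
    return W, b

def extract_rules(W, threshold=2.0):
    """Knowledge out: any input with a large positive weight into a hidden
    unit is read back as an antecedent of that unit's rule."""
    return [[FEATURES[i] for i in range(W.shape[1]) if W[h, i] > threshold]
            for h in range(W.shape[0])]

# "IF rain THEN wet" and "IF sprinkler AND cloudy THEN wet"
rules_in = [["rain"], ["sprinkler", "cloudy"]]
W, b = encode_rules(rules_in, n_inputs=len(FEATURES))

# ... gradient-based training on data would refine W and b here ...

print(extract_rules(W))   # -> [['rain'], ['sprinkler', 'cloudy']]
```

Because the extracted rules are human-readable, they can be inspected, validated against domain knowledge, and fed back into the next network, which is the inspection-and-accountability loop the NeSy cycle is meant to enable.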

We also look ahead. From domain-specific deployments already happening today to longer-term questions around energy use, sustainability, and regulation, Artur offers a grounded view on where this field is heading and what signals leaders should watch for as neurosymbolic AI moves from research into real systems.

If you care about building AI that is reliable, explainable, and trustworthy, this conversation offers a refreshing and necessary perspective. As the race toward more capable AI continues, are we finally ready to admit that reasoning, not just scale, may decide what comes next? And what kind of AI do we actually want to live with?

Useful Links

  • Neurosymbolic AI (NeSy) Association website
  • Artur's personal webpage on the City St George's, University of London site
  • Co-authored book titled "Neural-Symbolic Learning Systems"
  • The article about neurosymbolic AI and the road to AGI
  • The Accountability in AI article
  • Reasoning in Neurosymbolic AI
  • Neurosymbolic Deep Learning Semantics