The Executive Summary
The AI Moment podcast series by Danny Denhard and Jonathan Wagstaffe continues with episode 8, focused on the critical and often overlooked issue of truth in the age of artificial intelligence.
They express concern over the increasing influence of governments and corporations in defining what AI platforms present as factual information.
Highlighting recent examples from both China and the US, they argue that this trend threatens to undermine the reliability of AI as a tool for business and society.
The hosts warn that as AI becomes more integrated into our daily lives, the manipulation of information could have profound consequences, from influencing consumer decisions to shaping political discourse.
They conclude by urging listeners to remain vigilant, critically evaluate the information they receive from AI, and stay informed about the evolving landscape of AI governance.
The Full Rundown
Jonathan and Danny kick off their discussion by identifying a crucial, yet under-discussed, topic: the concept of "truth" within AI and who gets to define it.
Jonathan emphasises that for AI to be a truly valuable tool, particularly in the business world, its outputs must be trustworthy and accurate.
He points to concerning developments, such as the Chinese AI model DeepSeek providing neutral or evasive answers to questions about government policy, suggesting state influence. More alarmingly, he notes recent US legislation aiming to give the government a say in what AI platforms deem true, as well as corporate influence, citing an instance where Elon Musk's Grok was reportedly programmed to reflect his opinions.
Danny echoes these concerns, observing the growing trend of AI platforms being used to determine the truthfulness of information on social media. He highlights the danger of relying on systems like Grok or Perplexity, as their algorithms are not transparent.
The hosts discuss the possibility of different AI models, such as those from OpenAI, Anthropic (Claude), and xAI (Grok), presenting conflicting versions of the truth, a phenomenon already beginning to emerge.
This issue is compounded by the fact that a significant portion of users already trust AI models more than traditional search engines.
The conversation then shifts to the potential real-world consequences of this battle for "truth." Danny raises the scenario of a business being unfairly labelled as "bad" by an AI, leading to significant financial repercussions. For consumers, relying on AI agents for recommendations could mean missing out on the best products or services if the AI's information is biased.
Jonathan frames the current situation as the "wild west" of AI, similar to the early days of social media. He predicts that just as bad actors exploited social media for political manipulation, they will do the same with AI. The hosts conclude with a call to action for their listeners: be critical of the information provided by AI, especially when it pertains to commercial or political matters.
They stress the importance of staying informed about the power dynamics at play between governments, tech giants, and the public in shaping the future of AI and truth.