
AI-Associated Delusions

About this listen

This week we talk about AI therapy chatbots, delusions of grandeur, and sycophancy. We also discuss tech-triggered psychosis, AI partners, and confident nonsense.

Recommended Book: Mr. Penumbra's 24-Hour Bookstore by Robin Sloan

Transcript

In the context of artificial intelligence systems, a hallucination or delusion, sometimes more brusquely referred to as AI BS, is an output, usually from an AI chatbot but sometimes from another type of AI system, that is basically just made up.

Sometimes this kind of output is simply garbled nonsense. AI systems, those based on large language models, anyway, are essentially predicting what words will come next in the sentences they're writing, based on statistical patterns. That means they can string words together, then sentences, then paragraphs, in what seems like a logical and reasonable way, and in some cases they can even cobble together convincing stories or code or whatever else, because systems with enough raw material to work from have a good sense of what tends to go where, and thus what's good grammar and what's not, what code will work and what code will break your website, and so on.

In other cases, though, AI systems will seem to just make things up, but make them up convincingly enough that the fabricated parts of their answers can be tricky to detect.

Some writers have reported asking an AI to provide feedback on their stories, for instance, only to later discover that the AI didn't have access to the stories and was providing feedback based on the title, or on the writer's prompt, the text the writer used to ask the AI for feedback. Those answers were initially convincing enough that the writers didn't realize the AI hadn't read the pieces they had asked it to critique. And because most of these systems are biased toward sycophancy, toward brown-nosing the user, never saying anything that might upset them, or saying whatever they seem to want to hear, they'll provide general critique that sounds good, that lines up with what their systems tell them should be said in such contexts, but which is completely disconnected from those writings, and thus not useful to the writer as a critique.

That combination of confabulation and sycophancy can be brutal, especially as these AI systems become more powerful and more convincing. They seldom make the basic grammatical and factual errors they made even a few years ago, so it's easy to believe you're speaking to something that's thinking, or at the bare minimum, something that understands what you're trying to get it to help you with, or what you're talking about.
It's easy to forget when interacting with such systems that you're engaged not with another human or thinking entity, but with software that mimics the output of such an entity without experiencing the cognition of the real-deal thinking creatures it's attempting to emulate.

What I'd like to talk about today is another sort of AI-related delusion, one experienced by the humans interacting with these systems, not the other way around, and the seeming, and theoretical, pros and cons of these sorts of delusional responses.

Research into the effects of psychotherapy, including specific approaches like cognitive behavioral therapy and group therapy, shows that such treatments are almost always positive, with rare exceptions, and grant benefits that tend to last well past the therapy itself. People who go to therapy tend to benefit from it even after each session, and even after they stop going, if they eventually stop for whatever reason. The success rate, the variability of positive impacts, differs based on the clinical location, the therapist, and so on, but only by about 5% or less for each of those variables; so even a not-perfectly-aligned therapist or a less-than-ideal therapy location will, on average, benefit the patient.

That general positive impact is part of the theory underpinning the use of AI systems for therapy purposes. Instead of going into a therapist's office and speaking with a human being for an hour or so at a time, the patient speaks or types to an AI chatbot that's been optimized for this purpose: primed to speak like a therapist, given a wealth of therapy-related resources in its training data, and set up to provide therapy-related resources to the patient with whom it engages.

There are a lot of downsides to this approach, including the fact that AI bots are flawed in so many ways, are not actual humans, and thus can't really connect with patients the way a human therapist might. They also have difficulty shifting from a trained script; again, these systems are pulling from a corpus of training data and additional documents to which they have access, and that means they'll tend to handle common issues and patient types pretty well, ...
