
Actual Intelligence with Steve Pearlman


By: Steve Pearlman, Ph.D.

About this listen

Dr. Steve Pearlman is one of the world's premier critical thinking experts. You can view his viral Editor's Pick TEDx talk here: https://youtu.be/Bry8J78Awq0?si=08vBAR1710mgQt0i

pearlmanactualintelligence.substack.com
Steve Pearlman, Ph.D.
Social Sciences
Episodes
  • Is Higher Ed to Collapse from A.I.?
    Sep 9 2025
    Steve Pearlman: Today on Actual Intelligence, we have a very important and timely discussion with Dr. Robert Niebuhr of ASU, whose recent opinion piece in Inside Higher Ed is titled "AI and Higher Ed and an Impending Collapse." Robert is a teaching professor and honors faculty fellow at the Barrett Honors College at ASU. The reason that I invited him to speak with us today on Actual Intelligence is his perspective on artificial intelligence and education, and his contention, roughly, that higher ed's rush to embrace artificial intelligence is going to lead us to some rather troubling places. So let's get to it with Dr. Robert Niebuhr.

    Robert, we talked a little bit about this on our pre-call, and I don't usually start a podcast like this, but what you said to me was so striking, so nauseating, so infuriating, that I think it's a good place to begin, and maybe some of [00:01:00] our listeners who value actual intelligence will also find it as appalling as I do, or at least a point of interest that needs to be talked about. You were in a meeting, and we're not going to talk about exactly what that meeting was, but you were in a meeting with a number of other faculty members, and something interesting arose. I'll allow you to share that experience with us, and we'll use that as a springboard for this discussion.

    Robert Niebuhr: Yeah, sure. As you can imagine, faculty are trying to cope with a perceived notion that students are using AI to create essays. Where I'm at, one of the backbones of assessed work in my unit is the argumentative essay; the argumentative essay is the backbone of a grade and assessment. And if we're suspecting that students are using AI, [00:02:00] faculty said, well, why should we bother grading essays if they're written by bots?

    There's a lot to unpack there, and a lot of things that are problematic with that. But yeah, the idea that, to combat the perceived threat of student misuse of AI, we will simply forego critical assessment: that was not a lone voice in the room. It seemed to be a reasonably popular position.

    Steve Pearlman: Was there any recognition of what might be sacrificed by never having students write another essay, just to avoid them using AI? Of course we don't want them to just have AI write their essays; that's not getting us anywhere. But was there any conception that there might be some loss in terms of that policy? [00:03:00]

    Robert Niebuhr: I think so. I imagine my colleagues come from a place where they're trying to figure out and cope with a change in reality, right? But there is also a subtext, I think, across faculties in the United States, of being overworked. And especially with the mantra among administration that AI will help us ramp up or scale up our class sizes, that we can do more, all this sort of extra stuff that would seem to ask for more of faculty's time and more of their effort. I think that may have been part of it. I don't know that they considered the logical implication of this: that if we no longer exercise students' brains, if we no longer have them go through a process that encourages critical [00:04:00] thinking and articulating that thinking through writing, what that means.

    I don't know that they thought it through beyond, well, we could try it and see. That was kind of the mentality that I gauged from the room. But it's a bigger problem, right? The larger question is: what do we do, what can we do, as faculty in this broad push for AI all over the place? And then there's the matter of the mixed messages students get. Students get this idea: this is the future; if you don't learn how to use it, if you don't understand it, you're going to be left behind. And at the same time, it's: don't use it in my class. Learn it, but don't use it here. That's super unclear for students, and it's unclear for faculty too. So it's one of those things that I don't think works in the short term. And as you implied, the long-term solution here of getting rid of essay [00:05:00] assignments in a discussion-based seminar that relies on essays as a critical ... I mean, this is not ...
    44 mins
  • Here’s how to think outside the box.
    Sep 4 2025
    How does your brain tackle a new problem? Believe it or not, it tackles new problems by using old frameworks it created for similar problems you faced before. But if your brain is wired to use old frameworks for new problems, then isn't that a problem? It is. And that's why most people never think outside the box.

    So, how do you get your brain to think innovatively? Divergently? And outside the box, when others don't? It's easier than you think, but before we get to that, let's be clear on something. When I talk about frameworks, I'm not speaking metaphorically. I'm speaking about the literal wiring of your brain, something neuropsychologists might refer to as "engrams," and just one engram might be a network of millions of synapses.

    Think of these engrams as your brain's quick-reference book for solving problems. For example, if your brain sees a small fire, it quickly finds the engrams that it has for fire. One engram might be to run out of the house. Another might be to pour water on the problem. Without these existing engrams, you might just stand there staring at the fire, trying to figure out what to do. So, you should be thankful that your brain has these pre-existing engrams for problems. If it didn't, every problem would seem entirely new.

    But there's a serious flaw in the brain's use of engrams. Old engrams don't always apply to new problems. Let's say your brain sees a fire, but this time it's an electrical fire. It still sees fire, shuffles through its engrams, and lands on the engram for pouring water on the fire to extinguish it. In its haste, your brain overlooks the fact that it's an electrical fire. So, pouring water on it only spreads it, if it doesn't also get you electrocuted.

    Your brain chose the closest engram it had for the current problem, but that old engram for extinguishing fire with water was terribly flawed when it came to electrical fires.
    Old engrams never fully match new problems.

    So, here's why most people cannot think outside the box: they're trapped using old engrams and do not know how to shift their brains into new ones. That's right. Since the brain needs to rely on some kind of existing engram, people who do not know how to break free of their engrams will never think innovatively, creatively, or outside the box.

    But thinking outside the box is easy if you know the trick. When faced with a problem, even if it is similar to one you faced before, or especially if it is similar to one you faced before, you need to force your brain into looking at the problem in a radically different way. Remember, your brain will keep trying to work back to the old engram. That's its default approach. It wants to use the templates it already has. And so you have to shock it into a new perspective that does not allow it to revert to the old one. I'm talking about something that has nothing to do with the problem at all: an abstract, divergent, and entirely unrelated new perspective.

    For example, when you're facing a problem, or when you're leading a team facing a problem, examine the problem through some kind of radical analogy that seemingly has nothing to do with the problem itself, but with which you or your team are familiar.

    You might ask: how is this situation like Star Wars? Who or what is Darth Vader? What's the Force? Who or what is Luke Skywalker? What's a lightsaber in this scenario?

    Or, you might consider how your problem is like what happened to Apollo 13. How are we spiraling through space? How much power do we need to conserve, and how do we do it? Who's inside the capsule? What's outside? Who's mission control?
    And so on.

    You might think these are trivial or even silly examples, but remember: it is the fact that they are so unrelated and abstract that will jolt your brain out of its existing engrams and force it to look at the problem in entirely new ways. And here's the beauty of it: because your brain still wants to solve the problem, it will, on its own, whether you even want it to or not, find ways to make connections between your abstract idea and the problem itself, and it will do so in innovative, creative ways that will make your thinking, or your team's thinking, stand out.

    Remember, when Einstein was developing his Theory of Relativity, he didn't just sit around doing math. He also spent a lot of time imagining what it would be like to ride on a beam of light.

    So, when it comes down to it, if you know what to do, then thinking outside of the box might be easier than ... well ... easier than you think.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit pearlmanactualintelligence.substack.com
    6 mins
  • Did the APA just end critical thinking in colleges?
    Sep 2 2025
    Thanks for reading Actual Intelligence with Dr. Steve Pearlman! Subscribe free to receive new posts and support my work.

    APA to Students: Don't Bother to Think for Yourselves Anymore. Let AI Do It.

    If in the future you want a psychologist who can actually think about psychology, or a doctor who can actually think about medicine, or a teacher who can think about what they're teaching, or a lawyer who can actually think about the law, then the American Psychological Association's (APA) new A.I. policies should make you concerned. Maybe they should even make you angry.

    As many who've been to college already know, the APA's standards for what constitutes academic integrity and citing sources are the prevailing standards at most institutions. When students write papers or conduct any research, it's typically the APA's standards that they observe for what they are permitted to use and how they must disclose their use of it.

    Yet, when it comes to supporting critical thinking and actual intelligence, the APA's new standards just took a problematic, if not catastrophic, turn. And the irony is palpable. Of all the organizations that set standards for how students should use their brains, you might think that the American Psychological Association would want to hold the line in favor of actual thinking skills. You might think that with all of the emerging research on A.I.'s negative consequences for the brain, including the recent MIT study that showed arrested brain development in students using A.I. to write (which you can learn more about on my recent podcast), the APA would adopt a vanguard position against replacing critical thinking with A.I. You might think that the APA would want to bolster actual intelligence, independent thought, evidence-based reasoning, and so on. But instead of supporting those integral aspects of healthy brain development, the APA just took a big step in the opposite direction.

    I'm referring to the APA's new so-called "standards" for "Generative A.I. Use," standards that open the doors for students to let Generative A.I. do their thinking for them. For example, the APA licenses students to have A.I. "analyze, refine, format, or visualize data" instead of doing it themselves, provided, of course, that they disclose "the tool used and the number of iterations" of outputs. Similarly, the APA welcomes students to have A.I. "write or draft manuscript content" for them, provided that they disclose the "prompts and tools used."

    To be clear, the APA's new standards make it all too clear that it is very concerned that students properly attribute their uses of Generative A.I., but the American Psychological Association is not concerned about students using Generative A.I. to do their thinking for them. In other words, the APA has effectively established that it is okay if students don't analyze their own data, find their own sources, write their own papers, create their own research designs, or do any thinking of their own at all; it's just not okay if students don't disclose it. In short, the leading and most common vanguard for the integrity of individual intellectual work just undermined the fundamental premise of education itself.

    What the APA could have done, and should have done instead, was take a Gibraltarian stand against students using A.I. in place of their own critical thinking and independent thought. That is, after all, what it has done up to this point. For example, students were simply not permitted to have a friend draft an essay for them. In many circles, they were not even permitted to have a friend proofread their work unless the syllabus licensed them to do so. But for some reason, because it is an A.I. drafting the paper instead of a friend, the APA considers it permissible.

    Consistent with its history of guarding academic standards, the APA could have said that students who have an A.I. "analyze … data" or "write or draft manuscript content" are not using their own intellect and are therefore cheating. Period. Doing so would have sent a strong message across all of academia that permitting students to use Generative Artificial Intelligence instead of their actual intelligence is a violation of academic integrity, not to mention a gross violation of the most fundamental premise of education itself: the cultivation of the student's mind.

    To be fair, not all of the uses of A.I. referenced by the APA's new standards are cheating. For example, allowing students to use A.I. to "create … tables" or "figures," instead of painstakingly trying to build them in Microsoft Word, would not replace the student's meaningful cognitive work.

    Furthermore, and more importantly, the APA's policies are not binding. Educators, departments, and institutions need not follow suit. Any given educator can still restrict...
    8 mins