• Is Higher Ed to Collapse from A.I.?
    Sep 9 2025
    Steve Pearlman: Today on Actual Intelligence, we have a very important and timely discussion with Dr. Robert Niebuhr of ASU, whose recent opinion piece in Inside Higher Ed is titled "AI and Higher Ed, and an Impending Collapse." Robert is a teaching professor and honors faculty fellow at the Barrett Honors College at ASU. And the reason that I invited him to speak with us today on Actual Intelligence is his perspective on artificial intelligence and education, and his contention, roughly, that higher ed's rush to embrace artificial intelligence is going to lead us to some rather troubling places. So let's get to it with Dr. Robert Niebuhr.
    Robert, we talked a little bit about this on our pre-call, and I don't usually start a podcast like this, but what you said to me was so striking, so nauseating, so infuriating, that I think it's a good place to begin, and maybe some of [00:01:00] our listeners who value actual intelligence will also find it as appalling as I do, or at least a point of interest that needs to be talked about. You were in a meeting (we're not going to talk about exactly what that meeting was), but you were in a meeting with a number of other faculty members, and something interesting arose. I'll allow you to share that experience with us, and we'll use that as a springboard for this discussion.
    Robert Niebuhr: Yeah, sure. Obviously, as you can imagine, faculty are trying to cope with a perceived notion that students are using AI to create essays. And where I'm at, one of the backbones of assessed work in my unit is the argumentative essay, the idea that the argumentative essay is a backbone of a grade and an assessment. And if we're suspecting that students are using AI, [00:02:00] faculty said, well, why should we bother grading essays if they're written by bots?
    And, you know, there's a lot to unpack there, and a lot of things that are problematic with that. But yeah, the idea that, to combat the perceived threat of student misuse of AI, we will just forego critical assessment: that was not a lone voice in the room. That seemed to be something that was reasonably popular.
    Steve Pearlman: Was there any recognition of what might be sacrificed by never having students write another essay, just to avoid them using AI? Of course we don't want them to just have AI write their essays; that's not getting us anywhere. But was there any conception that there might be some loss in terms of that policy? [00:03:00]
    Robert Niebuhr: I think so. I imagine my colleagues come from a place where they're trying to figure out and cope with a change in reality, right? But there is also a subtext, I think, across faculties in the United States, of being overworked, especially with the mantra among administration that AI will help us ramp up or scale up our class sizes so we can do more, all this sort of extra stuff that would seem to ask for more of faculty's time and more of their effort. I think that may have been part of it. I don't know that they considered the logical implication of this: if we no longer exercise students' brains, if we no longer have them go through a process that encourages critical [00:04:00] thinking and articulating it through writing, what that means.
    I don't know that they thought it through beyond, well, we could try it and see; that was kind of the mentality that I gauged from the room. But it's a bigger problem, right? The larger aspect is: what do we do, what can we do as faculty, in this broad push for AI all over the place? And then there's the idea of the mixed messages students get, right? Students get this idea that this is the future, and if you don't learn how to use it, if you don't understand it, you're going to be left behind. And then at the same time, it's, well, don't use it in my class. Right? Learn it, but don't use it here. And that's super unclear for students, and it's unclear for faculty too, right? So it's one of those things that I don't think works in the short term. And as you implied, the long-term solution here of getting rid of essay [00:05:00] assignments in a discussion-based seminar that relies on essays as a critical... I mean, this is not ...
    44 mins
  • Here’s how to think outside the box.
    Sep 4 2025
    How does your brain tackle a new problem? Believe it or not, it tackles new problems by using old frameworks it created for similar problems you faced before. But if your brain is wired to use old frameworks for new problems, then isn’t that a problem? It is. And that’s why most people never think outside the box.
    So, how do you get your brain to think innovatively? Divergently? And outside the box, when others don’t?
    It’s easier than you think, but before we get to that, let’s be clear on something. When I talk about frameworks, I’m not speaking metaphorically. I’m speaking about the literal wiring of your brain, something neuropsychologists might refer to as “engrams,” and just one engram might be a network of millions of synapses.
    Think of these engrams as your brain’s quick-reference book for solving problems. For example, if your brain sees a small fire, it quickly finds the engrams that it has for fire. One engram might be to run out of the house. Another might be to pour water on the problem. Without these existing engrams, you might just stand there staring at the fire trying to figure out what to do. So, you should be thankful that your brain has these pre-existing engrams for problems. If it didn’t, every problem would seem entirely new.
    But there’s a serious flaw in the brain’s use of engrams. Old engrams don’t always really apply to new problems. So, let’s say your brain sees a fire, but this time it’s an electrical fire. It still sees fire, shuffles through its engrams, and lands on the engram for pouring water on that fire to extinguish it. In its haste, its old engram overlooks the fact that it’s an electrical fire. So, pouring water on it only spreads it, if it doesn’t also get you electrocuted.
    Your brain chose the closest engram it had for solving the current problem, but that old engram for extinguishing fire with water was terribly flawed in terms of solving for electrical fires.
    Old engrams never fully match new problems. So, here’s why most people cannot think outside the box: they’re trapped using old engrams and do not know how to shift their brains into new ones. That’s right. Since the brain needs to rely on some kind of existing engram, people who do not know how to break free of their engrams will never think innovatively, creatively, or outside the box.
    But thinking outside the box is easy if you know the trick. When faced with a problem, even if it is similar to one you faced before, or especially if it is similar to one you faced before, you need to force your brain into looking at the problem in a radically different way. Remember, your brain will keep trying to work back to the old engram. That’s its default approach. It wants to use templates it already has. And so you have to shock it into a new perspective that does not allow it to revert to the old perspective. I’m talking about something that has nothing to do with the problem at all. I’m talking about an abstract, divergent, and entirely unrelated new perspective.
    For example, when you’re facing a problem, or when you’re leading a team facing a problem, examine the problem through some kind of radical analogy that seemingly has nothing to do with the problem itself, but something with which you or your team are familiar.
    You might ask, how’s this situation like Star Wars? Who or what is Darth Vader? What’s the force? Who or what is Luke Skywalker? What’s a lightsaber in this scenario?
    Or, you might consider how your problem is like what happened to Apollo 13. How are we spiraling through space? How much power do we need to conserve, and how do we do it? Who’s inside the capsule? What’s outside? Who’s mission control?
    And so on. See, you might think that these are trivial or even silly examples, but remember, it is the fact that they are so unrelated and abstract that will jolt your brain out of its existing engrams and force it to look at the problem in entirely new ways. And here’s the beauty of it: because your brain still wants to solve the problem, it will, on its own, whether you even want it to or not, find ways to make connections between your abstract idea and the problem itself, and it will do so in innovative, creative ways that will make your thinking, or your team’s thinking, stand out.
    Remember, when Einstein was developing his Theory of Relativity, he didn’t just sit around doing math. He also spent a lot of time imagining what it would be like to ride on the front of a beam of light.
    So, when it comes down to it, if you know what to do, then thinking outside of the box might be easier than … well … easier than you think. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit pearlmanactualintelligence.substack.com
    6 mins
  • Did the APA just end critical thinking in colleges?
    Sep 2 2025
    Thanks for reading Actual Intelligence with Dr. Steve Pearlman! Subscribe FREE to receive new posts and support my work.
    APA to Students: Don't Bother to Think for Yourselves Anymore. Let AI Do It.
    If in the future you want a psychologist who can actually think about psychology, or a doctor who can actually think about medicine, or a teacher who can actually think about what they’re teaching, or a lawyer who can actually think about the law, then the American Psychological Association’s (APA) new A.I. policies should make you concerned. Maybe they should even make you angry.
    As many who’ve been to college already know, the APA’s standards for academic integrity and source citation are the prevailing standards at most institutions. When students write papers or conduct any research, it’s typically the APA’s standards that they observe for what they are permitted to use and how they must disclose their use of it.
    Yet, when it comes to supporting critical thinking and actual intelligence, the APA’s new standards just took a problematic if not catastrophic turn. And the irony is palpable. Of all the organizations that set standards for how students should use their brains, you might think that the American Psychological Association would want to hold the line in favor of actual thinking skills. You might think that with all of the emerging research on A.I.’s negative consequences for the brain, including the recent MIT study that showed arrested brain development in students using A.I. to write, which you can learn more about on my recent podcast, the APA would adopt a vanguard position against replacing critical thinking with A.I. You might think that the APA would want to bolster actual intelligence, independent thought, evidence-based reasoning, etc. But instead of supporting those integral aspects of healthy brain development, the APA just took a big step in the opposite direction.
    I’m referring to the APA’s new so-called “standards” for “Generative A.I.
    Use,” standards that open the doors for students to let Generative A.I. do their thinking for them. For example, the APA licenses students to have A.I. “analyze, refine, format, or visualize data” instead of doing it themselves, provided, of course, that they just disclose “the tool used and the number of iterations” of outputs. Similarly, the APA welcomes students to have A.I. “write or draft manuscript content” for them, provided that they disclose the “prompts and tools used.”
    To be clear, the APA’s new standards make it all too clear that it is very concerned that students properly attribute their uses of Generative A.I., but the American Psychological Association is not concerned about students using Generative A.I. to do their thinking for them. In other words, the APA has effectively established that it is okay if students don’t analyze their own data, find their own sources, write their own papers, create research designs, or do effectively any thinking of their own; it’s just not okay if students don’t disclose it. In short, the leading and most common guardian of the integrity of individual intellectual work just undermined the fundamental premise of education itself.
    What the APA could have done, and should have done instead, was to take a Gibraltarian stand against students using A.I. in place of their own critical thinking and independent thought. That is, after all, what it had done up to this point. For example, students were simply not permitted to have a friend draft an essay for them. In many circles, they were not even permitted to have a friend proofread their work unless the syllabus licensed them to do so. But for some reason, since it is an A.I. drafting the paper instead of a friend, the APA considers it permissible.
    Consistent with its history of guarding academic standards, the APA could have said that students who have an A.I. “analyze … data” or “write or draft manuscript content” were not using their own intellect and were therefore cheating. Period. Doing so would have sent a strong message across all of academia that permitting students to use Generative Artificial Intelligence instead of their actual intelligence was a violation of academic integrity, not to mention a gross violation of the most fundamental premise of education itself: the cultivation of the student’s mind.
    To be fair, not all of the uses of A.I. referenced by the APA’s new standards are cheating. For example, allowing students to use A.I. to “create … tables” or “figures,” instead of painstakingly trying to build them in Microsoft Word, would not replace the student’s meaningful cognitive work.
    Furthermore, and more importantly, the APA’s policies are not binding. Educators, departments, and/or institutions need not follow suit. Any given educator can still restrict...
    8 mins
  • Kids Want Off Their Phones. Here's How!
    Aug 25 2025


    Want Your Kids Off Their Phones? They Just Told Us How to Do It

    In a new Harris poll conducted with The Atlantic, kids have reminded us of the importance of unstructured, unsupervised play for the development not just of their actual intelligence, but of so many related developmental factors: critical thinking, problem solving, self-efficacy, social maturity, and, well, you name it.

    According to the article, “What Kids Told Us About How to Get Them Off Their Phones,” by David Graham and Tom Nichols, the Harris poll surveyed 500 kids between 8 and 12 years old, most of whom have phones and not only are on social media, but also interact, unsupervised, with adult strangers through social media or games. Yet, most aren’t allowed out in public without adult supervision, even though, as the article states, “according to Warwick Cairns, the author of How to Live Dangerously, kidnapping in the United States is so rare that a child would have to be outside unsupervised for, on average, 750,000 years before being snatched by a stranger,” statistically speaking.

    But modern parents, concerned about dangers in the real world, relegate their kids to online interactions in part under the guise of their safety. As the authors put it, “because so many parents restrict their ability to socialize in the real world on their own, kids resort to the one thing that allows them to hang out with no adults hovering: their phones.”

    If there are operative words in that quote, they are “no adults hovering.” What kids report is that more than anything else, they want play that does not involve adult supervision.

    Of course they do. Why? Because, based on overwhelming amounts of research, our brains evolved with free play as a primary means of cognitive and social development. And that’s not just true of humans, by the way. Studies on animals reinforce the point. For example, kittens who were not permitted free play never developed the social skills they needed as adults. So, it should not be surprising that human children are meant to play with each other, in mixed groups, without supervision, figuring out how to get along, create games, test their own ideas, etc.

    If you want a sense of just how important and powerful free play is, then consider just one of many recent studies: “Advocating for Play: The Benefits of Unstructured Play in Public Schools,” by Heather Macpherson Parrott and Lynn E. Cohen. The study examined the impact of increased free play time for kids in school and found improvements in the following areas:

    · desire and ability to learn/focus,

    · mood,

    · social interaction,

    · cooperation,

    · problem solving,

    · independence, and

    · self-advocacy

    All said, whereas the evidence about the harms of smartphones on child development is mounting fast, unsupervised free play helps young brains develop in just about all of the ways that they need to develop.

    So, though it might take just a little coordination with other parents, give your kids what they want (even if they don’t specifically know that they want it): free play with other kids that’s not (generally) under your watchful eye. Take their phones away and then drop them at a park, a backyard, a basement, etc., and tell them to have fun. And if they complain that they are bored, then tell them to figure out what to do, because that’s exactly what their brains need to learn anyway.

    What I mean by that is that it is healthy for their brains to work through being bored, figure out how to resolve social conflicts, and invent what to do next, including, and most especially, adapt to changing circumstances. All of that happens through free, unsupervised play. So, sometimes the key to excellent parenting isn’t parenting more, but parenting less.

    As Marc Bekoff wrote, “Play is training for the unexpected.”



    5 mins
  • Is ChatGPT Dumbing Down your Kid? New MIT Study Says, “Yes.”
    Aug 21 2025
    Is ChatGPT dumbing down your kid? It is, and here’s what you can do.
    A new MIT study reveals the powerful consequences of artificial intelligence on actual intelligence, and guess what? Simply (and terrifyingly) put, the use of artificial intelligence undermines your child’s actual intelligence. In short, when children don’t think for themselves, they don’t learn to think for themselves. That should surprise no one.
    I’ll get to the disturbing details of the study in a moment, but let me first explain why these outcomes were obvious and inevitable. In a nutshell, the brain functions like a muscle insofar as it becomes stronger when it is used and atrophies when it is not. I could list a thousand additional factors that affect thinking, but that simple premise really is enough for this discussion.
    And when I say that the brain functions like a muscle, most people think I’m speaking overly metaphorically. I’m not. While the brain, of course, isn’t actual muscle tissue, its functioning is remarkably similar. Much in the way that exercising muscles builds more muscle, exercising the brain builds the brain, literally. Every single time we engage in a thinking act, the brain builds more wiring for that thinking act, such as synapses through synaptogenesis. On the flipside, the brain not only allows existing pathways to diminish when they’re not used, it actually overwrites existing pathways with new ones.
    Watch this play out in the MIT study.
    The MIT Study
    That study is “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” by a team of researchers led by Dr. Nataliya Kosmyna. The scientists divided a group of students into three essay-writing groups: an “A.I.-assisted” writing group that used multiple LLMs (not just ChatGPT), a “search engine” group, and a “brain-only” group. The students then engaged in three writing sessions while the researchers monitored their brain activity using an EEG.
    Each student was interviewed after each session, and all of their writing was assessed by humans as well as by an A.I.
    So, what happens when one group is required to use their brains more than the other groups? Would it shock you to know that the group that needed to do their own thinking actually thought more? I hope not, any more than it should be surprising that a group of kids who practiced hitting a ball did better at hitting a ball than a group of kids who watched a robot hit a ball for them. (Okay, that’s not a perfectly fair analogy to the A.I. usage in this case, but it illustrates the point.)
    And the point is that the brain-only group performed better and scored higher on their essays. But that’s not the most important outcome for us. What’s more important is that “the brain-only group exhibited the strongest, widest-ranging networks” of brain activity, while the group with A.I. “assistance elicited the weakest overall coupling.” In other words, the brain-only group thought a lot; the A.I.-assisted group did not. Do you remember what we said about what happens when the brain “muscle” isn’t used?
    But it gets worse. The researchers brought those two groups back for a fourth session and switched their roles. They gave the A.I. group a brain-only writing task and the brain-only group an A.I. writing task. And here’s what’s so important: the brain-only group still performed better, even when using A.I., and the A.I. group still performed worse, even when given the opportunity to think for themselves. Or should I say, they did worse because they now had to think for themselves.
    Over the first three brain-only writing assignments, the brain-only students built their brains for the task, and they built mental frameworks (read: habits) to rely on when engaging those tasks. Thus, that they then “gained” an A.I. assistant did not suddenly degrade all of the wiring that their brains had built. But the A.I.
    group, when suddenly given the opportunity for a brain-only task, not only had built no wiring for accomplishing that task, but also, and this is the most critical part, had created wiring and mental frameworks for using A.I. instead.
    What that means in a nutshell, and these are my words, not those of the study, is that the brain-only group got smarter, and the A.I. group not only failed to become smarter, they got dumbed down: they became habituated to relying on A.I. Thus, when given the opportunity to do so, they were incapable of thinking as well as the brain-only participants did.
    All of that should be concerning enough, but there’s more. In addition to the direct cognitive effects, the researchers also found that brain-only participants “demonstrated higher memory recall” and engagement of thinking-related brain areas compared to the A.I. group. Meanwhile, compared to the brain-only group, the A.I. participants reported lower “ownership of their essay,” which is an educator’s way of saying that they didn’t ...
    11 mins
  • Is ChatGPT Dumbing Down your Kid? A New MIT Study Says, “Yes.”
    Aug 21 2025

    We finally have emerging research on Artificial Intelligence's consequences for actual intelligence. If you're an educator or parent--or if you're anyone who just thinks that thinking is important--then you need to learn about this study. It offers hard evidence that our young people are in danger of diminished thinking skills for life.



    11 mins
  • QUICK TIP: Unlocking Divergent Thinking--The Physical Connection
    Aug 13 2025

    Stuck in a mental rut? Need a way to break out of your current thought patterns? Want to unlock and unleash your creative, divergent, disruptive thinking skills?

    Who doesn't?

    Listen to learn how!



    10 mins
  • Headagogy Update!
    Feb 9 2024

    More Headagogy coming soon! Also, check out The Critical Thinking Institute podcast, with me!



    1 min