Moreland’s denial that the brain produces consciousness made me think of the debate over whether future computers can become sentient. I decided to ask him to weigh in on the issue—although his ultimate conclusion was never in doubt. “If a machine can achieve brain power equal to or greater than that of human beings, then some physicalists say the computer would become conscious,” I said. “I assume you would disagree with that.”
Moreland chuckled. “One atheist said that when computers reach the point of imitating human behavior, only a racist would deny them full human rights. But of course that’s absurd. Nobel-winner John Eccles said he’s ‘appalled by the naiveté’ of those who foresee computer sentience. He said there’s ‘no evidence whatsoever for the statement made that, at an adequate level of complexity, computers also would achieve self-consciousness.’
“Look, we have to remember that computers have artificial intelligence, not intelligence. And there’s a huge difference. There’s no ‘what it’s like to be a computer.’ A computer has no ‘insides,’ no awareness, no first-person point of view, no insights into problems. A computer doesn’t think, ‘You know what? I now see what this multiplication problem is really like.’ A computer can engage in behavior if it’s wired properly, but you’ve got to remember that consciousness is not the same as behavior. Consciousness is being alive; it’s what causes behavior in really conscious beings. But what causes behavior in a computer is electric circuitry.
“Let me illustrate my point. Suppose we had a computerized bat that we knew absolutely everything about from a physical point of view. We would have exhaustive knowledge of all its circuitry so that we could predict everything this bat would do when it was released into the environment.

“Contrast that with a real bat. Suppose we knew everything about the organs inside the bat—its blood system, nervous system, brain, heart, lungs. And suppose that we could predict everything this bat would do when released into the environment. There would still be one thing that we would have no idea about: what it’s like to be a bat. What it’s like to hear, to feel, to experience sound and color. That stuff involves the ‘insides’ of the bat, its point of view. That’s the difference between a conscious, sentient bat and a computerized bat.

“So in general, computers might be able to imitate intelligence, but they won’t ever have consciousness. We can’t confuse behavior with what it’s like to be alive, awake, and sentient. A future superintelligent computer might be programmed to say it’s conscious or even behave as if it were conscious, but it can never truly become conscious, because consciousness is an immaterial entity apart from the brain.”
Moreland’s choice of a bat for his illustration was an oblique reference to New York University philosopher Thomas Nagel’s famous 1974 essay “What Is It Like to Be a Bat?” Thinking about life from a bat’s perspective prompted me to briefly pursue another line of inquiry on a tangential topic.

“What about animals—do they have souls or consciousness?” I asked.

“Absolutely,” came his quick answer. “In several places the Bible uses the word ‘soul’ or ‘spirit’ when discussing animals. Animals are not simply machines. They have consciousness and points of view. But the animal soul is much simpler than the human soul. For example, the human soul is capable of free moral action, but I think the animal soul is determined. And while we have beliefs about our beliefs, animals don’t.

“You see, the human soul is vastly more complicated because it’s made in the image of God. So we have self-reflection and self-thinking. And while the human soul survives the death of its body, I don’t think the animal soul outlives its body. I could be wrong, but I think the animal soul ceases to exist at death.”

Bad news, it seems, for the bat.
[The Case For A Creator - Lee Strobel ]