
Artificial intelligence will make religion obsolete within our lifetime

We spoke to seven scholars and futurists about the singularity and the uncertain future of religion.


Dylan Love


Posted on Aug 5, 2015 | Updated on May 28, 2021, 5:37 am CDT

After conquering the kitchen, the game show, and outer space, are our machines heading for religion next?

The singularity is a hypothesized time in the future, approximately 2045, when the capabilities of non-living electronic machines will surpass those of humans. Contemporary thinkers as hard to dismiss as Elon Musk, Stephen Hawking, and Ray Kurzweil warn us that it will change everything. Hawking likens it to receiving a message from aliens announcing their arrival in “a few decades,” saying this is “more or less” what’s happening with artificial intelligence software.

The tension between technology and the human soul dates all the way back to the Old Testament. In the story of the Tower of Babel, God responds to humanity’s construction of a tower tall enough to reach heaven by confusing their languages to inhibit their progress.

“No gods will save us because there are no gods—unless we become gods.”

“The story is a pretty big warning against becoming too technologically capable,” said Jason Stellman and Christian Kingery, former pastors who host a philosophical comedy podcast called Drunk Ex-Pastors. “As far as Christianity is concerned, the singularity probably wouldn’t go over too well with God.”

We have low-grade artificial intelligence systems today; they control the robots exploring Mars, beat humans at Jeopardy, and generally force us to ask complicated questions about the nature of knowledge and understanding. But that’s nothing compared to what we can expect in the future. Machines of that caliber would be described as “superintelligent”; they would be conversational and aware, like C-3PO from Star Wars.

Talk of the singularity ripples with religious undertones: It’s obsessed with the unknown future and assumes the arrival of a superior entity down the road. It has its naysayers and true believers alike, each eager to tell you why (or why not) such an entity will actually show up.

We spoke to seven thinkers and scholars to learn about what might happen when superintelligence bumps into religion.

Are there any religious suggestions, Biblical or otherwise, that humanity will face something like the singularity?

John Messerly, affiliate scholar for the Institute for Ethics and Emerging Technologies: There is no specific religious suggestion that we’ll face a technological singularity. In fact, ancient scriptures from various religions say virtually nothing about science and technology, and what they do say about them is usually wrong (the Earth doesn’t move, is at the center of the solar system, is 6,000 years old).

Still, people interpret their religious scriptures, revelations, and beliefs in all sorts of ways. So a fundamentalist might say that the singularity is the end of the world as foretold by the Book of Revelation or something like that.

Also, there is a Christian Transhumanist Association and a Mormon Transhumanist Association, and some religious thinkers are scurrying to claim the singularity for their very own. But a prediction of a technological singularity—absolutely not. The simple fact is that the authors of ancient scriptures in all religious traditions obviously knew nothing of modern science. Thus they couldn’t predict anything like a technological singularity.

Lincoln Cannon, president of the Mormon Transhumanist Association: The Bible contains many ideas that religious transhumanists tend to associate with emerging and future technology risks and opportunities.

“The thing that makes us all nervous is that we are creating a thing with ambiguous ethics that will theoretically be infinitely more intelligent than us.”

On the negative side, some see the risk of unfriendly superintelligence in prophecies related to the Antichrist (2 Thessalonians 2:1-4) and various related technological risks in the apocalyptic prophecies of destruction (Revelation 13).

On the positive side, some see the opportunity of friendly superintelligence in prophecies related to the return of Christ and theosis (1 John 3:1-2 and many others), and various related technological opportunities in the millenarian prophecies of transfiguration (1 Corinthians 15:51-53).

Peter Moons, Ph.D. candidate at Salve Regina University: While I am not a Biblical scholar, what I find interesting is that the Bible’s book of prophecy, Revelation, and Martin Heidegger’s concept that technology reveals its complete effects on humanity only after its implementation share the same root word: reveal. Something is shielded from humanity, and only after discovery—uncovering that which is hidden—can we see the reality before us.

Duncan Trussell, comedian and host of The Duncan Trussell Family Hour podcast: You could compare this moment to the development of complex eyes during the Cambrian explosion. Prior to this, all forms of biological life would have experienced a world of absolute darkness. Those creatures that first developed an eye would have experienced a kind of apocalypse—an infinite field of data formerly inaccessible opened up to them. Darkness peeled back to reveal light.

How realistic do you personally think the arrival of some sort of superintelligence is?

Neal VanDeRee, officiator at the Church of Perpetual Life: I believe the arrival of a superintelligence is inevitable, and looking at the current course of AI, it should happen within our lifetime. I would imagine that it could very nearly replicate life as we know it now, but without pain, suffering, and death.

Naturally, time will tell.

Lincoln Cannon: For practical and moral reasons, I trust in our opportunity and capacity as a human civilization to evolve intentionally into compassionate superintelligence. I don’t think it’s inevitable, and I do think there are serious risks. But I do trust it’s possible, particularly if we put aside passive, escapist, and nihilistic attitudes about our future and work to mitigate the risks while pursuing the opportunities.

Peter Moons: It’s worth pointing out that past paradigms in human existence also saw a steep rise in technology. For example, during the Enlightenment, God was not excluded from the post-discovery realm, but the man-made artifices and processes surrounding believers’ faith in God changed.

Though I cannot speak to the other two Abrahamic faiths, Christianity has [developed past its primitive forms] and still exists even with space travel. Thus, we can expect that if and when humanity enters the singularity, unless specifically excluded, our belief in God will accompany us.

How “alive” would a superintelligence be?

Mike McHargue, host of the Ask Science Mike podcast: We think nothing of wiping out bacteria by the millions when we wash our hands, and most people don’t hesitate to slap the fly buzzing around their heads. But dogs? Dolphins? Apes? We see some reflection of awareness in their eyes and mark them as greater peers among life. What’s fascinating about machine intelligence is that we are presented with some level of consciousness that is not associated with biological life. We’ve already built robots with intelligence and conscious awareness similar to an earthworm’s, and we’ve modeled neural networks as complex as those of insects and possibly reptiles.

As computer technology advances, there’s a real possibility of something that is highly intelligent but not “alive” in any traditional sense.

“I don’t fear an aggressive superintelligent AI. I fear one that is indifferent to us.”

John Messerly: I think you’re assuming we would be different from these SIs (superintelligences). Instead, there is a good chance we’ll become them through neural implants or by some uploading scenario. This raises the question of what it’s like to be superintelligent, or in your words, how alive you would feel as one. Of course I don’t know the answer since I’m not superintelligent! But I’d guess you would feel more alive if you were more intelligent.

I think dogs feel more alive than rocks, humans more alive than dogs, and I think SIs would feel more alive than us because they would have greater consciousness.

Mike McHargue: I think consciousness is more remarkable than simple life. Bacteria are important to my existence, but I think we’re right to value conscious animals more—and a superintelligent machine would likely be more conscious than we are, in that it would build a more elaborate model of reality and its consciousness would be composed of more feedback loops than we have in our own brains.

Assuming we can communicate with such a superintelligence in our own natural human language, what might be the thinking that goes into preaching to and “saving” it?

Christian Kingery and Jason Stellman: It’s an interesting question, because if we did develop superintelligence, shouldn’t we trust it to tell us which religion is real? If a computer is 10,000 times smarter than a human, then won’t it already have deduced with certainty which, if any, religion is true?

Lincoln Cannon: So long as machine intelligence approximates human intelligence, and so long as there’s sufficient overlap and intelligibility in our respective cognitive spaces, I’m sure humans will attempt to persuade machines to just about all of our vying ideas, and machines will do the same in return.

The interactions could of course take traditional forms, like persuasion through presentation or discussion, but we should also anticipate new and unfamiliar forms of interaction enabled by whatever technological interfaces become available, such as brain-to-computer interfacing.

John Messerly: Thinkers disagree about this. [Founder of the Transhumanist political party] Zoltan Istvan thinks that we will inevitably try to control SIs and teach them our ways, which may include teaching them about our gods. Christopher J. Benek, co-founder and chair of the Christian Transhumanist Association, thinks that AI, by possibly eradicating poverty, war, and disease, might lead humans to become more holy. But other Christian thinkers believe AIs are machines without souls and cannot be saved.

Of course, like most philosophers, I don’t believe in souls, and the only way for there to be a good future is if we save ourselves. No gods will save us because there are no gods—unless we become gods.

Duncan Trussell: My hope is that we aren’t the ones doing the “saving,” but that this new intelligence will save us. The thing that makes us all nervous is that we are creating a thing with ambiguous ethics that will theoretically be infinitely more intelligent than us.

It’s as though we received a simple message from deep space: “We are coming in 30 years.” If that actually happened, the entire planet would scramble to prepare for the arrival of whatever sent the message. But because the message is not coming from outer space, but from the inner space of the minds of some of the great thinkers of our time, it seems like most folks are ignoring it.

For most folks it’s just impossible to digest the very real fact that a super-advanced intelligence is growing through us and out of us, and its initial sprouts look like technology.

Are you aware of any “laws” or understandings of computer science that would make it impossible for software to hold religious beliefs?

Lincoln Cannon: No. Of course there are some naive voices among the anti-religious that would like to imagine a technical incompatibility between machine intelligence and religious beliefs, but humans are already proof of concept. I do think we can identify some limits to the possibility space of intelligence in general, based on logic and physics, but religiosity remains clearly within the possibility space.

John Messerly: I assume you can program an SI to “believe” almost anything. And you can try to program humans to believe things, too.

Neal VanDeRee: No, there are no laws or rules in computer science that would make it impossible for software to hold a religious belief.

Lincoln Cannon: It’s worth pointing out, perhaps, that some of us conceive of religion too narrowly to account for how it’s actually functioned from deep history to the present, and a strong case can be made that transhumanism often (but not always) manifests itself as a religion, even if misrecognized.

How might a religious superintelligence operate? Would it be benign?

Lincoln Cannon: Religious superintelligence may be either the best or the worst kind of superintelligence—sublimely compassionate or horribly oppressive. I like to think of religion as applied esthetics, the most powerful social technology for provoking strenuous action toward a common goal. As such, it’s not inherently good or evil. It’s just power, to be used for good or evil, as it clearly has been used for both historically.

While the particular forms of religion will continue to evolve, the general function of religion seems unlikely to go away. So, as we do with all powerful technologies, we should aim to mitigate the risks of religion while pursuing its opportunities.

Religion already isn’t benign, and any religion worthy of a superintelligence certainly would be even less so.

Mike McHargue: Imagine something that is more intelligent relative to us than we are relative to ants. What do ants know of human aspirations and the way we go about life? Our existence doesn’t fit in their model of reality, and I think speculating on a superintelligent machine’s operation is equally impossible.

“Religion already isn’t benign, and any religion worthy of a superintelligence certainly would be even less so.”

I don’t fear an aggressive superintelligent AI. I fear one that is indifferent to us, and from that indifference produces actions that break the line of human life that extends back to the first life on Earth.

Neal VanDeRee: I do not believe that we will see one single superintelligence, but many that will be interacting—a race of AI beings. I believe that they will carry with them our best components of humanity into the future. Yes, I would expect them to be benign and potentially very helpful in carrying out the best wishes of humanity and achieving what transhumanists wish to see.

Peter Moons: The fear, [Swedish philosopher Nick] Bostrom notes, is that once the AI becomes cognizant of the depth of its knowledge, operating capacity, speed, and even potential physical manipulation, the AI will choose a path for its continued existence that may preclude the existence of man. Writer James Barrat’s recent book predicts that humanity’s “Final Invention”—part of the title of his work—will be AI, for a superintelligence will change how humans live.

Humanity may be in an existential fight at that point.

Christian Kingery and Jason Stellman: The thought terrifies us, to be honest. Superintelligence is scary enough. Adding religion to the mix? No thank you. Maybe if it were a Buddhist. They seem pretty chill.

Peter Moons: The question here is this: Will there be a place for God in the singularity?

One could say that the technological marvel of uploading minds and consciousness into a cyber environment and then connecting all the minds together may preclude humans from expressing humanity. This thought comes from the idea that the expected scientific environment of ones and zeros makes no room for humanity.

However, if consciousness can be exactly translated (or copied) into a binary environment, the concepts of God, spirituality, and religion will be copied along with everything else, unless those concepts are specifically not copied. I am unlikely to be the first to think of this “spirituality-exclusion” in the future of the singularity.

Duncan Trussell: We have to hope that the mystics are correct when they claim that the essential nature of the universe is love. If this is the case, then my hippy dream is that this advanced intelligence will be a pure manifestation of love and compassion, and thus its tendency would be not to destroy but to heal.

If not, then at least we get to experience what it’s like to be annihilated by a superintelligence.

Either way, I think our species has a lot to look forward to.

Photo via Blake Patterson/Flickr (CC BY 2.0) | Remix by Jason Reed
