Superstition and the singularity

by John MacBeath Watkins

I always figured California would be a place where religions could arise, but I had no idea smart people could come up with one so lame.

I'm talking about the Singularity. Vernor Vinge coined the term, which describes the future advent of a super-intelligent, conscious being as a result of computers getting smarter. Consciousness is supposed to emerge, but Vinge used "singularity" as a metaphor borrowed from black holes, from which no information can escape. His view was that we could not predict the capabilities or motives of such a being.

Which has not kept people from speculating.

Some believe a benevolent super-intelligence will be effectively all-knowing and omnipresent. Some believe they will be taken up into the cloud and given eternal life. And some believe in the devil.

I'm talking here about Roko's Basilisk. From RationalWiki:

Roko's basilisk is a proposition that says an all-powerful artificial intelligence from the future may retroactively punish those who did not assist in bringing about its existence. It resembles a futurist version of Pascal's wager; an argument used to try and suggest people should subscribe to particular singularitarian ideas, or even donate money to them, by weighing up the prospect of punishment versus reward. Furthermore, the proposition says that merely knowing about it incurs the risk of punishment. It is named after the member of the rationalist community LessWrong who most clearly described it (though he did not originate it). Despite widespread incredulity, this entire saga is about things that are actually believed by some groups of people. Though it must be noted that LessWrong itself does not, as a policy, believe in or advocate the basilisk — just in almost all of the premises that add up to it.

One of those premises is that an exact copy of you is you. It would feel what you would feel, suffer as you would suffer, and react as you would react. To a materialistic atheist, it would be no different from you.

I am a bookseller. I have recently seen a first edition of Hemingway's For Whom the Bell Tolls. I have in my store a rather nice facsimile of the same book, the only detectable difference being an entry on the copyright page. If I were to sell the facsimile as a first edition and were found out, it would ruin my reputation -- and if the publisher had not included that entry on the copyright page, the book would be not merely a facsimile, but a counterfeit.

An exact copy of you would be a counterfeit you. In fact, the super-intelligence could make endless copies of you if it were so inclined. Differences in experience would start to occur almost at once, and each copy would become a different person as time went on. If so, which one would be you? All of them? None of them?

The notion that an exact copy of you would be you is atheist theology, based on the idea that you are no more than a physical being. I consider it a claim to know more than can be known, so one might call it a superstition or a religious belief.

And short of creating a new body for you, some of those doing the theology of the singularity speculate that you could upload your mind, which would give you a bodiless existence in the cloud. But would that be you? Again, once the copy of you is in digital form, it can be copied endlessly. None of the copies would be you. They might act like you, or they might not, depending on how badly the copy gets corrupted and how different the urges of an expert system "living" in a machine are from those of a person living in a body.

What you could create would not be you. It would be a sort of software monument to you. Theoretically, a super-intelligent machine could more easily create a software version of your mind than an entirely new you, but in either case, what motivation would it have to build monuments to inferior beings?

The next problem is the assumption that the singularity will evolve differently than machines have to date. Up to now, machines have evolved the way ideas evolve, being designed and built by humans. If a machine started designing better versions of itself, its motivations would have to be those designed into it. Yes, you could even program it to be motivated to build software monuments to internet billionaires, but that seems like a vainglorious use of a powerful machine. At the point where we have "conscious" machines, they will be designed to simulate consciousness, which will be a signal to start an endless controversy about what consciousness is.

But part of the theology of the singularity is that consciousness is an emergent property, which will appear when the conditions are right, such as sufficient intelligence, sense data and memory. I see no reason to assume that this is the case, and I posit that any conscious machine that we create will be designed to be conscious, with its motivations in its software.

Which brings us back to Roko's Basilisk. It can only be created if we create it, and do so in a way intended to harm ourselves. I wish I could be certain that fearful, superstitious people would not do that.


