Posts Tagged ‘transhumanism’

Dumb objections to mind uploading

February 12, 2010

As the old adage goes, only two things are certain in life: death and taxes. But I believe in cheating the former (and the latter too, if I can find a clever enough lawyer). Not through Chinese medicine, Yogic meditation, an afterlife in heaven, or any other mystical nonsense; I intend to use science. It won't be easy, as death is pretty well programmed into us, and the world is full of microorganisms and malicious beasties just waiting to find a home in, or extract energy from, my corpse. I think we're making progress on that front, since we've identified most of what makes us age and medical science is advancing at a pretty good clip. But even if Aubrey de Grey and SENS accomplish all of their goals, I could still get hit by a bus, and no amount of clever medical science can protect me from that. Having accidental death as my only risk would be pretty good, but I think we can do better.

Whole Brain Emulation (WBE), sometimes called uploading (although it's possible uploading will require emulating a much smaller chunk of the brain), is the one and only solution. For those unfamiliar with the concept, WBE would involve scanning the brain at a sufficiently high resolution and running a working copy of it inside a computer of some sort. Assuming the brain runs at a speed of about 10^16 operations per second, which is our best conservative guess right now but is still just an estimate, we'll have computers fast enough to run one sometime in the next 20 years or so. Even if it takes 30 more years on top of that to figure out how to model the brain well, WBE should be viable well within my expected lifespan.
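To make that timeline concrete, here is a back-of-the-envelope sketch. The 10^16 ops/sec figure is the estimate cited above; the starting hardware speed and the doubling time are my own illustrative assumptions, not established facts:

```python
import math

# Rough projection of when hardware could reach the assumed 10^16 ops/sec
# needed for whole brain emulation. The starting speed and doubling time
# below are illustrative assumptions, not measurements.

BRAIN_OPS_PER_SEC = 1e16      # conservative estimate cited in the post
START_YEAR = 2010
START_OPS_PER_SEC = 1e12      # hypothetical starting point (~1 teraflop)
DOUBLING_TIME_YEARS = 2.0     # Moore's-Law-style doubling assumption

def years_until_parity(target=BRAIN_OPS_PER_SEC,
                       current=START_OPS_PER_SEC,
                       doubling=DOUBLING_TIME_YEARS):
    """Years of exponential growth needed to reach `target` ops/sec."""
    doublings_needed = math.log2(target / current)
    return doublings_needed * doubling

if __name__ == "__main__":
    years = years_until_parity()
    print(f"~{years:.0f} years, i.e. around {START_YEAR + round(years)}")
```

Under these made-up starting assumptions the answer lands in the 20-to-30-year range, which is roughly the ballpark above; change the starting speed or doubling time and the date shifts accordingly.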

All of this involves some uncertainty, of course; the brain might be more complicated than we think. Roger Penrose thinks it runs on quantum mechanics, which doesn't hold up when you really think about it, for various reasons including the fact that the brain isn't good at the sorts of things a quantum computer would be; but there are other technical challenges to WBE. Still, based on our best science, I think it's safe to give at least 50 percent odds that I'll be able to be around in 200 years' time if I want to be. Or perhaps I should say: I, or someone/something that remembers being me, will be around then. The implications of this are staggering, and will require a whole separate post just to begin exploring. But leaving aside the technical concerns, which I'll leave to the neuroscientists and computer scientists, people have a lot of philosophical problems with mind uploading. This is more the purview of armchair philosophers such as myself. While most of these arguments are just flat-out wrong, a few give me pause and seem to require deeper consideration. I'm going to spend the rest of this post going over the less interesting objections, and then devote separate posts to the ones I find more compelling.

As I see it, most philosophical objections to the possibility of uploading fall into three broad categories. The first of these is the “brain runs on magic” argument. Essentially, it says that even a perfectly accurate simulation of every atom/quark/string (pick your smallest level) in the brain or body, running in real time, would not be human. This often (though not always) comes along with belief in an immortal soul, and when it really comes down to it, it is the belief that the brain operates outside the laws of nature, i.e. that it runs on magic. If this is what you think, I can't really argue with you except to say that you're wrong. It's an a priori belief that cannot be touched by evidence or logic. You'll probably continue to hold it even after we develop WBE, insisting that there's something “wrong” with the people living in the computer, even though you can discern no difference in any interaction with them. Fortunately, if you're this kind of person, I don't need to convince you, just to outlive you.

Some slightly more sensible arguments fall into the category of “the brain is not/is not like a computer”. These arguments are also wrong, but they at least allow us to have a real debate. Partly, it depends on your definition of computer. Look in your dictionary, and chances are you'll find a definition along the lines of “a machine (i.e. physical system) that stores and processes information”. By this definition (unless you're in the above “the brain is magic” camp), the brain must be some sort of computer. Still, critics of the brain/computer metaphor are right up to a point; the brain may be a computer, but it's nothing like the computer you're reading this on. Although in the most abstract sense both take in input and produce output, they do it in very different ways. Digital computers are very good at performing serial operations on abstract symbols very rapidly, and brains are not. Digital computers also store memory in clean, separate compartments that are easy to access if their location is known. Conversely, brains perform very slow but massively parallel operations on highly contextual information. The memory in brains is error-prone, but it is also stored in a bafflingly decentralized way, and can be accessed at amazing speeds using context-sensitive “search” terms. So while the brain and the digital computer are more alike than, say, the brain and the cells that make it up, or the computer and the silicon in its transistors, there are important differences.

Many critics stop here, having shown that the brain does not work like today's computers, and say “so now I have proven that AI and mind uploading are impossible, QED”. This is an incredibly narrow-minded and unimaginative conclusion. I read a comment a while back (the source of which I've since lost) by Mark Gubrud that succinctly responds to this argument:

It is obvious that the brain is neither a Turing machine nor any type of digital computer like the one I’m typing this reply on. What is not obvious is that a digital computer can’t do an effective simulation of a brain. (Is a jet engine a Turing machine? Can a computer simulate a jet engine?)

Your argument seems to rest ultimately on some unstated belief in the supernatural or extra-physical (or perhaps some quantum voodoo). Do you believe the brain is a physical system? Do you believe it behaves according to the laws of physics? If so, it can be simulated by a sufficiently powerful digital computer. Even if it uses nonlocal quantum effects, which is quite unlikely, it could be simulated by a quantum computer. I know you must have heard these arguments before, so why do you ignore them?

Saying that a sufficiently powerful computer can't be intelligent, or can't simulate a brain, because it is not like a brain makes about as much sense as saying an airplane can't fly because it isn't sufficiently like a bird. Even the computers we have today, which will be orders of magnitude weaker than the ones we'll have if Moore's Law holds for even another decade, can already accurately simulate all sorts of things they bear little resemblance to. Unfortunately, as with airplanes, I suspect people will continue to dismiss the possibility of human minds running on a computational substrate until they actually see a working instance of one.
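The point that a serial machine can simulate a parallel one is easy to demonstrate in miniature. Here is a toy sketch, with a made-up neuron model, random weights, and arbitrary sizes chosen purely for illustration; it is not a model of any real brain circuit:

```python
import random

# A serial digital computer simulating a (tiny, toy) parallel system.
# Every number here is an illustrative assumption for the example.

N = 100                      # number of toy "neurons"
random.seed(0)
# Random all-to-all weights; a real connectome would be sparse and structured.
weights = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
state = [random.random() for _ in range(N)]

def step(state):
    """One synchronous update: every neuron integrates all of its inputs.
    Conceptually all N updates happen 'in parallel'; the serial CPU just
    computes them one at a time into a fresh buffer, then swaps it in."""
    new_state = []
    for i in range(N):
        total = sum(w * s for w, s in zip(weights[i], state))
        new_state.append(max(0.0, min(1.0, state[i] * 0.9 + total)))
    return new_state

for _ in range(10):          # ten simulated "time steps"
    state = step(state)
```

The simulation is slow relative to the system it models, which is exactly the trade-off: serial hardware buys generality at the cost of speed, and speed is what Moore's Law keeps delivering.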

Finally, there's an argument that gets into even less firm philosophical territory: the notion that, even accepting all of the above, an accurate upload of you is in some important sense not you. I think this argument is also wrong, but it gets down to the meaning of identity, which is something I'll admit I haven't totally figured out, and it also shades into some of the more compelling objections to uploading that I'm going to devote later posts to. I believe that there is a “hard problem” of consciousness, of what first-person experience actually means and where it comes from, even though I think the concept of P-zombies is total BS, and fairly amusing BS at that. I'll have to discuss this in more detail later, but I think it requires a rethink of just what identity means. We need to start understanding the message in John K Clark's (marginally) famous quote:

“But I am not an object. I am not a noun, I am an adjective. I am the way matter behaves when it is organized in a John K Clark-ish way. At the present time only one chunk of matter in the universe behaves that way; someday that could change.”

Expect more on all this in due time.