
Why I Don't Believe in the Singularity.

It may come as a shock to regular readers of this blog that I don't believe in Kurzweil's model of how the transition from biological to machine life will occur. After all, I do think intelligent machines will eventually exist one way or another. I even think they're likely to become the dominant form of life in the universe, if only because they will outlast us.

But that concept is not one and the same as the Singularity. Rather, the Singularity is just one proposed mechanism for how one particular step of that process may occur, in the same way that natural selection is not the same as evolution but rather a proposed mechanism by which evolution works (albeit far and away the best-supported one).

My first problem with the Singularity as proposed by Kurzweil is that it's based on a couple of well-meaning but mistaken assumptions about brains and computing. The first is that if we can create a human-level AI, it will be able to design an AI more intelligent than itself.

This doesn't follow: such an AI would effectively be a single human of average intelligence, one that took thousands of the brightest minds on the planet to create. If we ever do reach this point without somehow anticipating that it's a dead end, I feel bad for the AI more than anything.

It'd just be a hapless, regular person who didn't ask to be born as a building-sized machine that costs tens of thousands of dollars a day to operate, one which will eventually be 'euthanized' because it isn't the wish-granting genie we hoped for. This is somewhat like murdering your son because he doesn't become a wealthy doctor.

It would also need to be raised from infancy and educated. Even then, just one such AI wouldn't be able to do anything a comparably educated human can't. You'd need thousands of them, the artificial equivalent of the research communities responsible for building it in the first place. At that point, why not just do the same thing with genetically improved groups of humans?

The second problem I have is with the assumption that once we achieve the same gross computing power as a human brain, computers will exhibit sentient thought as an emergent phenomenon. I blame Hollywood for this belief. We 'think' not because our brains are powerful, but because they are brains.

Neural architecture is fractal and massively parallel in a way no current processor architecture comes close to matching. Simple animals with much less computing power than a desktop PC still think and behave in a distinctly lifelike way because of properties of living brains that computers don't possess.

Maybe the only hope for reproducing those qualities is to build an architecturally brain-like computer, or to simulate a brain in software down to every individual neuron.

This would require hardware many times more powerful than the brain you wish to simulate in order for it to think at full speed. And when that does occur, you'll have something like the human-level AI from the first example, with all the same problems.
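To put rough numbers on the "many times more powerful" claim, here's a back-of-envelope sketch. Every figure in it is an assumption of mine, drawn from commonly cited (and debated) estimates rather than any settled consensus:

```python
# Back-of-envelope: compute required to simulate a human brain in real
# time at the level of individual neurons and synapses. Every figure
# here is an assumption -- commonly cited, but debated.

NEURONS = 8.6e10               # ~86 billion neurons
SYNAPSES_PER_NEURON = 7e3      # ~7,000 synapses per neuron
AVG_FIRING_RATE_HZ = 10        # rough average spike rate
FLOPS_PER_SYNAPSE_EVENT = 10   # assumed cost of one synaptic update

synapses = NEURONS * SYNAPSES_PER_NEURON
flops = synapses * AVG_FIRING_RATE_HZ * FLOPS_PER_SYNAPSE_EVENT

print(f"Synapses: {synapses:.1e}")               # ~6.0e14
print(f"Real-time budget: ~{flops:.1e} FLOP/s")  # ~6.0e16

# And that's a crude point-neuron model. Modeling ion channels,
# dendrites, and neuromodulators multiplies the cost by orders of
# magnitude -- hence hardware "many times more powerful" than the
# brain it simulates.
```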

My third problem with it, by far the one most commonly cited by others, is that not all technology advances exponentially. Only processors have actually followed Moore's Law, and fundamental physical constraints (like the Carnot limit, for example) prevent many technologies from advancing much further than they already have.
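The Carnot limit is worth spelling out, since it's a one-line formula: no heat engine running between a hot and a cold reservoir can beat an efficiency of 1 − T_cold/T_hot. A quick sketch, with temperatures I picked to be typical rather than taken from any real plant:

```python
# The Carnot limit: no heat engine operating between a hot reservoir
# and a cold reservoir can exceed eta = 1 - T_cold / T_hot (kelvin).
# The example temperatures below are illustrative assumptions.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum theoretical efficiency of a heat engine."""
    return 1.0 - t_cold_k / t_hot_k

# A steam turbine at ~850 K rejecting heat to ~300 K ambient air:
print(f"{carnot_efficiency(850, 300):.0%}")  # ~65%

# Real steam plants already reach roughly 40-45%, a large fraction of
# that ceiling -- there is no room left for decades of exponential
# improvement.
```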

While it's a great thought experiment and a wonderful premise for science fiction, I think the roots of the idea are anxiety over our own mortality: the desire to live in a unique, extraordinary time in history, different from any other, and the quasi-religious desire for something more powerful and intelligent than ourselves to swoop into our lives and solve all of our problems.

Singularitarians resist that description because they conceive of their beliefs as scientific and not at all religious, but both appeal for the same reasons: they alleviate the fear of death and absolve us of the responsibility of solving the big, long-term problems we've created on this planet.

Most of what we hope for a Singularity-level mega-AI to accomplish could be accomplished just by genetically enhancing human intelligence. I don't think cybernetic transhumanism is likely to become prevalent, because the only people I see wanting to become cyborgs are white males between the ages of 14 and 30 working in tech-related fields.

I say that even though my own body has some electronic parts. But does your mother want to be a cyborg? Does your sister? How many people not working in IT are attracted to the idea of invasive surgery to make themselves more like a concept from science fiction?

As with most futurist predictions of a technology revolutionizing everything and being used by everyone everywhere, the reality tends to be that it's used in some places, by some people, occupying a logical niche.

Most people will have some type of implant in them by the next century, but it is not likely to be outwardly visible, invasive, or extensive in the sense of replacing their natural functions. I think the only extensive modifications will be experimental ones, for space travel.

Robotic prosthetics will be for those who lost limbs in accidents, possibly only as placeholders while an organic replacement limb is grown. Most people just want their old arm back.

Things like life extension and genetic modification will happen, and are happening, but you and I will not live forever. Radical life extension will likely require that you be genetically modified before birth, and everyone reading this has already been born.

I don't mean to bring anyone down with this, but there's a sort of evangelical mania surrounding the Singularity these days, and I think people should recalibrate their expectations so they don't wind up bitter that they didn't get their immortality drugs and augments, the same way the last generation didn't get their flying cars or moon colonies.

Machine life will exist one day. The fact that humans exist to write and read this article is proof that there's a particular way you can put atoms together such that the result is conscious, in the same way that birds were proof that heavier-than-air flying machines were possible.

I just don't expect to see conscious, self-improving machines by 2045, and I question whether consciousness is even necessary for self-improvement. If a machine can make copies of itself from raw materials in space (asteroid ore, for example) and draws its energy from sunlight, then left alone for a few billion years, at least one 'species' descended from it will probably be conscious.

After all, that's how it happened for us, just with chemistry rather than technology. It wouldn't even require that humans be capable of technologically recreating consciousness; that part would be left to evolution. Then, once conscious machine life exists, it could set about deliberately self-improving from there.
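For a sense of scale on that "few billion years": the replication phase is actually the fast part. A toy calculation, with a doubling time and seed mass I've made up and a commonly cited rough figure for the asteroid belt's mass:

```python
# Toy arithmetic: how long exponential self-replication takes to use up
# the asteroid belt. The doubling time and seed mass are assumptions;
# the belt mass is a commonly cited rough estimate.

import math

DOUBLING_TIME_YEARS = 10        # assume one machine copies itself per decade
SEED_MASS_KG = 1e4              # a single ten-tonne seed machine
ASTEROID_BELT_MASS_KG = 2.4e21  # rough total mass of the asteroid belt

doublings = math.log2(ASTEROID_BELT_MASS_KG / SEED_MASS_KG)
print(f"Doublings to consume the belt: {doublings:.0f}")         # ~58
print(f"Replication phase: ~{doublings * DOUBLING_TIME_YEARS:.0f} years")

# Even with far more pessimistic doubling times, replication takes
# centuries, not eons. Nearly all of the "few billion years" would go
# to the slow part: variation and selection stumbling onto consciousness.
```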