Taking Back Control Of The Singularity

David Wood’s new book, “The Singularity Principles”, is published at an opportune moment. A growing number of well-informed people are saying that the technological singularity – the arrival of super-intelligent machines – now appears to be much nearer than they used to think. If it is, then the job of making sure the outcome is positive for humans becomes urgent. Wood and I discussed his book in the latest episode of The London Futurist Podcast.

Rapture for Nerds

The task of ensuring that superintelligence is safe for humanity is hindered by the fact that many people do not take it seriously: the mere mention of the singularity raises hackles in some circles. Wood wants to calm these hackles. Riffing ironically on Brexit, he says he wants to “take back control of the singularity”.

He has coined the term “singularity shadow” to denote these hackles, which he thinks are largely caused by over-confident predictions about timescales and outcomes. Ray Kurzweil’s seminal “The Age of Spiritual Machines” and its 2005 sequel, “The Singularity is Near”, are brilliant books, but they are also prime examples of this dogmatic over-confidence.

As a result of this dogmatism, the Singularity has often been derided as Rapture for Nerds, a reference to the fundamentalist Christian idea that Jesus will return to Earth and his believers will rise into the air to join him. In fact, the Singularity is not a religion, although it is true that if a positive version of it happens, then humans will acquire powers that would appear godlike to us today.

Upside potential

Wood asserts that nothing about the singularity is pre-ordained. Much discussion of it today is dystopian, but there could be enormous benefits, including accelerated drug discovery and understanding of biology, leading eventually to extreme longevity. There could be a new enlightenment, and nuclear fusion could finally become a staple form of energy. It is hard to describe these and similar ideas without sounding starry-eyed, but humanity’s superpower is intelligence, so if we could amplify our intelligence many times over, we could achieve things which currently seem impossible. We could eliminate hunger and poverty, and accelerate the process of scientific discovery enormously.

Wood estimates the likelihood of the arrival of artificial general intelligence (AGI, a machine with all the cognitive abilities of an adult human) at 50% by 2050, and 10% by 2030. But he points out that even if AGI is not coming until 2070, we should already be working urgently on AI alignment – the project of ensuring that superintelligence benefits humanity rather than harming us. His timeline is similar to the median generated by a recent survey of 780 AI researchers, which was 50% by 2059.

AI alignment

On average, the researchers in that survey were moderately optimistic that the outcome would be positive, but Wood argues that the task of ensuring this cannot be left to AI professionals. After all, if someone releases a beta version of AGI which fails, there might never be a second, debugged version. Worldwide, there are currently a few hundred people working full-time on the problem of AI alignment. This represents a significant increase over the last couple of years, but it is dwarfed by the number of people working on advanced AI, and that number is growing even faster.

There are arguably three ways to bring about a positive outcome. The first option is to rely on luck: to trust that the default behaviour of a superintelligence is to be helpful toward the species which created it, or at least that this is what will happen in our case.

Control and Motivation

The second option is to solve the control problem and/or the motivation problem. This means working out how to retain control over an entity which is much, much smarter than we are, and getting smarter all the time; or designing its initial state to be beneficial in a way that will never change; or both. Very clever people are working on mathematical approaches to these problems at existential risk organisations like MIRI, the Machine Intelligence Research Institute in California.

One of the many problems with this strategy is that we probably have to agree on what we mean by a positive outcome, and this involves settling disputes within moral philosophy that have been raging since the ancient Greeks. What looks like a positive outcome to a utilitarian may be a very negative one to a Christian, a Buddhist, or a white supremacist. Wood argues that there have been important achievements in moral philosophy since Aristotle, but even if this is true, it is hard to believe the job could be completed within a few decades.

Merging with machines

The third option to avoid being superseded (or worse) by superintelligence is to merge with the machines. This means uploading human brains onto a silicon substrate, a feat we are currently very far from being capable of. We might well require superintelligent assistance to achieve it, but then we would be relying on the first superintelligence being beneficent, so we would be depending on option one after all. It also raises philosophical questions about whether uploading preserves the subject, or simply kills the subject and replaces them with an entirely new person.

But if it is possible, this third option of merging with the machines has the important additional feature that it could save humanity from the existential despair that we might otherwise succumb to as we realise that we have become the second-smartest species on the planet, and that our future is entirely dependent on the smartest one. This is the situation that chimpanzees are in today, but they have the important advantage that they are entirely unaware of it. Not everyone would want to take advantage of uploading, and they should not be coerced. But it is likely that within a fairly short time, most people would.

These vital questions are what Wood wrestles with in his new book, and they are also the subject of my book, “Surviving AI”.
