Last fall, billionaire Elon Musk declared that artificial intelligence (AI) is humanity's "biggest existential threat," akin to "summoning the demon." Stephen Hawking and Bill Gates have voiced similar concerns. So why are these tech leaders afraid of AI?
There will come a moment when humanity's technological advances converge into a singularity, the point at which AI exceeds human capacity. Depending on which side you're on, it is either the moment "when humans transcend biology" or the moment humanity becomes irrelevant. Futurists debate when the singularity will arrive; influential futurist Ray Kurzweil picks 2045.
One thing propelling the timeline is Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years, commonly generalized to mean that computing power per dollar grows exponentially. The processing speed estimated to approach human levels is about 36.8 quadrillion computations per second. In June 2013, the Chinese government spent millions to build a supercomputer, Tianhe-2, that clocks in at 33.9 quadrillion computations per second. Extrapolating from Moore's Law, it will take about 10 years before a computer with that much power costs only $1,000.
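The cost projection above rests on simple exponential arithmetic. Here is a minimal sketch of that calculation; the starting price and doubling period below are illustrative assumptions, not figures from the article:

```python
# Project how the cost of a fixed amount of computing power falls
# if price-performance doubles at a steady rate (a Moore's-Law-style
# assumption; the specific numbers here are hypothetical).

def projected_cost(initial_cost, years, doubling_period_years):
    """Cost of the same computing power after `years`, assuming
    price-performance doubles every `doubling_period_years`."""
    return initial_cost / 2 ** (years / doubling_period_years)

# Example: a machine costing $100 million today, with price-performance
# doubling every 2 years, after 10 years (5 doublings):
print(projected_cost(100e6, 10, 2))  # -> 3125000.0, about $3.1 million
```

How fast the price falls depends entirely on the assumed doubling period; Kurzweil's more aggressive timelines rest on his argument that the rate of improvement itself accelerates over time.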
Even though human-level processing power is within reach, raw speed alone does not come close to replicating human thought. The trick is in the software, not the hardware. How do we write software for a digital brain if we don't even know how a real one works?
In 2013, the European Union funded an ambitious 10-year research program called the Human Brain Project, which aims to understand the human brain in all its aspects. The program is as ambitious as the Human Genome Project of the 1990s, which took researchers 13 years of concerted effort to map the human genome.
It is only a matter of time before researchers decode the human brain and use that knowledge to improve AI. Once human-level AI is unleashed, it may take only a short leap before such a system exceeds human capacity and becomes an artificial superintelligence (ASI), godlike in its wisdom. It might take only another decade or so beyond that before ASI has the computing power of the entire human race. I believe this moment is inevitable.
Why is this important to think about? Lesser versions of AI already infiltrate every aspect of our lives. More and more of us carry mobile devices and share everything in the cloud. As the Internet of Things grows, AI will have an ever-expanding bank of data points until it knows just about everything. Have you noticed, for example, how ads on websites have evolved to know your interests? Humans can no longer function without AI in today's society. In fact, AI is already rendering many human jobs irrelevant.
The big question for most people is whether ASI will be friendly or unfriendly to humankind. Will ASI become like an immortal god, with humans as its playthings? Or will it regard us as a nuisance and seek to eliminate us, as in those apocalyptic movies? What if ASI is simply indifferent to us?
There is also a possibility that ASI could make our best dreams come true. It may even dream up things way more awesome than we can imagine. Regardless, there are serious repercussions to consider as technology advances toward the singularity.
If you want to have a more thorough understanding of AI, I would recommend this excellent post by Tim Urban.