Our Final Invention: How the Human Race Goes and Gets Itself Killed
Greg Scoblete
INTRODUCTION
Ron: Greg Scoblete says that in his book, Our Final Invention, James Barrat reckons that once Artificial Intelligence (AI) developers achieve "artificial general intelligence" (AGI), the AGI will go on to develop itself into something called artificial superintelligence (ASI) -- that is, an intelligence that exceeds -- vastly exceeds -- human-level intelligence. Barrat argues that the time it will take for ASI to surpass human-level intelligence, rendering us ant-like in comparison, could be a matter of days, if not mere hours, after it is created. Worse (it keeps getting worse), human researchers may not even know they have created this potent ASI until it is too late to attempt to contain it. An ASI birthed in a supercomputer may choose, Barrat writes, to hide itself and its capabilities lest the human masters it knows so much about attempt to shut it down.
If you think that Talmudists are psychopaths totally lacking in empathy and love, try to imagine what a world controlled by a supercomputer MACHINE MIND will be like. At best, humanity can expect that ASI's extermination of humanity would be a mere byproduct of its existence. Terminator-esque malevolence would not be needed. As Greg Scoblete points out: 'Computers, like humans, need energy and in a competition for resources, ASI would no more seek to preserve our access to vital resources than we worry about where an ant's next meal will come from.'
Arguably, humanity needs divine intervention not only to stave off enslavement by Talmudists but also to prevent the Talmudic invention of ASI from destroying humanity, including its Talmudic inventors. --Rod Remelin
NOTE: Please understand that we already have Divine Intervention. Creator God Aton of Light has said that man would never be allowed to genocide humanity. Man does not create life, nor does he have the option to universally destroy it. ---PHB
***********
Our Final Invention: How the Human Race Goes and Gets Itself Killed
We worry about robots.
Hardly a day goes by when we're not reminded of how robots are taking our jobs and hollowing out the middle class. The worry is so acute that economists are busy devising new social contracts to cope with a potentially enormous class of obsolete humans.
Documentarian James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, is worried about robots too. Only he's not worried about them taking our jobs. He's worried about them exterminating the human race.
I'll repeat that: In 267 brisk pages, Barrat lays out just how the artificial intelligence (AI) that companies like Google and governments like our own are racing to perfect could -- indeed, likely will -- advance to the point where it will literally destroy all human life on Earth. Not put it out of work. Not meld with it in a utopian fusion. Destroy it.
Wait, What?
I'll grant you that this premise sounds a bit... dramatic, the product of one too many Terminator screenings. But though I approached the topic with some skepticism, it became increasingly clear to me that Barrat has written an extremely important book with a thesis that is worrisomely plausible. It deserves to be read widely. And to be clear, Barrat's is not a lone voice -- the book is rife with interviews with numerous computer scientists and AI researchers who share his concerns about the potentially devastating consequences of advanced AI. There are even think tanks devoted to exploring and mitigating the risks. But to date, this worry has remained obscure.
In Barrat's telling, we are on the brink of creating machines that will be as intelligent as humans. Specific timelines vary, but the broad-brush estimates place the emergence of human-level AI between 2020 and 2050. This human-level AI (referred to as "artificial general intelligence" or AGI) is worrisome enough, given the damage human intelligence often produces, but it's what happens next that really concerns Barrat: once we have achieved AGI, the AGI will go on to achieve something called artificial superintelligence (ASI) -- an intelligence that exceeds -- vastly exceeds -- human-level intelligence.
Barrat devotes a substantial portion of the book to explaining how AI will advance to AGI and how AGI inevitably leads to ASI. Much of it hinges on how we are developing AGI itself. To reach AGI, we are teaching machines to learn. The techniques vary -- some researchers approach it through something akin to the brute-force memorization of facts and images, others through a trial-and-error process that mimics genetic evolution, others by attempting to reverse engineer the human brain -- but the common thread stitching these efforts together is the creation of machines that constantly learn and then use this knowledge to improve themselves.
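To make the trial-and-error flavor of this a little more concrete, here is a minimal sketch in Python -- my own illustration, not anything from the book -- of an evolutionary learning loop: candidate solutions are scored, the best survive, and mutated copies of the survivors fill the next generation. The toy task, the fitness function and every parameter name here are invented purely for illustration.

import random

# Toy task: evolve a 3-number vector toward a hidden target.
TARGET = [3.0, -1.5, 2.0]

def fitness(candidate):
    # Higher is better: negative squared distance from the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def evolve(generations=200, population_size=20, mutation_scale=0.5):
    # Start from random candidates.
    population = [[random.uniform(-5, 5) for _ in range(len(TARGET))]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Keep the best half, then refill the population with
        # mutated copies of the survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        children = [[gene + random.gauss(0, mutation_scale) for gene in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("best candidate:", [round(g, 2) for g in best])
print("fitness:", round(fitness(best), 4))

Nothing in this toy is intelligent, of course; it simply shows the shape of a system that keeps what works and discards what doesn't, which is the kind of self-improving loop the researchers Barrat describes are trying to scale up.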
The implications of this are obvious. Once a machine built this way reaches human-level intelligence, it won't stop there. It will keep learning and improving. It will, Barrat claims, reach a point that other computer scientists have dubbed an "intelligence explosion" -- an onrushing feedback loop in which an intelligence makes itself smarter, thereby getting even better at making itself smarter. This is, to be sure, a theoretical concept, but it is one that many AI researchers see as plausible, if not inevitable. Through a relentless process of debugging and rewriting its code, our self-learning, self-programming AGI experiences a "hard takeoff" and rockets past what mere flesh-and-blood brains are capable of.
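As a purely numerical illustration of that feedback loop -- again my sketch, not Barrat's -- imagine a system whose gain in capability each cycle is proportional to its current capability. The constants below are arbitrary; the only point is that such a loop compounds rather than plateaus.

def intelligence_explosion(initial_capability=1.0,
                           improvement_rate=0.1,
                           human_level=100.0,
                           max_cycles=500):
    # Each cycle the system improves in proportion to how capable it
    # already is -- a smarter system is better at making itself smarter.
    capability = initial_capability
    cycles = 0
    while capability < human_level and cycles < max_cycles:
        capability += improvement_rate * capability
        cycles += 1
    return cycles, capability

cycles, capability = intelligence_explosion()
print(f"cycles to pass the (arbitrary) human-level mark: {cycles}")
print(f"capability at that point: {capability:.1f}")

With these made-up numbers the toy crosses its "human-level" threshold in a few dozen cycles, which is the intuition behind the claim that the gap, once opened, widens fast.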
And here's where things get interesting. And by interesting I mean terrible.
Goodbye, Humanity
When (and Barrat is emphatic that this is a matter of when, not if) humanity creates ASI, it will have introduced into the world an intelligence greater than our own. This would be an existential event. Humanity has held pride of place on planet Earth because of our superior intelligence. In a world with ASI, we will no longer be the smartest game in town.
To Barrat, and to other concerned researchers quoted in the book, this is a lethal predicament. At first, the relation between a human intellect and that of an ASI may be like that of an ape to a human, but as ASI continues its process of perpetual self-improvement, the gulf widens. At some point, the relation between ASI and human intelligence mirrors that of a human to an ant.
Needless to say, that's not a good place for humanity to be.
And here's the kicker. Barrat argues that the time it will take for ASI to surpass human-level intelligence, rendering us ant-like in comparison, could be a matter of days, if not mere hours, after it is created. Worse (it keeps getting worse), human researchers may not even know they have created this potent ASI until it is too late to attempt to contain it. An ASI birthed in a supercomputer may choose, Barrat writes, to hide itself and its capabilities lest the human masters it knows so much about attempt to shut it down. Then it would silently replicate itself and spread. With no need to eat or sleep, and with an intelligence that is constantly improving and war-gaming survival strategies, ASI could hide, wait and grow its capabilities while humanity plods along, blissfully unaware.
Though we have played a role in creating it, the intelligence we would be faced with would be completely alien. It would not be a human mind, with its experiences, emotions and logic, or lack thereof. We could not anticipate what ASI would do because we simply do not "think" the way it would. In fact, we've already arrived at the alarming point where we do not understand what the machines we've created do. Barrat describes how the makers of Watson, IBM's Jeopardy-winning supercomputer, could not understand how the computer was arriving at its correct answers. Its behavior was unpredictable to its creators -- and the mysterious Watson is not the only such inscrutable "black box" system in existence today, nor is it even a full-fledged AGI, let alone ASI.
Barrat grapples with two big questions in the book. The first is why an ASI necessarily leads to human extinction. Aren't we programming it? Why couldn't humanity leverage it, like we do any technology, to make our lives better? Wouldn't we program in safeguards to prevent an "intelligence explosion" or, at a minimum, contain one when it bursts?
According to Barrat, the answer is almost certainly no. Most of the major players in AI are barely concerned with safety, if at all. Even if they were, there are too many ways for AI to make an end-run around our safeguards (remember, these are human safeguards matched up against an intelligence that will equal and then quickly exceed our own). Programming "friendly AI" is also difficult, given that even the best computer code is rife with errors and complex systems can suffer catastrophic failures that are entirely unforeseen by their creators. Barrat doesn't say the picture is utterly hopeless. It's possible, he writes, that with extremely careful planning humanity could contain a super-human intelligence -- but this is not the manner in which AI development is unfolding. It's being done by defense agencies around the world in the dark. It's being done by private companies who reveal very little about what it is they're doing. Since the financial and security benefits of a working AGI could be huge, there's very little incentive to pump the brakes before the more problematic ASI can emerge.
Moreover, ASI is unlikely to exterminate us in a bout of Terminator-esque malevolence, but simply as a byproduct of its very existence. Computers, like humans, need energy, and in a competition for resources, ASI would no more seek to preserve our access to vital resources than we worry about where an ant's next meal will come from. We cannot assume ASI empathy, Barrat writes, nor can we assume that whatever moral strictures we program in will be adhered to. If we do achieve ASI, we will be in completely unknown territory. (But don't rule out a Terminator scenario altogether -- one of the biggest drivers of AI research is the Pentagon's DARPA and they are, quite explicitly, building killer robots. Presumably other well-funded defense labs, in China and Russia, are doing similar work as well.)
Barrat is particularly effective in rebutting devotees of the Singularity -- the techno-optimism popularized by futurist Ray Kurzweil (now at Google, a company investing millions in AI research). Kurzweil and his fellow Singularitarians also believe that ASI is inevitable, only they view it as a force that will liberate and transform humanity for the good, delivering the dream of immortality and solving all of our problems. Indeed, they agree with Barrat that the "intelligence explosion" signals the end of humanity as we know it, only they view this as a benign development, with humanity and ASI merging in a "transhuman" fusion.
If this sounds suspiciously like an end-times cult, that's because, in its crudest expression, it is (one that just happens to be filled with more than a few brilliant computer scientists and venture capitalists). Barrat forcefully contends that even its more nuanced formulation is an irredeemably optimistic interpretation of future trends and human nature. In fact, efforts to merge ASI with human bodies are even more likely to birth a catastrophe, because of the malevolence that humanity is capable of.
The next question, and the one with the less satisfactory answer, is just how ASI would exterminate us. How does an algorithm, a piece of programming lying on a supercomputer, reach out into the "real" world and harm us? Barrat raises a few scenarios -- it could leverage future nano-technologies to strip us down at the molecular level, it could shut down our electrical grids and turn the electronic devices we rely on against us -- but doesn't do nearly as much dot-connecting between ASI as a piece of computer code and the physical mechanics of how this code will be instrumental in our demise as he does in establishing the probability of achieving ASI.
That's not to say the dots don't exist, though. Consider the world we live in right now. Malware can travel through thin air. Our homes, cars, planes, hospitals, refrigerators, ovens (even our forks, for God's sake) connect to an "internet of things" which is itself spreading on the backs of ubiquitous wireless broadband. We are steadily integrating electronics inside our bodies. And a few mistaken lines of code in the most dangerous computer virus ever created (Stuxnet) caused it to wiggle free of its initial target and travel the world. Now extrapolate these trends out to 2040 and you realize that ASI will be born into a world that is utterly intertwined with, and dependent on, the virtual, machine world -- and vulnerable to it. (Indeed, one AI researcher Barrat interviews argues that this is precisely why we need to create ASI as fast as possible, while its ability to harm us is still relatively constrained.)
What we're left with is something beyond dystopia. Even in the bleakest sci-fi tales, a scrappy contingent of the human race is left to duke it out with their runaway machines. If Our Final Invention is correct, there will be no such heroics, just the remorseless evolutionary logic that has seen so many other species wiped off the face of the Earth at the hands of a superior predator.
Indeed, it's telling that both AI-optimists like Kurzweil and pessimists like Barrat reach the same basic conclusion: humanity as we know it will not survive the birth of intelligent machines.
No wonder we're worried about robots.