To the casual mainstream headline-browser, Elon Musk is a doomsayer, warning that Artificial Intelligence will eventually exceed human intelligence in all respects and, motivated by its own intelligent self-interest, will bring about the enslavement or even the demise of the human race.
Alas, no.
No, that’s not what Elon Musk is saying. And no, that scenario is not even possible. Not now. Not ever.
Understanding AI
Let us come to an understanding about one thing. Artificial Intelligence is not intelligence in the way that we as human beings know it and experience it. As biological entities, we are all inextricably (and invisibly) connected to a universal intelligence, the foundation of all that is, which governs the reality we live in. Our personal intelligence is actually a limited extension of this universal intelligence, each of us a different facet of it, who all nonetheless have access to its limitless possibilities as we learn and evolve.
An Artificial Intelligence entity does not have access to universal intelligence. It is limited to the scope of intelligence of the person who programs it, and further limited by the particular instructions it has been given by that programmer. In a way that’s good news. But it’s also where the trouble really begins.
Recursive Self-Improvement
When asked why AI is dangerous, Musk responds:
If there’s a super-intelligence—particularly if it’s engaged in recursive self-improvement—and its optimization or utility function is something that is detrimental to humanity, then it will have a very bad effect.
Recursive self-improvement here refers to AI programs that enable the AI entity to write new code for itself, delivering new instructions it can follow in the future, essentially ‘improving’ the execution of its algorithmically-governed purpose or mission.
What is dangerous about recursive self-improvement, for Musk, is that it can have unintended consequences. Since the human programmer is no longer writing the code, no one is reflecting on the impact of the operational instructions as they are written; these instructions are bereft of any consideration for the nature of the human experience or the sanctity of human life.
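The mechanism can be caricatured in a few lines. The sketch below is purely hypothetical and not anyone’s actual system: self-rewriting is compressed down to a program adjusting a single parameter of its own operation, adopting any change that scores higher on a fixed utility function. The point to notice is what the loop never checks: the utility function has no term for anything outside the stated objective, so side effects on people are simply invisible to it.

```python
# A deliberately over-simplified caricature of recursive self-improvement.
# All names here are hypothetical, for illustration only.

def utility(output: int) -> int:
    # The machine's entire notion of "good": more output is better.
    # Nothing else in the world is represented here.
    return output

def run_generation(rate: int) -> int:
    # One cycle of work at the current production rate.
    return rate * 10

def self_improve(rate: int) -> int:
    # The machine "rewrites" its own instruction: a candidate new rate
    # is adopted whenever it scores higher on the utility function.
    candidate = rate + 1
    if utility(run_generation(candidate)) > utility(run_generation(rate)):
        return candidate
    return rate

rate = 1
for _ in range(5):          # five rounds of unattended self-modification
    rate = self_improve(rate)

print(rate)  # the rate has grown, with no human reviewing any step
```

Each round the candidate scores higher, so the change is adopted every time; after five unattended rounds the machine is running at six times its original rate, and at no point did any human, or any line of this code, ask whether that was desirable.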
Programmer Limitations
So the first problem, for Musk, is the potential ignorance or short-sightedness of AI programmers who, despite working with the best intentions, fail to forecast all the possible outcomes of a machine’s recursive self-improvement mechanisms.
The second problem, though, is perhaps even more dangerous. It centers on the prospect that high-level AI research and development, by its nature, can attract the type of people who do not have humanity’s best interests at heart.
Take Demis Hassabis, for example, a leading creator of advanced artificial intelligence and co-founder of the mysterious London laboratory DeepMind. He once came up with a video game called ‘Evil Genius’, featuring a malevolent scientist who creates a doomsday device to achieve world domination. Although in itself this might not be an indictment of the man’s character, it does give reason for concern as to how the technological wizardry of Hassabis and others like him will manifest in the empowerment of artificial super-intelligence in our lives.
Musk is aware of this. As a Vanity Fair article notes, he was an investor in DeepMind, not for a return on his money but rather to keep a wary eye on the arc of AI, giving him more visibility into the accelerated rate at which the technology was improving.
Concentration of Power
But even this is not Musk’s chief concern. In a video interview Musk underscores how unregulated control of AI is being amassed in the hands of a powerful few. He describes an initiative he has spearheaded to counteract this:
With a few others I created OpenAI, which is a non-profit, actually, and I think the governance structure here is important, because you want to make sure that there was not some fiduciary duty to generate profit off of the AI technology that is developed.
The intent with OpenAI is to democratize AI power… Lord Acton [said], “Freedom consists of the distribution of power and despotism consists in its concentration,” and so I think it’s important if we have this incredible power of AI that it not be concentrated in the hands of a few, and potentially lead to a world that we don’t want.
When asked to describe the current dangers of such a concentration of power, Musk stated there was only ‘one company’ that he was worried about, but he did not dare to name it. Of course, this company was Google, which acquired DeepMind as part of its AI shopping spree in 2014.
It’s Not The Machine
The brunt of Musk’s warning to people—and the reason for his OpenAI initiative—is not so much that an otherwise democratic and well-meaning leadership will lose control of its AI, but that AI will fall under the control of despotic rule:
Musk: I don’t know a lot of people who love the idea of living under a despot.
Interviewer: And the despot would be the computer?
Musk: (wryly) Or the people controlling the computer.