Through several
iterations, DeepMind (a subsidiary of Alphabet Inc., Google's parent
company) developed a Go-playing automaton, AlphaGo Zero, that is the
best Go player in the world. This ultimate iteration of the series
of AlphaGos was self-taught. That is, by starting from a blank slate
and playing millions of games against itself, it devised its winning
strategies. This collection of moves was often unknown to even the
best of the world's Go players, who called them "strange,"
"alien," "beautiful," and "brilliant." Still, many of
these moves remain beyond their understanding. AlphaGo Zero would make
strange moves, widely separated across the board, whose power would
become apparent only much later in the game.
When we build a
human-scale artificial intelligence, we will probably be building it
for some purpose. We will build it as a series of intelligences
leading up to an ultimate version. Its training will undoubtedly
mimic successful past activity, perhaps like AlphaGo Zero’s.
But whatever
methods are used, we can expect superior results. The mechanism will
eventually surpass our abilities in ways we cannot anticipate. It is
possible, even likely, that we will not understand how this automaton
achieves its ends.
Here’s a
dystopian thought. The goal of any ruling entity is to stay in
power. In the case of Russia (or its predecessor, the USSR),
China, or any area ruled by a single individual, it is that
individual’s goal to stay in power. In the case of a more
distributed allocation of power, such as in a fair democracy, the
party in power wants to stay in power. All other things are
secondary. (Occasionally there arises an influential individual
with higher motivations. One example is Mikhail
Gorbachev who
saw that the best way forward for his people was the dissolution of
the USSR. Another is John McCain with his recent actions in the US
Senate. However, such people are rare.)
Suppose
Xi Jinping, Vladimir Putin, or any other dictator developed such a
superior intelligence. Or suppose it was developed by something like
DARPA and acquired by one of the political parties. Suppose its
goal was to enable the ruling entity to remain in power indefinitely.
Suppose
its recommendations were followed to the letter. Finally, suppose it was
given control in the name of the ruling entity.
We
would not understand its actions, though they would be ultimately
effective, more so than any human actions. Yes, this is Skynet. I
think Elon Musk (his warning is the title of this essay) and Stephen
Hawking understand these possibilities. Others do not; they see only
limited utility in artificial intelligence or
assume it will be used for prosaic purposes.
I
think we need to be very careful. We won’t understand a vastly
superior intelligence, but it will understand us.