This is a link to a presentation I did at a local conference (Barcamp, Omaha NE). I later gave the same presentation to a group of Omaha astronomers (the Omaha Astronomical Society).
This is NOT a scientific endeavor. I made many assumptions that are sketchy, to say the least: for example, I assumed that intelligent life would be like us in many respects (a water-based planet in the "green zone" of its sun and of the galaxy, free manipulating limbs, a society, fluent communication, various probabilities of critical events, etc.).
So this is for fun - not a "real" answer to Fermi's Paradox. Though the "analysis" does result in a handful of intelligent species (one to six), of which we are one (though that may be subject to argument as well).
In any case, if you are in the mood to be amused by such things, here is a PDF of that presentation.
Sunday, November 12, 2017
Saturday, October 28, 2017
Unleashing the Demon: The Road to Superior Intelligence
Through several iterations, DeepMind (an Alphabet, Inc. company; Alphabet is the same parent that owns Google) developed a Go-playing automaton, AlphaGo Zero, that is the best Go player in the world. This ultimate iteration of the series of AlphaGos was self-taught. That is, by starting from a blank slate and playing millions of games against itself, it devised its winning strategies. These moves were often unknown to even the best of the world's Go players, who called them "strange," "alien," "beautiful," and "brilliant." Still, many of these moves remain beyond their understanding: AlphaGo Zero would make strange moves, widely separated across the board, whose power would become apparent only much later in the game.
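AlphaGo Zero's real system pairs a deep neural network with Monte Carlo tree search, which is far beyond a blog-sized example, but the core "blank slate plus self-play" idea can be sketched with a tabular learner on a much simpler game. Everything below (the game of Nim, where players remove 1-3 sticks and taking the last stick wins, plus the learning rate and episode count) is a toy choice of mine, not anything from DeepMind's system.

```python
import random

random.seed(0)

N = 10               # starting number of sticks (arbitrary toy choice)
ACTIONS = (1, 2, 3)  # a player may remove 1, 2, or 3 sticks per turn

# Q[s][a]: estimated outcome (+1 win, -1 loss) for the player to move
# at state s if they remove a sticks. Starts as a true blank slate.
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, N + 1)}

def choose(s, epsilon):
    """Epsilon-greedy move for whoever is to play at state s."""
    if random.random() < epsilon:
        return random.choice(list(Q[s]))
    return max(Q[s], key=Q[s].get)

LR, EPS = 0.5, 0.3
for episode in range(30000):
    s = N
    while s > 0:
        a = choose(s, EPS)
        s_next = s - a
        if s_next == 0:
            target = 1.0  # took the last stick: immediate win
        else:
            # The opponent moves next using the SAME table (self-play),
            # so our value is the negation of their best value there.
            target = -max(Q[s_next].values())
        Q[s][a] += LR * (target - Q[s][a])
        s = s_next

def best(s):
    """Greedy move after training."""
    return max(Q[s], key=Q[s].get)
```

After training, the greedy policy rediscovers Nim's known optimal strategy (always leave the opponent a multiple of four sticks: take 1 from 5, 2 from 6, 3 from 7) without anyone programming that rule in, which is the miniature version of a self-taught player finding strong moves on its own.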
When we build a human-scale artificial intelligence, we will probably be building it for some purpose. We will build it as a series of intelligences leading up to an ultimate version. Its training will undoubtedly mimic successful past activity, perhaps like AlphaGo Zero's.
But whatever methods are used, we can expect superior results. The mechanism will eventually surpass our abilities in ways we cannot anticipate. It is possible, even likely, that we will not understand how this automaton achieves its ends.
Here’s a dystopian thought. The goal of any ruling entity is to stay in power. In the case of Russia (or its predecessor, the USSR), China, or any area ruled by a single individual, it is that individual’s goal to stay in power. In the case of a more distributed allocation of power, such as in a fair democracy, the party in power wants to stay in power. All other things are secondary. (Occasionally there arises an influential individual with higher motivations. One example is Mikhail Gorbachev, who saw that the best way forward for his people was the dissolution of the USSR. Another is John McCain, with his recent actions in the US Senate. However, such people are rare.)
Suppose Xi Jinping, Vladimir Putin, or any other dictator developed such a superior intelligence. Or suppose it was developed by something like DARPA and acquired by one of the political parties. Suppose its goal was to enable the ruling entity to remain in power indefinitely. Suppose its recommendations were followed to the letter. Finally, suppose it was given control in the name of the ruling entity.
We would not understand its actions, though they would be ultimately effective – more effective than any human actions. Yes, this is Skynet. I think Elon Musk (his warning is the title of this essay) and Stephen Hawking understand these possibilities. Others don’t; they see only a limited utility in artificial intelligence, or assume it will be used for prosaic purposes.
I think we need to be very careful. We won’t understand a vastly superior intelligence, but it will understand us.