Source: Machine Intelligence Research Institute (MIRI), 2010. Reducing Long-Term Catastrophic Risks from Artificial Intelligence. San Francisco, CA: The Singularity Institute. Available at: <https://intelligence.org/files/ReducingRisks.pdf> [Accessed 24 September 2015].
The literature review I have chosen concerns the long-term risks associated with the development of “artificial intelligence” (AI). Since there seem to be strongly negative views about the development of AI, I decided to explore the roots of those risks in order to understand the fears and hopes surrounding the topic. This article is one I believe mirrors many people’s views, and since it was written by the well-known Machine Intelligence Research Institute, which specialises in such research, I thought it would provide me with some insight into AI. In this review, I have explored two of the article’s main focuses: the risks of AI itself, and the likelihood and risks of “friendly AI”.
I am fascinated by the idea that a machine can be deemed “intelligent”, since I have always understood the term to involve some sort of independent thinking, whereas machines are installed with a “fixed set of imperatives”, which defies the nature of the term “intelligence”. I am also disturbed by the suggestion that AI, which previous researchers have described as “a software problem that will require new insight”, as a potential “robot rebellion” by “machines”, as an “intelligence explosion” (Good, 1965), or as a “technological singularity” (Kurzweil, 2005), could amount to another so-called “intelligent” life form besides our own; namely, a revolution of AI machines.
Furthermore, I am not convinced that, as suggested in the article, AI will put “evolutionary pressures” on human development because of its “self-modification” features (Bostrom, 2004). I am doubtful that AI will develop and surpass the survival instincts ingrained in our human DNA, which is millions of years old, simply because it has some technological microchip that allows it to “evolve” and produce the next generation by itself. Even though it has been suggested that our current inability to produce a machine with human-level intelligence probably would not stop a machine improving itself to that level anyway, there is no empirical evidence of machines “thinking” of such an intention as we know it, especially since their capability is limited by their “fixed set of imperatives”. Hence, based on this lack of evidence, I do not believe that, at least in the near future, there is much risk of an intelligence war breaking out between humans and AI, because I do not believe human ideas, which are infinite, can be surpassed by machine imperatives, which are finite and fixed.
Another suggestion proposed in the article is the concept of “friendly AI”: the notion that AI can benefit and work in the interests of human society, for the good of the people it serves. My instinctive response to such a concept is to point out its idealism. I am very suspicious that anything deemed to possess “intelligence” would submit to essentially being a slave to those it may consider inferior to itself. Although an AI machine holds a “fixed set of imperatives”, if it really is so “intelligent”, there is a possibility of it moderating just how “fixed” those imperatives are. This raises the question of control over AI, and of exactly how “friendly” such machines would be if they knew humans only wanted them for what I would call “AI slavery”. On the other hand, if the AI is not so intelligent, then I highly doubt it has the level of “intelligence” required to be a “friendly AI” and contribute to society in the way we might like it to, beyond carrying out the industrial and dangerous workload we wish to take off our manual labourers. Hence, the question of exactly what is meant by “intelligence” remains unanswered, and since we cannot determine how intelligent “intelligence” is, the risks associated with AI remain theories that are not backed up by substantial evidence.
Therefore, I conclude that the proposed risks associated with AI’s intelligence, and the notion of the existence of “friendly AI”, are unprecedented and theoretical, owing perhaps to our very natural, human fear of the unknown. There is a lack of technological evidence to show the danger of AI taking over from humans, and the concept of “intelligence” still remains, at least to me, vaguely defined. So, until “intelligence” is redefined or more precisely defined, and AI has a more potent presence in the world, I do not believe we can justify the proposed risks of AI taking over.