Stephen Hawking, the renowned physicist, cosmologist and author, said in an
interview with the BBC that "the development of full artificial intelligence
could spell the end of the human race."
Hawking said that artificial intelligence (AI) in its current state poses no
threat, but he is concerned that scientists of the future will create
technology that surpasses humans in both intelligence and physical strength.
Such an intelligence would take off on its own and redesign itself at an
ever-increasing rate. Humans, limited by slow biological evolution, couldn't
compete and would be superseded, leading to their extinction. This scenario is
depicted in the new Avengers film, Age of Ultron, where Ultron redesigns
itself to become stronger and more intelligent.
entrepreneur Musk, who raised controversy in late October when he warned an
audience at the MIT about the dangers
behind AI research.
Musk's comments sparked discussion about the state of AI, which today is more
about robotic vacuum cleaners than Terminator-like robots that shoot people
and take over the Earth. Musk also reiterated that AI is the biggest
existential threat facing mankind.
Google's director of engineering, Ray Kurzweil, is also worried about AI. He
is concerned that it may be hard to write an algorithmic moral code strong
enough to constrain and contain super-smart software. Machines are slowly but
surely getting smarter, and the pursuits in which humans remain champions are
diminishing. For science fiction author Charles Stross, however, the dangers
inherent in artificially smart systems do not arise because they will
out-think us or suddenly realize they can please themselves rather than their
human masters.
The question of the use of AI in warfare was addressed last week in a report
by two Oxford academics. In a paper called Robo-Wars: The Regulation of
Robotic Weapons, they call for guidelines on the use of such weapons in
21st-century warfare. "I'm particularly concerned by situations where we
remove the human being from the act of killing in war," said Dr Alex
Leveringhaus, the lead author of the paper.
He says you can see AI beginning to creep into warfare, with missiles that
are not fired at a specific target: "a more sophisticated system could fly
into an area, look around for targets and engage without anyone pressing a
button."
Consider the scenario of driverless cars having to decide whether to protect
the life of someone inside the car or someone outside it. This kind of
dilemma will emerge in all sorts of areas where smart machines now work with
little or no human intervention. Are we simply afraid of something powerful,
or are we right to worry about the rise of the machines? What's your take?