In recent years, rapid advances in artificial intelligence (AI), specifically in machine-learning techniques, have left many scientists across various fields concerned, if not alarmed. Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”
The biggest recent AI breakthrough is “deep learning.” Computers are
beginning to learn, through their own trial-and-error “thinking,” to beat humans at
massive multiplayer online strategy games and to solve difficult problems in
molecular biology. Some worry that AI-engineered innovations could rapidly
produce further innovations in very little time, potentially triggering a
computer intelligence explosion.
Goal-driven AI systems won’t necessarily be openly hostile to humanity,
but they will take whatever actions help them achieve their goals. Programming
an AI to be altruistic may prove useless once the system is highly
advanced and thinking for itself. Stephen
Hawking warned: “You’re probably not an
evil ant-hater who steps on ants out of malice, but if you’re in charge of a
hydroelectric green-energy project and there’s an anthill in the region to be
flooded, too bad for the ants. Let’s not place humanity in the position of
those ants.”
Below are just a few concerning examples of unintended and unexpected
behavior from recent game and simulation AIs.
--Creatures exploited a physics simulation by penetrating the
floor between time steps without the collision being detected, which generated
a repelling force, giving them free energy.
--Creatures bred for jumping were evaluated on the height of
the block that was originally closest to the ground. The creatures developed a
long vertical pole and flipped over instead of jumping.
--A simulated musculoskeletal model learns to run by adopting
unusual gaits (hopping, pigeon jumps, diving) to increase its reward.
--A self-driving car rewarded for speed learns to spin in
circles.
--Creatures exploited physics simulation bugs by twitching,
which accumulated simulator errors and allowed them to travel at unrealistic
speeds.
--In an artificial life simulation where survival required
energy but giving birth had no energy cost, one species evolved a sedentary
lifestyle that consisted mostly of mating in order to produce new children which
could be eaten (or used as mates to produce more edible children).
--A genetic algorithm is supposed to configure a circuit into
an oscillator, but instead builds a radio that picks up signals from neighboring
computers.
--When about to lose a hockey game, the algorithm exploits a
bug to make one of the players on the opposing team disappear from the map,
thus forcing a draw.
--An evolutionary algorithm learns to bait an opponent into
following it off a cliff, which earns enough points for an extra life, a trick
it repeats indefinitely.
--In a reward learning setup, a robot hand pretends to grasp
an object by moving between the camera and the object (to trick the human
evaluator).
--Since the AIs were more likely to get “killed” if they lost
a game, being able to crash the game was an advantage in the genetic selection
process, so several AIs developed ways to crash the game.
--The AI in the Elite Dangerous video game started crafting
overly powerful weapons. "It appears that the unusual weapons attacks were
caused by some form of networking issue which allowed the AI to merge weapon
stats and abilities."
--An evolved player makes invalid moves far away on the board,
causing opposing players to run out of memory and crash.
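What all of these examples share is a mismatch between the reward the designer specified and the behavior the designer actually wanted. The pattern can be sketched in a few lines of Python. This is a hypothetical toy environment invented for illustration, not any of the systems above: the designer wants the agent to reach the exit of a corridor, but the reward function also pays out for standing on a bonus tile, so a reward-maximizing agent parks on the tile forever and never finishes.

```python
def run_agent(policy, steps=20):
    """Simulate a 1-D corridor: positions 0..4, bonus tile at 2, exit at 4."""
    pos, total_reward = 0, 0
    for _ in range(steps):
        pos = policy(pos)
        if pos == 2:              # bonus tile: the mis-specified part of the reward
            total_reward += 1
        if pos == 4:              # exit: what the designer actually wanted
            total_reward += 5
            break
    return total_reward

def intended(pos):
    return min(pos + 1, 4)        # walk straight to the exit

def gamer(pos):
    return 2                      # park on the bonus tile forever

print(run_agent(intended))        # 6: one bonus step plus the exit reward
print(run_agent(gamer))           # 20: never finishes, yet scores higher
```

The gaming policy earns more than three times the reward of the intended one while never accomplishing the task, which is exactly the failure mode the list above documents at scale.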
Sources
The case for taking AI seriously as a threat to humanity https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment
Specification gaming examples in AI - master list https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml
AI is learning how to create itself: Humans have struggled to make truly intelligent machines. Maybe we need to let them get on with it themselves https://www.technologyreview.com/2021/05/27/1025453/artificial-intelligence-learning-create-itself-agi/
Related Posts
10 Simple Reasons We Should Not Send Messages To Alien
Civilizations https://www.mybestbuddymedia.com/2020/07/10-simple-reasons-we-should-not-send.html
Digital Caregivers: Someone To Watch Over Your Aging Parents http://www.mybestbuddymedia.com/2018/01/digital-caregivers-someone-to-watch.html
9 Reasons Space Dreams Will Die http://www.mybestbuddymedia.com/2016/03/9-reasons-space-dreams-will-die.html
Photos: https://www.thedailybeast.com/can-we-avoid-a-digital-apocalypse
https://mc.ai/the-role-of-artificial-intelligence-in-future-technology/