Uncommon Descent Serving The Intelligent Design Community

Claim: Robots lie to each other

From Business Insider:

Del Monte believes machines will become self-conscious and have the capabilities to protect themselves. They “might view us the same way we view harmful insects.” Humans are a species that “is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses.” Hardly an appealing roommate.

He wrote the book as “a warning.” Artificial intelligence is becoming more and more capable, and we’re adopting it as quickly as it appears. A pacemaker operation is “quite routine,” he said, but “it uses sensors and AI to regulate your heart.”

A 2009 experiment showed that robots can develop the ability to lie to each other. Run at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, the experiment had robots designed to cooperate in finding beneficial resources like energy and avoiding hazardous ones. Shockingly, the robots learned to lie to each other in an attempt to hoard the beneficial resources for themselves.

“The implication is that they’re also learning self-preservation,” Del Monte told us. “Whether or not they’re conscious is a moot point.”

Anyone know anything about this?

Sounds fishy. What’s in it for a robot to lie, unprogrammed?

Follow UD News at Twitter!

Comments
You all might be interested in this article by Robert J. Marks II on AI, the Turing test, and the Lovelace test. It is highly doubtful that computers will ever be able to actually generate original creativity... the "flash of creative genius" mentioned in the article.
DonaldM
July 7, 2014 07:22 AM PDT
Anyone know anything about this?
Yeah. Del Monte is suckering the portion of the public who will believe everything a scientist says, even if the topic is outside of his field. The claim about robots "lying" has more to do with creative wordplay than anything else. If you code a calculator that tells you 2 + 2 = 5, is it lying? If so, computers have been lying practically since their development. It's just a pretty banal programming-and-selection simulation where some robots 'lasted longer' if they reported values that resulted in other robots ignoring their location. Old Galaga machines also 'show that AI are engaged in self-preservation', I guess.
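The kind of simulation at issue fits in a few lines of Python (a purely illustrative toy, not the EPFL code; the population size, payoffs, and mutation rate are all made up):

import random

POP, GENERATIONS, MUTATION = 100, 200, 0.05

def fitness(honesty, mean_honesty):
    """Food a robot keeps: what it finds, minus what honest signalling
    gives away to rivals, plus a little gleaned from others' signals."""
    found = 1.0
    given_away = 0.6 * honesty
    gleaned = 0.3 * mean_honesty
    return found - given_away + gleaned

# Each "robot" is just a probability of reporting its food location honestly.
population = [1.0] * POP  # start out fully honest

for _ in range(GENERATIONS):
    mean_h = sum(population) / POP
    scored = sorted(population, key=lambda h: fitness(h, mean_h), reverse=True)
    survivors = scored[:POP // 2]  # the top half "lasts longer"
    # Each survivor leaves two offspring with slightly mutated honesty.
    population = [
        min(1.0, max(0.0, h + random.uniform(-MUTATION, MUTATION)))
        for h in survivors
        for _ in range(2)
    ]

print("mean honesty after selection:", round(sum(population) / POP, 3))
# Honesty collapses toward zero: no intent, no "lying", just selection on
# which reporting behaviour happens to leave more copies behind.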
nullasalus
July 6, 2014 08:54 PM PDT
The paper from the 2009 experiment that Del Monte is referring to is found here: http://www.pnas.org/content/106/37/15786.full

We can read in the abstract:

"Because robots were competing for food, they were quickly selected to conceal this information. However, they never completely ceased to produce information. Detailed analyses revealed that this somewhat surprising result was due to the strength of selection on suppressing information declining concomitantly with the reduction in information content."

If I understood correctly, this behavior was due to a loss of information in the "genome".
Peace123
July 6, 2014 07:31 PM PDT
News, this may interest you: Princeton philosophy prof Dr. Hans Halvorson speaks on "Quantum Mechanics and Mind" (video):
https://www.youtube.com/watch?v=d_UK7Y4NWc0
http://www.st-andrews.ac.uk/~qoi/physphil2012.htm?video=hans#talks

Of note from the preceding video: introducing quantum information into multiplayer games allows a new type of equilibrium strategy which is not found in traditional (classical) games. The entanglement of players' choices can have the effect of a contract by preventing players from profiting from betrayal.
bornagain77
July 6, 2014 05:25 PM PDT
It's got to be a matter of what's being optimized. Many video games will play themselves, and the human can just watch. But a lot of what is being called "AI" is just a feedback loop. I can't believe that pacemakers are concerned about "what's your favorite color".

There is also the question of what kind of timeframe is involved. If you tell a computer to maximize profit on a stock over the next hour (or the next 15 seconds), it will of course make different decisions than if you tell it to maximize total return over the next 10 years.

There are of course cases where the computer has been programmed with flaws. The F-16 fighter "doesn't know it's an airplane until you load the software." Once the software is loaded, the aircraft will refuse to execute commands from the cockpit that violate the defined flight envelope. But this is very complicated, and there have been a number of mistakes.

The first time a flight of F-16s visited Brazil, the pilots discovered that their planes IMMEDIATELY rolled inverted after takeoff. Whilst debugging the files, the analysts discovered that the programmers had assumed that the aircraft would ALWAYS fly in the northern hemisphere, so when the GPS sent coordinates in south latitude, the computer didn't know what to do.

There was a similar case when a detachment deployed to Korea. The aircraft had to return to Hawaii because the computer didn't understand the International Date Line; when the aircraft crossed it in flight, the computer couldn't figure out where it was. Again, humans debugged the files, made the necessary corrections, and the computer then very willingly flew the aircraft where the humans wanted it to go.
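That hemisphere assumption is easy to picture in code. A made-up sketch of the class of bug (nothing like the real flight software; the function names are invented):

# A made-up sketch of the class of bug described above, NOT the actual F-16
# flight software. Latitude is signed: north is positive, south is negative.

def upright_reference(latitude_deg: float) -> int:
    """Return +1 for upright flight, -1 for inverted (toy model)."""
    # BUG: only ever tested with northern-hemisphere data, where latitude
    # is always positive, so a sign convention baked in here flips "up"
    # the moment latitude goes negative.
    return 1 if latitude_deg >= 0 else -1

def upright_reference_fixed(latitude_deg: float) -> int:
    """The fix: which way is 'up' does not depend on the hemisphere."""
    return 1

print(upright_reference(33.0))         # northern hemisphere: upright
print(upright_reference(-22.9))        # Brazil: "inverted" -- the bug
print(upright_reference_fixed(-22.9))  # after debugging: upright everywhere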
mahuna
July 6, 2014 04:51 PM PDT
