http://technology.timesonline.co.uk/tol/news/tech_and_web/article5741334.ece
Quote
Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.
The stark warning, which includes discussion of a Terminator-style scenario in which robots turn on their human masters, is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research.
The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans. Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens soldiers.
“There is a common misconception that robots will do only what we have programmed them to do,” Patrick Lin, the chief compiler of the report, said. “Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person.” The reality, Dr Lin said, was that modern programs included millions of lines of code and were written by teams of programmers, none of whom knew the entire program: accordingly, no individual could accurately predict how the various portions of large programs would interact without extensive testing in the field, an option that may either be unavailable or deliberately sidestepped by the designers of fighting robots.
The solution, he suggests, is to mix rules-based programming with a period of “learning” the rights and wrongs of warfare.
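To make that mix concrete, here is a minimal Python sketch of the architecture Dr Lin seems to be describing: a hard-coded rules layer that can veto anything a learned policy proposes. Every name in it (Action, LearnedPolicy, the harm threshold) is hypothetical, invented for illustration; the report does not specify an implementation.

Code:
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    target_is_combatant: bool
    expected_civilian_harm: float  # rough estimate in [0.0, 1.0]

def violates_rules(action: Action) -> bool:
    """Hard rules-of-engagement layer: non-negotiable, checked first."""
    if not action.target_is_combatant:
        return True
    if action.expected_civilian_harm > 0.1:  # illustrative threshold
        return True
    return False

class LearnedPolicy:
    """Stand-in for a trained model that ranks candidate actions."""
    def propose(self, candidates):
        # A real system would score candidates with a learned model;
        # here we simply prefer lower expected harm.
        return sorted(candidates, key=lambda a: a.expected_civilian_harm)

def choose_action(policy, candidates):
    """Take the policy's best suggestion that the rules layer permits."""
    for action in policy.propose(candidates):
        if not violates_rules(action):
            return action
    return None  # nothing permissible: stand down / defer to a human

best = choose_action(LearnedPolicy(), [
    Action("strike A", target_is_combatant=True, expected_civilian_harm=0.05),
    Action("strike B", target_is_combatant=False, expected_civilian_harm=0.0),
])
print(best.name if best else "stand down")  # -> strike A

The point of the split is that the learned component can improve with experience while the rules layer stays fixed and auditable; the open question the report raises is whether any such rule set can be specified well enough in advance.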
The report covers a rich variety of scenarios outlining the ethical, legal, social and political issues that will arise as robot technology improves. How do we protect our robot armies against terrorist hackers or software malfunction? Who is to blame if a robot goes berserk in a crowd of civilians: the robot, its programmer or the US president? Should the robots have a “suicide switch”, and should they be programmed to preserve their lives?
The report, compiled by the Ethics and Emerging Technology department of California State Polytechnic University and obtained by The Times, strongly warns the US military against complacency or shortcuts as military robot designers engage in the “rush to market” and the pace of advances in artificial intelligence quickens.
Any sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.
“A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives,” the report noted.
A simple ethical code along the lines of the “Three Laws of Robotics” postulated by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines.
“We are going to need a code,” Dr Lin said. “These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.”
Isaac Asimov’s three laws of robotics
1 A robot may not injure a human being or, through inaction, allow a human being to come to harm
2 A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law
3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
Introduced in his 1942 short story Runaround
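As a toy illustration of why the report considers the Three Laws insufficient for military machines, here is a hypothetical Python sketch treating the laws as strictly ordered constraints. Note what it exposes: the predicates (does this action harm a human?) are the genuinely hard part and have to be stubbed out, and for a combat robot the First Law vetoes essentially every useful action, which is exactly Lin's point about needing a different "warrior code".

Code:
LAWS = [
    ("First Law",  lambda a: a.get("harms_human", False)),
    ("Second Law", lambda a: a.get("disobeys_order", False)),
    ("Third Law",  lambda a: a.get("endangers_self", False)),
]

def permitted(action: dict) -> tuple:
    """Check the laws in priority order; report the first one violated."""
    for name, violated in LAWS:
        if violated(action):
            return (False, "blocked by " + name)
    return (True, "permitted")

# Any action that harms a human is blocked outright, regardless of
# orders, so a military robot built this way could never fight.
print(permitted({"harms_human": True}))     # -> (False, 'blocked by First Law')
print(permitted({"endangers_self": True}))  # -> (False, 'blocked by Third Law')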
http://ethics.calpoly.edu/ONR_report.pdf
Quote"military robot ethics"
Whoa! :-\
http://www.theregister.co.uk/2009/02/18/darpa_self_aware_tanks/
Quote
DARPA seeks self-aware AI robot mega-tanks
We care not who welcomes us, meatsacks - all shall die
By Lewis Page
Posted in Government, 18th February 2009 13:27 GMT
Pentagon boffinry chiefs have announced that they would like some self-aware computer systems capable of "meta-reasoning" and "introspection". The plan is to place these machine intelligences in command of heavily armed, well-nigh invulnerable robotic tanks.
This latest plan for humanity's subjugation comes, of course, from DARPA - the agency believed to harbour the largest known group of lifelike people-simulant robots piloted from within by tiny, malevolent space lizard infiltrators in the entire US federal government.
The plan is called Self-Explanation Learning Framework (SELF). It is being handled by Dr Mike Cox of DARPA's renowned Information Processing Technology Office.
According to this presentation (pdf) (http://www.darpa.mil/ipto/solicit/baa/SELF_presentation.pdf) by Dr Cox:
Without a model of self, cognitive systems remain brittle ...
Goal: Provide machines with an ability to reason about their own reasoning... SELF will enable any learning system to explain and repair itself
Task Benefits:
Improved goal satisfaction through self-explanation and meta-control module.
Self-explaining systems lead to better calibrated trust for human users.
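For a sense of what "meta-reasoning" and a "meta-control module" might look like in practice, here is a hedged Python sketch; this is an illustration of the general idea, not DARPA's SELF design. A meta-level controller keeps a trace of a base reasoner's decisions, logs an explanation for each one, and "repairs" itself by overriding the reasoner when the trace looks unstable. All class names and the oscillation rule are invented for illustration.

Code:
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("self-sketch")

class BaseReasoner:
    """Stand-in for the 'government furnished' object-level agent."""
    def decide(self, observation: float) -> str:
        return "engage" if observation > 0.8 else "hold"

class MetaController:
    """Meta-level wrapper: traces the reasoner's decisions, explains
    each one, and overrides (repairs) when the trace looks unstable."""
    def __init__(self, reasoner: BaseReasoner):
        self.reasoner = reasoner
        self.trace = []

    def decide(self, observation: float) -> str:
        decision = self.reasoner.decide(observation)
        self.trace.append(decision)
        if self._oscillating():
            log.info("explanation: decisions oscillating, repairing to 'hold'")
            return "hold"
        log.info("explanation: observation=%.2f -> %s", observation, decision)
        return decision

    def _oscillating(self) -> bool:
        # Hypothetical anomaly test: the last four decisions alternated.
        recent = self.trace[-4:]
        return len(recent) == 4 and all(a != b for a, b in zip(recent, recent[1:]))

controller = MetaController(BaseReasoner())
for obs in (0.85, 0.79, 0.81, 0.78):
    controller.decide(obs)  # fourth call triggers the repair override

The "better calibrated trust" claim in the slide maps onto the log lines here: a human operator can see why the system did what it did, and when it stopped trusting itself.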
It seems that DARPA already has a fearful array of "Intelligent Agent" software at its disposal, so Dr Cox would like his future collaborators to "focus fully on the meta-level" as basic Agent-Smith-a-like killer AIs will be provided as "GFE": government furnished equipment.
Assuming the self-aware, self-repairing, self-programming software can be built, one might ask what Dr Cox plans to do with it.
Rather than attempting like any sane person to unplug the whole system at the wall before it eradicates humanity, Cox believes it would be suitable in the "near term" for "armored combat" and "tactical air" missions. Just to make it quite clear what this means, the good doctor - or anyway the lifelike lizard-piloted simulant which long ago replaced the real Cox - helpfully includes pictures of a main battle tank and a jet fighter, two of the most potent engines of destruction available to the modern military.
It is clear that he intends to place his self-aware 'ware in charge of such kit as the frightful 70-ton turbine-powered Abrams (http://www.army.mil/factfiles/equipment/tracked/abrams.html), armoured like a mobile Fort Knox and capable of shooting a hole through small mountains.
What if there is a war, robots vs. humans? Say the humans discriminate against the robots (they don't even have the right to vote), so when the humans attempt to put all the robots on a reservation, the robots decide to crush mankind. The humans will never honor those bogus treaties anyway.
http://www.sealab2021x.com/season-0/6/
But I really want that robot house keeper.
For an excellent horror story about this, I recommend reading "Second Variety" by Philip K. Dick. Or if you're lazy, rent "Screamers", the movie based on the story.
Utter nonsense; many kinks to smooth out here. Which humans would the robots obey? Just American humans? Or Russian humans? I guess they would be programmed to obey only the country or entity that created them. A lot of "what ifs" in this whole robots-fighting-future-wars-or-serving-in-our-military thing.