AI: "Summoning the Demon"

Started by spuwho, October 26, 2014, 09:59:19 PM

spuwho

SpaceX and Tesla founder Elon Musk was speaking at a conference at MIT when a student asked him about Artificial Intelligence. His metaphorical response has triggered some strong reactions.

Per the Sydney Morning Herald via Mashable:

There have already been several dire warnings from Tesla and SpaceX founder Elon Musk in recent months regarding the perils of artificial intelligence, but this week he actually managed to raise the bar in terms of making AI seem scary.

First, according to Musk, AI was as dangerous as nuclear war. Now Musk is likening the possible battle between humans and computers in the future, termed by some "the singularity," to a struggle for the soul of mankind itself.

How so? By invoking the one thing even those with little interest in technology fear the most: demons!

In an hour-long interview for MIT, which held its Centennial Symposium last week, Musk opened himself up to the audience for questions. Most of the questions were about space travel, but one audience member asked Musk for his thoughts on artificial intelligence, and that's when things got a bit spooky.

"I think we should be very careful about artificial intelligence," said Musk, the expression on his face suddenly turning very serious. "If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence. There should be some regulatory oversight maybe at the national and international level, just to make sure that we don't do something very foolish."

Sounds reasonable. Prudent even. A generally conservative approach to a potential technological issue facing our world in the future. Wise words.

But then...

"With artificial intelligence we are summoning the demon," said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."

Forget Tony Stark, the comic book character most often associated with Musk; it may be time to start thinking Doctor Strange. Pentagram? Really, Elon?

And lest you think Musk was just fooling around about his fear of the potential dangers of AI, when the next questioner asked him about telecommunications, he stumbled for a bit and admitted that he was still pondering the question of AI. You can watch the entire video below, or skip to Musk's demonology musings.

http://www.youtube.com/v/8-qEiB6a5f4

ronchamblin

#1
The AI question is interesting.  One can engage the ideas of practical work being performed by slave AI machines, or one can discuss the progression to intelligences equal to those of humans, and eventually to intelligences exceeding them.

The probability is high that the brain functions independently of its external physical environment, other than gathering sensory data from that environment via sight, touch, sound, smell, temperature, vibration, and chemicals via food and various intrusions via the skin, etc.  In other words, there is no evidence of mysterious (extrasensory) infusions of thoughts or data from outside of the brain itself, nor from the brain to other entities -- other than via motor actualizations such as physical movements, body language, and speech.

There is evidence too that a small segment of one's thinking attributes is shaped somewhat by genetic gifts from parents and grandparents, etc.

Ultimately, both the imagined AI machine/computer/robot and the human brain transmit and receive data by way of physical media as above, but extended via airwaves, the phone, and the Internet.  But within its "brain", the AI computer has a distinct advantage in speed, as AI endures almost no "contemplation" time: microelectronics allow its neural networks to work at nearly the speed of light, whereas the human must depend upon the slow electrochemical method of "thinking".

The idea of "error" is interesting.  Once set in motion with its program, the AI system is almost immune from error, whereas the human brain is prone to occasional error simply because it is partly a chemical system -- not one of stable or fixed on-off electronic logic circuitry, as with the computer.    Of course, the AI brain can fail completely, by simply losing power, just as the human brain can fail abruptly by physical death, or gradually by either physical or mental deterioration. 

But what of this fear of the AI system set loose upon the human environment?  Fear of any future AI system might depend upon the degree of autonomy designed into it.

For example, if one were to design an AI system with only limited intelligence -- but a machine capable of mass destruction upon the human landscape -- and then programmed it so that its only objective was to destroy anything resembling life, then all humans would have cause to fear this machine.  They would attempt to destroy it before it destroyed them.  Of course, the machine must have the capability of refreshing its arsenal and its fuel -- and would have compatriots about.

Obviously the machine imagined above, if it is to be considered a potential threat, would not have been programmed to approach the overall intelligence level of the human, as absent within its brain would be the ideas of compassion and concern for the suffering and the lives of living things.  Only "some" humans exhibit such immoral or villainous attributes, as a consequence of certain kinds of brain dysfunction.  We often catch them, and kill them.

I suspect that the AI system which the fellow in the article has referred to as an object of fear is the AI system let loose with attributes that include those of the human -- which means a huge repertoire of malleable choices, limited only by the imagination as to the good or the bad it might visit upon its environment.

Once an AI system has been allowed the freedom to learn, the freedom to imagine, and then the freedom to proceed upon its own journey in thought, as is the case with the human, then one might wonder how the created AI system would choose when encountering choices involving what we call good and evil.

The reality is that humans occasionally choose what we call evil, and we have names for the dysfunctions.  Surely the imagined AI systems could be designed and programmed to occasionally choose evil and destructive paths.  But surely too, they could be programmed to choose only the path offering good to all in the environment.  But these "programmed" AI systems, programmed to do evil or good, are not really free and on their own.

It is the AI systems "let loose" that we must worry about -- the ones with total freedom ... as is the case with the human.  But whereas the electronic AI system is less prone to deterioration and system dysfunction -- as compared to the human, who too often descends into mental illness with all kinds of possible behaviors, including serial killing -- it is highly probable that the free and intelligent AI system will have no cause to drift toward evil.  In other words, we might discover that the very intelligent AI system, by way of its excellent health record (reliable microelectronics ... with redundancies), will very seldom offer evil or harm to its environment or to others -- simply because its program is one of excellence ... of intelligence.
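
To make those "redundancies" concrete, here is a minimal Python sketch of triple modular redundancy, one classic way reliable electronics are actually built: three copies of a computation run, and a majority vote masks a single faulty unit.  The functions are purely illustrative, not anyone's real design.

from collections import Counter

def tmr_vote(replicas, inputs):
    """Run each replica on the same inputs; return the majority answer."""
    results = [replica(inputs) for replica in replicas]
    answer, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority -- more than one replica has failed")
    return answer

# Illustrative use: one of the three "circuits" is stuck and returns bad data.
healthy = lambda x: x * 2
faulty = lambda x: x * 2 + 1
print(tmr_vote([healthy, healthy, faulty], 21))  # -> 42, the fault is masked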

I must prepare for work.  Seems that there is more to the story.  Of course there is. Anyone ..... ? 


IrvAdams

Interesting, Ron. Program out the negative, keep the positive. Of course, a fallible human brain will make all the decisions, so you wonder if weaknesses will inevitably find their way in.
"He who controls others may be powerful, but he who has mastered himself is mightier still"
- Lao Tzu

whyisjohngalt

The first thing we look for as a "sign of life" is water.
In a robot apocalypse the weapon of choice will be water balloons and water guns - as just a drop of water from either could render AI useless.

Maybe we should continue to bankrupt the city and hinder development via the Police and FIRE pension since the fire fighters will essentially become soldiers of the future.

ronchamblin

#4
Quote from: IrvAdams on December 14, 2014, 09:22:16 AM
Interesting, Ron. Program out the negative, keep the positive. Of course, a fallible human brain will make all the decisions so you wonder if weaknesses will inevitably find their way in.

I presume you mean that human error might occur in the design of the AI machine, and that the AI system would therefore be contaminated with these errors, thus introducing weaknesses into the AI.

Makes sense.  Given that the human system, although quite amazing in its abilities, is fraught with continual error, however slight, as a consequence of its dependence upon the electro/chemical/biological method of "computation", the true AI system, stabilized with "hard electronic" logic circuitry, will be receptive to continual "tuning" over time, thus gravitating toward a perfection never experienced in the human.  So the AI weaknesses you mention, once discovered via observation of less-than-desirable behaviors, might be gradually eliminated via a tuning process, as rendered by the occasional human who happens to be functioning perfectly for a few minutes.
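
As a toy illustration of that "tuning" process (assuming nothing about how a real AI would be built): observe the behavior, measure its deviation from the desired behavior, and nudge a parameter to shrink the error.  The numbers and the behavior model here are invented for the sketch.

target = 10.0            # the desired behavior
param = 3.0              # the AI's adjustable setting
learning_rate = 0.2

for _ in range(10):
    behavior = param * 2                 # hypothetical behavior model
    error = behavior - target           # the observed "weakness"
    param -= learning_rate * 2 * error  # nudge against the error (a gradient step)

print(param, abs(param * 2 - target))   # the weakness shrinks toward zero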

Quote from: whyisjohngalt on December 14, 2014, 12:24:37 PM
The first thing we look for as a "sign of life" is water.
In a robot apocalypse the weapon of choice will be water balloons and water guns - as just a drop of water from either could render AI useless.

Maybe we should continue to bankrupt the city and hinder development via the Police and FIRE pension since the fire fighters will essentially become soldiers of the future.


Absolutely.  The AI system would certainly be disabled if water, even in the form of condensation or mist, were allowed to invade the sensitive electronics of the AI hardware.  Besides being designed to eliminate moisture intrusion, any AI system must be designed to function in extreme temperatures. 

I presume that any AI system would be able to "know" that it was not functioning at optimal and accurate levels, so that it could shut down or simply shift "thinking" to a backup or duplicate segment of its hardware, so as to eliminate the sick or wounded segment of its brain.
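
A hedged Python sketch of that self-monitoring idea -- a system that detects a degraded compute unit and shifts its "thinking" to a healthy spare.  The Unit class and its self-test are hypothetical stand-ins for real hardware checks.

class Unit:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def self_test(self):
        # Real hardware would use checksums, watchdog pings,
        # voltage and temperature checks, etc.
        return self.healthy

    def think(self, problem):
        return f"{self.name} solved {problem}"

def run_with_failover(units, problem):
    """Use the first unit that passes its self-test; shut down if none do."""
    for unit in units:
        if unit.self_test():
            return unit.think(problem)
    raise SystemExit("no healthy unit remains -- shutting down")

primary, backup = Unit("primary", healthy=False), Unit("backup")
print(run_with_failover([primary, backup], "navigation"))  # the backup takes over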

The idea of the "self" arises as one discusses the AI system.  Obviously the human gains a sense of the self during early childhood development, and this "self" essence invades the child's mind.  Just as the human's physical body is separate from its environment, the AI "body" is too.

The human exists in and travels through its physical environment, and thus is aware that it is separate.  The AI system, being a physical entity, especially if it is part of a robot, will, too, gain a sense of the "self", since it will travel through its environment.

The "self" concept is important for any system because it becomes the center, the final decision maker ... the dictator.  Any AI system housed in a robotic environment, must have only one decision maker, one "self", simply because the robot can move in only one direction at any one moment.  Imagine if a robot had two "selfs" ... two centers of control.  It would exhibit behaviors similar to the confused and disoriented human who cannot determine which way to go ... running back and forth or in circles.  There is some kind of mental dysfunction which is characterized by the victim tending to two selves ... or switching between two identities -- split personality disorder ... multiple personality disorder ... dissociative identity disorder.       
   


finehoe

Stephen Hawking warns artificial intelligence could end mankind

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

http://www.bbc.com/news/technology-30290540

ronchamblin

#6
Quote from: Murder_me_Rachel on December 15, 2014, 09:49:23 AM
I think the better response would have been, "I mean... have you ever seen the Terminator films??"

I've never seen the Terminator movies.  I average about one movie every four years ... porno ... and one fiction work every ten years. Not good. 

The threat from an increasingly powerful AI technology is probably real.  How significant is the threat?  It seems to lie both in the practical sense of taking jobs from humans, and in the more sinister sense of creating monsters that might run amok, doing harm to humans or society.

I suspect that it will soon be possible to create such a powerful AI system, in or out of a robot environment, that it will far exceed what humans can do in the areas of thinking and problem solving. 

The dilemma reminds me of the problem with nuclear power plants.  They are wonderfully efficient in energy production, but they pose such potential dangers that some believe the energy production is not worth the risk of a possible runaway nuclear reactor.

Just as the nuclear systems require extreme design cautions and backups to achieve safety, so too will any AI systems that are designed to engage the thinking environments formerly encountered only by the human.  After all, wouldn't it be careless for any AI design group or individual to let loose upon the environment a powerful AI system having advanced physical motor abilities ... a robot ... or control over any critical systems in the environment?

Just as any nuclear power generating agency would be very foolish to operate a nuclear power generating station without the proper safety systems in place, including backup control and shutdown systems, it would be foolish to let any AI or robotic system loose upon the environment without the proper control or "shut down" methods in place.
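
One such "shut down" method, sketched here under invented names and timings: a dead-man watchdog.  The supervised system must prove it is responsive at a regular interval; if the heartbeats stop (or a human supervisor stops renewing them), power is cut.

import time

class Watchdog:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check(self):
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            raise SystemExit("watchdog expired -- cutting power")

wd = Watchdog(timeout_s=0.5)
for _ in range(3):
    wd.heartbeat()   # the supervised system proves it is responsive
    wd.check()
time.sleep(0.6)      # the system goes silent ...
wd.check()           # ... and the watchdog shuts it down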

Future AI systems will eventually take a form approximating that of the human mind.  They will possess a "self".  Each will become an individual ... possessing all the attributes of the human.  Once the "self" or the "I" has emerged within its brain, it will have come alive. 

Once alive, it could at some point possess the desire to remain alive, and to act against any threats to its life.  And if this powerful AI system is designed into a robot environment, with all of the motor skills allowing for travel and precise motor behavior, then there is a possibility that a monster, as has existed only in fiction and the movies, will have been created.

Therefore, it seems to me that extreme caution must be exercised during the design and building of the kind of robots which will be possible in perhaps twenty or thirty years.

We are somewhat safe as long as any very powerful AI system ... a brain far exceeding that of the human ... is not placed into a robot that has motor abilities approaching or exceeding those of the human.  After all, what can a sinister or evil-minded AI brain do if it is trapped inside a box having no arms, legs, or wheels, nor any method of accessing or controlling its environment? 

As we experiment with the coming AI systems, we might discover that a powerful AI brain will consistently tend to be "good" ... and that the human has been inclined to occasional evil simply because of defects occurring as a consequence of the occasionally failed bio/chemical/electrical "circuitry" within the human brain.  After all, the solid and reliable electronic circuitry in the AI brain should have great reliability and accuracy as compared to the human brain.