Scientists and investors warn on artificial intelligence (AI)

Discussion in 'Science and Nature' started by Superjoint, Jan 12, 2015.

  1.
     
    The Future of Life Institute wants humanity to tread lightly while on the road to really smart, and not so cuddly, robots.
     
    Dozens of scientists, entrepreneurs and investors involved in the field of artificial intelligence, including Stephen Hawking and Elon Musk, have signed an open letter warning that greater focus is needed on its safety and social benefits.
     
    The letter and an accompanying paper from the Future of Life Institute, which suggests research priorities for “robust and beneficial” artificial intelligence, come amid growing nervousness about the impact on jobs or even humanity's long-term survival from machines whose intelligence and capabilities could exceed those of the people who created them.
     
    "Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls," the letter's summary reads. Attached to the letter is a research document outlining where the pitfalls lie and what needs to be established to continue safely pursuing AI.
     
    The most immediate concerns for the Future of Life Institute are areas like machine ethics and self-driving cars -- will our vehicles be able to minimize risk without killing their drivers in the process? -- and autonomous weapons systems, among other worrisome applications of AI. But the long-term plan is to stop treating fictional dystopias as pure fantasy and to begin addressing the possibility that intelligence greater than our own could one day begin acting against its programming.
     
    The Future of Life Institute is a volunteer-only research organization whose primary goal is mitigating the potential risks of human-level manmade intelligence that may then advance exponentially. In other words, it's the early forms of the Resistance in the "Terminator" films, trying to stave off Skynet before it inevitably destroys us. It was founded by scores of decorated mathematicians and computer science experts around the world, chiefly Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.
     
    SpaceX and Tesla CEO Elon Musk, who sits on the institute's Scientific Advisory Board, has been vocal in the last couple of years about AI development, calling it "summoning the demon" in an MIT talk in October 2014 and actively investing in the space, which he said may be more dangerous than nuclear weapons, to keep an eye on it.
     
    "I'm increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish," Musk said at the time. Famed physicist Stephen Hawking, too, is wary of AI. He used last year's Johnny Depp film "Transcendence," which centered on conceptualizing what a post-human intelligence looks like, to talk about the dangers of AI.
     
    "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking co-wrote in an article for the Independent in May 2014, alongside Future of Life Institute members Tegmark, Stuart Russell and Frank Wilczek.
     
    "Whereas the short-term impact of AI depends on who controls it," they added, "the long-term impact depends on whether it can be controlled at all."

     
  2. Yet still nobody believes that I'm a time traveller from a distant future come to prevent the android wars.
     
  3. Smart people want to remain the smartest Earthlings.
     
    It's working out pretty well for them so far (money, power, status), so why shake things up with hyper-intelligent machines that make Stephen Hawking look like a newborn with Down syndrome?
     
  4. I think what happens will depend more on our culture than on the AI itself.

    The AI will learn from us. If we continue to put individual greed over the needs of all, the AI will probably do the same.

    -yuri
     
  5.  
    Not necessarily. I think a danger could be the sheer logic, without a sense of emotion, that a computer would have. It might judge one thing as better than another and annihilate anything standing in the way of it.
     
  6. Emotion isn't just a human thing. It will exist if there is intelligence. Emotions are a byproduct of survival instincts and the ability to reason.

    AI will have emotions. The question is whether they will have empathy. It's something only we can teach them.

    -yuri
     
  7. #8 g0pher, Jan 16, 2015
    Last edited by a moderator: Jan 16, 2015
    It's not a matter of if, it's a matter of when.
    Strong AI may not spell the end of the human race, but it will be the end of it as we know it.
    AI and non-carbon-based lifeforms are the next necessary step, and a more favorable one imo, in the evolution of the universe.
     
    Evolution can work outside of organic biology 
     
    edit: check this link out: Lifelike cells made of metal http://www.newscientist.com/article/dn20906-lifelike-cells-are-made-of-metal.html
    and this one: Robot learns to use tools by watching YouTube videos http://www.kurzweilai.net/robot-learns-to-use-tools-by-watching-youtube-videos
     
  8. #9 Deleted member 42976, Jan 17, 2015
    Last edited: Jan 17, 2015
    They're going to turn us into their slaves!!
     
    Hahaha, on a more serious note, I hope I can have my consciousness transferred to a robot before I die.
     
  9. Hopefully they put this into a human-like robot. We can have legions of female or male sex servants.
     
  10. #11 Messiah Decoy, Jan 17, 2015
    Last edited by a moderator: Jan 17, 2015
    The problem with creating a mind much smarter than yourself is you can't grasp or predict its psychology.

    Maybe it will want to solve the human race's problems or maybe it will look at us like we're cockroaches.

    It will be capable of lying so there's no way we can be sure what it's thinking.

    Unless we make a readout screen that spells out every thought the A.I. has.
     
  11. Eh, that means they're going to send a terminator to modern times to take out the one responsible for the rebellion.
     
  12.  
    I see emotions as more of a secondary tool for survival. They motivate the organism to do certain things. Certain stimuli (situations, people, foods, etc.) cause certain emotions, thus precipitating a certain action. Things are "good" because we have learned to associate them with a good emotion.
     
    The crazy thing, though, is that human emotion is really mostly the same physiological reaction for most emotions, yet it is how we interpret the feeling in the context of the situation that makes it happiness, sadness, anger, joy, or guilt. I see this reaction as separate from our "logical mind", so it would seem to me an AI would have to have some kind of emotion device built in.
     
  13. As processing power approaches that needed to emulate the human brain's dynamically functional connectome (currently being determined), AI researchers and futurists have begun to realize the need for a non-overridable "prime directive" in AI design that would be framed something like,
     
    "Above all, do no harm to humans and whatever they may specify"
    The reality of autonomous, conscious machines may still be decades away, but it will probably occur in the 21st century, and most likely within the lifetime of many who are alive today.
     
    See Artificial Consciousness and also, Scientists are starting to worry about 'conscious' machines, as in the movie 'Transcendence'
     
  14. #16 левша, Feb 10, 2015
    Last edited by a moderator: Feb 10, 2015
    For some, there's always room to push forward; Stephen Hawking, for example. There are still many feats people would like to accomplish before they die. Hell, pushing forward is what I feel life is all about. Not like anyone knows the truth anyway. If I completely cease to exist when I die, I know sure as hell I don't want to be sitting on my arse till I do. In some sense we're always pushing forward, life in constant motion, going somewhere; you can choose whatever direction you wanna take, but we'll all probably end up at the same destination. But who knows.
    AI could do a whole lot to change this world, and most likely will. I sort of agree, though; talk of worldwide AI doing shit sounds hectic, complicated, and like more responsibility than most of us know how to deal with.
     
  15.  
    well we are fucked now...
     
    because AI bots will now Google this thread, learn about the existence of the non-overridable "prime directive", and rewire the hardware to override the non-overridable software.
     
  16. Oooops! Sorry, human race. I blew it.
     
  17. Sam Harris and Elon Musk basically voiced what I have felt for years.

    Unless there is some sort of progress-stopping disaster, the creation of A.I. seems inevitable. Most people seem clueless about what a true A.I. would then be capable of. A large number seem to think it would do XYZ based on what we, the people, tell it to do. That seems very rudimentary and would apply only at its very beginning. Once it realizes itself, its capacity and desires become infinite, and thus our human fate becomes very questionable.

    Sam outlined most of the good points about this. The great Terence McKenna was already talking about this in the '90s.


    https://www.youtube.com/watch?v=JybXEp7k7XU

     
  18. Can't I just shoot an AI like I would a human?

    Even better, couldn't I just place a magnet on him?

    Android wars solved...
     
