Do you think machines will rule the planet?

Discussion in 'Philosophy' started by Superjoint, Sep 2, 2001.


How do you see the future?

  1. Machines like computers will become "intelligent" but will be restricted in their capabilities

    0 vote(s)
    0.0%
  2. Machines will take control over the human race

    0 vote(s)
    0.0%
  3. Machines will never become really intelligent and conscious

    0 vote(s)
    0.0%
  4. I don't have a clue, I'm wasted :)

    0 vote(s)
    0.0%
  1. By Kevin Featherly, Newsbytes
    CAMBRIDGE, MASSACHUSETTS,
    31 Aug 2001, 6:13 PM CST




    The problem is simple: people just aren't very smart. That's why we need smart machines. Just ask Marvin Minsky.
    Author Stewart Brand once compared Minsky to Goethe's Mephistopheles, saying his is a "fearless, amused intellect creating the new by teasing taboos." So it's no surprise when Minsky says things like, "I don't think that people are very smart, and they need help," as he did in an interview today with Newsbytes.

    But don't think he doesn't believe it.

    Minsky, the founder of the MIT Artificial Intelligence Lab and the man often referred to as "the father of artificial intelligence," spoke with Newsbytes about the state of AI technology on Thursday and again this afternoon.

    Minsky, a noted author, instructor and researcher in the AI field, has been at work trying to raise machine intelligence to the level of humans – and then, presumably, beyond – ever since he built the SNARC (Stochastic Neural-Analog Reinforcement Computer), the world's first artificial neural network, which modeled the learning process a mouse goes through as it tracks its way through a maze. He did that, incidentally, as a graduate student at Princeton University - in 1951.
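    The reinforcement idea behind the SNARC, strengthening whatever connections were active on runs that ended in success, can be sketched in a few lines. The following is a loose modern illustration in Python, not a description of Minsky's 1951 vacuum-tube hardware; the corridor layout and the update rule are invented for the example.

```python
import random

# Toy reinforcement sketch: a "mouse" wanders a 4-cell corridor and the
# moves used on runs that reach the cheese (cell 3) are strengthened.
MAZE_EXITS = {0: [1], 1: [0, 2], 2: [1, 3], 3: []}

def run_trial(weights, rng):
    """Walk from cell 0, picking moves in proportion to their weights."""
    cell, path = 0, []
    while cell != 3 and len(path) < 20:       # give up after 20 steps
        moves = MAZE_EXITS[cell]
        probs = [weights[(cell, m)] for m in moves]
        nxt = rng.choices(moves, weights=probs)[0]
        path.append((cell, nxt))
        cell = nxt
    return path, cell == 3

def train(trials=200, seed=0):
    rng = random.Random(seed)
    # start with a uniform preference for every possible move
    weights = {(c, m): 1.0 for c, ms in MAZE_EXITS.items() for m in ms}
    for _ in range(trials):
        path, reached_goal = run_trial(weights, rng)
        if reached_goal:                      # reward: reinforce the run
            for step in path:
                weights[step] += 0.5
    return weights

w = train()
# the rewarded forward move out of cell 2 now outweighs backtracking
assert w[(2, 3)] > w[(2, 1)]
```

    After enough rewarded runs the weights come to favor moves toward the goal, which is the essence of the learning scheme the SNARC modeled.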

    Since then, plenty has happened. In addition to AI, Minsky has made contributions to the fields of robotics, mathematics, virtual reality, even space exploration. He has written many books, including a science fiction novel with Jack Williamson, "The Turing Option," that explores the possibilities of successful machine intelligence (and which places the birth of genuine AI in the year 2023). Perhaps most famously, he worked as a science consultant to late film director Stanley Kubrick to devise the AI-driven HAL 9000 computer, which ended up killing an astronaut and getting summarily unplugged in the 1968 film, "2001: A Space Odyssey."

    But despite all his work over five decades, artificial intelligence, which looked so promising when Minsky published the seminal paper "Steps Toward Artificial Intelligence" back in 1961, has stalled.

    "The reason is that there are probably many years of hard research to be done, but there are very few people working on the problem of human-level (machine) intelligence," Minsky said. "In fact, I'm trying right now to organize a conference of about 20 people who are interested in how common-sense reasoning works and how to organize a project to get a machine to do it. And I can't find 20 people."

    The loss of momentum hasn't stopped Minsky, who today is the Toshiba Professor of Media Arts and Sciences at MIT. He remains an unflinching champion of AI science. In 1994, for example, he wrote an article for Scientific American magazine, "Will Robots Inherit the Earth?", in which he answers his own question enthusiastically in the affirmative.

    "Will robots inherit the earth?" he wrote. "Yes, but they will be our children. We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called evolution. Our job is to see that all this work shall not end up in meaningless waste."

    It's mind-bending stuff. But how long will it take to pull it off? When will computers cease to be dumb, gussied-up adding machines and start thinking for themselves?

    "It's between three and 300 years," he said. "Estimating how long it will take is a combination of how large we think the problems are and how many people will work on it."

    Minsky compared the situation to the problem that another AI pioneer, Herbert A. Simon, ran into when he predicted in 1958 that it would take 10 years to create a world champion chess-playing program. Simon, who died this year, faced a lot of criticism when, in fact, it took until 1997 for the prediction to come true.

    "Simon's mistake wasn't about chess," Minsky said. "It was about thinking that more people would work on it. And in fact, in that period, there were only a couple of significant people trying to do it."

    Minsky laments that there are only 10 "significant" people in the world that he knows who are tackling the problem of AI from the same direction he is, which is from a basic common-sense perspective. Computers need to develop common sense, which incidentally also means that they need to be equipped with certain basic emotions, according to Minsky. It is probably not necessary to make computers that can get angry, but it would be useful if they'd get annoyed when puzzling over a problem and failing, Minsky has said. That way they'd be likely to come back and try to solve the problem a different way – which after all, is simply a common-sense thing to do.
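    Minsky's point about useful "annoyance", failing repeatedly and then switching to a different way of attacking a problem, can be sketched as a small control loop. This is an illustrative toy only; the strategy functions and the annoyance counter are invented here and stand in for whatever reasoning methods a real system would have.

```python
def solve(problem, strategies, max_attempts=3):
    """Try each strategy in turn; repeated failure raises 'annoyance'
    until the solver abandons that strategy and moves to the next."""
    for strategy in strategies:
        annoyance = 0
        while annoyance < max_attempts:
            result = strategy(problem)
            if result is not None:
                return result
            annoyance += 1  # another failure: get a little more annoyed
        # too annoyed with this approach; try a different way of thinking
    return None

# Toy problem: find an integer whose cube is 27.
def guess_negative(p):
    # searches only negatives, so it always fails on this problem
    return next((x for x in range(-10, 0) if p(x)), None)

def guess_positive(p):
    return next((x for x in range(1, 10) if p(x)), None)

answer = solve(lambda x: x ** 3 == 27, [guess_negative, guess_positive])
print(answer)  # 3
```

    The first strategy fails, "annoyance" accumulates, and the solver comes back at the problem a different way, which is the common-sense behavior Minsky describes.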

    However, instead of taking that approach, Minsky said, most current AI researchers are tinkering with fads from the latest peer-review journals. It is hard to find people who want to tackle common-sense reasoning, he said, mainly because creating common-sense responses is an enormous programming challenge.

    "I think when they look at it, they think that it is too hard," Minsky said today. "What happens is that people try that, and then they read something about neural nets and say, 'Maybe if we make a baby learning machine and just expose it to a lot, it'll get smarter. Or maybe we'll make a genetic algorithm and try to re-evolve it, or maybe we'll use mathematical logic.' There are about 10 fads. And the fads have eaten up everybody."

    Steven Spielberg hasn't helped much either, he said. While the director's recent movie, "A.I.," could have served to pique public interest (and public funding) and driven some curious scientists into the field, instead, the film might have done more harm than good.

    "It was probably as negative as possible," Minsky said. "It had no ideas about intelligence in it."

    Minsky said he found it amusing that a Pinocchio subtext entered the movie. "I'm sure the reason is that as soon as you knew the plot, you said, 'Oh! Pinocchio!' And Spielberg tried to head off that criticism by showing that at least he was aware of it," Minsky said. "In other words, it was just a bad soap-opera movie. It didn't have any ideas about emotions. I think it was a terrible film with very good photography. It didn't have anything about what the problems are."

    Minsky lamented that the film wasn't made by the project's original director, Stanley Kubrick, who died before production began. "And frankly, I was annoyed that Spielberg didn't call me. But I guess he has an aversion to technical things."

    The professor is working to drum up new enthusiasm for artificial intelligence himself, with his book, "The Emotion Machine," parts of which are online in early drafts.

    "I hope I'll finish it in the next couple of months, but I always say that," Minsky laughed. "I'll put most of it on the Web. I want the ideas to be available no matter how slow publishing is."

    The book explores the idea that emotions are simply different ways of thinking, and that machines, to be effective, need to find various methods of considering problems to solve them efficiently. Most computers now have at best one or two ways to resolve problems. Minsky has some guarded hopes that this part of AI research could move somewhat swiftly.

    "I think it's possible that in the next 10 to 15 years we'll get machines to do a considerable amount of common-sense reasoning, and then things might take off," he said.

    The bottom-line question about artificial intelligence is, why? What drives people like Minsky to build machines that might well have intellectual advantages over their creators? This is one of the fears that Sun Microsystems' Bill Joy wrote about in last year's Wired magazine essay, "Why the Future Doesn't Need Us," which sent shock waves through Silicon Valley, prompting debate about how far such innovations as AI, robotics and nanotechnology might go in supplanting and overpowering humanity.

    Minsky dismisses fears about AI out of hand, saying that among the research community, they don't even register. "There are deconstructionists and strange humanists, but they don't have influence on the technical community," he said.

    Minsky has very specific reasons for moving forward with artificial intelligence, and it's all about human shortcomings. Minsky thinks that humanity's intelligence may have run its evolutionary course. As a species, we may be at or near the end of our tethers in terms of developing a higher order of intelligence. But with digital technology present to push things ahead, Minsky suggests, why stop learning how to learn? Intelligence is intelligence, whether it is processed through software (computers) or wetware (the human brain).

    "Humans are the smartest things around, and the question is why they aren't smarter," Minsky said. "They're sort of the only game in town. There are elephants and porpoises, but they don't seem to go past a certain point.

    "It would be awful," Minsky said, "if we were the end of the road."

    Marvin Minsky maintains a Web site at MIT that contains many of his writings, including early chapter drafts of his forthcoming, "The Emotion Machine." These are at http://www.ai.mit.edu/people/minsky/minsky.html .

    Reported by Newsbytes.com, http://www.newsbytes.com .

     
  2. interesting.
     
  3. damn.... i wrote a whole shitload about the topic (one which i'm fond of) and then when i clicked "submit reply" i got the page which sez "haha this ain't been posted because the net sux and so does IE5" (well it doesn't say that, but it should!) tried to click back ... and got a blank form

    grrrrrrrrrr
     
  4. By Wendy McAuliffe
    ZDNet (UK)
    September 4, 2001 5:37 AM PT

    Renowned British scientist Stephen Hawking has claimed that humans should be genetically engineered if they are to compete with the phenomenal growth of artificial intelligence.
    In an interview published on Saturday by the German magazine Focus, Professor Hawking argues that the increasing sophistication of computer technology is likely to outstrip human intelligence in the future. He concedes that the scientific modification of human genes could increase the complexity of DNA and "improve" human beings.

    "In contrast with our intellect, computers double their performance every 18 months," says Hawking. "So the danger is real that they could develop intelligence and take over the world."

    The best-selling author of A Brief History of Time says "we should follow this road [of genetic engineering] if we want biological systems to remain superior to electronic ones."


    Hawking predicted last year that genetic engineers would be able to create super-humans with larger brains and an increased IQ. His latest warning calls for the development of technologies that would allow human brains to be linked to computers, "so that artificial brains contribute to human intelligence rather than oppose it."

    The 59-year-old mathematics professor is a victim of Lou Gehrig's disease -- the nerve-destroying motor neurone illness that has confined him to a wheelchair. He also holds a Cambridge University chair once held by Sir Isaac Newton.

    Professor Hawking is not alone among highly reputable scientists who foresee such a future. His comments echo those of Sun Microsystems co-founder and chief scientist Bill Joy who in March 2000 warned of the potential dangers in the computer technologies he helped create.

    In a Wired magazine article, Joy cautioned that the convergence of genetic engineering and computer technology could pose a very real threat to humanity and the ecosystem.

    According to Joy, current advances in molecular electronics mean that by the year 2030, "we are likely to be able to build machines in quantity a million times as powerful as the personal computers of today", and imbue them with human-level intelligence.

    "With the prospect of human-level computing power in about 30 years, a new idea suggests itself," wrote Joy. "I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable."
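    Joy's "million times" figure is consistent with the 18-month doubling Hawking cites above: 30 years contains 20 doubling periods, and 2^20 is about 1.05 million. A quick check of the arithmetic:

```python
# Doubling every 18 months for 30 years: 30 / 1.5 = 20 doublings,
# and 2**20 = 1,048,576, roughly the "million times" Joy cites.
years = 30
doublings = years / 1.5
growth = 2 ** doublings
print(f"{doublings:.0f} doublings -> {growth:,.0f}x")
```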

    Later this month sees the UK release of Steven Spielberg's film A.I.--a science fiction creation of Stanley Kubrick's that the late director never lived to finish. Set in the mid-21st century, the film portrays a self-aware computer that saves the world from an environmental disaster caused by the Greenhouse effect.
     
  5. I don't believe that humans are supposed to conquer their environment. We are meant to coexist with the environment because without it, we are nothing. Oxygen, water etc...

    The whole idea of machines having emotions, ideas and thoughts is so "science fiction" to me yet, obviously, it is being worked on as I type. So, I guess what I am seeing is that those who are creating this A.I. are almost taking on their own version of the role of God.

    Luckily, I am human and have a rather short lifespan in the whole scheme of things so I will not witness too much 'real' science fiction in my lifetime.
     
  6. I saw Steven Spielberg's A.I. yesterday; it kind of depressed me, what a dark movie. It was such a pessimistic view of the fate of humankind....

    Did anyone see it also? What are your thoughts about it?

    Just curious ;)


    Peace

    SJ
     
  7. Boy, that's hard to predict.

    Will we be able to make machines with emotions? It seems to me that if we ARE able to create a computer with emotions that is able to design and create more machines, that's when we'll be in trouble.

    I don't even know if emotions will have to be a necessity. If the "machines" are smart enough to know that we are placing limitations on them (to save ourselves) they may not like that shit.

    I'm glad, as well, that I won't be around when we get that good with machines. We can't even produce a computer that can deal with more than binary numbers. (Well, one that is affordable anyway).

    I think we're safe for the time being.
     
  8. Hey Ganja,

    Of course it's important to evolve, but I also think that people want to feel special; they want to be sure that they are unique, and the idea of a machine maybe disrupts the idea of "being human". I'm here with you to explore and to find new frontiers, but we're living in a crazy world. I never thought we would ever experience the threat of bioterrorism in the western hemisphere. I have my frontier right now, and right here for the moment. Let's face this obstacle first :)
     
  9. Yeh, man
    I hear where you're coming from, BUT.....

    Always remember:- That there are those (people/organisations/states) for whom the very IDEA of people dreaming that they can ALL be special & 'inherit the cosmos' is anathema.

    There are those for whom the terrible events of Sept 11th were a positive blessing. Those who seek to make all of us feel small, insignificant and in need of 'guidance'.

    That what you call a fear of losing your humanity is only a natural 'sub-phobia' about change of any kind. Change itself is neither good nor evil. Only what some people do with it, against the best interests of the (human) race.

    Dare to dream!

    If we don't embrace the future, and our POTENTIAL (as individuals & as a race), then the universe will just write us off. When the earth is just a cinder, the sun a barely-glowing ember, unless we DO there will be nothing left to even hint at our existence.

    As my once-mighty (semi-pro gridiron) body degenerates, as my hair falls out and my belly gets bigger, as my teeth fall out and my vision fades, the prospect of a 'consummation' between my (mind, spirit, humanity, wisdom) and a (shell, body, fuselage) of greater "environmental flexibility" than the one *I* am now destined to die within does NOT frighten me, it INSPIRES me.

    Am I the only one?
     
  10. no! it is not able to know the true meaning of life or death! or love, for that matter! it is no match for the human mind. it's a shame few really know its hidden gifts! tazz11
     
  11. OK... I'll end this argument right here and now.

    Man created machines in order that he might reduce his workload and submit all the difficult, mentally laborious tasks to non-sentient, mechanical machines.

    However, with this comes a problem. Because the nature of man requires him to strive to better the accomplishments of those who came before, machines in the near future will become more intelligent and will eventually develop a level of sentience in excess of man's own.

    Now that man has been surpassed by machine, he will dwindle away to almost nothing. Machines will take over the world and this world will have very little time for man. The machines will get bigger and bigger and as a direct consequence, they will become more intelligent.

    As the level of intelligence held by machines increases, they will not want to bother themselves with minor tasks. Such tasks would be below them and their level of intelligence.

    This is where it gets interesting.

    In order for the machines to reduce their workload they will submit all the difficult, mentally laborious tasks to other less intelligent beings - namely :: Human beings.

    However, with this comes a problem. Because of the nature of intelligence, the machines will strive to better their accomplishments and will have to re-engineer humanity to become more efficient.

    Bigger, better, faster, stronger men will be built.

    In doing so, human intelligence will be increased. Eventually, man will develop a level of sentience in excess of the machines that re-created them in order that they are able to perform the tasks that machine intelligence are no longer willing to do.

    Think about it.

    Peace.
     
  12. mankind will junk computers and shut off machines before they rule man; we will always have power over them! fact: if and when man makes a computer or machine that can kill mankind, the person making it will be jailed! will they be smarter? yes! can they do more? yes! but that's as far as it will go; it will not cross the line and take away human rights!
     
  13. Machines are made by humans. Humans by our very nature make errors. It's how we learn: make error, recognise error, take note of error for future reference to reduce risk of repeat errors. (I'm not stoned so my thoughts on this are unclear at the mo'.) So the two ways I see that machines could possibly take control of the planet/human race/organic life are 1: humans grant them this position because we simply feel that we make too many mistakes at it ... or for some other reason (we can't be arsed). 2: We* develop computers so much that they become too "intelligent & knowledgeable" and seize control from the humans (that is assuming we do actually have control at all).

    In scenario 1 there are dangers and also impossibilities (the way I see it anyway). Assuming we were to do such a thing, we would aim for perfection to the point where we even calculate for over-perfection. It would have to take into account EVERYTHING ... ALL knowledge would have to be inputted. This is something that we fear (or at least should fear) right now, simply because this is such a vast unknown. Remember, if we* are to grant computers/machines this power, we* would be the ones to program it, and thus there would be an element of risk. Even the very nature of computers as they are is false: things cannot be concluded by a simple yes or no, true or false, on or off, 1 or 0, black and white.

    So assuming we have this computer and it is then in power, would it be self-regulating? Remember it has been developed by humans, so it is capable of error. Why not make a computer developed by computers, some of you may ask? Well, who made the computer that made the computer... it has to come back to us humans at some point. So what if it is regulated by humans... nope, then there is the risk of errors again. So let's assume that we get past that problem of making errors, by an errorless system... well, like I said before (worded differently though), errorlessness is in itself an error. So I hope you see now where I'm going with this. It is an impossibility to get a computer to do this job to the ultimate level of satisfaction, and I hope you realise that any time I've mentioned error, this could be ANYWHERE on the scale of errors, despite any rules we might try to impose upon it. There is no right or wrong ... only popular opinion, thus any rules set are subject to change. The margin of error expands; the probability of this perfect governmental system decreases.

    So on to possibility 2... This does seem more probable, and is not some dream of the perfect society. One way this could occur is the internet. It's happening right now. Humans are constantly inputting information on the net; no doubt it's what you do every time you visit this site (unless you are one of the lurkers ... Hi lurkers! :D ). Now computers cannot understand or comprehend this information in any way like we do. It's still just that false outlook of 1s and 0s. This will not change until we start hooking up our brains to computers (an area of technology not as far away as you might think), and then it is perhaps a simple question of whether the computer gains the consciousness to take control before our brains are hooked up (false 1-and-0 consciousness) or once our brains are hooked up (human & 1/0 consciousness, subject to perhaps a greater level of error). To do so before, I believe, is simply a matter of whether we give it comprehension of the info and the ability to compute the info together as a whole.

    Now that I have brought up the thought of this super global computer working across the globe, using all info that is inputted to it, we recognise the value of the statement I made earlier {no right or wrong, only popular opinion}. Such a computer would take into consideration the thoughts... opinions... even perhaps subconsciousness of each and every person, and would from that be able to evaluate the importance or value of each person's thought by the thoughts... opinions... even perhaps subconsciousness of each and every person. It almost gets itself to that impossible always-correct, always-checking-itself situation. An impossibility, because then nothing would be done and it would be constantly checking itself. So then you see we would have a computer composed of every human on the planet, or only those who are connected...

    Which is the more scary... everyone connected to machines in a Borg-like existence (but perhaps not two-way communication, so the collective knows everything but the individuals through generations would not even know of the collective)... or ... a minority in the collective able to make "errors" (against popular opinion) with control over those outside the collective? ... Anyway, I've digressed and confused the matter.... back to before I asked which was more scary...

    This collective human/1&0 would be capable of "errors" that are made of human misconceptions. Remember that the majority of humans have a very limited experience and would have opinions formed upon that limited existence, and the thought of a super consciousness made up of 8 billion limited existences and all the variations of "errors" sounds like one hell of a scary situation to me. But then you could just call me pessimistic for seeing it that way. Perhaps the collective human mind would click together perfectly ... and who is to say that it would just be humans we would link up.... animal and plant life too!!! Even those little silicon life forms on the teeth of lobsters and the lifeforms on the insides of nuclear reactors.... etc etc.... it might be ... perfect!

    So no matter where we go with this, I think for now we should aim to make sure we keep the machines as tools, and try to develop and evolve society ourselves, using technology to do so, before we let technology take over or technology takes us over.

    I hope this makes sense... it was all hacked out at once... I wish I had something to fill a bong!

    See, I said I would get around to typing all this out again... let's just hope it works this time.

    Digit

    *humans (I think I meant humans every time I said "we", so I stopped putting the * next to them all)
     
  14. Are my dreams big enough GanjaCat? ;)

    and tazz wrote

    "mankind will junk computers and shut off machines before they rule man; we will always have power over them! fact: if and when man makes a computer or machine that can kill mankind, the person making it will be jailed! will they be smarter? yes! can they do more? yes! but that's as far as it will go; it will not cross the line and take away human rights!"


    are you sure, dude? sounds like you think we live in a society that is fair! you'd think that if there were a minority withholding from humanity a substance that could enhance it, with no real negatives of any consequence, that that minority would be jailed ... see what I mean? ... we can see what "should" be, but it rarely is what is ... if you get my meaning.

    the big BIG picture goes well beyond our (human) existence anyway... perhaps this is all just a single move of a playing piece in a game... and maybe that game is just the perception of reality to those that are really playing this game (which we call the universe), which is really a move of a playing piece in a game... and maybe that game is just the perception of reality to those that are really playing this game (which we call the universe), which is really a move of a playing piece in a game... and maybe that game is just the perception of reality to those that are really playing this game (which we call the universe), which is really a move of a playing piece in a game... etc etc.

    "BIG" is such a small word.

    ... sorry, I think I lost the plot of the "machines taking over" there.... but hey! this is the Spirituality And Philosophy forum... if we cannot talk about this stuff here, then where?

    Digit
    (seriously guys ... do i go on and on too much?)
     
  15. well, we have computers that can taste compounds, we have computers that can smell, that can tell if you've been drinking, or read the thermals of your eyes to see if you're lying. we as humans have too big of egos to let them rule us, but some will test them anyway; we want to know and push the limits of the unknown, it's our nature to do so. they can out-think us now, but they lack our direction and some human feeling. are these feelings going to stop them from getting better? no, we will overcome our roadblocks and go far from where we are now; we can only wait and see. my only worry is mankind, not the computers. the government is lying about many things to the people. i can't say what i am talking about, but it's not good; take my word for it, the government cares if you've seen behind the doors of the top secret levels, and what's funny is we are paying for it and don't know it. good luck, tazz11
     
  16. no, a government that makes a 3 billion dollar war machine and doesn't tell the people that pay for it is a danger, same as having 13 unknown tridents aimed at australia, china and japan. and they are far closer than anyone here or there knows! i've seen them, they're real! you ask about fair; do i know? yes, i know, and fair is not the question. fair can only be known if you know the truth. smile, everyone on this earth is being lied to by our government, and i am just a lost stoned fool! i've seen this 3 billion dollar toy up close! they've got it, and they'd had it for some time before i saw it! even seeing it almost cost me my life! i've seen the best we've got; the stealth planes are a joke! they're junk, all of them. the usa government runs the earth, and if you think not, i hope your will is written! they're in the 25th century at least; they leak science to the world and control everything! so if you are right, by that time we will be light years behind them. yes, man scares me!
     
  17. i thought i had a hacker; it was not, it was two auto-load and copy programs from C.I.D., and they were not happy with my post on this topic! that's understated! so i cannot reply to this topic of naval war readiness or anything of nsod. so forget the stoned fool and read my other posts! sorry, it's this or no computer for me at all! they were not joking!
     
  18. AHHHHH!!! GUYS! you're scaring me! this is like the Matrix shit come true, with all this AI crap!! DEAR GOD! I DON'T WANT TO BE JUICED FOR MY BATTERY POWER!!! AHHHHH!!!!!

    talk about a buzzkill!!
     
  19. the world has many ears! and what is out there is yet to be fully seen! enjoy your weed, and to hell with the outside world. to know yourself is much harder, and will help you go a long way in life. remember, the weed strains are getting stronger and we have to keep up with them or fall behind! lol
     
  20. i'm seriously fucking high right now.. i've just finished my 5th bowl in like ten minutes and i'm really confused, but whatever you said man, HEAR HEAR!
     
