Does artificial intelligence create consciousness?

Discussion in 'Science and Nature' started by Tokesmith, Sep 28, 2014.

  1. You'd better hope not! Once it realizes it's smarter than its creators, whoa Nelly!

     
  2. Yes.
     
    Your consciousness is a result of chemical reactions in your brain. If AI can replicate these reactions, then it can create consciousness.
     
     
     
  3. I feel there's a point where AI will take over the progression of its own technology, and then we will have little to do with it. We have to figure out how to permanently instill a need to keep humans safe, because even if they don't wish to eradicate us, why would they hesitate to roll right over us if we stand between them and the resources they want?

    Wouldn't AI have to involve self-reprogramming? Otherwise it would just be regular programming with pre-determined outcomes.
     
  4. #24 Nerd139, Oct 12, 2014
    Last edited by a moderator: Oct 12, 2014
     
     Going to reply to everyone all at once
     
    My fear of machines stems from movies and comics. I feel that we could live in harmony, but if the machines are like us, why would they live in harmony? Humans are violent, and I don't see humanlike machines being any different. In The Matrix we turned out to be the aggressors, but that could very well happen again if they don't serve us. What if they no longer feel they are property but life forms? What if these machines want rights and fight to get them?
     
    If we follow a machine Gandhi, are we following a separate free-thinking entity or the man who created the machine? Could we ever tell?
     
  5. Who's to say that humans aren't conditioned in a similar predetermined manner?
     
  6.  
    Imagine if you could raise a child to be exactly how you wished it to be as an adult, 100% of the time. This is what programming allows.
     
    It grants the same accuracy over 'coded' emotions as we have over factory-line robot assembly.
     
  7. Yeah, but if we are speaking in terms of learning, free-thinking machines, then programming in set emotions negates that. It can't be truly conscious if it is living by a script.
     
  8.  
    Yes
     
    It's hard to code a simulation of "hopes, dreams, and inspirations" and that metaphysical stuff 
     
  9.  
    "We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too" (JFK).
     
  10.  
    Yes, alright, so your point is that even though it is hard, we will eventually do it?
     
    Say we succeed, and AI has emotions like humans do.
     
    Could education in co-existence, harmony, and diplomacy from the beginning form AI that is benign and benevolent?
     
    Furthermore, there is the field of cybernetics, which could elevate humans to AI-level capacity. Once we learn to decode the brain, it can be uploaded - the 'mind cloud', I think it's called.
     
  11. We humans build these AIs because we want to show our greatness. Sure, it's to other humans, but that's the idea. We will eventually do it because we know we can do it.

    As I said before, a machine like us will only war with us. They won't want to serve. It could be like Futurama, where the machines exist side by side with us and commit crimes, because we built too many of them to dispose of. We can have learning robots, but it is much too dangerous to allow too much free thought for something that could destroy us.
     
  12.  
    I think it's hard to know for certain
     
    I tend to favor the more positive futuristic outcome
     
  13. I favor realism and would rather be cautious.

    Why take a risk? Why not put a leash on our creations?

    I worry that in the future, when we can upload and download information and use chips to control people, governments would use such technology to control the masses, using rhetoric like "it's to reduce crime".
     
  14. #34 VikingToker, Oct 13, 2014
    Last edited by a moderator: Oct 13, 2014
     
    This is the whole basis of Hugo de Garis' Artilect War concept:
     
    The war that ends humanity and brings forward the machines will be between the people who want AI and the people who don't want AI.
     
    Edit: so I guess you vs. me, because at this point the stuff we can predict about AI is pontifical rather than scientific.
     
  15. I don't think people would trust AI to be our caretakers.

    What if one day they decide to poison the human race?

    We can never 100% predict what a smarter machine will do next.
     
  16. He was in Prophets of Doom, although his opinion got overshadowed.

    A war between people and people, or between people and machines, would be silly. We create machines; why the fuck would you give them rights? Why let them roam free and do whatever? We have enough problems without having to worry that a machine will decide to snap my neck because I upset it one day.
     
  17. #37 Old School Smoker, Oct 17, 2014
    Last edited by a moderator: Oct 17, 2014
    I say no, because AI has no life and no mind.

    consciousness [kon-shuhs-nis]
    noun
    1. the state of being conscious; awareness of one's own existence, sensations, thoughts, surroundings, etc.
    2. the thoughts and feelings, collectively, of an individual or of an aggregate of people: the moral consciousness of a nation.
    3. full activity of the mind and senses, as in waking life: to regain consciousness after fainting.
    4. awareness of something for what it is; internal knowledge: consciousness of wrongdoing.
    5. concern, interest, or acute awareness: class consciousness.
    6. the mental activity of which a person is aware as contrasted with unconscious mental processes.
    7. Philosophy. the mind or the mental faculties as characterized by thought, feelings, and volition.
     
  18.  
    You say that, but I think you forget the immense benefits an AI would provide our society, and the likelihood that if we do create AIs that are similar to humans, then they will be about as likely to snap your neck for upsetting them as I would be.
     
    There's too much Terminator bouncing around in people's heads, I think, too much pop culture where the most entertaining scenario is the one where machines are the tyrants of a dystopian future. Proper AIs will give humanity things like
     
    • An explosion of new technology
    • Flawless and inexhaustible surgeons
    • The perfectly designed lover/companion for the handicapped or disfigured
    • Error-free pilots and drivers, a perfect infrastructure system
    • Access to incredible new kinds of engineering
    • Space exploration made much easier (we can just send the machines out there to prepare colonies on different planets, so humans can arrive at safe settlements)
    • Fewer casualties on the battlefield
    I could go on. 
     
    We've already established that I'm a bit more optimistic and you are a bit more pessimistic, and according to de Garis, it's disagreements like ours that will cause the war. :p 20 years from now I might be shooting at you dude
     
  19. #39 Tokesmith, Oct 17, 2014
    Last edited by a moderator: Oct 17, 2014
    From what I've read so far, I don't think AI is much of a threat to our safety. If they're not conscious, they're just coded, doing whatever they were coded to do. Unless an AI was coded to kill humans, I don't think we'll see a surgical AI become enraged and start stabbing its patient.

    It's when we make a self-aware AI that these issues will really be of concern. Even then, though, I wouldn't see AI as murderous machines, but rather as curious children. Once something is self-aware, it will definitely have the ability to learn. Why wouldn't it learn the rules of society like we do? We all have the ability to harm someone, and unfortunately some people do, but the majority live peacefully. Pretty much what I'm thinking of is a Futurama-style world: robots and humans living side by side.
     
