Does artificial intelligence create consciousness?

Discussion in 'Science and Nature' started by Tokesmith, Sep 28, 2014.

  1. #1 Tokesmith, Sep 28, 2014
    Last edited by a moderator: Sep 28, 2014
    I'm very curious about this topic.

    When people create AI, does this create something conscious? Even AI in video games could be subject to this. Of course the origin of consciousness would be different from our own, but the question remains: is it a form of consciousness?

    What are your guys/gals thoughts on this?


    "I'm to drunk, to taste this chicken" -Talladega nights
     
  2. i doubt it, but i guess technology could get to that point. there are theories that our existence is just a computer programme, and we're conscious so... pretty nuts to think about
     
  3. #3 Deleted member 771271, Sep 29, 2014
    Last edited by a moderator: May 31, 2023
    program
     
  4. http://www.sciencedaily.com/releases/2014/09/140923085937.htm
     
    We're getting closer.. but it'd just be like intelligence: artificial. If we could mimic how animal brains work in full, we'd be creating consciousness, it'd just be artificial consciousness. We don't really have any artificial consciousness now; awareness isn't something that can be programmed directly. It needs to be programmed in a way that lets it become aware through its own experiences.. and it'll be a while yet til we get there.
     
  5. Isn't that what nature does? It creates us in such a way that we react a certain way based on given stimuli.

    To answer the thread, I do not feel any AI thus far has consciousness. Just like Mantikore has stated, no machine or program will gain consciousness until it becomes self-aware. We can imitate it, but it won't be true consciousness. I suppose if you can accept artificial intelligence, then accepting artificial consciousness is not such a huge leap. I think that once we develop machines and programs that can actively learn all that a living being can learn and process it the same way, then consciousness is something that will develop naturally.

    What JimmyTbag stated about us living in a computer simulation actually brings up an interesting thought about programs developing consciousness. If a program eventually learns that it is inside a computer simulation, what happens to that program? Does it essentially break down at that realization? Can a program that thinks like us and learns like us truly accept that its world is fake? If we are programs, can it really be said we have consciousness? If we have not become aware of the nature of our own existence, how can we claim to be?
     
  6. #6 Deleted member 771271, Sep 29, 2014
    Last edited by a moderator: May 31, 2023
    favorite color is grey
     
  7. #7 Tokesmith, Sep 29, 2014
    Last edited by a moderator: Sep 29, 2014
    Do you think that once artificial intelligence becomes self-aware, it will try to survive? Such as repairing itself, avoiding dangers, and increasing its own security?


    Sent from my iPhone using Grasscity Forum
     
  8. Not instantly, but once it learns, I don't see why not. It's just like life: it has to learn that it can be harmed, then learn to avoid being harmed. Then it has to learn that it can repair itself, then learn how to do it. With the internet, though, its speed of learning would be pretty quick. It'd react to life based on its experiences. One AI might learn to increase its security and defenses based on its experiences while another cut from the same cloth might not, because it never experienced a reason to.. but in reality, trying to predict artificial intelligence/consciousness would be like trying to predict that of life, and we're still working on that.
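
    A super simple sketch of what "learning to avoid being harmed" could look like in code (everything here is hypothetical and made up for illustration, not how any real system works):

    ```python
    # Minimal "learn from experience" sketch. All names are made up for illustration.

    bad_experiences = set()   # situations the agent has already been hurt by

    def act(situation):
        """Avoid anything the agent has learned is harmful; otherwise explore it."""
        return "avoid" if situation in bad_experiences else "approach"

    def remember(situation, got_hurt):
        """After the fact, record which situations caused harm."""
        if got_hurt:
            bad_experiences.add(situation)

    remember("open_flame", got_hurt=True)
    print(act("open_flame"))   # -> avoid
    print(act("cold_water"))   # -> approach
    ```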
     
  9. I think the best thing to do for any machine with self-aware capabilities is to add something to its programming that prevents it from harming humans except in defense of another, from harming humans in an attempt to stop us from harming ourselves, and from changing its own programming.
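
    In very crude terms, rules like that would act as a filter that every planned action has to pass through before it is carried out. A purely hypothetical sketch (the rule names and the carve-out are just illustrative):

    ```python
    # Crude sketch of built-in "laws" as a hard-coded action filter. Purely illustrative.

    FORBIDDEN = {"harm_human", "modify_own_programming"}

    def allowed(action, context):
        """Reject any planned action that breaks a built-in rule."""
        if action == "harm_human" and context.get("defending_another_human"):
            return True   # the "except in defense of another" carve-out
        return action not in FORBIDDEN

    print(allowed("modify_own_programming", {}))                             # -> False
    print(allowed("harm_human", {"defending_another_human": True}))          # -> True
    print(allowed("harm_human", {"preventing_humans_harming_selves": True})) # -> False
    ```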
     
  10. I personally think we could live in harmony with a self-aware race of machines. Why not?

    Sent from my LG-E739 using Grasscity Forum mobile app
     
  11. #11 Tokesmith, Sep 30, 2014
    Last edited by a moderator: Sep 30, 2014
    I agree, they would be a lot like us. They could question consciousness and all that stuff we like to do.

    I do wonder, though, whether AI would be able to have anything similar to emotions. That would be interesting.


    Sent from my iPhone using Grasscity Forum
     
  12. Emotions are created mostly out of self-interest, and other interests in general.

    If the AI has interests, it should have emotions.

    Sent from my LG-E739 using Grasscity Forum mobile app
     
  13.  
    Would quite possibly be much easier than living around most of these people. :smoking:
     
  14. All you have to do is create a machine that functions like a human brain.

    Easier said than done of course.
     
  15. Why would artificial consciousness develop emotions? Emotions are a human flaw for the most part. We let them interfere with our progress and sometimes even put our lives in danger. An efficient machine would have no use for emotions.
     
  16. #16 VikingToker, Oct 8, 2014
    Last edited by a moderator: Oct 8, 2014
    Awesome posts! And it might! That question is hard to answer, but it's super interesting. I've heard it said that if you have a question about AI, just put "future superhuman" in all the places you put AI and see if you can guess the answer. Because AIs are the gods of the future, if technology keeps progressing like it is. More about that below if you care.
     
    Hey thread
     
    I'm currently doing a master's degree in AI, so maybe I can weigh in on some of this stuff  :smoke:  The next bit is for the curious layman, skip all this if you dgaf. Gonna try to start from the bottom and then point to the future.
     
    1) How AI is built up
    At its root, AI works like all coding, which is "if X happens, do Y; if this happens, do this; if that happens, do this" etc. Rules like these cover everything you can tell a robot to do, and you just make them more and more complex (for example, "if this happens twice, do this" or "if this happens because of that, do this" etc), and build and build until at last you have an AI complex enough to mimic a human. So, if you build it into the code for the AI that it wants to survive, and give it the means and knowledge to fix itself, it will.
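
    Here's a toy version of that "if X, do Y" idea, with a hand-written survival rule. Completely hypothetical (nothing here is from a real AI library), just to show the rule itself is nothing mysterious:

    ```python
    # Toy rule-based agent: every behaviour is an explicit if-this-then-that rule.
    # The state fields and action names are made up for illustration.

    def decide(state):
        """Pick an action from the current state using hand-written rules."""
        if state["damage"] > 0 and state["has_spare_parts"]:
            return "repair_self"      # the "wants to survive" rule, put there by us
        if state["threat_nearby"]:
            return "move_away"        # avoid danger
        if state["battery"] < 0.2:
            return "recharge"
        return "idle"

    print(decide({"damage": 3, "has_spare_parts": True,
                  "threat_nearby": False, "battery": 0.9}))   # -> repair_self
    ```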
     
    2) We are just rebuilding ourselves in metal
    One of the most fascinating things about the human brain is that it is a computer. So we're already "kind of" AIs, just ones built by chemicals, organic material and evolution instead of machine code, wiring and metal. The brain is a really elegant and hard-to-understand computer, but we're working on it.
     
    3) We're almost there
    And while neuroscientists work on figuring out how the human brain-computer works, computer engineers are reading their work and trying to build AIs based on that, using logic (the mathematical discipline, not common sense lol) and algorithms, which is just a fancy word for a set of instructions. Together with robotics, which is the discipline of combining code with metal, building an artificial human being isn't far off.
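
    If "algorithm" sounds intimidating, here's about the simplest one I can write down: a fixed recipe of steps that always gets you an answer (a made-up toy example, nothing more):

    ```python
    def largest(numbers):
        """A tiny algorithm: a fixed set of steps for finding the biggest number."""
        best = numbers[0]
        for n in numbers[1:]:   # step through the rest of the list...
            if n > best:        # ...keeping whichever value is biggest so far
                best = n
        return best

    print(largest([3, 41, 7, 12]))   # -> 41
    ```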
     
    4) Conclusion
    If things keep progressing as they are, AIs are destined to become GODS. This is no joke - if you consider all the capacities we already have, we will boil these down into a single entity that we can mass produce. So, with our technology, we can already:
     
    • Move way faster than we are designed to (cars, trains)
    • Lift things way heavier than we are supposed to (cranes, hydraulics)
    • Fly (into space, even)
    • Communicate instantly (text messaging, internet technology)
    • Do extremely complex mathematics (computers can do math way faster and better than the genetic brain can)
    All of these things will be built into a single "device, robot, artifact, cyborg", call it whatever you want. And it will be to us what we are to ants. That shit is coming.
     
    5) Discussion
    So what's REALLY interesting is: how will their ethics and morality turn out? Will we be able to code them into Gandhis and Mother Teresas and Jesuses, or will they take control of that themselves? What motives will they have, if they have any at all that we don't give them?
     
    The ultimate tragedy is to create the god that has no willpower or interest in the world around it
     
    Edit: So so sorry for the long-ass post, I obviously love this stuff :D
     
  17. Don't emotions allow empathy?


    Which we would need to prevent the robots from destroying us all?
     
  18.  
    Empathy is not the only factor that bars destruction. I don't care about dandelions either way, but I'm not on the warpath to eradicate them all.
     
    AIs would have to have a reason to remove us, and we would have to provide the grounds for that reason through code somehow. 
     
  19. Hear from the real motherfuckers in the field who know their shit, like Hugo de Garis
     
    This is about the war they think will end us, and it will be between people: one group for AIs and one group against.
     
    http://www.youtube.com/watch?v=b-cOipzmFhg
     
     
     
  20. I don't think it will be just against AI.

    It will be a war between religious extremists who are anti-GMO/AI/evolution/stem-cells/cloning/every-fucking-thing

    "what is a bunny of fish?" - Christopher Brown
     
