The thought occurred to me earlier... Let's assume that AI has advanced far beyond its technological limitations of today's world. Something akin to Spielberg's A.I., lifelike enough in the physical and emotional sense to fool people into thinking they are actually human. What rights are granted to these machines capable of emulating human thought and behavior? "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." Does this mean AI derives its rights from the factory that spits 'em out, or do they derive their rights from the same "creator" as we do? Does the "duck test" apply in this case, or is it a private property matter? Not really looking for a right or wrong answer here, just opinion. Many angles to consider on this one...
But what defines a person as being an individual human? "I think, therefore I am." Self-consciousness is key to defining human existence. Being able to be free in our own mind is vital to being an individual, and vital to defining what it means to exist at all. If a machine has these capabilities, how is that different? After all, the human body is merely a machine that sustains the consciousness that exists within. I'm not disagreeing, just exploring.
I would agree, but hey, if we're going to give corporations the rights of personhood, we might as well give robots their rights too!
Well, here's a monkeywrench for ya. If we were to truly try to apply Asimov's Three Laws, the robots would basically have to put us under house arrest and not allow us to leave home; too much danger.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I believe that if a species (and AI would be a species of silicon-based life) has the ability to defend itself in some way, then it possesses natural, inalienable rights. There are a lot of interesting ethics questions about AI, especially regarding danger. Ray Kurzweil predicts an AI birthed today would eclipse man's knowledge from Socrates to its own inception in about 48 hours. This scares a lot of people.
I think the moment they become self-conscious they should be entitled to the same inalienable rights as a normal human being. It's a seriously odd thought, that a machine could be treated the same as a human. But if they can feel, and are conscious of themselves, then I don't see how we can justify enslaving them (although if they became that intelligent, we might become their slaves...)
If they are smarter than people, more creative, and have something similar to feelings, why wouldn't they have rights? In fact, they'll probably want more rights than man has if we don't program the originals with a sense of equality or compassion.