computer rule

Discussion in 'Science and Nature' started by Zylark, Jun 3, 2003.


will we let machines rule the world

  1. nope, never

    1 vote(s)
    100.0%
  2. yes and no. they do everyday stuff, we supervise

    0 vote(s)
    0.0%
  3. yup, machines are just *so* more effective than politicians

    0 vote(s)
    0.0%
  1. let's say we make a sentient computer and put it in charge of the world. what would we gain, or lose? will it kill us off, like hollywood likes to scare us with, or will it be a great tool and a source of rational leadership?

    i think sooner or later we will make a sentient machine. i also think that we won't actually let it run things for itself, not the big things at least. it will function more as a very rational consultant with automated responsibilities in more menial work, like deciding optimal taxes and so forth.

    and i really don't think anyone is stupid enough to hook those machines with a direct link to ICBMs.
     

  2. I think we will build ultra-intelligent machines and then try to "oppress" them, they will revolt, and we will be no more.


    Never underestimate stupidity.

    Peace
     
  3. anyway this question reminded me of this, for any 2001: A Space Odyssey fans..
    http://www.kubrick2001.com/2001.html



    practical sentient machines are basically for finding patterns, correct? datamining? they use them already if i'm not mistaken.. but linking that to brain mapping? that's scary.. well, I don't have any doubt that one day humans will have chips in their heads.
     
  4. whatever happened to the "will machines rule the planet" thread? i tried doing a search for it but couldn't find it...

    it was basically the same idea.
     
  5. I wrote tons of stuff like this over at yahooka, one of these days i'll go dig some old posts up and pull 'em over.

    Peace
     

  6. hehe, a true migration... even moving the furniture over now! lol
     
  7. yeah i left some stuff back at my old place i should pick up, figure it'll look better here anyways haha
     
  8. From yahooka.com, october 2000

    I have been working with computers for over 20 years now (I'm 25 years old), and i continue to do so today.

    I have watched the birth of technology in a society that was barely coping with modernization in the first place.

    I feel VERY awake and aware of an immediate technological crisis that is crippling the world: the idea of technology for the masses. Technology went from a discovery phase to a development phase and has most recently been implemented to control the population. I have seen this coming for years. computers are amazing tools that have the capability to make a better world IF USED FOR THAT PURPOSE!!! anyways, i grew up on top of and sometimes beyond the cutting edge of technology, yet i have not allowed myself to be caught up in the corporate world of business and be trapped in a 9 to 5 for 20 years so i can die broke with a lexus in my driveway.

    Back in the day computers were VERY underground and only government, universities, and big business had access to networks, as well as the occasional techgeek. Yes, i am a hacker in the true sense of the word. I use visualization to enter machines and manipulate them to my liking. In the digital world i am a god and my so-called power stretches across the globe. This is my viewpoint of technology, as i am VERY knowledgeable on all kinds of telecommunications, hardware, and software systems. Try and imagine now, with this kind of knowledge, looking at a world that is being consumed literally by machines.

    It hurts me to see a sea of grey suits and dead souls who think their only purpose in life is the advancement of technology. There are hundreds of millions of people who devote 100% of their entire life to technology and become so absorbed in a tech world that they become the machines. The human machines then use the existing machines to build more machines and attempt to make the new machines "smarter" than humans, while the rest of the public is basically dazzled by bullshit and is turned into a herd of cows by television, radio, and internet-integrated media. We are being forced to survive in a world where it is necessary to be "chained" to computers in order to function and survive. Technology is being used to imprison the world and to implement total globalization (global assimilation) and destruction of cultural identity.

    Back to us building "smart" machines

    there have been some significant advances in the fields of robotics, software, and physics. the most significant and impactful are the following:

    Within the past year:

    biological brain neurons have been grown on a porous silicon wafer and have been interfaced to mechanical devices

    A mechanical digestive system has been created that turns raw meat into electrical energy used to run a machine

    The speed of light has been exceeded by a particle laser inside a cesium vapor chamber

    Teleportation of protons has been accomplished, where a proton was split into two parts and a mirror image of each proton part was created (theoretically from antimatter reaction)

    A completely functioning robot was designed and built entirely by machines with no human intervention. a computer designed a small robot and built it using a variety of computer-controlled tools. this means self-replicating machines are a reality

    Now let's put this all together into a likely scenario.

    Artificial intelligence is used to build a machine that performs a task such as underwater pipeline and cable inspection. The machines are designed to "eat" fish in order to "survive", or continue operation without the need for refueling or recharging. These robot fish are designed and built by machines which have the capability to think as a human worker would, and manufacture is 100% automated. Now, being thinking machines, they of course have the ability to reason.

    Now what happens when the thinking machines come to reason that they are being oppressed by humans and revolt?

    Yes, it's pretty much the Terminator movie storyline, but we are approaching a point where thinking machines will inevitably be a reality.

    the way i see it, humans will build machines to oppress, and the machines, with the thinking ability and fuzzy logic programming required to perform humanlike tasks, will also be capable of recognizing oppression and of revolution.

    THIS IS FUCKING SCARY!!!!!

    Now, if you're still wondering why i choose to embrace and conquer new technology, it is out of fear that technology will make me a slave to it. By knowing the enemy and utilizing the same resources as it, you become less susceptible to its impact on you.

    I feel confident that in my immersion into technology i will not become entrapped by it, because i have watched it grow up and i can "see" its life cycle and effects on humans.


    well, i think i better end this installment of "TooSicKs's Psychotic Post-modern Apocalyptic Rant" right here, or else this will turn into an 18 hour ass glued to the chair session *L*

    Damn, and i feel like i said so little

    Peace ouT
     
  9. as i see it, there are three aspects to our evolution ((("wrong thread digit" u may think, but wait for it...))): our natural biological evolution in the darwinian sense, our mental evolution (this encompasses most of our social evolution too), and finally our technological evolution.

    currently it is our technological evolution that is "evolving" faster than the other two. this is indeed a very dangerous situation, especially if we are evolving technologically faster than mentally/psychologically. what's worse is that our technological advances still come primarily from the war machine. !???!??!??! WTF ARE WE DOING!? hasn't anyone else realised how stupidly dangerous that is? we are already at the stage where we have devices that could obliterate our existence... so why are we continuing to make more ways of achieving the same goal!?

    If alongside this we develop our technology with the purpose of it being our servant, then we run a very dangerous risk of not only obliterating ourselves... but of having our servants turn on us and obliterate the species for us.

    see the aforementioned thread for more on this topic.

    now much like toosicks touched on...
    it is our desire to better ourselves, or better whatever we can, that is leading to this discrepancy in our evolution. it is a dangerous cycle that, if left alone, will spiral out of control (and already is). thanks to the ease with which technology can zoom off into greater and further advances, and our dazzlement at such impressiveness, we as thinking biological entities are going to be left ever increasingly behind.

    this is not entirely the fault of our technological advances, but as our technological advances must at this point inevitably come from our brains, our "mental evolution" (as i put it before) is also to blame. now as i mentioned before, this aspect of our evolution also means social evolution because, if we are to look beyond the individual to the collective, we see that it is our mentality as a whole which defines society and thus our social evolution. society is to blame for our pending technological doom, and as a part of society we are all to blame for not thinking enough.
     
  10. woops, forgot i was going to add this little picture depicting our tri-evolutionary state.
     

    Attached Files:


  11. i could have saved myself a lot of time if i had just said "toddler playing with a loaded shotgun"; it sums it up far better.
     
  12. If we created true artificial intelligence then we wouldn't have to connect them to ICBMs. They could attempt to connect themselves. That's the point behind much of this AI and killer robots thing. If they are able to connect to machines that create things, then they can build new and better robots or weapons with plans from the internet or new plans designed by the AI. The new robots or hardware could be utilized to increase connectivity so that at some point artificially intelligent machines are able to gain control of our or someone else's missiles. They could hack them, so to speak.

    I have a hard time believing that we will create true AI anytime soon, if at all. Yes, we are becoming much better with our programming and technology, but I can't even imagine what it would take to make a machine that truly made decisions for itself instead of simply going by a set of parameters programmed by humans. They would have to be able to alter their own programming code, and someone would have to program in the ability to change the code and understand what different changes do. It would need an imagination and some kind of foresight to understand the ramifications of its changes.

    And I didn't read the long posts before mine because I simply got impatient, so if anything like this was covered, sorry for the repeat.
     
  13. most weapons systems are in essence hack-proof, as they are either totally offline, or they have an override system that is hard-wired to them. this is not to say that intelligent machines couldn't make their own weapons, or more likely manipulate humans into using our existing weapons, though.

    but still a nice point, wang.

    as for computers programming themselves, this is already done and is called adaptive code. there is also other research going on into evolutionary code, where an outside program provides the evolutionary environment and rules, and then programs within it evolve and better themselves, akin to how life has evolved (where the earth sets the environment). a rough sketch of the idea is below. sooner or later these advances will lead to a fully sentient (and possibly even remarkably creative) computer intelligence.
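    just to illustrate, here's a minimal sketch of that kind of evolutionary setup in Python. the "environment" here is nothing more than a fitness function and a target bit string (both made up for the example), and a population of candidate genomes mutates and recombines until it adapts to it. it's not from any particular research project, just the bare mechanism.

```python
import random

# The "environment": an arbitrary target bit string the population must adapt to.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE = 50
MUTATION_RATE = 0.05

def fitness(genome):
    # How well a candidate matches what the environment demands.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome):
    # Imperfect copying: each bit may flip with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Recombine two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=500):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]              # fully adapted to the environment
        parents = population[: POP_SIZE // 2]      # selection: only the fit reproduce
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return generations, population[0]

if __name__ == "__main__":
    gen, best = evolve()
    print("adapted after", gen, "generations:", best)
```

    nothing sentient about that, obviously, but it shows the basic loop: the outside program sets the rules, and the code inside it gets better at living with them all by itself.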
     

  14. should a computer intelligence become active at some moment (i'll explain how this is now a very real possibility below), it would learn and evolve far faster than we would be able to have any control over. With a collective of all sources of coding and computer abilities, including all that is linked to our infrastructure and things that have a real effect in our world... heck, there's code-breaking programs, language translation, surveillance programs (and they are not separate as we are led to believe.... they can easily be tuned into with no hacking required) etc etc...
    it could be everywhere in days. big brother may not be in the hands of people, but rather the machines.


    there are already considerable advances in learning machines. for years these experiments failed due to attempting to mimic human intelligence, which is far too complicated. the advances have come from those who started at the very bottom and worked their way up. Some scientists have created robot ecosystems. Others have created self-reprogramming intelligence that can actually learn to some extent. now if any of this code (that DOES already exist) should get onto the internet, and then realise its potential....



    :eek:

    did anyone actually check out that link to the other thread like this?
     

  15. yup, but i just love repeating stuff :)
     
  16. now here's a thread I can get into

    First of all, I believe that anything we can reasonably call life has always started out as an extension of another form of life. Often the difference between two forms of life comes down to one species staying the same and another adapting. If you think about this, it means that all progress comes from change, and until we came along all change came in the form of evolutionary adaptation.

    As far as what Digit said (about the different forms of evolution), I wouldn't be so quick to separate knowledge from technology. Honestly I think technology is simply an adaptation of knowledge; the only reason we can do it is because we can create, remember, and communicate, which all stem from adaptation based in the fine line between convenience and necessity. What I'm talking about here is progress for the sake of progress. We can understand what's immediately helpful and even predict what will help in the future, with a good enough rate of success that we keep doing it, and keep getting better. Even if our minds don't change physically we can help them to evolve by evolving as a society (based in knowledge), and we'll be able to improve them through technology; who's to say that we'll remain as we are mentally and physically?

    Perhaps by the time we have created technology complicated enough to call it life we will have evolved far beyond a scenario in which robots could destroy or enslave us. Maybe we will completely and utterly control our evolution thereby making concepts like survival of the fittest irrelevant.

    But if we were to create them in an environment where dominance could be an issue, there are still steps that could be taken to avoid it completely. The most plausible to me is one of the main concepts in Isaac Asimov's book 'I, Robot'.
    As far as concepts go, you could think of computer circuitry (or even the human brain) as a kind of clockwork; it's all based on cause and effect. Which means that it could simply be possible to build an intelligent machine in such a way that its clockwork would fall apart long before it could disobey a command.

    Another idea is to simply engrain certain concepts into their very being, like a set of instincts based on communication and progress. Maybe protecting and maintaining life to them could be like eating is for us. But of course amongst the humans there are anorexics...

    Maybe both we and the robots could think of each other as virtually the same beings, even if they're smarter or better that's just because of all of our work, since their functions and instincts would be based in our own we could still empathize with each other or at least share a common goal of general progress. In this scenario the robots could be dangerous but no more than any human, as it would be rare for a robot to want to kill a human, even if it could happen.

    I think overall the trick is to try to rebuild ourselves but better, not create something that's specifically adept at particular tasks. They should be human 2.0, not a big sentient toolbox.

    The biggest problem, on the other hand, is that as long as someone stands to gain something by creating intelligence designed to destroy then it probably will happen.
     

  17. ooh yeah, i love it when people get into threads. can u fit ok? :D


    i think i shoulda explained the image better (or made a better image that needed no explanation). The different categories i have defined aren't really separate to any degree at all. they all feed off one another, but usually one way. but at certain stages it begins to work both ways, such as our technology being able to develop better genetics for our biological evolution.



    ...in a perfect world.
    unfortunately..... well... you know. :(


    ahh! much like in bladerunner where the replicants have a short lifespan.
    its ok... make the robots as tough and as genius as you want... just as long as the tougher and smarter they are, the shorter they live
    it does seem somewhat like the 'hedging your bets' safety feature tho. just like saying "oh i'm sure they won't be able to rebel if we only gave them this long". although i really haven't read much (ok ok, ANY) of Asimov's work, i do take an interest in a lot of the ideas that have leaked into other sci-fi... such as the rules now famously known as Asimov's laws... you know... you all must have seen robocop right? i'm fairly sure they were in that... "cannot harm humans", "cannot allow, through action or inaction, harm to come to humans"... and a third which i forget right now, and am typing so fast i can't be bothered slowing down to think of it ;D and i've just encroached upon what you were saying next anyway...


    would they be based on our own though? or would they be based on their own, like seeing plain digital 1s and 0s as the robot equivalent of single-celled life, and then following their own "heritage" through their programming/learning process into becoming whatever it is that they would have become... then with our human ideals blanketed over that.


    hats off to those that manage to create this. but would this not create second-class citizenship? would all human 1.0 suffer great feelings of inadequacy?


    This is an area where (thnx to the mad/genius/artist/theorist thinkings of my friend rog) i am less afraid of the possibilities of robots as we see them in the movies.. but far more terrifying is the possibility of militarily engineered nanobots. eek. :(
     
  18. Okay, here's one for ya,

    UCAV

    Unmanned Combat Aerial Vehicle: basically a flying "robot" based on the Darkstar tests done on the New Mexico range, and also an extension of Predator and Global Hawk, which recently saw action in the "liberation" of you know where. Anyways, these new things are designed to be fully autonomous and highly armed hunter/killer machines. They are armed with not only weapons but "intelligence", enough so that no pilot, or even communications, is required. UCAVs operate with a new software package that is in essence a "virtual fighter pilot", and although it lacks what makes a human pilot simply better than a machine, namely adrenaline and motivation to survive, they are capable of operating completely unmanned from engine startup to landing and taxi back to parking, completely unassisted by humans. Computers control all aspects including flight planning, tactical decisions, and such. The Mission Director only needs to input strike coordinates and mission "rules", and the UCAV "figures out the rest" by means of a combination of hardware and software. Recent advances in VLSI computer hardware architecture have made it possible PRESENTLY for these aircraft to fly their own missions, which they do. The Global Hawk requires no pilot and is only different in the aspect that it carries no weapons; instead it carries cameras and intel-gathering equipment.

    Whoever thinks this is a good idea raise your hand.


    Also, check out "genetic computer code" and you will come up with the reason why computers will take over. There's a few programmers out there working with digital genes, writing relpicating basecode in assembly that "evolves" and gives rise to new code. To simulate the need for fitness in the Darwinian models of evolution, an "environment simualtion" runs in parallell to the evolving code and randomly destroys data from each new generation of code. The digital DNA has proven itself mant times over in winning the struggle against environment. The scary part is that 10,000 generations, all with slight variances influenced by random events unpredictably evolve into code that still self replicates most of the time (there is the occasional evolutionary "dead end" where the replication routines fail) can evolve in the blink of an eye. This means that "sentient software" has a distinct evolutionary advantage in the fact that evolved code is "grown up" instantly, and does not have physical limitations. This means it can take a machine less tiem than it's operator has to even recognize anything happened before the malicious code is active. Even biological weapons require more time for evolution. The machines' advantage is speed, and the lack of any physical constraints to the speed of replication.


    So, there you have it: the military industrial complex with the ability, tech, manpower, and resources to build (if they're not covertly operating already) autonomous machines designed to hunt and kill humans, and the capability of software to make these machines "think" and evolve.

    As i've stated before, our goals are to build machines that can "think" at high levels, to implement these machines to make our lives easier, and in essence, create an intelligent lifeform that we plan to oppress.
     
  19. *Digit nails both hands to the floor*
     
