Nick Bostrom, a professor of philosophy at Oxford, tells us that we will destroy ourselves if we don’t harness technology properly.
Already, some automated processes on the Internet, once set in motion, continue despite attempts to stop them. Use your email address to establish social media contacts, and the resulting invitations to a friend may continue long after you ask for them to end. Facebook glitches, moreover, have resent messages from past postings, leading people to believe they are current. Software developers have created programs to fix these errors, but users may not understand them or know how to apply them; users often assume such errors are incidental or the result of some intentional act. Yet according to scientists, such blips in our technology, if they occur in strategic areas, could cause far greater problems than an error message sent to the wrong friend or on the wrong day.
“I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe,” says Bostrom in an interview with Kurzweil, a technology and science group studying the scope of artificial intelligence.
During the interview Bostrom spells out how technology is advancing, yet in that advancement is assuming so much control over human life that it could shape the future in negative ways.
He goes on to detail the problems: “In the longer run, I think artificial intelligence—once it gains human and then superhuman capabilities—will present us with a major risk area. There are also different kinds of population control that worry me, things like surveillance and psychological manipulation pharmaceuticals.” Bostrom says robotic intelligence, the kind depicted in films such as I, Robot, has a dark side that, if not harnessed appropriately, could mean disaster for the human race. In the film, a policeman investigates a robot thought to pose a grave threat to man’s survival.
What is some of that greater risk? Bostrom maintains, “If one day you have the ability to create a machine intelligence that is greater than human intelligence, how would you control it, how would you make sure it was human-friendly and safe? There is work that can be done there.”
Other scientists share Bostrom’s view of the risks involved in artificial intelligence. Eliezer Yudkowsky, a scholar who has studied both the positive and negative aspects of artificial intelligence, warns that we can go too far in trusting our capacity to understand the range of issues associated with the growth of technology. In other words, we think we know what is happening and what the future holds, but in taking that view we can overstep the boundaries to the point of losing control. For Yudkowsky, the belief that we understand artificial intelligence and its capabilities could be precisely what pushes man beyond the boundaries of safety into a vast area of the unknown, with potentially disastrous results.
In other words, we might not plan to start a war, but something in the system could make that happen. In that respect, the movies might not be far off what scientists believe could occur if the social and scientific issues are not examined fully and safeguards put in place to prevent the worst-case scenario: the total destruction of man.