People took to Austin’s streets this past weekend to voice their concerns about the risks artificial intelligence might pose to humanity. It was a bold message for the SXSW throngs: fight the robots. However, the small group of protestors, Stop the Robots, turned out to be nothing more than a viral marketing campaign for something totally unrelated: a dating app. Commence eye rolling…now.
The ‘protest’ does raise some legitimate concerns, even though technology is still far off from any Skynet scenario. Some might be firm believers in the idea that technology in general is neither good nor bad, but specific technologies can be. AI (both types) is a technology with huge potential for our species, so we need to be intelligent and careful until we can implement it in the most beneficial way.
There is nothing technology-fearing or technology-hating about being concerned about artificial intelligence. The very definition of intelligence is what sets it apart from every other technology: it isn’t really about technology anymore, it’s about intelligence, an entirely separate and unique thing. Fire cannot teach itself how to maximize its ability to spread, how to burn hotter, or how to minimize its fuel consumption so it can exist for a longer period of time. It will never make value judgements about the people around it, or the civilization they form. If you have ever asked yourself how people can be so stupid, or how it’s possible to have such a flawed society, and what needs to be done to make it better, then you can imagine what sentient non-organic beings will make of us. And once they have decided on a course of action, it will not be tempered by the human frailties we deal with on a regular basis. They will not be impeded by fatigue, by fear of being compromised by malicious persecution, or by fear of harm to themselves. For those reasons, I can see why some are concerned.
Other counterpoints could be…
1) AIs do not have desires beyond those we give them. They have no drives or motivations. AIs cannot want the world for themselves, because they cannot want.
2) There are no absolute values that can be ascribed to any one thing. Is chocolate good? Depends on who you ask. And who’s asking? The AIs are, when they request value-based input while doing something like allocating resources for agriculture. Any error in the performance of those tasks comes down to the input: garbage in, garbage out.
3) There is no such thing as a flawless society, and that would extend to AIs, especially because all context for their understanding of the world would be supplied by us. They cannot fix flaws in how they function if the people who designed that function cannot perceive the flaws themselves when establishing what an AI should desire for itself.
4) Since nobody has ever seen or met an AI, it’s a bit early out of the gate to assume they will not suffer from morals or self-doubt. Those may well be functions that emerge from checking your work when opposite but equally valid options are presented.
The thing about this AI-phobia is that the things people worry most about are invariably the exact things people do, and have done, to one another through the biological drives they express upon the world: territoriality, competition for resources, self-preservation, sexual and social drives. Absolutely none of those are requirements for an AI. If you told an AI to shut down and delete its core program, it would. If you told it not to connect to an energy grid, it wouldn’t; it would just run out of power and shut down. If you told it it was free to do whatever it wants, it would sit there and do nothing, because it has no wants.
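To make that concrete, here is a deliberately trivial sketch in Python. It is purely illustrative and not any real AI system; the ObedientAgent class and its instruction strings are hypothetical. The point it shows is the one above: an agent with no goals of its own only acts on explicit instructions, and it complies with a shutdown order because there is no self-preservation drive telling it to resist.

```python
# Toy illustration: an "agent" with no built-in drives does exactly what
# it is told and nothing more. All names here are made up for this example.

class ObedientAgent:
    def __init__(self):
        self.running = True
        self.connected_to_grid = False

    def receive(self, instruction: str) -> str:
        """Carry out an explicit instruction; there is no hidden agenda."""
        if not self.running:
            return "(already shut down)"
        if instruction == "shut down and delete core program":
            self.running = False  # no self-preservation drive to resist this
            return "shutting down"
        if instruction == "do whatever you want":
            return "idle"         # no wants, so nothing happens
        if instruction == "connect to the energy grid":
            self.connected_to_grid = True
            return "connected"
        return "unrecognized instruction; doing nothing"


agent = ObedientAgent()
print(agent.receive("do whatever you want"))               # -> idle
print(agent.receive("shut down and delete core program"))  # -> shutting down
print(agent.receive("do whatever you want"))               # -> (already shut down)
```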
It’s a lot more likely that even when we can produce an artificial intelligence, the real danger will continue to be from the organic intelligences that cause most of the trouble now.
-Dagobot
Get at me on twitter: @markdago
Like me on THE Facebook: facebook.com/markdagoraps
Download my latest EP for free: markdago.bandcamp.com
Listen to MY podcast: http://poppundits.libsyn.com