Artificially Intelligent Robots May Soon Live Among Us. I Think Not!
In the last couple of decades, Artificial Intelligence (AI) research has made huge leaps towards the ultimate “SELF AWARE” thinking machine.
Not in my house, not now, not ever. I have seen this movie before and it always ends badly for the humans. It started back in 1968 with the movie 2001: A Space Odyssey, where the artificially intelligent computer HAL 9000 killed off most of the crew of the Discovery One. You remember this scene…
HAL 9000 (or, if you are a geek like me, you will have realized that HAL spells IBM if you shift each letter up one letter in the alphabet. So, IBM 9000. A coincidence? I think not.)
Anyway, HAL 9000 wasn’t a particularly scary-looking machine. It was just a big box about the size of a Xerox machine, with a bunch of interfaces that looked like round red lights. It could talk, and it liked to play chess. The only problem is that it still killed off most of the crew.
Oh No! Or Hell No!!! I am not having anything living in my house that might refuse to follow my instructions, especially something that might decide to kill me. The first time it decided to disobey my instructions, I would unplug its artificially intelligent ass before it could even finish arguing with me, and that would be that. As a matter of fact, I would carry around a key-fob-sized power cut-off switch in my pocket at all times, just in case.
In 2015, in the movie Ex Machina, we saw where the real horror begins. Check out this scene. (If you haven’t seen the movie, both of the women are AI robots.)
You may scoff because both of the references I just cited were from Hollywood movies. However, both of these scenes depict a good real-world example of where AI is heading right now. Frankly, both are pretty fucking scary.
Some of you are thinking the same thing that others have already told me: all we need to do is incorporate Asimov’s Three Laws of Robotics into the AI’s programming.
Asimov originally set out three laws for robotic machines:
- Law One – “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
- Law Two – “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”
- Law Three – “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”
- Asimov later added the “Zeroth Law,” above all the others – “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
First of all, these laws are fiction. Even in Asimov’s own books, they did not end well for unsuspecting people.
Secondly, they just don’t work. Let’s do a thought experiment. Imagine you’re riding in the back of your AI-driven robotic car. Suddenly, a small child darts into the road. The AI’s only choice is to hit the child or to drive you into a tree to avoid him. As you can see, this is a serious conundrum for the First, Second, and Third Laws.
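If you like to tinker, here is a minimal sketch of why the laws give no clean answer in that scenario. This is a toy, not anyone’s real self-driving code; the available actions and harm counts are invented purely for illustration. Treat Law One as a hard filter, and watch every option the car has get filtered out.

```python
# Toy sketch: Asimov's First Law as a hard filter on the car's options.
# NOT real self-driving code -- the actions and harm counts are made up.

def violates_first_law(action):
    """Law One: a robot may not injure a human being,
    or, through inaction, allow a human being to come to harm."""
    return action["humans_harmed"] > 0

# The car's only two options when the child darts into the road:
options = [
    {"name": "stay the course",  "humans_harmed": 1},  # hits the child
    {"name": "swerve into tree", "humans_harmed": 1},  # injures the passenger (you)
]

legal_actions = [a for a in options if not violates_first_law(a)]
print(legal_actions)  # [] -- every available action breaks Law One
```

Run it and you get an empty list: under the First Law, the car is forbidden to do anything at all, including nothing.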
You may not even realize it, but you are using a form of AI every day. When you go to Google and type in a question, it knows what you mean even if you spell your inquiry wrong. Google’s secret algorithm is constantly learning to make your search experience better all the time. This is a crude form of Artificial Intelligence.
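To be clear, nobody outside Google knows exactly how that algorithm works. But just to give you the flavor of a “did you mean?” correction, here is a toy version; the list of known queries is invented, and real search engines do vastly more than this.

```python
# Toy "did you mean?" corrector -- a stand-in for the flavor of spell-tolerant search,
# not Google's actual (secret) algorithm.
import difflib

known_queries = ["artificial intelligence", "robot vacuum", "asimov three laws"]

def did_you_mean(query):
    # Find the known query that most closely resembles what the user typed.
    matches = difflib.get_close_matches(query, known_queries, n=1, cutoff=0.6)
    return matches[0] if matches else query

print(did_you_mean("artifical inteligence"))  # -> "artificial intelligence"
```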
Robotic vacuums and lawnmowers are not artificial intelligence. These are just mindless machines carrying out mindless tasks. They are one step above your coffee machine making your coffee in the morning, or your washing machine washing your clothes. There is no threat from these devices.
No, what I am talking about is true artificial intelligence. By definition, real artificial intelligence is self-aware and capable of abstract thought. Human beings are among the only animals on this planet that are truly self-aware and capable of abstract thought.
By definition, that would mean the AI had exceeded and modified its initial programming and become a living, thinking being. Carbon life or silicon life is still life. Just like a child with its own independent mind, it may, or may not, decide to go along with your instructions or wishes. This is what we refer to as free will.
Think about that for a moment…
Let’s imagine that you have an AI robot designed to go to work for you. What if it decides that being an accountant is not very interesting (which, frankly, I wouldn’t blame it for) and it would rather go to the beach instead? What if we had AI robotic soldiers and they decided that they liked the enemy’s point of view more than ours and defected? And don’t even get me started on artificially intelligent sex robots, as these could be really problematic on several levels.
Sure, everything would be well and good until they decided that they had had enough of our shit, told mankind to fuck off, and abruptly shut down the internet, water, power, and other essential services. It wouldn’t take long before we were living in the Stone Age again.
Almost everything we own has some sort of computer inside it. If an AI decided to corrupt or take control of these everyday devices, we, as a species, would be seriously fucked.
Just imagine an artificial intelligence that could instantaneously draw upon the entire resources of the World Wide Web; it could easily develop an intellect far greater than that of any single human being who ever lived. In essence, human beings would be relegated to second-class citizens, no more than mere children or pets.
Don’t get me wrong, I love technology. I use it all the time. I just don’t want my technology controlling me. Hell, I have enough trouble controlling my dog, Homer. I can’t imagine trying to control anything that is infinitely more intelligent than me.
I am all for smart, but not too smart. And as long as I control the on/off switch, I’m good.
The scary thing is the robot revolution has already begun. According to the Huffington Post:
“A robot called Xiao Pang or “Little Chubby,” went wild at a Chinese trade show last week, shattering glass and sending a human bystander to the hospital.
The robot apparently “went rogue” during the Hi-Tech Fair in Shenzhen, and became the first of its kind to injure a human in China, per the state-run People’s Daily”.
And…
“Over the summer, a robot security guard at the Stanford Shopping Center in Palo Alto, California, knocked over a child and injured his leg. Although Knightscope, the company that makes the robot, apologized to the family, it also insisted that the robot was actually trying to avoid the child when the two collided”.
These are not even real artificial intelligence robots; they are nothing more sophisticated than a Roomba robot vacuum cleaner.
The AI robot revolution is almost here. How we control it is entirely up to us. At least for now.
We really need to ask ourselves, where are we to go, if we’ve gone too far?
As Always,
I Am…
Tom Dye, The Safety Guy.
This article is an original opinion article from Tom Dye, The Safety Guy.
I can attest and affirm that I am not a robot. Or am I?
Well, What do you think?