Rise of the Machines — are you ready?
Today, technology is advancing faster than anyone imagined 20 years ago. What was once sci-fi is quickly becoming reality, and there are valid concerns about whether this is a good thing. Are we headed for a robotic Armageddon?
AI and robotics have been fascinations of mine ever since I was a kid trying to build robots out of cardboard and scrap motors from my dead remote-control cars. Needless to say, with the current progress companies like Boston Dynamics are making, I’m totally excited to see what comes next.
I have followed AI’s strides over the years with deep technical interest, even experimenting with many of the technologies along the way. While the technology currently in the public domain uses AI techniques directed at accomplishing specific tasks (Narrow AI), we’re likely at least a few decades away from a more human form of AI (Artificial General Intelligence).
The form of AI depicted in movies such as The Matrix and Terminator is, you guessed it, Artificial General Intelligence. The general theme is that when this form of AI becomes self-aware, it will enslave or eradicate humanity (you know, for reasons).
However, before we get to that grim reality, we do need to fulfill a few prerequisites:
1 — Our global systems will need to be predominantly electronically controlled
2 — We will need to perfect Artificial General Intelligence — let’s call it Bob
3 — We will need to piss Bob off, as humans tend to do (probably the easy part)
Our global systems will need to be predominantly electronically controlled
In a world with ever more automation, where software controllers are basically everywhere, augmenting traditionally purely mechanical systems, it’s not far-fetched to expect that within a decade or two nearly all of our important and critical systems will be electronically controlled.
This means that most core security systems, vehicles, and even missile systems would be controllable via software. This brings conveniences that will benefit society; however, given that AI is basically software, it’s feasible that it could manage and control all of these facilities as it sees fit. That is, of course, only if it can gain remote access past these systems’ security.
The premise most critics work from is that this AI system will be highly intelligent and capable of easily breaking our security systems to reach these control units and carry out its master plan — you know, ‘cuz AI just rolls like that.
So the idea here is that, yes, our critical systems will be controllable by software, but a hostile AI system will still need to be intelligent enough to crack the security protecting those control units.
We would need to perfect Artificial General Intelligence — Let’s call it Bob
Even with all the advances in AI, a complete Artificial General Intelligence system is still some way off. Experts in the industry are pretty far apart on this: some predict it’s a matter of decades, while others see it happening centuries from now.
A few conspiracy theorists even claim that it’s already been developed but is being trained by pulling data from the Internet and related data services.
A lot of this research and development is being pioneered by private corporations and governments. These organizations have little motivation to publish their actual progress, so there’s little to go on for making any meaningful predictions here.
What we can say is that in the race for superiority, both governments and private businesses will push hard to make progress in this field, so I wouldn’t be surprised if Bob shows up sooner rather than later. If technology trends are anything to go by, many of the recent advancements (even in narrow AI) were originally predicted to be decades further into the future.
We will need to piss Bob off
As with any intelligent being, Bob will more than likely need a logical or even emotional reason to want to enslave or annihilate humans. While this is a complex thing to predict, even for humans, we do know that certain things can influence this outcome in people. Given the current direction of AI (becoming creative and more human-like), it’s fairly safe to say these factors will carry over into Bob’s conditioning as well.
Is Bob’s learning environment teaching violent, inhumane solutions to problems? Are we making a world where a self-aware Bob could feel threatened or enslaved? What value system will we nurture Bob with?
These are all things that we, as the creators, can control as we push the research and development of the first true Bob. However, the parties involved need to reach consensus on what these conventions and standards are.
A lot of sensationalism has recently surrounded Elon Musk’s warnings about AI and its potential threat to humanity. While most articles seem to suggest that his statements fall into the category of “fear of AI”, my interpretation is more: “Hey, this thing can be huge. Let’s consider some stuff and regulate its development across the board so we don’t end up in shit 50 years down the line.”
For my part, I choose to focus on the vast benefits we will get from AI rather than dwell irrationally on a potential negative outcome of coexistence with our creation. That said, we should take precautions to prepare our society and minimize the potential for a negative turn in the AI/human relationship:
1 — Implementing global strategies aimed at educating wider society to accept AI
2 — Championing proper security standards for our critical control units
3 — Establishing proper protocols to guide the development and nurturing of AI