Google, while promising to “bring the benefits of AI to everyone,” signed a deal with the Pentagon to use its AI technology to kill people. The resulting outrage among its employees made Google cancel the contract.
At first glance, this seems like a victory for the non-violent use of AI — but isn’t it also quite naive?
Don’t underestimate the power of Artificial Intelligence
Even when you choose not to actively participate in weaponizing AI, once you put a product out there, you no longer have full control over it. Look at how ISIS abused Google’s YouTube as a recruitment tool.
Elon Musk warned that though Google might have good intentions, it could “produce something evil by accident.” Eric Schmidt of Alphabet responded by saying:
“Robots are invented. Countries arm them. An evil dictator turns the robots on humans, and all humans will be killed. Sounds like a movie to me.”
A movie… really?
A(I) new geopolitical battleground
Rapid developments in Artificial Intelligence have an even greater impact than the headlines suggest.
AI is the new battleground of geopolitics, powering a technological revolution that could shift hegemony from the US to its challengers — not only because of its military applications, but because of its economic advantage.
President Putin pointedly said in 2017:
“Artificial intelligence is the future, not only for Russia but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.”
The US still leads in AI development, but other countries are rushing to catch up. France is investing $1.8 billion in AI.
The French president Macron stated:
“There’s no chance of controlling any effects (of these technologies) or having a say on any adverse effect if we’ve missed the start of the war.”
It’s illustrative that Macron would use the word ‘war.’
China aims to be the global leader in AI by 2030.
With over a billion people and few scruples about privacy, it has more data than any other country. And data is the food on which AI grows.
Raising a well-behaved AI
MIT researchers fed an AI, aptly named Norman, with data from the dark corners of the web to discover the effects on its worldview. That worldview became very bleak: where others saw flowers, Norman saw death and destruction.
The experiment revealed an uncomfortable truth about AI. Prof Iyad Rahwan, part of the Norman project, said:
“Data matters more than the algorithm. […] It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”
Thus, the human touch matters in AI. You can raise a child well, and chances are it will turn out okay. Abuse and expose it to cruelty, and it might be your next Norman.
However, as AI grows up, it becomes increasingly complex to understand, even for its developers. And we all know that an excellent upbringing doesn’t guarantee a well-behaved adult later in life.
Imagine what a deranged, Norman-like autonomous weapons system could do.
Winning at AI can be losing
Even if we can control AI, winning the technological race doesn’t guarantee geopolitical success. AI changes not only the world’s militaries but also its societies.
AI makes our lives easier by automating many tasks. Yet more automation also leads to the destruction of traditional jobs.
Countries need to deal with AI’s socioeconomic consequences internally to be truly successful.
As McKinsey stated in a 2017 report about AI developments in China:
“[…] half of all work activities in China could be automated, making it the nation with the world’s largest automation potential. Hundreds of millions of Chinese workers could be affected.”
The Communist Party derives its legitimacy from providing prosperity and stability. AI potentially undermines both. How will the Party respond to this threat?
Artificial Intelligence will kill. Deal with it.
The outrage caused by the Google example calls for further debate.
I agree it is more comfortable to label the development of AI as something ‘cool’ than to think critically about its consequences.
However, we need to start by acknowledging that AI will kill people — or rather, that others will use it to kill.
This knowledge might make many people cynical: “if Google doesn’t kill people with AI, others will.” It shouldn’t.
We need to discuss how to best deal with this.
What role do you think companies will — and should — play in the geopolitical battle between nations over AI and its uses?
Share your thoughts here.