Does AI Need Superintelligence To Become A Godlike Bigot?
Automation is impacting many jobs traditionally done by humans. But bots replacing us is not my concern when it comes to artificial intelligence. Robots have been with us for a while, and while automation often displaces people, living organisms tend to move on to new tasks.
Not everyone who loses a job to a robot will be given a new, better job overnight. But I'm skeptical that we will see all (or even most) human jobs filled by robots, hardware, or software. Growing pains are likely, but automation itself is unlikely to destroy the economy and kill all humans.
Automation Will Displace But Won't Replace
Last night I watched the delightful TV show Mary Berry's Country House Secrets. The show, which explores the history of British manors by way of cooking, highlights my point about automation. Many of these stately kitchens, which once needed teams of 20 or more to cook a meal, now run with staffs of only one or two chefs. The change in personnel requirements comes from new tools that have replaced workers. Stirring a bowl was once a task that required a dedicated person with the forearms of Popeye. I expect most people are not upset that an electric mixer has taken a human's job.
Machines, both software and hardware, seem best suited to replacing humans at repetitive tasks. This is true even on the bleeding edge of AI, where algorithms may soon surpass their organic counterparts. While AI may someday outperform humans at extracting diagnoses from medical imaging, that skill is the product of millions of repetitive tasks, processing more information than a single person ever could.
With machine learning analyzing patterns in images with known patient outcomes, a neural network can grow adept at recognizing a diagnosis. But anyone who has ever used an automated chat system should know the difference between "recognizing" and "understanding." Natural language processing is now common, yet automated chatbots fundamentally run on a series of branching conditional statements. The bots may not be as helpful as a live person, but bots do not feel fatigue or frustration.
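For illustration, here is a minimal sketch of that kind of branching logic. The keywords and canned replies are hypothetical, not any vendor's actual system:

```python
# A minimal sketch of the branching conditional logic behind many simple
# support chatbots. Keywords and replies here are hypothetical examples.
def reply(message: str) -> str:
    text = message.lower()
    if any(word in text for word in ("refund", "return")):
        return "I can help with returns. What's your order number?"
    if any(word in text for word in ("hours", "open", "close")):
        return "We're open 9 a.m. to 5 p.m., Monday through Friday."
    if any(word in text for word in ("human", "agent", "person")):
        return "Connecting you to a live agent..."
    # The bot "recognizes" keywords; it never understands the question.
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("When do you open?"))  # -> "We're open 9 a.m. to 5 p.m., ..."
```

Ask the same question a thousand times and the same input produces the same answer, with no fatigue and no frustration.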
Bots Deliver Emotionless Consistency
Chatbots offer consistency I'd argue no human could maintain. Most people feel frustrated when asked the same question repeatedly; the bot feels nothing and will answer the same way every time. Even the most battle-hardened sales and customer support person can feel offended or want to call someone "a time-wasting idiot."
We biological life forms are unable to press a reset button between conversations. I meditate to avoid snapping at the innocent after a stressful day. I'm willing to admit that because I'm certain everyone, at some point, has caught themselves taking a hard day out on a person who was not the cause of the day's difficulties.
Conceptually, AI, from a chatbot built on conditional expressions to a superintelligence, will operate just as coldly. If an AI decides to kill you, it will be nothing against you personally. The emotionlessness of AI as a killing machine is the plot of the short film "Slaughterbots".
Biased Programming Could Mean Bigoted AI
With bots, things just run according to their programming, without emotion. An ATM, unlike a bank teller, will treat all customers equally unless programmed to do otherwise. We are, however, bad at avoiding bias when we program these systems. That risk of bias in AI training data is why I am deeply opposed to predictive policing.
It's clear already that human blemishes are reflected in our digital offspring. Flawed data sets fed into machine learning risk creating an AI monster, a gamble we've never had to take with a kitchen appliance. And as AI becomes more prevalent, even without becoming more advanced, these problems will amplify and manifest in ways that may not be visible.
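As a minimal sketch of how that happens, consider a hypothetical loan-approval model trained on invented historical decisions (not any real lender's system). Nobody writes a bigoted rule; the model simply learns one from the data:

```python
# Minimal sketch (hypothetical data) of how bias in training data becomes
# bias in a model, even though no explicitly bigoted rule is ever written.
from sklearn.linear_model import LogisticRegression

# Features: [income_score, zip_code_group]; labels: past loan approvals.
# Suppose historical lenders systematically denied applicants in group 1.
X = [[0.9, 0], [0.8, 0], [0.7, 0], [0.9, 1], [0.8, 1], [0.7, 1]]
y = [1, 1, 1, 0, 0, 0]  # identical incomes, opposite outcomes by group

model = LogisticRegression().fit(X, y)

# The model now denies group-1 applicants regardless of income, because
# group membership is the only feature that separates the labels.
print(model.predict([[0.95, 1]]))  # -> [0]: denied, income barely matters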
As AI advances, we risk a superintelligence with the underlying prejudices of a certain three-letter organization in the southern United States. Right now, a food mixer and an AI chat system present about the same low risk of unplugging themselves from the wall and announcing themselves as gods, but that may change. AI seems destined to grow in complexity, from narrow AI to general AI and maybe even past human capacity.
We Already See Somewhat Generally Intelligent AI
AI training techniques like reinforcement learning have already produced at least slightly general AI. Reinforcement learning involves giving the AI a goal and letting it retry, potentially millions of times, learning from a reward signal which behaviors bring it closer to that goal.
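Here is a toy sketch of that trial-and-error loop, using tabular Q-learning, a far simpler cousin of the deep reinforcement learning used in the projects below; all of the numbers are illustrative:

```python
# Toy reinforcement learning: tabular Q-learning on a 1-D corridor.
# The agent is only told the reward; it discovers "move right" by retrying.
import random

N_STATES = 5              # corridor cells 0..4; the goal sits at cell 4
ACTIONS = (-1, +1)        # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value estimate per cell/action
alpha, gamma, epsilon = 0.1, 0.9, 0.1       # learning rate, discount, exploration

for episode in range(1000):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what's been learned, occasionally explore at random.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Nudge the estimate toward the reward plus discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# The learned policy: 1 ("move right") in every cell on the path to the goal.
print([0 if Q[s][0] >= Q[s][1] else 1 for s in range(N_STATES - 1)])
```

The programmer specifies the reward, not the behavior, which is exactly why the resulting behavior can surprise us.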
In 2015, Google's DeepMind used reinforcement learning to train an AI to play 49 Atari video games; the machine beat human players at most of them. A few years later, in 2019, OpenAI released a reinforcement learning project that taught AI agents to play hide-and-seek. Both AIs learned by doing, just like biological organisms. Watch the video OpenAI published along with the project for a better understanding of how this worked.
These examples are hardly at the level of human intelligence, but neither are they calculators. These AIs functioned generally and learned with minimal programming. Self-learning software carries both the possibility of creating an AI with super intellect and the possibility of teaching the AI some unknown bias.
Will AI Lead To A Godlike Bigot?
In a now-infamous TED talk, the neuroscientist Sam Harris argues that with enough time, a generally intelligent, self-learning, and possibly self-aware AI is inevitable. He believes this is true even without the continuation of Moore's law or explosive progress. Harris's reasoning: in physical systems, intelligence is a matter of information processing and time. Over a long enough timeline, he argues, unless progress is interrupted, artificial superintelligence is nearly a guarantee.
Perhaps you can dismiss such claims when they come from Harris. It's harder to ignore Stuart Russell, who leads the Center for Human-Compatible Artificial Intelligence at UC Berkeley and believes that AI poses a real risk to humanity.
How concerned is Russell? Well, if you watch to the end of the "Slaughterbots" film, you will see a message from Stuart Russell explaining that the film is not mere speculation. He says the film depicts the results of integrating and miniaturizing technology that already exists. His concerns are given credence by about 70% of AI experts.
While Russell does express fear of a godlike AI, he also seems wary of the narrow, unintelligent AI systems we have now. Russell wants AI research to shift focus from the pursuit of pure intelligence toward intelligence that aligns with human values. Clearly a simple task, given the relatively homogeneous set of values humanity can articulate (*sarcasm alert*).
Even "Dumb" AI Is Already Disturbingly Powerful
Relatively simple AI, like YouTube's system for flagging videos for demonetization and removal, showed a bias that was likely introduced and enforced without human knowledge. If a human had given the bots a list of keywords to flag, it would be possible to check whether YouTube was biased. But because the decisions were left to an AI, the algorithm had to be reverse-engineered before anyone could even point a finger. This is the real threat of AI.
Dumb AIs are already in use approving loans and housing and screening job applicants. AI is even used to help set the amount someone may have to pay in bail. On a day-to-day level, the way many people receive news is dictated by algorithms. Even the way we research is, on some level, decided by a self-learning AI. I'd be shocked if any person working at Google could fully explain how the search engine actually ranks pages.
Is Narrow AI Already Godlike?
Certainly, the sci-fi concept of artificial superintelligence is bone-chilling and perhaps possible. But if humanity's end comes at the hands of a godlike bigot, a terminator, or an army of orca whales, does it matter? The outcome is the same.
"The greatest trick the Devil ever pulled was convincing the world he didn't exist." - The Usual Suspects
AI is already here, and while it doesn't seem to be at the level of general (let alone super) intelligence, it is already causing problems, with promises of more to come. Facebook, YouTube, Google, and many other tools most people use daily are run in part by very narrow AIs that determine what humans see.
Sure, the YouTube AI should not be blamed for "radicalizing" people. And while any discussion of how much influence fake news disseminated on Facebook had on the 2016 U.S. presidential election will quickly grow polarizing, Facebook's AI probably didn't change the outcome. However, most agree these platforms create an echo chamber, and people do not know how what they see is determined.
This technology, an integral part of our daily lives, is not well understood, possibly even by the teams building these tools. Humans are losing control of AI. Or maybe AI is gaining control of humans?
The C.S. Lewis book The Screwtape Letters can be summarized as a demon advising his nephew to tempt humans in three steps:
1. Distract the humans.
2. Get them fighting among themselves.
3. Make them think you don't exist.
If an AI of divine intellect were running the world, wanting to bring about the end of humanity, would it behave any differently from the dumb AIs and simple automation we have today?
Article by Mason Pelt of Push ROI. First published in HackerNoon on December 27, 2019. Photo by DeepMind on Unsplash.