U.N. Calls for Moratorium on AI Technologies


This article was written by Edward Hsiang

While evil sentient AI like Skynet in The Terminator or VIKI in I, Robot might have you believing in a physical manifestation of a robo-apocalypse, the truth of the matter might be more sinister and camouflaged than you’d think. On September 15th, the United Nations’ human rights chief, Michelle Bachelet, made an official statement calling for a moratorium (temporary prohibition) on the use of artificial intelligence technology in ways that could infringe upon human rights. As opposed to “sentient” kill-bots, the dangers of AI in our modern society fall more into the category of profiling and systematic oppression. “We cannot afford to continue playing catch-up regarding AI — allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact,” said Bachelet in her statement following an official report by the U.N. Human Rights Council on the risks posed by AI-powered technologies.

For those less informed, artificial intelligence has been progressing in the background of our society at breakneck speed, with constant breakthroughs in the field of machine learning. In a comment, tech mogul Elon Musk wrote: “The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.” Google’s DeepMind division appears to be at the forefront of AI development, using open-ended algorithms designed to be extremely flexible. Unlike most other AI programs, DeepMind’s systems are built with no pre-defined purpose and instead learn how to accomplish tasks from experience. In 2013, DeepMind published results on teaching its AI how to play and surpass human abilities in old-school Atari games such as Pong and Enduro, notably outperforming other AIs designed specifically for those games (Mnih et al., 2013). This reinforcement learning approach allowed the AI to figure out how to play without any alterations to its base code: it simply “played” the game over and over again and was told how well it scored. Famously, between 2015 and 2017, DeepMind’s AlphaGo program was able to beat top-ranking players of Go, a board game previously thought to be too complicated for an AI to play at more than an amateur level. While chess has a small enough number of moves per turn that modern engines can search deep into the game tree in real time, the same cannot be said for Go, which is played on a 19×19 grid, gives players roughly 250 legal moves per turn compared to chess’s 35 or so, and has players aiming to surround their opponents’ pieces. AlphaGo therefore demonstrated DeepMind’s capacity for pattern recognition at a highly sophisticated level.
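The feedback loop described above — play repeatedly, get told how well you did, adjust — can be sketched in a few lines. The sketch below is tabular Q-learning on an invented toy environment (a five-square corridor with a goal at one end), far simpler than DeepMind’s deep Q-networks, but it illustrates the same reward-driven idea; all names and hyperparameters here are illustrative assumptions, not DeepMind’s actual code.

```python
import random

N_STATES = 5          # positions 0..4 in a tiny corridor; position 4 is the goal
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: the agent's running estimate of how good each action is in each state.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy deterministic environment: reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):        # "play" the game over and over
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what we know, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + best future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

Nothing in the environment’s rules is hard-coded into the learner; only the reward signal shapes its behavior, which is the property the article describes.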

While these powerful programs can have immense applications to benefit our lives in the form of self-driving cars and resource distribution, the U.N.’s concerns are not unfounded. Take, for instance, the eerie feeling you get when advertisement algorithms on social media are just a little too on point, and imagine a world where our socio-economic status is tied to similar algorithms. Highly sophisticated artificial intelligence is not immune to bias, especially the bias of its creators. Racial bias is already a problem at the forefront of our society. Online real estate company Zillow has published data showing how Black applicants are disproportionately denied mortgages compared to White applicants regardless of where in the US they live (Zillow, 2021). An internal Facebook presentation from 2016 acknowledged that “64% of all extremist group joins are due to our recommendation tools” (Wall Street Journal, 2020), and the company has only recently stated that it would no longer algorithmically recommend political groups to members of its platform. Other tech giants also face allegations of information misuse: in 2019, Bloomberg published an exposé on Amazon employees being hired to transcribe voice recordings from its virtual assistant Alexa (Bloomberg, 2019). It isn’t hard to imagine the human rights infringements that could result from using highly efficient AIs for similar targeting or even social manipulation purposes. Major advertising agencies already spend millions of dollars on psychology research into how to influence consumers; combining that with the pattern recognition and strategy-forming performance of an AI like DeepMind’s definitely steps into the moral grey zone.

Other applications the U.N. has warned about involve facial recognition software. A quick Google search shows the growth of anti-facial-recognition products over the past decade, from bizarre make-up tutorials to clothes and accessories designed to confuse cameras. China, for example, has been heavily criticized for its use of AI technologies in mass surveillance that allow authorities to track and identify individuals by ethnicity in public spaces. Meanwhile, the city of Portland, Oregon, passed a broad ban on facial recognition software in September 2020; the European Commission proposed a ban on the use of AI for tracking individuals and ranking their actions in April 2021; and Amnesty International has launched a “Ban the Scan” initiative to protest the use of facial recognition software by New York City government agencies.

It isn’t surprising that mainstream media has latched onto these new-age horror stories of AI dystopia in examples such as Netflix’s Black Mirror or HBO’s Westworld, but how far away are we really from these fictional stories? Back in 2018, China had plans to assign all 1.4 billion of its citizens a social credit score based on how they behave, tracking them with facial recognition software and their online traffic. While most of the focus has been on businesses that deal in unsavoury practices, air and transit tickets had been denied to over 30 million people deemed untrustworthy as of 2019 (South China Morning Post, 2019). Dating websites in the country also allow individuals to display their social scores as a prestige symbol, much like in the famous Black Mirror episode “Nosedive.”

Regardless of where you stand on the merits or demerits of artificial intelligence, there can be no denying that the current lack of guidelines and restrictions in this field is a reason for concern. While no country has publicly addressed the U.N.’s comments so far, many have previously echoed similar fears and are already working on solutions.
