Secretly recorded. Caught on camera. Listen for yourself. Seeing is believing. As CNN recently pointed out in a special report on a controversial emerging technology, audio and video have functioned as a bedrock of truth for more than a century.
“Not only have sound and images recorded our history,” the report noted, “they have also informed and shaped our perception of reality.” But times change, and people can no longer always trust their eyes or ears thanks to generative adversarial networks (GANs), a specialized subset of artificial intelligence technology.
Very simply, GANs are two networked algorithms playing a cat-and-mouse game against each other (hence, the “adversarial”). One algorithm (the “discriminator”) tries to spot counterfeits mixed into a data set, while the other (the “generator”) uses the discriminator’s feedback to produce disturbingly better counterfeits (hence, “generative”). And this technology has attracted the attention of lawmakers because it makes it relatively easy to create deepfakes, a.k.a. convincing clips of people doing or saying things that they never did or said.
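For the technically curious, that cat-and-mouse loop can be sketched in a few dozen lines. The toy below is illustrative only: every number in it (a “real” distribution centered at 4, a two-parameter generator, a logistic discriminator) is invented for the sketch, and real GANs use deep neural networks trained on images or audio. The adversarial dynamic, however, is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centered at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

# Generator: maps noise z ~ N(0, 1) to a sample, g(z) = a*z + b.
# Discriminator: logistic classifier, d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, n = 0.01, 32

for step in range(3000):
    z = rng.normal(size=n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    # These are the binary cross-entropy gradients w.r.t. (w, c).
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w -= lr * (np.mean((p_real - 1) * real) + np.mean(p_fake * fake))
    c -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    p_fake = sigmoid(w * fake + c)
    grad_fake = (p_fake - 1) * w          # chain rule through the classifier
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

print(f"generated mean is about {b:.2f}; real data mean is 4.0")
```

Note that the generator never sees the real data directly; it improves only through the discriminator’s feedback, which is the same property that lets GANs fill gaps in training sets.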
As democratic societies struggle to counter fake news in printed form, deepfakes have appeared on the scene and present a potentially much more serious threat to Western civilization, because they create an opportunity for bad actors to supercharge disinformation.
Late-night comedians currently use this technology to playfully skewer politicians, while more reptilian individuals have attempted to destroy the reputations of celebrities by creating deepfake “revenge porn.” And when you think about other negative applications, the sky’s the limit.
National security experts, of course, worry about foreign actors deploying deepfakes to manipulate public opinion, a real concern considering the misinformation operations uncovered to date. But domestic concerns abound. Imagine giving political campaigns the ability to seamlessly insert the voice and image of an opponent into compromising video clips, or giving criminals the power to produce time-stamped video clips offering supposedly bulletproof alibis, or empowering police to manufacture body-cam evidence of citizens resisting arrest, or enabling hate groups to fabricate historical footage of Winston Churchill, Franklin Roosevelt, and Joseph Stalin discussing plans to make Hitler look bad by manufacturing the Holocaust.
The potential for this kind of assault on reality has put a serious regulatory target on GANs, which are the subject of several bills sitting in the U.S. Congress that could restrict their development or even prohibit it. But here’s the rub. The technology behind deepfakes might be just what the Western world needs to keep up in the AI race.
Keep in mind that a global game for AI leadership is afoot—and the stakes could not be higher. As Russian President Vladimir Putin famously observed with some hyperbole in 2017, whoever leads the development of artificial intelligence “will become the ruler of the world.”
Like Russia, China is serious about obtaining the AI crown, which is why democratic nations have a strategic interest in leading the development of machine learning (what most people really mean when they talk about AI). But the West started running this race with a significant disadvantage.
Machine learning—the autonomous, iterative learning by computers without the need for explicit new programming—occurs by processing training sets of big data, meaning unfathomably large data sets that were generally unavailable before the arrival of the Internet, smartphones, RFID, and sensors that passively spew data. As a result, the strength of any nation’s AI, not to mention its use-case outcomes, depends on the volume and quality of its available data. And when accumulating data, Russia and China are not hindered by Western ethics, laws, or human rights.
The point of this article is not to promote government surveillance or advocate freeing AI developers around the world from data privacy or retention restrictions. I am also not dismissing the threat that deepfakes represent. My objective is to highlight how GAN technology enables AI developers to augment or complete data sets where data is unavailable, incomplete, or otherwise lacking. And this can be done in ways that do not violate the principles of democratic societies.
In other words, the technology behind deepfakes has the potential to empower AI researchers and developers from rule-of-law jurisdictions to wipe out the advantage of competitors in authoritarian regimes, where few written limitations govern how much data domestic companies can collect and use (and even fewer, if any, are enforced), and where government data collection and usage face no restrictions whatsoever.
GAN technology has plenty of other positive things to offer. Medical researchers use it to improve cancer diagnoses, autonomous vehicle developers use it to improve automotive safety, and astronomers use it to map the universe. There are also significant homeland security and military applications, including ones that could improve defensive capabilities and help prevent election tampering. There is truth behind “it takes a thief to catch a thief”: the best hunters of online deepfakes are GANs. It’s kinda what they do.
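The thief-catching claim can be made concrete with a hedged sketch. Below, a simple logistic “discriminator” is trained to separate hand-made one-dimensional numbers standing in for real versus synthetic media features; the distributions, sample sizes, and test points are all invented for illustration, and an actual deepfake detector would learn far richer features from images or audio. The point is only that the discriminator half of the GAN recipe is, by construction, a fake detector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in features: "real" media scores cluster near 0, synthetic
# ones near 1.5. Both distributions are invented for illustration.
real = rng.normal(0.0, 1.0, size=500)
fake = rng.normal(1.5, 1.0, size=500)

x = np.concatenate([real, fake])
y = np.concatenate([np.zeros(500), np.ones(500)])  # label 1 = synthetic

# A logistic-regression "discriminator" fit by gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

def score(s):
    """Probability that a sample is synthetic, per the trained model."""
    return 1.0 / (1.0 + np.exp(-(w * s + b)))

print(f"real-looking sample (0.0) scores {score(0.0):.2f}")
print(f"fake-looking sample (3.0) scores {score(3.0):.2f}")
```

Once trained, the model assigns high “synthetic” probability to samples that look like the fakes it was shown, which is exactly the role a GAN’s discriminator plays at scale.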
But sadly, people writing about deepfakes often fail to highlight these positive uses of the underlying technology, making it easy for a chorus of tech negativists to dominate public debate. Instead of having a conversation based on fear and misunderstanding, Western nations need to explore deploying laws, policies, and regulations that are grounded in age-old legal concepts like defamation and slander to counter the threat posed by deepfakes.
After all, luddites don’t win wars, and knee-jerk calls for regulatory action just risk putting unnecessary restrictions on the development of a technology that could very well hold the key to democracy’s survival.
This article was originally published in the Ivey Business Journal on 3 March.