The ethics of deepfakes aren’t always black and white

Be on the lookout for 'greyfakes'

Chances are that if you’ve seen a deepfake, such as the now-infamous video of Obama ‘speaking’ comedian Jordan Peele’s words, it left you with an uncomfortable feeling. Since deepfakes emerged in December 2017, most media coverage has focused on their potentially catastrophic applications. These range from deepfake pornography and ransomfakes to smear campaigns against politicians and a new age of fake news that could worsen the global ‘post-truth’ crisis.

While these malicious uses of deepfakes and synthetic media are rightly a cause for concern, the same generative AI technologies also have positive uses. For example, Lyrebird, a Canadian startup, has partnered with the ALS Association on Project Revoice, an initiative that uses generative AI to create personalized synthetic voices for people with ALS who have lost the ability to speak. Similarly, Deep Empathy, a project by MIT and UNICEF, creates synthetic images that show what cities such as London and Tokyo would look like if they were bombed, with the aim of fostering empathy for those fleeing war.

These examples show that, like most technologies, AI-generated synthetic media has both positive and negative applications. Beyond these morally black-and-white examples, however, there are numerous ‘grey’ applications that don’t fit neatly into either category. I call these morally ambiguous examples of AI-generated synthetic media ‘greyfakes.’

By their nature, greyfakes may not stay grey forever. They could evolve to have a positive or negative impact, irrespective of the intentions behind their creation. While governments and businesses are scrambling to counter the explicitly negative uses of synthetic media and harness the positive ones, greyfakes are quietly developing under far less scrutiny.

Here I want to highlight three examples of greyfakes that are already developing rapidly, and explain why each deserves far more attention.

Synthetic resurrection

Arguably the most promising commercial applications of AI-generated synthetic media lie in the entertainment sector. Using CGI, experts have already realistically re-created the late actress Carrie Fisher as Princess Leia in Star Wars. With rapid improvements in AI-generated synthetic media, this practice will likely be automated and scaled with an even higher degree of realism.

The result may be that an actor’s death simply leads to the creation of a synthetic twin that continues to feature in films or TV shows. This is clearly on the mind of Disney, which is fighting proposed legislation in New York that would restrict deepfakes and synthetic media, claiming such restrictions could stifle creative freedom.

Yet the ethical status of this ‘synthetic resurrection’ is far from clear. As Melanie Ehrenkranz has noted, the possibility of a synthetic afterlife for celebrities could see figures such as JFK become puppets for companies, recreated to advertise products or brands.

This raises significant questions about control over our ‘digital afterlife’ and the rights to our own image. In the case of celebrities, the question is whether a line can be drawn between respectful recreation and commercial exploitation.

Another greyfake in this space is the potential use of synthetic voice audio to recreate the voices of dead loved ones. One synthetic voice startup has already observed this demand, commenting that people have asked it to recreate a dead father’s voice with the technology.

Potentially, this could form part of a new kind of bereavement therapy, or help people feel more connected with the deceased. Alternatively, it could cause significant psychological damage, stunting people’s recovery from loss and creating a dependence on synthetic copies of the dead.

Deepfake doppelgangers

It isn’t just the dead who could be synthetically recreated with morally ambiguous results. When Vice first broke the story of deepfakes, the focus was on nonconsensual ‘deepfake pornography’ involving celebrities. Since then, concerns have grown about how commodified AI tools for creating synthetic media could be used to produce revenge pornography targeting ordinary people.

However, one company, Naughty America, is pioneering the idea of consensual synthetic pornography that inserts customers into custom scenes. Even with all parties providing consent, it is hard to imagine that the commodification of this service wouldn’t lead to some people being unwillingly inserted into pornographic scenes through a widely accessible online platform.

Moreover, these services may also normalize the idea of synthetic pornography, which could further exacerbate existing concerns about the negative impact of pornography on psychological and sexual development.

Major tech companies are also investing heavily in synthetic technologies that partially or entirely replicate their users. One notable example is Facebook’s Codec Avatars, which recreate a user’s upper body in virtual reality with near-indistinguishable accuracy. The developers of these avatars claim they could be used to hold embodied conversations with other people’s avatars, enabling a level of interpersonal connectivity and realism previously unseen.

Alternatively, these avatars and similar recreations of individuals could lead to more visceral bullying and online trolling, creating more intense, but not necessarily more positive, interactions. In the long term, the normalization of these avatars may also reduce the frequency of face-to-face conversations and ‘real life’ interactions.

Hidden in plain sight: AI voice assistants

Arguably the most socially prominent forms of synthetic media are voice assistants and AI-enhanced ‘robocalls.’ The popularity of Amazon’s Alexa and Google Assistant has perhaps made us comfortable living in a world where the organic and the synthetic intermix. However, until recently these synthetic voices have not sounded like real people; they are familiar but clearly synthetic.

Google’s announcement of its Duplex AI voice call assistant in 2018 changed this. Duplex is an automated service that can book appointments on your behalf, making live calls with a highly realistic synthetic voice that imitates intimately human elements of speech, including pauses and verbal cues such as “mhmmm.”

While Duplex is not designed to imitate a specific person’s voice, the intention was clearly to create a synthetic voice assistant that could pass as human on the phone, giving the other party the illusion that they were speaking with a real person.

The immediate and obvious concern was how this technology could be abused, much as in the Naughty America example, to enhance scams, identity theft, and other explicitly malicious schemes.

However, the real ‘grey’ question is whether Duplex, and a future where synthetic voices are no longer distinguishable from real ones, undermines genuine human interaction. This issue was expertly discussed by Natasha Lomas, who described Duplex as “deception by design,” built without an “appreciation of the ethical concerns at play around AI technologies that are powerful and capable enough of passing off as human.”

Google has since ensured that Duplex calls start with an announcement that the caller is a synthetic voice assistant, not a real person. However, this hasty revision shows that important ethical questions, such as whether certain intimately human interactions should be protected from synthetic media, are far from settled.

A framework for the future

The long-term impact of AI-generated synthetic media on society is hard to predict. However, these examples of greyfakes show that some areas more desperately need to be explored and questioned than others. Moving forward, we need to ensure generative AI and synthetic media are developed responsibly, encouraging potentially positive applications while ensuring they do not unwittingly cause harm.

This will likely require the creation of a code of practice, and a wider societal discussion about whether some areas of human interaction should be treated as sacred, beyond the reach of technological interference.

If we want to avoid the all-too-familiar scenario of reacting only after harm has occurred, it’s essential we broach the issue of greyfakes as soon as possible. Like many emerging technologies whose broader impacts we are only beginning to understand, greyfakes show that things are rarely black and white.
