About 96% of deepfakes on the internet are pornography, and virtually all of them depict women. The fear is that deepfakes can be used to extort, humiliate, or harass victims.
Deepfakes have taken our society by storm lately, with altered images and videos of people from all walks of life, from ordinary citizens to celebrities, being posted on social media platforms, leaving victims traumatized and at risk of depression and other mental health harms.
Worryingly, we have witnessed a growing debate over voice and video deepfakes (the term is a blend of ‘deep learning’ and ‘fake’) from the US to India, as Bollywood celebrities such as Katrina Kaif, Rashmika Mandanna (who starred in the recent blockbuster ‘Animal’ with Ranbir Kapoor), Kajol and Alia Bhatt have fallen prey to deepfakes.
Indeed, women, teenagers and children are the soft targets of deepfakes or synthetic media.
There are estimated to be over 15,000 deepfake videos in circulation right now. And now that it takes only a day or two to make a new deepfake look convincingly realistic, that number could rise very rapidly.
Some deepfakes are created just for fun, while others try to manipulate your opinions. What is more troubling, according to Justin Thomas, a professor of psychology at Zayed University in the UAE, is that deepfakes can be used to embarrass, harass and even blackmail their targets.
The technology won’t be limited to targeting celebrities, as personal deepfakes are already here. Sonit Jain, CEO of GajShield Infotech in Mumbai, India, tells A Lotus In The Mud, “The surge in deepfakes can be attributed to the growing accessibility of deep fake technology and its application in various domains. Deep fakes have found utility in entertainment, political manipulation, and even fraudulent activities.”
What are deepfakes?
Deepfake videos are clips that have been altered using artificial intelligence (AI) to swap one person’s voice or face with another’s.
Tools for making deepfakes have recently become much cheaper and more accessible, amplifying conversations about potential creative applications as well as potential risks — such as spreading misinformation and manipulating viewers’ memories.
The word ‘deepfakes’ originated in December 2017 with an anonymous user on the online platform Reddit who called himself ‘deepfakes’. He applied deep-learning algorithms to digitally superimpose faces of celebrities on performers in pornographic content.
Deepfakes first grabbed mainstream attention in 2019 with fake videos of Meta CEO Mark Zuckerberg and former US House Speaker Nancy Pelosi. If you have seen former President Barack Obama calling Donald Trump a “complete dipshit”, or Zuckerberg boasting of having “total control of billions of people’s stolen data,” you now know what a deepfake is.
The social impact of deepfakes
As deepfake technologies become more sophisticated and accessible to the broader online community, their use puts women participating in digital spaces at increased risk of experiencing online violence and abuse.
In a ‘post-truth’ era, the blurring of what is real and what is fake allows malevolent actors to manipulate public opinion or publicly ruin the reputations of individuals.
While scholarly research on the topic is sparse, a recent study titled ‘Deepfakes and Harm to Women’ by Jennifer Laffier and Aalyia Rehman from Ontario Tech University explored it. The study suggests that deepfakes are a relatively new method of deploying gender-based violence and eroding women’s autonomy in their online and offline worlds. It highlighted the unique harms women suffer at both the individual and systemic levels, and the need for further inquiry into the issue through victims’ experiences.
About 96 percent of deepfakes on the internet were pornography, according to an analysis by AI firm DeepTrace Technologies, and virtually all pornographic deepfakes depicted women.
“People viewing explicit images of you without your consent – whether those images are real or fake – is a form of sexual violence,” according to Kristen Zaleski, director of forensic mental health at Keck Human Rights Clinic at the University of Southern California.
Deepfakes have interpersonal consequences too
Video deepfakes have the potential to alter our memories and even implant false ones, and they can also shift a person’s attitudes toward the target of the deepfake.
One recent study revealed that exposure to a deepfake depicting a political figure in a negative light significantly worsened viewers’ attitudes toward that politician.
More worryingly, given social media’s ability to target content to specific political or demographic groups, the study revealed that micro-targeting the deepfake to groups most likely to be offended (Christians) amplified this effect relative to sharing the deepfake with a general population.
According to a paper in the journal ‘Cyberpsychology, Behavior, and Social Networking’ by Jeffrey T. Hancock and Jeremy N. Bailenson, “An important harm we have not yet considered is the non-consensual victim portrayed in a deep fake to be doing or saying something that they did not.”
Given the power of the visual medium in altering our beliefs, and the influence that such deepfakes can have on self-identity, the impact on a victim’s life can be devastating.
Although empirical research to date is limited, it is not difficult to imagine how deepfakes could be used to extort, humiliate, or harass victims, the researchers added.
However, they say that it is possible for people to develop resilience to novel forms of deception such as deepfakes. Nor is the technology purely malicious: deepfake techniques are already used in Hollywood movies, for example in the posthumous portrayal of Princess Leia in recent Star Wars films after the actor Carrie Fisher died.
The New York Times reported recently that platforms like TikTok even allow AI-generated content of public figures, including newscasters, so long as they do not spread misinformation. “Parody videos showing AI-generated conversations between politicians, celebrities or business leaders — some dead — have spread widely since the tools became popular,” the paper adds.
Eventually, governments will have to intervene. After a deepfake video depicting a woman in a bikini with actress Rashmika Mandanna’s face went viral, TechCrunch reported in November, quoting a senior Indian official, that “India is drafting rules to detect and limit the spread of deepfake content and other harmful AI media,” following reports of the proliferation of such content on social media platforms in recent weeks.
How to detect deepfakes
- Pay attention to the face. High-end deepfake manipulations are almost always facial transformations.
- Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Does the apparent age of the skin match that of the hair and eyes? Deepfakes may be incongruent on some dimensions.
- Pay attention to the eyes and eyebrows. Do shadows appear in places that you would expect? Deepfakes may fail to fully represent the natural physics of a scene.
- Look at the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Once again, deepfakes may fail to fully represent the natural physics of lighting.
- Look at facial hair or lack thereof. Does this facial hair look real? Deepfakes might add or remove a mustache, sideburns, or beard. But, deepfakes may fail to make facial hair transformations fully natural.
- Pay attention to facial moles. Does the mole look real?
- Pay attention to blinking. Does the person blink enough or too much? (A rough blink-counting sketch in code follows this list.)
- Pay attention to the lip movements. Some deepfakes are based on lip-syncing. Do the lip movements look natural?
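For readers who want to see what a crude version of the blinking check might look like in practice, here is a minimal sketch in Python using the open-source OpenCV library. It estimates how often a detected face shows no detectable open eyes, a loose stand-in for blink frequency; the function name and logic are illustrative assumptions, not an established detection tool, and real detectors are far more sophisticated.

```python
# Toy blink-frequency heuristic using OpenCV's bundled Haar cascades.
# Humans blink roughly every 2-10 seconds, so a face that almost never
# (or constantly) shows closed eyes is a weak hint of manipulation
# (a hint, never proof).
import cv2

def closed_eye_ratio(video_path: str) -> float:
    """Return the fraction of face-bearing frames with no detectable open eyes."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(video_path)
    face_frames = closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces[:1]:  # analyze the most prominent face only
            face_frames += 1
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            if len(eyes) == 0:  # the Haar eye detector tends to miss closed eyes
                closed_frames += 1
    cap.release()
    return closed_frames / max(face_frames, 1)
```

A score near 0 or near 1 over a long clip would be a reason for closer scrutiny of the video, nothing more.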
4 ways to protect yourself against deepfakes
1. Look for the following characteristics of a deepfake video:
- jerky movement
- shifts in lighting from one frame to the next
- shifts in skin tone
- strange blinking or no blinking at all
- lips poorly synched with speech
- digital artifacts in the image
2. Educate yourself and others on how to spot a deepfake. Make sure you are media literate and use good-quality news sources. “The main advice at the moment is not to exaggerate the threat or try to recognize voice/video deepfakes where they don’t exist. Nevertheless, you need to be aware of possible threats and be prepared for advanced deep fake fraud becoming a new reality in the near future,” Dmitry Anikin, Senior Data Scientist at Kaspersky, tells A Lotus In The Mud.
3. Have a secret code word that every family member knows, but that criminals wouldn’t guess. If someone claiming to be your daughter, grandson or nephew calls, asking for the code word can separate real loved ones from fake ones. Pick something simple and easily memorable that doesn’t need to be written down (and isn’t posted on Facebook or Instagram).
4. Ask the other person in the video call to turn their head around and to put a hand in front of their face. Those maneuvers can be revealing because deepfakes often haven’t been trained to do them realistically. The most reliable way to smoke out deepfakes may be to insist on an in-person meeting.
There are also software tools that automatically look for AI-generated glitches and patterns to separate legitimate audio and video from fakes.
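No specific tool is named here, so purely as a toy illustration of the kind of signal such software aggregates, the Python sketch below (assuming OpenCV and NumPy are installed; the function name is hypothetical) scores one artifact from the checklist above: abrupt frame-to-frame lighting shifts. Production detectors rely on trained neural networks rather than hand-written rules like this.

```python
# Toy signal: how much overall brightness jumps between consecutive
# frames. Big, repeated spikes within a single shot can hint at the
# frame-to-frame lighting shifts listed above, though ordinary scene
# cuts spike too, so this is a weak signal at best.
import cv2
import numpy as np

def lighting_shift_scores(video_path: str) -> list:
    """Return the mean absolute brightness change between consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            scores.append(float(np.abs(gray - prev).mean()))
        prev = gray
    cap.release()
    return scores
```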