The Near Future of Deepfakes Just Got Way Clearer

India’s election was ripe for a crisis of AI misinformation. It didn’t happen.

Before the start of India’s general election in April, a top candidate looking to unseat Prime Minister Narendra Modi was not out wooing voters on the campaign trail. He was in jail. Arvind Kejriwal, the chief minister of Delhi and the head of a political party known for its anti-corruption platform, was arrested in late March for, yes, alleged corruption. His supporters hit the streets in protest, decrying the arrest as a politically motivated move by Modi aimed at weakening a rival. (Kejriwal has maintained his innocence, and the Indian government has denied that politics played a role.)

Soon after the arrest, Kejriwal implored his supporters to stay strong. “There are some forces who are trying to weaken our country and its democracy,” he said in a 34-second audio clip posted to social media by a fellow party member. “We need to identify those forces and fight them.” It was not Kejriwal’s actual voice, but rather a convincing AI voice clone reading a message that the real Kejriwal had written from behind bars. A couple of days later, Modi’s supporters mocked Kejriwal’s misfortune by sharing their own AI response: a montage of images in which Kejriwal is strumming a guitar from inside a prison cell, singing a melancholic Hindi song. In classic AI fashion, there are mangled fingers and a pastiche of human faces.

Throughout this election cycle—which ended yesterday in a victory for Modi’s Bharatiya Janata Party after six weeks of voting and more than 640 million ballots cast—Indians have been bombarded with synthetic media. The country has endured voice clones, convincing fake videos of dead politicians endorsing candidates, automated phone calls addressing voters by name, and AI-generated songs and memes lionizing candidates and ridiculing opponents. But for all the concern over how generative AI and deepfakes are a looming “atomic bomb” that will warp reality and alter voter preferences, India foreshadows a different, stranger future.

Before this election, India was rightly concerned about deepfakes. Cheap, accessible AI tools such as voice cloning have made it possible for almost anyone to create a political spoof, and the country had already witnessed AI scandals. In the lead-up to four state elections at the end of last year, the fact-checking publication Boom Live clocked roughly 10 election-related audio deepfakes, according to the deputy editor Karen Rebelo. If a dozen audio fakes emerged during just a few state elections, Rebelo thought, the national election would see unprecedented volumes. “It was truly terrifying,” she told me. “I thought, We’re going to see one a day or one an hour.”

And indeed deepfakes, and especially audio clones, surfaced throughout the 2024 election cycle—including ones involving false election-result predictions, simulated phone conversations, and fake celebrity criticisms. In the first week of voting, deepfaked clips went viral of the Bollywood stars Aamir Khan and Ranveer Singh criticizing Modi—a big deal considering that India’s film stars don’t often chime in on politics. But the dire fears of Rebelo and others haven’t materialized. Of the 258 election-related fact-checks that Boom Live did, just 12 involved AI-generated misinformation. Other tallies ran somewhat higher: Digvijay Singh, a co-founder of Contrails.ai, a deepfake-detection firm in India, told me that he helped fact-checkers investigate and debunk a little over 30 pieces of AI-generated media in April and May.

You might need only one truly believable deepfake to stir up violence or defame a political rival, but none of the ones in India appears to have had that effect. The closest India got was when a clip of India’s home minister, Amit Shah, that falsely showed him pledging to abolish affirmative action for lower castes prompted arrests and threats of violence. Some outlets misreported the clip as a deepfake, but it had just been edited. In part, deepfakes haven’t panned out because of the technology itself: The videos and images were not that high-quality, and audio clips, although they sometimes crossed the uncanny valley, were run through detection tools from companies such as Contrails.ai, which, though not perfect, can spot signs of manipulation. “These were easy to debunk, because we had the tools,” Rebelo said. “I could test it immediately.”

The main purpose of AI in Indian politics has not been to create deepfakes as they have conventionally been understood: an AI spoof of a candidate saying or doing something damaging, with ambiguity around whether it’s real or fake. Days before Slovakia’s election last fall, for example, a fake clip emerged of a major candidate talking about rigging the vote. Instead, in India, politicians and campaigns have co-opted AI to get out their messages. Consider maybe the weirdest use of AI during the election: The team of one candidate on the ballot for the Congress Party, India’s national opposition, used AI to resurrect his deceased politician father in a campaign video. In the clip, H Vasanthakumar, a member of Parliament until his death in 2020, endorses his son as his “rightful heir.” The hyper-real video, in which the late Vasanthakumar is dressed in a white shirt and a tricolored scarf, garnered more than 300,000 views on Instagram, and more on WhatsApp.

At the same time, official social-media accounts of political parties have shared dozens of AI-augmented posts in jest, to troll, or as satire. Despite name-checking deepfakes as a “crisis” prior to the start of the election, Modi retweeted an obviously AI-created video of himself dancing to a Bollywood tune. Another meme grafted Modi’s face and voice over an artist’s in a music video titled “Thief,” intended to criticize his close ties to billionaires. Whether these memes are believable is sometimes beside the point. Deception is not the primary goal—Indian voters can easily tell that Modi is not actually singing in a music video. It’s to drive home a message on social media.

Synthetic media has especially come into play with personalized AI robocalls. There are clear pitfalls: The United States made using AI-generated voices for unsolicited calls illegal after New Hampshire residents received ones in the voice of President Joe Biden, urging them to skip voting in the primaries. But in India, AI robocalls are now a $60 million industry, and so far have been used most widely by actual politicians. For a national leader such as Modi, whose main language is Hindi but who presides over a country with 22 official languages and hundreds of dialects, AI-generated calls enabled him to endorse candidates in Telugu, a South Indian language he doesn’t speak. Local leaders also used AI to deliver personalized campaign calls in regional dialects to their respective constituents, addressing voters by name. More than 50 million AI-generated calls are estimated to have been made in the two months leading up to the Indian election in April—and millions more were made in May, my reporting revealed.

Although deepfakes have not been as destructive in India as many had feared, the use of generative AI to make people laugh, create emotional appeals to voters, and persuade people with hyper-personalized messages contributes to what academics describe as a gradual accumulation of small harms that erodes trust. Politicians who embrace generative AI, even with good intentions, may be flirting with danger. Feigning a personal connection with voters through AI could be a stepping stone toward the real risk of targeted manipulation of the public. If personalized voice clones become normal, more troubling uses of the technology may no longer seem out of bounds. Similarly, a barrage of mostly innocuous AI content could still damage trust in democratic institutions and political structures by fuzzing the line between what’s real and what’s not. India has witnessed many cases of politicians falsely trying to spin damaging clips as deepfakes—a much more believable argument when politicians are already sharing their own AI messages.

As the U.S. and other countries head to the polls this year and reckon with the political consequences of AI, they may see something similar to what played out in India. The Democratic National Committee, for example, mocked a clip of Lara Trump singing by creating an AI-generated diss track. Deepfakes might still be a problem going forward as the technology progresses. “The question is whether the volume and effectiveness of the malicious and deceptive usages within this spectrum of human and political expression will grow,” Sam Gregory, the executive director of the human-rights nonprofit Witness, told me. “All the trend lines for synthetic-media production point in that direction.”

For now, there are still bigger misinformation concerns than deepfakes. On May 15, Rebelo’s team at Boom Live fact-checked a video going around on social media that showed a major rival to Modi, Rahul Gandhi, predicting that the prime minister would win another term. Testing the audio clip on Contrails.ai showed that there had been no manipulation using AI. It was still fake: Someone had taken a video of Gandhi claiming that Modi would not stay in office and heavily altered it with jump cuts. Even in the era of AI, “just age-old edits might still be the most impactful attack,” Contrails.ai’s Singh told me.
