2024 is set to be the largest global election year in history. This coincides with the rapid rise of deepfakes. In the Asia-Pacific region alone, deepfakes surged 1,530% from 2022 to 2023, according to a Sumsub report.
Ahead of the Indonesian elections on February 14, a video of the late Indonesian President Suharto defending the political party he once headed went viral.
The AI-generated deepfake video that cloned his face and voice racked up 4.7 million views on X alone.
This was not a one-time incident.
In Pakistan, a deepfake of former Prime Minister Imran Khan emerged around the national elections, declaring that his party would boycott them. Meanwhile, in the US, voters in New Hampshire heard a deepfake audio clip of President Joe Biden telling them not to vote in the presidential primary.
Politician deepfakes are becoming increasingly common, especially with 2024 set to be the biggest global election year in history.
At least 60 countries and more than four billion people are said to be voting for their leaders and representatives this year, making deepfakes an issue of grave concern.
According to a Sumsub report in November, the number of deepfakes worldwide rose 10-fold from 2022 to 2023. In the Asia-Pacific region alone, deepfakes rose by 1,530% during the same period.
Online media, including social platforms and digital advertising, saw the largest rise in identity fraud at 274% between 2021 and 2023. Professional services, health care, transportation, and video gaming were also among the industries affected by identity fraud.
Asia is not prepared to deal with deepfakes in elections in terms of regulation, technology and education, said Simon Chesterman, senior director of AI governance at AI Singapore.
In its 2024 Global Threat Report, cybersecurity firm CrowdStrike said that with the number of elections scheduled this year, nation-state actors, including China, Russia and Iran, are very likely to launch misinformation or disinformation campaigns to sow unrest.
“The most serious interventions would be if one of the major powers decided they wanted to disrupt elections in a country — that would likely be more impactful than manipulation by political parties on the margins,” Chesterman said.
Although many governments have tools in place (to prevent online lies), the worry is that the genie will be out of the bottle before there is time to put it back in.
Simon Chesterman
Senior Director, AI Singapore
However, he said that most deepfakes will be created by actors within the countries in question.
Local actors may include opposition parties and political dissidents or far-right and left-wing extremists, said Carol Soon, a senior research fellow and head of society and culture at the Institute for Policy Studies in Singapore.
The dangers of deepfakes
At the very least, Soon said, deepfakes pollute the information ecosystem and make it difficult for people to find accurate information or form informed opinions about a party or candidate.
Voters may also be alienated from a particular candidate if they see content related to a scandalous issue go viral before it is debunked as fake, Chesterman said. “Although many governments have tools (to prevent online lies), the worry is that the genie will be out of the bottle before there is time to put it back in.”
“We've seen how quickly X can be flooded with fake pornographic images of Taylor Swift — these things can spread incredibly quickly,” he said, adding that regulation is often inadequate and incredibly difficult to enforce. “It's often too late.”
Adam Meyers, head of counter adversary operations at CrowdStrike, said deepfakes could also trigger confirmation bias in people: “Even if they know in their hearts that it isn't true, if it's the message they want and the thing they want to believe in, they're not going to let that go.”
Chesterman also said fake footage showing misconduct during elections, such as ballot stuffing, could make people lose confidence in the legitimacy of the election.
On the flip side, candidates may deny a truth about themselves that might be negative or unpleasant, and attribute it to deepfakes instead, Soon said.
Who should be responsible?
Chesterman said there is now a realization that social media platforms need to take more responsibility because of the semi-public role they play.
In February, 20 leading technology companies, including Microsoft, Meta, Google, Amazon, and IBM, as well as AI startup OpenAI and social media companies like Snap, TikTok, and X, announced a joint commitment to combat the deceptive use of AI in elections this year.
Soon said the technology accord signed represents an important first step, but its effectiveness will depend on implementation and enforcement. With technology companies adopting different measures across their platforms, a multi-pronged approach is needed, she said.
Technology companies will also have to be very transparent about the kinds of decisions they make, for example, the kinds of processes they put in place, Soon added.
But Chesterman said it's also unreasonable to expect private companies to carry out what are essentially public functions. He added that deciding what content to allow on social media is difficult to police, and companies may take months to decide.
“We should not rely solely on the good intentions of these companies,” Chesterman added. “That's why regulations and expectations need to be set for these companies.”
To that end, the Coalition for Content Provenance and Authenticity (C2PA), a non-profit organization, has introduced digital credentials for content, which show viewers verified information such as the content creator's identity, where and when it was created, and whether or not AI was used to create the material.
C2PA member companies include Adobe, Microsoft, Google, and Intel.
OpenAI announced earlier this year that it will apply C2PA content credentials to images created with its DALL·E 3 offering.
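The real C2PA specification binds provenance data to a file using COSE signatures and X.509 certificate chains, but the core idea can be illustrated more simply: tie metadata about who made the content, and whether AI was involved, to a hash of the exact media bytes, then sign the bundle so any later edit invalidates the credential. The Python sketch below, using Ed25519 keys from the widely used `cryptography` package, is a hedged toy illustration of that mechanism only, not the actual C2PA format; the function names, fields, and "example-studio" creator are hypothetical.

```python
# Toy illustration of the mechanism behind content credentials:
# provenance metadata is bound to a hash of the media bytes and signed,
# so any post-signing edit breaks verification. This is NOT the real
# C2PA format (which uses COSE signatures and X.509 certificates
# embedded in the file); names and fields here are illustrative only.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_claim(media: bytes, creator: str, generator: str) -> dict:
    """Build a provenance claim tied to the exact media bytes."""
    return {
        "creator": creator,        # who produced the content
        "generator": generator,    # e.g. which AI model, if any
        "content_sha256": hashlib.sha256(media).hexdigest(),
    }


def sign_claim(claim: dict, key: Ed25519PrivateKey) -> bytes:
    # Canonical JSON so signer and verifier hash identical bytes.
    return key.sign(json.dumps(claim, sort_keys=True).encode())


def verify(media: bytes, claim: dict, signature: bytes, public_key) -> bool:
    """Check the signature AND that the media hash still matches."""
    if hashlib.sha256(media).hexdigest() != claim["content_sha256"]:
        return False  # file was altered after signing
    try:
        public_key.verify(signature, json.dumps(claim, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False


key = Ed25519PrivateKey.generate()
image = b"...image bytes stand in for a real file..."
claim = make_claim(image, creator="example-studio", generator="DALL-E 3")
sig = sign_claim(claim, key)

print(verify(image, claim, sig, key.public_key()))                # True
print(verify(image + b"tampered", claim, sig, key.public_key()))  # False
```

The design point the standard and this sketch share is that the credential travels with a hash of the content itself, so tampering with either the pixels or the provenance claim is detectable without trusting the platform that served the file.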
In a Bloomberg House interview at the World Economic Forum in January, Sam Altman, co-founder and CEO of OpenAI, said the company is “very focused” on ensuring its technology is not used to manipulate elections.
“I think it would be terrible if I said, ‘Oh yeah, I'm not worried. I feel comfortable.’ We are going to have to watch this very closely this year (with) very careful monitoring (and) very strong feedback,” he said.
“I think our role is very different from that of a distribution platform,” such as a social media site or news publisher, he said. “We have to work with them, so it's like you generate here and distribute here. And there needs to be a good conversation between them.”
Meyers proposed creating a bipartisan, non-profit technical entity whose sole mission would be to analyze and identify deepfakes.
“The public can then send them content they suspect has been manipulated,” he said. “It's not foolproof but at least there is some kind of mechanism that people can rely on.”
But ultimately, while technology is part of the solution, a large part of it comes down to consumers, who are still unprepared, Chesterman said.
Soon was also quick to highlight the importance of educating the public.
“We need to continue our outreach and engagement efforts to increase the sense of vigilance and awareness when the public receives information,” she said.
The public needs to be more vigilant: besides fact-checking when something is highly suspicious, she said, users also need to verify the authenticity of important pieces of information, especially before sharing them with others.
“There's something for everyone to do,” Soon said. “It's all hands on deck.”
— CNBC's MacKenzie Sigalos and Ryan Browne contributed to this report.