Meta's plan to stop AI from ruining elections is about to get its first big test

Meta is working on tools that label AI-generated content and adding features to improve ad transparency.

  • Meta is taking steps against AI abuse ahead of the EU's Parliament election.
  • The company is adding labels to AI content and new ad restrictions to improve transparency. 
  • The company has invested more than $20 billion in safety and security since 2016.

Meta says it's been working hard to prevent a repeat of the 2016 election, when Facebook became a breeding ground for misinformation.

Now, the EU Parliament elections will put the social networking platform to the test.

Meta released a statement on Sunday outlining a new plan to ensure election integrity ahead of the EU Parliament elections taking place June 6 through 9.

"While each election is unique, this work drew on key lessons we have learned from more than 200 elections around the world since 2016," Marco Pancini, head of EU affairs at Meta said in a statement.

Pancini said that Meta is focusing on three key areas: misinformation, influence operations, and generative AI abuse.

Misinformation refers to the spread of false information, while influence operations refer to coordinated deceptive campaigns. The company has so far partnered with 26 fact-checking organizations and releases a quarterly report on threat findings, according to the statement.

Most recently, the company has been fine-tuning its approach to generative AI. Meta joined the Partnership on AI, a platform designed to promote guidelines and best practices. It also signed the Tech Accord, which aims to prevent deceptive AI content on major platforms during the 2024 elections.

While all posts are subject to the same policy guidelines, Meta is taking extra steps to monitor AI content, according to the statement. Meta has partnered with independent fact-checking partners to review AI content. If content is identified as fake, manipulated, or transformed, it will be ranked lower in users' feeds, according to the statement.

As generative AI advances, it offers a quick and effective way to produce material for political campaigns. But it also may lead to more disinformation.

Deepfakes impersonating political figures such as President Joe Biden and UK Prime Minister Rishi Sunak have already circulated, showing how easily anyone with access to AI tools can spread fake news.

Meanwhile, an OpenAI developer built a "propaganda machine" just to demonstrate how cheap and easy it was to create AI-powered propaganda. The project took two months to build and cost less than $400 a month to operate.

The company is also working on tools to label AI-generated content created with other companies' tools. It already automatically labels images created with Meta AI.

Additionally, Meta will add a feature requiring users to disclose when content contains AI-generated video or audio. Meta will apply a more prominent label to particularly high-risk material, and users who fail to label their own content may face consequences.

Like users, advertisers will have to disclose whether they used AI to create ad content.

The ads also have to display a "paid for by" disclaimer to show users who is behind each ad. Between July and December of 2023, Meta removed 430,000 ads in the EU for failing to include a disclaimer, according to Meta's statement.

The company will also maintain an Ad Library that shows what ads are running, whom they are targeting, and how much was spent on them. Advertisers on Meta will have to go through a verification process to prove they are who they say they are and that they are located in the EU.

Read the original article on Business Insider
