From November, political campaigns must clearly state when AI-generated images and audio have been used.
Google will soon require political ads to disclose when AI-generated images, videos and audio have been used. Political ads using artificial intelligence must be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered.
Google spells out the consequences of non-compliance in its political content policy: “Non-compliance with our political content policies may result in information about your account and political ads being disclosed publicly or to relevant government agencies and regulators.”
Why we care. Tackling fake news and enhancing online safety could boost people’s trust in the internet, which could ultimately give them more confidence to shop online. One possible drawback: the kinds of accounts likely to use deepfakes and similar tactics to distort the truth may find Google’s warning somewhat toothless.
How will it work? Political ads must feature labels to act as red flags when AI content has been used, such as:
- “This image does not depict real events.”
- “This video content was synthetically generated.”
- “This audio was computer generated.”
Campaigns that use AI for “inconsequential” tweaks, such as minor photo edits like red-eye removal, will not need to include a disclaimer.
Why now? The new rules come into force one year ahead of the next US presidential election. A Google spokesperson told the BBC that the move was in response to “the growing prevalence of tools that produce synthetic content”.
The news also comes one week after X (the platform formerly known as Twitter) announced that it is bringing back political ads ahead of the 2024 US election.