Meta is taking a bold step against digital manipulation by requiring advertisers to disclose whether the ads they submit have been digitally altered, including through the use of AI tools. The new policy applies specifically to political and social-issue ads, which will be labeled as altered in much the same way some advertisements already carry a “Paid for” disclaimer. The rule takes effect in the new year, just as the campaign period for the highly anticipated and potentially contentious 2024 US presidential election heats up.
In a recent blog post, Meta spelled out the specifics of the new advertising policy. Advertisers must disclose when a social issue, electoral, or political ad contains digitally altered photorealistic images or video, or realistic-sounding audio manipulated to make a real person appear to say or do something they didn’t. Disclosure is also required for ads featuring a realistic-looking person who doesn’t exist, a realistic-looking event that never happened, or altered footage of a real event, as well as for fake images, video, or audio of an event created with AI generators. Simple adjustments such as resizing, cropping, color correction, and sharpening are exempt from the requirement.
Meta anticipates that some advertisers will run afoul of the new rule and has warned that it will reject ads whose submitters fail to disclose, or deliberately conceal, that the material was digitally altered. Repeat offenses may result in penalties, although Meta has not yet fully detailed the disclosure process or the safeguards against abuse; the company has promised to share more details in the future.
The potential impact of AI on the 2024 elections has already raised concern among politicians and supporters on both sides of the aisle. There have been warnings about AI being used to spread election misinformation, and an altered video of President Joe Biden is already circulating on Facebook, edited to falsely depict him inappropriately touching his grandchild. Meta’s Oversight Board is currently reviewing a case concerning that video and is expected to issue a decision in the near future.
By introducing this policy, Meta is taking a proactive stance against the potential misuse of AI in political advertising, signaling a commitment to preserving the integrity of political discourse and elections. As the 2024 campaign season approaches, the new rules could have significant implications for online political advertising and for efforts to stem the spread of misinformation on Meta’s platforms.