Meta announced on Wednesday a new policy requiring advertisers to disclose when potentially misleading AI-generated or altered content is featured in political, electoral, or social issue ads on Facebook and Instagram. This decision comes as lawmakers and regulators are preparing to take on the issue ahead of the 2024 presidential election.
The new rule applies to content containing “realistic” images, videos, or audio that falsely show someone doing or saying something they never did, or depict a real event playing out differently than it actually did. Content depicting realistic-looking fake people or events will also need to be disclosed. The policy is expected to go into effect next year.
Nick Clegg, Meta’s president of global affairs, explained the new policy in a Threads post on Wednesday. He stated, “In the New Year, advertisers who run ads about social issues, elections & politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI, to show real people doing or saying things they haven’t done or said.”
Content edited in inconsequential or immaterial ways, such as cropping or color correction, will not need to be disclosed, according to Meta’s blog post published Wednesday.
For ads containing digitally altered content, Meta plans to flag the information to users and log it in its ads database. Earlier this week, Reuters reported that Meta was banning political campaigns and groups from using its new slate of generative AI advertising products. These tools allow advertisers to create multiple versions of ads, including different backgrounds, text, and image and video sizing.
The decision to disclose AI-generated content in political ads is aligned with efforts by lawmakers to address the issue. Earlier this year, Rep. Yvette Clarke (D-NY) and Sen. Amy Klobuchar (D-MN) introduced bills requiring campaigns to disclose when ads include AI-generated content. Additionally, the Federal Election Commission, the regulatory agency in charge of political advertising, is expected to make a decision on a new rule requiring political campaigns to disclose the use of AI-generated content, although it’s unclear exactly when a vote on this rule could take place.
By mandating disclosure of AI-generated or altered content, Meta signals a commitment to greater transparency in digital advertising at a moment when AI tools have raised concerns about misinformation and the manipulation of public opinion. The policy applies only to political, electoral, and social issue ads, but it sets a precedent for how AI-generated content may be regulated across the digital advertising landscape, and it could prompt other platforms and advertising companies to adopt similar disclosure requirements.
Taken together, Meta’s new rule is a significant step toward addressing the risks of AI in political advertising, and an early marker of how platforms may handle synthetic media ahead of the 2024 election.