Facebook, as one of the leading social media platforms, is no stranger to the challenges of moderating and mitigating misinformation. Over the years, Facebook has employed machine learning and artificial intelligence systems to supplement its human-led moderation efforts. The company’s use of AI, however, now extends beyond moderation into advertising.
In October, Facebook introduced an experimental set of generative AI tools aimed at enhancing its advertising efforts. These tools are designed to perform various tasks, such as generating backgrounds, adjusting images, and creating captions for video content. The goal is to provide advertisers with more efficient and effective ways to create compelling ad content. The use of AI in advertising can automate time-consuming tasks and streamline the creative process.
According to a report by Reuters, Facebook’s parent company, Meta, has decided not to make these generative AI tools available to political marketers. The move is a preemptive measure ahead of what is expected to be a brutal and divisive national election cycle. By barring political marketers from the generative AI tools, Meta aims to head off controversy and the potential misuse of AI technology in political campaigns.
It is worth noting that Meta’s decision aligns with the practices of other social media platforms. TikTok and Snap, for example, have already banned political ads on their networks, and Google applies a “keyword blacklist” to keep its generative AI advertising tools away from political speech. X, formerly known as Twitter, has likewise faced political controversies over its AI moderation systems.
However, Meta has yet to publicly disclose the decision in any update to its advertising standards. As Reuters points out, although the decision follows the trend set by other social media companies, Meta should state its position more clearly and transparently to advertisers and users alike.
Despite the prohibition on political use, Meta’s policy allows for certain exceptions. Its ban on manipulated video content targets misleading material, but the company grants exceptions for parody and satire. These exceptions are currently under review by Meta’s independent Oversight Board in a case involving an “altered” video of President Biden. Meta’s argument in that case was that the video was not generated by AI, highlighting the need to differentiate between AI-generated and human-edited content.
Furthermore, Meta, along with other prominent AI companies in Silicon Valley, made voluntary commitments in July. These commitments were in response to the White House’s request for safeguards to be implemented in the development of future generative AI systems. The commitments include efforts to expand adversarial machine learning to identify and address potential issues with AI models, sharing trust and safety information within the industry and with the government, and the development of a digital watermarking scheme to authenticate official content and distinguish it from AI-generated content.
In conclusion, Facebook’s parent company, Meta, has extended its machine learning expertise to advertising by introducing generative AI tools, but has decided to withhold those tools from political marketers to avoid potential controversy and the misuse of AI technology during election cycles. While the decision aligns with industry practice, Meta should communicate its policies more transparently to advertisers and users. Its voluntary safeguard commitments, meanwhile, signal an intent to pursue responsible AI development and foster trust among its users.