As artificial intelligence grows more capable, the risk that digital misinformation gains far-reaching circulation rises, especially in politics. With federal elections approaching in Canada, Meta Platforms is taking preventive measures against AI manipulation in political advertising. The company aims to restore trust in an era when “seeing is no longer believing” by requiring advertisers to declare when they use AI or other digital tools to modify campaign content.
Meta announced on Thursday that it will require advertisers to disclose whether AI or other digital methods were used to create political ads or ads about social issues. The move aims to foster transparency by ensuring voters know of any digital modifications to campaign messaging.
New Mandate Requirements
Meta’s new mandate applies to advertisers who use photorealistic images, videos, or audio that have been digitally created or altered. It covers ads that depict real people doing or saying things they never did, ads that feature fabricated people or events, and edited footage of real events or other content that could misrepresent what happened. Through these disclosures, Meta aims to limit the circulation of manipulated content that could mislead voters or unduly influence public discourse.
Meta’s Political Content Policies
This initiative follows Meta’s decision to extend its ban on new political ads in November 2023, after the U.S. elections. One factor behind that decision was growing concern about the spread of misinformation that has plagued previous election cycles. Over the past year, Meta has also barred political campaigns and advertisers in other regulated industries from accessing its new advertising tools powered by generative AI.
Nonetheless, Meta has drawn criticism for reversing some of its fact-checking initiatives. Earlier this year, the company shut down its entire fact-checking program in the U.S. amid debates over controversial topics such as immigration and gender identity. The move was widely seen as a response to pressure from conservative groups seeking less intervention in the moderation of political content.
Challenges and Implications
Meta maintains that generative AI has not yet significantly influenced its platforms; as of December 2024, AI-generated content was seen as having had little adoption on Facebook and Instagram. Still, serious concerns remain about AI’s potential impact on political narratives as generative tools become more sophisticated and more accessible. To improve transparency, Meta has built a tool that lets users voluntarily disclose when they are sharing AI-generated content so the platform can apply labels to it. Whether users will actually participate remains to be seen.
Ahead of Canada’s federal elections, Meta’s new disclosure policy represents a proactive step toward defending democratic processes. Its effectiveness, however, will depend on advertisers’ compliance and on Meta’s ability to enforce the rules uniformly. While the latest policy is a laudable step, legitimate questions remain about the company’s broader accountability in the fight against misinformation. As political campaigns increasingly incorporate AI into their strategies, voters and regulators alike will need to stay vigilant in judging the credibility of digital campaign materials.