The rapid growth of generative artificial intelligence (AI), which can create text, images, and video in seconds in response to prompts, has heightened fears that the technology could be used to sway major elections this year, when more than half of the world’s population is set to cast ballots. In response, a group of 20 technology companies has announced that it will work together to prevent deceptive AI content from interfering with elections around the world.
The tech accord, unveiled at the Munich Security Conference, is signed by companies such as OpenAI that are building the generative AI models used to create such content. Other signatories include social media platforms such as Meta Platforms, TikTok, and X, formerly known as Twitter, which will face the challenge of keeping harmful content off their sites.
As part of the agreement, the signatories pledge to collaborate on tools for detecting misleading AI-generated images, audio, and video; to run public awareness campaigns educating voters about this type of content; and to take action against such content on their services.
Technology to identify AI-generated content or certify its origin could include watermarking or embedding metadata, the companies said. The agreement does not specify how each company will carry out its commitments, nor does it set a deadline for meeting them.
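For illustration only, the sketch below shows one way provenance information could in principle be attached to a piece of media: signing a hash of the content so that any later alteration is detectable. The key, function names, and workflow are hypothetical assumptions for this example and are not drawn from the accord, which does not prescribe any particular technique.

```python
import hashlib
import hmac

# Hypothetical signing key that a content-creation tool might hold.
SIGNING_KEY = b"example-secret-key"

def make_provenance_tag(media_bytes: bytes) -> str:
    """Sign the SHA-256 hash of the media, producing a provenance tag."""
    digest = hashlib.sha256(media_bytes).hexdigest().encode()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_provenance_tag(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; False means the content was altered or never signed."""
    expected = make_provenance_tag(media_bytes)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    original = b"...generated image bytes..."
    tag = make_provenance_tag(original)
    print(verify_provenance_tag(original, tag))         # True: content matches its tag
    print(verify_provenance_tag(original + b"x", tag))  # False: content was modified
```

Real-world provenance schemes work with embedded metadata and cryptographic certificates rather than a shared secret, but the basic idea is the same: the tag travels with the content, and any mismatch signals tampering or unknown origin.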
Generative AI is already being used to influence politics and even to discourage people from voting. In January, a robocall using fake audio of US President Joe Biden circulated among New Hampshire voters, urging them to abstain from the state’s presidential primary.
The agreement, struck on Friday at the annual security conference in Munich, comes as more than 50 nations are scheduled to hold elections in 2024. Some, including Bangladesh, Taiwan, Pakistan, and most recently Indonesia, have already gone to the polls.