Democratic tech leaders like Matt Hodges, CEO of Zinc Labs, told me that training campaigns on these tools now could prevent more headaches in the future.
“We don’t want to start this process six months from now. Starting today is how we stay ahead of this curve,” says Hodges, who was the engineering director for the Biden 2020 campaign. Zinc Labs also provides AI training for campaigns.
Earlier this year, major tech companies including Amazon, Google, Meta, and Microsoft signed an agreement pledging to apply “reasonable precautions” to prevent their generative AI tools from contributing to electoral disasters around the world. The agreement requires the companies to detect and label deceptive content created with artificial intelligence.
Microsoft and Google have built labeling and watermarking software into their campaign workshops as well. Microsoft says it gives campaigns a crash course on its Content Credentials watermarking technology and shows them how to apply it to their own materials to verify their authenticity. Google, likewise, walks campaigns through its own tool, SynthID, which labels images created with its AI tools.
These kinds of content authentication systems are how Big Tech companies believe they can mitigate the risk of deepfakes, cheapfakes, and other AI-manipulated content disrupting the US elections.
But despite the tech accord and other voluntary measures, none of these authentication methods are foolproof, as WIRED’s Kate Knibbs has previously reported.
It’s a little more complicated for Microsoft and Google than simply promoting content authentication. Their AI-powered chatbots, Copilot and Gemini, haven’t proven capable of answering simple questions about election history either. When asked who won the 2020 presidential election, both chatbots declined to give an answer, my colleague David Gilbert reported last week. These are the same models that would provide political guidance to campaigns. They are also the models powering AI bots that answer voters’ questions or even run as candidates themselves.
Months out from Election Day, Big Tech is supplying campaigns with both the poison and the antidote of generative AI. Even if its authentication software could identify AI-generated content 100 percent of the time, the government would likely need to step in to standardize the technology across the board.
So, for now, and perhaps the rest of the year, it will be up to the AI industry not to make any catastrophic mistakes when it comes to creating or detecting malicious content.
Chat Room
After reading Annie Jacobsen’s wonderful book Nuclear War: A Scenario, I became a bit obsessed with reading about the end of the world. Just girly things! ★~(◠‿◕✿)
So, this week I want you to flood my inbox with your worst fears when it comes to AI and all the elections happening this year. I’m looking for something scary but also realistic.
I want to hear from you! Leave a comment on the site, or send me an email at mail@wired.com.
WIRED Reads
What Else We’re Reading
🔗 How Americans navigate politics on TikTok, X, Facebook, and Instagram: Despite the change in its leadership, X, formerly Twitter, remains the top platform for users seeking political news. The survey also found that Republicans are happier with the platform under Elon Musk’s control. (Pew Research)
🔗 Surgeon General: Why I’m calling for a warning label on social media platforms: In an op-ed for The New York Times, US Surgeon General Vivek Murthy explains why he believes the government should attach warning labels to social media platforms. Murthy’s call comes ahead of a decision in Murthy v. Missouri, which is expected this summer. (The New York Times)
🔗 FACT FOCUS: Biden’s pause as he leaves a Los Angeles fundraiser becomes a target for opponents: The Biden campaign is facing its first major scandal of the election cycle. Clips from a series of high-profile events, like the recent G7 summit, have gone viral on platforms like X after being deceptively edited to exaggerate the effects of Biden’s age. (AP)
The Download
On this week’s episode of the WIRED Politics Lab podcast, host Leah Feiger talks with my colleague David Gilbert about his latest reporting on a national militia group being organized by a jailed January 6 rioter. You can find it wherever you listen to podcasts.
See you next week! You can reach me by email, Instagram, X, and Signal at makenakelly.32.