In mid-December, election officers from across Arizona trooped into a bland hotel ballroom for a training session and were greeted by the most routine of messages: brief remarks and a PowerPoint presentation from Arizona Secretary of State Adrian Fontes.

But the election workers had been duped. Fontes’ remarks had been generated by AI, and it took serious prompting from the organizers before the audience realized that the video they had just seen was fake.

Across the United States, election officials are grappling with how to prepare for the ways AI might be used to subvert this fall’s elections, and they are increasingly turning to exercises, trainings and demonstrations like the one in that Phoenix ballroom to showcase the power of these technologies and educate election workers about what AI makes possible.

Trainings like Arizona’s, however, represent the leading edge of preparations being made by election officials. In the absence of well-formulated plans, policymakers say awareness and education are the best tools available to prepare officials to respond quickly to AI-generated content.

“All our lives we’re trained that what you see and what you hear is real,” Jerry Keely, an election security advisor at the Cybersecurity and Infrastructure Security Agency, said during an appearance at an Arizona cybersecurity conference last month. “Well now that’s not true, because I’ve seen and heard lots of unreal things.”

Ground zero for election denialism — and AI training

The need to prepare election workers for AI-generated disinformation is especially pressing in Arizona, which was ground zero for former President Donald Trump and his allies’ attempt in 2020 to overturn the results of the election.

In 2024, election officials fear that Arizona may once again become the site of election conspiracies, disinformation and scurrilous plots — only this time amplified by AI.

Skepticism about the integrity of elections has proved remarkably durable in Arizona, where in the aftermath of the 2022 midterm elections a mere 36% of Republicans said they believed that election had been fairly administered. Meanwhile, scores of Arizona election officials have quit or retired in the face of what they describe as a steady stream of threats, harassment and hostility from voters still convinced they conspired to rig the results of state elections.

For officials in Maricopa County — Arizona’s most populous county and the epicenter of its election conspiracies — disinformation represents a personal threat. “We have unfortunately experienced first-hand that what’s posted on social media is not only a potential indicator of threat from a cybersecurity perspective, but it’s also a potential kinetic threat,” Lester Godsey, the county’s chief information security officer, said last month.

Many election officials and experts told CyberScoop that unlike most traditional cybersecurity threats, there is little that officials can do to mitigate or harden their defenses before a jurisdiction is targeted by AI-generated content. Planning how to respond quickly and coordinating with other election officials, the media and law enforcement are often cited as the primary means of defense.

At the session held in December, the state brought together local election officials, federal agencies like CISA, tech companies including OpenAI and Microsoft, and third-party organizations like the Brennan Center for Justice to conduct simulations of how these technologies may be used by malicious actors to target an election.

Michael Moore, the chief information security officer for the Arizona Secretary of State’s office, told CyberScoop that the exercises involved a number of homemade simulated deepfakes — a catch-all term for AI-generated content designed to impersonate someone. The event “showcased the full gamut of voice cloning, image creation as well as video,” Moore said.

In addition to the fake video of Fontes, the Arizona exercise included videos of a local official speaking fluent German despite never having learned the language, and another of an official from a state election nonprofit throwing ballots in the garbage. A cloned audio phone call featured an official from the secretary of state’s office asking election officials for passwords to their voter registration databases. An image created with the Midjourney image generation tool depicted Moore planting a bomb at the office of a print vendor the state uses to create ballots.

Larry Norden, who directs the Brennan Center’s work on elections, said the trainings aimed to demystify the technology for election officials and better equip them to explain it to voters.

“My sense is it moved this from the realm of science fiction or words on a piece of paper that people didn’t fully grasp to ‘this is a problem of here and now,’” Norden said of the exercise.

But experts like Norden are at an early stage in educating Arizona’s officials. He said he hoped officials would take home the importance of “basic things” that have always been foundational to security: securing communications channels, directing voters to official sources, verifying social media accounts, reporting imitation accounts and using multifactor authentication to guard against phishing emails and voice cloning that may seek to exploit trusted relationships to gain access to election systems.

Demystifying the threat

Election officials on the front lines of preparing for how AI might influence the 2024 election emphasize that they aren’t trying to reinvent the wheel.

Minnesota Secretary of State Steve Simon told CyberScoop that many of the best defenses that state and local officials have against AI-generated content rely on the same tools they’d use to combat any number of cybersecurity-related threats: communication, coordination, and establishing trusted sources of information ahead of time.

In January, the Minnesota Secretary of State’s office convened with representatives from 50 counties and federal agencies for a half-day election security training. That event included video vignettes and coordinated discussions around the threat of deepfakes and AI, which were folded into broader exercises and dialogue about disinformation, cybersecurity and physical threats.

Officials in Minnesota are also developing some new tools. Last year, Minnesota passed a law banning the use of AI-generated content that impersonates a person without their permission to influence an election within 90 days of voters going to the polls. States like Michigan have passed similar laws this year, and Simon said the Minnesota statute would give his office the ability to seek court-ordered takedowns against individuals and platforms that disseminate deepfakes online.

Deepfakes, Simon said, are merely the latest twist in election officials’ ongoing battle against disinformation. “Don’t think of it as some brand-new thing you have to learn about or be an expert in. It’s just a new method of expanding the challenge,” he said.

But states like Arizona and Minnesota appear to be outliers in their preparations for AI-generated disinformation targeting elections. Last month, CNN reached out to election officials in all 50 states and asked about their plans to combat deepfakes this election cycle. Of the 33 that responded, fewer than half cited specific trainings, policies or programs to deal with the threat.

That means many election officials are going to be left in the lurch come Election Day. David Becker, the executive director and founder of the Center for Election Innovation and Research, said that election officials “used to be technocrats, involved in a transparent process to facilitate American voters” but have now had to take on a range of new responsibilities.

“Over the last several years they’ve also had to become security experts, cybersecurity, physical security [and] communications experts,” he said. “They’ve had to navigate stress for themselves and staff in ways they haven’t had to do before.”

The emergence of generative AI adds another wrinkle to those responsibilities, putting many officials and candidates in the unenviable position of having to validate or debunk visual and audio messages that many voters trust implicitly.

At the same time, it’s important not to overstate the threat posed by AI. A study published earlier this year by researchers at the Carnegie Endowment for International Peace examining interventions to counter disinformation noted that in the few documented incidents where deepfakes were used to influence politics, they were mostly ineffective.

“Studies suggest that people’s willingness to believe false (or true) information is often not primarily driven by the content’s level of realism,” write authors Jon Bateman and Dean Jackson. “Rather, other factors such as repetition, narrative appeal, perceived authority, group identification, and the viewer’s state of mind can matter more.”

Such findings are cold comfort to frontline election officials, who say they have learned to expect the unexpected.

Bill Ekblad is an election security official in Minnesota and a veteran of the 2016 election, when Russia’s attempts to catapult Trump into the White House caught election administrators off guard. Four years later, Ekblad points out, the foreign influence operations of the previous election cycle did not materialize and were replaced by a deluge of physical threats to election workers.

Those experiences taught him that it’s difficult to predict the primary security challenge of an election, so you’re better off being ready for anything and learning how to react quickly. “We used to say in the Navy, ‘be brilliant at the basics,’ and in our case that’s communicating, collaborating, sharing situational awareness,” Ekblad said.

That echoes the advice given by Kathy Sullivan, the New Hampshire Democratic operative whose phone number was spoofed in a January robocall that cloned President Joe Biden’s voice to discourage Democrats from voting in that month’s presidential primary. Sullivan told CyberScoop that she believes the decision to respond rapidly to the calls by notifying state officials, law enforcement and the media played a significant role in blunting their effect on voting.

Other election officials struggle with drawing positive lessons from events in New Hampshire. Arizona’s Moore cited audio cloning as the type of synthetic media he feared most. Like many election officials, he said he was unnerved by the New Hampshire robocall — not because it was executed well, but because it wasn’t.

Moore compares the New Hampshire robocall to a criminal using a laser rifle to rob a convenience store. “The actual implementation was not well done,” he said. “But it was basically a perfect clone of President Biden’s voice. So what happens when people use these tools more intelligently?”
