GeoLegal Weekly #32 - Deepfakes and the Liar’s Dividend
In a post-truth world, AI deepfakery has the potential not just to warp elections but to warp everyone’s sense of reality. In legal systems designed to get at “truth,” this is particularly dangerous.
You’ve seen Elon Musk dancing in sync with Donald Trump. You’ve seen Swifties supporting him. You’ve seen an image of Vice President Kamala Harris generated to look like a Communist firebrand. And you’ve questioned whether a photo of crowds gathered at Air Force 2 is real. If not, then you haven’t been following Donald Trump on social media this week.
As 50 Cent once said, “I hate a liar more than I hate a thief / A thief is only after my salary, a liar’s after my reality.” It’s hard to sum up the current political climate better than the back half of that line: a host of actors are battling to warp everyone’s view of what is real and fake in the world, with high stakes. And it’s not just Trump who’s trying to warp your reality. The barrage of election misinformation and deepfakes flying around is truly stunning - and it is coming from domestic and foreign actors alike.
What we need to be most careful about, however, is what law professors Bobby Chesney and Danielle Citron call the “liar’s dividend.” The risk is not just that we are fooled by deepfakes, but that our awareness of their existence makes it easier for liars to convince us that genuine evidence before our eyes is itself fake. Mia Tellmann and I go through the impact of deepfakes in elections and courtrooms in our piece this week.
Before diving into our essay, however, I wanted to understand the topic better, so I spoke to my good friend Dave Salvo of the German Marshall Fund, an expert on misinformation. Our interview below goes deep on how foreign propaganda finds its way into our day-to-day news feeds, for instance via plausible-sounding US local news websites (e.g., The San Diego Reporter) that are actually foreign fronts for pushing fake news.
Click above or here for our video interview
Election Manipulation
Over the last few decades, the threat and frequency of political interference and foreign state manipulation, especially of national elections, have grown immensely as technology has advanced. Jason Cade of BAE Systems breaks election interference into three distinct categories: efforts to (1) influence, (2) disrupt, and (3) undermine democratic institutions, political processes, and national unity.
Under the first pillar, “influence”, the goal is to change public perception of candidates in order to manipulate the turnout of would-be voters. Campaigns aimed at the second pillar, “disruption”, often sabotage voter registration and ballot-casting processes, as well as the infrastructure responsible for running the election itself. The third pillar, “undermine”, centers on eroding trust in the electoral process - for instance through deepfakes - leading voters to question its legitimacy and integrity. This is particularly pernicious as it can create a sense that “democracy is failing” and, thus, that the system itself is the problem.
There have been countless recent examples of political interference in which misinformation has been spread by local actors and international players alike, including Russian interference in US and EU elections, Iranian interference in US elections, Chinese influence on Taiwanese and US elections, and North Korean influence on South Korean elections. These countries are increasingly turning to tools like ChatGPT to run campaigns nicknamed “Doppelganger” or “Spamouflage”, using them to conjure up social media posts, debug their manipulation software, or write and translate text to spread misinformation at an alarming rate. Some quick thoughts on each actor before turning to business and the courts.
Iran
While Russian interference was long considered to be the most potent security threat for US election cycles, especially since Russia’s interference in the 2016 presidential campaign, the US government now sees Iranian political interference as a similarly powerful national security threat.
In August 2024, news broke of a suspected Iranian hack of the Trump campaign and an attempted hack of the Biden-Harris campaign. Iran’s interference was a complicated, lengthy effort, with Tehran laying the groundwork as early as 2020, when Iranian hackers sent voters threatening emails aimed at keeping them from the polls. In 2024, the attack had two main components: (1) attempting to influence voters, especially in swing states, by spreading misinformation about trending topics and (2) trying to gather intelligence about the campaigns themselves.
To achieve the first, the Iranian hacking group Storm-2035 has been operating a number of websites posing as legitimate US news sites, which write about topics like LGBTQ+ rights and the Israel-Hamas war. To achieve the second, Iran sent spear-phishing emails to campaign staff, which resulted in the Trump campaign being hacked, while the Biden-Harris campaign came away unscathed for the time being. A source called “Robert” then leaked internal Trump documents to the press, including a vetting file on Trump’s vice presidential pick, JD Vance.
Russia
US intelligence agencies released a report in October 2023 highlighting Russia’s unique ability and intent to carry out “operations to degrade public confidence in election integrity,” especially through misinformation campaigns in the EU and US.
In Europe in 2024, there have been instances of Russian operatives offering to pay members of the European Parliament in exchange for promoting Russian propaganda, as well as Russian disinformation campaigns on Meta spreading pro-Russian and anti-Ukrainian propaganda across Europe, including in Italy, Poland, Slovakia, and Moldova, especially ahead of EU election dates.
There is also evidence that Russia has begun to target the 2024 US presidential election through mis- and disinformation campaigns, often focused on the Russia-Ukraine war. In July 2024, the Office of the Director of National Intelligence warned that Russian interference was targeting voters in swing states by spreading divisive rhetoric and false details about certain politicians, and that Russia is “undertaking a whole-of-government approach to influence the election, including the presidential race, Congress and public opinion”. Details have recently emerged about a Russian-run disinformation group, CopyCop, which “alters articles from mainstream and conservative-leaning U.S. and U.K. media as well as from Russian state-affiliated outlets and spreads them to U.S. election-themed websites, all within 24 hours of the original articles being posted”. Furthermore, conspiracy theories surrounding the assassination attempt on former President Trump were promoted on fake US news sites run by pro-Russian sources, which published an AI-generated audio deepfake purporting to capture former President Obama and an aide discussing the attempt, suggesting that the Democratic Party had orchestrated the attack.
China
China has also recently come under the spotlight for its interference in the Taiwanese elections, using deepfakes and fake online accounts to promote conspiracy theories, including “over egg shortages, Taiwan’s submarine production, political and sex scandals, and Taiwan’s readiness for war, fuelling fears over conscription and young people being forced to fight, as well as casting doubt over the US’s support”. The country has also set its sights on influencing the US presidential election, with Chinese-run social media accounts posing as Republicans and Trump supporters while pushing false narratives. TikTok has also been under investigation for allegedly being used by the CCP to influence US elections by disseminating pro-Chinese content that capitalizes on partisan divisions - all of which can be considered a threat to US democracy.
It’s important to highlight that foreign state manipulation can also be conducted with the intention of sowing division and capitalizing on the ensuing chaos, as seen in the potential Russian involvement in the Southport riots in the UK. While it is not entirely clear whether Russia was behind the disinformation spread online to spur the Southport protests, some experts believe Russian social media accounts on TikTok and Telegram spread false information and encouraged individuals to demonstrate.
Self-Sourcing and Western Propaganda
Deepfakes have deeply troubling implications for electoral legitimacy around the world, and much of the content is self-sourced, meaning it’s not coming from international meddling. The US has had a number of deepfake crises, including a Biden robocall that encouraged voters not to vote in the New Hampshire primaries, Trump supporters creating and sharing AI-generated images that showed his warm embrace of the African American community, and an AI-generated audio clip of Kamala Harris’ voice shared online by Elon Musk. Most recently, Trump’s claim that Harris used AI-generated rally crowd images has cast the deepfake conversation back into the national spotlight. Across the Atlantic, UK Prime Minister Rishi Sunak and London Mayor Sadiq Khan were the targets of hundreds of deepfake ads promoted online, which threatened to tarnish their reputations and generate mass outrage over false claims.
However, deepfakes - or claims that real content is AI-generated - can also be spread by politicians themselves, who use them to sow distrust in the electoral process and spread misinformation to win elections. This was evident in recent elections in Slovakia and Argentina. In Argentina, political candidates used AI during election season to craft positive narratives for themselves and to spread fake audio recordings of their opponents. In Slovakia, pro-Russian propaganda was spread before parliamentary elections, with some experts claiming that “Slovak politicians are the main purveyors of disinformation”, including the Smer-SD party, which went on to win the election. In this case, it was evident that Russia’s misinformation campaigns required cooperation from local actors; many pro-Russian Slovak media outlets promoted Russian narratives, arguing that the opposition party “would reinstate military conscription and send Slovak soldiers to Ukraine to fight for Kyiv … [and] was a U.S. and NATO puppet who put the interests of the EU and the Atlantic alliance above those of ordinary Slovaks”.
I was also fascinated to read a Reuters deep dive on how the US Department of Defense ran a propaganda campaign in the Philippines during COVID designed to limit uptake of the Chinese COVID vaccine. And, of course, Israel uses propaganda too, as when it targeted US lawmakers to influence them to fund its military.
The Liar’s Dividend - In Business and in the Courtroom
The current combination of deepfakes, foreign state manipulation, and election interference being used to undermine political processes has had immense implications for the legitimacy of democracy and its institutions. When politicians intentionally promote AI-generated images and audio recordings and trade accusations of promoting fake ones, it creates the sense that anything could be fake - even things that are real. The use of AI in elections thus “destabilizes the concept of truth itself”: with no baseline of truth, politicians and their supporters can pick whichever alternate reality they prefer and align themselves with it. When citizens are unsure which media and which politicians to trust, traditional institutions lose their credibility.
Of course, there are corporate and legal implications from all this fakery. Deepfakes can be highly damaging to company reputations. Earlier this year, WPP staff were targeted with a deepfake combining video and voice of their CEO, asking them to set up a new business for fraudulent purposes. Half a dozen other top UK executives have been impersonated in recent months as well.
Corporations also bear downside risk from the way deepfakes manifest in the political process. The increasing likelihood and frequency of foreign state manipulation of elections and other social movements makes elections less routine and more volatile. It becomes ever harder to predict electoral outcomes, meaning that companies need to prepare wildly different scenarios for one administration or another; planning and scenario-building become key.
Additionally, national laws are struggling to keep up with the fast pace of technological change, leaving gaps and loopholes in regulation that companies have to fill to the best of their abilities. As national laws begin to catch up and regulate new technology, companies will need to track and comply with the new rules. And when the government occasionally decides to take drastic action against media companies (e.g., TikTok) or companies that routinely use AI, it will complicate those companies’ legal obligations and abilities.
In recent years, the availability of deepfake tools and the ease with which deepfakes can be created have made the courtroom jobs of prosecutors, defense attorneys, and judges all the more challenging, because both (1) the likelihood of including “falsified evidence … and causing an unjust result” and (2) the likelihood that “the opposing party will challenge the integrity of evidence, even when they have a questionable basis for doing so” increase. Attorneys have started to lean into the “deepfake defense”, arguing that genuine video or audio evidence is unreliable because there is no guarantee it is unaltered.
This practice has become known as the “liar’s dividend”, since “a skeptical public will be primed to doubt the authenticity of real audio and video evidence”, as law professors Chesney and Citron have argued.
These strategies were recently used by lawyers in a case involving January 6th protestors and in a case involving Elon Musk’s statements about Tesla’s self-driving cars. The lawyers of two defendants on trial for participating in the January 6th riots questioned the validity of videos showing the men inside the Capitol and at the protests, arguing that the jury should therefore acquit them. In Musk’s case, his lawyers made a similar argument, suggesting that a years-old video of Musk making claims about Tesla’s self-driving capabilities was inaccurate and could have been a deepfake. In both cases, the arguments failed, with judges vehemently rejecting the attorneys’ claims. The judge in Musk’s case wrote that the arguments were “deeply troubling” and that the Court was “unwilling to set such a precedent by condoning Tesla's approach here." These examples show that “categorical denunciation of evidence because deepfakes exist is not a wise strategy” and that deepfake defenses “should be brought only when there is something the defense can point to that would suggest that the evidence is fake”.
To introduce audio or visual evidence, courts currently use a “fair and accurate portrayal” standard, which “sets an extremely low bar”, since “the witness need only testify that the depiction is a fair and accurate portrayal of her knowledge of the scene”. Nonetheless, many experts see a need for further regulation of permitted evidence, especially given the possibility of AI-generated or AI-manipulated evidence. Because the law hasn’t caught up yet, lawyers are testing the bounds of the legal system; the law will likely adjust to this new reality, although it may take some time. Experts believe that going forward “it's more likely that courts will confront accusations of fakery against real evidence than attempts to introduce fake evidence”, and that in these cases ethics and norms will be especially important to maintaining legal legitimacy. Amendments to the current framework will likely be needed, whether by placing a higher burden on those who introduce video evidence or by ensuring that an authenticating witness has reviewed the evidence.
Nonetheless, an increasingly AI-rife evidence base will have implications for juries. Juries may grow more skeptical of audio and visual evidence, meaning that plaintiffs’ lawyers will need to work harder to prove that evidence is real - raising costs and creating a financial burden for some individuals seeking to take legal action. You could argue that a jury’s increased skepticism will tend to benefit the defendant, but of course a jury could also be presented with completely fake evidence that it believes. At a time when public trust in US courts is already very low, deepfakes in the courtroom further erode that trust by blurring the line between what is real and what is fake. Controversy over deepfake evidence eats away at the shared base of common facts from which the justice system should be able to secure, if not advance, truth and justice. If juries can no longer trust the evidence presented in the courtroom, citizens will not trust the court’s final decision in a case, resulting in a loss of trust in the judicial system and in democratic institutions as a whole.
That’s it for this week!
-SW & MT