GeoLegal Weekly #23: Four AI Wars
I unpack how AI is sparking wars over inputs, regulation, autonomous weapons, and jobs. You can also get exclusive first access to our new GeoLegal software by signing up below.
In a world on fire, every day is a fire drill. GeoLegal Notes readers can get priority access to Hence’s GeoLegal software product so you can stay on top of geopolitical events that will turn into legal risks. Join the waitlist here or read more at the bottom.
Four AI Wars
We’ve had technological innovation before, and we often argue that a rising tide lifts all boats. It would be more precise to say that a rising tide could lift all boats, with productivity gains generating enough surplus income to compensate the losers. Of course, looking back at recent waves of technological progress, we have primarily seen the automation of lower-level jobs, with the losers compensated only by job retraining and somewhat cheaper goods on the other side. It’s no surprise that after decades of digitalization and globalization, as well as financial crises and pandemics, we’re seeing populist movements led by those who were left behind.
But what happens when technology gains threaten the more highly educated? What happens when the power of the technology being created is so threatening to national security that companies - which usually can capitalize unreservedly on such productivity gains - are put under restrictive constraints? What happens when the technology in play promises (or threatens) to give its users - states, companies, individuals - a decisive advantage if harnessed first?
Well, we are about to find out. The power of AI - and its many potential applications, both enhancing and destructive - is leading to a much more zero-sum approach to how it is harnessed and regulated. Below, I’ll go through the conflicts AI is sparking, along with their possible geolegal implications, covering four AI Wars:
War with AI
War for Inputs
War of Regulation
War on Jobs
I went in depth on these points on my panel at LegalTechTalk last week and sparked some serious debate with Sally Sfeir-Tait, Mark O’Conor, and Marco Araujo. Most everyone was more optimistic than me! Hopefully we can continue that debate in the comments below.
War with AI
A central AI war is, well, war itself. One of the biggest fears about AI is that it will become entangled with warfare in a way that leads to autonomous weapons with a mind of their own. Recent research suggesting that LLMs are warmongers scares the life out of me: Five LLMs applied to wargames showed a predilection for escalation, even to the point of nuclear war.
AI in warfare is not a future scenario; it is happening right now. Ukraine, for instance, uses AI to identify targets that will have the biggest psychological impact and to connect disparate pools of data for counterintelligence purposes. Israel is reportedly using algorithms that assess whether individuals are militants and whether buildings may be potential targets. US military spending on AI has tripled in the past year, on programs with aims similar to those in Ukraine or Israel: systems that track targets and help attack them. China, too, is using AI for similar purposes, though public reporting tends to emphasize its use in trawling information. I think we’ll increasingly see AI used for wargaming and high-level knowledge tasks in and around war, if not yet for the direct initiation of strikes.
The critical point to note today is that all governments claim there are human decision-makers in the loop - that while systems may surface threats, it’s ultimately humans who decide whether or not to strike. The problem is that this is an unstable equilibrium. Humans are slow and often unavailable, even in wartime. Think about algorithmic trading, for instance: allowing computers to “do their thing” can create untold advantages simply through faster reaction and processing times (granted, war and geopolitics are far more complicated systems than stock prices). Still, do we really believe governments will keep this off limits in the long run?
I don’t. Countries will increasingly see the benefits of training AI to react faster than humans ever could. In the Cold War, for instance, mutually assured destruction kept countries from attacking each other because they knew they would be obliterated in return. But with AI, it is possible to reach a point of decisive advantage - to determine that there is a single unmissable moment when a first strike could be decisive, with a window so small that only a machine could act on the opportunity. And if you think your enemy might do this, then you might do it too.
The only way to constrain such behavior is to tie hands against it, but the international community can’t even agree on toothless declarations, let alone verifiable mechanisms. There are a host of efforts to build international consensus around limiting countries’ ability to use AI to actual lethal effect. Of course, I don’t have to remind readers how pessimistic I am about new international legal constraints in the current geopolitical environment (see, for instance, my 2024 GeoLegal Outlook). The UN General Assembly approved its first resolution on lethal autonomous weapons systems in late 2023, with notable objections from India and Russia and abstentions from China, Iran, Israel, and Saudi Arabia. The US proposed its own declaration on Responsible Military Use of Artificial Intelligence and Autonomy at a summit in The Hague last year, but many of the same countries would not even sign on to a non-binding call to action. If we can’t agree there’s a problem, we’re unlikely to find a solution.
War for Inputs
Probably the best-covered AI conflict is the ongoing “chip war” between the US and China. Without rehashing the full argument of Chris Miller’s excellent book Chip War: The Fight for the World's Most Critical Technology, the US and China are engaged in a process of decoupling their chip interdependence. The US is effectively blocking the export of goods or know-how that would more easily enable China to reach the cutting edge of AI, in effect drafting US companies (and their foreign partners) into its AI cold war (see #18 Your Company Has Been Drafted for War!). And China is attempting to build up its own self-sufficiency. Sitting uneasily in between are Taiwan, whose chip-making powerhouse TSMC is philosophically coupled to the US but geographically wedded to China, and European companies like ASML, the Dutch semiconductor equipment manufacturer, which has been restricted from providing services to China that it otherwise would have provided.
In short, the semiconductor supply chain is being split in two, which is going to be maddening for companies operating in it but probably not going to make a huge difference from a geopolitical point of view. The technology is moving so quickly that keeping your enemy one or two steps behind will not provide decisive victory. And efforts to use commercial and state-based espionage to level the playing field will be much easier when the targets are some of the most widely used commercial products in the world (chips) rather than weapons systems.
As this all plays out, the threat of China approaching the US in its AI capabilities has led to a dramatic increase in US regulatory surveillance of global hardware firms, taking in manufacturers of chips, lithographic equipment, optical devices, and so on. Such restrictions are being applied not just to China but also to entities that might show an inclination to invest there. The regulatory leash could be extended to Chinese use of cloud computing services in the US, on the theory that these provide access to chips China is otherwise prohibited from using. This ever-increasing scope suggests a nested matryoshka doll of preventive regulations, extending across different entities, technologies, and geographies.
Of course, the other critical input is talent. This is true at both a microeconomic and a geopolitical level. On the former, we’re seeing mid-level AI roles command $1 million per year in some quarters, and a battle royale between companies to poach one another’s top AI experts. This too is driven by the sense that decisive advantage means taking on big risks and expenses now, lest you be unable to do so in the future.
At a geopolitical level, we’re seeing walls go up to restrict the free movement of intellectual talent between the US and China, even as students from India arrive in ever-greater numbers at American universities. In prior eras, countries like the US benefited from emigration from Nazi Germany or the USSR, driven by genocide and persecution or, in the Soviet case, by higher material incomes abroad. All of these factors made it easier to assume or rely on the allegiance of new arrivals. In today’s globalized world, it is not so cut and dried, and geopolitical rivalry is giving rise to a default suspicion that foreign researchers and students have something to hide. The US continues to make it harder for Chinese researchers to work in the country, creating an almost default assumption that they are there to spy or export know-how. Chinese students studying in America report trouble getting through immigration ports even when they already hold visas. According to Amnesty International, they also experience surveillance, and their families back home face potential repercussions if they speak publicly - or in the classroom - in a way Chinese authorities don’t appreciate. With 47% of the world’s top AI researchers coming from China, and Chinese nationals making up 28% of top-tier researchers working in the US, according to MacroPolo, there is a real opportunity cost to making US universities a difficult environment for Chinese students. There is also a benefit to the source countries, which would rather their talent return home or never leave in the first place.
War of Regulation
The EU might not be leading the world in AI research and applications, but it has led the world in setting up a mechanism for AI regulation. The EU does have one AI startup - Mistral - that is attracting attention, but as with other IT/software ecosystems, the AI landscape is essentially a transpacific one, with rival hubs in the US and China. Former Goldman Sachs CEO and US Treasury Secretary Hank Paulson recently made the boldly nationalist comment that “When it comes to tech, America innovates, China scales, and Europe regulates.” I’m sure my non-American friends would take issue with this casting, but there is some truth to the observation that Europe makes its strongest mark in drafting rules that balance innovation against potential societal harm.
It is very likely that the absence of domestic corporate interests in the sector has permitted the EU to leap ahead in regulating activities that affect its citizens even when EU firms are not involved. Morrison Foerster notes that the EU Artificial Intelligence Act of March 2024 “is therefore extremely wide-reaching. It is noteworthy that the AI Act applies to the outputs of AI systems used within the EU, even if the AI providers or deployers are themselves not located in the EU.” This could be yet another instance of the sheer size of the EU market forcing firms to tailor product specifications globally to conform with EU rules - a status the Union proudly claims as a “regulatory superpower” and that the scholar Anu Bradford has dubbed The Brussels Effect. Of course, it can also have the unintended consequence of stifling innovation and advantaging the big established players who have the resources to comply.
Per the note cited above, the EU regulation covers a very broad range of AI uses and underlying technologies. Among other things, it requires all AI-generated content AND interactions to be identified as such; it prohibits untargeted scraping of facial images for recognition technologies and “the use of emotion recognition in workplaces.” It also requires the producers of general-purpose AI to maintain a registry of content used to train the model and to ensure that copyright is respected (though it places the onus on the copyright owner to declare an opt-out from scraping).
Meanwhile, although the US is also getting into the AI regulatory game, its efforts are more narrowly focused. This is in part because Congress is gridlocked and progress can only be made via presidential executive order (EO), as I covered in GeoLegal Weekly #3 on AI. A recent report from the Brookings Institution comparing US and EU approaches notes that “The EO primarily outlines guidelines for federal agencies to follow with a view to shape industry practices, but it does not impose regulation on private entities except for reporting requirements by invoking the Defense Production Act. The EU AI Act, on the other hand, directly applies to any provider of general-purpose AI models operating within the EU, which makes it more wide-ranging in terms of expected impact. While the US EO can be modified or revoked, especially in light of electoral changes, the EU AI Act puts forward legally binding rules in a lasting governance structure.”
War on Jobs
Finally, I don’t think it’s news to anyone that AI might make you and me redundant! The fact is that when we disaggregate our knowledge jobs into the actual tasks we do, it’s easy to see that AI can take on more and more of them. A recent Accenture analysis, for instance, found that AI can impact 40% of working hours, particularly in knowledge-intensive industries.
While natural language interfaces are not today tantamount to actual intelligence (my colleague Karthik posits that AI really should stand for “accelerated inference”), chat-based generative AI does make it easier to imagine that our colleagues and clients might someday prefer interacting with AI rather than with us (or at least with me!).
There’s much to be said about the impact on the legal sector, but I’ll save that for another day. For now, simply consider that as AI reorders the economy, it puts many more higher-end jobs at risk than technological advances of the past did. While we can debate whether the net effect will eventually be more jobs overall, the process of getting from here to there will eliminate the jobs and roles of many people employed today. People can’t retrain overnight, and societies gripped by cost-of-living crises are not societies primed to start doling out universal basic incomes any time soon. This raises the specter that the unemployed well-off will use their resources and influence to capture more of the public spending pie, while the unemployed worse-off feel ever-diminishing agency over their lives and find protest their key outlet.
—
GeoLegal Waitlist
As I mentioned above, we’re starting to sign up companies for our waitlist to get access to our GeoLegal software platform. In short, the platform digests global events to surface legal risks and lets you implement some pretty sophisticated workflows, like rapidly generating board memos or client alerts. If you are a GeoLegal Notes reader, you can get priority access by filling out our waitlist request here.
Final note - I get to release GeoLegal Notes but a whole team of folks help me research and analyze the world. Thanks to Karthik Sankaran, David Bender, Dan Currell, Varun Oberai, and Mia Tellmann for their numerous contributions to this piece.
-SW