
AI regulation: US first, EU better?

The European Parliament adopted the AI Act in March 2024. The US enacted a landmark Executive Order on AI in October 2023. So… Is anyone winning? And what do both things mean for anyone who’s never been called to the bar? Team DU has been wading through the legal texts so you don’t have to…

Depending on who you ask, love, money, or laws make the world go round. Artificial Intelligence is similarly powered. And so far it’s been getting a lot of love and a lot of money, but not a lot of law.

There is an argument to be made that this lack of regulation is one reason why the love and the money appear to be slowing down. As various people at Data Universe 24 pointed out, AI has a trust problem. But for whatever reason, governments around the world are now racing to regulate its development and deployment.

Two recent biggies come from the US and the EU. America was first out of the blocks on October 30, 2023, when President Biden issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. The EU joined the party on March 13, 2024, when the European Parliament adopted the Artificial Intelligence Act.

How do they differ and is one better than the other, in language that won’t suck the moisture from your body faster than a salted vampire*?

Legally bound

According to international law firm White & Case: “Currently, there is no comprehensive federal legislation or regulations in the US that regulate the development of AI or specifically prohibit or restrict their use.”

What about that landmark executive order? We’ll come back to it.

Meanwhile, in the EU, according to international law firm Wilmer Cutler Pickering Hale and Dorr, the AI Act “is considered to be the world’s first comprehensive horizontal legal framework for AI. It provides for EU-wide rules on data quality, transparency, human oversight and accountability.”

So AI in the US is an unregulated gold rush?

Whilst it may be legally precise to say “there is no comprehensive federal legislation or regulations in the US”, that really doesn’t mean AI here is the new Wild West. As international law firm DLA Piper puts it:

“The Executive Order (EO) draws on the powers of the Presidency to require primary executive departments to formulate consensus industry standards and regulations for AI usage, which creates a risk of divergent standards … In contrast, the AI Act aims to establish a regulatory framework across the entire EU as a single regulation.”

TL;DR: The EO is piecemeal and focuses on guidelines; the AI Act is wholesale and enforces binding regulations with fines and other penalties. These penalties can be up to €35 million or 7% of total worldwide annual turnover, whichever is higher.

But which is better?

If everything before this line is legal fact, everything that follows is informed (we hope) opinion.

According to CNN, the AI Act means the EU is “leapfrogging the United States once again on the regulation of a critical and disruptive technology.”

CIO is less black and white, pointing out that “The European Union and the US have agreed to increase co-operation in the development of technologies based on AI, placing a particular emphasis on safety and governance.”

Brookings, meanwhile, points out that, despite the EU and US sharing a “conceptual alignment” on AI regulation, the devil is – as always – in the details. “Regarding many specific AI applications, especially those related to socioeconomic processes and online platforms, the EU and US are on a path to significant misalignment.”

And that misalignment is already affecting some of the big dogs. According to the FT, Apple and Meta are both delaying the launch of their latest AI models because of EU rules. This is not a bug. Apple is wary of the EU’s Digital Markets Act, which, as the FT also points out, is “aimed at enabling local start-ups to better compete with Big Tech companies, most of whom are US-based”.

So?

So, a regulatory framework across the entire EU – built on ready-fanged regulations – does appear to mean effective, block-wide protection and increased competition. All of which should, ultimately, be better for the consumer.

But, according to a recent global study, businesses in North America are 185% more likely to already be “fully using and implementing generative AI” in their processes than businesses in South West and Eastern Europe. Which would appear to show that the current US regulatory environment does, at the very least, make it easier to get projects started.

You say potato, I say potato.

And, in the end…

Perhaps the most useful comparison is the EU’s General Data Protection Regulation (GDPR). It too was applied wholesale when America’s data protection approach was piecemeal. It too created a pop-up industry of “whose approach is best” forecasters. And yet, because trade today really is global (even if you’re not trading globally, you’re trading with global traders), it turned out to be the rising tide that lifts all boats.

Of GDPR, Kirsten Mycroft, International Association of Privacy Professionals member and Enterprise Chief Privacy and Data Ethics Officer for BNY Mellon, said: “It’s been really good. It’s been good for the privacy profession, it’s been good for individuals who are at the heart of the GDPR, it’s driven an acceleration of privacy program maturity and privacy technology development, and for privacy professionals it’s been an amazing opportunity.”

If the AI Act can achieve similar, everyone wins.

*Why new rules are still written in language guaranteed to put even the most over-caffeinated to sleep is beyond me. But that’s an argument for another day.
