EU negotiators reached a provisional agreement on the EU AI Act on 9 December 2023, after a grueling 38-hour negotiation. In this exclusive interview, AI expert Walter Pasquarelli discusses the groundbreaking developments following the EU AI Act’s announcement and the key implications for European businesses. Pasquarelli also shares practical insights on how to get started with EU AI Act compliance and how the Act will affect the progress of AI innovation in organizations.
How does the EU AI Act adapt to the fast-paced changes in AI technology, and how does it categorize different AI applications based on risk levels?
When the conversation about an EU AI Act started, it was before the launch of generative AI tools like ChatGPT, so it was built on a very different understanding of artificial intelligence. Some have argued that the original draft took more of a product-focused approach, but the launch of generative AI tools has transformed our understanding of what AI can do. That meant the EU AI Act needed updating. I remember that in the final 38 hours, the amount of information coming out of the trilogue was incredible. I’m glad that we made it to where we are right now.
It’s the world’s first comprehensive legislation that regulates how AI can be used in European markets and the European bloc.
At its heart, the EU AI Act has four pillars, creating four risk buckets into which different AI applications can fall: minimal risk, which will face almost no regulatory obligations; limited risk; high risk; and unacceptable risk at the top, which is prohibited outright. The EU AI Act looks at the AI ecosystem in Europe and sorts it into these four buckets. Based on that, companies developing tools will face varying regulatory obligations. Think of it as four risk categories and the resulting rules applying to products.
An AI year passes in seconds. Consider the developments within the AI ecosystem and the technical possibilities that now exist: in two years, the whole environment has changed. We can now produce hyper-realistic video generators and tools that can create entire marketing campaigns’ worth of copy in hours. This will only accelerate, which creates a need for regulations that won’t be outdated in a year or two. Can the EU AI Act achieve this? Its broad approach positions it well, but it will certainly need updates as technological breakthroughs happen.
Will the EU AI Act’s broad approach address risks like bias?
The reason it’s so broad is that, on the one hand, the EU AI Act doesn’t seek to regulate AI products per se; it seeks to regulate risk. It acknowledges that developing European legislation takes ages, and the only way to tackle that is by producing something relatively broad. This differs from China, which moves very quickly by issuing targeted, vertical regulations for specific applications. It can do so because it has a very different legislative process.
Now, when we look at specific provisions, how do we categorize these risks? The EU AI Act does provide a list of applications that count as high-risk, for example, AI tools for determining an individual’s creditworthiness and AI tools used by the police, both of which have typically shown elements of bias. For issues such as money laundering, I think the EU will provide descriptions of what these applications are. Much of this will be settled by case law in the coming years, and keep in mind that there will be an adaptation period during which organizations can consult with the EU.
Were AI experts consulted for the EU AI Act?
That’s the million-dollar question: how to involve experts, not just policymakers, in developing this kind of legislation. In the AI field it’s even harder, because AI skills are scarce in the government sector. What regulators and legislators did was conduct so-called stakeholder consultations, gathering opinions and feedback on the EU AI Act. However, mostly large organizations were able to provide feedback, as they have the bandwidth and resources to formulate and articulate their policy positions. There has been criticism that too few experts from startups and small companies were involved in drafting the policies.
Is the EU a frontrunner when it comes to AI legislation?
Yes, because it’s the main comprehensive regulation out there. China is very fast at regulating these tools, predominantly for internal domestic reasons, such as its political agenda and economic prerogatives, but also because it wants to compete internationally on the geopolitical stage, and AI is such an important element of its strategy. Europe has produced the most overarching legislation; that’s a fact of life, and it’s not going to change. It will influence companies inside the EU but also companies outside it, a phenomenon known as the Brussels effect. The U.S. came up with its own Executive Order on AI, claiming it to be the most sweeping AI policy there is. However, it’s just an executive order, an instruction from the President to various agencies to develop standards and regulations. There’s nothing concrete yet.
How will the EU AI Act affect funding for AI initiatives?
The tech sector is a point of strategic advantage worldwide. In the U.S., legislation is laxer, allowing technology firms to experiment more widely without worrying about visits from regulators. That brings advantages, such as a higher risk appetite, but at the same time, many things can go wrong, especially for consumers.
On the other hand, the EU wants to put consumer protection front and center. These regulations have the advantage of producing predictability and legal certainty: if I want to invest in a company, I know what to expect in terms of regulatory risk. Another question is whether there is a direct link between regulation and venture capital. European investors are more reluctant than their American counterparts to invest at similar levels, and it’s too early to say whether the EU AI Act will affect that positively or negatively. Legislation can support or harm VC funding, but other factors also play a role.
There are also arguments that legislation will slow down innovation because there’s less room for experimentation. Then there’s the fact that we’re trying to regulate a technology that hasn’t fully matured yet. That’s the challenge of regulating AI: it needs an evolutionary regulatory framework, because the technology is still developing and changing. It’s not like regulating nuclear energy, which is still high risk but won’t look much different in 10 years. AI is different, especially across regions with fragmented policy environments, different data governance regimes, and legislation that varies between countries.
The EU AI Act, although more stringent, has the potential to harmonize legislation across countries.
How have lobbying groups affected the EU AI Act?
In the final stages of the EU AI Act’s development, a few countries, notably Germany, Italy, and France, argued that this kind of legislation was not right for their markets. From what I know, this was a direct result of lobbying from companies saying, ‘No, don’t do this.’ But at the end of the day, they are still stuck with it, so you could argue about how successful that lobbying has been.
Among the larger technology firms, there is not a lot of positive sentiment around the EU AI Act. That suggests to me that the lobbying efforts, which have been enormous, with millions going into them, haven’t been particularly successful. Certain minor provisions may have been influenced, but surprisingly, most of the European Commission’s efforts to fend off lobbyists have proven relatively watertight. Public relations and public policy work between the tech sector and the EU Commission are important, because many provisions and interactions need to happen to ensure the legislation matches the requirements of different sectors.
So, lobbying is a dirty word, but it still needs to happen so that a harmonization process occurs.
Who is responsible for enforcing the EU AI Act?
That is the Achilles’ heel of the EU AI Act. Under the GDPR, enforcement used to be a national effort, whereby national data protection authorities would enforce pan-European legislation at the national level. The problem there, as I alluded to earlier, is the scarcity of AI skills. You might have this big regulation with a huge overarching framework, but implementation will be difficult due to the skills shortage. That is going to make or break the EU AI Act. My understanding from my sources is that even those responsible for developing the EU AI Act face an AI skills shortage. A centralized European AI office is possibly the better approach to combating that shortage.
How can multinational companies handle differing legislation in different countries?
It depends on the strategy that you would prioritize.
To ensure you’re not infringing any regulations, stick with the EU AI Act as a general regulatory yardstick, and you will be safe in most countries.
This is because the Act has the strictest interpretation of AI products. It’s more difficult if you come from the U.S., where there is a different understanding of how data should be used and what is or isn’t ethical. Some of my U.S. clients don’t want to deal with the GDPR. It’s easier to go from Europe to the U.S., or from Europe to other regions such as the Middle East. It’s harder to go from the U.S. to the EU, because that means you must adapt.
What can business leaders do to stay ahead of the EU AI Act?
I would advise every company to join the AI Pact. It’s a voluntary initiative that gives you a forum for exchange and a direct source of information. Embrace the Act; it’s there, and you have to accept it.
Another thing to consider is to scan existing AI tools and products for issues. For example, what kinds of data do you use? Who’s your target audience? How have the models been trained? This assessment helps categorize your company’s AI products and determine where they fall into the four risk categories. However, extra considerations are needed for sensitive sectors such as healthcare and insurance, where data needs to be handled carefully.
After the assessment, plan the right compliance measures and provisions to put in place. It’s not going to happen overnight; the EU AI Act won’t be enforced immediately. I also advise organizations of all sizes to read the EU AI Act itself; surprisingly, it’s accessible reading. Be aware of the risks of your own products, and understand which of the Act’s requirements they will face.
Read the piece of legislation, reflect on your products, and I guarantee compliance with the EU AI Act will be achievable.
*The interview answers have been edited for length and clarity.