Inside the EU AI Act: Exclusive Insights from Lead Author, Gabriele Mazzini

On August 1, 2024, the EU AI Act officially came into force, establishing the world’s first comprehensive legal framework for regulating AI technology. In this exclusive interview, we speak with Gabriele Mazzini, the architect and lead author of the Act, to gain an insider’s perspective on its development. Mazzini offers a behind-the-scenes look at the complex policy-writing process, discussing how various stakeholders were consulted, and how consensus was reached on the Act’s risk-based approach. He also provides crucial advice for business leaders navigating compliance, shares important updates since the law took effect, and discusses the global implications of the Act. Most importantly, Mazzini reassures companies that now is not the time to panic, but to prepare for the future of AI regulation.

 

What motivated you to take on the role of the lead author of the EU AI Act? How did your background in law influence your policy-writing process?

I realized from the get-go that AI policy was fascinating. I have been passionate about it from the beginning, notably in trying to understand the intersection between AI as a technology and law as a tool to govern technology. I drafted a quite comprehensive paper on the intersection between AI and EU law in 2018, well before the Commission started working on the AI Act. At the time, I was working in a Commission department that was not the one that ultimately led the work on the Act; it was mostly focused on the liability implications of AI. We were reflecting on whether the liability regime in the EU needed to be changed to enable AI. My background in law, and the effort I had put into understanding the complexity of the intersection between AI and EU law, were essential for the work I did afterwards on the AI Act. When working in policymaking as a regulator, it is essential to think holistically, especially in a field like AI, where the implications are manifold and broad and where regulatory action takes the form of a horizontal legal framework like the AI Act, which applies across all sectors.

 

How did you engage with various stakeholders during the development process? What role did their input play in shaping the Act?

It’s a privilege to interact with many stakeholders as a policymaker and listen to many different views. You also start seeing how society sees your work, and whether people see opportunities or risks. At the same time, it’s also a major responsibility, because you have to make sure that whatever choices you make as a policymaker are grounded in facts and evidence, and that you have, as far as possible, an up-to-date understanding and knowledge of the matter you regulate.

It’s both a privilege and a responsibility. I’ve always interpreted that role with much respect and not as a tick-the-box exercise where the job is done after meeting X number of stakeholders. Consulting with and engaging with stakeholders is much more than that. On an individual basis, I’ve always had an open-door policy from the beginning and was willing to meet with whoever was interested in talking to me. The institution as a whole has of course also engaged with stakeholders in a structured way.  

This goes back to a time when the Act was not even in the conception phase. The Commission started engaging with stakeholders as early as 2018 and 2019, when it set up an expert group on artificial intelligence. This expert group was composed of 52 individuals from different backgrounds, namely industry, academia, NGOs, and civil society. That group already gave a broad perspective on the emergence of AI and its policy implications. It also developed ethical guidelines for trustworthy AI, which were a deliverable not of the European Commission but of this separate expert group. That work initiated a structured dialogue between the European Commission and stakeholders.

That work was complemented by the establishment of an online platform, the AI Alliance, where citizens and any interested party could provide feedback and suggestions. Another important set of consultations took place after the adoption of the White Paper on AI. Before the Commission came up with the actual legal framework in 2021, it adopted the White Paper in February 2020. This was essentially how the institution put forward a number of potential ideas for what could become the draft legal framework and sought to catalyze feedback on those ideas. That was another way we consulted widely with stakeholders.

 

Can you share any particularly challenging moments during the writing process? How did you balance competing interests and priorities to reach a consensus?

No process is perfect. It’s challenging to deal with a legal framework that is so complex and large and to ensure everyone fully understands what you’re trying to do. Any stakeholder typically has a particular perspective on the policy work that is unfolding, linked to the needs and interests they represent. When trying to build something horizontal, the input you receive from individual stakeholders does not always fit the overall picture. So the skill of the policymaker is to merge those narrow perspectives into the ultimate goal, which in this case is a broader framework.

 

What led to the risk-based framework of the EU AI Act?

It was pretty clear to me from the beginning that regulating AI as such, or every AI application, did not make sense. At the same time, even for those applications that may have deserved to be regulated, it did not seem warranted to establish the same type of rules for all of them. Hence the idea of a ‘pyramid’-like approach tailored to the actual use case.

This idea was quite fascinating because we realized that we did not want to regulate AI as a technology.  

We didn’t want to regulate every AI application as if AI always creates risks. To create a balanced legal framework that does not hinder development and intervenes only when necessary, you need to focus on the application level and the use case. The risk-based approach was exactly that solution, because depending on the type of risk the application generates, the rules are different. We identified three risk levels where binding rules apply, plus a fourth level for which no binding rules are foreseen but certain forms of voluntary compliance are possible. Of course, this choice was not ‘carved in stone’. There is no ontological value in the risk levels; they could have been articulated differently. But I think it was an interesting and groundbreaking idea.

 

The EU AI Act officially came into force on August 1st. What significant updates or events have unfolded since then that business leaders should take note of?

The fact that the Act has entered into force doesn’t mean it is immediately applicable. The Act is law, so it is binding, but it does not apply in its entirety until three years after entry into force.

There is a so-called transition period. The first rules that companies need to comply with will be the rules on the prohibitions: the top of the risk pyramid, if you want. The second set of rules concerns general-purpose AI models and will be applicable one year after 1 August 2024. Two years after entry into force, on 1 August 2026, all the other rules of the AI Act become applicable, except for certain provisions regarding high-risk systems.

Business leaders need to understand the timeline in which the rules become applicable.  

What has happened since the publication of the Act is that the administrations, both in the Commission and in the Member States, have started to set up internal processes and structures to ensure enforcement. Business leaders, notably those that may be concerned by the rules applicable to general-purpose AI models, should pay attention to the work that has already started on the Code of Practice at the EU level, facilitated by the Commission. This Code of Practice should be finalized before the relevant chapter of the AI Act enters into application, which means before 1 August 2025.

Another important fact business leaders should keep in mind is that the Act is not 100% clear in all its provisions. The European Commission will have to develop a range of secondary measures, known as implementing acts and delegated acts, as well as guidelines and templates, covering about 70 items. There are still many areas where clarification is needed, which is not ideal.

Therefore, there is an opportunity for business leaders and companies to shape the process of fine-tuning and clarifying the AI Act and, in doing so, the actual extent to which certain rules may apply to them. In other words, it is time to make their voices heard. They should be active in the implementation phase now that the legislative phase is finalized, because so much is still to be clarified.

 

With penalties for non-compliance potentially reaching up to 35 million euros or 7% of annual turnover, what immediate steps should businesses take to ensure they are not at risk?

They should not consider themselves to be at the receiving end of a process they cannot influence. Instead, now is a time to engage critically with the provisions, especially when those rules provide a certain margin of appreciation. Companies need to proactively engage with the regulators and suggest interpretations, positions, and ideas to make sure that those rules are applied reasonably and sensibly. This is one of the challenges of regulating technology, where there is a knowledge gap between the regulators and the companies that develop those technologies.  

Of course, it goes without saying that regulators should not depend only on companies’ views. Although it was not obvious in our case, especially at the beginning of the process, regulators should invest heavily in deep in-house expertise on the matters they intend to regulate. You need to know what you want to regulate in order to do it well. Only if you have your own technical expertise can you engage constructively with external stakeholders while retaining the independence of judgment needed to take broader societal considerations into account. On the other hand, those who developed the technology and the products must have a say in suggesting the best ways to comply. This exchange needs to happen. I understand that some companies, especially smaller ones, don’t have the resources to engage extensively with regulators, but at a time when so much still needs to be clarified, it’s an exercise worth doing. It doesn’t have to be individual companies; it could be industry associations.

 

Many companies are facing a shortage of AI talent. How do you think this skills gap will impact the successful adoption of the EU AI Act?

Because those skills are rare, companies need to build up their strength in AI-related areas. The concern is that, as I mentioned before, companies at this stage may have to invest more in compliance than in AI skills. That may impact a company’s ability to compete in the AI space.

If you spend more money on compliance than on research and development or on AI engineers, who are also scarce, there is a risk of imbalance. The same may happen with the authorities, because they must ensure compliance with all these rules and need to equip themselves with the necessary technical skills.

I hope this set of rules will be somewhat clarified as soon as possible so that companies can hopefully shift more of their budget to AI skills rather than AI compliance. In my view, the successful adoption of AI in Europe depends on the ability to get this legal framework, and the tools needed to implement this framework, working effectively and sensibly as fast as possible. So there is still important work to do. 

 

Who holds the primary responsibility for implementing and enforcing the EU AI Act within organizations?

It should be a team effort. The Act does not foresee a figure like the data protection officer (DPO) in privacy legislation; it does not require, for instance, a Chief AI Officer in companies. The obligations the Act establishes fall on the economic actor, that is, the provider or the deployer, so the company itself. This means companies can organize themselves as they wish, depending, for instance, on their size. I don’t think there is necessarily only one model. Ultimately, the legal responsibility is on the company. If there is a lack of compliance, the company will have to pay the fine.

 

How do you see the EU AI Act influencing AI regulation in other parts of the world?

There is a huge interest around the world. Since I left the Commission, I’ve traveled from South America to Asia, and I have witnessed a growing interest in understanding this piece of legislation. It’s quite normal in this phase because AI governance and regulation is something that is of interest globally. Governments are wondering how to deal with the ‘AI wave’.  

This interest is also reflected in collective efforts at the international level. For instance, UN agencies are investing heavily in reflecting on AI governance frameworks. As the EU is the first regional actor to come up with such a comprehensive legal framework on AI, it’s normal that countries around the world are looking at that framework with interest and asking themselves whether they should draw inspiration from it.

It’s too early to say whether the Act will turn into a regulatory model for other regions around the world. There is a need to understand whether those choices fit the socioeconomic or legal context in those countries. The capacity to implement a framework like the AI Act also differs from country to country. A legal framework is not just a piece of paper. It requires human resources, skills, funding, and structures to turn it into an effective tool that can achieve the objectives it was designed for. It needs to be managed and brought to life. Not all countries are in the same position, and they would be well-advised to consider questions of implementation and enforcement from the get-go, not after the law has been agreed. 

 

Are there any specific areas where you believe the Act could have a significant global impact?

I hope the risk-based approach can be considered as one of the foundational elements. The idea is to consider AI as a tool that has both benefits and risks and is not necessarily dangerous by its nature. It’s a technology with different risk levels depending on how it’s used. I’d like to see this risk-based approach adopted widely. 

The extent to which certain areas of the AI Act may have an impact beyond EU borders could also depend on certain company choices, especially for companies that sell their products and services in the EU. They may adjust their compliance system to the EU legal framework simply because they want to sell in the EU.  

Those companies may therefore decide to adopt the same or similar compliance structure when selling their products outside the EU. It’s up to the companies whether to have two systems, one for the EU market and one for the non-EU market. It’s not for me to say what is economically convenient for companies. But these considerations may be relevant in determining whether we may see a larger or a narrower adoption of certain areas of the Act. 

 

What are the key trends or developments shaping the AI landscape in the coming years? How might the Act need to evolve to address these future challenges?

It will be interesting to see whether the trend in generative AI continues along the lines we have seen so far. The trend towards developing larger models that require more data and more computing power is based on certain underlying architectural choices. Perhaps intelligence will come from other foundational choices that do not necessarily rely on ever-growing data sets or computing power. This will ultimately shape investment in the technology stack needed to support it.

From a regulatory and policy point of view, it’s a challenge to keep regulation up to date, but it’s not impossible. When I think about the AI Act, making sure it was future-proof was one of my main concerns from the beginning. However, certain choices made after the adoption of the Commission proposal, such as regulating foundation models or deleting the possibility of updating the AI definition, do not necessarily go in that direction, from my point of view. We will see whether the Act can stand the test of future developments.

Currently, I’m more concerned about ensuring the Act works now to enable trustworthy innovation in Europe. This is where the Act will prove its value. It should be applied in a way that is accessible, easy to understand, and provides legal certainty to companies so that they can rely on a stable legal framework and focus on building the products.  

 

*The interview answers have been edited for length and clarity.

Navigating the EU AI Act: Mitigate Risks and Seize Opportunities

Political agreement on the EU AI Act was finally reached on 9 December 2023, after a grueling 38-hour negotiation. In this exclusive interview with AI expert Walter Pasquarelli, learn about the groundbreaking developments following the EU AI Act’s announcement and the key implications for European businesses. Pasquarelli also shares practical insights on how to get started with EU AI Act compliance and how the Act will affect the progress of AI innovation in organizations.

 

How does the EU AI Act adapt to the fast-paced changes in AI technology, and how does it categorize different AI applications based on risk levels?

When we started the conversation about an EU AI Act, it was before the launch of generative AI tools like ChatGPT, so we focused on a very different understanding of artificial intelligence. Some have argued that the original draft took more of a product-oriented approach, but the launch of generative AI tools has transformed our understanding of what AI can do. That meant the EU AI Act needed updating. I remember that in the final 38 hours, the amount of information coming out of the trilogue was incredible. I’m glad we made it to where we are right now.

It’s the world’s first comprehensive legislation that regulates how AI can be used in European markets and the European bloc. 

At its heart, the EU AI Act has four pillars, creating four risk buckets into which different AI applications can fall: low risk, which faces almost no regulatory action; medium risk; high risk; and prohibited uses at the top. The EU AI Act looks at the AI ecosystem in Europe and categorizes it into these four buckets. Based on that, companies developing tools will face different regulatory obligations. Think of it as four risk categories and the resulting regulation of products.
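To make the four buckets concrete, here is a minimal sketch in Python of how a company might record which bucket each of its AI systems falls into. The tier labels follow the interview’s wording, while the example systems and the obligation summaries are illustrative assumptions rather than text taken from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk buckets described above; obligation summaries are simplified."""
    LOW = "low risk: little or no regulatory action"
    MEDIUM = "medium risk: mainly transparency obligations"
    HIGH = "high risk: strict requirements (risk management, documentation, human oversight)"
    PROHIBITED = "unacceptable risk: banned in the EU market"

# Hypothetical internal inventory mapping each AI system to the bucket
# a company's own assessment places it in (examples only).
ai_inventory = {
    "spam filter": RiskTier.LOW,
    "customer-service chatbot": RiskTier.MEDIUM,
    "credit-scoring model": RiskTier.HIGH,  # creditworthiness is cited later as high risk
    "real-time biometric surveillance tool": RiskTier.PROHIBITED,
}

for system, tier in ai_inventory.items():
    print(f"{system} -> {tier.value}")
```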

One AI year passes in seconds. Just look at the developments within the AI ecosystem and the technical possibilities that are now out there. In two years, the whole environment has changed. We now have video generators that look hyper-realistic and tools that can produce complete marketing copy in hours. This will advance faster and faster, which creates a need for regulations that won’t be outdated in a year or two. Can the EU AI Act achieve this? Its broad approach positions it well, but it will certainly need further updates as technological breakthroughs happen.

 

Will the EU AI Act’s broad approach address risks like bias?

The reason it’s so broad is that the EU AI Act doesn’t seek to regulate AI products per se; it seeks to regulate risk. It also acknowledges that developing European legislation takes ages, and the only way to tackle this is by producing something relatively broad. That’s different from China, which is very fast in creating targeted regulations; they can do it because they have a different kind of legislative process.

Now, when we look at specific provisions, how do we categorize these risks? The EU AI Act does provide a list of applications that are high-risk: for example, AI tools for determining the creditworthiness of an individual, or AI tools used by the police, both of which have typically shown elements of bias. For issues such as money laundering, I think the EU will provide descriptions of what these applications are. Much of it will be settled by case law in the coming years, and keep in mind that there is an adaptation period during which organizations can consult with the EU on this.

 

Were AI experts consulted for the EU AI Act?

That’s the million-dollar question: how to involve experts in developing this kind of legislation, rather than leaving it to policymakers alone. In the AI field it’s even harder, because AI skills are scarce in the government sector. What regulators and legislators did was conduct so-called stakeholder consultations, gathering opinions and feedback on the EU AI Act. In practice, however, mostly large organizations were able to provide feedback, as they have the bandwidth and resources to formulate and articulate their policy positions. There has been criticism that experts from startups and small companies were insufficiently involved in drafting appropriate policies.

 

Is the EU a frontrunner when it comes to AI legislation?

Yes, because it’s the main comprehensive regulation out there. China is very fast in regulating these tools, predominantly for internal domestic reasons, such as its political agenda and economic prerogatives, but also because it wants to compete on the geopolitical stage, and AI is such an important element of its strategy. Europe has produced the most overarching legislation; that’s a fact of life, and it’s not going to change. It’s going to influence companies in the EU but also companies outside the EU, which is known as the Brussels effect. The U.S. came up with its own Executive Order on AI, billed as the most sweeping piece of AI policy there is. However, it’s just an executive order, an instruction by the President to various agencies to develop standards and regulations. There’s nothing concrete yet.

 

How will the EU AI Act affect funding for AI initiatives?

The tech sector is a source of strategic advantage worldwide. In the U.S., legislation is laxer, allowing wider experimentation by technology firms without worrying about visits from regulators. There are advantages, such as a higher risk tolerance and risk appetite. But at the same time, many things can go wrong, especially for consumers.

On the other hand, the EU wants to put consumer protection front and center. There is an advantage in having these regulations: they produce predictability and legal certainty. If I want to invest in a company, I know what to expect in terms of regulatory risk. Another thing to consider is whether there is a direct link between regulation and venture capital. European investors are more reluctant than their American counterparts to invest similar amounts, and it’s too early to say whether the EU AI Act will have a positive or negative effect on that. Legislation can support or harm it, but other factors also affect VC funding.

There are also arguments that legislation will slow down innovation because there’s less room for experimentation. Then there’s the fact that we’re trying to regulate a technology that hasn’t fully matured yet. That’s the challenge of regulating AI: it needs an evolutionary regulatory framework, because the technology is still developing and changing. It’s not like regulating nuclear energy, which is still high risk but won’t be much different in 10 years. It’s different for AI, especially in regions with fragmented policy environments, different data governance regimes, and differing legislation between countries.

The EU AI Act, although more stringent, has the potential to harmonize legislation across countries. 

 

How have lobbying groups affected the EU AI Act?

In the final stages of the EU AI Act’s development, a few countries, notably Germany, Italy, and France, argued that this kind of legislation was not right for their markets. From what I know, this was a direct result of lobbying from companies saying, ‘No, don’t do this.’ But at the end of the day, they are still stuck with it. So, you could argue about how successful that has been.

Among some of the larger technology firms, there is not a lot of positive thinking around the EU AI Act. That suggests to me that the lobbying efforts, which have been enormous, with millions going into them, haven’t been particularly successful. Certain provisions, minor ones, might have been influenced. Surprisingly, the European Commission’s efforts to fend off lobbyists have been relatively watertight. Public relations and public policy engagement between the tech sector and the EU Commission are still important, because many provisions and interactions are needed to ensure the legislation matches the requirements of different sectors.

So, lobbying is a dirty word, but it still needs to happen so that a harmonization process occurs. 

 

Who is responsible for enforcing the EU AI Act?

That is the Achilles’ heel of the EU AI Act. Under the GDPR, enforcement has been a national effort, whereby national data protection authorities enforce pan-European legislation at the national level. The problem there, as I alluded to earlier, is the scarcity of AI skills. You might have this big regulation with a huge overarching framework, but implementation will be difficult due to the skills shortage. That is going to be the make-or-break factor for the EU AI Act. My understanding from my sources is that even those responsible for developing the EU AI Act face an AI skills shortage. A centralized European AI Office is possibly the better approach to combat the skills shortage.

 

How can multinational companies handle legislation in different countries?

It depends on the strategy that you would prioritize.  

To ensure you’re not infringing any regulations, stick with the EU AI Act as a general regulatory yardstick, and you will be safe in most countries.  

This is because the Act has the strictest interpretation of AI products. It’s more difficult if you come from the U.S., where there is a different understanding of how data should be used and what is ethical or not. Some of my U.S. clients don’t want to deal with the GDPR. It’s easier if you go from Europe to the U.S., or Europe to other regions such as the Middle East. It’s harder if you go from the U.S. to the EU because that means you must adapt.  

 

What can business leaders do to stay ahead of the EU AI Act?

I would advise every company to join the AI Pact. It’s a voluntary initiative that gives you a forum for exchange and a direct source of information. Embrace the idea; the Act is there, and you have to accept it.

Another thing to consider is to scan existing AI tools and products for issues. For example, what kinds of data do you use? Who’s your target audience? How have the models been trained? This assessment helps categorize your company’s AI products and determine where they fall into the four risk categories. However, extra considerations are needed for sensitive sectors such as healthcare and insurance, where data needs to be handled carefully. 
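As an illustration of what such a scan could look like in practice, the short Python sketch below records the questions listed above for each product. The field names, the example product, and the review rule are hypothetical simplifications, not a legal test under the Act.

```python
from dataclasses import dataclass

@dataclass
class AIProductAssessment:
    """One record per AI tool or product, capturing the questions above."""
    name: str
    data_sources: list[str]       # what kinds of data do you use?
    target_audience: str          # who is your target audience?
    training_description: str     # how have the models been trained?
    sensitive_sector: bool        # e.g. healthcare or insurance
    provisional_risk_bucket: str  # low / medium / high / prohibited (own assessment)

    def needs_extra_review(self) -> bool:
        # Sensitive sectors and anything provisionally high-risk or prohibited
        # should go to legal or compliance review rather than be self-assessed.
        return self.sensitive_sector or self.provisional_risk_bucket in ("high", "prohibited")

# Hypothetical example entry
chatbot = AIProductAssessment(
    name="policy-renewal chatbot",
    data_sources=["customer emails", "policy records"],
    target_audience="insurance customers",
    training_description="fine-tuned on anonymized support transcripts",
    sensitive_sector=True,
    provisional_risk_bucket="medium",
)

print(chatbot.name, "-> extra review needed:", chatbot.needs_extra_review())
```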

After the assessment, plan the right controls and provisions to put in place. It’s not going to happen overnight; the EU AI Act won’t be enforced immediately. I also advise organizations of all sizes to read the EU AI Act; surprisingly, it’s quite accessible. Be aware of the risks of your own products: you want to understand the issues your products will face under the EU AI Act.

Read the piece of legislation, reflect on your products, and I guarantee compliance with the EU AI Act will be achievable. 

 

*The interview answers have been edited for length and clarity.

What is Beyond the AI Hype for Volvo Group?

Business leaders need to face a reality where AI isn’t just a buzzword, it’s the engine driving real business impact. Volvo Group is leading by example by weaving AI into the fabric of its operations, seeking tangible, transformative, and sustainable impact. In this exclusive interview, Robert Valton, Head of Data, Analytics & AI, Volvo Group Connected Solutions, shares valuable insights on the game-changing nature of AI, one of the key enablers to offer tailor-made end-to-end solutions to customers and to achieve Volvo Group’s long-term ambitions to be 100% safe, 100% fossil-free, and 100% more productive.

 

How did Volvo Group approach the implementation and scaling of AI technology? Can you share a few use cases that showcase the value of AI at Volvo?

Volvo Group has great products like trucks, buses, construction equipment, and marine and industrial engines. There has been a lot of focus on the products, but we are also turning towards services and solutions where we can really utilize AI. For example, an iPhone has around 10 sensors, such as GPS, accelerometer, gyroscope, and barometer. A truck has 10 times as many sensors as an iPhone. Imagine the possibilities we have with the truck’s data. I want to highlight that AI is a game changer, especially when you utilize real data.

With synthetic data or transfer learning you might leverage AI with minimal or no data initially, but if we have real data, we can bring the full value of AI. 

To create value, you need to balance both data and AI technology, and you need a business need, a challenge to solve. It’s not always easy to formulate the question, or to know the possibilities and the value of data and AI.

This is our aim in the Volvo Group: to help articulate this need, both spoken and unspoken, addressing both the known and the unknown questions.

Traditionally, we have used AI for autonomous driving connected to a product. We have continued to explore AI around the product. For example, predictive maintenance to understand the product’s lifetime and its components. With AI, we can predict when a truck needs to go in for preventive service before the components malfunction to ensure we always have the truck on the road delivering goods. But this is still around the product. If we expand it to the driver or the operator, we can support driver training with fuel and energy coaching through digital twin technology.  
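As a simple illustration of the kind of prediction involved (a minimal sketch, not Volvo’s actual system; the component, readings, and threshold are invented), one basic approach is to fit a trend to a component’s wear indicator and estimate when it will cross a service threshold:

```python
import numpy as np

# Hypothetical wear-indicator readings for one truck component,
# sampled weekly (arbitrary units).
days = np.array([0, 7, 14, 21, 28, 35])
wear = np.array([0.0, 0.8, 1.7, 2.4, 3.3, 4.1])

SERVICE_THRESHOLD = 8.0  # assumed wear level at which the component should be replaced

# Fit a simple linear trend: wear ~ slope * day + intercept
slope, intercept = np.polyfit(days, wear, deg=1)

# Estimate when the threshold will be reached and how much time remains
predicted_service_day = (SERVICE_THRESHOLD - intercept) / slope
days_remaining = predicted_service_day - days[-1]

print(f"Estimated wear rate: {slope:.2f} units/day")
print(f"Schedule preventive service in roughly {days_remaining:.0f} days")
```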

We also use AI for transportation optimization, to understand whether there are bottlenecks in a transportation flow. Generally, 50% of trucks in Europe run empty, so 50% don’t have cargo, and 25% of trucks are standing still. That means we have a lot of underutilized capability and capacity. If we can use AI to address this, we can deliver more cargo with the existing fleet, which is better for the environment.

You can use AI to solve all problems, but not all problems deserve AI.  

Sometimes deploying AI can be too cumbersome, expensive, and complex. We should always match the tool to the problem so we are efficient: the right tool for the right use. Another aspect is AI for internal efficiency. For example, we do a lot of coding in-house, and AI is our coding buddy for verifying and testing code. You can also use AI to automate manual tasks or to address quality issues in a process.

 

Generative AI (GenAI) has taken the business world by storm this year. For organizations who don’t know where to start with GenAI, how should they go about implementing the technology?

Start, try, and explore. Many people talk about GenAI, but they haven’t tried it. So, I often ask in meetings, “Have you tried AI technology? Have you tried, for example, AI tools like ChatGPT or DALL-E?” just to get an understanding.

We decided to take a fairly lean approach to this in our organization, so we created AI in Action, a series of events where we explore how AI can support us, both with internal efficiency and in our services. We invite our entire organization, and we start with an inspirational event where presenters demonstrate how we can use AI fully in our context in a safe and compliant way. We have discussions about compliance, ethics, and all those questions that need to be on the table before trying out AI.

We didn’t do this from a technology perspective; we did it from a business perspective. So, we started by asking our business stakeholders what their pain points are, focusing on those rather than on what AI can do. This was a super interesting journey because everyone’s eager to start using AI now. But let’s not forget why. So, we went back and talked about this, and found different pain points that we could address with AI. We have since calibrated some and dropped others, and now we are running a hackathon on four of them.

The great thing was that when we started this, 600 people in our organization globally joined the inspirational event. You get the energy and passion from the whole organization; it’s not a top-down directive. It’s about building the data-driven culture and the transformation journey. It requires that you think of AI as more than GenAI, more than a tool or a service. This is all about leadership, strategy, value, data, and compliance, and here we need to navigate and make sense of it.

 

There is a lot of buzz around hiring a Chief AI Officer (CAIO). Do you think it’s time for board-level representation of AI?

The answer depends on the size and the kind of organization you have. But I would say that we will see more CAIO roles in companies in general, and at least one board member with an AI focus. An appointed owner of data, analytics, and AI at C-level, with the right focus and mandate, will enable the company to be a leader in the “data to value” transformation.

AI is more than a technology; it’s something that cuts across the organization and balances technology and business.

If everyone now has access to AI, what makes you unique compared to other companies? What customer relationship would you like to have? That’s also something that you need to reflect on. Would you just like a digital interface for all your customers? Would you like to have a more personal interaction somewhere? That’s why I believe that AI should be on the top management and board level. If you handle that right, it gives you an advantage. If you don’t, it will probably be the end of your business. 

 

As an AI leader yourself, what challenges have you encountered with AI governance?

There are different maturity levels in an organization, and you need the dynamics to balance that. You need to talk about and decide what should be centralized and what should be distributed. You need to make sure that you build and support a data-driven culture, that everyone’s on board, and that you figure out the right way to work, while at the same time avoiding lots of people reinventing the wheel. In an ideal setup, you would have a single source of truth that flows freely through the organization.

This is why we need to address the governance part to make sure that everyone is on the same page, that we are smart about what tools we use, what processes we utilize, what we should make ourselves, what we should buy, and who we should partner with. It’s important to have a structured approach to all those questions.  

We also need to acknowledge that we might have old truths based on gut feeling. With a data-driven approach, with a black box that contains AI, you might arrive at a truth that challenges the old hypothesis. That’s about trust and change management. How do you handle that? Do we have leaders who believe we can utilize this technology? It might require that we upskill people and completely change the way we work with AI in the equation. My firm belief is that AI is not only for the tech geeks; instead, we should focus on the value it gives. Coming back to the question about the CAIO, I believe we need people who balance business and technology here, so we utilize AI effectively and not just because it’s a cool technology.

 

The rapid development of AI technology requires leaders to have strong AI literacy. What are the top strategies to foster AI literacy in upper management?

We need to go from PowerPoints to action, from hype to reality. It’s a great opportunity to share with the top management how this can be used as a capability to drive transformation from data to value. Give concrete examples and support top management to try themselves. They need to understand and see concrete examples in a context. And if they aren’t already, help them ask the right questions.  

One thing that’s super important is for companies to define AI.  

GenAI is just one tool in the box. There’s natural language processing (NLP), computer vision, predictive analytics, simulations, optimization, and more. I’ve been working with AI for the last 10 years, but I’ve only worked with GenAI since last year. Another thing that will be important is to understand the business disruption that will happen because of AI. How can we relate to that? How can we make sure that we have the strength to be part of that and utilize AI to bring value to our business, both for customers and for internal efficiency?  

Also, work proactively with ethics. How would you like to see AI used in your organization? For example, I work a lot with recruitment. If you have an AI trained in a way that favors certain individuals, that is not the way to go; we should have a diverse setup that picks the right candidate regardless of age and gender, so we build diverse and dynamic teams. Coming back to the data, it will be a common denominator moving forward: understanding the data you have and the value that data can provide. Then you can decide what to do with the data; you can partner up and collaborate. You need to be dynamic in the way you proceed.

 

Europe has been a trailblazer when it comes to regulating AI and data privacy. How can business leaders navigate the complexities of compliance and not abandon innovation?

I believe it’s important that we are careful. Today we have what we call narrow AI. Take Siri, for example: you ask Siri a question and Siri answers. That is the typical AI many companies work with. The next level is general AI, where AI can move between different tasks. So, imagine if Siri started to go into other areas like autonomous driving. That’s not what Siri is built for, but if her intelligence expanded, she could take on new things.

The next level beyond general intelligence is superintelligence; that’s when AI will outsmart humans, and in that era AI will be more intelligent than mankind. We must find ways to relate to the evolution of AI. So, I’m receptive to regulations stating how we should evolve AI. It’s also important that we talk about compliance, ethics, and personal data protection.

I don’t see that it’s either or, I think we can find a balance between compliance and innovation, especially innovation and AI for good.  

For example, if I say our goal is to enable more transportation with less climate impact, that’s quite a nice goal to have, and then we balance that with being compliant. I’m convinced that we will find that balance.   

 

How do you see AI growing in the next 5 years? How will it transform the automotive and transport industry?

AI is a game changer. Many people compare it to electricity or the Internet, and I agree. So, we do need to relate to AI. It’s not that we can live without it. Instead, we need to relate to it and adapt.

Examples of data to value, supported by AI: 

  • Vehicle/machine: Secure uptime by predicting the lifetime of components and enable replacements of components before breakdown. 
  • Driver/operator: Train, coach, and provide feedback to drivers and operators for optimal fuel and energy consumption. 
  • Operation: Identifying anomalies like waiting time in transportation flows in real-time and automating manual steps. Potential to significantly improve transport cycles and increase operation efficiency.  
  • Transportation and mobility: Predicting the power demand resulting from future charging infrastructure to enable the transformation to electrified transportation. The insights from this work have been presented to the Swedish government, the EU Parliament, and a number of grid companies, and were also instrumental in the development of the public tool “Behovskartan” and the ACEA map of common truck stop locations. 

We need to understand how this will affect the complete organization and what strategies we should have to address this.  

Everything from the value creation to data to our target architecture to our teams. Whether we should upskill or reskill, we need to have a broad picture of this. For example, I heard a statement that it’s not AI that will take your job, it’s a person who utilizes AI that will. We should also be proactive. Instead of being in the backseat about regulations, I believe as big companies, we can take responsibility and drive things, so we make sure that AI is a tool to do good things. But it shouldn’t be that AI is on top of everything.  

Connected to Volvo Group’s industry, AI has the potential to help us reach 100% transport utilization. We can have a much more connected transportation flow because the current one is really scattered. With AI and connected data, we can do a lot of good things and secure mobility. One of Volvo Group’s higher goals is to address sustainability and reduce our climate impact.  

I think this is a very interesting time that we are in. I’m not a tech guy in that context. I’m not a data scientist. I’m coming more from sales, innovation, and leadership. It has been a good recipe for me to drive this and to bridge the gap between business and technology. 

 

*The interview answers have been edited for length and clarity. 

AI-Powered Cybersecurity: Start With a Chief AI Officer

In this era of digitization, where data and connectivity underpin every business decision, protecting your digital assets isn’t just crucial; it’s fundamental to business survival. AI offers the potential for a more resilient digital infrastructure, a proactive approach to threat management, and a complete overhaul of digital security.

According to a survey conducted by The Economist Intelligence Unit, 48.9% of top executives and leading security experts worldwide believe that artificial intelligence (AI) and machine learning (ML) represent the most effective tools for combating modern cyberthreats.

However, a survey conducted by Baker McKenzie highlights that C-level leaders tend to overestimate their organizations’ readiness when it comes to AI in cybersecurity. This underscores the critical importance of conducting realistic assessments of AI-related cybersecurity strategies.

Dr. Bruce Watson and Dr. Mohammad A. Razzaque shared actionable insights for digital leaders on implementing AI-powered cybersecurity.

 
Dr. Bruce Watson is a distinguished leader in Applied AI, holding the Chair of Applied AI at Stellenbosch University in South Africa, where he spearheads groundbreaking initiatives in data science and computational thinking. His influence extends across continents, as he serves as the Chief Advisor to the National Security Centre of Excellence in Canada.
Dr. Mohammad A. Razzaque is an accomplished academic and a visionary in the fields of IoT, cybersecurity, machine learning, and artificial intelligence. He is an Associate Professor (Research & Innovation) at Teesside University.
 

The combination of AI and cybersecurity is a game changer. Is it a solution or a threat?

 

Bruce: Quite honestly, it’s both. It’s fantastic that we’ve seen the arrival of artificial intelligence that’s been in the works for many decades. Now it’s usable and having a real impact on business. At the same time, we still have cybersecurity issues. The emergence of ways to combine these two things is exciting.

Razzaque: It has benefits and serious challenges, depending on the context. For example, in critical applications such as healthcare or driverless cars, it can be challenging. Driverless cars were projected to be on the roads by 2020, but it may take another 10 years. Similarly, with the safety of AI, I think it’s hard to say.

 

What are your respective experiences in the field of cybersecurity and AI?

 

B: I come from a traditional cybersecurity background where it was all about penetration testing and exploring the limits of a security system. In the last couple of years, we’ve observed that the bad guys are quickly able to use artificial intelligence techniques. To an extent, these things have been commoditized. They’re available through cloud service providers and there are open-source libraries with resources for people to make use of AI. It means the barrier for entry for bad actors is now very low. In practice at the university as well as when we interface with the industry at large, we incentivize people to bring AI techniques to bear on the defensive side of things. That’s where I think there’s a real potential impact.  

It’s asymmetrical warfare. Anyone defending using traditional methods will be very quickly overrun by those who use AI techniques to generate attacks at an extreme rate.

R: I’m currently working on secure machine learning. I work with companies that are developing solutions that use generative AI for automated responses to security incidents. I’m also doing research on secure sensing, for example for autonomous vehicles. This is about making sure that the sensor data is accurate, since companies like Tesla rely on machine learning. If you have garbage in, you’ll produce garbage out.

 

Given AI’s nature, is there a risk of AI developing itself as an attacker?

 

B: It fits well with the horror scenarios from science fiction movies. Everyone is familiar with Terminator, for example. We’re not yet at the point where AI can develop arbitrary new ways to attack systems. However, we’re also not far from that point. Generative AI, when given access to a large body of malicious code, or even fragments of computer viruses, malware, or other attack techniques, is able to hybridize these things rapidly into new forms of attack, quicker than humans can. In that sense, we’re seeing a runaway process. But it is still stoppable, because systems are trained on data that we provide them in the first place. If we let these systems roam free to fetch code on the internet, or let them be fed by bad actors, then we’ll have a problem where attacks start to dramatically exceed what we can reasonably detect with traditional firewalls or anomaly detection systems.

It scares me to some extent, but it doesn’t keep me awake at night yet. I tend to be an optimist, and that optimism is based on the possibility for us to act now. There isn’t time for people to sit around and wait until next year before embracing the combination of AI and cybersecurity. There are solutions now, so there’s no good reason for anyone to be sitting back and waiting for an AI-cybersecurity apocalypse. We can start mitigating now.

R: We use ChatGPT and other LLMs that are part of the generative AI revolution. But there are also tools out there for bad actors like FraudGPT. That’s a service you can buy to generate an attack scenario. The market for these types of tools is growing, but we’re not yet at a self-generating stage.

 

Are we overestimating the threat of AI to cybersecurity?

 

B: A potential issue is that we simply do not know what else is out there in the malware community. Or rather, we have some idea as we interact with malware and the hacker community as much as we can without getting into trouble ourselves, but we do see that they’re making significant advances. They’re spending a lot of time doing their own research using commodity and open-source products and manipulating them in such a way that they’re getting interesting and potentially dangerous results.

 

How can the good guys stay ahead of bad actors? Is it a question of money, or the red tape of regulations?

 

R: Based on my research experience, humans are the weakest link in cybersecurity. We’re the ones we should be worried about. IoT is responsible for about 25% of overall security concerns but only sees about 10% of security investment. That’s a huge gap. The bad guys are always going to be ahead of us because they have no bureaucracy. They are proactive, while we need time to make decisions. And yes, staying ahead is also a question of money, but it’s also about understanding the importance of acting promptly. This doesn’t mean forgoing compliance and regulation. It means we have to behave responsibly, like changing our passwords regularly.

B: It’s very difficult to advocate for getting rid of governance and compliance, because these things keep us honest. There are some ways out of this conundrum, because this is definitely asymmetrical warfare where the bad guys can keep us occupied with minimal resources while we need tremendous resources to counter them.

One of the ways around it is to do a lot of the compliance and governance using AI systems themselves. For monitoring, reporting, compliance – those can be automated. As long as we keep humans in the loop of the business processes, we will experience a slowdown.

The other way of countering the issue is to get together on the defensive side of things. There’s far too little sharing of information. I’m talking about Cyberthreat Intelligence (CTI). Everyone has recognized for a long time that we need to share when we have a breach or even a potential breach. Rather than going into secrecy mode where we disclose as little as possible to anyone, we should be sharing information with governments and partner organizations. That way, we actually gain from their defensive posture and abilities.

Sharing cyberthreat intelligence is our way of pulling the cost down and spreading the burden across a collective defence network.
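As an illustration of what a shared threat-intelligence record can look like (a minimal sketch; real programmes typically use standards such as STIX/TAXII, and the fields, values, and incident below are invented), here is a simplified indicator prepared for exchange with partners or a national CERT:

```python
import json
from datetime import datetime, timezone

def make_indicator(description: str, pattern: str, confidence: int) -> dict:
    """Build a simplified, STIX-like indicator record for sharing with partners."""
    return {
        "type": "indicator",
        "created": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "pattern": pattern,        # e.g. a domain or file hash observed in an incident
        "confidence": confidence,  # 0-100: how confident we are that this is malicious
        "sharing": "partners-and-government",  # hypothetical sharing label
    }

# Hypothetical indicator from a fictional phishing incident
indicator = make_indicator(
    description="Phishing domain observed in an attempted credential theft",
    pattern="domain-name:value = 'invoice-portal-update.example'",
    confidence=80,
)

# Serialize for exchange with partner organizations or a national CERT
print(json.dumps(indicator, indent=2))
```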

 

What is the first thing business leaders should do to prepare for what AI can do and how it will be used?

 

R: When it comes to cybersecurity, technical solutions are only 10%. The other 90% is responsibility. Research shows that between 90 and 95% of cybersecurity incidents could have been avoided if we had behaved responsibly. The second thing is that cybersecurity should be a consideration right from the start, not an afterthought. It’s like healthcare: you need to do what you can to avoid ever needing medical care in the first place. It’s the same here.

B: The number one thing is to make sure that your company appoints a Chief AI Officer. This may be someone who is also the CIO or CSO, but at the very least there should be board-level representation of AI and its impact on the business. Any business in the knowledge economy, the financial industry, technology, as well as the manufacturing and service industries, is going to have to embrace AI. People may think it’s a fad, but AI will absolutely steamroll organizations that don’t embrace it immediately. That’s what I would do on day one. Within a couple of days after that, there must be a working group within the company to figure out how to roll out AI, because people will be using it, whether openly or discreetly. AI is a tremendous force multiplier for running your business, but it is also a potential security threat through leakage of information out of the business. So you need a coherent rollout in terms of information flow, your potential weaknesses, embedding AI into corporate culture, and bringing it into cybersecurity. Any company that ignores these things is in peril.

 

Where does ethics come into this?

 

R: No one can solve the problem of AI or cybersecurity individually. It needs to be collaborative. The EU AI Act outlines four categories of risk: unacceptable, high, limited, and minimal. The EU doesn’t consider it an individual member state’s problem. In fact, it also has cybersecurity legislation that clearly states it supersedes state-level regulations. The UK, on the other hand, is slightly more pro-innovation. The good news is that it is focused on AI assurance research, which includes things like ethics, fairness, security, and explainability. So if businesses follow the EU AI Act and focus on AI assurance, they can lead with AI securely and responsibly.

B: There are a couple of leading frameworks for ethical and responsible AI use, including from the European Union as well as the UN. Many of the standards organizations have been working hard on these frameworks. Still, there is a sense that this is not something that can be naturally embedded within AI systems. On the other hand, I think it has become increasingly possible to build limited AI systems whose only job is to look out for the ethical and responsible behaviour of either humans or other systems. So we are potentially equipping ourselves with the ability to have the guardrails themselves be a form of AI that is very restricted and conforms to the rules of the EU or other jurisdictions.

 

Which areas do you see as having the biggest potential for using AI within cybersecurity – for example, identification, detection, response, or recovery?

 

B: I’m hesitant to separate them, because each of those is exactly where AI should be applied, and it’s possible to apply them in tandem. AI has an immediate role in detection and prevention. We can use it to evaluate the security posture of an organization and make immediate suggestions and recommendations for how to strengthen it. Still, we know that at a certain point something will get through; it’s impossible to defend against absolutely everything. So it is important to make quick moves in terms of defending and limiting damage, sharing information, and recovering. Humans are potentially the weak links there too. Humans monitoring a system need time to assess a situation and find the best path forward, whereas an AI can embody all the relevant knowledge within our network and security operations centres and generate recommendations more quickly. We can have faster response times, which are key to minimizing damage.

 

What are some significant upcoming challenges and opportunities within the AI-powered cybersecurity domain in the next two years?

 

R: Definitely behaviour analysis, not only to analyse systems but users as well, for real-time, proactive solutions. The systems we design, including AI, are for us. We need to analyse our own behaviour to ensure that we’re not causing harm.
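As a minimal sketch of baseline-and-deviation behaviour analysis (the activity counts, metric, and threshold are invented for illustration, not a production detection rule), one simple technique is to compare today’s activity for a user against that user’s own historical baseline:

```python
from statistics import mean, stdev

# Hypothetical daily counts of sensitive-file downloads for one user (past 30 days)
baseline = [3, 2, 4, 3, 5, 2, 3, 4, 3, 2, 4, 5, 3, 2, 3,
            4, 3, 2, 5, 3, 4, 2, 3, 3, 4, 2, 5, 3, 4, 3]
today = 19  # today's count for the same user

mu, sigma = mean(baseline), stdev(baseline)
z_score = (today - mu) / sigma

# Flag behaviour that deviates strongly from the user's own baseline
THRESHOLD = 3.0  # assumed cut-off; real systems tune this per user and context
if z_score > THRESHOLD:
    print(f"Alert: unusual activity (z = {z_score:.1f}); trigger a proactive review")
else:
    print(f"Within normal range (z = {z_score:.1f})")
```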

B: Another thing AI is used for is training, within cybersecurity but across corporations as well. There’s a tremendous amount of knowledge, and many companies run training for a wide variety of things. These materials can be fed into AI systems that resemble large language models, so AI can be used as a vector for training. Another challenge is how quickly organizations will decide to be open with peer companies. Will you have your Chief AI Officer sit at a roundtable of peers from other companies to actually share your cybersecurity horror stories? The other significant challenge is change management. People are going to get past the novelty of ChatGPT as a fun thing to play around with and start developing real fears about potential job losses and other threats posed by AI.

AI Takes the UK By Storm: 5 Strategic Developments

The United Kingdom (UK) is the beating heart of European tech as the continent’s best-funded country and investment magnet. The UK’s AI sector has been booming over the past few years, with a third of Europe’s AI start-ups located in the UK. There are currently over 1,300 AI companies in the country – a whopping 600% increase over the last decade. Here are five important developments that highlight the UK’s journey to becoming a global AI superpower.

 

1. The UK AI market is estimated to grow to over $1tn by 2035 

According to a 2023 study by the International Trade Administration, the AI market in the UK is currently valued at over $21bn. That number is predicted to grow to more than $1tn by 2035, which could strengthen the UK’s current standing as the third-largest AI market globally, behind the U.S. and China.

The UK is also attracting lucrative investments from tech companies across the pond. Salesforce is planning to invest $4 billion in its UK business in the next five years in response to strong demand for AI and digital transformation. Furthermore, Google has partnered with the University of Cambridge to drive progress in responsible AI through a multi-year research collaboration agreement and a grant for the university’s new Centre for Human-Inspired AI.  

Additionally, the UK is a market leader in open-source AI, home to providers such as Significant Gravitas and Stability AI. Founded in 2018, Significant Gravitas made a name for itself with the development of AutoGPT, a semi-autonomous variant of ChatGPT. Significant Gravitas’ open AI code repository is number one in the UK and was the second AI project to gain 100,000 stars on GitHub.  

Furthermore, research by the Office for Artificial Intelligence and the Department for Science, Innovation, and Technology discovered that over 3,000 companies in the UK generated £10.6bn in AI-related revenue in 2022. 60% of AI businesses are purely AI companies, while 40% are diversified companies with AI offerings. 

The study also found that the industries that widely utilize AI are: 

  • Automotive 
  • Industrial automation and machinery 
  • Energy, utilities and renewables 
  • Health, wellbeing and medical practice 
  • Agricultural technology  
 

2. The UK AI sector is booming with £18.8bn investment, £3.7bn GVA, and 50,000 jobs 

More than 50,000 people work in AI-related roles, yielding £3.7bn in gross value added (GVA). The AI sector has also secured £18.8bn in private funding since 2016. Research from McKinsey also predicts that the value generative AI adds to the world economy over the next few years could be equivalent to the UK’s entire GDP. 

Recognizing AI’s potential to spur economic growth, the government introduced the National AI Strategy in 2021. This initiative sets out the UK’s strategic intent to guide action over the next ten years across three pillars: planning for the long-term needs of the AI ecosystem, ensuring AI benefits all sectors and regions, and getting national and international governance of AI technologies right. So far, the National AI Strategy has led to the publication of The Future of Compute Review and an AI regulation policy paper, which prompted a full range of stakeholder feedback.  

In addition, former Prime Minister Sir Tony Blair and William Hague, the former leader of the Conservative Party, released a joint report, A new national purpose: AI promises a world-leading future of Britain, which described AI as “the most important technology of our generation.” The report also indicated that the UK could pioneer the deployment and use of AI technology in the real world, “building next-generation companies and creating a 21st-century strategic state”. Strategies on how to achieve this include launching major AI talent programs, requiring generative AI companies to label synthetic media, and building AI-era infrastructure as a public asset. 

In early 2023, the UK government announced the building of a £900 million supercomputer to drive the country’s AI research and innovation capabilities that will be housed at the University of Bristol. The government also plans to invest £100m to position the country as a key player in producing AI-powered computer chips. An order of up to 5,000 graphics processing units (GPUs) from Nvidia is already in the advanced stages.  

There are numerous government initiatives to help UK companies fund solutions around AI development. For instance, the Fairness Innovation Challenge allows companies to apply for up to £400,000 in government investment to produce solutions to combat bias and discrimination in AI systems. This is in response to the challenges companies are facing with AI bias, such as inadequate access to data on demographics, and ensuring potential solutions meet legal requirements. According to a report by the Bank of England, AI bias could have serious repercussions in the finance sector, especially when incorrect data points affect mortgage lending practices.  

 

3. AI and machine learning are tech leaders’ priorities to drive innovation and efficiency 

AI is unsurprisingly a strong investment area among IT and digital leaders in the UK. A survey conducted by Info-Tech found that over 30% of IT leaders have investments in AI and machine learning in their organizations. When it comes to security tools, 79% of IT leaders favor those involving AI and machine learning functions (CyberEdge).  Another important finding from the survey is that AI is the fastest-growing technology for investment in 2023 and is the second most popular technology after cloud computing.  

Speaking of the cloud, AI capabilities are key to further cloud implementation. Both AI and machine learning are vital to accessing large volumes of data and scalability through the cloud. 

According to a 2023 study by Tata Consultancy Services: 

  • 74% of respondents have invested in AI and machine learning over the last two years to propel cloud innovation 
  • 78% plan to invest in the next year or two 
  • 80% reported that AI tools have boosted employee productivity, decision-making, and process efficiency 

According to the FIS Global Innovation Report, generative AI is a priority for many UK business leaders with 60% already adopting the technology and planning to increase spending within the next year to remain competitive. 

In April 2023, a report on post-pandemic economic growth by the House of Commons Business, Energy, and Industrial Strategy Committee highlighted the potential impact of AI on employee productivity in the UK. Based on original research by Deloitte, notable findings include that AI could boost UK labor market productivity by 25% by 2035. Additionally, research by EY found that senior business leaders in the UK are keen to develop AI technologies to spur economic growth and are looking for guidance due to the country’s complex AI regulation and compliance landscape.  

 

4. The UK government is tackling AI risks amid employee pushback on workplace usage   

Despite AI’s widespread adoption and visible benefits, AI has been officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. There has also been pushback from employees on the usage of AI in the workplace.  

According to Randstad UK: 

  • 3 in 5 workers want AI banned from the workplace 
  • 60% of workers would support a government decision to ban AI from the workplace 
  • 37% would not use AI tools for work-related tasks, while 27% would consider it 
  • 2 in 5 workers said they were already using AI tools at work 

A separate study by Forbes Advisor found that the top AI concerns among UK citizens were:  

  • Dependence on AI and loss of human skills (42%) 
  • Autonomous AI systems making decisions without human intervention (39%) 
  • Job displacement and impact on employment (39%) 

In addition, only 45% of employees in the UK are confident that their organization’s risk management processes can support enterprise-wide generative AI integration (Avanade). The Trades Union Congress (TUC), the UK’s umbrella body for trade unions, has created a task force to draft urgent legislation to protect workers’ rights and ensure jobs are not lost to AI. The task force aims to publish an AI and Employment Bill and lobby for it in 2024.  

The UK government has been proactive in developing frameworks and strategies around AI use to mitigate risks and increase transparency. For instance, it created the Frontier AI Taskforce to develop safe AI practices for the future and ensure the UK has sovereign capabilities and broad adoption of safe and reliable foundation models. The UK government will also host the global AI Safety Summit in November 2023, where it will announce an international AI advisory group and kickstart international discussions on AI regulation. Leading up to the summit, the government body UK Research and Innovation (UKRI) declared that it will invest £37m in AI projects to help businesses and research organizations pioneer AI and machine learning solutions.  

Additionally, the UK’s House of Lords Communications and Digital Committee opened its inquiry into large language models (LLMs) in September 2023. Issues that were covered in the session include how LLMs differ from other forms of AI and their evolution over the next three years, the role and structure of the Foundation Model Taskforce, and the government’s role in responding to the opportunities and risks presented by LLMs.  

 

5. AI has been adopted by the UK’s largest corporations, from Unilever to Rolls-Royce 

Let’s look at notable use cases across different industries in the UK: 

  • Consumer Goods: Unilever has harnessed the GPT API to create AI tools that minimize food waste, auto-generate product listings, and filter emails sent to customer service, sorting spam from legitimate messages. AI tools reduced the amount of time customer service agents spent responding to customers by more than 90%. 
  • Aerospace and Defense: Rolls-Royce has adopted predictive analytics and machine learning to reduce aircraft engine emissions and optimize maintenance, enabling customers to keep planes in the air longer. AI-powered processes have helped Rolls-Royce extend the time between maintenance for some engines by up to 50% and save 22 million tons of carbon.  
  • Beauty and Cosmetics: Estée Lauder applied AI, augmented reality (AR), and machine learning capabilities to develop a Voice-enabled Makeup Assistant (VMA) to support visually impaired individuals in selecting suitable products. The VMA utilizes AI technologies to analyze makeup on the customer’s face and uses voice guidance to help customers create their ideal look.  
  • Retail: Marks & Spencer has deployed cutting-edge computer vision technology in over 500 of its stores. Using handheld devices with built-in AI, store associates can capture images of products on shelves and compare them to store-specific planograms in real time. This innovative approach ensures a well-organized and customer-friendly shopping experience. 
  • Finance: Nationwide Building Society (NBS) uses AI to create synthetic data that is statistically similar to real data but does not contain any personally identifiable information. This allows NBS to share data with vendors for validation and other purposes without compromising customer privacy. By using synthetic data, NBS has been able to reduce the time needed to supply data for vendors from six months to three days.  
  • Healthcare: KPMG developed a machine learning algorithm to predict which patients were likely to breach the 4-hour waiting time target in A&E, and to identify ways to reduce waiting times. The algorithm was trained on anonymized patient records and dynamic data about hospital operations. The algorithm was able to successfully identify 79% of patients who would breach the 4-hour waiting time target at the point of arrival. 
  • Manufacturing: ENEL partnered with AVEVA to develop an AI solution to help reduce greenhouse gas emissions and meet sustainability targets. The AI solution identifies risks within ENEL’s power generation fleet and provides instructions on how to mitigate those risks, minimizing the impact on major assets and production operations. The AI solution has detected dozens of issues, enabling ENEL to improve operational efficiency, lower maintenance costs, and ensure maximum power generation. 
 

Although there are numerous governmental initiatives, frameworks, and discussions on AI use, there is currently no comprehensive law governing the deployment of AI for UK businesses. Some clarity was given in a policy paper titled A pro-innovation approach to AI regulation that suggests an alternative approach to AI governance by only regulating AI when needed. However, the government is committed to staying ahead of AI developments and working with stakeholders to develop effective regulations to minimize risks and spur AI innovation.  

AI Literacy: A Must-Have Skill for the Modern Business Leader

Today’s business leaders need to sharpen their AI literacy skills to implement, scale, and leverage the technology in their organizations effectively. In this exclusive interview, Daniel Käfer, former Danish Country Head for Meta and Global Digital Marketing Director at Ooredoo, shares expert insights on why AI implementation starts at the top, what makes a successful AI strategy, measures to recruit and retain AI talent, and more.  

 
Daniel Käfer’s remarkable career in tech includes roles as the Danish Country Head for Meta and Global Group Director of Digital Marketing at Ooredoo, a renowned international telecommunications provider across MENA and APAC. He is now considered a distinguished tech leader, author, and entrepreneur, and is a partner at www.supertrends.com – a platform that maps out both past and future tech innovations with help from AI.
 

Key Takeaways from Daniel Käfer 

  • “AI is not hype. In five years, ChatGPT and similar tools will be 10 times more effective than they are today. AI is going to grow quicker than anything we’ve seen before.” 
  • “It’s the responsibility of the CEO and board members to set the direction for AI in their organizations, not the Head of Digital or other roles.” 
  • “Building a digital transformation-AI team and retaining the most talented employees will be more challenging than ever.” 
  • “Get started today. Experiment with different AI tools to help you understand the technology. When you use your knowledge together with the tools, you will see the power of AI. Invest the time needed to get the best results.” 
 

Why is AI considered a super trend?

AI impacts all other trends. No trend in the world is bigger than AI, and I believe we are just at the beginning. AI will impact every area of our life. I think we still see people divided. There are more people discussing AI than using it. AI is very underestimated; most people do not understand to what extent AI can support us with many of the challenges we face.  

When I speak to business leaders, AI is under-hyped and under-leveraged. There’s some fear when it comes to AI on many levels. Even when people consider their own career, right? How do you embrace something that will become more intelligent than you and will probably outcompete you in several areas?  

The average age of a board member in an S&P 500 company is well into the 60s. And if you look at the users of ChatGPT, it’s only really used among college students according to the numbers. There’s a generation gap to some extent, and the 2% who have tried ChatGPT have not really tried it. They scratched the surface but were left disappointed.

But it’s a complex tool. It’s like giving somebody a huge book and they only go through a few pages and say, “It’s not for me, I’m not sure what it can do for me.” So, I think it’s misunderstood. I think people do not invest the time they need in AI to really understand and leverage it at this point.

 

What are the building blocks of a solid AI strategy?

Before we were discussing AI, we were discussing digital transformation. I’ve been part of putting a digital transformation team together myself and trust me, it’s difficult. It takes experts to hire experts. When you add AI to this complexity, it gets even more difficult. So where do you start?  

You need to start at the very top and make sure that AI is part of the strategy at the CEO and board level. You cannot just assign someone to take care of AI, it must be at the very top level. Get some people in and start working on strategy. The type of AI tools you choose doesn’t matter at this point.  

The most important questions now are, “Do we use AI?” And the answer should be yes. And the other one is, “How do we use it?” And then make some strategic decisions. For example, there could be areas in terms of copyright where you might not want to use AI for music or pictures. However, there could be other areas where AI could be leveraged, such as recording, transcribing, and summarizing meetings. You might also use it in recruiting or marketing.  

There are so many areas, but I think it’s more about defining how to use it, where to use it, and then creating a plan for using it. You might not start with a perfect tool. I don’t think that’s a big issue. It’s more about getting it at the right level and having a strategy in place. 

 

What jobs will AI replace? What can leaders do to prepare their employees?

I spoke with people from the advertising industry, and they admitted that a job that would normally take two to three weeks with four full-time people plus contractors can now be done by one person in a matter of hours with the help of AI.  

Firstly, people will not lose their jobs to AI, but they will lose their jobs to people using AI. There’s nothing we can do about that. That’s just the fact of the future and this will only accelerate. People always say, “If I only have more time to do this.” With AI you do have more time. When I was working at Ooredoo and Meta, there were a ton of projects where we wished we had more time. They were not completed because there were not enough resources.  

As long as we’re asking for more resources, and I’ve yet to meet a company that’s not doing that, AI does not have to displace jobs. However, I’m not saying that there’s no risk of losing your job to AI. Most people see the greater good of AI, but they do not want to be negatively impacted by it. I’m not going to sugarcoat it. Jobs with repetitive tasks will be the first to go. But skills like good communication and coming up with out-of-the-box ideas cannot be replicated by AI. 

Leaders need to encourage their people to be open to change and identify where they can add value in the future.  

 

Why must leaders include employees in the company’s AI strategy?

Transparency in communicating the benefits of AI is important. AI can be a big motivator in a way. For example, employees may want to know how AI will improve the company’s offerings.  

If AI takes over automatable tasks, then some roles won’t be needed anymore. But if those people pivot, they can do something much more interesting in the workplace. Therefore, leaders need to explain the benefits of AI and what is expected of employees so they can reap those benefits. You can’t say nobody will get fired because of AI and then do the opposite. When one person loses their job, everybody will panic.  

It’s about getting the process clear on the management level and then explaining the benefits to the employees. But of course, be clear about the risks of not moving forward and developing AI.

One of the biggest pitfalls around AI implementation is talking about it but not doing it.  

When leaders talk about how important and effective AI is but don’t have a plan, employees will not feel involved at all. There will be a huge negative effect if you don’t build that strategy.  

 

Why should leaders consider a humble approach to AI adoption?

I think we are all riddled with fear that we need to be a know-it-all at the executive level. I listened to a keynote from one of the Coca-Cola executives last week. The funny thing is, he started out saying, “I’m just learning, I’m actually really new to this, I don’t know a lot.”  

And of course, he’s got a great career within Coca-Cola so I’m sure he’s brilliant. But he just had this very humble approach to say that they’re still testing things out, some of it works and some doesn’t, and they don’t always know why.  

If we could have this humble approach, I think that will help in the future to map out where we can add value instead of being a know-it-all. We are not self-critical enough to know where we can add value and where we cannot. 

 

What are the challenges of recruiting and retaining AI talent?

The first challenge is understanding what you want. You may want somebody who’s skilled in AI but what does that mean? Is that a prompt engineer or someone who understands AI strategy? I would always recommend getting the strategy right first to build the frame of your AI department. Actually, I don’t see it as an AI department, I see it as a digital transformation department anchored very high up in the organization. 

It’s a mindset more than a technical skill.  

It’s about creating processes and making decisions. You don’t need a super technical guy to use Bard or Midjourney. Anybody can learn that. It’s like going to the gym. Once you start, it gets easier. If you have a decent digital transformation team, they can probably be part of the process. I would get the AI strategy in place before I do anything else. Unless you want, for example, to build a sophisticated chatbot, then hire an external expert. 

 

What is your advice for dealing with risks associated with intellectual property and privacy?

A tool like Midjourney basically works like an artist who gets inspired by millions of different artists until it creates something new. I think the more the models are trained, the less likely a prompt will be so specific that a single work would infringe on another work.  

The other risk is how a model is trained and who would be liable. I’m not a lawyer but I think that, for example, if Midjourney is trained on certain works and the copyright owner doesn’t allow it, Midjourney would be the first target more than its users.  

But that said, this is where companies need to sit down with their lawyers and assess AI risks. We don’t know where this ends and I do foresee many lawsuits within art, graphic design, and music. We will also see regulation taking place differently across the globe. I think Europe will lead with regulation but lack innovation, unfortunately. It also depends on where you do business, whether you’re global or local.  

 

How can companies with limited resources utilize AI?

AI and big tech have created huge advantages for smaller companies. ChatGPT is just one of the tools, but you can have a whole army of tools playing together for a couple of hundred dollars a month.  

It’s not that complex. If you use ChatGPT in your strategy process, if you learn how to have conversations with ChatGPT, you’re already halfway there. Then you start to understand how you can use AI. You don’t have to build your own AI systems. It’s a matter of looking at your processes to see where you can save time and which areas need improvement.  AI can offer quick help with some processes but sometimes it’s more about opening your mind. For example, asking AI if it’s realistic to break into a new market.  

While AI may not give you the answer in the first prompt, it will start a conversation. 

 

*The interview answers have been edited for length and clarity.

How Business Leaders Can Leverage AI’s Economic Power

AI is transforming the global economy, disrupting traditional industries, and creating new opportunities for growth. The potential for economic advancement is undeniable as businesses adopt AI to boost productivity, enhance decision-making, and meet evolving consumer demands.  

What tools and knowledge do business leaders need to navigate the complex landscape of AI and capitalize on its potential? In our exclusive interview, Mohamed Roushdy, Digital Transformation and Fintech Advisor at IFC – International Finance Corporation, United Arab Emirates, shares valuable insights and answers.  

 
Mohamed Roushdy is an experienced information technology professional with a track record spanning over 25 years in various industries and business segments such as financial services, real estate, and advisory services. He is currently the Digital Transformation and Fintech Advisor at the IFC – International Finance Corporation, United Arab Emirates, and the Founder of FinTech Bazaar. His previous positions as CIO and CXO advisor have laid a strong foundation for his ability to lead teams to successfully complete major business and digital transformation programs.
 

What is the overall economic impact of AI? Which sectors are benefiting from AI the most?

AI will impact the economy in four ways. The first is efficiency improvements: we’re going to see a lot of use cases for AI eliminating repetitive work, with the help of AI, machine learning, and deep learning. Second is risk mitigation; AI has been used for years to detect fraud. Next is revenue growth: how much will AI contribute to GDP? New revenue streams will come because of AI. This ties to the fourth element, which is customer experience. These are the four pillars when we talk about AI in any industry. 

The most impacted industry would be healthcare. Going back to COVID-19, AI and big data brought the vaccine to life within a few months. Other sectors include financial services where there are many use cases already. And then retail and e-commerce, agriculture, and transportation.  

 

Where do you think AI is on the Gartner Hype Cycle?

I agree it’s now at the top of the hype cycle, but AI development started a long time ago. We are now moving within two to five years from the hype cycle to the mainstream and actual use cases. We are coming out of the hype, and I would say there is a triangle of technology — AI, blockchain, and IoT. AI is the one that is going to deliver great value and pass the hype.  

 

Going back to efficiency — what are your thoughts on AI’s impact on the workforce?

Most repetitive work will go. This will displace a great number of employees, I think studies are saying by 2030, AI will displace 85 million workers. This number looks scary, right? But AI will also generate 97 million jobs. So, the net is positive, not negative.  

Seeing what’s happening here, we need to ensure organizations and governments are reskilling and upskilling the workforce.  

The same workforce that does repetitive work in your company knows your business well. You cannot just let them go.  

You have to take the same people, reskill them, and put them elsewhere in your organization. AI will affect the workforce but it’s a positive impact. Upskilling and training are very much required.  

 

It’s difficult for people in the workforce to imagine this new way of working. What’s the role of a business leader in this transition?

Training and awareness are important to open new avenues for them, and this extends to governments as well. Business leaders have a social responsibility to get this segment trained and moving ahead. As I said, the number of jobs generated by AI and machine learning will be higher than displaced jobs. But we have to make sure that employees are being trained.  

I remember one professional I met at a forum saying, “You have to learn how to learn.”  

This is really the time we must learn how to learn. 

Nowadays, people don’t only learn by going to school or taking a course. There are many ways you can learn online. There needs to be resources and guidance for employees around AI and machine learning. This is very important for the new era. 

 

What are your thoughts on the relationship between data regulation and innovation?

A very important concern about AI and machine learning is data privacy.  

We are seeing great moves from countries and organizations to regulate AI use and provide guidelines – for example, the EU AI Act. So, regulatory frameworks and privacy laws are in place to protect data privacy. But at the same time, we’re trying to make sure this won’t stop innovation. If you get hit by regulation and stop innovation, you won’t see the benefits. That’s why we’re trying to see how we can balance regulation, convenience, and the ability to innovate.

We must also focus on principle-based regulation. This is what most countries are going for. One of the most important things here is AI ethics. For example, there are guidelines for your AI platform or whatever you’re developing that must have AI ethics in place. What’s important is that innovation shouldn’t stop. 

 

How would you advise business leaders to handle risk mitigation around AI?

If there is no risk, there is no reward. It’s important to accept that there is some kind of risk if you want to integrate AI and move forward. But it has to be a calculated risk, and you have to know how to mitigate risks if they materialize. Without risk, you cannot innovate. As I said, any guideline should be principle-based and help foster innovation. The developments we see today with AI and machine learning happened because people took calculated risks. Yes, there is a dark side to it, like deepfakes. The bad and the good will always be there. But that doesn’t mean you have to stop. I would say today’s technology brings more good than bad. 

 

Can you share an experience of maximizing profit using AI?

I come from banking, and AI enables customer personalization.  

You can know what the customer needs today and in the future. You’ll be able to do things more effectively and generate more revenue because you are doing things differently. The data you have provides more insights; maybe your customer does not need financial services abroad, or maybe he needs another product your competitor has. So, you can start doing new things today because of AI, machine learning, and data – like bringing a new product to market – and potentially bring more revenue to your organization. AI also creates seamless customer experiences.  

When you bring IoT, blockchain, and big data together with AI, the value becomes exponential.  

This combination enables faster transactions, and you’re able to make more deals and make the customer happy with seamless or frictionless experiences. 

 

What is your advice to business leaders on implementing AI successfully?

Implementation is easier said than done. It also needs cooperation from stakeholders. My advice would be to know exactly what you want to achieve and look for a good use case.  

Start with a small use case, bring all the stakeholders within your organization in, and educate them to get the C-level buy-in. Show them the value AI will bring to your organization and the market. The issue with AI is that the algorithm development takes time, as with value and results. You won’t see results six months after implementation. You’re tuning the algorithm, you’re getting the right data, and avoiding AI bias as well.  

There are many resources needed to train a model — budget, people, and more. At the same time, if you don’t have the expertise, try to find good partners to help you get use cases. As soon as you get results and management sees something happening, then you can scale up. You have to go into the journey with a good plan and convince people with results. I think that’s very important. 

 

*The interview answers have been edited for length and clarity.

AI Governance: Balancing Competitiveness with Compliance

The AI landscape is innovating at full speed. From the recent release of Google’s Bard and OpenAI’s ChatGPT Enterprise to growing implementation of AI tools for business processes, the struggle to regulate AI continues.

In Europe, policymakers are scrambling to agree on rules to govern AI – the first regional bloc to attempt a significant step towards regulating this technology. However, the challenge is enormous considering the wide range of systems that artificial intelligence encapsulates and its rapidly evolving nature.

While regulators attempt to ensure that the development of this technology improves lives without threatening rights or safety, businesses are scrambling to maintain competitiveness and compliance in the same breath.

We recently spoke to two experts on AI governance, Gregor Strojin and Aleksandr Tiulkanov, about the latest developments in AI regulation, Europe’s role in leading this charge, and how business leaders can manage AI compliance and risks within their organizations.

 
Gregor Strojin is the Vice Chair of the Committee on Artificial Intelligence at the Council of Europe and former chair of the Ad Hoc Committee on AI. He is a policy expert with various roles including senior adviser to the Slovenian President of the Supreme Court and the State Secretary of the Ministry of Justice.
Aleksandr Tiulkanov is an AI, data, and digital policy counsel with 18 years of experience in business and law. He has advised organizations on matters relating to privacy and compliance for digital products and in the field of AI.
 

Europe Trailblazing AI Governance

 

Why does AI need to be regulated?

Aleksandr: Artificial intelligence is a technology that we see almost everywhere nowadays. It is comparable to electricity in the past, but more influential. Crucially, it’s not always neutral in how it affects society. There are instances where technologies based on artificial intelligence affect decisions which, in turn, affect people’s lives. In cases where there is a high risk of impact, we should take care and ensure that no significant harm arises.

Gregor: Regulations are part of how we manage societies in general. When it comes to technology that is as transformative as AI, we are already faced with consequences both positive and negative. When there is a negative impact, there is a responsibility either by designers, producers, or by the state to mitigate and minimize those negative effects on society or individuals. We’ve seen the same being done with other technologies in the past.

 

Former President Barack Obama said that the AI revolution goes further and has more impact than social media has. Do you agree?

Gregor: Definitely. Even social media has employed certain AI tools and algorithms that grab our attention and direct our behavior as consumers, voters, and schoolmates – that has completely changed the psychology of individuals and the masses. AI is an umbrella term that encompasses over a thousand other types of uses.

AI will change not only our psychology but also logistics and how we approach problem solving in different domains.

Aleksandr: The change is gradual. As Gregor said, we already see it in social media – for example, in content moderation. Those are largely based on language and machine learning models. AI is driving what we see on the platform as well as what we can write and even share. To some extent, it means that some private actors are influencing freedom of speech.

 

Let’s talk about the role of Europe in AI compliance regulations. Can you explain why Europe is a trailblazer here?

Gregor: Europe has a special position geopolitically due to its history. It’s not one country. It’s a combination of countries that are joined by different international or supranational organizations, such as the European Union and the Council of Europe, to which individual countries have given parts of their sovereignty. This is a huge difference compared to the United States or China, which are completely sovereign in their dealings.

When it comes to the European Union in particular, many types of behaviors are regulated by harmonizing instruments of the EU to have a uniform single market and provide some level of quality in terms of safety and security to all citizens – so we don’t have different rules in Slovenia, Germany, France, or Spain. Instead, this is one market of over 500 million people.

 

Gregor, can you give us a brief overview of the latest developments in AI regulation and compliance in the EU?

Gregor: There are two binding legal instruments that are in the final phases of development. The most crucial one is from the European Union, the AI Act. It is directed at the market itself and is concerned with how AI is designed, developed, and applied by developers and users. The AI Act addresses a large part of the ecosystem, but it does not address the people who are affected by AI. Here is where the second instrument comes in, the Convention on AI that is being developed by the Council of Europe.

Another thing to mention is that the EU’s AI Act only applies to EU members and is being negotiated by the 27 member states. The Council of Europe’s instrument is being negotiated by 47 member states as well as observer states and non-member states such as the United States, Canada, Japan, Mexico, and Israel. The latter has a more global scope.

In this way, I see the EU’s AI Act as a possible mode of implementation of the rules set by the conventions of the Council of Europe. This is still partially theoretical, but it’s likely we’ll see both instruments finalized in the first half of next year. Of course, there will be a transitory period before they come into effect. This is already a good indication of how businesses must orient themselves to ensure compliance in due time.

 

Should what the EU is doing be a blueprint for the rest of the world?

Gregor: Yes, if they choose to. I think many in Europe will acknowledge that we have different ways of approaching problems and freedom of will, but if you want to do business in Europe, you have to play by Europe’s rules. This is an element in the proposed AI Act as well as the General Data Protection Regulation (GDPR) legislation from the past decade which employs the Brussels effect – meaning that the rules applied by Europe for Europe also apply to companies outside of Europe that do business here even if they do not have a physical presence here. So, if producers of AI from China or the United States wish to sell their technology in Europe, they have to comply with European standards.

 

What are the business implications of the European approach?

Aleksandr: The European approach harmonizes the rules for a single market. It’s beneficial for businesses as they won’t have to adapt to each country’s local market. I say it’s a win-win for businesses who are approaching the European continent. We’ve already seen this happening with the GDPR. As long as they have a European presence, they adopt the European policy globally. This could happen with AI regulations as well.

If you look at the regulatory landscape, we can see some regulatory ideas coming up in North America and other continents. In China, there are some regulatory propositions. But I would say that the European approach is the most comprehensive. Chances are it will be taken as a basis by many companies.

 

Balancing Innovation and Compliance

 

What do you say to concerns that this is just another set of regulations to comply with in a landscape that is constantly innovating at speed?

Gregor: I’ve been working with technology for more than 20 years. I also have experience with analog technology that is regulated, like building construction.

What we’re dealing with here is not just regulation for regulation’s sake, but it benefits corporations in the long run because it disperses risk and consequences of their liabilities. It creates a more predictable environment.

There are many elements of regulation that have been proposed for AI that have been agreed to by different stakeholders in the process. We must consider that the industry was involved in preparing both these regulatory instruments I’ve mentioned.

Some issues like data governance are already regulated. There are, of course, disagreements on elements like transparency because there may be business advantages that are affected by regulation. On the other hand, technology does not allow for everything. There are still open questions on what needs to be done to ensure higher quality in the development process to mitigate risk.

 

So there needs to be a balance between regulation, competitiveness, and the speed of innovation. How can we be assured that AI regulation does not harm competitiveness in business?

Gregor: The regulation proposed by the European Commission is just one element in the basket of proposals of the so-called Digital Agenda. There are, of course, some other proposals on content moderation that came into existence just recently that are binding. But there are also several instruments which address the promotion and development of AI systems, both in terms of subsidies for companies and individuals to develop digital skills and to create a comprehensive and stable environment for IT technology in Europe. There are billions being thrown into subsidies for companies and innovators. There is a big carrot, and the stick is in preparation, but it is not here yet.

Aleksandr: I must also underline that there are things in place that facilitate the upcoming EU regulation, such as the Regulatory Sandboxes. You may have seen an example of this in Spain. Businesses will be able to test out their hypotheses on how they want to operate AI systems that could potentially be harmful.

It’s important to understand that the scope of the regulation is not overly extensive. I would say it covers really high-risk systems to a large extent, and some lower-risk systems but only where it’s important. For example, there are transparency obligations for lower-risk systems when it comes to deepfakes. Then there are meaningful rules for high-risk systems which affect people’s lives – like government aid or the use of AI in law enforcement or hiring.

It’s important to have proper data governance and risk management in place for systems that affect people on a massive scale.

Also, if you look at mature organizations with this technology already in the market, they are making sure that the data used to train their AI systems is good enough. They are doing it themselves as they don’t want to get in trouble with their clients. Regulations are not so unusual.

 

In that case, will innovation be faster than the regulations can keep up with?

Gregor: That’s a pertinent question when it comes to technology. It is imprudent, from the position of a policymaker, to try to regulate future developments as that would impede innovation.

I don’t think there’s any impediment to innovation happening at this moment. Perhaps you could categorize getting subsidies for being compliant with ethical recommendations as that, but it’s not really an impediment.

In the future, there will be limitations on innovation in AI to the same degree as in biotechnology, for example, where there are clear limits on what is allowed and under what conditions to prevent harm. That is narrowly defined. The general purpose, of course, is to increase the quality of these products and create a safe environment and as predictable a playing field as possible for customers in the market.

 

Business Focus: AI-Risk Management

 

What’s coming up next on AI governance that business leaders should consider?

Gregor: At this point, what’s coming up next for policy development is the fight back from those who do not want such legislation. It’s something we’ve already seen this year. Many think we had an AI revolution only this year. No. It’s a technology that’s been around for a few years and there have been calls for regulation of AI on the basis of existential threats.

If we take those calls seriously, we must completely backtrack and change the direction of what is already being developed.

But I do think if we follow through with what has been proposed to ensure the safety and security of this technology, we will also solve the problem of a so-called super intelligence taking over humanity. First, we need to ensure correct application of existing rules to human players.

 

With all this in mind, what advice do you have for business leaders when it comes to regulations and compliance in the field of AI? What can they start with tomorrow?

Aleksandr: Technical standards will be the main thing. I would advise all those developing this technology to take part in technical committees in their national standard setting bodies which can then translate into work on the European level of standards.

Take into account your practical concerns and considerations so that these technical standards can address business concerns in terms of product development. It is important to follow and participate in this work on regulation development for the AI ecosystem.

Another thing is to consider risk management frameworks to address AI-specific risks. The NIST and ForHumanity risk management frameworks are practical tools for organizations to control how they operate and deploy AI systems in a safe and efficient manner. Business leaders can also begin to appoint people who would be responsible for setting up processes.

There will be a transitional period, as there was with the GDPR. If companies can demonstrate that they are compliant with European standards that are still under development, they will automatically be considered compliant with the EU AI Act. But this is ongoing work.

Start considering broader risk management frameworks as a first step to get the ball rolling in organizations.

Gregor: Technical development skills alone are not sufficient to build a competitive and scalable organization, especially as not only Europe but other regions are preparing to introduce regulatory measures. My advice is similar to Aleksandr’s; build on your capacities for risk and compliance management. I think it will pay back quite soon.

Bard vs ChatGPT: Which is Better for Business?

Google’s AI chatbot Bard has finally launched in the European Union (EU), positioning itself as a direct competitor of ChatGPT. With Bard AI on the market, European IT leaders now have another option to pilot generative AI initiatives. According to a report by MIT Technology Review Insights and Databricks, most CIOs are adopting generative AI as an enterprise-wide strategy and 78% consider scaling AI a top priority.  

However, is Bard better than ChatGPT? Let’s review both AI chatbots’ features, pros and cons, and privacy policies.   

*Update: Bard was rebranded to Gemini on 8 February 2024.

 

ChatGPT vs Bard: A Quick Overview 

  • Developer – ChatGPT: OpenAI; Bard: Google 
  • Language model – ChatGPT: Generative Pre-trained Transformer 3 (GPT-3) or Generative Pre-trained Transformer 4 (GPT-4); Bard: Language Model for Dialogue Applications (LaMDA) and Pathways Language Model 2 (PaLM 2) 
  • Data training set – ChatGPT: Common Crawl, Wikipedia, books, articles, documents, and the open internet (limited knowledge of events after September 2021); Bard: the “Infiniset” corpus, which includes data from Common Crawl, articles, books, Wikipedia, and real-time access to Google 
  • Languages – ChatGPT: supports over 50 languages; Bard: supports over 40 languages 
  • Programming languages supported – ChatGPT: JavaScript, Python, C#, PHP, Java, and more; Bard: C++, Go, Java, JavaScript, Python, TypeScript, and more 
  • Sign-in method – ChatGPT: any email address; Bard: personal Google email address 
  • Price – ChatGPT: free (ChatGPT Plus is $20/month); Bard: free 
 

ChatGPT vs Bard: Pros and Cons  

Bard and ChatGPT are similar in terms of having a user-friendly interface, an easy sign-up process, and a chat-sharing function. However, Bard and ChatGPT have their own advantages and limitations.  

ChatGPT Pros and Cons 

Pros: 
  • Accounts can be created using any email address, work or personal 
  • Has more plugin options with third-party applications 
  • Availability of the ChatGPT API for integration with company products and services 
  • Better for content creation – produces long responses 

Cons: 
  • Unable to retrieve real-time data; the web browsing feature is only available for ChatGPT Plus* 
  • Unable to analyze text in URLs; the text needs to be copied and pasted into the chat 
  • Only provides one answer per prompt 
  • Unable to retrieve images 

*On 3 July 2023, OpenAI disabled the Browse with Bing feature that was introduced in May to provide real-time results after instances of displaying content that could bypass paywalls and privacy settings.  

Bard Pros and Cons 

Pros: 
  • Able to export responses to Google workspaces like Docs and Gmail 
  • Real-time data retrieval – better for research 
  • Provides three draft answers per prompt 
  • Able to analyze text through URLs 
  • Able to use images in prompts and retrieve images in responses 

Cons: 
  • Accounts can only be created with personal Google accounts or authorized Google Workspace accounts 
  • Limited plugins with other tools 
  • Still in the experimental phase – more prone to errors, biases, and stereotyping 
  • Limited integrations with non-Google products 
  • No API is available yet 

Reminder: Both Bard and ChatGPT are not free from hallucinations and may produce inaccurate results. All responses must be fact-checked and require human intervention with proofreading and editing.  
 

ChatGPT vs Bard: Which is More Secure?  

The issue of data security and privacy with generative AI chatbots continues to be a concern, especially in the EU. The delayed launch of Bard in the EU was due to Google’s efforts to make changes to controls for users and increased transparency to comply with regional privacy laws.  

Google has also agreed to conduct a review and report back to the Irish Data Protection Commission (DPC) in three months’ time. In addition, a task force under the European Data Protection Board (EDPB) is looking into both Bard and ChatGPT’s compliance with the pan-EU General Data Protection Regulation (GDPR).  

Forbes contributor and author Joe Toscano also did a deep dive into the privacy practices of Bard and ChatGPT. Bard claims that they do not track user browsing activity or collect user data for advertising purposes. Advertising aside, Toscano found that Google may send Bard conversations to human reviewers and does not delete conversations. “It’s safer to just assume everything you put in will be saved and used to train Google’s systems,” Toscano says.  

On the other hand, ChatGPT collects certain personal user information such as IP addresses and device information. ChatGPT also stores all user prompts and responses. “There’s a good chance that if someone asks a question that’s similar or could use your content as a response your proprietary information will then be repurposed by the system,” Toscano adds.  

It’s unclear how data shared with Bard and ChatGPT are protected. In the meantime, the onus is on the users to refrain from sharing confidential and sensitive information and use VPNs whenever possible.  

 

The Use of ChatGPT and Bard at the Workplace

Since the launch of ChatGPT in late 2022, many organizations have leveraged the AI chatbot and other similar tools to ease workflows particularly in marketing, sales, and customer support. In addition, the coding functionalities in ChatGPT and Bard have made building applications much easier.  

However, the rising use of generative AI tools in the workplace opens a can of worms for IT and security leaders. Since tools like ChatGPT and Bard are highly accessible and user-friendly, employees tend to use them without supervision from IT and security teams. Gartner predicts that 5% of employees will engage in unauthorized use of generative AI in their organizations by 2026.

ChatGPT has already made headlines with its security vulnerabilities. In May 2023, Meta released a report detailing its investigation into malware posing as ChatGPT that has been stealing user accounts. A month earlier, Samsung banned ChatGPT organization-wide after employees unintentionally shared confidential information with the AI chatbot. 

Therefore, IT and security teams must work together to ensure generative AI tools are being used safely within the organization to reduce security risks and prevent data leaks. 

 

The Adoption of ChatGPT and Bard: What IT and Security Leaders Can Do

  • Conduct a shadow AI audit: This is to get a clearer picture of how widely generative AI tools are being used by employees. Determine which functions use it the most, what data they are sharing, and calculate security risks.  
  • Provide training on generative AI: Employees can benefit from function-specific training on how to use AI chatbots safely. Training should cover privacy policies of the tools, reminders to never input confidential company data, how to write effective prompts, security risks of generative AI, and more.  
  • Create policies for generative AI use: Establish clear guidelines on how employees must use AI chatbots at the workplace. For example, only using IT-approved generative AI tools and data sets.  
  • Invest in data-loss-prevention (DLP) tools: Carve out an annual budget for DLPs to bolster cybersecurity measures and prevent data leaks as more employees use generative AI tools (a simple illustration of prompt pre-screening follows this list).  
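
As a rough illustration of the kind of pre-screening a DLP layer performs, the sketch below checks an outgoing prompt against a few hypothetical sensitive-data patterns before it is sent to an approved generative AI tool. The pattern names, regexes, and the screen_prompt helper are illustrative assumptions, not the behaviour of any specific DLP product.

```python
import re

# Hypothetical patterns an organization might flag before a prompt is sent
# to an external generative AI tool; real DLP products use far richer rules.
SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "secret_marker": re.compile(r"(?i)\b(api[_-]?key|secret|password)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Summarise this contract. Contact: jane.doe@example.com, api_key=abc123"
    findings = screen_prompt(draft)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Prompt cleared for an approved generative AI tool")
```

Commercial DLP tools apply far richer detection (classifiers, document fingerprinting, exact-match dictionaries), but the basic gate-before-send flow is similar.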
 

Is Bard Better than ChatGPT? 

Despite their risks, generative AI tools like ChatGPT and Bard have the potential to create more efficient workflows and drive employee productivity if used correctly. Therefore, IT and security leaders must make developing clear policies and guidelines around generative AI use a priority.  

The answer to whether Bard or ChatGPT is better highly depends on how both tools integrate with existing processes, how educated employees are in using them, and which one will pose fewer security risks for your organization. 

How to Use AI in Cybersecurity for Business

With rapid advancements in technology, security leaders are actively exploring how to use artificial intelligence (AI) in cybersecurity as traditional measures alone may no longer be sufficient in defending against sophisticated threats. AI has emerged as a potentially powerful tool in bolstering cybersecurity efforts, offering enhanced threat detection, prediction, and response capabilities among other uses.

A survey by The Economist Intelligence Unit revealed that 48.9% of global executives and leading security experts believe that AI and machine learning (ML) are best equipped for countering modern cyberthreats. Additionally, IBM found that AI and automation in security practices can significantly reduce threat detection and response times, saving up to 14 weeks of labor, and reduce the costs associated with data breaches. Global interest in AI’s potential for countering cyberthreats is evident in the growing investment it attracts: the global AI in cybersecurity market is projected to reach USD 96.81 billion by 2032.

Despite the promise of AI, Baker McKenzie found in a survey that C-level leaders tend to overestimate their organization’s preparedness when it comes to AI in cybersecurity. This underscores the importance of realistic assessments of AI-related cybersecurity strategies.

 

Security Applications of AI

Many tools in the market leverage subsets of AI such as machine learning, deep learning, and natural language processing (NLP) to enhance the security ecosystem. CISOs are challenged with finding the best ways to incorporate cybersecurity and artificial intelligence into their strategies.

 

1. Enhanced Threat Detection and Response

One of the main examples of AI in cybersecurity is its use for malware detection and phishing prevention, where AI-powered tools have been shown to be significantly more efficient than traditional signature-based systems.

Where traditional systems can prevent about 30% to 60% of malware, AI-assisted systems have an efficiency rate of 80% to 92%.

Researchers at Plymouth University detected malware with an accuracy of 74% across all file formats using neural networks; the accuracy was between 91% and 94% for .doc and .pdf files specifically. As for phishing, researchers at the University of North Dakota proposed a detection technique utilizing machine learning, which achieved an accuracy of 94%.

Given that phishing and malware remain the biggest cybersecurity threats for organizations, this is good news. These advancements enable organizations to identify potential threats more accurately and respond proactively to mitigate risks that could cause massive financial and reputational damage.
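
As a hedged illustration of the ML-based detection idea (not the actual models used in the studies cited above), the following Python sketch trains a toy text classifier that flags likely phishing messages. The sample messages and labels are invented for demonstration; a real system would train on thousands of labelled examples and richer features.

```python
# Minimal sketch: classify messages as phishing or legitimate from their text
# using TF-IDF features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (invented for illustration).
messages = [
    "Your account is locked, verify your password at http://secure-login.example",
    "Urgent: confirm your bank details to avoid suspension",
    "Meeting moved to 3pm, see updated agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

incoming = ["Please verify your password immediately via this link"]
print(model.predict(incoming))        # e.g. [1] -> flag for analyst review
print(model.predict_proba(incoming))  # confidence scores analysts can triage on
```

The value of this pattern in practice is the probability output: rather than a hard block, borderline messages can be routed to human analysts while obvious phishing is quarantined automatically.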

 

2. Knowledge Consolidation

A pressing issue for CISOs is that the sheer volume of security protocols and software vulnerabilities overwhelms their security teams. An advantage of AI in cybersecurity is that ML-enabled security systems can consolidate vast amounts of historical data and knowledge to detect and respond to security breaches. Platforms like IBM Watson leverage ML models trained on millions of data points to enhance threat detection and minimize the risk of human error.

By consuming billions of data points, AI can continually improve its knowledge of cybersecurity threats and risks, recognize patterns and anomalies faster than humans, learn from past experience, and find increasingly efficient ways to combat cyberattacks. This allows AI-powered security systems to keep pace with the evolving threat landscape.

IBM notes that AI is also able to analyze relationships between threats in mere seconds or minutes, thus reducing the amount of time it takes to find threats. This is essential to reducing the detection and response times of cybersecurity breaches, which can significantly reduce costs to organizations as well.

According to IBM, the global average total cost of a data breach was $4.35 million in 2022, and organizations took an average of 277 days to identify and contain a breach. If AI can help bring that number down to 200 days or less, organizations save an average of $1.12 million.

 

3. Enhanced Threat Analysis and Prioritization

Tech giants like Google, IBM, and Microsoft are investing heavily in AI systems to identify, analyze, and prioritize threats. Microsoft's Cyber Signals program, for example, leverages AI to analyze 24 trillion security signals and track 40 nation-state groups and 140 hacker groups in order to detect software vulnerabilities and malicious activities.

Given the vast amounts of data that must be analyzed, it's not surprising that 51% of IT security and SOC decision-makers told Trend Micro they were overwhelmed by the volume of alerts, while 55% said they lacked confidence in their ability to prioritize and respond to them. Moreover, 27% of respondents reported spending up to 27% of their time managing false positives.

Worryingly, Critical Start found that nearly half of SOC professionals turn off high-volume alerts when there are too many to process.

One answer to the question of how to use AI in cybersecurity is to apply it to analyzing vast amounts of security signals and data points in order to detect and prioritize threats quickly and effectively, as sketched below. With AI's assistance, security teams can respond to threats promptly even as the frequency of cyberattacks increases.
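
One simple way to think about AI-assisted prioritization is as a scoring problem: each alert receives a risk score combining severity, asset criticality, and the detector's confidence, and analysts work the queue from the top. The field names and weights below are illustrative assumptions, not any vendor's actual scoring model.

```python
# Minimal sketch of risk-based alert prioritization (illustrative only).
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int            # 1 (low) .. 5 (critical)
    asset_criticality: int   # 1 .. 5, importance of the affected system
    model_confidence: float  # 0..1, detector's confidence this is a true positive

def risk_score(a: Alert) -> float:
    # A weighted product keeps low-confidence noise near the bottom of the queue.
    return a.severity * a.asset_criticality * a.model_confidence

alerts = [
    Alert("EDR", severity=5, asset_criticality=5, model_confidence=0.9),
    Alert("IDS", severity=2, asset_criticality=1, model_confidence=0.3),
    Alert("SIEM", severity=4, asset_criticality=3, model_confidence=0.7),
]

# Highest-risk alerts surface first for the analyst.
for a in sorted(alerts, key=risk_score, reverse=True):
    print(f"{risk_score(a):6.2f}  {a.source}")
```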

 

4. Threat Mitigation

Analyzing every component of an organization's IT inventory is notoriously complex, but AI tools can help manage that complexity. AI can identify points within a network that may be more susceptible to breaches and even predict the types of attacks that may occur.

In fact, some researchers have proposed cognitive learning-based AI models that monitor security access points for authorized logins. Such models can detect remote hacks early, alert the relevant users, and create additional security layers to prevent a breach.

Of course, this also requires training AI/ML algorithms to recognize attacks carried out by other such algorithms, since cybersecurity defenses and risks evolve in lockstep. Hackers, for example, have been found to use ML to analyze enterprise networks for weak points and then target the most promising entry points with phishing, spyware, and DDoS attacks.
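
In the spirit of the access-point monitoring described above, the sketch below watches login events for two simple signals of remote compromise: bursts of failed attempts and successful logins from unfamiliar countries. The event fields, thresholds, and rules are assumptions for the example, not the cited research model.

```python
# Minimal sketch of login monitoring for signs of remote compromise (illustrative).
from datetime import datetime, timedelta

known_countries = {"alice": {"DE"}, "bob": {"US"}}
recent_failures: dict[str, list[datetime]] = {}

def assess_login(user: str, country: str, success: bool, when: datetime) -> str:
    # Track failed attempts per user within a sliding 10-minute window.
    window = recent_failures.setdefault(user, [])
    window[:] = [t for t in window if when - t < timedelta(minutes=10)]
    if not success:
        window.append(when)

    if len(window) >= 5:
        return "ALERT: possible brute-force attempt"
    if success and country not in known_countries.get(user, set()):
        return "ALERT: successful login from unfamiliar country"
    return "ok"

now = datetime.now()
print(assess_login("alice", "DE", True, now))   # ok
print(assess_login("alice", "KP", True, now))   # unfamiliar country
for i in range(5):                              # brute-force burst
    print(assess_login("bob", "US", False, now + timedelta(seconds=i)))
```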

 

5. Task Automation

Among AI applications in cybersecurity, task automation is one of the most widely adopted. For repetitive tasks in particular, such as analyzing a high volume of low-risk alerts and taking immediate action, AI tools free up human analysts for higher-value work. This is especially valuable to companies that are still short on qualified cybersecurity talent.

Beyond that, intelligent automation is also useful for gathering research on security incidents, assessing data from multiple systems, and consolidating it into a report for analysts. Shifting this routine task to an AI helper will save plenty of time.
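
As a small illustration of that report-consolidation idea, the sketch below merges findings from several hypothetical tools into one analyst-ready summary grouped by affected host. The input structure is made up for the example; real integrations would pull this data from each tool's API.

```python
# Minimal sketch of automating incident-report consolidation (illustrative only).
from collections import defaultdict

findings = [
    {"tool": "EDR",  "host": "web-01", "detail": "suspicious PowerShell execution"},
    {"tool": "IDS",  "host": "web-01", "detail": "outbound C2-like beaconing"},
    {"tool": "SIEM", "host": "db-02",  "detail": "repeated failed admin logins"},
]

# Group findings by affected host so the analyst sees one entry per system.
by_host = defaultdict(list)
for f in findings:
    by_host[f["host"]].append(f"[{f['tool']}] {f['detail']}")

print("=== Incident summary ===")
for host, items in sorted(by_host.items()):
    print(f"{host}:")
    for item in items:
        print(f"  - {item}")
```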

 

How Threat Actors Are Using AI

While AI is proving to be a valuable tool in the cybersecurity arsenal, it is also becoming a mainstay for threat actors, who are leveraging it for malicious activities. AI's high processing capacity enables threat actors to hack systems faster and more effectively than humans could alone.

In fact, generative AI models such as ChatGPT and Dall-E have made it easier for cybercriminals to develop malicious exploits and launch sophisticated cyberattacks at scale. Threat actors can use NLP AI models to generate human-like text and speech for social engineering attacks such as phishing. The use of NLP and ML enhances the effectiveness of these phishing attempts, creating more convincing emails and messages that trick people into revealing sensitive information.

AI enables cybercriminals to automate attacks, target a broader range of victims, and create more convincing and sophisticated threats. For now, there is no efficient way to distinguish between AI- or human-generated social engineering attacks.

Apart from social engineering attacks, AI-powered cyberthreats come in various forms, including:

  • Advanced persistent threats (APTs) that use AI to evade detection and target specific organizations;
  • Deepfake attacks which leverage AI-generated synthetic media to impersonate real people and carry out fraud; and
  • AI-powered malware which adapts its behavior to avoid detection and adjust to changing environments.

The rapid development of AI technology allows hackers to launch sophisticated and targeted attacks that exploit vulnerabilities in systems and networks. Defending against AI-powered threats requires a comprehensive and proactive approach that combines AI-based defense mechanisms with human expertise and control.

 

AI and Cybersecurity: The Way Forward

The integration of AI into cybersecurity is transforming the way organizations detect, prevent, and respond to cyber threats. By harnessing the power of AI, organizations can bolster their cybersecurity defenses, reduce human error, and mitigate risks.

That said, the same immense potential of AI also amplifies cyber threats, demanding vigilant defense mechanisms. After all, humans remain a significant contributing factor in cybersecurity breaches, accounting for over 80% of incidents. This emphasizes the need to also address the human element through effective training and awareness programs.

Ultimately, a holistic approach that combines human expertise with AI technologies is vital in building a resilient defense against the ever-evolving landscape of cyber threats.

 

FAQ: AI in Cybersecurity

How is AI used in cybersecurity?

In cybersecurity, AI removes the need for human experts to perform tedious, time-consuming tasks. It can sift through immense amounts of data to identify potential threats while filtering out non-threatening activity to reduce false positives, freeing human security experts to focus on more vital tasks.

How will AI improve cybersecurity?

AI technologies can spot potential weak spots in a network, flag breach risks before they occur, and even automatically trigger measures to prevent and mitigate cyberattacks from ransomware to phishing and malware.

What are the risks of AI in cybersecurity?

AI-enabled cybersecurity tools rely on the data sets they are trained on. Bias in that data can unintentionally skew the model, resulting in mistaken analysis and poor decisions with potentially serious consequences.

What are the pros and cons of AI in cybersecurity?

Benefits of AI-based security tools include quicker response times, better threat detection, and increased efficiency. On the other hand, AI raises ethical concerns such as privacy, algorithmic bias, and talent displacement.