Dr Rebecca Wynn: “We Didn’t Think of AI Privacy By Design”

In the era of the AI revolution, data privacy and protection are of utmost importance.

As the technology evolves rapidly, risks associated with personal data are increasing. Leaders must respond and adapt to these changes in order to push their businesses forward without sacrificing privacy.

We speak to award-winning cybersecurity expert Dr Rebecca Wynn about the data privacy risks associated with AI and how leaders can mindfully navigate this issue.

 
Dr Rebecca Wynn is an award-winning Global CISO and Cybersecurity Strategist, and the host and founder of the Soulful CXO Podcast. She is an expert in data privacy and risk management and has worked with big names such as Sears, Best Buy, Hilton, and Wells Fargo.
 

From a business perspective, how has AI changed how companies approach data protection and privacy?

Honestly, right now many companies are scrambling. They hope and pray that it’s going to be okay, and that’s not a good strategy. One of the things we see with leading technologies is that the technologies come up first. Then governance, risk, and compliance people play catch up. Hopefully, in the future, this will change, and we will be on the same journey as the product. But right now, that’s not what I see.

Once the horses are out of the barn, it’s hard to get them back. Now we’re trying to figure out some frameworks for responsible AI. But one thing people need to be careful about is their data lakes. Is misstated data going into the data lake?

From a corporate perspective, are you monitoring what people are putting into the data lake? Even within your own organization, are people putting your intellectual property out there? What about company-sensitive information? Who owns that property? Those are the things that are very dangerous.

For security and compliance, you really need to be managing your traffic, and you do that through education on the proper use of data.

Can you speak on the role of laws and regulations in ensuring data privacy in the AI era?

There are two types of people. There are those who ask what the guidelines are and what the expected norm is for businesses and society as a whole, and they stay within those guidelines. Then there are companies that look at it through enterprise risk management. What is the cost if we go outside those lines? We see this in privacy. They ask questions like “What are the fines? How long might it take to pay those fines? Will that go down to pennies on the dollar? How much can I make in the meantime?”

Laws give you teeth to do things after the fact. Conceptually, we have laws like the GDPR, and the European Union is trying to establish AI rules. There’s the National Institute of Standards and Technology AI framework in the US, and PIPEDA in Canada.

The GDPR and the upcoming AI Act are obviously important to companies based in the EU. How aggressive should we expect the regulatory response to generative AI solutions to be?

I think it’s going to take a while because when GDPR initially came into place, they went against Microsoft, Google, and Facebook. But it took a long time to say what exactly these companies did wrong and who would take ownership of going after them.

It will take years unless we have a global consortium on AI with some of these bigger companies that have buy-in and are going to help us control it. But to do that, big companies must be a part of it and see it as important.

And what are the chances that these big companies are going to start cooperating to create the sort of boundaries that are needed?

If we can have a sort of think tank, that would be very helpful. AI has very good uses but, unfortunately, also very negative consequences. I’m not just talking about movies like Minority Report; I also think about when wrong data gets out. In Australia, for example, we are seeing potentially the first defamation case against ChatGPT.

Even on a personal level, information on you is out there. Let’s say, for example, you are falsely accused of a crime. That gets into ChatGPT or something similar. How many times can that potentially come up? I asked ChatGPT to write me a bio, and it said I worked for Girl Scouts of America, which I never did.

That’s the type of thing I’m talking about. How do you get that out of the data pool? What are the acceptable uses for privacy data? How do you opt-out? These are the dangers right now. But it has to be considered from a global perspective, not only by region. We talked about legal ramifications and cross-border data privacy. How do you stop somebody in the US from being able to go ahead and use data from the EU a bit differently? What about information that crosses borders via AI? It hasn’t been discussed because no one even thought of it just a year ago.

What are appropriate measures for organizations to take with shadow IT uses of GPT tools?

We need to train more on the negative effects of such tools. I don’t think people are trying to do it from a negative perspective, but they don’t think about the negative impact. If I’m using one of these tools to help me generate code, am I looking at open-source code? Is it someone else’s code that someone put in there? Is this going to cause intellectual property issues?

When you talk about shadow IT, you are looking at what is potentially leaving a network and what’s coming in. So, it usually sits above data loss prevention tools. But how do you do it without being too ‘big brother-ish’?
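As a rough illustration of the kind of check a data loss prevention layer performs, here is a minimal sketch that scans outbound text for sensitive patterns; the patterns and sample text are invented for illustration, not any vendor’s actual rules:

```python
# Hedged sketch: flag sensitive content before it leaves the network.
import re

PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def flag_outbound(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in outbound text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

sample = "Sharing our CONFIDENTIAL roadmap; token sk-abc123def456ghi789"
print(flag_outbound(sample))  # ['api_key', 'confidential_marker']
```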

All this comes from enterprise risk management. You need to have conversations with your legal and compliance teams. Most people just want to get their job done and they don’t think about the negative repercussions to the company’s reputation. You have to have those conversations.

Talk to your staff about what tools they’re using in a no-fear, psychologically safe way. Ask them why they’re using those tools and what advantages the tools give them. From there, you can narrow down the top two or three tools that are best for the company. This lowers your risk.

It’s about risk mitigation and managing that in a mindful way because you can’t have zero risk. You can’t block everyone from doing everything.

How can current data protection legislation support businesses and individuals with data that has been used to train large language models?

It’s chasing things after the fact. We’ll find out that there are a lot of language models trained on data that was not used in the manner we agreed to in our contracts. I think there are going to be some legal ramifications down the pipeline. We’ll find out that the data used in these models is not what I would call sanitized. I’ve seen it again and again; intellectual property is already in the pool, and the data was not structured or tagged, so we can’t pull it back out.

In that case, how can we work with protected data and at the same time, with large language models?

That’s tough. I’ll give you an example. Let’s say there’s an email with a cryptographic key embedded into it. What you can do is hold the other key and spiral it off. I like that from a company and individual perspective. Because if someone shared some intellectual property of mine with another person, maybe an article I wrote or code, I could then look at the spiral and see who sold or resold that data. From there, I could expire it. From a legal perspective, I would have a trail.

What if we could do that with every piece of information you create? If the data is tagged immediately, you could see what it was created for and expire it for other uses. It won’t be in anyone’s database.
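As one illustration of what Wynn describes, here is a minimal sketch of tagging data at creation with an owner, a declared purpose, and an expiry date, so that any later use can be checked against the tag. Every name and field here is a hypothetical assumption, not an existing system:

```python
# Hypothetical data-tagging sketch: provenance metadata attached at creation.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class TaggedData:
    payload: str
    owner: str
    purpose: str                      # what the data was created for
    created_at: datetime = field(default_factory=_now)
    expires_at: datetime | None = None

    def usable_for(self, purpose: str) -> bool:
        """Allow use only for the declared purpose and only before expiry."""
        if self.expires_at is not None and _now() > self.expires_at:
            return False
        return purpose == self.purpose

doc = TaggedData(
    payload="draft article text",
    owner="author-123",
    purpose="marketing",
    expires_at=_now() + timedelta(days=90),
)
print(doc.usable_for("marketing"))       # True
print(doc.usable_for("model-training"))  # False: not the declared purpose
```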

I think we can get there. But right now, I don’t see how we can get the horses back into the barn effectively when the data is not individually tagged.

Should we forbid access to ChatGPT and other AI apps to all users?

You could use a variation of that. Consider why you’re using an AI tool. Ask your teams why they’re using it and then think about how you might mitigate risk. If it’s about rephrasing certain text to be more effective for clients, then allow it. That’s a positive internal use. Maybe it’s about marketing and rephrasing things for various social media platforms.

But what if they just want to play around and learn more about it? Then maybe you need to have a sandbox where they can do that, and you don’t have to worry about data leaving your network.

How can we ensure that our AI systems are trained on unbiased and representative data to prevent unfair decision-making?

To be honest with you, today I don’t think we can. Part of it is because data is in a data lake. For lack of a better phrase, garbage data in, garbage data out. If you look at the search engines out there, they’re all built on databases, and those databases are not clean. They weren’t built with clean data at the start. We didn’t think about structure back then.

What you could do is have something like a generated bio of yourself, see what’s wrong with it, and give a thumbs up or down to say if it’s accurate or not to clean up the data. But can anyone clean up that data or is it only you? If there’s inaccuracy in my data, how can I flag that? It seems like anyone can do it.

So the question is, is there a method to go back and clean up the data? Here’s where I wish we had the option to opt-in instead of opt-out.

When it comes to data security, what are the top three things companies should keep in mind?

First, do not create fear or uncertainty in your own people. From a security and privacy governance perspective, we tend to assume that everyone intends to do the wrong thing. Instead, I think you need to assume that your workforce is trying to do the right thing to move the company forward. The real training is about the organization’s acceptable use policy. What is the expectation and why is it important? If things go awry, what are the consequences for individuals, company reputation, and revenue?

Next, how do we monitor that? Do we have an internal risk assessment against our AI risk? If you’ve not looked at your business liability insurance recently and you have a renewal coming up, take a look. An AI risk rider is coming to most, if not all, policies in the future, requiring that you, as a company, use AI responsibly, do risk assessments, and manage the risk with firewalls, data loss prevention strategies, and things like that.

Holistically, it’s enterprise risk management. I think you need to be transparent and explain to individual users in those terms, but we haven’t always been doing that. You need to figure out how you can make everyone understand that they are an important puzzle piece of running the business. It’s a mind shift that we need.

Should we create AI gurus in our companies to communicate the risk and stay up to date on the technology and its usage?

We’re on the cusp of that. A lot of us have been talking behind the scenes about whether there is now a new role of AI CISO.

The role of the Chief Information Security Officer will be focused solely on AI and how that’s being used internally and externally.

You may have a Chief Security Officer who is operational-focused only. I end up being more externally facing with strategy than I am with daily operations. We’re seeing various CISO roles to handle that for the company. Digital Risk Officers come from a legal perspective. I think we’re seeing a rise of AI Cybersecurity Officers or similar titles.

Should we worry about privacy threats from AI in structured data as well?

I think you should always worry about privacy. When we look at the frameworks, ENISA has a framework and the EU has its AI rules. There are things coming out of Australia and Canada as well; that’s what we’re trying to gravitate towards.

But, as individuals, how much can we really keep track of our data and where it is anymore? If you haven’t looked at the privacy policies on some websites, I say as an individual you need to opt out. If you’re not using those apps on your phone anymore, uninstall them, kill your account, and get out of them. Those policies, and how they’re using that information, are only getting longer and longer.

As a company, do you have a policy in place about how your data is used between your company and others? Are you allowing them to put it into AI models? What are the policies and procedures for your own employees when it comes to third parties that you do business with?

That’s why I said there needs to be a lot of training. From an enterprise risk management standpoint, you cannot manage risk if your risk is not defined.

What is the social responsibility of companies in an AI-driven world?

I wish it would be a lot more, to be honest with you. Elon Musk and people like that are being a little more forward-thinking about where we want to be in 2050. All technology is good technology. When AI initially came out in the 1950s and machine learning in the 1960s, it was a new shiny toy, but people were scared too.

I think from a social perspective, anytime we have something that allows us to be digitally transformed in a way that allows us to communicate and see correlations quicker, and see what people are facing around the world, that’s good.

But then, it can also bring in fake news. We say trust but verify, but what do you do when you’re using AI tools that have all the wrong information? It’s scary. That’s when we must use critical thinking. Does this story make sense? Does it seem reasonable? We’re starting to see right now how AI can be used for good and evil. In terms of cybersecurity, fake emails and such are used in targeted phishing attacks.

For companies, do you have back channels to verify things? I once received a text message from a CEO that sounded like him but did not ring true to me. When I asked him through back channels, he said it was not him. Your gut is right most of the time. Trust your gut, but also give people other avenues to verify that data.

Do you think there’s too much focus on the content of a framework to identify and include all risks as opposed to focusing on the processes to get to the right answers?

I agree. I’m always about the ‘so what’. Know it, document it, implement it, manage it, and measure it. But then what? If I have a framework solely as a framework, that’s great, but it’s about what you put in.

I think the problem is that you start from the top down. We end up having to get people on the same page and saying we need a framework. And then it gets down to the meat, and that’s what you’re talking about. Why do we need this? How do we put it into play? How can we test it and measure it?

Unfortunately, policies and procedures start from the top down, but for boots on the ground, the thought starts with implementation. That’s where I think training comes into play. This is where people like me talk to you about the day-to-day.

*Answers have been edited for clarity and length.

ChatGPT and GPT-4: How to Implement Generative AI in Your Organization

ChatGPT has taken the world by storm with over 20 million daily users. However, there are still questions on how generative AI tools can be leveraged in businesses and implemented organization-wide. With data security concerns leading several countries to ban ChatGPT, is generative AI still worth exploring for businesses?   

In this exclusive interview, AI expert and best-selling author Lasse Rouhiainen shares his thoughts and insightful advice on the latest developments of ChatGPT and GPT-4, and how to implement and utilize generative AI effectively in a business context.  

*This article is a recap of our interview with Lasse Rouhiainen at the session, GPT-4 and Beyond: The Next Chapter in AI and Business Communication. 

 
Lasse Rouhiainen is a best-selling author and international expert on artificial intelligence, disruptive technologies, and digital marketing. He focuses on investigating how companies and society can better adapt to artificial intelligence and benefit from it. Rouhiainen has also spoken at Mobile World Capital and TEDx. His latest book, Artificial Intelligence: 101 Things You Must Know Today About Our Future, was selected by Book Authority as one of the best AI books of all time.
 

How should companies implement ChatGPT and on what level?

It comes from the management. They need to understand that we live in the era of AI. We have to put resources, both time and money, into it. We have to get everybody in the company to start using it, not only IT. Also, remember that whatever you share with ChatGPT goes to OpenAI’s servers. Obviously, don’t share your financial information there. Companies like Microsoft are creating solutions where you can have an internal ChatGPT for your company where you can share valuable information.

In addition, understand that ChatGPT is not a search engine. Our brains are wired to use search engines because we have been doing it for 20 years. We go to Google and type one thing, and we find our answer. What we write for ChatGPT needs to be more than “Give me a 400-word article on management,” for example. We need to give ChatGPT a paragraph. So, everybody in your organization should write paragraphs with as much context and detail as possible. With AI, the more data we give it, the better.

Next, start using ChatGPT in your intranet or a place where your colleagues share best practices. That way you can share the best prompts, which is really useful. As a company, you should also send an email to everybody to remind them not to share sensitive information with ChatGPT.

It’s interesting to see two kinds of people here. Some are excited by ChatGPT, use it all the time, and have gotten results. The other kind knows it’s important but tries to avoid it. Well, this is all about business. We will see the implications within 18 months, when a lot of people will be unemployed and not know what to do. It will become a societal problem. Business-wise, there have been a lot of anecdotes and amazing success stories already.

 

Is ChatGPT far away from large commercial use?

No, it’s not. It’s a significant and revolutionary tool, and back then we didn’t have GPT-4. We didn’t know that Microsoft would implement this tool in their products.  

ChatGPT is a new layer of the Internet. If you’re not using it and not building on top of it, you will be out of business and lose your competitive edge in a few months.  

It’s happening in every industry, even industries that have normally been safe, like the financial industry. Also, companies are building and launching ChatGPT internally. Bloomberg has built its own GPT, which has been great at analyzing financial information. This means that 80% of financial analysts will probably lose their jobs, even if they have a Ph.D.

 

Which industries will be impacted by ChatGPT and who else will lose their jobs?

Industries that have a lot of repetition or those where machines can be taught something repetitive. For example, the financial industry. The education industry will also be impacted gradually as it doesn’t have many AI applications due to privacy concerns in Europe. I just had a call with a university chain with 20 universities worldwide on how they could use ChatGPT to become more competitive. No industry is safe.  

It’s important to be proactive and spend time learning how to talk better to the computer, and not be those people who say that’s not their area of interest. At the same time, it’s an amazing tool to grow your sales and improve your business strategy. This is because ChatGPT has been trained on every piece of business information out there and it has read all the business books. So, it’s more knowledgeable than all of us. For example, we should use it to analyze strategic decisions, or new products and services. 

I don’t know who will be fired, but I do know that a lot of people will be. According to OpenAI and their research, 25% of jobs in Europe and the U.S. will be impacted by ChatGPT. For example, mathematicians, translators, and everything that is repetitive will go. According to Goldman Sachs, 32% of administrative or managerial work will go away, in addition to 44% of legal work.

 

What are your thoughts on certain countries banning ChatGPT?

If it happens in your country, don’t start crying. You can always use the ChatGPT API, and there are many chatbots that are almost as good, such as You.com. There are a lot of options. The European Union (EU) has a long history of banning something first before starting to investigate; once they investigate, they change their minds. There are also many political reasons why the EU has been doing this. It’s not the best strategy, because ChatGPT is extremely helpful and democratizes tools that can, for example, help teenagers start businesses from zero. That was never possible before ChatGPT. It’s an empowering tool for people who want to use it. The EU also doesn’t want to be dependent on American cloud services. There’s a Finnish initiative where a European language model is in the works, and I think that’s a really innovative way to not depend on American technology.

 

If everyone is using ChatGPT, how do companies maintain a competitive edge? Will ChatGPT become a commodity?

Right now, there’s a high likelihood that your competitors are not using it to its full potential. But I recommend that you start using it and not wait until your competitors use it first. In addition, there are other AI tools on the market. One of them is AutoGPT. It’s open source so everybody can use it and create their own versions. People are calling it an autonomous artificial intelligence agent. You just need to give it one goal and it will do the rest. At the same time, there’s a lot of hype about generative AI. Don’t get too worried or excited by it. Just focus on your business and use ChatGPT as a tool to help you.  

 

Can ChatGPT be used for economic and financial analysis considering it lacks real-time market data?

There are talks of a plugin that will allow users to access real-time data soon, so I’m not too worried about it. For economic and financial analysis, it’s really good. I would start by searching for the best prompts for financial industry analysis.

For example, I could ask ChatGPT to analyze the GDP of Finland and Sweden and put the results in a table. I would also advise people who want to write better prompts to take their best prompts and put them through ChatGPT. You can ask ChatGPT to make your prompt more comprehensive and detailed. GPT-4 is really good at understanding nuances and will provide a better prompt. You can create business value from the answer to your prompt and make amazing financial products.
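As a rough sketch of that workflow, assuming the `openai` Python package (v1 or later) and an `OPENAI_API_KEY` environment variable, you might ask the model to improve a draft prompt before using it; the model name and prompt text below are illustrative:

```python
# Hedged sketch: ask the model to make a draft prompt more detailed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft_prompt = (
    "Compare the GDP of Finland and Sweden over the last five years "
    "and present the results in a table."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Rewrite the user's prompt to be more comprehensive and detailed."},
        {"role": "user", "content": draft_prompt},
    ],
)

improved_prompt = response.choices[0].message.content
print(improved_prompt)  # feed this refined prompt back in a second call
```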

 

How do companies utilize ChatGPT effectively?

It depends on your daily operations. Again, the best one to answer that is ChatGPT. You can list everything you do in your day-to-day life and work, identify the three activities with the most repetition, and ask ChatGPT to generate creative solutions and ways to manage them. You can also use ChatGPT to get ideas for content, or feed it different scenarios that could go wrong and ask how to prepare for each one.

For example, one of my clients has an online store and used ChatGPT to generate emails for seven different scenarios her customers may face. It’s like your assistant that’s always with you and can help you overcome many challenges you have in your work. The paid version does not cost a lot. ChatGPT is the first tool that gives us access to powerful deep-learning algorithms. Get all the benefits from the free version and upgrade to the paid version once you see results. 

 

What do you think of how ChatGPT collects and presents third-party data?

This is an interesting ethical question, specifically with generative AI that creates images. There’s a big court case against Stable Diffusion, a model trained on Getty Images that can generate any image you want. I understand it’s a big thing, because many people work their whole lives to take photos and share them. All of a sudden, their work is being used and they get no profit. It’s a big ethical issue. When it comes to writing, I don’t see it as big of a problem. For instance, GPT-4 can give me reliable sources and citations.

 

How can we spot AI-generated information? Is there a need for digital authentic source marking?

There are many solutions being built at the moment, but most of them are not good. For example, students already know that you can use ChatGPT to write text, copy and paste it into other tools like Grammarly, change some sentences, and it already looks human-written. I think OpenAI is also working on some kind of tool where there will be a label for AI-generated content. When it comes to videos, it will be difficult. That’s where we have a big problem: anyone can be impersonated, and we can create videos with anyone’s voice. That’s one thing that needs to be managed.

 

What are the limits of ChatGPT on deriving correct information for business-critical decisions?

GPT-4 can currently give citations and sources. For example, I used it to analyze the hotel industry in Helsinki and the answers were amazing. Also, give as much context as possible without sharing sensitive company information. However, don’t copy and paste the answers you get, and remember to think critically. That’s how you get the best business results. 

AI will help us do things better, cheaper, and faster. The problem is that ChatGPT looks simple; it’s just a website. People don’t understand that behind it are some of the most powerful AI algorithms. I recommend that everybody accept this new reality where your future depends on how well you talk to a computer. It’s a question of your career and your company’s success.

Embrace these tools and be architects of the future rather than victims who resist technology. I want to invite everybody to join this revolutionary AI journey.

 

*The interview answers have been edited for length and clarity. 

Microsoft Europe CSA: AI & Humans Must Evolve Together in Cybersecurity

The digital evolution brought along automation which enabled transformations of economies and businesses on a large scale. What does the rise of artificial intelligence tools and machine learning in the digital landscape mean for cybersecurity and how can humans evolve together alongside these technologies? 

We caught up with Sarah Armstrong-Smith, Microsoft’s European Chief Security Advisor, about the impact of AI on cybersecurity and what the future of cybersecurity could look like.

 

1. Digital Acceleration Brings Evolved Threats

 

It’s estimated in the next five years that over half of the world’s data will live in the cloud. With that comes huge computational power that is available on demand and at scale, giving us the agility and flexibility to innovate.  

A clear effect of this is the accelerated development and use of smart technology in recent years. For example, more companies are investing in digital twins, which enable them to try different things and run diagnostics without having to do them in a physical environment.

However, this comes with concerns about security, particularly in high-tech areas that deal with intellectual property and other sensitive data.  

Global spending on cybersecurity is estimated to grow to about $1 trillion in just a few years.

What does this mean for security? Multiple factors must be considered to ensure privacy, security, and compliance with regulations. Beyond that, there’s also the defensive side of anticipating and managing evolving threats. Not only do we need to obtain information, but we also need to know how to act on it efficiently in real-time. This is where AI comes into play. 

 

2. Security is in Transformation

 

The early discourse on AI verged on alarmist, with warnings that the technology would eradicate jobs and leave millions of workers without an income. The World Economic Forum projected that AI and automation would displace over 5 million jobs by 2020. Yet the open market now has about 6 million unfilled cybersecurity jobs alone, and there is huge demand for talent in robotics, machine learning, IoT, big data, and AI.

The mass migration of businesses to the cloud at different levels of maturity has shifted expectations on the type of connectivity provided, the devices used, and the trustworthiness of those devices. Most enterprises are running a hybrid business model due to challenges with the legacy estate, which complicates cybersecurity efforts. Security leaders are compelled to understand the integration between the IT, IoT, and cloud environments in order to ensure a connected ecosystem that is smart, reliable, and safe.

With all that combined, there is an increased attack surface and multiple blind spots. Particularly, organizations are still deeply fragmented when it comes to their approaches. At the same time, cyberattacks are rising exponentially.  

Moreover, attackers are becoming more sophisticated. They can move very quickly and outmaneuver security operations and technologies of even larger organizations because they are not constrained by regulatory requirements. Attackers are also investing heavily in automation and scripting.  

Every time attackers bring out new malware or new attacks, we learn them, we counteract them, we build detections, and we automatically block the malware and the different attempts they make. However, we know that resources are at a premium, and we can’t just throw more money at the problem.

We have to think of ways to increase the attacker’s cost and reduce our own, improving our ability to act and respond as quickly as possible.

 

3. What Can CISOs Do? 

 

A. Prevention of threats 

The first imperative is that we need to prevent as many threats as possible with automation. We have to detect, we have to respond quickly, and we have to continually learn. As much as the attackers are learning our defenses, they’re learning about the technologies we have and how to counteract them. When they counteract those, the whole cycle starts again. We’re seeing this perpetual cycle of prevention, detection, and response.

B. Understand human attacker decision cycle 

With human operator attacks, particularly ransomware operators and nation-state actors, part of their attack profile is really their ability to observe. They have to sit, watch and learn about your environment.  

We know that attackers understand IT infrastructure very well. What they don’t understand is how you specifically deployed technologies in your environment or other technologies you are utilizing. They have to learn and orient across your environment, and potentially keep elevating privilege. They have to get access to different parts of the infrastructure to learn what to do and what attack is going to work best in this environment. 

This is the kind of cycle we’re really looking at when it comes to that human attacker decision cycle. From a security operations perspective, our job is really to understand this cycle. Irrespective of the fact that they probably have been in that network for weeks or months, it’s really at the point where they have triggered some kind of action that we respond. 

We need to get better. We need to be proactive and preempt. We need to understand how the attacker is operating, understand this cycle, and get into the mind of the attacker for us to be able to make some decisions.  

Where this comes to is our ability to defend and act quicker across that entire cycle, meaning we need to maximize the visibility of our network. We need diversity of threat intelligence, and we need that from different sources. Importantly, we need that real-time information.  

C. Automation + humans in threat detection and response  

The other thing that we need to do is reduce the number of manual steps and potential errors that may occur. Part of this is about the ability to automate detection and response. In security operations, we don’t want to be pivoting across different technologies, because that decreases the time we have to act. It potentially means there are going to be more errors, because we’ve got conflicting or duplicated information.

With that, we have to maximize human impact. We’ve got to get this information and intelligence in front of our humans because it’s humans that understand the context, it’s humans that understand the business risk, and it’s humans that understand consequences. 

We’ve got to get the human and automation layer right. This is about continuous learning.

We need automation, and we need to handle those evasion techniques. However, we also can’t stop every single attack now, because attackers are evolving at pace. Instead, we have to adopt an assume-compromise mindset.

 

What Does the Future Look Like in AI & Cybersecurity?

 

In terms of the future, we’re going to see more use of virtual reality and mixed reality. We’ve already talked about how AI and automation are going to really shift our ability to get deep insight. Looking at how attacks are evolving, it’s estimated that we’re going to see an IoT botnet that will probably be able to launch one of the biggest DDoS attacks we’ve ever seen. We will probably also see a cyberattack of such magnitude that a country will be forced to carry out a physical attack against the nation state that targeted it.

We will also start to see not just digital buildings but digital cities, which increases the attack surface. We’re going to see the proliferation of cyberattacks at scale, infiltrating IT, IoT, and OT simultaneously. That’s going to drive the need for regulatory control and human oversight with regard to how these AI and ML machines are working, the decisions they’re making, and their ability to cause disruption at that scale.

 

AI & Humans Must Evolve Together 

 

We’ve got to use AI and ML, but we also have to understand the behaviors of those humans and overlay these technologies with human expertise. We then have to increase the speed and quality of detection and response so that we can react dynamically, in real time, to threats as they happen. We have to keep speeding up the response with our orchestration and automation.

As we’re moving farther into mixed and augmented reality, the real value for security operators is the ability to visualize the infrastructure. When they can see the attackers coming through the network, literally moving across the estate, they can start to take action at scale.

Ultimately, we are not going to take any humans out of the equation. In fact, the reality is we’re going to have more augmentation between the AI and the human combined.

ChatGPT: Does it Pose More Opportunities or Threats to Businesses?

The generative AI tool ChatGPT has gained exponential interest and concern in equal measure since it launched in November 2022. Reaching 1 million users in just five days, ChatGPT is expected to bring its pioneer OpenAI $200 million by the end of the year. Before businesses jump on the ChatGPT bandwagon, it’s important to understand the opportunities and risks the tool presents, as well as its ethical considerations and limitations.  

In this exclusive interview, futurist and thought leader Gerd Leonhard shares his thoughts on ChatGPT and gives sound advice to business leaders on how to leverage and manage this emerging iteration of AI.  

*This article is a recap of our interview with Gerd Leonhard at the session, Unleashing the Power of ChatGPT: What Does It Really Mean for Business Transformation?  

 
Gerd Leonhard is a renowned futurist and thought leader. He was named one of Wired’s Top 100 Most Influential People in Europe and ‘one of the leading media futurists in the World’ by The Wall Street Journal. He is widely regarded as a global influencer and has advised business leaders from Fortune 500 companies as well as government officials and NGOs.
 

What is ChatGPT and what are its limitations?

It’s a large language model: AI software that looks for patterns and clues on how to answer a question by looking at millions of statements related to the question. Of course, language models and AI have been around for 20 years with deep learning and machine learning. But the fact that OpenAI decided to release ChatGPT to the public created a moon landing moment. ChatGPT is Sputnik, and the people on the moon are Bing and Google trying to integrate it. So, it’s a big deal for many reasons. I think it has great potential but also raises grave concerns, like every big technology.

However, ChatGPT isn’t going to be like the Metaverse, cryptocurrency, or blockchain. It’s a real game-changing moment and something we should be looking at across the board. 

 

Will ChatGPT replace existing search engines?

Search engines have been integrating ChatGPT gradually: You.com and Bing, for example, and soon Google, in the sense of having an optional chat bar. But we have to be careful about the relevance of those answers and fact-check, which ChatGPT currently can’t do in a meaningful way. AI is a tool that we have to remain skeptical of. To paraphrase Kevin Kelly, “humans are for questions, machines are for answers.” I think that is so true. In this case, we should not overestimate the truth and validity of the answers.

I think the key problem is that it’s so tempting. ChatGPT may turn into a huge laziness machine if we are not careful. With OpenAI’s recent partnership with Microsoft, we can expect fast development but it’s not going to replace search engines. Something to keep in mind is that humans make decisions by looking at a decision tree, images, and feelings.  

AI knows nothing about real life, it only knows data life. So, we should remain careful as to how much we rely on AI.  

 
 

What opportunities does ChatGPT present to businesses?

At this point, it’s at a very early stage at the commercial level. It’s not scalable. It’s not real-time. It does not have ethical guardrails. I think it’s far away from large-scale commercial use.  

There’s an old saying that goes, “one machine can replace 100 ordinary workers, but no machine can replace one extraordinary worker.” A person using generative AI will beat a person without it. The tool alone won’t beat anybody.  

It will be the same for businesses. If you don’t use these tools to become faster, smarter, and more efficient, you’re going to get left behind. I’ve said this for a long time: whatever can be automated, digitized, robotized, virtualized, and chatbot-ized will be. That has a significant impact on our structures, companies, profit margins, and so on. But that doesn’t mean we’re going to be out of work. Instead, AI frees up my time to do more meaningful work. So far, ChatGPT presents opportunities to the finance, customer service, and airline industries.

 

Do you think jobs will be lost to ChatGPT?

Technology needs to be used in a wise way. Look at call centers for example, I think 90% of those jobs will be automated but there will still be a need for supervisors. But yes, it will have an impact on routine jobs.  

If you work like a robot, a robot will take your job.  

If you learn like a robot, you’ll never have a job, or you’ll end up working for the robots. Think of Maslow’s Hierarchy of Needs; the same is true for jobs. The lowest level is data, information, and simple binary knowledge. Machines have an unlimited repository of knowledge, and this will be clear by 2030. We are moving to the next level of work, which is about tacit knowledge, quiet knowledge, understanding, wisdom, and purpose. There are plenty of jobs there. Also, I think if you’re going to save 50% of operating money using AI, that money needs to be put back to create possibilities for re-education and reskilling.

 

Can ChatGPT measure specific outcomes in terms of customer satisfaction?

I think it could do that well. Imagine feeding all your data into an internal version of ChatGPT. You could ask intelligent questions like, “Do customers really like this product?” It could yield some results that are astonishing. Of course, data security and safety are issues here. In addition, a lot of data that humans use is not data that machines get. So, I think we need to be using it for trivial work. For example, travel agencies can utilize ChatGPT to figure out where their customers most likely want to go. From there, they can create a web page with that offering, alter codes on existing web pages, and more.  

 

How can ChatGPT be used to generate creative and engaging content for marketing and sales purposes?

There’s a great article in The Atlantic about how prompting AI is becoming a skill. Prompting means giving the right commands, but if you drill deeper and ask more complex questions, you will get more complex answers. So, you need to be good at prompting. One thing businesses can try immediately is to prompt about their own products. You can ask how customers are receiving a product, what they don’t like about it, and more.

However, keep in mind that it’s not real-time as ChatGPT’s database stops in 2021. That is one of the great confusions about ChatGPT and generative AI in general. I don’t think it’s possible to make it real-time. Because real-time is an infinite universe with data doubling every two weeks. That is where search engines have an advantage over generative AI.  

 

What is your advice to business leaders who want to implement ChatGPT in their businesses?

Firstly, experiment and try everything, but be careful at the same time. For example, many companies thought that social media marketing could replace all marketing activities and save them money. Turns out that wasn’t true; social media is expensive. So, you’re not going to save money because of this tool; you’re probably going to save some time. Appoint someone to oversee this and go for the low-hanging fruit. For instance, explore bots for customer service and microsites.

Secondly, we should embrace technology but not become technology.  

We should establish clear borders as to what it’s good for and what it’s not. You don’t want to risk losing the trust of your customers. Too much of a good thing can be bad, and that is true with technology. Too much technology can really make a mess of your organization and customer communication.

 

*The interview has been edited for length and clarity.  

Insurance Fraud Detection Using Machine Learning: What You Should Know

Fraudulent insurance claims cost insurance companies and consumers in Europe €13bn annually. Insurance fraud is rife, especially in the property, automotive, and healthcare sectors. Insurance companies are recognizing the need to adopt digital innovations urgently to reduce instances of fraudulent claims and better prepare for future threats. According to a report by Forrester, global investments in Insurtech exceeded $15B in 2021. 

How can AI and machine learning help your organization detect insurance fraud more effectively?

 

How to Detect Insurance Fraud

 

Investigating fraudulent claims is costly and time-consuming for insurers. It is physically impossible for insurance companies to do a thorough check of the thousands of claims that enter their systems daily.   

Early computerized systems could only do so much, allowing rudimentary analysis and searches for fraud indicators known as red flags. A big limiting factor with these systems was that fraudulent claims had to fit a particular template or they would not be recognized. New technology is therefore a blessing to insurance companies, providing game-changing solutions to enhance and automate processes along the insurance value chain.

Nordic insurance companies have already modernized their fraud detection processes with robotic process automation (RPA), which assists in verifying information located in different sources to detect the right data. Using RPA, one insurance company cut its claims cycle time from 6-10 minutes to 90 seconds.

That being said, how do insurers ensure the utmost accuracy in filtering out fraudulent claims? This is where machine learning comes in. 

 

Machine Learning to the Rescue  

 

AI is known for simplifying menial tasks and freeing human agents to do more complex analyses. In terms of insurance fraud detection, machine learning applies aspects of AI to give systems the ability to improve from experience with no extra programming by analyzing large, labeled data sets.  

Machine learning can improve fraud detection techniques in the following ways: 

  • Processes data in a short period of time.  
  • Highlights where connections can exist between various factors that human eyes cannot detect. 
  • Applies various data analysis techniques to allow the discovery of new fraud schemes. 

Although it borrows underlying principles from statistical models, the main focus of machine learning is producing predictions. These predictions are based on the analysis of known outcomes, known as “ground truth.” Machine learning can also search for fraud in unstructured and semi-structured data such as claims notes and documents.

Furthermore, machine learning can prevent fraud by detecting suspicious patterns in claims processing and customer background checks, potentially saving insurers a lot of money. Since investing in a fraud prevention system, one Turkish insurer saved $5.7 million and recorded a 210% increase in ROI.

 

The Insurance Fraud Detection Dataset 

 

The ground truth provides a label that identifies the outcome of each claim based on a historical dataset of insurance claim information and patterns. While there are varying outcomes between insurance claims, the labels are generally divided into “valid” claims or “fraudulent” claims.  

Health Insurance Fraud Detection Dataset 

In this case study, there are close to a million claims records with more than 20 variables. Claims have been assessed and labelled as either normal or flagged for possible fraud. Flagged claims showed signs of suspicious policy profiles, malicious agencies, or claims- or hospital-related fraudulent behavior. A machine learning model, a so-called binary classifier, was created to distinguish the two labels as accurately as possible. A supervised learning approach was applied since the data was already labelled.
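A minimal sketch of such a supervised binary classifier, assuming scikit-learn, pandas, and a hypothetical `health_claims.csv` with numeric claim features and a 0/1 `label` column (the article does not publish the actual dataset or model):

```python
# Hedged sketch: train a binary fraud classifier on labelled claims.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

claims = pd.read_csv("health_claims.csv")   # hypothetical labelled claims file
X = claims.drop(columns=["label"])          # 20+ numeric claim variables
y = claims["label"]                         # 0 = normal, 1 = flagged for fraud

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# Per-class precision and recall matter more than raw accuracy for fraud.
print(classification_report(y_test, clf.predict(X_test)))
```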

Auto Insurance Detection Dataset 

This project highlights the challenge of building a model that can detect fraud when legitimate insurance claims far outnumber fraudulent ones, a problem known as imbalanced class classification. The data set consists of 1,000 auto incidents and insurance claims with a total of 39 variables before any cleaning or feature engineering. Specific types of machine learning models, such as neural networks, natural language processing, and network graph analytics, were also utilized on this dataset.
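One standard remedy for this imbalance, sketched here on synthetic data with scikit-learn, is to weight the rare fraud class more heavily and judge the model by per-class precision and recall rather than overall accuracy; the shapes mirror the case study (1,000 claims, 39 variables), but the data itself is invented:

```python
# Hedged sketch: imbalanced-class fraud detection with class weighting.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 1,000 claims, 39 features, ~5% fraudulent.
X, y = make_classification(
    n_samples=1000, n_features=39, weights=[0.95, 0.05], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

# class_weight="balanced" penalises mistakes on the rare fraud class more.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```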

 

Anomaly Detection in Insurance Fraud

 

Deep anomaly detection is a popular form of machine learning that the insurance industry can use to detect fraud. In claims processing, anomaly detection analyzes genuine claims from consumers and forms a model of what a typical claim looks like, which is then applied to larger data sets. Insurers can also use anomaly detection to identify suspicious user behavior on an insurer’s network. In addition, deep anomaly detection can be combined with other AI applications, such as predictive analysis, to further automate the fraud detection process.
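A hedged sketch of that idea using scikit-learn’s IsolationForest: fit on typical claims, then score new ones. The two features and all the numbers are invented for illustration:

```python
# Hedged sketch: flag anomalous claims with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Model "typical" claims with two numeric features, e.g. amount and days open.
normal_claims = rng.normal(loc=[1_000, 14], scale=[300, 5], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_claims)            # learn what a typical claim looks like

new_claims = np.array([[1_100.0, 15.0],    # ordinary claim
                       [25_000.0, 2.0]])   # large amount, closed unusually fast
print(detector.predict(new_claims))    # 1 = looks normal, -1 = anomaly
```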

 

Insurance Fraud Detection Using Big Data Analytics  

 

The Digital Insurer recommends a 10-step approach to implement analytics in fraud detection: 

  1. Perform a SWOT analysis – Conduct a SWOT analysis of existing fraud detection frameworks and processes to identify gaps.
  2. Build a dedicated fraud management team – It is important to have a team, not an individual, handling fraud claims.
  3. Decide whether to build or buy – Companies must evaluate whether they have the capacity and resources to build their own analytics framework or whether they need to engage an external vendor.
  4. Clean data – Remove inefficiencies and redundancies and integrate siloed databases.
  5. Come up with relevant business rules – Companies should leverage existing domain expertise and experienced resources.
  6. Set pre-determined anomaly prediction thresholds – Companies should provide inputs for the threshold values of different anomalies.
  7. Use predictive modelling – An effective fraud detection method uses data mining tools to build models that produce fraud propensity scores linked to unidentified metrics (see the sketch after this list).
  8. Use social network analysis (SNA) – Identify fraud effectively by modelling the relationships between the various entities involved in a claim.
  9. Build an integrated case management system leveraging social media – This allows investigators to capture all key findings relevant to an organization, including claims data and social media data.
  10. Adopt forward-thinking analytics solutions – Insurers should always be on the hunt for additional sources of data to improve existing fraud detection systems.
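As a brief illustration of step 7, a trained classifier’s predicted probability of fraud can serve as the propensity score used to triage claims; this sketch runs on synthetic data and assumes scikit-learn, since the article names no specific tooling:

```python
# Hypothetical propensity scoring: rank claims by predicted fraud probability.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for historical claims (~10% fraud).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Probability of the fraud class acts as a propensity score per claim;
# high-scoring claims are routed to human investigators first.
scores = model.predict_proba(X[:5])[:, 1]
for i, s in enumerate(scores):
    print(f"claim {i}: fraud propensity {s:.2f}")
```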

An insurance company’s efficacy in distinguishing between valid and fraudulent claims plays a big part in determining its financial strength, allowing optimal compensation and support for its customers. 

Alin Kalam: Nurturing Growth and Innovation Through Data, AI, and Sustainability

The IT industry continues to grow and shift rapidly due to the pandemic, and CIOs are constantly on the lookout for ways to foster and adopt new technologies in their organizations. Whether it is sustainable transformation or implementing AI, change is necessary.

As the Head of International Market Intelligence & Data Strategy for UNIQA International, Alin Kalam shares with us his insights on the need for agility through AI, achieving business competence, and nurturing innovation.

 
 

Finding Agility in Artificial Intelligence and Overcoming Disruptions

Businesses and IT leaders today need to be quicker to respond to the ever-changing landscape of their industry and overcome disruptions. Whether it’s to implement hybrid workplace models or to incorporate new technologies such as artificial intelligence and data analytics, there is a definite need for CIOs to strategize.

Kalam shares his insights on the key challenges that CIOs need to be aware of when incorporating new technology and how to effectively transition towards data-driven business models.

 

What are the key challenges for CIOs who are trying to adopt new technologies especially in the AI field?

 

Surely one of the major challenges of establishing AI technologies in companies is a lack of trust and the limited knowledge that exists. On the technical side, I have seen IT productionizing and operational issues arising since 2019.

Often it is not best practices that are lacking, but the ability to align market circumstances and existing technologies with one’s own true business needs. Therefore, I see the cultivation of AI-driven innovation nowadays much more as a strategic challenge than only a technological one.

 

What should CIOs be aware of in the transition towards data-driven business models that serve dehumanization of critical business fields?

 

On the one hand, dehumanization must be done quickly to address short-term issues, e.g. through the implementation of RPA or AI products to combat challenges caused by Covid. On the other hand, CIOs must balance strategically what and where they are automatizing/dehumanizing. I have already seen examples of cost reduction projects through dehumanization that are creating huge strategic risks for companies in the long run.

For sure there will someday be an “after Covid”, and using the current crisis as a scapegoat for cost-cutting only, without putting the focus on the product portfolio, customer needs, and above all the operational risks of IT systems, can become a huge source of risk.

Here I appeal to strategic long-term aspects rather than short-term gains only, and to address this concern CIOs must become business-driven more than ever!

 

The Need For Sustainability and Competent Business Intelligence

Companies were forced to change their policies, behaviors, and business strategy due to the prolonged coronavirus pandemic. The recent COP26 climate conference showed that companies are committed to making sustainable-focused organizational changes.

For Kalam, the need for sustainability in IT is clear. He highlights the challenges that many are still facing, in addition to the need to incorporate competent business intelligence to ensure sustainable growth.

 

Sustainable transformation in the IT & innovation field has become a key topic for upcoming years. What are the specific areas of action for CIOs in this field?

 

For sure, sustainability as a topic is here to stay! Not only do we have the macro aspects of it addressing the major concerns of our time, but it has also become a business driver in so many sectors.

With my project Sustainista, I have therefore tried to interconnect companies with the scientific community, ensuring the exchange of data, know-how, best practices, and transparency. The biggest challenge in this field is the simultaneous lack of market and scientific standards. ESGs might be known to many of us, but breaking them down into business actions according to standard approaches/processes is the biggest challenge!

In an ideal world, CIOs and related roles take ownership of this topic and break it down into doable tasks; otherwise, I am afraid sustainability will be just a cosmetic and marketing label without a true impact on business and how we do things.

A particular starting point is to understand macro goals as an organization and break them down to the data level, delivering measures and related actions with the help of existing data. Many companies I know from various sectors have started with external data sets first to deliver the quick successes that can feed this long-term topic.

 

How would you advise companies who are still struggling to incorporate Business Intelligence?

 

Here I clearly follow the storyline of “fail fast, succeed sooner”. Instead of propagating a piece of technology, IT must build a bridge with the business and deliver quick wins. Even now I am often devastated whenever I see only PDFs and Excel sheets with numbers/KPIs that do not reflect the fast-moving reality of our businesses and data-driven decision-making across borders!

The major issues companies face are data quality, integrity, and security. CIOs are hereby in the role of process enablers. Instead of being only technology-driven, the implementation of BI must often be done in a joint-venture manner.

 

Ensuring Growth Through Data and Overcoming Legacy Challenges

One of the biggest hurdles for digital transformation efforts still stems from legacy systems that are often outdated and not integrated with modern solutions for business uses. Despite the fact that modernizing legacy IT systems is required for businesses to ensure growth, IT leaders are still faced with roadblocks and challenges.

For Kalam, however, legacy systems are not necessarily the main roadblock they once were. Instead, the focus for CIOs now should be to apply best practices during data-driven business transformation and to simplify their approach to nurturing experimentation.

 

With regards to data-driven business models, what are the best practices that CIOs and IT leaders need to keep in mind? 

 

As a matter of fact, the approach of data-driven business transformation is anything but data-centric only! It covers the end-to-end processes of entire product lines and the strategic setup of a company. After many years of data harmonization/migration projects, companies often discover their undone homework regarding “creating true business value for the company itself and its customers”.

I myself often propagate the term “no business value without data, no data without a business case”. Between this symbiotic relationship lies the true success of transformation efforts. 

Aside from this core topic, I often miss the foresight of wisdom! It means seeing the potential of data not only in core businesses but in its extensions and added capacities. In my view, this foresight of wisdom and true added potential is often the key success factor for many.

 

One of the main challenges for organizations is to overcome legacy infrastructure. How can CIOs overcome the legacy obstacle? What are the skills and mindset needed to promote modernization for an organization?

 

To be honest, I really do not see legacy infrastructure as the biggest roadblock anymore. Over the last decade in particular, there has been so much progress in simplifying legacy systems that my own experience has made me more optimistic on that front!

I can't remember when I last saw companies migrating legacy data systems into a new all-in-one, all-ruling, superior data warehouse or data lake. Instead of searching for the holy grail, we have become more realistic about using data where it is created and where it is at its best.

This data mesh approach has become a blueprint for software solutions as well, just as agility was cultivated in the IT and software world and then carried into day-to-day business and project management. But this process only began a couple of years ago, and the community does not yet have a buzzword for it; but hey, never say never…!

 

Innovation and experimentation are at the heart of data-driven business models. How does one nurture an environment that promotes experimentation within their organization?

 

I rigorously follow the KISS principle (keep it simple, stupid) in the incubation phase of innovation projects. Instead of only talking and selling in this phase, organizations should apply this principle, keep governance to a minimum set-up of risk-mitigation processes covering GDPR, privacy, organizational risks, and so on, and allow experimentation.

Here the old wisdom that “too many rules and regulations kill true innovation and creativity” should be applied. 

If the internal challenges are too big, I have often guided companies and their leading bodies into the world of entrepreneurship. 

The most successful CIOs and IT managers are those who run new innovation ideas or projects as a start-up business operating from day one. This can be a way of nursing the true nature of innovation when nothing else is working.

Lokke Moerel: Digital Sovereignty and the Changing Landscape of AI & Privacy Laws

As we enter the second half of 2021, it is becoming evident that societies worldwide have embraced digital transformation as part of their everyday lives. This is backed by the fact that half of the world now uses social media and at least 4.66 billion people around the world now use the internet.

However, as societies become more digitized, the vulnerabilities that come with digitization also increase: malware attacks rose by 358%, remote working during Covid-19 significantly increased the risk of successful ransomware attacks, and difficult-to-combat online conspiracy theories of the anti-vax and anti-5G movements were stimulated by Russian infiltration.

Lokke Moerel, professor of Global ICT Law at Tilburg University and member of the Dutch Cyber Security Council, shares her insights into the need for digital sovereignty within the EU and how AI and privacy laws are changing rapidly due to digitization.

 

Accelerating Digital Sovereignty across Europe

 

In today’s increasingly digitalized landscape, more and more users feel the need to keep their data safe and are willing to leave popular platforms, such as WhatsApp, over a change in privacy terms.

With 92% of Western data being kept in the US, EU nations have realized the need to adopt a joint strategy on how data is controlled and shared. While fostering the Digital Single Market is needed for innovation to thrive, effective safeguards must be put in place to protect users in a data-driven world.

Lokke goes into detail about how the current situation has exacerbated the need for digital sovereignty in the EU, particularly for the Netherlands as advised by the Dutch Cyber Security Council.

 

Europe has been focusing on digital sovereignty and recently, the Dutch Cyber Security Council issued public advice that the digital sovereignty of the Netherlands is under pressure. What does digital sovereignty mean?

 

We are one of the most digitalized societies, and this has been accelerated by the corona crisis. Within no time, people were working from home and children were being schooled online. It was amazing to see how quickly we were up and running again. However, every upside has its downsides, and we saw new vulnerabilities and dependencies:

  • A tremendous increase in the activities of cyber criminals abusing the vulnerabilities that come with remote access to systems when people work from home.
  • Foreign states stealing COVID-19 research.
  • Flaws in the privacy and security of video tooling.
  • More data on children being held in the clouds of non-EU providers due to the increased use of digital teaching tools.
  • The dependency of the Netherlands on social media platforms for combating misinformation, and the government's lack of control to combat it.

The core message of the public advice of the Council is that our digital dependencies are now so great that the digital sovereignty of the Netherlands is under pressure. This goes further than guaranteeing the cybersecurity of our critical IT systems and the data generated with these systems. We also need to maintain control over our essential economic ecosystems and democratic processes in the digital world.

 

Can you give us examples of how digital sovereignty (or lack of it) can affect the economic ecosystems and democratic processes?

 

Examples concerning essential ecosystems:

Lack of control over critical technologies will result in new dependencies. For example, without proper encryption, we will not be able to protect the valuable and sensitive information of our governments, companies, and citizens. Current encryption will not hold against the computing power of future quantum computers.

We will therefore have to innovate now in order to protect our critical information in the future as well. This is relevant not only for future information but also for current information: do not forget that foreign states systematically intercept and preserve encrypted communications in anticipation that these may be decrypted at a later stage. 
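
To make this intercept-now, decrypt-later risk concrete: post-quantum schemes typically exchange keys through a key encapsulation mechanism (KEM). The sketch below shows what such a handshake looks like in Python. It is a minimal illustration only, assuming the open-source liboqs-python bindings (the oqs package) with the Kyber512 KEM enabled in the underlying build; the library and algorithm choice are assumptions made for this example, not something referenced in the interview.

import oqs  # liboqs-python bindings; an assumed, illustrative dependency

KEM_ALG = "Kyber512"  # post-quantum KEM chosen here purely for illustration

# The receiver generates a post-quantum key pair and publishes the public key.
with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates a fresh shared secret against that public key.
    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

    # Only the receiver's private key can recover the same shared secret.
    secret_at_receiver = receiver.decap_secret(ciphertext)
    assert secret_at_receiver == secret_at_sender
    # The shared secret can now key a symmetric cipher, so communications
    # recorded today would remain protected against future quantum attacks.

In practice, such schemes are often deployed in hybrid mode alongside a classical elliptic-curve exchange, so that security does not rest on the newer algorithm alone.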

Making large-scale use of data analysis with AI requires enormous computing power (and therefore cloud computing) as well as access to large quantities of data, which in turn requires combining data within specific industry sectors (such as health); this is currently difficult.

Efficient access to harmonized data and computing infrastructure will become the foundation for the Dutch and European innovation and knowledge infrastructure. Maintaining control over this is an essential part of our strategic autonomy.

Examples concerning democratic processes: when the state is not in control of the election process, because foreign states use targeted misinformation and systematically infiltrate social media to influence citizens, our digital sovereignty is at stake.

We see that digital sovereignty is very high on the EU’s agenda. For our neighbor Germany, for example, it is Chefsache (a matter for the top boss). In the Netherlands, however, we mainly respond to cyber threats in a technical and reactive manner. We respond in crisis mode. 

The Council thinks it is high time for a more coordinated and proactive approach, starting with ensuring three basic facilities: a sovereignty-respecting cloud for secure data storage and data analysis, secure digital communication networks, and post-quantum cryptography.

 

CISOs and Their Roles in Digital Sovereignty

 

At the core of digital sovereignty issues is the need to safeguard information assets for European countries.

As the Netherlands continues to build upon its Dutch Digitalisation Strategy 2.0 and integrate more cloud-based technologies within its economic ecosystems and democratic processes, it is up to chief information security officers (CISOs) to be aware of what it all means for an organization and how it affects its cloud strategies.

 

What does digital sovereignty mean for the CISO?

 

Most governments and companies will have a corporate cloud policy. I see that these policies really try to address the direct requirements of a specific cloud project. 

When deciding whether to bring services to the cloud, the company will weigh up the benefits of public cloud (better security, better functionalities) on a project-by-project basis against the specific dependencies and security issues in the project in question.

However, considerations of loss of sovereignty are not taken into account. As a result, each individual decision can be justified, but together these decisions do threaten our sovereignty, for example where in the future you want to be able to process data across cloud solutions (an instance of the tragedy of the commons).

I think it is important for CISOs to be aware of all the EU initiatives to increase our digital sovereignty.

 

What should they be aware of in terms of initiatives?

 

GAIA-X: many people think that the GAIA-X project is about setting up a European cloud infrastructure. GAIA-X is, however, not about creating Europe’s own vertical cloud hyperscalers. Nor is it about keeping non-EU cloud service providers out or keeping all data within the EU. It is about achieving interoperability between cloud offerings by setting common technical standards and legal frameworks for cloud infrastructure and services. 

This form of interoperability goes beyond the portability of data and applications from one vendor to another to prevent vendor lock-in; it really concerns the creation of open APIs, interoperability of key management for encryption, unambiguous identity and access management, full control over storage of and access to data, and so on.

Worth keeping track of, I would say.

European Data Spaces: common data spaces intended to unlock the value of European data for innovation. 

The aim is to create common data spaces for certain sectors with common interests (e.g., health data and governments) so that the scale of data required for innovation in those sectors can be achieved.

 

Looking Into AI and Its Purpose in Cyber Security

 

As remote working conditions and digital processes continue to become the norm for users and organizations, cyber attacks are becoming increasingly prevalent. An estimated 95% of cybersecurity breaches are the result of human error, and with the information security market expected to reach $170 billion in 2022, the cost of digital attacks can be enormous.

AI is often seen as a silver bullet for organizations to combat cyber attacks and increase resilience in the areas where most human error lies. However, Lokke describes the potential and possibilities of AI as both good and bad, depending on how it is utilized.

 

What scares you the most regarding the seemingly endless possibilities of AI?

 

Like all technology: AI is not good, it is not bad, but it is also not neutral. 

To start with, AI is only as good as the purpose for which it is used. In the cyber context, this means that we really should stay ahead of the bad guys. 

New technologies play an increasingly crucial role in cyber resilience. If we are not on top of new technologies like AI and encryption, this will result in new vulnerabilities and dependencies. An example here is that with AI, bad actors can detect and exploit vulnerabilities automatically and on a large scale.

However, AI is also expected to make it possible to automatically detect and patch vulnerabilities. I am currently involved in a research project to investigate what options there are to facilitate real-time security patching by suppliers.

 

Privacy Laws in the EU and Their Future

 

With digital sovereignty being top-of-mind for EU nations and the increased awareness of data privacy among the public, governments and regulators understand that there is a need for comprehensive privacy laws that protect both users and businesses.

From the California Privacy Rights Act to the ever-evolving GDPR, more and more data protection acts are being introduced and implemented across the globe. Moerel shares her views on how privacy laws will continue to shift and adapt to the new digital landscape and what global privacy laws mean for an organization.

 

In what ways do you see privacy laws changing in the future?

 

Every week a new privacy law is adopted somewhere in the world. By now there are about 130 countries with omnibus ‘GDPR-style’ privacy laws. Everybody has heard about the California Privacy Rights Act, but it is less well known that, by now, 20 other U.S. states have introduced privacy bills. 

In the EU we now have the European Commission’s draft proposal for an AI regulation, and it is not a risky prediction to say that – as happened with the GDPR – other countries will look at this draft and start preparing their own legislative proposals.

The way to deal with this myriad of global rules is to implement a very robust company-wide security and privacy protection program. After all, compliance with the law is a baseline you cannot go below. Do a proper job, and you do not have to worry about compliance. 

In the end, it is about trust more than compliance.