Emerging Cybersecurity Trends for 2024

Organizations face an unprecedented array of cyber threats that constantly evolve in complexity and sophistication. It is imperative for security and IT leaders to stay ahead of the curve by exploring emerging cybersecurity trends that will help safeguard their organization’s valuable assets and maintain a robust security posture.

From the convergence of networking and security to threat intelligence and the Cybercrime Atlas, we explore the transformative trends shaping the future of cybersecurity.


1. Convergence of Network and Security


Before the rise of hybrid clouds and networks – an estimated 76% of organizations use more than one cloud provider – businesses would build their security layer on top of their networks. However, the architectural complexity of this approach led to poor user experience, increased cybersecurity risk, and many challenges in maintenance and troubleshooting.

As the threat landscape evolves alongside technological advancements, organizations need a modern approach to security and networking that offers end-to-end visibility, allowing quicker identification of and reaction to potential threats.

One way to do this is by converging networking and security. The three main aspects of this are:

  1. Adopting a distributed firewall: Also dubbed a hybrid mesh firewall by Gartner, a distributed firewall lets organizations secure their entire network infrastructure – every location, device, content type, and application – by enforcing a network-wide security policy such as Zero Trust.
  2. Consolidating vendors: Instead of selecting vendors based on “best of breed”, companies should consolidate technology vendors to just a few whose solutions work together in the ecosystem. Solutions designed to work together lead to a well-integrated security network, allowing security teams to optimize their strategies.
  3. Implementing an OT-aware strategy: Organizations must create a layer of defense around the OT components connected to their network, using capabilities like network access control, data segmentation, and micro-segmentation to strengthen the security of OT devices and move toward a zero-trust model (see the sketch after this list).
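To make the segmentation and Zero Trust ideas above concrete, here is a minimal Python sketch of a default-deny policy check between network segments; the segment names, ports, and rules are purely illustrative and do not refer to any particular vendor’s product.

```python
# Minimal sketch of a default-deny (zero trust) segmentation policy.
# Segment names, ports, and rules are illustrative only.

ALLOWED_FLOWS = {
    # (source segment, destination segment): permitted destination ports
    ("corp-users", "corp-apps"): {443},
    ("corp-apps", "corp-db"): {5432},
    ("ot-engineering", "ot-plc"): {502},  # Modbus/TCP stays inside the OT zone
}

def is_flow_allowed(src_segment: str, dst_segment: str, dst_port: int) -> bool:
    """Deny by default; permit only flows explicitly listed in the policy."""
    return dst_port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

# Example: corporate workstations cannot reach OT controllers directly.
print(is_flow_allowed("corp-users", "ot-plc", 502))      # False
print(is_flow_allowed("ot-engineering", "ot-plc", 502))  # True
```

The same default-deny posture applies whether the policy is enforced by a hybrid mesh firewall, micro-segmentation, or network access control.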

Evolving approaches and perspectives to network and security are imperative to meet changing organizational demands, the fluctuating threat landscape, and emerging technological advancements.


2. Threat Intelligence


Also known as cyberthreat intelligence or CTI, threat intelligence is data on cybersecurity threats that is collected, processed, and analyzed to understand potential targets, attack behaviors, and motives. Threat intelligence enables security teams to be more proactive and data-driven in preventing cyberattacks, and it supports more efficient detection of and response to attacks that do occur. The result is reduced cybersecurity risk, fewer data breaches, and lower costs.

IBM notes that cyber intel reveals trends, patterns, and relationships that will give an in-depth understanding of actual or potential threats that are organization-specific, detailed, contextual, and actionable. Threat intelligence is becoming an indispensable tool in the modern cybersecurity arsenal.

According to Gartner, the six steps of the threat intelligence lifecycle are:

  1. Planning: Analysts and stakeholders within the organization come together to set intelligence requirements that typically include questions stakeholders need answers to such as whether new strains of ransomware are likely to affect their organization.
  2. Threat data collection: Based on the requirements defined in the planning stages, security teams collect any raw threat data they can. For example, research on new malware strains, the actors behind those attacks, and the types of organizations that were hit, as well as attack vectors. The information comes from threat intelligence feeds, information-sharing communities, and internal security logs.
  3. Processing: The team then processes the data on hand in preparation for analysis. This includes filtering out false positives and applying a threat intelligence framework. Threat intelligence tools that use AI and machine learning can automate this stage of the lifecycle by detecting trends and patterns (a simplified sketch follows this list).
  4. Analysis: The raw data is analyzed by experts who test and verify the identified trends, patterns, and insights to answer the questions raised and make actionable recommendations tailored to the organization’s security requirements.
  5. Dissemination: The insights gained are shared with the relevant stakeholders, which can lead to action being taken based on those recommendations.
  6. Feedback: Both stakeholders and analysts look back on the latest threat intelligence lifecycle to identify any gaps or new questions that may arise to shape the next round of the process.
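As a rough, hypothetical illustration of the collection and processing stages, the Python sketch below merges indicators from two invented feeds, drops duplicates, and filters out allowlisted values as a stand-in for false-positive filtering; the feed contents and field names are made up for the example.

```python
# Illustrative sketch of the collection/processing stages of the CTI lifecycle.
# Feed contents and field names are invented for the example.

raw_feed_a = [
    {"indicator": "198.51.100.7", "type": "ip", "source": "feed-a"},
    {"indicator": "evil.example.net", "type": "domain", "source": "feed-a"},
]
raw_feed_b = [
    {"indicator": "198.51.100.7", "type": "ip", "source": "feed-b"},  # duplicate
    {"indicator": "10.0.0.5", "type": "ip", "source": "feed-b"},      # internal, likely benign
]

ALLOWLIST = {"10.0.0.5"}  # known-good values treated as false positives here

def process(feeds):
    """Merge feeds, then drop duplicates and allowlisted indicators."""
    seen, cleaned = set(), []
    for record in (r for feed in feeds for r in feed):
        value = record["indicator"]
        if value in seen or value in ALLOWLIST:
            continue
        seen.add(value)
        cleaned.append(record)
    return cleaned

for record in process([raw_feed_a, raw_feed_b]):
    print(record["type"], record["indicator"], "from", record["source"])
```

Commercial threat intelligence platforms layer enrichment, scoring, and machine learning on top of this kind of basic normalization.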

3. Employee Trust


Though zero trust is growing as a cybersecurity principle – and it has proven effective in protecting organizational assets – overapplying this approach to employees could have negative effects in the workplace.

Cerby’s State of Employee Trust report found that 60% of employees reported that when an application is blocked, it negatively affects how they feel about the organization. Overreliance on controls that block, ban, and deny employees the use of specific applications erodes employee trust and reduces job satisfaction. In fact, 39% of employees are willing to take a 20% pay cut in exchange for the freedom to choose their own work applications.

Though the zero trust approach lowers the cost of data breaches by 43% (IBM), the same approach cannot be applied to employees. The Cerby study found that higher employee trust led to higher levels of workplace happiness, productivity, and contribution.

Experts recommend that organizations adopt an enrolment-based approach to security that balances cybersecurity and compliance requirements with trust-forward initiatives. This will help organizations build digital trust with their employees by giving them more control over their tools while maintaining security and reliability.

Other trust-based initiatives that can build employee trust include:

  • Ongoing training and support to keep employees updated on the latest tools and technologies.
  • Incorporating employee feedback into the decision-making processes.
  • Constantly communicating with employees on their workflows and security needs.

4. Cybercrime Atlas


The Cybercrime Atlas is an initiative announced by the World Economic Forum (WEF) in June 2022 to create a database that maps cybercriminal activities. Law enforcement bodies across the globe can then use this database to disrupt the cybercrime ecosystem. The first iteration of the Cybercrime Atlas was officially launched in 2023. The concept was ideated by the WEF’s Partnership against Cybercrime group, which is made up of more than 40 public and private organizations. The Cybercrime Atlas itself is built by the WEF in collaboration with Banco Santander, Fortinet, Microsoft, and PayPal.

Though the Cybercrime Atlas won’t be available for commercial use, its use by law enforcement agencies will create ripples in the cybersecurity landscape. Analysts from around the world were brought together to develop a taxonomy for the Atlas. From there, 13 major known threat actors became the initial focus. Analysts used open-source intelligence to collect information about these threat actors, from their personal details to the types of malicious services they used. The information collected was investigated and verified by humans, and the data will eventually be shared with global law enforcement groups such as Interpol and the FBI for action.
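The Atlas’s internal schema has not been published, so purely as an illustration of the kind of record a taxonomy-driven, human-verified OSINT process might produce, here is a hypothetical Python sketch; every field name is an assumption.

```python
# Hypothetical record structure for a threat-actor entry; the real Cybercrime
# Atlas schema is not public, so all field names here are invented.
from dataclasses import dataclass, field

@dataclass
class ThreatActorRecord:
    name: str                                                     # actor or group alias
    aliases: list[str] = field(default_factory=list)
    malicious_services: list[str] = field(default_factory=list)   # e.g. phishing kits
    infrastructure: list[str] = field(default_factory=list)       # shared domains, hosts
    sources: list[str] = field(default_factory=list)              # open-source references
    human_verified: bool = False                                  # analysts confirm before sharing

example = ThreatActorRecord(
    name="example-group",
    aliases=["eg-crew"],
    malicious_services=["phishing-as-a-service"],
    sources=["https://example.org/osint-report"],
    human_verified=True,
)
print(example)
```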

The goal of the Cybercrime Atlas is to create an all-encompassing view of the cybercrime landscape including criminal operations, shared infrastructure, and networks. The predicted result of this is that the security industry will be better able to disrupt cybercrime. By February 2023, the project moved from its prototype phase to a minimum viable product. Essentially, there are now dedicated project managers and contributors working to build the database and work out the relevant processes.

It was also noted that the information being used to build the database is open-source, meaning there is no issue with country-specific regulations on data. Once the open-source repository is created, there will not be security or proprietary constraints in sharing the data with local law enforcement agencies.

Though commercial organizations will not be directly using the Cybercrime Atlas, they will still indirectly benefit from it. As the project develops and matures, law enforcement agencies will be better equipped to investigate cybercrimes and catch threat actors.

Dr Rebecca Wynn: “We Didn’t Think of AI Privacy By Design”

In the era of the AI revolution, data privacy and protection are of utmost importance.

As the technology evolves rapidly, risks associated with personal data are increasing. Leaders must respond and adapt to these changes in order to push their businesses forward without sacrificing privacy.

We speak to award-winning cybersecurity expert Dr Rebecca Wynn about the data privacy risks associated with AI and how leaders can mindfully navigate this issue.

 
Dr Rebecca Wynn is an award-winning Global CISO and Cybersecurity Strategist, and the host and founder of the Soulful CXO Podcast. She is an expert in data privacy and risk management and has worked with big names such as Sears, Best Buy, Hilton, and Wells Fargo.
 

From a business perspective, how has AI changed how companies approach data protection and privacy?

Honestly, right now many companies are scrambling. They hope and pray that it’s going to be okay, and that’s not a good strategy. One of the things we see with leading technologies is that the technologies come up first. Then governance, risk, and compliance people play catch up. Hopefully, in the future, this will change, and we will be on the same journey as the product. But right now, that’s not what I see.

Once the horses are out of the barn, it’s hard to get them back. Now we’re trying to figure out some frameworks for responsible AI. But one thing people need to be careful about is their data lakes. Is misstated data going into the data lake?

From a corporate perspective, are you monitoring what people are putting into the data lake? Even from your own individuals, are they putting your intellectual property out there? What about company-sensitive information? Who owns that property? Those are the things that are very dangerous.

For security and compliance, you really need to be managing your traffic, and you do that through education on the proper use of data.
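As one hedged illustration of what monitoring data lake ingestion could look like in practice, the Python sketch below screens records for simple markers of sensitive content before they land in the lake; the patterns and labels are invented for the example and are nowhere near a complete data loss prevention solution.

```python
# Illustrative pre-ingestion screen for a data lake; the patterns below are
# simplistic stand-ins, not a complete data-loss-prevention solution.
import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marking": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def screen_record(text: str) -> list[str]:
    """Return the labels of any sensitive markers found in a record."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

record = "CONFIDENTIAL: contact jane.doe@example.com about the prototype."
flags = screen_record(record)
if flags:
    print("Hold for review before ingestion:", flags)
else:
    print("OK to ingest")
```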

Can you speak on the role of laws and regulations in ensuring data privacy in the AI era?

There are two types of people. There are ones who prefer to go ahead and ask what the guidelines are and what’s the expected norm for businesses and society as a whole – they stay within those guidelines. Then there are companies that ask about enterprise risk management. What is the cost if we go outside those lines? We see this in privacy. They ask questions like “What are the fines? How long might it take to pay those fines? Will that go down to pennies on the dollar? How much can I make in the meantime?”

Laws give you teeth to do things after the fact. Conceptually, we have laws like the GDPR, and the European Union is trying to establish AI rules. There’s the National Institute of Standards and Technology AI framework in the US, and PIPEDA in Canada.

The GDPR and the upcoming AI Act are obviously important to companies based in the EU. How aggressive should we expect the regulatory response to generative AI solutions to be?

I think it’s going to take a while because when GDPR initially came into place, they went against Microsoft, Google, and Facebook. But it took a long time to say what exactly these companies did wrong and who would take ownership of going after them.

It will take years unless we have a global consortium on AI with some of these bigger companies that have buy-in and are going to help us control it. But to do that, big companies must be a part of it and see it as important.

And what are the chances that these big companies are going to start cooperating to create the sort of boundaries that are needed?

If we can have a sort of think tank, that would be very helpful. AI has very good uses but unfortunately, also very negative consequences. I’m not just talking about movies like Minority Report, but I also think about when wrong data gets out. Like in Australia, where we see potentially the first defamation lawsuit against ChatGPT.

Even on a personal level, information on you is out there. Let’s say for example you are accused of a crime, which is not true. That gets into ChatGPT or something similar. How many times can that potentially come up? I asked ChatGPT to write me a bio and it says I worked for Girl Scouts of America, which I never did.

That’s the type of thing I’m talking about. How do you get that out of the data pool? What are the acceptable uses for privacy data? How do you opt-out? These are the dangers right now. But it has to be considered from a global perspective, not only by region. We talked about legal ramifications and cross-border data privacy. How do you stop somebody in the US from being able to go ahead and use data from the EU a bit differently? What about information that crosses borders via AI? It hasn’t been discussed because no one even thought of it just a year ago.

What are appropriate measures for organizations to take with shadow IT uses of GPT tools?

We need to train more on the negative effects of such tools. I don’t think people are trying to do it from a negative perspective, but they don’t think about the negative impact. If I’m using one of these tools to help me generate code, am I looking at open-source code? Is it someone else’s code that someone put in there? Is this going to cause intellectual property issues?

When you talk about shadow IT, you are looking at what is potentially leaving a network and what’s coming in. So, it usually sits above data loss prevention tools. But how do you do it without being too ‘big brother-ish’?

All this comes from enterprise risk management. You need to have conversations with your legal and compliance teams. Most people just want to get their job done and they don’t think about the negative repercussions to the company’s reputation. You have to have those conversations.

Talk to your staff about what tools they’re using in a no-fear, psychologically safe way. Ask them why they’re using those tools and the advantages they give them. From there, you can narrow down the top two or three tools that are best for the company. This lowers your risk.

It’s about risk mitigation and managing that in a mindful way because you can’t have zero risk. You can’t block everyone from doing everything.
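To illustrate the earlier point about watching what leaves the network, here is a minimal, hypothetical egress check that classifies outbound requests to known generative AI endpoints instead of silently blocking them; the domain list and sensitive markers are assumptions made for the example.

```python
# Hypothetical egress check for shadow-AI usage; the domain list and markers
# are assumptions for the example, not a real blocklist.

AI_SERVICE_DOMAINS = {"api.openai.example", "genai.example.com"}
SENSITIVE_MARKERS = ("internal use only", "proprietary", "api_key=")

def review_outbound(destination: str, payload: str) -> str:
    """Classify an outbound request rather than blocking everything."""
    if destination not in AI_SERVICE_DOMAINS:
        return "allow"
    if any(marker in payload.lower() for marker in SENSITIVE_MARKERS):
        return "hold-for-review"   # escalate, then talk to the user
    return "allow-and-log"         # visibility without being 'big brother-ish'

print(review_outbound("genai.example.com", "Please rewrite: INTERNAL USE ONLY draft"))
```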

How can current data protection legislation support businesses and individuals with data that has been used to train large language models?

It’s chasing things after the fact. We’ll find out that there are a lot of language models trained on data that were not used in the manner we agreed to in our contracts. I think there are going to be some legal ramifications down the pipeline. We’ll find out that the data used in these models are not what I would call sanitized. I’ve seen it again and again; intellectual properties are already in the pool and the data was not structured or tagged so we can’t pull it back out.

In that case, how can we work with protected data and at the same time, with large language models?

That’s tough. I’ll give you an example. Let’s say there’s an email with a cryptographic key embedded into it. What you can do is hold the other key and spiral it off. I like that from a company and individual perspective. Because if someone shared some intellectual property of mine with another person, maybe an article I wrote or some code, I could then look at the spiral and see who sold or resold that data. From there, I could expire it. From a legal perspective, I would have a trail.

What happens if we could do that with every piece of information that you make? If the data is tagged immediately, you could see what it was created for and expire it for other uses. It won’t be in anyone’s database.

I think we can get there. But right now, I don’t see how we can get the horses back into the barn effectively when the data is not individually tagged.
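Purely as an illustration of the tagging idea described above, the sketch below registers each piece of data with an owner, purpose, and expiry, and keeps a trail of who it was shared with; the scheme and field names are hypothetical, not the mechanism Dr Wynn describes in production.

```python
# Hypothetical sketch: tag data at creation so it can be traced and expired later.
import uuid
from datetime import datetime, timedelta, timezone

REGISTRY = {}  # tag id -> metadata (owner, purpose, expiry, share trail)

def tag_data(owner: str, purpose: str, ttl_days: int) -> str:
    """Register a new piece of data and return its tag."""
    tag = str(uuid.uuid4())
    REGISTRY[tag] = {
        "owner": owner,
        "purpose": purpose,
        "expires": datetime.now(timezone.utc) + timedelta(days=ttl_days),
        "shared_with": [],
    }
    return tag

def record_share(tag: str, recipient: str) -> None:
    """Keep a trail of everyone who received the tagged data."""
    REGISTRY[tag]["shared_with"].append(recipient)

def is_usable(tag: str) -> bool:
    """The data expires for further use once its window has passed."""
    return datetime.now(timezone.utc) < REGISTRY[tag]["expires"]

article = tag_data(owner="author", purpose="published article", ttl_days=365)
record_share(article, "partner-site")
print(REGISTRY[article]["shared_with"], is_usable(article))
```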

Should we forbid access to ChatGPT and other AI apps for all users?

You could use a variation of that. Consider why you’re using an AI tool. Ask your teams why they’re using it and then think about how you might mitigate risk. If it’s about rephrasing certain text to be more effective for clients, then allow it. That’s a positive internal use. Maybe it’s about marketing and rephrasing things for various social media platforms.

But what if they just want to play around and learn more about it? Then maybe you need to have a sandbox where they can do that, and you don’t have to worry about data leaving your network.

How can we ensure that our AI systems are trained on unbiased and representative data to prevent unfair decision-making?

To be honest with you, today I don’t think we can. Part of it is because the data is in a data lake. For lack of better phrasing, garbage data in, garbage data out. If you look at the search engines out there, they’re all built on databases, and those databases are not clean. They weren’t built with clean data at the start. We didn’t think about structure back then.

What you could do is have something like a generated bio of yourself, see what’s wrong with it, and give a thumbs up or down to say if it’s accurate or not to clean up the data. But can anyone clean up that data or is it only you? If there’s inaccuracy in my data, how can I flag that? It seems like anyone can do it.

So the question is, is there a method to go back and clean up the data? Here’s where I wish we had the option to opt-in instead of opt-out.

When it comes to data security, what are the top three things companies should keep in mind?

First is to not have fear or uncertainty in your own people. We think, mainly from a security and privacy governance perspective, that everyone intends to do the wrong thing. Instead, I think you need to assume that your workforce is trying to do the right thing to move the company forward. The real training is about the organization’s acceptable use policy. What is the expectation and why is it important? If things go awry, what are the consequences to the individuals, company reputation, and revenue?

Next, how do we monitor that? Do we have an internal risk assessment against our AI risk? If you’ve not looked at your business liability insurance recently and you have a renewal coming up, take a look. There is an AI risk rider coming for most if not all policies in the future, requiring that you, as a company, are using AI responsibly and that you are doing risk assessments and managing the risk with firewalls, data loss prevention strategies, and things like that.

Holistically, it’s enterprise risk management. I think you need to be transparent and explain to individual users in those terms, but we haven’t always been doing that. You need to figure out how you can make everyone understand that they are an important puzzle piece of running the business. It’s a mind shift that we need.

Should we create AI gurus in our companies to communicate the risk and stay up to date on the technology and its usage?

We’re on the cusp of that. A lot of us have been talking behind the scenes about whether there is now a new role of AI CISO.

That role would be a Chief Information Security Officer focused solely on AI and how it’s being used internally and externally.

You may have the Chief Security Officer who is operationally focused only. I end up being more externally facing with strategy than I am with daily operations. We’re seeing various CISO roles emerge to handle that for the company. Digital Risk Officers cover the legal perspective. I think we’re seeing a rise of AI Cybersecurity Officers or similar titles.

Should we worry about privacy threats from AI in structured data as well?

I think you should always worry about privacy. When we look at the frameworks, ENISA has a framework, and the EU has its AI rules. There are things coming out of Australia and Canada as well; that’s what we’re trying to gravitate towards.

But, as individuals, how much can we really keep track of our data and where it is anymore? If you haven’t looked at the privacy policies on some websites, I say as an individual you need to opt-out. If you’re not using those apps on your phone anymore, uninstall them, kill your account, and get out of them. Those policies and how they’re using that information are only getting longer and longer.

As a company, do you have a policy in place about how your data is being used between your companies? Are you allowing them to be put into AI models? What are the policies and procedures for your own employees when it comes to third parties that you do business with?

That’s why I said there needs to be a lot of training. From an enterprise risk management standpoint, you cannot manage risk if your risk is not defined.

What is the social responsibility of companies in an AI-driven world?

I wish it would be a lot more, to be honest with you. Elon Musk and people like that are being a little more forward-thinking about what we want to do and where we want to be in 2050. All technologies are good technology. When AI initially came out in the 1950s and machine learning in the 1960s, it was a new shiny toy, but people were scared too.

I think from a social perspective, anytime we have something that allows us to be digitally transformed in a way that allows us to communicate and see correlations quicker, and see what people are facing around the world, that’s good.

But then, it can also bring in fake news. We say trust and verify, but what do you do when you’re using AI tools that have all the wrong information? It’s scary. That’s when we must use critical thinking. Does this story make sense? Does it seem reasonable? We’re starting to see right now how AI can be used for good and evil. In terms of cybersecurity, fake email and such are used in targeted phishing attacks.

For companies, do you have back channels to verify things? I once received a text message from a CEO that sounded like him but did not ring true to me. When I asked him through back channels, he said it was not him. Your gut is right most of the time. Trust your gut, but also give people other avenues to verify that data.

Do you think there’s too much focus on the content of a framework to identify and include all risks as opposed to focusing on the processes to get to the right answers?

I agree. I’m always about the so what. Know it, document it, implement, manage, and measure. But then what? If I have a framework solely as a framework, that’s great but it’s about what you put in.

I think the problem is that you start from the top-down. We end up having to get people on the same page and saying we need a framework. And then it gets down to the meat and that’s what you’re talking about. Why do we need this? How do we put it into play? How can we test it and measure it?

Unfortunately, policies and procedures start from the top down, but for boots on the ground, the thought starts with implementation. That’s where I think training comes into play. This is where people like me talk to you about the day-to-day.

*Answers have been edited for clarity and length.