Adam Grant: Powerful Tactics to Unlock Your Hidden Potential

For today’s business leaders, innovation is key to success. 

“We live in a rapidly changing world with more uncertainty than ever before. This means we need to be better and faster at rethinking our assumptions.”

Innovating in a continuously developing tech environment requires leaders to challenge the status quo, rethink their assumptions, and be open to learning new skills. In this article, world-renowned organizational psychologist and best-selling author Adam Grant shares practical and actionable strategies to help leaders unlock the hidden potential in themselves and the people around them.  

 

1. BUILD A CHALLENGE NETWORK

 

Although a support network filled with mentors and sponsors is valuable, Grant introduces another kind of network that is critical for success.  

“Your challenge network is the group of thoughtful critics who you trust to hold up a mirror so you can see your own blind spots more clearly. They’re the people who have the courage to tell you the unpleasant truths.” 

However, Grant observes that many leaders around the world don’t have a challenge network. “That is a scary world to live in. So how do you get people to challenge your assumptions?” He shares a dynamic he’s been studying his whole career: observing your network and identifying the givers and takers.  

GIVERS: “People who are constantly figuring out what they can do for you.” 

TAKERS: “People who want to know what you will do for them.” 

There is also the personality trait of agreeableness. “Agreeable people are warm, friendly, polite, and welcoming. Disagreeable people are more critical, skeptical, and challenging. For a long time, I assumed that agreeable people were always givers.” 

Grant then gathered data from 30,000 people around the world from different cultures and industries. The findings were surprising. “I found a zero correlation between how far you lean toward giving versus taking at work, and where you stood on that spectrum from agreeable to disagreeable personality.” He realized that agreeableness was only a surface-level trait. Giving and taking, on the other hand, reflect an individual’s inner motives. “What are your real values and intentions when you deal with others?” 

To understand people more accurately, Grant categorizes people in the table below. “This is an oversimplification of all the complexity of human nature. But when you do this, you will find two combinations you recognize quickly, and maybe two that you overlook.”  

  • Agreeable givers 
  • Disagreeable givers 
  • Agreeable takers 
  • Disagreeable takers 

Ultimately, disagreeable givers are assets in a challenge network. “We need to do a much better job of valuing these people, as opposed to writing them off as prickly, selfish takers.” Grant encourages business leaders to reach out to the disagreeable givers they know as a first step to building their challenge network. He has reached out to disagreeable givers in his network too. “As I’ve had those conversations, I’ve gotten much better, not only at constructive criticism but also at coaching.” 

“I see honesty as the highest expression of loyalty. The more candid you are with me, the more I will know that you’re trying to help me grow.”

 

2. CREATE PSYCHOLOGICAL SAFETY

 

Coined by Amy Edmondson, psychological safety is the sense that you can take a risk and speak up without being punished or fearing reprisal. People with psychological safety can freely admit their errors, study what caused them, and rethink routines to prevent them.  

“When organizations build psychological safety, people are not only more willing to think again, they’re also more courageous in telling you what you need to rethink.”

Grant adds that in tech companies when people lack psychological safety, they bite their tongues. But when they have it, they let their ideas fly. “I see leaders do this by accident constantly, and they don’t even realize they’re doing it. One of the ways I catch it is when I hear leaders say, ‘Don’t bring me problems, bring me solutions.’ I get why leaders say it. You want people to take initiative and not whine and complain.” 

However, Grant says that this is a dangerous philosophy. If people only speak up when they have a solution, leaders will never hear about their biggest problems, which are too complex for any one person to solve.  

“The foundation of building psychological safety is encouraging people to raise problems, even if they don’t know how to fix them yet.”  

How can business leaders build psychological safety? “It’s really helpful to create a structure where people are rewarded for telling you what the problems are.”  

He gives an example of a Kill the Company exercise run by a consultant. “She divides the leadership team into small groups, and she says, ‘Your job is to put your own company out of business.’ I’ve never seen a more energized group of executives in my life. The power of this exercise is people are more creative on offense. Psychological safety is built in when your job is to tell people how you would destroy your company. There is no problem that is unsafe to voice. So, I like to see leaders run this exercise twice a year because inevitably, the threats and opportunities will change.” 

Grant refers to these exercises as premortem exercises “where you imagine that one of your big decisions or your key strategies is going to fail in the next few years. And then you consider the most likely reasons why.” 

“You get better at seeing around corners, rethinking assumptions that are no longer true, and then evolving to improve your practices. We need to bake this into our interactions with people on a daily basis.” 

In addition, he advises business leaders to be more open to admitting their mistakes. It’s not enough for leaders to ask their team for criticism, because team members don’t know whether their leaders can handle the truth. “It’s often more effective for leaders to say, ‘Here are my mistakes.’ When leaders put their own weaknesses and imperfections on the table, their team has more psychological safety to speak up.”  

Beyond claiming to want to hear criticism, leaders then prove they can take it. “A simple way of doing this is to take your own review from your board or from your boss and share it with your team. I’ve seen a lot of leaders hesitate to do this; they don’t want to be too vulnerable. They’re trying to prove their competence.” 

“The people around you already know what you’re bad at. You can’t hide it from them. So, you might as well get credit for having the self-awareness to see it, and the humility and integrity to admit it out loud.” 

 

3. GET THE BEST IDEAS ON THE TABLE  

 

Grant also challenges the age-old practice of brainstorming. “There’s a strong tendency when we need creative ideas or make a critical decision to say, ‘Hey, let’s bring a group of people together to brainstorm because we know that five heads are better than two.’ Except for just one tiny wrinkle. It doesn’t work.”  

He explains that more than 40 years of evidence shows that people generate better ideas when they work alone. In addition, he highlights three things that can go wrong in brainstorming groups:  

  • Production blocking: “We can’t all talk at once and some ideas get lost.” 
  • Ego threat: “I don’t want to look stupid, so I hold back on my most unconventional ideas.” 
  • Conformity pressure: “I want to jump on the bandwagon of the idea that the boss likes best or what’s most popular in the room.” 

To avoid these pitfalls, Grant presents the idea-generating tactic of brainwriting.  

“You give people the problem or the topic in advance. You let them generate their own ideas independently. You collect them and then you have everyone in the group rate them. Once you have everybody’s independent ideas and judgment, you bring everyone together to figure out which possibilities are worth pursuing.” 

Why is brainwriting better than brainstorming?  

  • Individuals are more creative than groups, but “they’re also terrible at judging their own ideas.” 
  • There is a wider variety of ideas that can be “adjusted and filtered by group wisdom; the wisdom of crowds should come in after we get all the possibilities on the table.” 
  • It works in a hybrid setting “where the chat window is made for brainwriting. The first 10 minutes is to type out your thoughts and then use the group’s judgment to assess and vet the potential in the room.” 
  • It provides a platform for the quieter voices “who might not be that comfortable selling and pitching their ideas but are the ones who have actually dreamed up the best ideas.”  

“[While] group brainstorming tends to reward the loudest talker and the most extroverted person, brainwriting allows you to hear from the deepest thinker and the most creative voices.” 

 

4. RETHINK YOUR MINDSET 

 

After leaders have collected everybody’s independent ideas in a brainwriting process, how do they figure out which ideas work? Grant says it boils down to the mindset they bring to the table. “I’ve been studying the mental models that cause leaders to resist change, refuse to think again, and stymie the hidden potential in their organizations.”  

Those mental models can be organized into: 

  • Preacher: “You’re proselytizing your own views.” 
  • Prosecutor: “You’re attacking somebody else’s views.” 
  • Politician: “You don’t bother to listen to people unless they already agree with your views.” 

“I think it’s worth reflecting on which is your biggest vice of these three mental models. I’ll tell you that mine is prosecutor mode. If I think you’re wrong, I believe it’s my professional and moral responsibility to correct you. I know that when I go into prosecutor mode, I shut down and become less open to new ideas.” 

“Whether you’re preaching, prosecuting, or politicking, you’ve already concluded that you’re right and other people are wrong. That means you stop learning.” 

How can business leaders release themselves from those mental models? Grant suggests leaders approach new ideas like a scientist and not let ideas become a part of their identity. “We know that good scientists have the humility to know what they don’t know, and the curiosity to constantly seek new knowledge.” 

“We have a growing body of evidence that if you teach leaders to think more like scientists, they make better decisions.”

Grant recalls his favorite demonstration of this approach in an experiment done with start-up founders in Italy. Hundreds of entrepreneurs were randomly assigned to a control group or a scientific thinking group. In the scientific thinking group, participants were asked to view their strategies as hypotheses and their decisions as experiments. “Over the next year, the founders who had been randomly assigned to think like scientists brought in on average more than 40 times the revenue of the control group.” 

As for the participants in the control group, Grant says, “When their product launch bombs, they still preach they were right. They prosecute their critics for being wrong, and they politic by lobbying the board to support the status quo.” 

“Learning to think like a scientist frees leaders from those traps. They start to listen to the ideas that make them think hard instead of just the opinions that make them feel good. They surround themselves with disagreeable givers who challenge their thought processes.” 

UK CISO Outlook: 7 Areas to Prioritize in 2024

CISOs in the UK faced giant hurdles this year, from the persistent skills shortage and budget limitations to advanced cyberattacks and economic uncertainty. Despite these challenges, CISOs still had to fulfill the critical role of protecting their organization’s digital assets and driving cybersecurity investments. According to research by ECI Partners, the CISO is the most in-demand leadership role in the UK, and that demand will persist for the next five years.  

As the year draws to a close, what should UK CISOs prioritize in 2024 to ensure their organization is prepared for the evolving cybersecurity landscape? This article uncovers 7 key focus areas for UK CISOs to add to their agenda. 

 

1. BOLSTER CYBER RESILIENCE MEASURES  

According to PwC’s Cyber Security Outlook 2023, 90% of UK senior executives ranked the increased exposure to cyber risk due to accelerating digital transformation as the biggest cybersecurity challenge for their organization. Cyber risks trump other risks associated with inflation, macroeconomic volatility, climate change, and geopolitical conflict. 25% of UK business leaders are also bracing for their company to be highly exposed to cyber risks over the next five years.  

The Cyber Breaches Survey 2023 by the Department for Science, Innovation & Technology highlights the measures taken by large UK businesses to curb cyberattacks:

  • 63% have undertaken cybersecurity risk assessments in the last year  
  • 72% have deployed security monitoring tools 
  • 55% are insured against cybersecurity risks 
  • 55% review the risks posed by their immediate suppliers 
 

Cybersecurity leaders at Dell and Accenture also suggest 3 key actions CISOs can take to support cyber resilience and speed up recovery if attacked: 

  • Implement a “lifeboat” scenario: Review technology dependencies, identify critical processes and assets, understand RTO/RPO (recovery time and recovery point objective) requirements, and implement and regularly test recovery processes. That way, organizations can maintain operations if they suffer a cyberattack.  
  • Ensure the obligations of third parties align with the organization’s requirements: Identify which critical processes and assets are managed by third-party vendors, validate the scope and liabilities of contracted services, and ensure they align with the organization’s requirements. 
  • Test the organization’s recovery capabilities: Employ external experts to simulate attacks on the organization’s defenses. Oversee how the IT and the business team would react and provide guided recommendations for improved security posture and resilience. 
 

2. MAXIMIZE CLOUD SECURITY INVESTMENTS  

As multicloud environments become more prevalent across industries, so do the cyber risks associated with them. PwC’s Cyber Security Outlook 2023 highlighted the top cybersecurity concern among UK business leaders: cloud-related threats. 39% of CISOs expect cloud-related threats to affect their organization the most, ahead of threats from laptop and desktop endpoints, web applications, and the software supply chain. Therefore, it makes sense that UK CISOs are allocating the most budget to cloud security.  

According to findings by Cybersecurity in Focus, the top 3 expenditure areas among UK CISOs are: 

  • 25% Cloud security  
  • 20% Identity access management  
  • 18% Security and vulnerability management  
 

Let’s look at how cloud security investments have paid off in the UK’s public and private sectors: 

  • The Houses of Parliament appointed Ascentor to create a new information assurance process to address the increasing use of cloud-based solutions. Ascentor introduced a risk appetite statement and three different assurance paths based on information sensitivity. The assurance process is now well-established, and risks are regularly reappraised and managed. 
  • Bravura chose Vodafone Cloud and Security as its hosting and connectivity partner to ensure the protection of business-critical data. Vodafone Cloud managed primary and backup hosting and fixed connectivity and security, freeing up the IT team’s time and increasing efficiency.  
  • The UK Data Service faced the challenge of providing access to big data while meeting stringent privacy and security requirements. Therefore, the government body deployed solutions from Amazon Web Services (AWS) to offer a seamless and powerful search and analytics experience, enabling them to query any concept held in the data lake at the cell level and enrich data for better insights. 
  • The University of Sunderland sought help from CrowdStrike to modernize its cloud security systems after experiencing a data breach. CrowdStrike secured the university’s 5,000 endpoints with little administrative overhead through its combination of technology, threat intelligence, and skilled expertise.  
 

3. SECURE BIGGER CYBERSECURITY BUDGETS 

73% of CISOs predict that economic instability will negatively impact cybersecurity budgets (Proofpoint). Another report by iomart and Oxford Economics, Security’s Lament: The state of cybersecurity in the UK 2023, supports this, finding that UK businesses that experienced budgetary constraints suffered a 25% increase in cyber incidents.  

27% of organizations think their cybersecurity budget is inadequate to combat growing cyberthreats.  

Smaller budgets are hindering progress toward cybersecurity goals and creating blind spots in cyber strategies. In addition, rising cyber insurance premiums are taking a toll on overall budgets.  

On the other hand, a study by BSS, How CISOs can succeed in a challenging landscape, found that although 61% of CISOs reported increased funding, it was paired with unrealistic expectations and a lack of understanding of business threats among budget holders. Interestingly, 78% of CISOs only received extra funding after their organization experienced a high-profile cyberattack. This has led 55% of CISOs to use the funding to put out immediate fires rather than make long-term investments in security solutions.  

 

Here are several strategies UK CISOs can take to seek more funding from the board:  

  • Get support from other C-suite leaders: By getting backing from the CFO and CEO, CISOs can better understand business risks and frame their funding requests accordingly. They can also reach out to colleagues in purchasing and in the business units that will benefit from the extra funding. 
  • Demonstrate ROI, TCO, and the bottom line: Communicating these three areas is crucial in securing cybersecurity funding. CISOs must take this opportunity to explore the right tools that can help illustrate the effectiveness and value of their security programs to the board.  
  • Calculate the cost of not implementing security technology: To get the board’s attention, CISOs must calculate and communicate the financial risk of not implementing the security solution, including the likelihood and impact of a breach (see the sketch after this list). 
  • Understand the board’s risk appetite: How much expense the board is willing to incur in the event of a worst-case scenario varies depending on the organization’s industry, risk tolerance, cyber insurance coverage, data sensitivity, and regulatory environment. 
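
To make the third point concrete, here is a minimal sketch of the kind of back-of-the-envelope calculation a CISO might put in front of the board. It assumes a simple annualized-loss-expectancy (ALE) model, and every figure in it is an illustrative placeholder rather than a number drawn from the reports cited above.

# Hypothetical annualized-loss-expectancy (ALE) comparison.
# ALE = annual rate of occurrence (ARO) x single loss expectancy (SLE).

def annualized_loss_expectancy(aro: float, sle: float) -> float:
    """Expected yearly loss from a given threat scenario."""
    return aro * sle

# Illustrative assumptions: a breach costing £2.5m, expected once every four years.
ale_without_control = annualized_loss_expectancy(aro=0.25, sle=2_500_000)

# Assume the proposed control halves the likelihood and costs £150k per year to run.
ale_with_control = annualized_loss_expectancy(aro=0.125, sle=2_500_000)
control_cost = 150_000

net_annual_benefit = ale_without_control - ale_with_control - control_cost

print(f"Cost of doing nothing (ALE): £{ale_without_control:,.0f} per year")
print(f"Net annual benefit of the control: £{net_annual_benefit:,.0f}")

The same structure can be extended to compare several candidate controls, or to express the likelihood as a range rather than a single point estimate.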
 

4. IMPROVE RELATIONSHIP WITH THE BOARD 

As cybersecurity-related matters align more closely with business strategy, communication between CISOs and board members is paramount.  Only 9% of CISOs state that information security is a top priority in the boardroom’s meeting agenda. 

Additionally, a mere 22% of CISOs participate in business strategy and decision-making (BSS). Board members are also not as concerned about the susceptibility of the organization to future cyberthreats. A whopping 79% of UK CISOs are worried about their liability in the event of a cybersecurity incident, while the board is more nonchalant; only 54% of directors expressed similar concerns.  

 

According to cyber risk management leader Bitsight, CISOs can improve board relationships with:  

  • Ongoing communication: Regular open communication with the board on the organization’s cybersecurity posture, emerging threats, and the status of ongoing security initiatives is beneficial. This communication may be quarterly or semi-annual, depending on the needs of the organization. 
  • Educational engagement: CISOs can provide the board with resources and updates on cybersecurity risks, their impact on the organization, and the measures being taken to mitigate them. This is especially important for board members without a technical background. 
  • Risk reporting: Cybersecurity risks must be presented in the context of business risks. Explaining the potential impact of cybersecurity vulnerabilities on reputation, financial stability, and regulatory compliance can help the board understand the importance of cybersecurity investments. Risk assessments, metrics, and KPIs can be used to illustrate the potential impact. 
  • Cybersecurity governance framework: This is a valuable tool for outlining the roles and responsibilities of the CISO, management, and the board in cybersecurity decision-making, budget approval, and incident response. 
  • Incident response planning: The board should be involved in the development and testing of incident response plans. Board members must be aware of the roles they play in managing and overseeing the response. 
  • Vendor and third-party risk management: The CISO should strategically manage and reduce risks associated with third-party vendors to increase the board’s confidence. The board should be informed of these risks and how the organization is mitigating them. 
 

5. NAVIGATE COMPLEX CYBERSECURITY REGULATIONS 

The UK’s cybersecurity regulation landscape is complex, as the country does not have one unified cybersecurity law. Rather than a single regulation, UK organizations have to refer to a patchwork of existing legislation and adapt it to their organization’s cybersecurity needs.  

However, UK companies are receptive to the implementation of data privacy regulations. 59% are very prepared for the General Data Protection Regulation (GDPR) in the UK and EU, as well as the Data Protection Act 2018 (DPA), according to the 2023 Global Data Privacy Law Survey Report by Womble Bond Dickinson. UK respondents are also more comfortable with the impact of privacy regulations on their ability to conduct cross-border business, with 40% stating that they are willing to cover the extra costs incurred by the regulations. However, another study found that 29% of UK CISOs are frustrated with changing regulations, and 64% comment that regulations change before they can meet previous requirements (BSS).  

 

UK CISOs can do the following in the meantime: 

  • Identify which cyber laws the organization needs to comply with by conducting extensive research or hiring a security expert. This includes international regulations if the company serves customers worldwide.  
  • Create an Information Security Management System (ISMS) covering the processes the company needs to comply with. International standards such as ISO 27001 can provide a suitable framework. 
  • Keep the board informed on evolving cybersecurity regulations and ensure the organization remains in compliance. This includes discussing the potential legal and financial implications of non-compliance. 
 

6. BOOST UPSKILLING AND TRAINING PROGRAMS 

A September 2023 report from the Public Accounts Committee (PAC) warns that a lack of cybersecurity experts in the UK government should be of significant concern. Additionally, 48% of respondents agree that their organization suffers from a lack of expertise. 62% noted at least a quarter of their permanent headcount isn’t based in the UK, which highlights a deficit when it comes to knowledge of local regulations, compliance, and risk.  

According to a report by the Chartered Institute of Information Security (CIISec), most also claim the industry is facing a shortage of skills rather than people, hinting that better training could help alleviate challenges in this area. Research by Robert Walters concludes that the greatest shortages will be felt in Yorkshire (73%), London (62%) and the North (55%). Additionally, cybersecurity (56%) is the most sought-after skill in organizations. Instead of taking on the task of training and upskilling existing staff themselves, CISOs can seek support from third parties who offer cybersecurity training.  

For example, BT partnered with CAPSLOCK, an accredited cybersecurity boot camp, to retrain 30 employees over 17 weeks. They wanted to make clear that employees don’t need prior IT or security qualifications to break into the industry. Eight months after the boot camp, all learners are working in cyber roles at BT, aligned with their strengths and achievements in the program. Those who excelled in governance topics were assigned roles in governance and assurance, while those who performed well in technical modules are working in fields such as DDoS, security architecture and design, and forensics. 

 

7. IMPROVE PERSONAL RESILIENCE AND BUILD A SUPPORT TEAM 

According to a study by Proofpoint, 74% of UK CISOs are experiencing unreasonable job expectations and overwhelming responsibilities. 79% are concerned about personal liability and 74% have experienced burnout in the past 12 months. Furthermore, 8% of UK CISOs work more than 55 hours per week, which the World Health Organization (WHO) considers “a serious health hazard.” Worryingly, 50% of respondents say their workload keeps them awake at night, more so than the prospect of suffering a cyberattack (CIISec). 

 

Lucia Milică Stacy, Global Resident CISO at Proofpoint, suggests the following ways organizations can support their cybersecurity leaders: 

  • Bring in cybersecurity experts on the board who understand what the organization and the cybersecurity team grapple with.  
  • Establish a cybersecurity risk oversight committee to interpret cyber risk and how it affects the broader business goals and the valuation of the organization.  
  • Make sure the CISO’s frustrations are heard by the board so there is transparency about the threats the organization faces, as well as what the security team goes through to fight them. 
 

In addition, CISOs can reduce stress and the possibility of burnout by delegating tasks to other team members, leveraging time-saving tools and technologies to automate menial tasks, and seeking support from other cybersecurity leaders who share the same challenges by joining business networks and attending industry events.  

 

By focusing on these seven areas, UK CISOs can better protect their organizations from unprecedented risks in the years to come.  However, it is important to note that CISOs cannot do this alone. CISOs are under immense pressure, and it is important for organizations to recognize the critical and stressful nature of their role. The board and executive team must equip CISOs with enough support and resources, in addition to recognizing and rewarding CISOs for their contributions. 

AI-Powered Cybersecurity: Start With a Chief AI Officer

In this era of digitization, where data and connectivity underpin every business decision, protecting your digital assets isn’t just crucial; it is fundamental to business survival. AI offers the potential for a more resilient digital infrastructure, a proactive approach to threat management, and a complete overhaul of digital security.

According to a survey conducted by The Economist Intelligence Unit, approximately 48.9% of top executives and leading security experts worldwide believe that artificial intelligence (AI) and machine learning (ML) represent the most effective tools for combating modern cyberthreats.

However, a survey conducted by Baker McKenzie highlights that C-level leaders tend to overestimate their organizations’ readiness when it comes to AI in cybersecurity. This underscores the critical importance of conducting realistic assessments of AI-related cybersecurity strategies.

Dr. Bruce Watson and Dr. Mohammad A. Razzaque shared actionable insights for digital leaders on implementing AI-powered cybersecurity.

 
Dr. Bruce Watson is a distinguished leader in Applied AI, holding the Chair of Applied AI at Stellenbosch University in South Africa, where he spearheads groundbreaking initiatives in data science and computational thinking. His influence extends across continents, as he serves as the Chief Advisor to the National Security Centre of Excellence in Canada.
Dr. Mohammad A. Razzaque is an accomplished academic and a visionary in the fields of IoT, cybersecurity, machine learning, and artificial intelligence. He is an Associate Professor (Research & Innovation) at Teesside University.
 

The combination of AI and cybersecurity is a game changer. Is it a solution or a threat?

 

Bruce: Quite honestly, it’s both. It’s fantastic that we’ve seen the arrival of artificial intelligence that’s been in the works for many decades. Now it’s usable and is having a real impact on business. At the same time, we still have cybersecurity issues. The emergence of ways to combine these two things is exciting.

Razzaque: It has benefits and serious challenges, depending on context. For example, in critical applications such as healthcare or driverless cars, it can be challenging. Driverless cars were projected to be on the roads by 2020, but it may take another 10 years. Similarly with the safety of AI, I think it’s hard to say.

 

What are your respective experiences in the field of cybersecurity and AI?

 

B: I come from a traditional cybersecurity background where it was all about penetration testing and exploring the limits of a security system. In the last couple of years, we’ve observed that the bad guys are quickly able to use artificial intelligence techniques. To an extent, these things have been commoditized. They’re available through cloud service providers and there are open-source libraries with resources for people to make use of AI. It means the barrier for entry for bad actors is now very low. In practice at the university as well as when we interface with the industry at large, we incentivize people to bring AI techniques to bear on the defensive side of things. That’s where I think there’s a real potential impact.  

It’s asymmetrical warfare. Anyone defending using traditional methods will be very quickly overrun by those who use AI techniques to generate attacks at an extreme rate.

R: I’m currently working on secure machine learning. I work with companies that are developing solutions to use generative AI for automated responses to security incidents. I’m also working on research on secure sensing, such as for autonomous vehicles. This is about making sure that the sensor data is accurate, since companies like Tesla rely on machine learning. If you have garbage in, you’ll produce garbage out.

 

Given AI’s nature, is there a risk of AI developing itself as an attacker?

 

B: It fits well with the horror scenarios from science fiction movies. Everyone is familiar with Terminator, for example. We’re not at the point yet where AI can develop arbitrary new ways to attack systems. However, we’re also not far from that point. Generative AI, when given access to a large body of malicious code, or even fragments of computer viruses, malware, or other attack techniques, is able to hybridize these things rapidly into new forms of attack, quicker than humans can. In that sense, we’re seeing a runaway process. But it is still stoppable, because systems are trained on data that we provide them in the first place. At a certain point, if we let these systems roam free to fetch code on the internet or be fed by bad actors, then we’ll have a problem where attacks will start to dramatically exceed what we can reasonably detect with traditional firewalls or anomaly detection systems.

It scares me to some extent, but doesn’t keep me awake at night yet. I tend to be an optimist, and that optimism is based on the possibility for us to act now. There isn’t time for people to sit around and wait until next year before embracing the combination of AI and cybersecurity. There are solutions now, so there’s no good reason for anyone to be sitting back and waiting for an AI-cybersecurity apocalypse. We can start mitigating now.

R: We use ChatGPT and other LLMs that are part of the generative AI revolution. But there are also tools out there for bad actors like FraudGPT. That’s a service you can buy to generate an attack scenario. The market for these types of tools is growing, but we’re not yet at a self-generating stage.

 

Are we overestimating the threat of AI to cybersecurity?

 

B: A potential issue is that we simply do not know what else is out there in the malware community. Or rather, we have some idea as we interact with malware and the hacker community as much as we can without getting into trouble ourselves, but we do see that they’re making significant advances. They’re spending a lot of time doing their own research using commodity and open-source products and manipulating them in such a way that they’re getting interesting and potentially dangerous results.

 

How can the good guys stay ahead of bad actors? Is it a question of money, or the red tape of regulations?

 

R: Based on my research experience, humans are the weakest link in cybersecurity. We’re the ones we should be worried about. IoT is responsible for about 25% of overall security concerns but only sees about 10% of investment. That’s a huge gap. The bad guys are always going to be ahead of us because they do not have bureaucracy. They are proactive while we need time to make decisions. And yes, staying ahead is also a question of money, but it’s also about understanding the importance of acting promptly. This doesn’t mean forgoing compliance and regulation. It means we have to behave responsibly, like changing our passwords regularly.

B: It’s very difficult to advocate for getting rid of governance and compliance, because these things keep us honest. There are some ways out of this conundrum, because this is definitely asymmetrical warfare where the bad guys can keep us occupied with minimal resources while we need tremendous resources to counter them.

One of the ways around it is to do a lot of the compliance and governance using AI systems themselves. For monitoring, reporting, compliance – those can be automated. As long as we keep humans in the loop of the business processes, we will experience a slowdown.

The other way of countering the issue is to get together on the defensive side of things. There’s far too little sharing of information. I’m talking about Cyberthreat Intelligence (CTI). Everyone has recognized for a long time that we need to share when we have a breach or even a potential breach. Rather than going into secrecy mode where we disclose as little as possible to anyone, we should be sharing information with governments and partner organizations. That way, we actually gain from their defensive posture and abilities.

Sharing cyberthreat intelligence is our way of pulling the cost down and spreading the burden across a collective defence network.

 

What is the first thing business leaders should do to prepare for what AI can do and will be used for?

 

R: When it comes to cybersecurity, technical solutions are only 10%. The other 90% is responsibility. Research shows that between 90 and 95% of cybersecurity incidents could have been avoided if we behaved responsibly. The second thing is that cybersecurity should be a consideration right from the start, not an afterthought. It’s like healthcare. You need to do what you can to avoid ever needing medical care in the first place. It’s the same here.

B: The number one thing is to make sure that your company appoints a Chief AI Officer. This may be someone who is also the CIO or CSO, but at the very least there should be board-level representation of AI and its impact on the business. Any business in the knowledge economy, financial industry, technology, as well as manufacturing and service industries – all are going to have to embrace AI. People may think it’s a fad, but AI will absolutely steamroll organizations that don’t embrace it immediately. That’s what I would do on day one. Within a couple of days after that, there must be a working group within the company to figure out how to roll out AI, because people will be using it whether openly or discreetly. AI is a tremendous force multiplier for running your business, but it is also a potential security threat for leakage of information out of the business. So you need a coherent rollout in terms of information flow, your potential weaknesses, embedding it into corporate culture, and bringing it into cybersecurity. Any company that ignores these things is in peril.

 

Where does ethics come into this?

 

R: No one can solve the problem of AI or cybersecurity individually. It needs to be collaborative. The EU AI Act outlines four categories of risk – unacceptable, high, limited, and minimal. The EU doesn’t consider it an individual state problem. In fact, they also have cybersecurity legislation that clearly states it supersedes state-level regulations. The UK, on the other hand, is slightly more pro-innovation. The good news is that they are focused on AI assurance research, which includes things like ethics, fairness, security, and explainability. So if businesses follow the EU AI Act and focus on AI assurance, they can lead with AI securely and responsibly.

B: There are a couple of leading frameworks for ethical and responsible AI use, including from the European Union as well as the UN. Many of the standards organizations have been working hard on these frameworks. Still, there is a sense that this is not something that can be naturally embedded within AI systems. On the other hand, I think it’s becoming increasingly likely and possible that we can build limited AI systems whose only job is to look out for the ethical and responsible behaviour of either humans or other systems. So we are potentially equipping ourselves with the ability to have the guardrails themselves be a form of AI that is very restricted and conforms to the rules of the EU or other jurisdictions.

 

Which areas do you see as having the biggest potential for using AI within cybersecurity – for example identification, detection, response, recovery?

 

B: I’m hesitant to separate them because each of those is exactly where AI should be applied. It’s possible to apply them in tandem. AI has an immediate role in detection and prevention. We can use it to evaluate the security posture of an organization and make immediate suggestions and recommendations for how to strengthen it. Still, we know that at a certain point, something will get through. It’s impossible to defend against absolutely everything. But it is important to make quick moves in terms of defending and limiting damage, sharing information, and recovering. Humans are potentially the weak links there too. Humans monitoring a system will need time to assess a situation and find the best path forward, whereas an AI can embody all the relevant knowledge within our network and security operation centres and generate recommendations more quickly. We can have faster response times, which are key to minimizing damage.

 

What are some significant upcoming challenges and opportunities within the AI-powered cybersecurity domain in the next two years?

 

R: Definitely behaviour analysis, not only to analyse systems but users as well for real-time, proactive solutions. The systems we design, including AI, are for us. We need to analyse our behaviour to ensure that we’re not causing harm.

B: Another thing AI is used for is training, within cybersecurity but across corporations as well. There’s a tremendous amount of knowledge, and many companies have training for a wide variety of things. These can be fed into AI systems that resemble large language models. AI can be used as a vector for training. The other thing is a challenge of how quickly organizations will decide to be open with peer companies. Will you have your Chief AI Officer sit at a roundtable of peers from other companies to actually share your cybersecurity horror stories? The other significant challenge is related to change management. People are going to get past the novelty of ChatGPT as a fun thing to play around with and actually develop increasing fears about potential job losses and other threats posed by AI.