AI-Powered Cybersecurity: Start With a Chief AI Officer

In this era of digitization, where data and connectivity underpin every business decision, protecting your digital assets isn’t just important; it is fundamental to business survival. AI offers the potential for a more resilient digital infrastructure, a proactive approach to threat management, and a complete overhaul of digital security.

According to a survey conducted by The Economist Intelligence Unit, 48.9% of top executives and leading security experts worldwide believe that artificial intelligence (AI) and machine learning (ML) represent the most effective tools for combating modern cyberthreats.

However, a survey conducted by Baker McKenzie highlights that C-level leaders tend to overestimate their organizations’ readiness when it comes to AI in cybersecurity. This underscores the critical importance of conducting realistic assessments of AI-related cybersecurity strategies.

Dr. Bruce Watson and Dr. Mohammad A. Razzaque shared actionable insights for digital leaders on implementing AI-powered cybersecurity.

 
Dr. Bruce Watson is a distinguished leader in Applied AI, holding the Chair of Applied AI at Stellenbosch University in South Africa, where he spearheads groundbreaking initiatives in data science and computational thinking. His influence extends across continents, as he serves as the Chief Advisor to the National Security Centre of Excellence in Canada.
Dr. Mohammad A. Razzaque is an accomplished academic and a visionary in the fields of IoT, cybersecurity, machine learning, and artificial intelligence. He is an Associate Professor (Research & Innovation) at Teesside University.
 

The combination of AI and cybersecurity is a game changer. Is it a solution or a threat?

 

Bruce: Quite honestly, it’s both. It’s fantastic that we’ve seen the arrival of artificial intelligence that’s been in the works for many decades. Now it’s usable and is having a real impact on business. At the same time, we still have cybersecurity issues. The emergence of ways to combine these two things is exciting.

Razzaque: It has benefits and serious challenges, depending on context. For example, in critical applications such as healthcare or driverless cars, it can be challenging. Driverless cars were projected to be on the roads by 2020, but it may take another 10 years. Similarly, with the safety of AI, I think it’s hard to say.

 

What are your respective experiences in the field of cybersecurity and AI?

 

B: I come from a traditional cybersecurity background where it was all about penetration testing and exploring the limits of a security system. In the last couple of years, we’ve observed that the bad guys are quickly able to use artificial intelligence techniques. To an extent, these things have been commoditized. They’re available through cloud service providers, and there are open-source libraries with resources for people to make use of AI. It means the barrier to entry for bad actors is now very low. In practice at the university, as well as when we interface with industry at large, we incentivize people to bring AI techniques to bear on the defensive side of things. That’s where I think there’s a real potential impact.

It’s asymmetrical warfare. Anyone defending using traditional methods will be very quickly overrun by those who use AI techniques to generate attacks at an extreme rate.

R: I’m currently working on secure machine learning. I work with companies that are developing solutions to use generative AI for automated responses to security incidents. I’m also working on research on secure sensing, such as for autonomous vehicles. This is about making sure that the sensor data is accurate, since companies like Tesla rely on machine learning. If you have garbage in, you’ll produce garbage out.

 

Given AI’s nature, is there a risk of AI developing itself as an attacker?

 

B: It fits well with the horror scenarios from science fiction movies. Everyone is familiar with Terminator, for example. We’re not yet at the point where AI can develop arbitrary new ways to attack systems. However, we’re also not far from that point. Generative AI, when given access to a large body of malicious code, or even fragments of computer viruses, malware, or other attack techniques, is able to hybridize these things rapidly into new forms of attack, quicker than humans can. In that sense, we’re seeing a runaway process. But it is still stoppable, because systems are trained on data that we provide them in the first place. At a certain point, if we let this run free to fetch code on the internet or be fed by bad actors, then we’ll have a problem where attacks start to dramatically exceed what we can reasonably detect with traditional firewalls or anomaly detection systems.

It scares me to some extent, but doesn’t keep me awake at night yet. I tend to be an optimist, and that optimism is based on the possibility for us to act now. There isn’t time for people to sit around and wait until next year before embracing the combination of AI and cybersecurity. There are solutions now, so there’s no good reason for anyone to be sitting back and waiting for an AI-cybersecurity apocalypse. We can start mitigating now.

R: We use ChatGPT and other LLMs that are part of the generative AI revolution. But there are also tools out there for bad actors like FraudGPT. That’s a service you can buy to generate an attack scenario. The market for these types of tools is growing, but we’re not yet at a self-generating stage.

 

Are we overestimating the threat of AI to cybersecurity?

 

B: A potential issue is that we simply do not know what else is out there in the malware community. Or rather, we have some idea as we interact with malware and the hacker community as much as we can without getting into trouble ourselves, but we do see that they’re making significant advances. They’re spending a lot of time doing their own research using commodity and open-source products and manipulating them in such a way that they’re getting interesting and potentially dangerous results.

 

How can the good guys stay ahead of bad actors? Is it a question of money, or the red tape of regulations?

 

R: Based on my research experience, humans are the weakest link in cybersecurity. We’re the ones we should be worried about. IoT is responsible for about 25% of overall security concerns but only sees about 10% of investment. That’s a huge gap. The bad guys are always going to be ahead of us because they do not have bureaucracy. They are proactive, while we need time to make decisions. And yes, staying ahead is also a question of money, but it’s also about understanding the importance of acting promptly. This doesn’t mean forgoing compliance and regulation. It means we have to behave responsibly, like changing our passwords regularly.

B: It’s very difficult to advocate for getting rid of governance and compliance, because these things keep us honest. There are some ways out of this conundrum, because this is definitely asymmetrical warfare where the bad guys can keep us occupied with minimal resources while we need tremendous resources to counter them.

One of the ways around it is to do a lot of the compliance and governance using AI systems themselves. Monitoring, reporting, compliance – those can be automated. For as long as we keep humans in the loop of those business processes, we will experience a slowdown.

The other way of countering the issue is to get together on the defensive side of things. There’s far too little sharing of information. I’m talking about Cyberthreat Intelligence (CTI). Everyone has recognized for a long time that we need to share when we have a breach or even a potential breach. Rather than going into secrecy mode where we disclose as little as possible to anyone, we should be sharing information with governments and partner organizations. That way, we actually gain from their defensive posture and abilities.

Sharing cyberthreat intelligence is our way of pulling the cost down and spreading the burden across a collective defence network.

 

What is the first thing business leaders should do to prepare for what AI can do and will be used for?

 

R: When it comes to cybersecurity, technical solutions are only 10%. The other 90% is responsibility. Research shows that between 90% and 95% of cybersecurity incidents could have been avoided if we behaved responsibly. The second thing is that cybersecurity should be a consideration right from the start, not an afterthought. It’s like healthcare. You need to do what you can to avoid ever needing medical care in the first place. It’s the same here.

B: The number one thing is to make sure that your company appoints a Chief AI Officer. This may be someone who is also the CIO or CSO, but at the very least there should be board-level representation of AI and its impact on the business. Any business in the knowledge economy, financial industry, or technology, as well as the manufacturing and service industries – all are going to have to embrace AI. People may think it’s a fad, but AI will absolutely steamroll organizations that don’t embrace it immediately. That’s what I would do on day one. Within a couple of days after that, there must be a working group within the company to figure out how to roll out AI, because people will be using it whether openly or discreetly. AI is a tremendous force multiplier for running your business, but also a potential security threat for leakage of information out of the business. So you need a coherent rollout – in terms of information flow, your potential weaknesses, embedding it into corporate culture, and bringing it into cybersecurity. Any company that ignores these things is in peril.

 

Where does ethics come into this?

 

R: No one can solve the problem of AI or cybersecurity individually. It needs to be collaborative. The EU AI Act outlines four categories of risk – unacceptable, high, limited, and minimal. The EU doesn’t consider it an individual state problem. In fact, they also have cybersecurity legislation that clearly states that it would supersede state-level regulations. The UK, on the other hand, is slightly more pro-innovation. The good news is that they are focused on AI assurance research, which includes things like ethics, fairness, security, and explainability. So if businesses follow the EU AI Act and focus on AI assurance, they can lead with AI securely and responsibly.

B: There are a couple of leading frameworks for ethical and responsible AI use, including from the European Union as well as the UN. Many of the standards organizations have been working hard on these frameworks. Still, there is a sense that this is not something that can be naturally embedded within AI systems. On the other hand, I think it’s becoming increasingly likely and possible that we can build limited AI systems whose only job is looking out for the ethical and responsible behaviour of either humans or other systems. So we are potentially equipping ourselves with the ability to have the guardrails themselves be a form of AI that is very restricted and conforms to the rules of the EU or other jurisdictions.

 

Which areas do you see as having the biggest potential for using AI within cybersecurity – for example, identification, detection, response, recovery?

 

B: I’m hesitant to separate them because each of those is exactly where AI should be applied. It’s possible to apply them in tandem. AI has an immediate role in detection and prevention. We can use it to evaluate the security posture of an organization and make immediate suggestions and recommendations for how to strengthen it. Still, we know that at a certain point, something will get through. It’s impossible to defend against absolutely everything. But it is important to make quick moves in terms of defending and limiting damage, sharing information, and recovering. Humans are potentially the weak links there too. Humans monitoring a system will need time to assess a situation and find the best path forward, whereas an AI can embody all the relevant knowledge within our network and security operation centres and generate recommendations more quickly. We can have faster response times, which are key to minimizing damage.

 

What are some significant upcoming challenges and opportunities within the AI-powered cybersecurity domain in the next two years?

 

R: Definitely behaviour analysis, not only to analyse systems but users as well for real-time, proactive solutions. The systems we design, including AI, are for us. We need to analyse our behaviour to ensure that we’re not causing harm.

B: Another thing AI is used for is training, within cybersecurity but across corporations as well. There’s a tremendous amount of knowledge, and many companies have training for a wide variety of things. These can be fed into AI systems that resemble large language models. AI can be used as a vector for training. The other thing is a challenge of how quickly organizations will decide to be open with peer companies. Will you have your Chief AI Officer sit at a roundtable of peers from other companies to actually share your cybersecurity horror stories? The other significant challenge is related to change management. People are going to get past the novelty of ChatGPT as a fun thing to play around with and start to develop increasing fears about potential job losses and other threats posed by AI.

7 B2B Networking Mistakes & How to Avoid Them

B2B networking is one of the pillars of sales and marketing. In fact, the closing rate for customers obtained via business networking is 40% (Hubspot). Mastering the art of B2B networking is essential for every sales professional to build relationships with potential customers, learn about new opportunities, and stay ahead of the competition.

However, it can be a tough task, as there are B2B networking mistakes that even experienced salespeople might make that can cost them valuable time and resources. Fortunately, with every challenge there are solutions.

 

1. Being Unprepared

 

The worst thing you can be at a networking event, whether virtual or physical, is unprepared. Not knowing who might be attending or the agenda of the event will lead you to uninteresting, surface-level conversations. More importantly, you need to know which attendees are within your target market so you can focus on networking with the right people. In sales, a buyer persona is your biggest asset and can lead to a 171% increase in sales for companies that understand who their target market is (Zipdo).

In that vein, doing a little research on the specific people you want to network with at events will save you a lot of time and give you the best opportunity to connect with the right people.

Steps:

  • Request a list of attendees prior to the event (if possible) and research them and their companies. Look up any interviews they might have done and find out about their latest projects. This can help spark conversations.
  • Make a shortlist of who you’d most like to meet and connect with them on LinkedIn before the event.
  • Make a note of the interesting sessions at the event, especially those related to your products or services, so you can attend them and connect with attendees who were interested in the same session.
 

2. Overselling

 

Networking is a skill that requires subtlety and precision. A salesperson who turns every conversation into a sales pitch and only talks about their own products and services is setting themselves up to fail.

Instead, focus on active listening – that’s a big part of communicating and forging a connection. People want to know that their needs and challenges are heard. Listen to understand and tailor your responses accordingly. Networking is about building relationships, not making a sale.

Steps:

  • Begin a conversation by asking about specific topics that were discussed at the event.
  • Guide the conversation towards their needs, challenges, and interests in that area.
  • Ask open-ended questions and listen attentively so you can ask relevant follow-up questions.
 

3. Fumbling Your Introduction

 

A big challenge of networking events, especially in-person ones, is the limited time you may have to speak to everyone. There’s a lot of movement and social interactions happening, which means you must make a good impression immediately or risk losing people’s interest.

Something as simple as a firm handshake can create a good first impression for 72% of people meeting in person. Approaching with confidence is also key, as 70% of communication is non-verbal. Your posture, demeanour, and how you choose to approach someone will set the tone for the rest of the conversation.

Steps:

  • Prepare two to three simple introductions you can use when meeting people that will set you apart. Make sure to include who you are, what you do, and your interests. End the introduction with a question for the person you’re talking to that will push the conversation forward.
  • Prepare conversation boosters you can call on if an interaction seems to be struggling. These should be open-ended questions about their interests, current events, or shared experiences.
  • In a virtual setting, you can include a link to your LinkedIn profile or website as part of your introduction.
 

4. Being Inflexible

 

When attending any networking event, be it virtual or in-person, you will meet all kinds of personalities. A style of approach that works on one person may not work for another, which means you must be flexible in your style of communication.

Attention is key. If you notice that someone only speaks up when asked direct questions, do exactly that. There may be those who prefer talking about current events, or others who prefer exchanging business cards before talking about anything. You need to be adaptable and mirror the style of the person you’re networking with – that is how you build trust and familiarity. This is especially true for in-person networking, which 77% of professionals agree gives them the ability to read body language and facial expressions, which can lead to stronger business relationships.

Beyond that, Gitnux found that 80% of B2B buyers expect a B2C-like experience, which tends to be more personalised. This is where you can shine, by adjusting your style to suit the person you’re talking to.

Steps:

  • Pay attention to how the person speaks and their body language to mirror their stance and subtle movements.
  • Listen actively and adjust your tone, pace, and language to match the person you’re engaging with.
  • If you notice that the person you are talking to is disengaged, politely excuse yourself and offer a business card as a gesture of good faith. Do not take it personally.
 

5. Ending A Conversation Abruptly

 

How you end a conversation is just as important as first impressions. It’s your final chance to make a mark with the person you’re talking to and sometimes, it can be exactly what they remember you for. Fumbling your exit is a B2B networking mistake that can undo all the great work you’ve done so far to cultivate a professional connection.

If you remember to do just one thing, let it be an offer to connect on social media such as LinkedIn or Facebook, the two largest platforms used by business professionals. In fact, 35% of business professionals note that a conversation on LinkedIn has led to fresh opportunities, including business partnerships. Make the most of that.

Steps:

  • Make sure to give out your business card, if you hadn’t already, at the end of the conversation. When receiving a card, make sure to accept graciously and stow it away in a separate pocket from your own.
  • Thank them for their time and insight. Tell them you enjoyed the chat and recount any key points that stood out to you.
  • Keep it light, friendly, and suggest a follow-up conversation or meeting. Offer to connect on LinkedIn.
 

6.  Mishandling Business Cards

 

Though it may seem arbitrary in a digital age, business cards are still an important element of B2B networking. Exchanging them is a sign of respect and a quick way to exchange contact details. Research suggests that 63% of people throw away business cards within 7 days of receiving them, but that sales still increase by 2.5% for every 2,000 business cards handed out. It’s a numbers game that you can win by giving out your cards to as many people as possible.

On the other hand, what you do with business cards that you receive is equally important. Receiving a business card isn’t just an exchange of contact details, it’s an invitation to stay in touch. You do not want to mishandle that gift and risk losing the connection you just made.

Steps:

  • Give out your business card while clearly pointing out any personal contact details such as a personal phone number or extension.
  • When you receive a business card, take a pause to look at it properly and ask a question about the information on the card, e.g. the company’s location, the person’s title, or the design of the card.
  • Ensure that you place the business card in a separate pocket or card holder to keep it from getting mixed up with your own.
 

7.  Not Setting a Follow-up

 

The whole reason for networking is to create connections and grow possibilities. This is especially true given that 69.7% of attendees consider in-person B2B conferences the best opportunity to learn about new products or services (Bizzabo).

So not following up on connections made while networking is essentially letting a lead go cold, and all that work will have been wasted. It’s also important not to open your follow-up with a sales pitch.

Steps:

  • Before exiting the conversation, make sure to offer to connect online by exchanging LinkedIn profiles and/or email addresses.
  • Offer to meet again sometime soon to share more insights on the conversation you were just having. If you sense potential, invite them to schedule a one-to-one meeting.
  • Make sure to follow up within the first 48 hours to thank them for their time. Reference a point in the conversation you had and make it clear that you are happy to continue the conversation.

7 Enterprise Architecture Trends to Watch in 2024

The rapid pace of technological change in recent years has become a catalyst for the evolution of the role that enterprise architects play within organizations. No longer just responsible for designing and managing IT systems, enterprise architects are now strategic partners in helping organizations achieve their business goals through the use of technology. In fact, 79% of leaders say demand for EA services has increased in the past year (BizzDesign).

The role of enterprise architects has become increasingly cross-functional, working with stakeholders from across the organization to ensure alignment of IT systems with business needs and the needs of all users, beyond just a technical perspective.

The current digital landscape continues to evolve at a lightning pace, giving rise to the need for more mature EA functionality to help break down tech silos and create better collaboration within organizations. Enterprise architects must forge a strong strategic partnership with CIOs to enable the development of configurable and scalable solutions and define a roadmap for future changes.

Here are seven trends that are poised to shape enterprise architecture in 2024.

 

1. Enterprise Architects as Key Change Enablers

 

Previously, the job of enterprise architects was focused on reducing an organization’s technical debt and simplifying their application portfolios to increase business agility. Now, amid rapidly evolving technological and business environments where cloud and API-led services have made systems more complex, there is no longer a need to manage technical debt in the same way. Instead, speed and adaptability are crucial.

Old-style governance processes are too rigid and slow for the current pace of innovation, which limits teams’ ability to innovate at speed and deliver a competitive advantage.

This is where enterprise architects come in as key change enablers and must champion the value of EA to stakeholders by highlighting how it is essential for a smooth digital future. This will ensure higher EA adoption, increased investment, and ultimately multiply the impact of architects in the organization.

 

2. Enterprise Architecture to Drive Sustainability

A study by MSCI ESG Research LLC found that progress on the sustainable development goals as defined by the United Nations is sorely lacking. In fact, only 38% of respondents were “aligned” with the goals while 55% were either “neutral” or “misaligned”. These findings add fuel to the greenwashing flame, which is no longer tolerated.

EA is an excellent tool for acting on sustainability drivers and defining how different sections of an organization can link together to trace sustainability metrics and progress. Enterprise architects can leverage different frameworks to first run an impact analysis for the organization, in consultation with experts, to inform their models and insights. From there, it’s important to define actions that can be taken to drive sustainability practices forward, both from a technology and a people perspective, while determining adjustments that may be necessary along the way.

 

3. AI-enabled Enterprise Architecture

 

The hype over artificial intelligence (AI) is not waning. In fact, we’re moving from hype to practical application of AI in business as the technology goes mainstream. What this means for enterprise architects is that AI will start to become a practical tool to enable smarter design practices. On the other hand, the rapid adoption of AI technologies also means that enterprise architects will have to design systems that support the application of AI-enabled tools.

In the world of EA, AI can be used to optimize architecture documentation with the intention of improving data quality in the face of continuous change. AI-enabled EA can also intensify collaboration and make EA more accessible to organizations as an easier and quicker way to create models.

 

4. Real-time Compliance

 

With the conversation surrounding AI regulation and data privacy taking centre stage, enterprise architects will be called upon to help ensure that organizations are compliant with new and changing regulations.

The target now for most organizations that want to be data-driven and proactive is real-time compliance. A mature EA function will be necessary to report on the scope of controls and the state of compliance, and to provide access to real-time evidence of effectiveness.

Everyone from CISOs to internal auditors and risk managers will benefit from mature EA, which can provide an in-depth, enterprise-wide view of standards and policies that must be adhered to. This benefits regulators as well who will be able to rapidly access and read compliance reports.

From implementation to coordination, visibility, and traceability, EA can help inform boardroom-level decision-making as well as downstream measures and processes. This will end up being a cost-saving move and a proactive approach to risk management for organizations.

 

5. Capability-based Planning

 

In a Harvard Business Review study of the global recessions of 1980, 1990, and 2000, 9% of organizations were found to have flourished, outperforming their competitors during the recovery period by more than 10% in profit and sales growth. It was posited that the difference lay in how these businesses made contingency plans and were prepared for various scenarios.

This agility is crucial, especially in the wake of not only a global pandemic but a major global economic crisis. Organizations that are poised to make smart investment decisions during economic downturns won’t end up wasting limited resources. This is enabled by EA.

With EA, an organization’s existing capabilities can be mapped out with a focus on a chosen scope, validated by stakeholders and subject matter experts. Enterprise architects can then perform an assessment on the strategic importance of these capabilities and inform changes to future initiatives and technological networks.

 

6. Focus on Customer Experience (CX)

 

The increased focus on customer experience across industries is fueled by consumers being empowered via online platforms. Organizations are paying more attention to delivering better and more personalized experiences that meet individual expectations and needs. This, in turn, leads to the rising complexity of an organization’s IT landscape.

Businesses need to leverage architecture tools to help map and understand customer dynamics across multiple channels and adapt to changes quickly. Mature EA can help organizations identify redundant systems that negatively impact customer experience and pinpoint areas where data can be better shared across systems for a more seamless customer journey.

Enterprise architects can design systems to improve the efficiency and effectiveness of customer-facing processes, enable the development of innovative front-end applications and services, and improve the scalability and reliability of IT systems.

 

7. Emerging Tech

 

Of course, emerging technologies will continue to shape enterprise architecture in the coming years. There are plenty of new technologies and tools that will affect how EA evolves, but the most pressing are as follows:

  • Machine learning: Much like AI, machine learning will become more prevalent as a means of automating EA tasks such as pattern identification, prediction, and generating actionable recommendations.
  • Cybersecurity: Emerging technologies in the realm of cybersecurity, such as biometric authentication and decentralized identity and access management (IAM), will require enterprise architects to adapt their designs to ensure stronger security.
  • Mobile Computing: The increased proliferation of mobile devices and applications in this world of remote and hybrid work will require enterprise architects to design IT systems that are mobile-forward, accessible from anywhere, and still secure.
  • Internet of Things (IoT): As vast amounts of data continue to be generated by IoT, enterprise architects must design systems that are capable of collecting, storing, and analysing the data to support business functions and achieve organizational goals.
  • Extended Reality (XR): This includes augmented reality (AR), virtual reality (VR), and mixed reality (MR). These technologies will enable enterprise architects to design and visualize IT systems and processes in new and innovative ways.
  • Blockchain: The use of blockchain technology as a way to create secure, tamper-proof records of transactions and data will be valuable to EA practices.

AI Governance: Balancing Competitiveness with Compliance

The AI landscape is innovating at full speed. From the recent release of Google’s Bard and OpenAI’s ChatGPT Enterprise to growing implementation of AI tools for business processes, the struggle to regulate AI continues.

In Europe, policymakers are scrambling to agree on rules to govern AI – the first regional bloc to attempt a significant step towards regulating this technology. However, the challenge is enormous considering the wide range of systems that artificial intelligence encapsulates and its rapidly evolving nature.

While regulators attempt to ensure that the development of this technology improves lives without threatening rights or safety, businesses are scrambling to maintain competitiveness and compliance in the same breath.

We recently spoke to two experts on AI governance, Gregor Strojin and Aleksandr Tiulkanov, about the latest developments in AI regulation, Europe’s role in leading this charge, and how business leaders can manage AI compliance and risks within their organizations.

 
Gregor Strojin is the Vice Chair of the Committee on Artificial Intelligence at the Council of Europe and former chair of the Ad Hoc Committee on AI. He is a policy expert whose roles have included senior adviser to the President of the Supreme Court of Slovenia and State Secretary at the Ministry of Justice.
Aleksandr Tiulkanov is an AI, data, and digital policy counsel with 18 years of experience in business and law. He has advised organizations on matters relating to privacy and compliance for digital products and in the field of AI.
 

Europe Trailblazing AI Governance

 

Why does AI need to be regulated?

Aleksandr: Artificial intelligence is a technology that we see almost everywhere nowadays. It is comparable to electricity in the past, but more influential. Crucially, it’s not always neutral in how it affects society. There are instances where technologies based on artificial intelligence affect decisions which, in turn, affect people’s lives. In cases where there is a high risk of impact, we should take care and ensure that no significant harm arises.

Gregor: Regulations are part of how we manage societies in general. When it comes to technology that is as transformative as AI, we are already faced with consequences both positive and negative. When there is a negative impact, there is a responsibility either by designers, producers, or by the state to mitigate and minimize those negative effects on society or individuals. We’ve seen the same being done with other technologies in the past.

 

Former President Barack Obama said that the AI revolution goes further and has more impact than social media has. Do you agree?

Gregor: Definitely. Even social media has employed certain AI tools and algorithms that grab our attention and direct our behavior as consumers, voters, schoolmates – that has completely changed the psychology of individuals and the masses. AI is an umbrella term that encompasses over a thousand other types of uses.

AI will change not only our psychology but also logistics and how we approach problem solving in different domains.

Aleksandr: The change is gradual. As Gregor said, we already see it in social media – for example, in content moderation. Those are largely based on language and machine learning models. AI is driving what we see on the platform as well as what we can write and even share. To some extent, it means that some private actors are influencing freedom of speech.

 

Let’s talk about the role of Europe in AI compliance regulations. Can you explain why Europe is a trailblazer here?

Gregor: Europe has a special position geopolitically due to its history. It’s not one country. It’s a combination of countries that are joined by different international or supranational organizations, such as the European Union and the Council of Europe, to which individual countries have ceded parts of their sovereignty. This is a huge difference compared to the United States or China, which are completely sovereign in their dealings.

When it comes to the European Union in particular, many types of behaviors are regulated by harmonizing instruments of the EU to create a uniform single market and provide some level of quality in terms of safety and security to all citizens – so we don’t have different rules in Slovenia, Germany, France, or Spain. Instead, this is one market of over 500 million people.

 

Gregor, can you give us a brief overview of the latest developments in AI regulation and compliance in the EU?

Gregor: There are two binding legal instruments that are in the final phases of development. The most crucial one is from the European Union, the AI Act. It is directed at the market itself and is concerned with how AI is designed, developed, and applied by developers and users. The AI Act addresses a large part of the ecosystem, but it does not address the people who are affected by AI. Here is where the second instrument comes in, the Convention on AI that is being developed by the Council of Europe.

Another thing to mention is that the EU’s AI Act only applies to EU members and is being negotiated by the 27 member states. The Council of Europe’s instrument is being negotiated by 47 member states as well as observer states and non-member states such as the United States, Canada, Japan, Mexico, and Israel. The latter has a more global scope.

In this way, I see the EU’s AI Act as a possible mode of implementation of the rules set by the conventions of the Council of Europe. This is still partially theoretical, but it’s likely we’ll see both instruments finalized in the first half of next year. Of course, there will be a transitory period before they come into effect. This is already a good indication of how businesses must orient themselves to ensure compliance in due time.

 

Should what the EU is doing be a blueprint for the rest of the world?

Gregor: Yes, if they choose to. I think many in Europe will acknowledge that we have different ways of approaching problems and freedom of will, but if you want to do business in Europe, you have to play by Europe’s rules. This is an element in the proposed AI Act as well as the General Data Protection Regulation (GDPR) legislation from the past decade which employs the Brussels effect – meaning that the rules applied by Europe for Europe also apply to companies outside of Europe that do business here even if they do not have a physical presence here. So, if producers of AI from China or the United States wish to sell their technology in Europe, they have to comply with European standards.

 

What are the business implications of the European approach?

Aleksandr: The European approach harmonizes the rules for a single market. It’s beneficial for businesses, as they won’t have to adapt to each country’s local market. I’d say it’s a win-win for businesses approaching the European continent. We’ve already seen this happening with the GDPR. As long as they have a European presence, they adopt the European policy globally. This could happen with AI regulations as well.

If you look at the regulatory landscape, we can see some regulatory ideas coming up in North America and other continents. In China, there are some regulatory propositions. But I would say that the European approach is the most comprehensive. Chances are it will be taken as a basis by many companies.

 

Balancing Innovation and Compliance

 

What do you say to concerns that this is just another set of regulations to comply with in a landscape that is constantly innovating at speed?

Gregor: I’ve been working with technology for more than 20 years. I also have experience with analog technology that is regulated, like building construction.

What we’re dealing with here is not just regulation for regulation’s sake, but it benefits corporations in the long run because it disperses risk and consequences of their liabilities. It creates a more predictable environment.

There are many elements of regulation that have been proposed for AI that have been agreed to by different stakeholders in the process. We must consider that the industry was involved in preparing both these regulatory instruments I’ve mentioned.

Some issues, like data governance, are already regulated. There are, of course, disagreements on elements like transparency, because there may be business advantages that are affected by regulation. On the other hand, technology does not allow for everything. There are still open questions on what needs to be done to ensure higher quality in the development process to mitigate risk.

 

So there needs to be a balance between regulation, competitiveness, and the speed of innovation. How can we be assured that AI regulation does not harm competitiveness in business?

Gregor: The regulation proposed by the European Commission is just one element in the basket of proposals of the so-called Digital Agenda. There are, of course, some other proposals on content moderation that came into existence just recently that are binding. But there are also several instruments which address the promotion and development of AI systems, both in terms of subsidies for companies and individuals to develop digital skills and to create a comprehensive and stable environment for IT technology in Europe. There are billions being thrown into subsidies for companies and innovators. There is a big carrot, and the stick is in preparation, but it is not here yet.

Aleksandr: I must also underline that there are things in place that facilitate the upcoming EU regulation, such as regulatory sandboxes. You may have seen an example of this in Spain. Businesses will be able to test their hypotheses about how they want to operate AI systems that could potentially be harmful.

It’s important to understand that the scope of the regulation is not overly extensive. I would say it covers really high-risk systems to a large extent, and some lower-risk systems but only where it’s important. For example, there are transparency obligations when it comes to deepfakes for lower-risk systems. Then there are meaningful rules for high-risk systems which affect people’s lives – like government aid or the use of AI in law enforcement or hiring.

It’s important to have proper data governance and risk management in place for systems that affect people on a massive scale.

Also, if you look at mature organizations with this technology already in the market, they are making sure that the data used to train their AI systems is good enough. They are doing it themselves as they don’t want to get in trouble with their clients. Regulations are not so unusual.

 

In that case, will innovation be faster than the regulations can keep up with?

Gregor: That’s a pertinent question when it comes to technology. It is imprudent, from the position of a policymaker, to try to regulate future developments as that would impede innovation.

I don’t think there’s any impediment to innovation happening at this moment. Perhaps you could categorize getting subsidies for being compliant with ethical recommendations as that, but it’s not really an impediment.

In the future, there will be limitations on AI innovation to the same degree as in biotechnology, for example, where there are clear limits on what is allowed and under what conditions, to prevent harm. That is narrowly defined. The general purpose, of course, is to increase the quality of these products and create a safe environment and as predictable a playing field as possible for customers in the market.

 

Business Focus: AI-Risk Management

 

What’s coming up next on AI governance that business leaders should consider?

Gregor: At this point, what’s coming up next for policy development is the fight back from those who do not want such legislation. It’s something we’ve already seen this year. Many think we had an AI revolution only this year. No. It’s a technology that’s been around for a few years and there have been calls for regulation of AI on the basis of existential threats.

If we take those calls seriously, we must completely backtrack and change the direction of what is already being developed.

But I do think if we follow through with what has been proposed to ensure the safety and security of this technology, we will also solve the problem of a so-called super intelligence taking over humanity. First, we need to ensure correct application of existing rules to human players.

 

With all this in mind, what advice do you have for business leaders when it comes to regulations and compliance in the field of AI? What can they start with tomorrow?

Aleksandr: Technical standards will be the main thing. I would advise all those developing this technology to take part in technical committees in their national standard setting bodies which can then translate into work on the European level of standards.

Take into account your practical concerns and considerations so that these technical standards can address business concerns in terms of product development. It is important to follow and participate in this work on regulation development for the AI ecosystem.

Another thing is to consider risk management frameworks to address AI-specific risks. The NIST and ForHumanity risk management frameworks are practical tools for organizations to control how they operate and deploy AI systems in a safe and efficient manner. Business leaders can also begin to appoint people who would be responsible for setting up processes.

There will be a transitional period, as there was with the GDPR. If companies can demonstrate that they are compliant with European standards that are still under development, they will automatically be considered compliant with the EU AI Act. But this is ongoing work.

Start considering broader risk management frameworks as a first step to get the ball rolling in organizations.

Gregor: Technical development skills alone are not sufficient to build a competitive and scalable organization, especially as not only Europe but other regions are preparing to introduce regulatory measures. My advice is similar to Aleksandr’s: build on your capacities for risk and compliance management. I think it will pay off quite soon.

Tia Jähi, KONE: Finding the Right Partner & Partnering Right

From cost fluctuations to accelerated digital transformation, business leaders face the crucial responsibility of managing organizational resources smartly to stay competitive and at the forefront of development.

The plethora of IT solutions designed to optimize businesses drives the need for better IT vendor management and data-driven decision-making to maximize an organization’s potential. Indispensable as it is to IT leaders, vendor management is a complex and challenging task fraught with potential pitfalls.

We spoke to Tia Jähi, KONE’s Head of Business IT, about the organization’s supplier management approach and how to cultivate rewarding partnerships with vendors in the long run.

 
Tia Jähi is an IT leader at KONE, the renowned elevator company with the mission to improve the flow of urban life. She has served in various roles at the company since 2012 and is experienced in vendor management and relationship management, with a focus on cost optimization, efficiency, and creating a culture of collaboration. Currently, she is leading IT for KONE Europe.
 

How has your strategy for IT supplier management evolved over the last few years, especially with the rapidly changing digital environment?

 

The past years have been quite change-intensive, with the pandemic, remote working, digital advances, and the consolidation of the supplier market. We have seen quite significant changes as technology suppliers consolidate, and to the market overall.

When it comes to the impact of this on strategy, it’s about being very active in monitoring the changes and creating a new outlook, because there have been cases when some of our core technologies have merged into other providers. We have close relationships, so when something happens, we can have the dialogue to ensure continuity of priorities.

Additionally, we also create transparency into our pipeline. For example, when we’re working with Supplier A, which then goes through a merger or change in the market, perhaps expanding their scope, we are not just reviewing them as a supplier for technology in area A but also as a strategic supplier. We open our roadmap from the perspective that even though we are currently doing business in this area, there are other core areas we could be investing in, and that leads us to broaden our discussions. Of course, this applies especially to strategic suppliers.

 

In your experience, what are the most important components of a successful supplier relationship?

 

I often say the first thing is finding the right partner and running your RFIs and RFPs, but that’s not the hardest part. For example, if we think about service providers in the IT space, there are many world-class players, most of whom are capable of doing the job. Then, it’s about our priorities and what’s right for the moment. After that, the work starts.

The more important and difficult part is partnering, and this happens regardless of whether it’s a strategic or preferred partner. It’s not just a relationship based on KPIs and monthly reports or spending. It’s about having intimacy on all levels.

Partnerships are not just selecting a vendor; you need to be a partner too.

This leads to my earlier point about sharing your roadmaps with your partners. You also need to understand their KPIs and expectations. If they’re entering a new market, for example, you need to be ready for them as well. Only then can you really build a sustainable, long-term, mutually beneficial relationship.

I was once asked how I would handle a quality issue with a supplier. The textbook answer would be to monitor SLAs and apply the penalties. That’s what you can do contractually. But you are not any better off even if you get service credits because you still have a problem. This is where partnering right comes up. We need to jointly understand the root causes of the issue. Is there attrition in the team? Immaturity in the technology? Depending on the relationship, what effort do I need and am willing to put in?

For example, we’ve seen, in the past year, both in the partner and corporate landscape that there has been huge attrition and turnover in certain geographies. I have to decide if this is something I want to invest more effort into – such as onboarding new team members. Again, it may contractually be the supplier’s responsibility to ensure the right quality of competence, but we’re both suffering. So do we jointly battle against attrition? It’s about making sure the teams are integrated and the supplier’s team feels like they are part of this company as well.

We do quite a lot of team building with our internal and supplier teams. For our part, we are investing in the relationship and in creating a joint vision for the whole team. To me, that is partnering right.

It boils down to getting people engaged and leading with a vision and a motivating view of the future.

This doesn’t only apply to colleagues on your payroll. We extend our recognition practices to our supplier teams as well. I think this is a core success element that should be considered, especially when we’re outsourcing part of the work more and more. It creates certain benefits without fully taking away leadership responsibilities.

 

What was one unexpected challenge you have faced when dealing with a supplier, and how did you overcome that challenge?

Of course, you need to be prepared for everything. But one example is the acquisition of technology suppliers, where all of a sudden you enter a new relationship with a provider who has acquired a technology supplier you’ve been working with. There might be questions and concerns about their other customers, such as: are they working with our competitors?

There have even been cases where these deals are announced publicly, so you become aware of a significant change via a press release. This raises questions. Are your counterparts still in place? How do you extend the relationship to the new leadership?

The other challenge comes with the Great Resignation following the pandemic. Especially with our colleagues in India, we’re seeing attrition numbers in double digits. Again, it’s a joint challenge that must be addressed.

 

What is your best advice to other IT leaders and C-level executives when it comes to creating a successful supplier management strategy?

 

It comes down to first understanding your data. We have done significant exercises, starting from the purchase order level, to figure out who the suppliers are, where we are spending, and what the future strategic opportunities are. Then it’s about creating a segmentation framework and starting to drill down.

From the C-level, it’s about commitment to things like executive sponsorship for strategic suppliers. If we have great engagement from the executive board, then when we decide to treat a company as a strategic partner, that commitment comes all the way from the top. That gives us participation and helps build relationships. Of course, there are the core principles, such as governance, that will differ based on the supplier tier. Either way, there has to be skin in the game at every level.

Looking at the different approaches you could take with strategic level suppliers, it’s about investing time and sharing roadmaps. They need to feel like a strategic partner, and they need to feel that you are a strategic partner to them with opportunities and a joint vision.

It’s also about looking at chances to consolidate. When you have over 100 suppliers, not everyone can be strategic. But if you look at the more niche or smaller players, are there opportunities for consolidation down the line? Consolidation might be beneficial from a cost management perspective.

Essentially, once you understand the data, start with three tiers of partners: strategic, preferred, and others.

 

Can you talk us through KONE’s process of coming up with clear supplier selection criteria?

 

Depending on the scope, volume, and newness of the initiative, we usually start our own education journey with an RFI round to understand the market and refine our requirements, as well as get to know the technologies and capabilities available.

With the RFP process, we have a firm framework. Then it comes to candidate assessment. Here we create a visible comparison between the first-round suppliers and decide which ones to move forward with.

For example, when selecting a technology supplier, functional requirements are very important. But equally important are non-functional requirements from the perspective of cybersecurity, privacy, performance, and global reach. There’s also the cost and total cost of ownership, as well as their potential as a partner.

We do this rating transparently with a broad internal team. But the weight of each factor can change. For example, functional requirements may be worth 50% of the total evaluation with the other parts being smaller. Either way, we utilize the framework and fit it to the initiative we are working on.

Of course, there may be a clear answer from the ratings. But there are also soft elements to consider that may play a role as well. This is the partnership aspect. Do we believe in their roadmap? Are they innovators or a basic provider?

It’s not just about finding the right partner but partnering right.

We have made bets that have turned out to be very successful today, where a supplier may have equal functionalities, but we chose one over the other because of their technical design and future orientation.

You can’t distill everything into an Excel sheet that gives you a number that is the answer. Even in this phase of the partnership, you must invest time to truly understand them.

*The answers have been edited for length and clarity.

How to Use AI in Cybersecurity for Business

With rapid advancements in technology, security leaders are actively exploring how to use artificial intelligence (AI) in cybersecurity as traditional measures alone may no longer be sufficient in defending against sophisticated threats. AI has emerged as a potentially powerful tool in bolstering cybersecurity efforts, offering enhanced threat detection, prediction, and response capabilities among other uses.

A survey by The Economist Intelligence Unit revealed that 48.9% of global executives and leading security experts believe that AI and machine learning (ML) are best equipped to counter modern cyberthreats. Additionally, IBM found that AI and automation in security practices can shorten threat detection and response times by as much as 14 weeks and reduce the costs associated with data breaches. Global interest in AI’s potential to counter cyberthreats is evident in the growing investments in it: the global AI-in-cybersecurity market is projected to reach USD 96.81 billion by 2032.

Despite the promise of AI, Baker McKenzie found in a survey that C-level leaders tend to overestimate their organization’s preparedness in relation to AI in cybersecurity. This serves to underscore the importance of realistic assessments on AI-related cybersecurity strategies.

 

Security Applications of AI

Many tools on the market leverage subsets of AI, such as machine learning, deep learning, and natural language processing (NLP), to enhance the security ecosystem. CISOs are challenged with finding the best ways to incorporate cybersecurity and artificial intelligence into their strategies.

 

1. Enhanced Threat Detection and Response

One of the main examples of AI in cybersecurity is its use for detecting malware and preventing phishing. AI-powered tools have been shown to be significantly more efficient than traditional signature-based systems.

Where traditional systems can prevent about 30% to 60% of malware, AI-assisted systems have an efficiency rate of 80% to 92%.

Researchers at Plymouth University detected malware with an accuracy of 74% across all file formats using neural networks, with accuracy between 91% and 94% for .doc and .pdf files specifically. As for phishing, researchers at the University of North Dakota proposed a machine learning-based detection technique that achieved an accuracy of 94%.
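
To make this concrete, a minimal machine-learning phishing detector can be assembled in a few lines. The sketch below uses scikit-learn with toy sample emails; it is illustrative only, not the cited researchers’ implementation.

```python
# Minimal ML phishing-detector sketch. Sample data and model choice
# are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled samples (1 = phishing, 0 = legitimate).
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month's subscription is attached",
    "You won a prize! Click this link to claim your reward",
    "Meeting moved to 3pm, see updated agenda",
]
labels = [1, 0, 1, 0]

# TF-IDF turns raw text into numeric features; logistic regression
# learns which terms correlate with phishing.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score an unseen message: probability that it is phishing.
print(model.predict_proba(["Confirm your password immediately"])[0][1])
```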

Given that phishing and malware remain the biggest cybersecurity threats for organizations, this is good news. These advancements enable organizations to identify potential threats more accurately and respond proactively to mitigate risks that could cause massive financial and reputational damage.

 

2. Knowledge Consolidation

A pressing issue for CISOs is the sheer volume of security protocols and software vulnerabilities their teams must keep track of. An advantage of AI in cybersecurity is that ML-enabled security systems can consolidate vast amounts of historical data and knowledge to detect and respond to security breaches. Platforms like IBM Watson leverage ML models trained on millions of data points to enhance threat detection and minimize the risk of human error.

AI improves its knowledge of cybersecurity threats and risks by consuming billions of data points and recognizing patterns and anomalies faster than humans can. This lets it learn from past experience and develop increasingly efficient ways to combat cyberattacks, allowing AI-powered security systems to keep pace with the evolving threat landscape.

IBM notes that AI is also able to analyze relationships between threats in mere seconds or minutes, thus reducing the amount of time it takes to find threats. This is essential to reducing the detection and response times of cybersecurity breaches, which can significantly reduce costs to organizations as well.

According to IBM, the global average total cost of a data breach was USD 4.35 million in 2022, and organizations took an average of 277 days to identify and contain a breach. If that number is brought down to 200 days or less with the help of AI, organizations can save an average of USD 1.12 million.
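
Put as simple arithmetic, using only the IBM figures quoted above:

```python
# Back-of-the-envelope view of the IBM figures cited in this article.
avg_breach_cost = 4.35e6      # global average total cost of a breach (2022, USD)
avg_lifecycle_days = 277      # average days to identify and contain a breach
savings_at_200_days = 1.12e6  # average savings when contained in <= 200 days

days_saved = avg_lifecycle_days - 200
print(f"Shortening the breach lifecycle by {days_saved} days "
      f"saves about {savings_at_200_days / avg_breach_cost:.0%} of the average cost "
      f"(${savings_at_200_days:,.0f} of ${avg_breach_cost:,.0f}).")
```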

 

3. Enhanced Threat Analysis and Prioritization

Tech giants like Google, IBM, and Microsoft are investing heavily in AI systems to identify, analyze, and prioritize threats. In fact, Microsoft’s Cyber Signals program leverages AI to analyze 24 trillion security signals and track 40 nation-state groups and 140 hacker groups to detect software vulnerabilities and malicious activities.

Given the vast amounts of data that must be analyzed, it’s not surprising that 51% of IT security and SOC decision-makers said they were overwhelmed by the volume of alerts (Trend Micro), while 55% cited a lack of confidence in prioritizing and responding to them. Moreover, 27% of surveyed respondents said they spent up to 27% of their time managing false positives.

Worryingly, Critical Start found that nearly half of SOC professionals turn off high-volume alerts when there are too many to process.

One answer to the question of how to use AI in cybersecurity is to apply it to analyzing vast amounts of security signals and data points so threats can be detected and prioritized quickly and effectively. With the assistance of AI, security teams can respond promptly to threats even as the frequency of cyberattacks increases.
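
One common pattern is to score each alert on severity, asset criticality, and detector confidence so the riskiest items surface first. The sketch below uses assumed field names and a hypothetical weighting scheme, not any particular vendor’s method.

```python
# Illustrative alert-prioritization sketch with hypothetical attributes.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int           # 1 (low) .. 5 (critical), from the detection tool
    asset_criticality: int  # 1 .. 5, how important the affected system is
    confidence: float       # 0..1, detector's confidence it is a true positive

def priority(alert: Alert) -> float:
    # Combine signals so a medium-severity hit on a crown-jewel asset can
    # outrank a high-severity hit on a sandbox machine.
    return alert.severity * alert.asset_criticality * alert.confidence

alerts = [
    Alert("EDR", severity=5, asset_criticality=1, confidence=0.4),
    Alert("IDS", severity=3, asset_criticality=5, confidence=0.9),
]
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{a.source}: priority {priority(a):.1f}")
```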

 

4. Threat Mitigation

The complexity of analyzing every component of an organization’s IT inventory is well-understood. With the help of AI tools, the complexity can be managed. AI can identify points within a network that may be more susceptible to breaches and even predict the type of attacks that may occur.

In fact, some researchers have proposed cognitive learning-based AI models that can monitor security access points for authorized logins. This model can detect remote hacks early, alert the relevant users, and create additional security layers to prevent a breach.
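
A stripped-down version of that idea might look like the sketch below: build a per-user baseline of login patterns and flag deviations for step-up verification. The features and rules are illustrative assumptions; real systems use far richer signals (device fingerprints, geo-velocity, behavioral biometrics).

```python
# Hedged sketch: flag logins that deviate from a user's observed baseline.
from collections import defaultdict

login_history = defaultdict(set)  # user -> set of (country, time-bucket) seen before

def check_login(user: str, country: str, hour: int) -> str:
    bucket = (country, hour // 6)  # coarse time-of-day bucket
    if login_history[user] and bucket not in login_history[user]:
        # Unseen location/time combination: alert and require extra verification.
        return "suspicious: step-up authentication required"
    login_history[user].add(bucket)
    return "ok"

print(check_login("alice", "DE", 9))   # first login seeds the baseline -> ok
print(check_login("alice", "DE", 10))  # same pattern -> ok
print(check_login("alice", "RU", 3))   # new country and odd hour -> suspicious
```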

Of course, this would also require training AI/ML algorithms to recognize attacks carried out by other such algorithms as cybersecurity and risks evolve in lockstep. For example, hackers have been found to use ML to analyze enterprise networks for weak points. This information is used to target possible entry points for phishing, spyware, and DDoS attacks.

 

5. Task Automation

When talking of AI applications in cybersecurity, task automation is one of the most widely adopted. For repetitive tasks especially, such as analyzing a high volume of low-risk alerts and taking immediate measures, AI tools can free up human analysts for higher-value work. This is especially valuable to companies that are short on qualified cybersecurity talent.

Beyond that, intelligent automation is also useful for gathering research on security incidents, assessing data from multiple systems, and consolidating it into a report for analysts. Shifting this routine task to an AI helper will save plenty of time.
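
In practice, such automation can be as simple as the sketch below: auto-close alerts under an assumed risk threshold and consolidate the rest into a single report for analysts. Field names and the threshold are hypothetical.

```python
# Illustrative triage-automation sketch with hypothetical alert data.
alerts = [
    {"id": 1, "risk": 0.1, "type": "failed_login", "host": "dev-01"},
    {"id": 2, "risk": 0.9, "type": "malware", "host": "fin-02"},
    {"id": 3, "risk": 0.2, "type": "port_scan", "host": "dmz-03"},
]

LOW_RISK = 0.3  # assumed cut-off below which alerts are auto-closed
auto_closed = [a for a in alerts if a["risk"] < LOW_RISK]
escalated = [a for a in alerts if a["risk"] >= LOW_RISK]

# Consolidate escalated alerts into one report, riskiest first.
report = "\n".join(
    f"[{a['risk']:.0%}] {a['type']} on {a['host']} (alert #{a['id']})"
    for a in sorted(escalated, key=lambda a: a["risk"], reverse=True)
)
print(f"Auto-closed {len(auto_closed)} low-risk alerts.\nFor analyst review:\n{report}")
```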

 

How Threat Actors Are Using AI

While AI is proving to be a valuable tool in the cybersecurity arsenal, it is also becoming a mainstay for threat actors who are leveraging it for their malicious activities. AI’s high processing capabilities enable them to hack systems faster and more effectively than humans.

In fact, generative AI models such as ChatGPT and DALL-E have made it easier for cybercriminals to develop malicious exploits and launch sophisticated cyberattacks at scale. Threat actors can use NLP AI models to generate human-like text and speech for social engineering attacks such as phishing. The use of NLP and ML enhances the effectiveness of these phishing attempts, creating more convincing emails and messages that trick people into revealing sensitive information.

AI enables cybercriminals to automate attacks, target a broader range of victims, and create more convincing and sophisticated threats. For now, there is no efficient way to distinguish between AI- or human-generated social engineering attacks.

Apart from social engineering attacks, AI-powered cyberthreats come in various forms, including:

  • Advanced persistent threats (APTs) that use AI to evade detection and target specific organizations;
  • Deepfake attacks which leverage AI-generated synthetic media to impersonate real people and carry out fraud; and
  • AI-powered malware which adapts its behavior to avoid detection and adjust to changing environments.

The rapid development of AI technology allows hackers to launch sophisticated and targeted attacks that exploit vulnerabilities in systems and networks. Defending against AI-powered threats requires a comprehensive and proactive approach that combines AI-based defense mechanisms with human expertise and control.

 

AI and Cybersecurity: The Way Forward

The integration of AI into cybersecurity is transforming the way organizations detect, prevent, and respond to cyber threats. By harnessing the power of AI, organizations can bolster their cybersecurity defenses, reduce human error, and mitigate risks.

Having said that, the immense potential of AI also increases the risk of cyber threats, which demands vigilant defense mechanisms. After all, humans remain a significant contributing factor to cybersecurity breaches, accounting for over 80% of incidents. This emphasizes the need to also address the human element through effective training and awareness programs.

Ultimately, a holistic approach that combines human expertise with AI technologies is vital in building a resilient defense against the ever-evolving landscape of cyber threats.

 

FAQ: AI in Cybersecurity

How is AI used in cybersecurity?

In cybersecurity, AI removes the need for human experts to do tedious, time-consuming tasks. AI can read an immense amount of data and identify potential threats while reducing false positives by filtering non-threatening activities. This helps human security experts to focus on vital tasks instead.

How will AI improve cybersecurity?

AI technologies can spot potential weak spots in a network, flag breach risks before they occur, and even automatically trigger measures to prevent and mitigate cyberattacks from ransomware to phishing and malware.

What are the risks of AI in cybersecurity?

AI-enabled cybersecurity tools are reliant on the data sets they are trained on. This means bias may unintentionally skew the model, resulting in mistaken analysis and inefficient decisions that could lead to terrible consequences.

What are pros and cons of AI in cybersecurity?

Some benefits of AI-based security tools include quicker response times, better threat detection, and increased efficiency. On the other hand, there are ethical concerns with AI such as privacy, algorithmic bias, and talent displacement.

Emerging Cybersecurity Trends for 2024

Organizations face an unprecedented array of cyber threats that constantly evolve in complexity and sophistication. It is imperative for security and IT leaders to stay ahead of the curve by exploring emerging cybersecurity trends that will help safeguard their organization’s valuable assets and maintain a robust security posture.

From the convergence of networking and security to threat intelligence and the Cybercrime Atlas, we explore the transformative trends shaping the future of cybersecurity.

 

1. Convergence of Network and Security

 

Before the rise of hybrid clouds and networks – an estimated 76% of organizations use more than one cloud provider – businesses would build their security layer on top of their networks. However, the architectural complexity of this approach led to poor user experience, increased cybersecurity risk, and presented many challenges in maintenance and troubleshooting.

As the threat landscape evolves alongside technological advancements, organizations need a modern approach to security and networking which offers end-to-end visibility to allow quicker identification and reaction to potential threats.

One way to do this is by converging networking and security. The three main aspects of this are:

  1. Adopting a distributed firewall: Also dubbed a hybrid mesh firewall by Gartner, a distributed firewall lets organizations secure their entire network infrastructure – every location, device, content type, and application – by implementing a network-wide security policy such as Zero Trust (a minimal policy-evaluation sketch follows this list).
  2. Consolidating vendors: Instead of selecting vendors based on the “best of breed”, companies should consolidate technology vendors to just a few that can work together in the ecosystem. Solutions that are designed to work together will lead to a well-integrated security network allowing security teams to optimize their strategies.
  3. Implementing an OT-aware strategy: Organizations must create a layer of defense around the OT components connected to their network, using capabilities like Network Access Control, data segmentation, and micro-segmentation to strengthen the security of OT devices and move toward a zero trust model.
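
As a toy illustration of the Zero Trust principle underlying these steps, the sketch below evaluates each access request on its own merits rather than trusting anything inside the perimeter. The attributes and rules are assumptions for illustration, not a specific product’s policy engine.

```python
# Minimal Zero Trust access-decision sketch with hypothetical attributes.
def allow_access(user_verified: bool, device_compliant: bool,
                 resource_sensitivity: str, mfa_passed: bool) -> bool:
    if not (user_verified and device_compliant):
        return False              # never trust, always verify
    if resource_sensitivity == "high":
        return mfa_passed         # sensitive assets demand step-up authentication
    return True

print(allow_access(True, True, "high", mfa_passed=False))  # False: MFA required
print(allow_access(True, True, "low", mfa_passed=False))   # True
```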

Evolving approaches and perspectives to network and security are imperative to meet changing organizational demands, the fluctuating threat landscape, and emerging technological advancements.

 

2. Threat Intelligence

 

Also known as cyberthreat intelligence or CTI, threat intelligence is data regarding cybersecurity threats that is collected, processed, and analyzed to understand potential targets, attack behaviors, and motives. Threat intelligence enables security teams to be more proactive and data-driven in their prevention of cyberattacks. It also helps with more efficient detection and response to attacks that may occur. All this results in reduced cybersecurity risks, prevention of data breaches, and reduced costs.

IBM notes that cyber intel reveals trends, patterns, and relationships that will give an in-depth understanding of actual or potential threats that are organization-specific, detailed, contextual, and actionable. Threat intelligence is becoming an indispensable tool in the modern cybersecurity arsenal.

According to Gartner, the six steps to the threat intelligence lifecycle are:

  1. Planning: Analysts and stakeholders within the organization come together to set intelligence requirements that typically include questions stakeholders need answers to such as whether new strains of ransomware are likely to affect their organization.
  2. Threat data collection: Based on the requirements defined in the planning stages, security teams collect any raw threat data they can. For example, research on new malware strains, the actors behind those attacks, and the types of organizations that were hit, as well as attack vectors. The information comes from threat intelligence feeds, information-sharing communities, and internal security logs.
  3. Processing: The team then processes the data on hand in preparation for analysis. This includes filtering out false positives and applying a threat intelligence framework. Threat intelligence tools that utilize AI and machine learning can automate this stage of the lifecycle to detect trends and patterns (see the sketch after this list).
  4. Analysis: The raw data is analyzed by experts who test and verify the identified trends, patterns, and insights to answer the questions raised and make actionable recommendations tailored to the organization’s security requirements.
  5. Dissemination: The insights gained are shared with the relevant stakeholders, which can lead to action being taken based on those recommendations.
  6. Feedback: Both stakeholders and analysts look back on the latest threat intelligence lifecycle to identify any gaps or new questions that may arise to shape the next round of the process.
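
As a toy illustration of the processing stage, the sketch below normalizes and deduplicates raw indicators of compromise (IOCs) from two hypothetical feeds before they reach analysts. Feed contents and field names are invented for the example.

```python
# Illustrative IOC-processing sketch: normalize, then deduplicate.
raw_feeds = [
    [{"ioc": "1.2.3.4", "type": "ip"}, {"ioc": "EVIL.example.COM", "type": "domain"}],
    [{"ioc": "1.2.3.4 ", "type": "ip"}, {"ioc": "evil.example.com", "type": "domain"}],
]

seen, processed = set(), []
for feed in raw_feeds:
    for entry in feed:
        key = (entry["type"], entry["ioc"].strip().lower())  # normalize
        if key not in seen:                                  # deduplicate
            seen.add(key)
            processed.append({"type": key[0], "ioc": key[1]})

print(processed)  # two unique indicators instead of four raw entries
```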
 

3. Employee Trust

 

Though zero trust is growing as a cybersecurity principle – and it has proven to be effective in protecting organizational assets – the overapplication of this approach on employees could lead to negative effects at the workplace.

Cerby’s State of Employee Trust report found that 60% of employees reported that when an application is blocked, it negatively affects how they feel about the organization. The erosion of employee trust and reduced job satisfaction is a result of overreliance on controls that block, ban, and deny employees from using specific applications. In fact, 39% of employees are willing to take a 20% pay cut if they could have freedom to choose their own work applications.

Though the zero trust approach lowers the cost of data breaches by 43% (IBM), the same approach cannot be applied to employees. The Cerby study found that higher employee trust led to higher levels of workplace happiness, productivity, and contribution.

Experts recommend that organizations adopt an enrolment-based approach to security that balances cybersecurity and compliance requirements with trust-forward initiatives. This will help organizations build digital trust with their employees by giving them more control over their tools while maintaining security and reliability.

Other trust-based initiatives that can build employee trust include:

  • Ongoing training and support to keep employees updated on the latest tools and technologies.
  • Incorporating employee feedback into the decision-making processes.
  • Constantly communicating with employees on their workflows and security needs.
 

4. Cybercrime Atlas

 

The Cybercrime Atlas is an initiative announced by the World Economic Forum (WEF) in June 2022 to create a database mapping cybercriminal activities. Law enforcement bodies across the globe can then use this database to disrupt the cybercrime ecosystem. The first iteration of the Cybercrime Atlas officially launched in 2023. The concept was ideated by WEF’s Partnerships against Cybercrime group, which is made up of over 40 public and private organizations. The Cybercrime Atlas itself is built by WEF in collaboration with Banco Santander, Fortinet, Microsoft, and PayPal.

Though the Cybercrime Atlas won’t be available for commercial use, its use by law enforcement agencies will create ripples in the cybersecurity landscape. Analysts from around the world were gathered to come up with a taxonomy for the Atlas. From there, 13 major known threat actors became the initial focus. Analysts used open-source intelligence to collect information about these threat actors, from their personal details to the types of malicious services they used. The information collected was investigated and verified by humans. The data will eventually be shared with global law enforcement groups such as Interpol and the FBI for action.

The goal of the Cybercrime Atlas is to create an all-encompassing view of the cybercrime landscape including criminal operations, shared infrastructure, and networks. The predicted result of this is that the security industry will be better able to disrupt cybercrime. By February 2023, the project moved from its prototype phase to a minimum viable product. Essentially, there are now dedicated project managers and contributors working to build the database and work out the relevant processes.

It was also noted that the information being used to build the database is open-source, meaning there is no issue with country-specific regulations on data. Once the open-source repository is created, there will not be security or proprietary constraints in sharing the data with local law enforcement agencies.

Though commercial organizations will not be directly using the Cybercrime Atlas, they will still indirectly benefit from it. As the project develops and matures, law enforcement agencies will be better equipped to investigate cybercrimes and catch threat actors.

Dr Rebecca Wynn: “We Didn’t Think of AI Privacy By Design”

In the era of the AI revolution, data privacy and protection are of utmost importance.

As the technology evolves rapidly, risks associated with personal data are increasing. Leaders must respond and adapt to these changes in order to push their businesses forward without sacrificing privacy.

We speak to award-winning cybersecurity expert Dr Rebecca Wynn about the data privacy risks associated with AI and how leaders can mindfully navigate this issue.

 
Dr Rebecca Wynn is an award-winning Global CISO and Cybersecurity Strategist, host and founder of Soulful CXO Podcast. She is an expert in data privacy and risk management and has worked with big names such as Sears, Best Buy, Hilton, and Wells Fargo.
 

From a business perspective, how has AI changed how companies approach data protection and privacy?

Honestly, right now many companies are scrambling. They hope and pray that it’s going to be okay, and that’s not a good strategy. One of the things we see with leading technologies is that the technologies come up first. Then governance, risk, and compliance people play catch up. Hopefully, in the future, this will change, and we will be on the same journey as the product. But right now, that’s not what I see.

Once the horses are out of the barn, it’s hard to get them back. Now we’re trying to figure out some frameworks for responsible AI. But one thing people need to be careful about is their data lakes. Is misstated data going into the data lake?

From a corporate perspective, are you monitoring what people are putting into the data lake? Even among your own individuals, are they putting your intellectual property out there? What about company-sensitive information? Who owns that property? Those are the things that are very dangerous.

For security and compliance, you really need to be managing your traffic, and you do that through education on the proper use of data.

Can you speak on the role of laws and regulations in ensuring data privacy in the AI era?

There are two types of people. There are ones who prefer to go ahead and ask what the guidelines are and what’s the expected norm for businesses and society as a whole – they stay within those guidelines. Then there are companies that ask about enterprise risk management. What is the cost if we go outside those lines? We see this in privacy. They ask questions like “What are the fines? How long might it take to pay those fines? Will that go down to pennies on the dollar? How much can I make in the meantime?”

Laws give you teeth to do things after the fact. Conceptually, we have laws like the GDPR, and the European Union is trying to establish AI rules. There’s the National Institute of Standards and Technology AI framework in the US, and PIPEDA in Canada.

The GDPR and the upcoming AI Act are obviously important to companies based in the EU. How aggressive should we expect a regulatory response to generative AI solutions might be?

I think it’s going to take a while because when GDPR initially came into place, they went against Microsoft, Google, and Facebook. But it took a long time to say what exactly these companies did wrong and who would take ownership of going after them.

It will take years unless we have a global consortium on AI with some of these bigger companies that have buy-in and are going to help us control it. But to do that, big companies must be a part of it and see it as important.

And what are the chances that these big companies are going to start cooperating to create the sort of boundaries that are needed?

If we can have a sort of think tank, that would be very helpful. AI has very good uses but unfortunately, also very negative consequences. I’m not just talking about movies like Minority Report, but I also think about when wrong data gets out. Like in Australia, we see potentially the first defamation law case against ChatGPT.

Even on a personal level, information on you is out there. Let’s say for example you are accused of a crime, which is not true. That gets into ChatGPT or something similar. How many times can that potentially come up? I asked ChatGPT to write me a bio and it says I worked for Girl Scouts of America, which I never did.

That’s the type of thing I’m talking about. How do you get that out of the data pool? What are the acceptable uses for privacy data? How do you opt-out? These are the dangers right now. But it has to be considered from a global perspective, not only by region. We talked about legal ramifications and cross-border data privacy. How do you stop somebody in the US from being able to go ahead and use data from the EU a bit differently? What about information that crosses borders via AI? It hasn’t been discussed because no one even thought of it just a year ago.

What are appropriate measures for organizations to take with shadow IT uses of GPT tools?

We need to train more on the negative effects of such tools. I don’t think people are trying to do it from a negative perspective, but they don’t think about the negative impact. If I’m using one of these tools to help me generate code, am I looking at open-source code? Is it someone else’s code that someone put in there? Is this going to cause intellectual property issues?

When you talk about shadow IT, you are looking at what is potentially leaving a network and what’s coming in. So, it usually sits above data loss prevention tools. But how do you do it without being too ‘big brother-ish’?

All this comes from enterprise risk management. You need to have conversations with your legal and compliance teams. Most people just want to get their job done and they don’t think about the negative repercussions to the company’s reputation. You have to have those conversations.

Talk to your staff about what tools they’re using in a no-fear, psychologically safe way. Ask them why they’re using those tools and the advantages it gives them. From there, you can narrow down the top two or three tools that are best for the company. This lowers your risk.

It’s about risk mitigation and managing that in a mindful way because you can’t have zero risk. You can’t block everyone from doing everything.

How can current data protection legislation support businesses and individuals with data that has been used to train large language models?

It’s chasing things after the fact. We’ll find out that there are a lot of language models trained on data that were not used in the manner we agreed to in our contracts. I think there are going to be some legal ramifications down the pipeline. We’ll find out that the data used in these models are not what I would call sanitized. I’ve seen it again and again; intellectual properties are already in the pool and the data was not structured or tagged so we can’t pull it back out.

In that case, how can we work with protected data and at the same time, with large language models?

That’s tough. I’ll give you an example. Let’s say there’s an email with a cryptographic key embedded into it. What you can do is hold the other key and spiral it off. I like that from a company and individual perspective. Because if someone shared some intellectual property of mine with another person, maybe an article I wrote or some code, I could then look at the spiral and see who sold or resold that data. From there, I could expire it. From a legal perspective, I would have a trail.

What if we could do that with every piece of information you create? If the data is tagged immediately, you could see what it was created for and expire it for other uses. It won’t end up in anyone’s database.

I think we can get there. But right now, I don’t see how we can get the horses back into the barn effectively when the data is not individually tagged.
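
Dr Wynn’s tagging idea can be pictured as a small data-governance check. The sketch below is a purely hypothetical illustration, not her implementation: each piece of data carries an owner, a declared purpose, and an expiry, and every use is checked against the tag.

```python
# Hypothetical sketch of purpose-and-expiry tagging for data governance.
from datetime import datetime, timedelta, timezone

def tag(data: str, owner: str, purpose: str, ttl_days: int) -> dict:
    # Stamp the data at creation with ownership, purpose, and expiry.
    return {
        "data": data,
        "owner": owner,
        "purpose": purpose,
        "expires": datetime.now(timezone.utc) + timedelta(days=ttl_days),
    }

def may_use(record: dict, purpose: str) -> bool:
    # Use is allowed only for the declared purpose and before expiry.
    return record["purpose"] == purpose and datetime.now(timezone.utc) < record["expires"]

article = tag("draft article text", owner="rwynn", purpose="publication", ttl_days=365)
print(may_use(article, "publication"))     # True
print(may_use(article, "model_training"))  # False: not the declared purpose
```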

Should we forbid access to ChatGPT and other AI apps to all users?

You could use a variation of that. Consider why you’re using an AI tool. Ask your teams why they’re using it and then think about how you might mitigate risk. If it’s about rephrasing certain text to be more effective for clients, then allow it. That’s a positive internal use. Maybe it’s about marketing and rephrasing things for various social media platforms.

But what if they just want to play around and learn more about it? Then maybe you need to have a sandbox where they can do that, and you don’t have to worry about data leaving your network.

How can we ensure that our AI systems are trained on unbiased and representative data to prevent unfair decision-making?

To be honest with you, today I don’t think we can. Part of it is because data is in a data lake. For lack of better phrasing, garbage data in, garbage data out. If you look at the search engines out there, they’re all built on databases, and those databases are just not clean. They weren’t built with clean data at the start. We didn’t think about structure back then.

What you could do is have something like a generated bio of yourself, see what’s wrong with it, and give a thumbs up or down to say if it’s accurate or not to clean up the data. But can anyone clean up that data or is it only you? If there’s inaccuracy in my data, how can I flag that? It seems like anyone can do it.

So the question is, is there a method to go back and clean up the data? Here’s where I wish we had the option to opt-in instead of opt-out.

When it comes to data security, what are the top three things companies should keep in mind?

The first is to not instill fear or uncertainty in your own people. From a security and privacy governance perspective, we tend to assume that everyone intends to do the wrong thing. Instead, I think you need to assume that your workforce is trying to do the right thing to move the company forward. The real training is about the organization’s acceptable use policy. What is the expectation and why is it important? If things go awry, what are the consequences to the individuals, company reputation, and revenue?

Next, how do we monitor that? Do we have an internal risk assessment against our AI risk? If you’ve not looked at your business liability insurance recently and you have a renewal coming up, take a look. An AI risk rider is coming to most if not all policies, requiring that you, as a company, are using AI responsibly, doing risk assessments, and managing it with firewalls, data loss prevention strategies, and things like that.

Holistically, it’s enterprise risk management. I think you need to be transparent and explain to individual users in those terms, but we haven’t always been doing that. You need to figure out how you can make everyone understand that they are an important puzzle piece of running the business. It’s a mind shift that we need.

Should we create AI gurus in our companies to communicate the risk and stay up to date on the technology and its usage?

We’re on the cusp of that. A lot of us have been talking behind the scenes about whether there is now a new role of AI CISO.

The role of the Chief Information Security Officer will be focused solely on AI and how that’s being used internally and externally.

You may have the Chief Security Officer who is operational-focused only. I end up being more externally facing with strategy than I am with daily operations. We’re seeing various CISO roles to handle that for the company. Digital Risk Officers, that’s a legal perspective. I think we’re seeing a rise of AI Cybersecurity Officers or similar titles.

Should we worry about privacy threats from AI in structured data as well?

I think you should always worry about privacy. When we look at the frameworks, ENISA has a framework and the EU has its AI rules. There are things coming out of Australia and Canada as well; that’s what we’re trying to gravitate towards.

But, as individuals, how much can we really keep track of our data and where it is anymore? If you haven’t looked at the privacy policies on some websites, I say as an individual you need to opt-out. If you’re not using those apps on your phone anymore, uninstall them, kill your account, and get out of them. Those policies and how they’re using that information are only getting longer and longer.

As a company, do you have a policy in place about how your data is being used between your companies? Are you allowing them to be put into AI models? What are the policies and procedures for your own employees when it comes to third parties that you do business with?

That’s why I said there needs to be a lot of training. From an enterprise risk management standpoint, you cannot manage risk if your risk is not defined.

What is the social responsibility of companies in an AI-driven world?

I wish it would be a lot more, to be honest with you. Elon Musk and people like that are being a little more forward-thinking about what we want to do and where we want to be in 2050. All technologies are good technology. When AI initially came out in the 1950s and machine learning in the 1960s, it was a new shiny toy, but people were scared too.

I think from a social perspective, anytime we have something that allows us to be digitally transformed in a way that allows us to communicate and see correlations quicker, and see what people are facing around the world, that’s good.

But then, it can also bring in fake news. We say trust and verify, but what do you do when you’re using AI tools that have all the wrong information? It’s scary. That’s when we must use critical thinking. Does this story make sense? Does it seem reasonable? We’re starting to see right now how AI can be used for good and evil. In terms of cybersecurity, fake emails and the like are used in targeted phishing attacks.

For companies, do you have back channels to verify things? I once received a text message from a CEO that sounded like him but did not ring true to me. When I asked him through back channels, he said it was not him. Your gut is right most of the time. Trust your gut, but also give people other avenues to verify that data.

Do you think there’s too much focus on the content of a framework to identify and include all risks as opposed to focusing on the processes to get to the right answers?

I agree. I’m always about the so what. Know it, document it, implement, manage, and measure. But then what? If I have a framework solely as a framework, that’s great but it’s about what you put in.

I think the problem is that you start from the top-down. We end up having to get people on the same page and saying we need a framework. And then it gets down to the meat and that’s what you’re talking about. Why do we need this? How do we put it into play? How can we test it and measure it?

Unfortunately, policies and procedures start from the top down, but for boots on the ground, the thought starts with implementation. That’s where I think training comes into play. This is where people like me talk to you about the day-to-day.

*Answers have been edited for clarity and length.

Why Resenteeism Can Be More Harmful Than Quiet Quitting

Workplace engagement saw several waves of change in recent years from The Great Resignation to The Great Reshuffle, and a rise in presenteeism and quiet quitting. Now, a new buzzword has cropped up: Resenteeism.  

Following from quiet quitting, resenteeism is an active response to frustrations at the workplace, dropping all facades of being satisfied or even apathetic at work. As the labor shortage continues to plague CxOs across industries, the percentage of workers who are actively engaged at work has gradually declined, from 36% in 2020 to 32% in 2022. Given concerns of a looming recession, a rise in disgruntled employees poses a threat to overall productivity and business success. 

We spoke to Yvonne Alozie Obi, Director of Global Diversity and Inclusion at Standard Chartered Bank, and Marjolijn de Boer, organizational psychologist and founder of The Human Factor, about the root causes of resenteeism, how leaders can measure its impact, and the best strategies for creating a work culture that supports employee well-being. 

 
Yvonne Alozie Obi is a Certified Diversity & Inclusion Specialist and current Director of Global Diversity and Inclusion, Standard Chartered Bank where she plays a pivotal role in delivering the Global D&I strategy. Yvonne has received several awards and recognition for her contributions to business and society, including being named one of the Top 50 Women Leaders in Africa by Forbes Magazine in 2020.
Marjolijn de Boer is an organizational psychologist with over 15 years of professional experience in coaching, consultancy, and training. She is the founder of The Human Factor, an organization that partners with businesses and individuals to help them become high performing.
 

What is resenteeism? 

Marjolijn: With resenteeism, people may still be productive at work but do not feel valued or appreciated. This can happen within industries where many employees were laid off – those who end up staying for fear of not finding another job will pick up the slack and become overworked and resentful. This is very harmful to organizations. These employees usually talk about their dissatisfaction to other colleagues which creates an environment that is far from positive. 

Yvonne: Sometimes there are external factors as well, such as global economic situations that are resulting in layoffs. This can make employees feel resentful as well. Further, in the diversity and inclusion space, we see a lot of injustice happening and employees may feel resentful towards their companies for not responding in the way they want. This can fester over time and lead to resenteeism as well.  

M: Things are changing very rapidly but this has been happening for decades already. What you see in the workplace, in the worst cases, is that managers have very few conversations with their employees and teams. If you do not have a dialogue with your team as a leader, it can get out of hand very quickly because this can spread through the organization. Before you know it, there is a negative atmosphere. Management may think employees are being very productive, but in reality, they are not happy at all. 

How have your organizations dealt with managers that are not taking concrete actions to address resenteeism effectively? 

Y: It’s all part of the change management process, especially when a reorganization is happening, and people are disgruntled. It’s about planning to make sure that potential questions are answered and that organizations are as transparent as possible. Usually, town halls are held by the leadership.  

We need to equip leaders and managers to have challenging conversations and learn how to keep spirits alive.  

“Leaders must know how to get loyalty and engagement from employees who are not yet resentful – because it can catch on like wildfire.”

M: Sometimes, people may become resentful when you don’t utilize their potential to the fullest. This can lead to a drop in productivity when employees do tasks that are not fulfilling. Conversely, you see productivity climb when you utilize the team’s potential. I agree with Yvonne that leaders must have open and deep conversations. This is a teachable skill. 

Who has these tough conversations with resentful employees and what is usually discussed? 

Y: In Standard Chartered, we have a group of coaches. It’s important to look inward first, so we make sure these employees do not project their own triggers onto the organization. Next, we also help the employees understand what is truly under their control. It could be they don’t have a healthy work-life balance, not because of the job demands or lack of resources, but because they may not personally be equipped to effectively balance these two aspects. It’s always important during coaching to help employees think through the patterns they may need to break out of.  

M: Speaking of the environment, no company is perfect. Of course, there are narcissistic leaders. In this case, a different kind of approach is needed. 

How can leaders spot resenteeism early? 

M: If you want to preventatively know what your team is feeling, you need to be vulnerable. Do check-ins with your team and be honest about your own feelings and challenges. 

Y: I think it’s important for organizations to improve and embrace not just the typical biannual appraisal ratings but continuous performance feedback. With regular conversations, managers can begin to spot where things start to become issues. Some people can be good at masking, but it’s a skill for leaders to go in and see what’s going on and be vulnerable as well. If leaders start role-modeling, they will have a team that feels comfortable sharing when they are not ok.  

 M: Because it’s behind closed doors, I think intuition in leaders goes a long way. When you’re in a meeting and something feels different, that’s your first signal. If you see people are less happy at work, that’s another signal. Use your intuition to look at non-verbal behavior while also starting conversations with your team. 

What kind of leadership style fits best to solve these issues? 

M: In my experience, a lot of leadership styles can be extremely effective. In fact, we always think leaders have to be extroverted. I think introverted leaders are extremely good for keeping the peace.  

“Leaders must be able to balance a people-focused approach to leadership while setting clear boundaries.”

Y: I agree. I don’t think there’s a one-size-fits-all leadership style that can manage resentful employees. I think self-awareness in a leader is key. A self-aware leader knows when they need to or can help, and when they need to ask for help. They can also draw boundaries. One thing we try from a D&I angle is to equip our leaders with inclusive leadership skills. Empathy is also important. It takes a lot of intentional empathy from a leader to manage a resentful employee. 

What is the importance of coaching and well-being programs within an organization? 

M: It’s extremely important because, firstly, people have a misconception about stress. We have stress but we also have a system to reload. If we don’t take enough time to reload, that’s a problem. Preventively talking about this with your leaders and employees – about taking rest times – is good. For example, writing emails in the evening or not taking lunch is a bad idea because there’s no time to recuperate. That’s an important message.  

Y: We have questions related to well-being in our annual surveys and we see more colleagues reporting that they do feel more included and are getting more manager support. We see the scores for psychological safety increasing. What we try to do is solve innovatively to improve business processes. If there are functions that operate inefficiently and keep stress levels high, we see how we can intervene and improve some processes. 

What steps can leaders take to reduce the stigma around mental health and create a culture that encourages employees to seek help when needed? 

M: When I do cultural transformation programs within organizations, I always start with management. That goes a long way, though it takes a lot of time and must take precedence over day-to-day work in the long term. 

Y: There can be huge power imbalances due to underrepresentation or employees being from a marginalized background. This is where psychological safety is key in ensuring optimal well-being among the workforce to be able to have difficult conversations. But it’s a very complex topic. That’s why organizations should introduce other resources such as employee assistance programs and coaching programs – a third party they can talk to when the situation becomes so unbearable that they can’t talk to their managers about it.  

We are a company of 85,000 employees. Building trust is an ongoing conversation. As an organization, we do everything we can to equip our people and leaders to have challenging conversations and get the skills they need to lead effectively. We’re trying to improve the role modeling of senior leaders. Additionally, we try to get people ambassadors who role model some of our valued behaviors. It’s an ongoing cycle. Trust also means different things across different markets. So, we also try to align with our country heads and make sure that the key messaging is repeated. It’s a marathon, not a sprint. 

What is a key takeaway you have about resenteeism and its impact on the business? 

M: The key takeaway is to invest in your leaders. How do you have conversations and build trust? How do you build personal leadership? It’s a fair step because in this case, it really does start with leadership. 

Y: Organizations should also begin to examine how they can be more influential externally, especially in complex situations. In the UK right now, the public appointments board is trying to get seasoned professionals from the private sector to apply for public boards. As a senior leader, this can help you advocate for things that would benefit your workforce. Many employees, especially the newer generation, are value-based. It’s important for organizations to think about advocacy.

*Answers have been edited for clarity and length.

LG’s Director of General Operations: A New Model for Future-Proofing Supply Chains

Supply chain leaders are emerging from a difficult couple of years plagued by various global crises with a renewed determination to build resilience in a volatile landscape. The question on everyone’s mind: What actions can be taken from the recent lessons learned?

During our insights session with supply chain business leaders, LG’s Director of General Operations, Gabriel Mesas Paton, walked us through the secret of the multinational electronics company’s collaborative supply chain strategy which helped them weather and quickly adapt to continuous disruptions.

 
Gabriel Mesas Paton, Director of General Operations, LG Electronics is directly involved in all divisions of the company from consumer electronics to B2B solutions. He also leads the integration of operations capabilities to foster LG leadership. Additionally, Gabriel is a Member of the Executive Board at CEL (Centro Español de Logistica) and speaks at different business schools and industry events.
 

How did the recent economic challenges impact LG’s supply chain?

We have been severely impacted since ours is a very fragile supply chain. The product lifecycle of electronics is very short, meaning that everything moves quickly. As such, the disruptions over the last three years caused our supply chain to suddenly stop. We don’t have a lot of inventory along our supply chain and therefore, no huge buffers that we can use to overcome these issues. These disruptions led to difficulties in delivering products to our clients. Having said that, I’d say we have been able to overcome those challenges.

Allow me to first explain how we operate. We have a global supply chain that moves hundreds of thousands of containers a year. We have factories in many countries and the entire chain is complex. Ours is also an integrated global supply chain, which is not typical in many industries. This brings with it major difficulties but also opportunities and advantages.

LG Electronics has a single forecasting and production capacity system as well as a single system for order and inventory management. Today, we can see inventory levels at warehouses, orders from clients, production capacity, inventory of components along our supply chain, suppliers, production facilities, and the capacity of our transportation partners and distribution centers. All this information is visible simultaneously to all parts of the supply chain. We’ve been able to build this integrated supply chain visibility for over 15 years.

Of course, when one part of the supply chain is affected, that impacts the rest of the chain. However, we do have the advantage of immediate communication and reaction.

 

The Collaborative Planning, Forecasting, and Replenishment (CPFR) model has been around for a while but has recently been criticized as no longer useful. Can you elaborate on the new concept that LG has implemented instead?

CPFR was very popular from the 90s to around 2007. It was a way of collaborating with manufacturers and clients over a period of one to three years. Forecasts and production plans were made for 12-month periods and replenishment orders would be done for the three months following that. A review of the market would take around a month. It was a long commitment.

In today’s market, CPFR is no longer valid. You cannot make commitments for two years. Our products change every year. In fact, we launch new televisions every year and upgrade the features every six months. How can we make a two-year commitment for a product with a lifecycle of less than six months? We understood that we needed a framework that allowed for immediate response.

We came up with what we call FCR – forecast, check, and react. It’s a new process to produce the best forecast, continually check in with our clients and partners, and react immediately both locally and globally.

FCR is about setting up a joint process with our clients to do forecasts that we then use to come up with some rough intentions and commitments for three months. But of course, the level of commitment is low and our clients are not interested in committing for just a few months when the market is extremely volatile. So, we check in with market progress on a weekly basis and react immediately.

This process requires thorough collaboration and work. We are looking at data to create forecasts for the next three months. From there, we narrow down forecasts for each product on a weekly basis. Then we gather information on weekly sales to see how the market is progressing. If sales levels are as expected, we will confirm our orders and forecasts. If not, we will take other measures and adjust our projections.
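
As a rough illustration of that weekly cadence, the sketch below compares actual sell-through against the forecast and scales the remaining weeks when the deviation exceeds a tolerance. All numbers and the tolerance are hypothetical, not LG’s.

```python
# Illustrative FCR (forecast, check, react) loop with hypothetical figures.
forecast = [1000, 1000, 1200, 1500]  # units per week for the next month
actuals = [850]                      # week 1 sell-through reported by clients

TOLERANCE = 0.10                     # assumed acceptable deviation
week = len(actuals) - 1
deviation = (actuals[week] - forecast[week]) / forecast[week]

if abs(deviation) <= TOLERANCE:
    print("Within tolerance: confirm orders and production as planned.")
else:
    # React: scale the remaining weeks toward the observed demand signal.
    adjusted = [round(f * (1 + deviation)) for f in forecast[week + 1:]]
    print(f"Deviation {deviation:+.0%}: adjust remaining weeks to {adjusted}")
```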

It’s like what we used to do years ago. The difference is that we are managing information and visibility on a global scale. We have discussions with clients and partners daily. Decisions are also made daily.

For example, we can have production orders for our factory in Poland on Tuesday for products to be delivered to our clients in Iberia between Friday and Monday. Roughly 50% of our sales are executed directly from the factory to our clients without passing through a distribution center. This allows us a quick reaction capacity. FCR allows us to implement quick updates, review changes, and make adjustments in real-time.

 

Do you use product-in-use data for sales predictions?

Unfortunately, our market cannot be statistically forecasted because the effect of promotions and the effect of seasonality is so big. Having said that, it can be analytically forecasted. What we do is work on this joint forecast with our clients, using product data to estimate and project sales for the next few weeks. We consider our clients’ promotional plans and our own marketing investments. We do our forecasts at an aggregated level to procure materials, but not for sales of an individual client.

 

How important is the network of partners for LG Electronics?

Typically, inventory was the protection against uncertainties. Now, we know that inventory is extremely expensive and the risk of holding it is too high, especially with the marketplace and world we live in today.

The way to build buffers nowadays is by building capacity, which requires a network of partners.

For example, 15 years ago we might have held 30,000 TV units in a local warehouse to protect against demand uncertainty and variability. Now, we build capacity across factories, component supplies, transportation, and manufacturing. We are agile in production, meaning we can extend production by a few hours per day, giving us the ability to increase daily output by 20% to 25%. We also have the capacity to stretch transportation from five trucks per day to 200. This means we can cover the uncertainty with the capacity to react. This is only possible with a big network of suppliers and partners.

For example, if we had only one transportation partner, we could not reasonably expect them to have five trucks one day and 200 trucks the next. What would they be doing with their idle capacity? To divide the risk, we have many partners. In Iberia, we have over 25 logistics partners. The same is true at the European level.

So how can we work with them? We need to show them all this information in the FCR model. The forecasts and plans are aggregated at a higher level, then disaggregated at the partner level daily. Sharing this information allows us and our partners to prepare as much as possible.

 

What makes it possible for you to react to customer demand changes within a week?

Our production plan is adjusted daily in the factory. Transportation capacity is confirmed with carriers 24 or 48 hours ahead. Sales orders are confirmed daily with our clients. It’s just about having the right information and commitment.

One of our secrets is losing the concept of fighting and negotiating with clients.

We live in a market in which we need to work together with our clients by building mixed teams. There are some functions that still reside entirely within the client and within LG, but then we have several functions that are performed by a core process team that jointly works on the same information. They share roles, responsibilities, and the same KPIs. They share communication and tools, which allows us to work closely with our clients. The secret is to work as a single team, to know that the only way to win is to do it together.

*Answers have been edited for length and clarity.