Dr Rebecca Wynn: “We Didn’t Think of AI Privacy By Design”

In the era of the AI revolution, data privacy and protection are of utmost importance.

As the technology evolves rapidly, risks associated with personal data are increasing. Leaders must respond and adapt to these changes in order to push their businesses forward without sacrificing privacy.

We speak to award-winning cybersecurity expert Dr Rebecca Wynn about the data privacy risks associated with AI and how leaders can mindfully navigate this issue.

 
Dr Rebecca Wynn is an award-winning Global CISO and Cybersecurity Strategist, and the host and founder of the Soulful CXO Podcast. She is an expert in data privacy and risk management and has worked with big names such as Sears, Best Buy, Hilton, and Wells Fargo.
 

From a business perspective, how has AI changed how companies approach data protection and privacy?

Honestly, right now many companies are scrambling. They hope and pray that it’s going to be okay, and that’s not a good strategy. One of the things we see with leading technologies is that the technologies come up first. Then governance, risk, and compliance people play catch up. Hopefully, in the future, this will change, and we will be on the same journey as the product. But right now, that’s not what I see.

Once the horses are out of the barn, it’s hard to get them back. Now we’re trying to figure out some frameworks for responsible AI. But one thing people need to be careful about is their data lakes. Is misstated data going into the data lake?

From a corporate perspective, are you monitoring what people are putting into the data lake? Even within your own organization, are individuals putting your intellectual property out there? What about company-sensitive information? Who owns that property? Those are the things that are very dangerous.

For security and compliance, you really need to be managing your traffic, and you do that through education on the proper use of data.

Can you speak on the role of laws and regulations in ensuring data privacy in the AI era?

There are two types of people. There are ones who prefer to go ahead and ask what the guidelines are and what’s the expected norm for businesses and society as a whole – they stay within those guidelines. Then there are companies that ask about enterprise risk management. What is the cost if we go outside those lines? We see this in privacy. They ask questions like “What are the fines? How long might it take to pay those fines? Will that go down to pennies on the dollar? How much can I make in the meantime?”

Laws give you teeth to do things after the fact. Conceptually, we have laws like the GDPR, and the European Union is trying to establish AI rules. There’s the National Institute of Standards and Technology AI framework in the US, and PIPEDA in Canada.

The GDPR and the upcoming AI Act are obviously important to companies based in the EU. How aggressive should we expect a regulatory response to generative AI solutions might be?

I think it’s going to take a while because when GDPR initially came into place, they went against Microsoft, Google, and Facebook. But it took a long time to say what exactly these companies did wrong and who would take ownership of going after them.

It will take years unless we have a global consortium on AI with some of these bigger companies that have buy-in and are going to help us control it. But to do that, big companies must be a part of it and see it as important.

And what are the chances that these big companies are going to start cooperating to create the sort of boundaries that are needed?

If we can have a sort of think tank, that would be very helpful. AI has very good uses but, unfortunately, also very negative consequences. I’m not just talking about movies like Minority Report; I also think about when wrong data gets out. In Australia, for instance, we are seeing potentially the first defamation lawsuit against ChatGPT.

Even on a personal level, information on you is out there. Let’s say, for example, you are accused of a crime, which is not true. That gets into ChatGPT or something similar. How many times can that potentially come up? I asked ChatGPT to write me a bio and it said I worked for Girl Scouts of America, which I never did.

That’s the type of thing I’m talking about. How do you get that out of the data pool? What are the acceptable uses for privacy data? How do you opt-out? These are the dangers right now. But it has to be considered from a global perspective, not only by region. We talked about legal ramifications and cross-border data privacy. How do you stop somebody in the US from being able to go ahead and use data from the EU a bit differently? What about information that crosses borders via AI? It hasn’t been discussed because no one even thought of it just a year ago.

What are appropriate measures for organizations to take with shadow IT uses of GPT tools?

We need to train more on the negative effects of such tools. I don’t think people are trying to do it from a negative perspective, but they don’t think about the negative impact. If I’m using one of these tools to help me generate code, am I looking at open-source code? Is it someone else’s code that someone put in there? Is this going to cause intellectual property issues?

When you talk about shadow IT, you are looking at what is potentially leaving a network and what’s coming in. So, it usually sits above data loss prevention tools. But how do you do it without being too ‘big brother-ish’?

All this comes from enterprise risk management. You need to have conversations with your legal and compliance teams. Most people just want to get their job done and they don’t think about the negative repercussions to the company’s reputation. You have to have those conversations.

Talk to your staff about what tools they’re using in a no-fear, psychologically safe way. Ask them why they’re using those tools and what advantages they give them. From there, you can narrow down the top two or three tools that are best for the company. This lowers your risk.

It’s about risk mitigation and managing that in a mindful way because you can’t have zero risk. You can’t block everyone from doing everything.

How can current data protection legislation support businesses and individuals with data that has been used to train large language models?

It’s chasing things after the fact. We’ll find out that there are a lot of language models trained on data that was not used in the manner we agreed to in our contracts. I think there are going to be some legal ramifications down the pipeline. We’ll find out that the data used in these models is not what I would call sanitized. I’ve seen it again and again; intellectual property is already in the pool, and because the data was not structured or tagged, we can’t pull it back out.

In that case, how can we work with protected data and at the same time, with large language models?

That’s tough. I’ll give you an example. Let’s say there’s an email with a cryptographic key embedded in it. What you can do is hold the other key and spiral it off. I like that from a company and individual perspective. Because if someone shared some intellectual property of mine with another person, maybe an article I wrote or some code, I could then look at the spiral and see who sold or resold that data. From there, I could expire it. From a legal perspective, I would have a trail.

What if we could do that with every piece of information you create? If the data is tagged immediately, you could see what it was created for and expire it for other uses. It won’t linger in anyone’s database.

I think we can get there. But right now, I don’t see how we can get the horses back into the barn effectively when the data is not individually tagged.
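As a rough illustration of the per-item tagging Wynn describes, the sketch below shows one way a record might carry owner, allowed-use, and expiry metadata. The field names and expiry logic are hypothetical assumptions for illustration, not taken from any specific product or from Wynn’s own tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import hashlib
import uuid


@dataclass
class TaggedRecord:
    """A piece of content wrapped in provenance metadata (hypothetical schema)."""
    content: str
    owner: str
    allowed_uses: set            # purposes the owner agreed to, e.g. {"internal-review"}
    expires_at: datetime         # timezone-aware expiry timestamp
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def fingerprint(self) -> str:
        # A content hash lets the owner recognize this record if it resurfaces elsewhere.
        return hashlib.sha256(self.content.encode()).hexdigest()

    def usable_for(self, purpose: str) -> bool:
        # Deny use after expiry or outside the purposes the owner agreed to.
        return datetime.now(timezone.utc) < self.expires_at and purpose in self.allowed_uses


article = TaggedRecord(
    content="Draft article text...",
    owner="author@example.com",
    allowed_uses={"internal-review"},
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)

print(article.record_id, article.fingerprint()[:12])
print(article.usable_for("internal-review"))   # True: within expiry, allowed purpose
print(article.usable_for("model-training"))    # False: purpose never granted
```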

Should we forbid access to ChatGPT and other AI apps to all users?

You could use a variation of that. Consider why you’re using an AI tool. Ask your teams why they’re using it and then think about how you might mitigate risk. If it’s about rephrasing certain text to be more effective for clients, then allow it. That’s a positive internal use. Maybe it’s about marketing and rephrasing things for various social media platforms.

But what if they just want to play around and learn more about it? Then maybe you need to have a sandbox where they can do that, and you don’t have to worry about data leaving your network.

How can we ensure that our AI systems are trained on unbiased and representative data to prevent unfair decision-making?

To be honest with you, today I don’t think we can. Part of it is because the data is in a data lake. For lack of better phrasing, garbage data in, garbage data out. If you look at the search engines out there, they’re all built on databases, and those databases are not clean. They weren’t built with clean data at the start. We didn’t think about structure back then.

What you could do is have something like a generated bio of yourself, see what’s wrong with it, and give a thumbs up or down to say if it’s accurate or not to clean up the data. But can anyone clean up that data or is it only you? If there’s inaccuracy in my data, how can I flag that? It seems like anyone can do it.

So the question is, is there a method to go back and clean up the data? Here’s where I wish we had the option to opt-in instead of opt-out.

When it comes to data security, what are the top three things companies should keep in mind?

First is not to instill fear or uncertainty in your own people. From a security and privacy governance perspective, we mainly think that everyone intends to do the wrong thing. Instead, I think you need to assume that your workforce is trying to do the right thing to move the company forward. The real training is about the organization’s acceptable use policy. What is the expectation and why is it important? If things go awry, what are the consequences to individuals, company reputation, and revenue?

Next, how do we monitor that? Do we have an internal risk assessment against our AI risk? If you’ve not looked at your business liability insurance recently and you have a renewal coming up, take a look. An AI risk rider is coming to most if not all policies in the future, requiring that you, as a company, are using AI responsibly, that you are doing risk assessments, and that you are managing the risk with firewalls, data loss prevention strategies, and things like that.

Holistically, it’s enterprise risk management. I think you need to be transparent and explain to individual users in those terms, but we haven’t always been doing that. You need to figure out how you can make everyone understand that they are an important puzzle piece of running the business. It’s a mind shift that we need.

Should we create AI gurus in our companies to communicate the risk and stay up to date on the technology and its usage?

We’re on the cusp of that. A lot of us have been talking behind the scenes about whether there is now a new role of AI CISO.

This would be a Chief Information Security Officer focused solely on AI and how it’s being used internally and externally.

You may have a Chief Security Officer who is operationally focused only. I end up being more externally facing with strategy than with daily operations. We’re seeing various CISO roles emerge to handle that for the company. Digital Risk Officers cover the legal perspective. I think we’re seeing a rise of AI Cybersecurity Officers or similar titles.

Should we worry about privacy threats from AI in structured data as well?

I think you should always worry about privacy. When we look at the frameworks, ENISA has a framework and the EU has its AI rules. There are things coming out of Australia and Canada as well; that’s what we’re trying to gravitate towards.

But, as individuals, how much can we really keep track of our data and where it is anymore? If you haven’t looked at the privacy policies on some websites, I say as an individual you need to opt-out. If you’re not using those apps on your phone anymore, uninstall them, kill your account, and get out of them. Those policies and how they’re using that information are only getting longer and longer.

As a company, do you have a policy in place about how your data is being used between your companies? Are you allowing them to be put into AI models? What are the policies and procedures for your own employees when it comes to third parties that you do business with?

That’s why I said there needs to be a lot of training. From an enterprise risk management standpoint, you cannot manage risk if your risk is not defined.

What is the social responsibility of companies in an AI-driven world?

I wish it would be a lot more, to be honest with you. Elon Musk and people like that are being a little more forward-thinking about where we want to be in 2050. All technologies are good technology. When AI initially came out in the 1950s and machine learning in the 1960s, it was a new shiny toy, but people were scared too.

I think from a social perspective, anytime we have something that allows us to be digitally transformed in a way that allows us to communicate and see correlations quicker, and see what people are facing around the world, that’s good.

But then, it can also bring in fake news. We say trust and verify, but what do you do when you’re using AI tools that have all the wrong information? It’s scary. That’s when we must use critical thinking. Does this story make sense? Does it seem reasonable? We’re starting to see right now how AI can be used for good and evil. In terms of cybersecurity, fake emails and the like are used in targeted phishing attacks.

For companies, do you have back channels to verify things? I once received a text message from a CEO that sounded like him but did not ring true to me. When I asked him through back channels, he said it was not him. Your gut is right most of the time. Trust your gut, but also give people other avenues to verify that data.

Do you think there’s too much focus on the content of a framework to identify and include all risks as opposed to focusing on the processes to get to the right answers?

I agree. I’m always about the so what. Know it, document it, implement, manage, and measure. But then what? If I have a framework solely as a framework, that’s great but it’s about what you put in.

I think the problem is that you start from the top-down. We end up having to get people on the same page and saying we need a framework. And then it gets down to the meat and that’s what you’re talking about. Why do we need this? How do we put it into play? How can we test it and measure it?

Unfortunately, policies and procedures start from the top down, but for boots on the ground, the thought starts with implementation. That’s where I think training comes into play. This is where people like me talk to you about the day-to-day.

*Answers have been edited for clarity and length.

Antonietta Mastroianni, GAIA-X: The Key To The Future of Data Is Partnering

As cloud takes over as the mainframe of business with more and more tools and applications being transferred over, questions of security, reliability and privacy arise. Currently, there are three major cloud service providers, and they are all US-based. For European companies, how does this affect business? How can leaders ensure that their data is safe in the hands of these organizations and that they comply with EU regulations? 

We tackle these questions and more with Antonietta Mastroianni, Chief Digital and IT Officer at Proximus and Vice Chair of Finance at GAIA-X. GAIA-X is a project that is part of the broader strategy under the von der Leyen Commission of European Strategic Autonomy to develop a federation of data infrastructure and service providers in Europe to ensure European digital sovereignty.  

 
Antonietta Mastroianni is the Chief Digital and IT Officer at Proximus and Vice Chair of Finance at GAIA-X. She is an influential IT leader with over two decades of experience within the telco industry and was recently named the Telco Woman of the Year 2022. 
 

Tell us more about your role at GAIA-X and Proximus. What do these roles have in common? 

 

Both roles are linked, and they are both very interesting. At Proximus, I’m the Chief Digital and IT Officer, meaning I am responsible for all aspects of planning, building, and running IT and digital functions. This includes rollout in production, digital strategy, and application. These are particularly important in Proximus as we are expanding our offerings into an ecosystem, moving beyond just telco services and products. Beyond that, I’m also responsible for the data architecture because it’s not only about having all businesses running, but also about innovation and transformation, exploring different ecosystems for B2C and B2B customers. The role of data in digital transformation and acceleration is extremely important as it drives automation and AI.  

At GAIA-X, I am the Vice Chair of Finance. It’s an interesting project because, for the first time, Europe is getting together to set some standards that define not only the infrastructure but also the architecture, labeling, and rules for data. It is to make sure that we not only have the technology but also the governance in place for a fair exchange of data creation and storage in a secure way according to European guidelines. Both roles support each other.  

 

Why do you think US-based hyper-scalers have managed to secure such a big market in data?  

 

I think they’re extremely good at what they do. They had a very focused target and some of the technology has become part of our daily lives. Without the solutions they produce, our lives would be very different. Of course, if you provide a solution that helps people and you’re able to do it at a certain speed of innovation while maintaining a high-quality product, then it is easy to scale. When it becomes a fundamental part of people’s lives, there is power as well.

 

Where does GAIA-X stand in terms of working with these hyper-scalers? 

 

I think partnering is the right approach. For example, if you have a level two or three cloud, we have a strategy for partnering with a hyper-scaler. At the same time, being part of GAIA-X means being an active contributor to setting regulations. You cannot stop the technology and it’s important that the right level of partnership and transparency is established in terms of the “what, where, and when” of data usage.  

“Technology is advancing fast and the key to the future is partnering.”

 

To what extent can Europe become independent from US-based hyper scalers? And does GAIA-X have a strategy for this? 

 

There are well-defined levels of cloud. At levels three and four, of course, the data is stored in Europe and there are certain levels of confidential computing that guarantee you have control of your operation and privacy. At GAIA-X, we have worked out the labeling for level three. These are a set of regulations that should provide the highest security and protection. If everyone follows these rules, technology can evolve safely.  

In terms of sovereign cloud offerings in Europe, I hope we can do that soon. Right now, we’re still behind compared to US providers. We must accept this and accelerate the evolution of European providers. However, it will take a while to reach that same level of performance and durability. 

 

Why are Europe-based cloud providers running behind? And what can they do to catch up to their US counterparts? 

 

In the past, we’ve focused less on this. Also, technology has evolved so fast, especially with the enormous digital acceleration due to COVID. I think we are behind because we did not expect such a rapid rate of adoption of these technologies, and we were unprepared.  

 

Would you suggest that people should change from US providers to European ones, to aid in this growth? 

 

No, I’m very much in favor of partnering. I think there is no point in taking a radical approach. We live in a connected world. Instead, I think it’s a question of following the right rules and choosing the right ways to implement solutions. This includes finding partners and ensuring that everyone’s interests are protected.  

 

Legislation is often adaptive, a reaction to something that has already occurred. But Proximus and GAIA-X have a more proactive approach that emphasizes collaboration. Can you expand on that? 

 

Different members of GAIA-X share solutions and we have been working on various projects that take in these various inputs to create more end-to-end cooperation. The project is still young, so it’ll be a while before we see any huge impact. However, it’s heading in the right direction.  

I emphasize that it’s not a competition.  

“We are working together to create an architecture and a standard that can be implemented to enable everyone to make the best use of technology.”

 

How can good data organization, ownership, and governance benefit society at large? 

 

I’ll give the example of the flooding in Belgium last year. We have been building a solution to create an alert system to prevent people from getting caught in or exposed to such conditions when they occur. This is an example of how data can be used for the good of society.

This technology can also be steered in different directions and GAIA-X is about helping ensure that it is going in the right direction. 

 

What are the benefits of a Top 500 company in Europe working with GAIA-X? 

 

There are benefits in terms of cloud infrastructure and data labeling. The future will be all about data more than software development, in my opinion, and GAIA-X will be able to steer innovation in this field. That’s a major benefit for companies that work with the project.  

Beyond that, there is the benefit of interconnection with other companies – sharing information, creating partnerships, and exploring funding opportunities. The project is a collaboration of the best professionals in this space, so there is also the benefit of knowledge sharing and growth.  

H&M Group’s Data Mesh Journey: From Ideation to Implementation

Data mesh was coined by tech leader Zhamak Dehghani in 2019 to refer to an agile, domain-based approach to setting up data architecture. As organizations become more data-driven, it’s time for IT and digital leaders to explore data mesh and its advantages. Erik Helou, the former Lead Architect at H&M Group, shares pertinent insights on how the organization utilizes data mesh, its benefits, and the challenges encountered during implementation.  

*This article is a recap of Erik Helou’s presentation at the session, Decentralized Approach to Becoming an Agile Business. 

 

Why H&M Group Adopted Data Mesh

H&M Group found itself in the same spot as many organizations, experiencing many iterations of data platforms, ways of working with data, and centralized teams. “In those days, we spent four or five years on new AI efforts, working with data in different ways,” Helou explains. H&M Group’s data systems eventually had to scale with the organization’s growth, and data mesh enabled that.  

Data mesh addressed the following needs: 

  • An accelerated growth of data: “The organization experienced accelerated growth of data, resulting in the need to use data in newer and faster ways.” 
  • A rapidly changing industry that pushed demand to scale AI and data: “The industry was changing as our company was evolving. We needed to onboard, facilitate, and make use of data capabilities at a scale and speed we hadn’t seen before.” 
  • Business knowledge ownership inhibited growth: “A big thing that inhibited growth and expansion in the data space was how to scale and manage business knowledge ownership of the data; and the ways of interpreting the data.” 
  • Business tech organizations move towards product centricity: “We saw a shift towards product centricity in the business development and technology departments, rather than the usual IT delivery way of operating.” 
 

H&M Group’s Interpretation of Data Mesh

H&M Group was drawn to data mesh, and more specifically, the distributed domain model because it solved many of the organization’s pain points. “The idea of this model is to define a map of your business in domains, subdomains, and data products,” Helou explains.  

“In a typical retail case, you would have the business data domain of sales, which is a difficult source-aligned product domain. All source-aligned data domain products should be the most correct and easy-to-use window into the operational business. That way, anyone in the company who needs the data view on insights, discovery, or operational software development knows where to go.”  

Helou explains that data mesh creates an official domain of data products that work well together. There is also a team behind them that the staff can contact to guarantee the operational qualities of that data. “It’s easy to find your way in a data mesh that represents the entire company’s activities,” he adds.  

Next are the aggregate products, or consumer-aligned products, which are refined or engineered products that transform operational business data for different purposes. Consumer-aligned products focus on specific needs in the business. A typical aggregate is a customer 360 product, which takes information from a number of data products and, with a wealth of business knowledge, refines it into something that can be used by people or systems.
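To make the distinction concrete, data products in a mesh are often described by small, machine-readable descriptors. The sketch below is illustrative only; the field names and example products are assumptions, not H&M Group’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DataProduct:
    """Minimal descriptor for a data product in a mesh (illustrative schema)."""
    name: str
    domain: str                      # owning business domain, e.g. "sales"
    kind: str                        # "source-aligned" or "consumer-aligned"
    owner_team: str                  # team accountable for operational quality
    output_ports: List[str]          # where consumers read the data, e.g. table or topic names
    upstream: List[str] = field(default_factory=list)  # products this one aggregates


# A source-aligned product: the authoritative window into operational sales data.
sales_transactions = DataProduct(
    name="sales_transactions",
    domain="sales",
    kind="source-aligned",
    owner_team="sales-data-team",
    output_ports=["sales.transactions_v1"],
)

# A consumer-aligned aggregate: customer 360, built from several upstream products.
customer_360 = DataProduct(
    name="customer_360",
    domain="customer",
    kind="consumer-aligned",
    owner_team="customer-insights-team",
    output_ports=["customer.customer_360_v1"],
    upstream=["sales_transactions", "loyalty_members", "web_events"],
)

print(customer_360)
```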

 

The Four Pillars of Data Mesh

  1. Domain-oriented ownership: This enables the distribution of ownership, development, and maintenance of hundreds of data products in an enterprise like H&M Group. Employees need to understand their domain ownership and data product ownership. “That’s the operational model and culture that needs to be established in the organization,” Helou says.  
  2. Data as a product: “Teams supply their data products to present what happens in the business domain they are responsible for. They serve that to themselves and to the rest of the company as an important asset.” 

To make that happen in a structured and sustainable way, there are technological tools that need to be in place: 

  1. Self-service data platform: “This is so we don’t have 200 product teams purchasing software and designing things in completely different ways. There’s a lot to gain in terms of costs and interoperability between different products if they can be on the same platform and use the same tools.” 
  2. Federated computational governance: This is important for the sustainability, compliance, and quality of H&M Group’s data products. “It’s things from the legal parts to logging to data quality to discoverability. You have one catalog where you can browse, discover, and understand the data assets you have rather than browsing through a big database.” 
 
 

Features of the Data Mesh Approach

  • Enable all teams to autonomously ingest, share, refine, and consume data: “The autonomous way of working was the most important thing for us. The self-service data platform is key to distributing the work. We wanted all new and existing data product teams to be enabled on their own without too much dependency on bottlenecks, like how data engineering teams used to be in a data warehouse space. Everyone should be able to find and consume that data autonomously rather than asking for permission or knowledge. It needs to be self-explanatory and as automated as possible.” 
  • Provide a self-service UI based on standardized infrastructure, modules, and platforms: Data products are shared through a friendly UI that provides standardized infrastructure components. “You should be able to create the infrastructure and data pipelines on top of what’s already there for an entire company. Any additional platforms or modules that you need go through the Self-Service UI.” 
  • Provide monitoring, logging, and alerting: As a background capability for the self-service UI, data mesh provides standard monitoring, logging, and alerting systems to maintain consistency among product teams, as well as ensure data quality and operational quality. 
 

Data Centralization is Key

“We want to stay clear of centralizing too much to reduce bottlenecking. Central engineering in data warehousing teams needed to do everything for all the source systems to be represented in the data warehouse. Business teams had to go through the heavily loaded, bottleneck engineering team. Data mesh allows us to get away from that,” Helou says.  

What H&M Group centralizes:  

  • Ensure a holistic one-stop-shop data use experience: “This is the data catalog we need to offer centrally because it has to be federated. Everything needs to be collected in one place.” 
  • Apply governance to all aspects: “We need to centralize the governance, legal aspects, security, compliance, and data quality.” 
  • Establish reusable accelerators and toolkits: “These are the building blocks for all the product teams to establish their data pipelines. That way, the teams don’t have to build specific tools because they cost a lot of time and risk. This makes it easy to fix bugs across a large number of data products at the same time.” 
  • Create schema, contract, and landscape fundamentals: “We need also to centralize hygiene factors like schemas, data contracts, and landscape fundamentals to enable stable and trustworthy operations at this scale and manage changes in the integration points.” 
  • Massive communication effort: “The central team needs to continuously talk about how we use data mesh and why we use it.” 
  • Documentation is key: “We need to centralize documentation describing the many data products we offer.” 
 

Data Mesh Lessons and Challenges

  • Mindset shift and upskilling of employees: “It’s a cultural thing to distribute the ownership of data creation, data knowledge, and data use in this way, which is something that is very attractive for a lot of people. But it’s still a big shift and we need to educate each other on how the organization uses data mesh. There will be upskilling, and the addition of new technical tools.” 
  • Decentralize technology (no central DevOps): “The decentralization of technological systems to find the right balance on what to centralize and decentralize.” 
  • Architectural and technological change: “It has an impact on the entire data landscape on how things work in the operational backbone and how data flows.” 
  • Manage legacy platform deprecation: “We need to take legacy into account. There are many well-functioning data solutions for reporting analytics that have to keep running. But we also have to steer our investments towards data mesh products and find our transition from legacy systems to this new platform.” 
  • Onboard key data procedures early: “It’s important to onboard key data early and find a handful of data sets early that enable teams to migrate from legacy data solutions to the data mesh reality.”  
  • Focus on business priorities: “What’s your strongest business case, either that you need to keep operating or something completely new? What data needs to be there? Then, you can get business stakeholders to buy in and find experts that can guide you to create those data product teams.” 
 

The Future of Data Mesh at H&M Group

“In the long run, we hope to gain agility and autonomy. As an online retail business, we need to keep scaling fast and adapt quickly to changes in the business world. We also need to get rid of any bottlenecks in this data expansion. Data mesh provides a strong foundation to enable innovation around data. It allows business analysts to uncover new potential, insights, and value propositions. It also lays the foundation for us to continue growing in the digital era and in the data-centric way of working.” 

The Impact of Big Data Analytics Across Industries

Big data has long evolved from being confined to IT sectors to becoming a business imperative. In 2018, the International Data Corporation (IDC) forecast that global revenue for big data and business analytics solutions would reach $260 billion in 2022, with a compound annual growth rate of 11.9% from 2017 to 2022. The IDC’s latest Spending Guide placed actual spending at $215.7 billion in 2021.

As companies continue to find new ways to better leverage the massive amounts of data being collected every moment to enable solutions and retain a competitive edge, we take a look at several case studies of how big data is applied in five different industries.  

 

Human Resources: Driving business performance via people analytics 

 

A McKinsey case study details a major restaurant chain with thousands of outlets around the world looking to improve customer satisfaction and grow revenue. Business leaders believed this could be done by solving the company’s high staff turnover problem through a better understanding of its people.

New and existing data were collected from individuals, shifts, and restaurants across the US market including the financial and operational performance of each outlet. Some points considered include personality traits of employees, day-to-day management practices, as well as staff interactions and behaviors.  

The more than 10,000 data points were used to build a series of models to determine the relationship, if any, between the desired outcomes and drivers. The model was used to test over 100 hypotheses, many of which were posited by senior management based on their own observations and instincts from years of experience. 
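McKinsey does not publish the models themselves, but the general pattern of relating candidate drivers to a desired outcome can be sketched as follows. The variable names, the synthetic data, and the choice of a plain linear model are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-restaurant features (drivers) and an outcome (customer satisfaction).
rng = np.random.default_rng(0)
n_restaurants = 500
X = rng.normal(size=(n_restaurants, 3))   # e.g. manager tenure, shift stability, staff turnover
y = 1.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n_restaurants)

model = LinearRegression().fit(X, y)

# Coefficients indicate which drivers move the outcome and in which direction,
# which is how individual hypotheses can be supported or contradicted by the data.
for name, coef in zip(["manager_tenure", "shift_stability", "staff_turnover"], model.coef_):
    print(f"{name}: {coef:+.2f}")
```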

Noting that some of the hypotheses were proven while others were disproven, McKinsey reported: “This part of the exercise proved to be especially powerful, confronting senior individuals with evidence that in some cases contradicted deeply held and often conflicting instincts about what drives success.” 

Ultimately, the analysis revealed four insights that have gone on to inform the company’s day-to-day people management in its pilot market.  

Just four months in, the company experienced: 

  • Over 100% increase in customer satisfaction scores 
  • 30 seconds improvement in speed of service  
  • Decrease in attrition for new joiners 
  • 5% increase in sales  
 

Supply Chain: Improving cost and service efficiency 

 

A multi-location manufacturer sought to mine its vast library of inventory, shipping, and freight billing data to find ways to improve spending while maintaining service levels. They also wanted to identify opportunities for better inventory management, trip reductions, and order consolidation.  

Using available data, the solution provider created an integrated data management and analytics platform. This was supplemented by a custom order management algorithm.  

The system helped the company consolidate orders heading out to the same location in order to ship them out in one go, thereby reducing congestion at the shipping dock and reducing freight costs by 25%.   
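The provider’s order management algorithm isn’t described in detail, but the core consolidation idea, grouping open orders by destination and packing them into as few shipments as capacity allows, can be sketched roughly like this (the data shapes and capacity figure are assumptions):

```python
from collections import defaultdict

# Hypothetical open orders: (order_id, destination, pallets)
orders = [
    ("A-101", "Dallas", 4),
    ("A-102", "Dallas", 3),
    ("A-103", "Chicago", 6),
    ("A-104", "Dallas", 2),
]

TRUCK_CAPACITY = 10  # pallets per shipment (assumed)


def consolidate(orders, capacity):
    """Group orders by destination, then pack them into as few shipments as possible."""
    by_destination = defaultdict(list)
    for order_id, destination, pallets in orders:
        by_destination[destination].append((order_id, pallets))

    shipments = []
    for destination, items in by_destination.items():
        current, load = [], 0
        for order_id, pallets in sorted(items, key=lambda x: -x[1]):  # largest orders first
            if load + pallets > capacity and current:
                shipments.append((destination, current))
                current, load = [], 0
            current.append(order_id)
            load += pallets
        if current:
            shipments.append((destination, current))
    return shipments


print(consolidate(orders, TRUCK_CAPACITY))
# Orders A-101, A-102, and A-104 ship together to Dallas; A-103 ships alone to Chicago.
```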

Predictive analysis applied to the company’s supply chain management also led to: 

  • 10% increase in shipping capacity 
  • Improved service-level metrics 
  • 10% decline in inventory levels  
  • Less shipment backlog during peak seasons 
  • Clarity on freight spend drivers 
 

Healthcare: Effective screening and treatment of diseases 

 

In China, there has been a rise in cerebrovascular diseases such as strokes. In response, the government launched a Healthy China 2020 plan aimed at improving public health. 

Following that, medical professionals investigated how best to treat strokes and related medical conditions by identifying three key areas: accurate screenings, precise treatments, and meticulous rehabilitation.  

They wanted a more effective way to analyze data than just using the traditional manual paperwork system, which was not scalable.  

Partnering with IBM, the Shanghai Changjang Science and Technology Department along with China’s top three hospitals developed an intelligent stroke assessment and management platform. The AI-enabled platform analyzes patient information, applies a screening model, and compares these details with known risk factors.  

Patients that have been identified as high-risk are then channeled to the appropriate physician with treatment recommendations and corresponding probabilities of success.  

This application of big data analysis led to: 

  • 15% improvement in diagnostic accuracy of stroke risks in patients 
  • 80.89% accuracy in predicting treatment outcomes 
  • Scaling risk screenings to cover a larger population and encouraging early treatment 
 

Financial Services: Post-trade analysis 

 

The National Bank of Canada’s Global Equity Derivatives Group (GED) provides trading solutions that manage securities such as stocks, futures, funds, and options. It collects and processes a high volume of stock-market financial data, but faces a challenge when it comes to data analysis.  

The bank sought a more effective and scalable way to process and analyze structured and unstructured data, as well as historical data, in order to develop a better analytical solution.

Using an open-source big data processing framework and moving its processes to the cloud allowed the bank to achieve its goal of scalability. The GED was able to analyze hundreds of terabytes of trade and historical data. This now enables their business analysts to conduct quicker post-trade analysis.  

Big data analysis allowed the bank to: 

  • Reduce the post-trade analysis process from a few weeks to a few hours
  • Conduct more robust post-trade analysis
  • Improve trading operations
  • Increase revenue
  • Increase customer satisfaction
 

Manufacturing: Predicting Equipment Anomalies 

 

A major manufacturing company looked to deploy digital twin technology to make manufacturing more flexible and efficient. The company, which was struggling to meet its production targets due to unscheduled downtime, created an IoT sensor-enabled digital copy of its critical equipment to predict potential anomalies and maintain the flow of its assembly lines. 

Falling short of its production target also meant that the company faced increased operating costs, customer dissatisfaction, and lost market share to its competitors.  

Applying IoT-supported digital twins technology allowed the company to collect real-time data. When analyzed with other data sets – historical and maintenance-related – the company was able to remotely monitor and assess its physical assets.  

The ML-based algorithm sifted through large volumes of sensor data to detect abnormal equipment behavior and proactively suggested corrective actions before failure (a rough sketch of this kind of detection follows the list below). This led to:

  • 100% achievement of production target 
  • 25% reduction in operation costs 
  • 54% increase in profit margins 
  • Timely product delivery 
  • Higher customer satisfaction and increased market share 
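The article does not name the algorithm used, so the sketch below simply shows one common way to flag abnormal equipment behavior from sensor readings. IsolationForest and the synthetic sensor data are illustrative choices, not necessarily what the manufacturer deployed.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical sensor readings per machine cycle: temperature, vibration, current draw.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[70.0, 0.5, 12.0], scale=[2.0, 0.05, 0.5], size=(1000, 3))
latest = np.array([[70.5, 0.52, 12.1],    # looks normal
                   [85.0, 1.40, 15.5]])   # running hot and vibrating: likely anomalous

# Fit on historical "healthy" data, then score the latest cycles.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(latest)  # +1 = normal, -1 = anomaly

for reading, flag in zip(latest, flags):
    status = "ANOMALY - schedule inspection" if flag == -1 else "ok"
    print(reading, status)
```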

Data Fabric – Securing Your Flexibility and Freedom to Choose

A Data Fabric delivers the right data and applications to the right place, at the right time, and with the right capabilities. You are in control of your data and can keep it safe no matter if your workload is running locally, hybrid, or cloud based! You have the flexibility to transition into a hybrid, multi-cloud setup that suits your business and the freedom to choose the right service to any given workload – now and in the future. 

 

Your data—where, when, and how you need it! 

For nearly three decades, NetApp has been focusing on innovations enabling customers to build stronger, smarter, and more efficient infrastructures. The objective being delivery of the right data and applications to the right place, at the right time, and with the right capabilities. When it comes to your business, we’ll meet you at your level and explore where you want to go, and then help you get there with a data fabric designed for simplicity and agility. 

 

What is Data Fabric? 

In 2016, Dave Hitz, co-founder of NetApp, went on stage at NetApp Insight and introduced a new term: Data Fabric. It wasn’t a product, there were no deliverables, but it was a philosophy that NetApp was going to live by in the development of its new and existing products. 

He said that most new workloads were going to be cloud-based (but not all), and that while it’s really easy to deploy and destroy workload instances in the cloud, those workloads are useless unless they have relatively local access to the datasets required to achieve business outcomes. 

Any doubts about the cloud being production-ready had been clearly vanquished as AWS and Azure had already grown into behemoths, with each introducing new services seemingly every day. 

A few years after this announcement, it seemed that “Data Fabric” was going to be this overall term that fell into the category of “marketecture” — just a cool term with no real meaning or implementation. 

 
 

So what is the Data Fabric now?

NetApp has created a foundational delivery architecture for workloads and their data. This is unique as everyone else in the industry focuses on one or the other. Customers can provision, manage, and run production, development, or test application instances in the place that makes the most sense at that time. This has a tremendously positive impact on a data-driven application development and execution workflow, as organizations look to the cloud for their “use-as-you-need” compute farms.  

When you consider that according to IDC, the amount of data stored globally will grow from ~40ZB in 2019 to 175ZB in 2025, with 49% of that data stored in a public cloud, it’s clear that two things are true: 1) there’s going to be a ton of data in the cloud, and 2) there’s going to be a ton of data still resident in data centers.  

These datasets will consist of millions/billions (or more?) of files (or objects), with capacities already exceeding the petabyte range. Moving datasets of that sort around by scanning filesystems is simply not possible. 

At the core of the NetApp Data Fabric lies NetApp SnapMirror technology. SnapMirror allows you to efficiently move data from place to place in a way that makes the number of files irrelevant, without the need for third-party replication software or appliances that introduce high rates of failure and even higher skill requirements for administration. 

NetApp redeveloped SnapMirror at the beginning of the Data Fabric movement to open it up to other platforms such as S3 to expand the Data Fabric to as many use cases as possible. 

NetApp Cloud Volumes ONTAP has allowed customers to achieve much faster analytics results using lots of ephemeral cloud compute, leveraging data that resides primarily on-premises, and employing the Data Fabric to get that data into the cloud. The customer remains at the top of the food chain, as opposed to customers who get disrupted because they still cling to the traditional (read: slow and frustrating) 100% on-premises method of application delivery. 

If your organization is looking to achieve new or faster data-driven outcomes, it is imperative that you settle on a foundational architecture that not only gets and keeps your dynamic data in the places where you’ll be achieving those outcomes, but also brings your scaled applications to bear on that data to realize true acceleration. If you do your research, you’ll find that NetApp has led in this space from the onset and is so far ahead in its capabilities that you’ll want to grab onto the NetApp Data Fabric, hold tight, and get ready for a wild ride. 

 

Read more about NetApp’s approach to securing your flexibility and freedom to choose with a Data Fabric here and let’s connect on how to reach the Data Fabric strategy you need.  

 

Alin Kalam: Nurturing Growth and Innovation Through Data, AI, and Sustainability

The IT industry continues to grow and shift rapidly due to the pandemic, and CIOs are constantly on the lookout for ways to foster and adopt new technologies in their organizations. Whether it is sustainable transformation or implementing AI, change is necessary.

As the Head of International Market Intelligence & Data Strategy for UNIQA international, Alin Kalam shares with us his insights on the need for agility through AI, achieving business competence, and nurturing innovation.

 
 

Finding Agility in Artificial Intelligence and Overcoming Disruptions

Businesses and IT leaders today need to be quicker to respond to the ever-changing landscape of their industry and overcome disruptions. Whether it’s to implement hybrid workplace models or to incorporate new technologies such as artificial intelligence and data analytics, there is a definite need for CIOs to strategize.

Kalam shares his insights on the key challenges that CIOs need to be aware of when incorporating new technology and how to effectively transition towards data-driven business models.

 

What are the key challenges for CIOs who are trying to adopt new technologies especially in the AI field?

 

Surely one of the major challenges of establishing AI technologies in companies is a lack of trust, along with limited existing knowledge. On the technical side, I have seen IT productionizing and operational issues arising since 2019.

Often it is not the number of best practices that is lacking, but the ability to align market circumstances and existing technologies with one’s own true business needs. Therefore, I see the cultivation of AI-driven innovation nowadays much more as a strategic challenge than only a technological one.

 

What should CIOs be aware of in the transition towards data-driven business models that serve the dehumanization of critical business fields?

 

On the one hand, dehumanization must be done quickly to address short-term issues, e.g. through the implementation of RPA or AI products to combat challenges caused by Covid; on the other hand, CIOs must balance strategically what and where they are automating or dehumanizing. I have already seen examples of cost reduction projects through dehumanization that are creating huge strategic risks for companies in the long run.

For sure, there will someday be an “after Covid,” and using the current crisis as a scapegoat for cost-cutting alone, without putting the focus on the product portfolio, customer needs, and above all the operational risks of IT systems, can become a huge source of risk.

Here I appeal to strategic long-term aspects rather than short-term gains only, and to address this concern, CIOs must become business-driven more than ever!

 

The Need For Sustainability and Competent Business Intelligence

Companies were forced to change their policies, behaviors, and business strategy due to the prolonged coronavirus pandemic. The recent COP26 climate conference showed that companies are committed to making sustainable-focused organizational changes.

For Kalam, the need for sustainability in IT is clear. He highlights the challenges that many are still facing, in addition to the need to incorporate competent business intelligence to ensure sustainable growth.

 

Sustainable transformation in the IT & innovation field has become a key topic for upcoming years. What are the specific areas of action for CIOs in this field?

 

For sure, sustainability as a topic is here to stay! Not only do we have the macro aspects of it addressing the major concerns of our time, but it has also become a business driver in so many sectors.

With my project Sustainista, I have therefore tried to interconnect companies with the scientific community, ensuring the exchange of data, know-how, best practices, and transparency. The biggest challenge in this field is the simultaneous lack of market and scientific standards. ESGs might be known to many of us, but breaking them down into business actions according to standard approaches and processes is the biggest challenge!

In an ideal world, CIOs and related roles take ownership of this topic and drive it down to doable tasks; otherwise, I am afraid sustainability will be just a cosmetic and marketing label without a true impact on business and how we do things.

A particular starting point is to understand macro goals as an organization and break them down to the data level, delivering measures and related actions with the help of existing data. Many companies I know from various sectors have started with external data sets first to deliver quick successes that can feed this long-term topic.

 

How would you advise companies who are still struggling to incorporate Business Intelligence?

 

Here I clearly follow the storyline of “fail fast, succeed sooner.” Instead of propagating a piece of technology, IT must build a bridge with the business and deliver quick wins. Even now, I am often devastated whenever I see only PDFs and Excel sheets with numbers and KPIs that do not reflect the fast-moving reality of our businesses and data-driven decision-making across borders!

The major issues companies face are data quality, integrity, and security. CIOs are hereby in the role of process enablers. Instead of being only technology-driven, the implementation of BI must often be done in a joint-venture manner.

 

Ensuring Growth Through Data and Overcoming Legacy Challenges

One of the biggest hurdles for digital transformation efforts still stems from legacy systems that are often outdated and not integrated with modern solutions for business uses. Despite the fact that modernizing legacy IT systems is required for businesses to ensure growth, IT leaders are still faced with roadblocks and challenges.

For Kalam, however, legacy systems are not necessarily the main roadblock they once were. Instead, the focus for CIOs now should be to apply best practices during data-driven business transformation and simplify their approach to nurturing experimentation.

 

With regards to data-driven business models, what are the best practices that CIOs and IT leaders need to keep in mind? 

 

As a matter of fact, the approach of data-driven business transformation is anything but data-centric only! It covers the end-to-end processes of entire product lines and the strategic setup of a company. After many years of data harmonization and migration projects, companies often discover their undone homework regarding “creating true business value for the company itself and its customers”. 

I myself often propagate the phrase “no business value without data, no data without a business case”. Within this symbiotic relationship lies the true success of transformation efforts. 

Aside from this core topic, I often miss the foresight of wisdom! It means seeing the potential of data not only in core businesses but also in their extensions and added capacities. In my view, this foresight of wisdom and true added potential is often the key success factor for many.

 

One of the main challenges for organizations is to overcome legacy infrastructure. How can CIOs overcome the legacy obstacle? What are the skills and mindset needed to promote modernization for an organization?

 

To be honest, I really do not see legacy infrastructure as the biggest roadblock anymore. Especially throughout the last decade, there has been so much progress in simplifying legacy systems that I have become more optimistic on that end from my own experience!

I can’t remember the last time I saw companies, for example, migrating legacy data systems into a new all-in-one, all-ruling, superior DWH, data lake, and so on. Instead of searching for the holy grail, we have become more realistic about using data where it is created and where it is at its best.

This data mesh approach has become a blueprint for software solutions as well, just as agility was cultivated from the IT and software world into day-to-day business and project management. But this process only began a couple of years ago; the community does not yet have a buzzword for it, but hey, never say never!

 

Innovation and experimentation are at the heart of data-driven business models. How does one nurture an environment that promotes experimentation within their organization?

 

I rigorously follow the principle of K.I.S.S. (keep it simple, stupid) in the incubation phase of innovation projects. Instead of only talking and selling in this phase, organizations should apply this principle, keep governance to a minimum set-up of risk mitigation processes regarding GDPR, privacy, organizational risks, and so on, and allow experimentation.

Here the old wisdom of “too many rules & regulations kill true innovation & creativity” should be applied. 

If the internal challenges are too big, I have often guided companies and their leadership into the world of entrepreneurship.

The most successful CIOs and IT managers are those who run new innovation ideas or projects as a startup business operating from day one. This can be a way to nurture the true nature of innovation when nothing else is working.

Karin Immenroth: Developing Competency In a Data-Driven Business Culture

The advent of readily available data has fostered a new era of fact-based innovations in corporations, where exploring innovations and new systems can be backed up with empirical evidence. And with the disruption caused by COVID-19, there is accelerated adoption in data technology.

So why is it hard for businesses to adopt data as part of their organizational structure?

The biggest obstacles do not stem from the technical side of things; it’s about the culture. In this interview, Chief Data and Analytics Officer for RTL Deutschland Karin Immenroth shares with us how a business needs to transition into a data-driven culture and the approaches that a modern chief data officer (CDO) needs to adopt in today’s digital landscape.

 

The New Landscape of Data Culture

Over the past decade, data has steadily become an influential factor for decision-making processes. Especially in the past year where almost 60% of the global population is constantly online, businesses are looking into data analytics to better understand their customers and employees.

As with the aftereffects of the pandemic and the changing demands of today’s market, Immenroth highlights how the role of the data officer today has changed significantly while pointing out the underlying driving force for data transformation.

 

How has the role of the Chief Data Officers (CDO) changed and what challenges do they face in a post-pandemic market?

Companies didn’t have Chief Data and Analytics Officers ten years ago. That role didn’t exist yet. But because the market is changing dramatically due to progressive digitalization, “Data” as a topic is becoming more and more important. 

The biggest challenge, however, is cultural – it is not enough for a central data area to drive the cultural change, rather the entire company must start working in a data-centric way. 

The DATA Alliance is the central catalyst for RTL Deutschland on its way to becoming a content, tech, and data powerhouse. The pandemic has permanently changed the way we work. 

For us, as the DATA Alliance, the development surrounding the “mobile office” is very positive, as it means we can now work across Germany and in a completely flexible way. This helps us find and attract the best talent in the German market.

 

Why are companies still struggling to implement data competency and how has the pandemic affected their hesitancy towards adopting data culture?

We are in the middle of a cultural change, transitioning into a data-driven company. 

RTL Deutschland is a company with over 3,000 employees – a cultural change doesn’t happen overnight. It takes time, and it’s also important to have a few lighthouse projects that carry the topic of “data” into the organization and help spread awareness. 

We must make it easier for our colleagues throughout the company to access data, support them in interpreting data, and, of course, show them how to make better decisions based on this data. 

Just like the motto goes, “Use data, be better”. The pandemic has been a positive and driving force behind our cultural change – greater digitization has also brought the processing and implementation of data more broadly into society.

 

Developing and Simplifying Data For Organizations

Without a solid foundation for data culture, businesses will often miss out on the chance to fully utilize the data they’ve collected, or even encounter issues with data consistency or internal processes.

Deloitte reports that only 21% of the global workforce is confident in their data literacy skills. And with 70% of organizations expected to shift to new analytics techniques known as “small data” and “wide data”, businesses that are not data literate will get left behind.

Immenroth dives into how the leadership in RTL Deutschland has steered the company towards developing its analytics sector and advises those who are still trying to find success in building a data-competent organization. 

 

What can those in leadership roles do to improve data literacy within their organizations?

We have launched various projects that help our colleagues make better use of data for themselves and their day-to-day work. 

These are, for example, projects like our Reporting Center or our quota tool, Key Vision. We also support various stakeholders in the company by building data products and decision-support tools for their businesses. 

At the moment we are particularly active in the marketing, content, and digital sectors. And it’s also crucial for us to continue developing in the analytics sector, as it will enable us to make even better use of the treasure trove that is data analysis.

 

For companies and organizations that are struggling to find success in data, what key metrics and best practices should they focus on to drive the importance of data?

Our experience shows that it makes sense not to overcomplicate the initial steps. Very exciting and useful insights can often be found in simple descriptive data metrics. 

If you then go one step further and use analytics or even machine learning, data science, etc., you’ll often find unexpected results and insights that have been “fleshed out” by the data. 

I recommend a good dose of courage to use unconventional methods and approaches – we have had very positive experiences here and have been very pleasantly surprised on more than one occasion.

 

Starting Small and Establishing Data Competence Centers

In 2021, the global big data and business analytics market was forecast to grow to $215 billion, while connected IoT devices are expected to generate 79.4 ZB of data by 2025.

With global economies adopting data analytics at an accelerated pace, businesses might be tempted to “go big” with investments in a data-driven culture. Immenroth believes that CDOs and organizations should do the opposite, building on Data Competence Centers to kickstart their digital transformation.

 

In the pursuit of a data-driven culture, what pitfalls or common mistakes should CDOs or organizations be aware of?

More doesn’t always mean better. My experience is that it’s best and most sensible to start “small” and then expand gradually. In concrete terms – it is better to always start with a small proof of concept and then decide whether something bigger can emerge from it.

Fail fast and have the courage to make and admit mistakes… This is the best way to learn and then use what you’ve learned in your organization.

 

How would you advise CDOs or data leaders who want to seamlessly integrate competence centers?

My recommendation is to look at where topics related to data are anchored throughout the company. 

Then, based on that, you can build the core for the so-called Competence Center. It is advisable to define central topics and make them the heart of the Competence Center, and it is also fundamentally important that enough “data” ambassadors are distributed throughout the company in the areas correlating to each topic. 

In my opinion, it’s this balance that counts. In any case, our experience shows that a central Data Competence Center can be a very successful catalyst for the transformation of a company.

Marco Hoppenbrouwer: Fueling Growth Through Data-Driven Culture

With remote work expected to become a mainstay in the foreseeable future, IT and business leaders are looking at new ways to ensure that their employees are equipped with the necessary critical insights needed to make business decisions.

One of the key approaches is to develop a data-driven culture: using emerging technologies to drive business value and pushing an organization to be insight-driven rather than relying on gut feeling and operating in the dark.

Marco Hoppenbrouwer, Chief Data Officer for Global Functions & Finance at Shell, understands how data can drive value for businesses in today’s landscape. In this interview, Hoppenbrouwer shares his insights on how the chief data officer (CDO) role has evolved and why a data-driven culture is a necessity for corporations.

 

The Power of Digital and Data and The CDO’s Role

To compete in an age of rapid acceleration, companies need to be data-driven, but the transition from a feeling- to a fact-based organization is not an easy path. With only 24% of companies truly fostering data-driven cultures, it is up to the CDO to push the initiative of cultivating data technology in a business.

However, before CDOs can achieve that, Hoppenbrouwer points out why the components of Digital and Data are important for a business and what the focus needs to be for CDOs to streamline the transition towards a fact-based organization.

 

How has the role of the chief data officer (CDO) evolved in today’s data-driven culture?

 

Let me first set the scene as to why digital and data are so important for any business and one cannot do without the other:

The energy transition and digitalization are two mega-trends that affect the world in the coming decades. Both are expected to have a profound impact on the way everyone lives their lives.

Digital technologies can play a key role in the transition to a lower-carbon future. Furthermore, we see a rapid increase in digital products, services, and processes coupled with increasing expectations for a seamless digital experience from both our customers and employees. 

Digital is not new, but what’s different is the availability of technology, data, and capabilities that are growing at an exponential rate. Digital is also one of the few processes that require quality data as input to be successful. 

I’ve seen projects fail because a Proof of Concept succeeded only because it was run on manipulated datasets, which did not reflect the reality in the field once the solution was deployed.

Maximizing the benefits of digital technology is heavily dependent on the readiness of an organization and its workforce, meaning:

  • It is key to upskill the workforce with new tech skills
  • The entire organization needs to be data-savvy

This also means that if the organization cannot keep up, it will rapidly be taken over by a competitor that leverages digital technologies and can deliver a more compelling value proposition faster and at a lower price point. A good example of digital disruption is SpaceX which disrupted the entire launch industry.

As to how this answers your question, the role of the CDO has changed from an information and compliance management role into a strategic value-generator role. With digitalization and the emergence of the CDO, data is now at the forefront and is seen as a key value driver of business outcomes.

 

What role should the CDO play in streamlining the transition to a fact-based organization?

 

There are three key areas the CDO should focus on:

  1. Accelerate digitalization and drive business value from data, meaning:
    • CDO knows how data enables the business strategy and what value it can drive
    • CDO knows what data is needed, who owns it, where it’s mastered, whether it can be trusted, and how it can be accessed
    • CDO formalizes roles & responsibilities for Data Management
  2. Increase the organization’s data literacy so that:
    • employees understand the importance of data management and their role in it, including data quality beyond their line of business. For example: if the quality of data you create is not good enough for downstream usage by another line of business, you create a problem in the value chain.
    • employees have the technical skills to drive value from data through citizen developer tools such as the Microsoft Power Platform. This can be achieved through developing role-based learning paths, setting up communities of practice for sharing key data-related best practices, and running DIY boot camps or hackathons.
  3. Strive for data-based decision-making by ensuring that the required analytics & insights are available in a timely manner within the decision-making process
 

Eyeing The Potential and Opportunities of Data-Driven Culture

The field of data analytics has grown consistently in acceptance and importance, and will play a critical role as a decision-making resource for executives in modern companies. 

Gartner predicts that by 2024, at least 30% of organizations will invest in data and analytics platforms, increasing the business impact of trusted insights and encouraging new efficiencies. As such, CDOs must take the initiative in fostering data technology as an organizational asset for digital transformation.

But, what are the challenges and how should CDOs approach this transition?

Hoppenbrouwer delves into the main points of how the CDOs should facilitate the data strategy for an organization, and the perspective needed to overcome the challenges of digitalization in a post-pandemic market. 

 

With data having the potential to transform functions into a high business impact model, what initiatives should the CDO take to help this transition?

 

Ensure that there is a data strategy for each line of business. This helps pave a clear roadmap for the usage of data, the business value it enables, and the capabilities required to deliver this value.

Secondly, businesses will need to get the data into good shape, meaning:

  • Identify data ownership and resolve cases where ownership is unclear
  • Identify the data that matters – not all data, only the critical data
  • Make data issues transparent, such as data quality & remediation or master data management & replication
  • Embed data quality management in daily operations for the data that matters
  • Drive fit-for-purpose improvements

Lastly, there needs to be a focus on upskilling the employees on data skills.

 

Are there challenges for the adoption of data technology skills and culture? How has the pandemic affected these challenges?

 

There are many challenges and plans that have been impacted, but I prefer to look at the opportunities.

I am heading our European Data & Analytics community and normally we hold local and focused events on specific data topics. These can be lunch & learn or deep dives on how to start on the AI journey, sharing best practices from a recent analytics project, etc. 

Due to the pandemic, we organized virtual sessions. These events offer the opportunity for staff to virtually meet other colleagues outside their daily routines and join with D&A communities in larger events across the organization, such as data literacy programs or DIY boot camps.

At the same time, the pandemic has accelerated the business’ digitalization plans, putting much more emphasis on data enablement. As a result, it has increased the need for Data, Data Strategy, Data Governance, Data Quality, Data Skills, and Data Capabilities in the organization.

 

People At The Core of Data and Digitalization

The explosion of available data has given corporations the potential to fuel a new era of fact-based innovation, with new ideas backed by solid evidence. All of this culminates in improved operations, clearer strategies, and better ways to satisfy customers. 

Yet for many organizations, a strong data-driven culture still remains elusive with data rarely being the foundation for decision-making.

What makes it hard for corporations to be data-driven?

The answer lies beyond data technology. It is about kickstarting the culture at the very top through leadership that sets expectations and decisions anchored in data. The lack of data awareness is something that Hoppenbrouwer believes is one of the major pitfalls for those in leadership roles.

Not all employees are sufficiently aware of the importance of data for the organization. People have learned to store certain data, but they are not really sure why this is important and what other departments do with it.

As a result, data provided by one department to another is often incomplete or contains errors on more than one occasion. Supplementing or correcting this leads to additional work and additional costs. It’s key for employees to understand the data value chain and the role they play in it.

 

What pitfalls should the CDO be aware of when pursuing a data-driven culture?

 

Culture is made up of people, and changing a culture means you need to get a change going with the people. People don’t change naturally unless there is a reason to do so.

Everybody wants to deliver their digitalization strategy due to the value it enables and this is a key catalyst to improve data culture. 

However, data is a foundational enabler, and data responsibilities have often been treated as an add-on to the day job, with no recognition for good performance in data-related activities.

Leadership needs to change here and needs to step up, from the top, all the way to the supervisors on the shop floor.

What can the leadership do to make these changes? Some examples include:

  • Speaking publicly about the role and importance of data
  • Clearly articulating how data enables the business strategy and the value it unlocks
  • Formalizing data roles & responsibilities AND recognizing the effort behind them
  • Visibly recognizing the good work done in the organization to get the data right
  • Sharing success stories and lessons learned
  • Encouraging staff to become more data literate, through sponsoring data literacy events, citizen data science boot camps, and inclusion in personal development plans

Ultimately, the CDO plays a key role here in supporting the leadership to drive change through the organization.

How to Fully Utilize Data for Improved Customer Experience

Every great business recognizes the importance of customer experience (CX) – a critical strategy in engaging and retaining customers to your brand.

With the e-commerce landscape booming amid the impact of COVID-19, it’s apparent that CX now spans both digital and physical sales channels and is a key competitive differentiator for brands.

But with the extensive research and analyses on achieving great customer experience, why is CX still an ongoing concern for businesses?

 

THE CX CHALLENGE

 

As straightforward as it may sound, it’s becoming harder for companies to deliver the customer experience that consumers expect due to:

 
 

Customer touchpoints are especially significant as these are the areas in the customer journey where the consumer interacts with your brand, and they have a direct impact on the overall experience.

 
 

According to customer service provider Help Scout, “a poor experience at one touchpoint can easily degrade the customer’s perception of multiple positive historical experiences at other touchpoints.” And Qiigo claims that it can take between 13 and 20 touchpoints, or touches, to convert a prospect into a customer. 

Fortunately, as businesses become more digitized, it’s much easier to identify customer behavior patterns and to improve touchpoints in their journey.

However, the sheer amount of raw data available, combined with the challenge of analyzing and acting on customer insights, is part of why organizations still fall short on quality customer experience.

 

PREDICTIVE ANALYTICS IN CX

 

Unlike prior generations, the consumers of today have higher expectations and a clear idea of what they want and how they want companies to deliver it to them.

But 71% of consumers are still receiving “An offer that clearly shows they do not know who I am” while 41% are seeing “Mistakes made about basic information about me.”

Such errors are taken as signs that brands are ‘intentionally’ not placing importance on their customers when, in fact, they show that organizations are not using their customer data to its fullest potential.

 

Pre-Purchase, Purchase and Post-Purchase

 

By leveraging data and artificial intelligence (AI), companies can improve all stages of their CX journey.

One example given by Capgemini showed how Amazon used AI and predictive analytics, before the browsing prospects even made a purchase, to:

 
 

Qymatix Solutions also emphasized the importance of using predictive analytics in the pre-purchase and purchase stages through predictive lead scoring, and of utilizing churn and cross-selling predictions in the post-purchase phase.

Micro-Segmentation and Personalization

 

In the past, segmentation was sufficient to deliver an ‘adequately personalized customer experience’, but today, brands need to micro-segment their potential consumers for hyper-personalization.

Using machine learning, predictive modeling and data mining, predictive analytics helps to:

 
 

In a use case by Wavicle Data Solutions, a restaurant chain’s consumers were segmented into multiple groups and clusters based on gathered data. Following that, “predictive analytics and machine learning created both macro and micro-segments of customers, with matching customized offers for each audience”.

At the end of their process, the restaurant chain was able to develop personalization and loyalty programs that engage customers with more customized offers and meaningful messages, increase customer retention, and grow revenue.
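
To make the micro-segmentation step concrete, the sketch below clusters customers on a few behavioural features and then profiles each resulting segment. It is a minimal illustration with synthetic, hypothetical data and feature names, not Wavicle’s or the restaurant chain’s actual pipeline, but it shows the basic mechanics of letting an algorithm find the segments before attaching a customized offer to each one.

```python
# Minimal micro-segmentation sketch (hypothetical data and feature names,
# not the vendor's actual pipeline): cluster customers on behavioural
# features, then inspect each segment to decide on a tailored offer.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
customers = pd.DataFrame({
    "visits_per_month": rng.poisson(6, 500),
    "avg_ticket_eur": rng.gamma(4.0, 5.0, 500),
    "days_since_last_visit": rng.integers(1, 90, 500),
    "loyalty_member": rng.integers(0, 2, 500),
})

# Scale features so no single metric dominates the distance calculation.
X = StandardScaler().fit_transform(customers)

# Macro-segments first; in practice you would re-cluster within each
# segment (or add more features) to obtain true micro-segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
customers["segment"] = kmeans.fit_predict(X)

# Profile each segment to decide which customized offer it should receive.
print(customers.groupby("segment").mean().round(2))
```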

 

Resource Efficiency For Higher CX

 

Aside from giving consumers exactly what they need, predictive analytics also helps in the efficient allocation of your resources.

For instance, a coffee shop saved 38% of their marketing costs by predicting which of their customers were more likely to churn and sending them targeted offers to convert them into loyal customers.
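
The coffee-shop example rests on churn scoring: rank every customer by predicted risk of leaving and spend the retention budget only on the high-risk group rather than the whole base. The snippet below is a minimal sketch of that idea, with synthetic data and a hypothetical 0.5 risk threshold; it is not the coffee shop’s actual model.

```python
# Minimal churn-scoring sketch (hypothetical features and threshold): score
# each customer's churn risk, then target offers only at the high-risk group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 30, n),    # visits in the last 90 days
    rng.integers(1, 120, n),   # days since last visit
    rng.uniform(2, 40, n),     # average spend per visit
])
# Synthetic label: customers who rarely visit and lapsed long ago churn more often.
churn = ((X[:, 1] > 60) & (X[:, 0] < 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, churn, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]   # churn probability per customer
target = risk > 0.5                        # only these customers receive the offer
print(f"Targeting {target.mean():.0%} of customers instead of 100%")
```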

Other examples, given by MarTech Series, show how predictive analytics can reduce resource wastage and streamline costs by planning staffing levels in advance for a smoother, more timely customer experience, and by adjusting transport routes to keep deliveries on time.

These efficiency strategies not only lead to savings for the company, but also ultimately improve the interactions and experience of the consumers.

But predictive personalization cannot be made without quality data, and data strategy is where some organizations face roadblocks.

 

MAPPING ORGANIZATIONAL DATA JOURNEY

 

While businesses often map out their customer journey, companies should also map out their internal data journey, which can involve multiple functions and C-suites, to determine weak areas in the sharing of their CX data.

For instance, are there information silos between the business departments? Which function has decision authority over data?

In a CX team proposed by TechTarget, the Chief Customer Officer (CCO) is responsible for the customer experience metrics and research while the Chief Experience Officer (CXO) “creates customer journey maps that use data to predict future consumer actions”.

On the other hand, Dion Hinchcliffe, Vice President and Principal Analyst at Constellation Research, and Brian Hopkins, Vice President and Principal Analyst at Forrester Research, both talked about data-sharing and partnerships between different C-suites.

Hinchcliffe mentioned that the Chief Information Officer (CIO) and Chief Marketing Officer (CMO) each have a vital part to play in delivering quality customer experience.

Meanwhile, Hopkins believes that the Chief Data Officer (CDO) and CIO can form a powerful partnership to drive data strategy, where IT supports the CDO to maximize the impact of customer data.

To quote Hopkins, “The bottom line is that control over data is neither a pure tech decision nor a pure data decision.”

With more specialized C-level roles and functions emerging, organizations need to tear down data silos and establish active communication between all business functions for a joint effort towards better customer experience.

The Challenges of Data Governance in EU: Two Years Into GDPR

On 25 May 2018, the now-renowned General Data Protection Regulation (GDPR) became fully enforceable across the countries of the European Union (EU).

Superseding the 1995 Data Protection Directive, the GDPR addresses the processing, protection and portability of personal data within the EU and the European Economic Area (EEA).

 

How does the GDPR impact businesses?

 

Not only does the framework provide more control to individuals over the use and collection of their personal data, it also streamlines data regulations for businesses that are operating in the EU or offering their services to clients located in the EU.

Core dna best explains which companies are affected by the GDPR in the diagram below.


 

Through the 7 principles of the GDPR – lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability – organizations are expected to control and process data, whether consumer or company information, in compliance with the regulation.

To clarify, businesses collecting customer data must document and have evidence of consent for every purpose the data will be used for.

 

“[The] generic consent or opt-out consent does not comply with GDPR. […] For example, if someone opts into email marketing, you cannot use this consent to send them a letter or call them or their company.”

GDPR for Business: What is GDPR and How Does it Impact You?

Digital Media Stream
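
In practice, “evidence of consent for every purpose” means storing consent as purpose-specific records rather than a single opt-in flag. The sketch below shows one possible shape for such a record; the field names and schema are hypothetical, not a format prescribed by the GDPR.

```python
# Minimal sketch of per-purpose consent records (hypothetical schema): consent
# is evidence for one purpose at a time, so opting in to email marketing never
# implies consent to be phoned.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str       # the data subject the consent belongs to
    purpose: str          # e.g. "email_marketing", "phone_marketing"
    granted_at: datetime  # when consent was captured
    source: str           # where the evidence comes from (form, checkbox, call)
    withdrawn: bool = False

def has_consent(records: list[ConsentRecord], subject_id: str, purpose: str) -> bool:
    """True only if there is active, purpose-specific consent on file."""
    return any(
        r.subject_id == subject_id and r.purpose == purpose and not r.withdrawn
        for r in records
    )

records = [
    ConsentRecord("cust-42", "email_marketing",
                  datetime(2021, 3, 1, tzinfo=timezone.utc), "newsletter signup form"),
]
print(has_consent(records, "cust-42", "email_marketing"))  # True
print(has_consent(records, "cust-42", "phone_marketing"))  # False: separate consent needed
```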

 

What data does the GDPR cover?

 

The GDPR protects any private data that identifies a data subject (the customer), ranging from basic identity information and race or ethnicity to biometric data and political opinions. However, data that is irreversibly anonymous and unidentifiable is not considered as personal data and therefore, is not covered by the GDPR.

Thus far, the length of time a business is expected to store the data has not been firmly established, with the GDPR stating that the information should not be kept longer than necessary or required. In this case, organizations need to determine how long to keep the data based on either the national law or the purpose of the data collection and processing.

 

“Think about what is the purpose you want to achieve, and how long you will need the collected data to fulfill that purpose.”

How Long Should You Keep Personal Data?

Data Privacy Manager

 

The only information that can be kept for longer retention periods are data used “for archiving purposes in the public interest, and for scientific or historical research purposes or statistical purposes.”
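
One way to operationalize purpose-based retention is to attach a retention window to each processing purpose and flag records that have outlived it. The sketch below illustrates that idea; the retention periods shown are hypothetical and would in practice be derived from national law or the documented purpose, as described above.

```python
# Minimal retention-check sketch (hypothetical retention periods): records whose
# purpose-specific window has elapsed are flagged for erasure or review.
from datetime import datetime, timedelta, timezone

RETENTION_BY_PURPOSE = {
    "order_fulfilment": timedelta(days=365 * 7),  # e.g. statutory bookkeeping rules
    "marketing": timedelta(days=365 * 2),         # kept only while the purpose holds
    "support_ticket": timedelta(days=365),
}

def is_due_for_erasure(collected_at: datetime, purpose: str, as_of: datetime) -> bool:
    """True when data collected for `purpose` has outlived its retention window."""
    return as_of - collected_at > RETENTION_BY_PURPOSE[purpose]

collected = datetime(2018, 6, 1, tzinfo=timezone.utc)
as_of = datetime(2021, 6, 1, tzinfo=timezone.utc)
print(is_due_for_erasure(collected, "marketing", as_of))         # True: 2-year window elapsed
print(is_due_for_erasure(collected, "order_fulfilment", as_of))  # False: still within 7 years
```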

 

Who handles the data?

 

According to the GDPR’s Recital 39, the data controller, an individual or company that controls the processing and purpose of data, is responsible for ensuring that the personal data are not kept longer than necessary, and for establishing time limits for data erasure or periodic review.

There is also the data processor, usually a third-party person or organization, that processes the data on behalf of the data controller, which can include implementing security measures to safeguard the data. The controller must ensure that the assigned processor has sufficient guarantees “to implement appropriate technical and organizational measures” in compliance with the regulation.

The regulation requires companies to appoint a Data Protection Officer (DPO) if they store or process data on a large scale or if they are a public authority or body. Whether internally or externally appointed, the DPO’s responsibilities include:

 

  • Informing and advising the company and employees on compliance requirements;
  • Raising awareness and training staff involved with data processing;
  • Monitoring compliance and conducting related audits; and
  • Cooperating and acting as the contact point with the supervisory authority on issues relating to data processing.

 

What challenges are businesses facing in being GDPR-compliant?

 

Although companies were expected to be GDPR-compliant by May 2018, research found that only 20% had completed their GDPR implementations as of July 2018. More than two years later, 27% had still not started on GDPR compliance, while 60% of tech companies were also not prepared for GDPR.

Many organizations faced, and are still facing, difficulties in their journey to become GDPR-compliant. From changing the way they handle customers’ data to tackling challenges in data retention and deletion, some businesses believe that the regulation limits their ability to operate efficiently or run a profitable company.

 

  • Lack Of Readiness

 

Complacency, lack of understanding, competing laws, unfamiliarity with data processes and usage – these are some of the reasons behind organizations’ lagging or partial compliance with the GDPR. 

Research also cited last-minute data identification and other preparations in the final months before the deadline as another possible reason for the lack of readiness.

For most businesses, big and small, it has been no simple feat to juggle the different aspects of GDPR compliance – from consolidating data gathered over the years and training employees in data management to hiring for the required roles, including talent in GDPR program design and implementation.

It’s even more difficult for international companies that need to comply with differing data privacy laws. And more often than not, the complexity has led businesses to hire individuals or firms specifically to handle compliance.

 

“My concern is that in the rush to be ready for the GDPR before 2018, and indeed since, many companies have engaged with individuals or organizations which haven’t given them proper advice with regards to their requirements.”

– Brian Honan, CEO of BH Consulting,

GDPR: The First Two Years and Future Challenges

 

In fact, according to TrustArc, 87% of companies needed help with GDPR and used external firms to understand the regulations, to gain tools and tech for automation and operationalization of data privacy, and new policy and process creation.

 

Solution tip: Break the regulations and processes into manageable tasks. Conduct a risk assessment to identify compliance and data security gaps, and establish a formal data governance program to map the type of data collected, its purpose, usage and storage, and how it’s shared.
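
A data governance map of this kind can start very simply: one entry per type of data, recording its purpose, storage location, sharing and lawful basis, so gaps stand out immediately. The sketch below uses hypothetical entries and fields purely for illustration.

```python
# Minimal data-mapping sketch (hypothetical fields and systems): each entry
# records what is collected, why, where it lives, and who it is shared with,
# so compliance and security gaps become visible at a glance.
DATA_MAP = [
    {"data": "customer email", "purpose": "email marketing",
     "storage": "CRM (EU region)", "shared_with": ["email service provider"],
     "lawful_basis": "consent"},
    {"data": "payment history", "purpose": "order fulfilment",
     "storage": "ERP", "shared_with": [], "lawful_basis": "contract"},
    {"data": "support call recordings", "purpose": None,  # gap: purpose undocumented
     "storage": "file share", "shared_with": ["outsourced call centre"],
     "lawful_basis": None},
]

# Flag entries missing a documented purpose or lawful basis: candidates for the
# risk assessment described in the tip above.
gaps = [e["data"] for e in DATA_MAP if not e["purpose"] or not e["lawful_basis"]]
print("Entries needing review:", gaps)
```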

 

  • Control of External Parties

 

Under the GDPR, all third parties that access or will access the controller’s data, including vendors, partners and external data processors, must comply with the regulation.

As Ian Evans, the Managing Director for EMEA at OneTrust, aptly put it, “You now have the obligation to ensure that the people you contract with – and who undertake processing on your behalf – are also going to represent you and your views on privacy as well.”

So how should companies maintain data governance and control over their arrangements with third parties?

All contracts with third-parties should be revised to define the data processes, including:

 

  • How information is used, managed and protected;
  • How breaches are reported;
  • What the customers’ rights are;
  • Acting only as per documented instructions;
  • Agreement not to contract a sub-processor without prior approval; and
  • Returning or deleting all data at the end of the contract.

 

Not only do businesses need to ensure that the external firms follow through on the privacy commitments, they’re also required to know their vendors’ privacy policies and ascertain that they have appropriate security measures in line with data protection compliance.

It should be noted that a data breach occurring at a third party or caused by a vendor is a shared responsibility between the parties – the processor must notify the data controller of the breach, and the controller, in turn, is expected to report the incident to a GDPR regulator within 72 hours.

Furthermore, the controller is responsible for informing the data subjects, or customers, of the breach, where the DPO will act as the point of contact between the controller, the regulatory office and the customers.

 

According to Soha Systems, 63% of all data breaches can be linked directly or indirectly to third parties. Additionally, only 37% of controllers believe that they will be notified by the vendor if there was a breach of data.

 

However, less than 20% of companies feel confident in being able to report a breach within the stipulated time, and it was found that only 45% of EU companies made an effort to report such incidents.

 

Solution tip: To avoid the heavy costs of a vendor data breach, it’s best to have a solid vendor risk management program with strong technology and clear policies and procedures. Detailed audit records and processes also help to catch any issues before they escalate into a breach.

 

  • Data Deletion and Minimization

 

According to Symantec’s State of European Privacy Report in 2016, 90% of organizations believe that deleting customer data will be a challenge for them in regards to GDPR compliance while 60% said they are not equipped with an existing system to delete the data.

As the GDPR prohibits businesses from holding unnecessary data or storing data for extended periods, companies have had to determine what data to keep and for how long. Since the regulation also grants data subjects the right to erasure, organizations need to find the best solutions for permanently removing personal data.

The issue is that some companies may not know where their data is stored within the organization, thus making it difficult to locate and delete the data. There’s also the problem of backups, so how are organizations expected to erase personal data that is “often scattered across multiple applications, locations, storage devices, and backups”?
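
Automated data discovery tools address exactly this problem by scanning systems for values that look like personal identifiers. The snippet below is a deliberately simplified sketch of that idea, using a few hypothetical regex patterns; dedicated discovery software is far more thorough.

```python
# Simplified personal-data discovery sketch (hypothetical patterns): scan
# free-text fields for values that look like personal identifiers, so erasure
# requests know where to look.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def find_personal_data(record: dict[str, str]) -> dict[str, list[str]]:
    """Return which personal-data patterns appear in which fields of a record."""
    hits: dict[str, list[str]] = {}
    for field, value in record.items():
        for label, pattern in PATTERNS.items():
            if pattern.search(value):
                hits.setdefault(field, []).append(label)
    return hits

legacy_note = {"notes": "Call Jane back at +49 170 1234567, jane@example.com"}
print(find_personal_data(legacy_note))  # {'notes': ['email', 'phone']}
```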

 

 

Aside from data deletion, data anonymization and pseudonymization are data minimization techniques that are used by businesses to comply with the regulations.

Anonymized data prevents data subjects from being identified and is excluded from the GDPR, as it is no longer considered personal data.

On the other hand, data pseudonymization “replaces personal identifiers with non-identifying references or keys”, preventing the identification of the data subject without the key. But data processed using this method is still regulated under the GDPR as the data subject can be re-identified through additional information.

While companies are using these methods to protect their data assets, organizations must ensure that they still comply with the data purpose limitation in Article 5 of the GDPR.
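
As a rough illustration of the difference, the sketch below pseudonymizes an identifier with a keyed hash: anyone holding the secret key could re-link the token to the person, which is exactly why pseudonymized data remains personal data under the GDPR, whereas true anonymization must break that link irreversibly. The key handling shown is hypothetical and simplified.

```python
# Minimal pseudonymization sketch (illustrative only): personal identifiers are
# replaced with keyed HMAC tokens. Whoever holds the secret key can re-link a
# token to its subject, so the data is still personal data under the GDPR.
import hmac
import hashlib

SECRET_KEY = b"store-me-separately-from-the-data"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-identifying token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "purchases": 12}
safe_record = {"subject_token": pseudonymize(record["email"]), "purchases": record["purchases"]}
print(safe_record)
```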

 

Solution tip: Implement automated data discovery software or machine learning technologies that are able to keep track of all the data in the organization’s databases, data lakes and legacy systems. Carefully review if anonymized data is possible for the company’s data use before implementing any anonymization solution or automated erasure software.

 

  • Data Security

 

The COVID-19 pandemic brought many challenges to organizations, one of them being the rise of data breaches as remote working continues to be the norm for companies. In fact, the months between March and June 2020 recorded more than 470 data breaches, pushing CIOs, CISOs and other C-suites to strengthen their cyber security strategies.

Breaches not only indicate a lack of data security, whether on the controller’s or the processor’s part, but can also lead to hefty GDPR fines of up to €20 million or 4% of the company’s total global turnover, whichever is higher.

Reputation damage and loss of customer confidence are other consequences of such incidents, which can be hard to rectify even after containing the breach, seeing as “57% of consumers don’t trust brands to use their data responsibly”.

From low employee awareness of cyber threats and lax online behavior to unsecured endpoints and external access, there are many security gaps that hackers can utilize to gain access to a company’s data. 

 

“Data security does not equal data privacy, but it is an integral part in achieving it.”

– Paige Bartley, Senior Research Analyst at S&P Global Market Intelligence,

Expert Interview: Paige Bartley on Data Privacy

 

CIOs are already focusing on maintaining system security, while employee training is a top priority for 92% of C-suites, according to our findings.

 

Solution tip: Update policies regarding the access and handling of data when managing it externally, and increase training of employees on the new policies, online safety and rising cyber threats. Limit data access to only authorized personnel, and implement systems to detect illegal access.

 

How should companies stay GDPR-compliant?

 

Executive leadership is vital in ensuring the organization remains compliant with the regulations.

While data compliance and cyber security may be in the realm of the CDOs, CISOs and CIOs, all stakeholders that collect and use customer data should be involved – from marketing and sales to finance and operations – along with the assigned DPO.

Clear and detailed procedures must be established and periodically reviewed to ascertain that processes continue to adhere to the GDPR. This includes not only the handling and use of data, but also answering the requests of data subjects exercising their rights.

Furthermore, organizations should demonstrate accountability and transparency in all processing activities, which extend to keeping records of risks and compliance progress, maintaining a strong data protection and breach response plan, and ensuring the continued compliance of external parties.

Although companies might lament the obstacles and burdens of GDPR compliance, studies show that among businesses that have implemented their compliance processes, 74% say the GDPR has had a beneficial impact on consumer trust, while 73% believe the regulation has actually boosted their data security.

Overall, the GDPR is showing a positive effect on businesses, especially for companies that show they value the privacy of their customers.