7 Enterprise Architecture Trends to Watch in 2024

The rapid pace of technological change in recent years has become a catalyst for the evolution of the role that enterprise architects play within organizations. No longer just responsible for designing and managing IT systems, enterprise architects are now strategic partners in helping organizations achieve their business goals through the use of technology. In fact, 79% of leaders say demand for EA services has increased in the past year (BizzDesign).

The role of enterprise architects has become increasingly cross-functional: they work with stakeholders from across the organization to align IT systems not only with technical requirements but with business needs and the needs of all users.

The current digital landscape continues to evolve at a lightning pace, giving rise to the need for more mature EA functionality to help break down tech silos and create better collaboration within organizations. Enterprise architects must forge a strong strategic partnership with CIOs to enable the development of configurable and scalable solutions and define a roadmap for future changes.

Here are seven trends that are poised to shape enterprise architecture in 2024.

 

1. Enterprise Architects as Key Change Enablers

 

Until recently, the job of enterprise architects was focused on reducing an organization’s technical debt and simplifying its application portfolio to increase business agility. Now, amid rapidly evolving technological and business environments where cloud and API-led services have made systems more complex, technical debt can no longer be managed in the same way. Instead, speed and adaptability are crucial.

Old-style governance processes are too rigid and slow for the current pace of innovation, limiting teams’ ability to innovate at speed and deliver a competitive advantage.

This is where enterprise architects come in as key change enablers: they must champion the value of EA to stakeholders by highlighting how it is essential to a smooth digital future. This will ensure higher EA adoption and increased investment, and ultimately multiply the impact of architects across the organization.

 

2. Enterprise Architecture to Drive Sustainability

A study by MSCI ESG Research LLC found that progress on the sustainable development goals defined by the United Nations is sorely lacking: only 38% of respondents were “aligned” with the goals, while 55% were either “neutral” or “misaligned”. These findings add fuel to accusations of greenwashing, a practice that is no longer tolerated.

EA is an excellent tool for acting on sustainability drivers and for defining how different parts of an organization link together to trace sustainability metrics and progress. Enterprise architects can leverage different frameworks to first run an impact analysis for the organization, consulting experts to inform their models and insights. From there, it’s important to define actions that can drive sustainability practices forward from both a technology and a people perspective, while determining adjustments that may be necessary along the way.

 

3. AI-enabled Enterprise Architecture

 

The hype over artificial intelligence (AI) is not waning. In fact, we’re moving from hype to practical application of AI in business as the technology goes mainstream. For enterprise architects, this means AI will start to become a practical tool that enables smarter design practices. At the same time, the rapid adoption of AI technologies also means that enterprise architects will have to design systems that support the application of AI-enabled tools.

In the world of EA, AI can be used to optimize architecture documentation with the intention of improving data quality in the face of continuous change. AI-enabled EA can also deepen collaboration and make EA more accessible to organizations by offering an easier and quicker way to create models.
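To make this less abstract, here is a minimal, hypothetical sketch in Python of the kind of data-quality check an AI-enabled EA tool could run continuously over an architecture repository. It uses plain string similarity and review dates rather than a trained model, and the component names, dates, and thresholds are illustrative assumptions, not a reference implementation.

```python
from datetime import date, timedelta
from difflib import SequenceMatcher

# Hypothetical excerpt of an EA repository: application components and
# the date their documentation was last reviewed.
components = {
    "Customer Portal": date(2023, 11, 2),
    "Customer Portal v2": date(2021, 3, 15),
    "Billing Engine": date(2020, 6, 1),
}

STALE_AFTER = timedelta(days=365)

def is_stale(last_reviewed: date, today: date = date(2024, 1, 1)) -> bool:
    """Flag documentation that has not been reviewed within the last year."""
    return today - last_reviewed > STALE_AFTER

def likely_duplicates(names: list[str], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Flag component names that are suspiciously similar to one another."""
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
                pairs.append((a, b))
    return pairs

print([name for name, reviewed in components.items() if is_stale(reviewed)])
print(likely_duplicates(list(components.keys())))
```

An AI-enabled tool would apply far richer checks at enterprise scale, but the principle is the same: keep the repository current and free of duplicates so the models stay trustworthy.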

 

4. Real-time Compliance

 

With the conversation surrounding AI regulation and data privacy taking centre stage, enterprise architects will be called upon to help ensure that organizations are compliant with new and changing regulations.

The target now for most organizations that want to be data-driven and proactive is real-time compliance. A mature EA practice will be necessary to report on the scope of controls and the state of compliance, and to provide access to real-time evidence of effectiveness.

Everyone from CISOs to internal auditors and risk managers will benefit from mature EA, which can provide an in-depth, enterprise-wide view of the standards and policies that must be adhered to. Regulators benefit as well, as they will be able to rapidly access and read compliance reports.

From implementation to coordination, visibility, and traceability, enterprise architects can help inform boardroom-level decision-making as well as downstream measures and processes. For organizations, this amounts to a cost-saving move and a proactive approach to risk management.
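As a purely illustrative sketch of what “real-time evidence of effectiveness” might look like in practice, the following Python snippet rolls automated control-check results up into a per-control compliance view. The control ID, system names, and data model are assumptions made for the example, not a prescribed EA or GRC tool.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ControlCheck:
    """One automated check of a control, e.g. 'encryption at rest enabled'."""
    control_id: str      # hypothetical identifier, e.g. "DATA-ENC-01"
    system: str          # system or application the control applies to
    passed: bool
    checked_at: datetime

def compliance_report(checks: list[ControlCheck]) -> dict[str, dict]:
    """Roll individual check results up into a per-control compliance summary."""
    report: dict[str, dict] = {}
    for check in checks:
        entry = report.setdefault(
            check.control_id,
            {"systems_in_scope": 0, "systems_compliant": 0, "latest_evidence": None},
        )
        entry["systems_in_scope"] += 1
        entry["systems_compliant"] += int(check.passed)
        if entry["latest_evidence"] is None or check.checked_at > entry["latest_evidence"]:
            entry["latest_evidence"] = check.checked_at
    return report

# Example: two systems checked against the same (hypothetical) control.
checks = [
    ControlCheck("DATA-ENC-01", "crm", True, datetime.now(timezone.utc)),
    ControlCheck("DATA-ENC-01", "billing", False, datetime.now(timezone.utc)),
]
print(compliance_report(checks))
```

Fed by continuous checks rather than quarterly audits, a view like this is what lets CISOs, auditors, and regulators see the state of compliance as it stands today.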

 

5. Capability-based Planning

 

In a Harvard Business Review study of the global recessions of 1980, 1990, and 2000, only 9% of organizations were found to have flourished, outperforming their competitors by more than 10% in sales and profit growth during the recovery period. The study posited that the difference lay in how these businesses made contingency plans and prepared for various scenarios.

This agility is crucial, especially in the wake of not only a global pandemic but also a major global economic crisis. Organizations that are poised to make smart investment decisions during economic downturns won’t waste limited resources. This is what EA enables.

With EA, an organization’s existing capabilities can be mapped out with a focus on a chosen scope and validated by stakeholders and subject matter experts. Enterprise architects can then assess the strategic importance of these capabilities and inform changes to future initiatives and the technology landscape.
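As a rough sketch of how such an assessment might be captured (the scoring scale and prioritization heuristic here are assumptions for illustration, not a standard EA method), each mapped capability can be scored on strategic importance and current maturity, and the largest gaps ranked first:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """A business capability as it might appear on a capability map."""
    name: str
    strategic_importance: int  # 1 (low) to 5 (high), agreed with stakeholders
    maturity: int              # 1 (ad hoc) to 5 (optimized), validated by SMEs

def investment_priority(cap: Capability) -> int:
    """Simple heuristic: high importance plus low maturity = biggest gap to close."""
    return cap.strategic_importance * (5 - cap.maturity)

# Hypothetical capability map excerpt; names and scores are illustrative only.
capabilities = [
    Capability("Customer Onboarding", strategic_importance=5, maturity=2),
    Capability("Invoice Processing", strategic_importance=3, maturity=4),
    Capability("Demand Forecasting", strategic_importance=4, maturity=1),
]

for cap in sorted(capabilities, key=investment_priority, reverse=True):
    print(f"{cap.name}: priority score {investment_priority(cap)}")
```

In practice the scores would come from the stakeholder and subject-matter-expert validation described above, not from the architect alone, and the ranking would feed scenario and contingency planning rather than dictate it.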

 

6. Focus on Customer Experience (CX)

 

The increased focus on customer experience across industries is fueled by consumers being empowered through online platforms. Organizations are paying more attention to delivering better, more personalized experiences that meet individual expectations and needs. This, in turn, adds to the rising complexity of an organization’s IT landscape.

Businesses need to leverage architecture tools to map and understand customer dynamics across multiple channels and adapt to changes quickly. Mature EA can help organizations identify redundant systems that negatively impact customer experience, as well as areas where data can be better shared across systems for a more seamless customer journey.

Enterprise architects can design systems to improve the efficiency and effectiveness of customer-facing processes, enable the development of innovative front-end applications and services, and improve the scalability and reliability of IT systems.

 

7. Emerging Tech

 

Of course, emerging technologies will continue to shape enterprise architecture in the coming years. Plenty of new technologies and tools will affect how EA evolves, but the most pressing are as follows:

  • Machine learning: Much like AI, machine learning will become more prevalent as a means of automating EA tasks such as pattern identification, prediction, and generating actionable recommendations.
  • Cybersecurity: Emerging technologies in the realm of cybersecurity, such as biometric authentication and decentralized identity and access management (IAM), will require enterprise architects to adapt their designs to ensure stronger security.
  • Mobile Computing: The increased proliferation of mobile devices and applications in this world of remote and hybrid work will require enterprise architects to design IT systems that are mobile-forward, accessible from anywhere, and still secure.
  • Internet of Things (IoT): As vast amounts of data continue to be generated by IoT, enterprise architects must design systems that are capable of collecting, storing, and analysing the data to support business functions and achieve organizational goals.
  • Extended Reality (XR): This includes augmented reality (AR), virtual reality (VR), and mixed reality (MR). These technologies will enable enterprise architects to design and visualize IT systems and processes in new and innovative ways.
  • Blockchain: The use of blockchain technology as a way to create secure, tamper-proof records of transactions and data will be valuable to EA practices.

How Business Leaders Can Leverage AI’s Economic Power

AI is transforming the global economy, disrupting traditional industries, and creating new opportunities for growth. The potential for economic advancement is undeniable as businesses adopt AI to boost productivity, enhance decision-making, and meet evolving consumer demands.  

What tools and knowledge do business leaders need to navigate the complex landscape of AI and capitalize on its potential? In our exclusive interview, Mohamed Roushdy, Digital Transformation and Fintech Advisor at IFC – International Finance Corporation, United Arab Emirates, shares valuable insights and answers.

 
Mohamed Roushdy is an experienced information technology professional with a track record spanning over 25 years in various industries and business segments such as financial services, real estate, and advisory services. He is currently the Digital Transformation and Fintech Advisor at the IFC – International Finance Corporation, United Arab Emirates, and the Founder of FinTech Bazaar. His previous positions as CIO and CXO advisor have laid a strong foundation for his ability to lead teams to successfully complete major business and digital transformation programs.
 

What is the overall economic impact of AI? Which sectors are benefiting from AI the most?

AI will impact the economy in four ways. The first is efficiency improvements: we’re going to see a lot of use cases for AI eliminating repetitive work, with efficiency driven by AI, machine learning, and deep learning. The second is risk mitigation; AI has been used for years to detect fraud. The third is revenue growth: how much will AI contribute to GDP? New revenue streams will come because of AI. This ties to the fourth element, which is customer experience. These are the four pillars when we talk about AI in any industry.

The most impacted industry would be healthcare. Going back to COVID-19, AI and big data brought the vaccine to life within a few months. Other sectors include financial services, where there are many use cases already, followed by retail and e-commerce, agriculture, and transportation.

 

Where do you think AI is on the Gartner Hype Cycle?

I agree it’s now at the top of the hype cycle, but AI development started a long time ago. Within two to five years, we will move from the hype to the mainstream and actual use cases. We are coming out of the hype, and I would say there is a triangle of technologies: AI, blockchain, and IoT. AI is the one that is going to deliver great value and move past the hype.

 

Going back to efficiency — what are your thoughts on AI’s impact on the workforce?

Most repetitive work will go. This will displace a great number of employees; studies suggest that by 2030, AI will displace 85 million workers. That number looks scary, right? But AI will also generate 97 million jobs. So the net is positive, not negative.

Seeing what’s happening here, we need to ensure organizations and governments are reskilling and upskilling the workforce.  

The same workforce that does repetitive work in your company knows your business well. You cannot just let them go.  

You have to take the same people, reskill them, and put them elsewhere in your organization. AI will affect the workforce but it’s a positive impact. Upskilling and training are very much required.  

 

It’s difficult for people in the workforce to imagine this new way of working. What’s the role of a business leader in this transition?

Training and awareness are important to open new avenues for them, and this extends to governments as well. Business leaders have a social responsibility to get this segment trained and moving ahead. As I said, the number of jobs generated by AI and machine learning will be higher than displaced jobs. But we have to make sure that employees are being trained.  

I remember one professional I met at a forum saying, “You have to learn how to learn.”  

This is really the time we must learn how to learn. 

Nowadays, people don’t only learn by going to school or taking a course. There are many ways you can learn online. There need to be resources and guidance for employees around AI and machine learning. This is very important for the new era.

 

What are your thoughts on the relationship between data regulation and innovation?

A very important concern about AI and machine learning is data privacy.  

We are seeing great moves from countries and organizations to regulate AI use and provide guidelines, for example the EU AI Act. So, regulatory frameworks and privacy laws are in place to help protect data privacy. But at the same time, we’re trying to make sure this won’t stop innovation. If you get hit by regulation and stop innovating, you won’t see the benefits. That’s why we’re trying to see how we can balance regulation, convenience, and the ability to innovate.

We must also focus on principle-based regulation, which is what most countries are going for. One of the most important things here is AI ethics: there are guidelines requiring that your AI platform, or whatever you’re developing, has AI ethics in place. What’s important is that innovation shouldn’t stop.

 

How would you advise business leaders to handle risk mitigation around AI?

If there is no risk, there is no reward. It’s important to accept that there is some kind of risk if you want to integrate AI and move forward. But it has to be a calculated risk, and you have to know how to mitigate those risks if they materialize. Without risk, you cannot innovate. As I said, any guideline should be principle-based and help foster innovation. The developments we see today in AI and machine learning happened because people took calculated risks. Yes, there is a dark side, like deepfakes. The bad and the good will always be there. But that doesn’t mean you have to stop. I would say today’s technology brings more good than bad.

 

Can you share an experience of maximizing profit using AI?

I come from banking, and AI enables customer personalization.  

You can know what the customer needs today and in the future. You’ll be able to do things more effectively and generate more revenue because you are doing things differently. The data you have provides more insights; maybe your customer does not need financial services abroad, or maybe he needs another product your competitor has. So, because of AI, machine learning, and data, you can start doing new things today, such as bringing a new product to market and potentially bringing more revenue to your organization. AI also creates seamless customer experiences.

When you bring IoT, blockchain, and big data together with AI, the value becomes exponential.  

This combination enables faster transactions, and you’re able to make more deals and make the customer happy with seamless or frictionless experiences. 

 

What is your advice to business leaders on implementing AI successfully?

Implementation is easier said than done. It also needs cooperation from stakeholders. My advice would be to know exactly what you want to achieve and look for a good use case.  

Start with a small use case, bring in all the stakeholders within your organization, and educate them to get C-level buy-in. Show them the value AI will bring to your organization and the market. The issue with AI is that algorithm development takes time, and so do value and results; you may not see results even six months after implementation. You’re tuning the algorithm, getting the right data, and avoiding AI bias as well.

There are many resources needed to train a model: budget, people, and more. At the same time, if you don’t have the expertise, try to find good partners to help you with use cases. As soon as you get results and management sees something happening, you can scale up. You have to go into the journey with a good plan and convince people with results. I think that’s very important.

 

*The interview answers have been edited for length and clarity.

AI Governance: Balancing Competitiveness with Compliance

The AI landscape is innovating at full speed. From the recent release of Google’s Bard and OpenAI’s ChatGPT Enterprise to growing implementation of AI tools for business processes, the struggle to regulate AI continues.

In Europe, policymakers are scrambling to agree on rules to govern AI – the first regional bloc to attempt a significant step towards regulating this technology. However, the challenge is enormous considering the wide range of systems that artificial intelligence encapsulates and its rapidly evolving nature.

While regulators attempt to ensure that the development of this technology improves lives without threatening rights or safety, businesses are scrambling to maintain competitiveness and compliance in the same breath.

We recently spoke to two experts on AI governance, Gregor Strojin and Aleksandr Tiulkanov, about the latest developments in AI regulation, Europe’s role in leading this charge, and how business leaders can manage AI compliance and risks within their organizations.

 
Gregor Strojin is the Vice Chair of the Committee on Artificial Intelligence at the Council of Europe and former chair of the Ad Hoc Committee on AI. He is a policy expert whose various roles have included senior adviser to the President of the Supreme Court of Slovenia and State Secretary of the Ministry of Justice.
Aleksandr Tiulkanov is an AI, data, and digital policy counsel with 18 years of experience in business and law. He has advised organizations on matters relating to privacy and compliance for digital products and in the field of AI.
 

Europe Trailblazing AI Governance

 

Why does AI need to be regulated?

Aleksandr: Artificial intelligence is a technology that we see almost everywhere nowadays. It is comparable to electricity in the past, but more influential. Crucially, it’s not always neutral in how it affects society. There are instances where technologies based on artificial intelligence affect decisions which, in turn, affect people’s lives. In cases where there is a high risk of impact, we should take care to ensure that no significant harm arises.

Gregor: Regulations are part of how we manage societies in general. When it comes to technology that is as transformative as AI, we are already faced with consequences both positive and negative. When there is a negative impact, there is a responsibility either by designers, producers, or by the state to mitigate and minimize those negative effects on society or individuals. We’ve seen the same being done with other technologies in the past.

 

Former President Barack Obama said that the AI revolution goes further and has more impact than social media has. Do you agree?

Gregor: Definitely. Even social media has employed certain AI tools and algorithms that grab our attention and direct our behavior as consumers, voters, and schoolmates; that has completely changed the psychology of individuals and the masses. AI is an umbrella term that encompasses over a thousand other types of uses.

AI will change not only our psychology but also logistics and how we approach problem solving in different domains.

Aleksandr: The change is gradual. As Gregor said, we already see it in social media, for example in content moderation, which is largely based on language and machine learning models. AI is driving what we see on platforms as well as what we can write and even share. To some extent, this means that some private actors are influencing freedom of speech.

 

Let’s talk about the role of Europe in AI compliance regulations. Can you explain why Europe is a trailblazer here?

Gregor: Europe has a special position geopolitically due to its history. It’s not one country. It’s a combination of countries joined by different international or supranational organizations, such as the European Union and the Council of Europe, to which individual countries have given parts of their sovereignty. This is a huge difference compared to the United States or China, which are completely sovereign in their dealings.

When it comes to the European Union in particular, many types of behaviors are regulated by harmonizing instruments of the EU to create a uniform single market and provide a common level of quality in terms of safety and security to all citizens, so we don’t have different rules in Slovenia, Germany, France, or Spain. Instead, this is one market of over 500 million people.

 

Gregor, can you give us a brief overview of the latest developments in AI regulation and compliance in the EU?

Gregor: There are two binding legal instruments that are in the final phases of development. The most crucial one is from the European Union, the AI Act. It is directed at the market itself and is concerned with how AI is designed, developed, and applied by developers and users. The AI Act addresses a large part of the ecosystem, but it does not address the people who are affected by AI. Here is where the second instrument comes in, the Convention on AI that is being developed by the Council of Europe.

Another thing to mention is that the EU’s AI Act only applies to EU members and is being negotiated by the 27 member states. The Council of Europe’s instrument is being negotiated by 47 member states as well as observer states and non-member states such as the United States, Canada, Japan, Mexico, and Israel. The latter has a more global scope.

In this way, I see the EU’s AI Act as a possible mode of implementation of the rules set by the conventions of the Council of Europe. This is still partially theoretical, but it’s likely we’ll see both instruments finalized in the first half of next year. Of course, there will be a transitory period before they come into effect. This is already a good indication of how businesses must orient themselves to ensure compliance in due time.

 

Should what the EU is doing be a blueprint for the rest of the world?

Gregor: Yes, if they choose to. I think many in Europe will acknowledge that we have different ways of approaching problems and freedom of will, but if you want to do business in Europe, you have to play by Europe’s rules. This is an element of the proposed AI Act, as it was of the General Data Protection Regulation (GDPR) from the past decade, which relies on the Brussels effect: the rules applied by Europe for Europe also apply to companies outside Europe that do business here, even if they have no physical presence here. So, if producers of AI from China or the United States wish to sell their technology in Europe, they have to comply with European standards.

 

What are the business implications of the European approach?

Aleksandr: The European approach harmonizes the rules for a single market. It’s beneficial for businesses, as they won’t have to adapt to each country’s local market. I’d say it’s a win-win for businesses approaching the European continent. We’ve already seen this happen with the GDPR: as long as companies have a European presence, they adopt the European policy globally. This could happen with AI regulations as well.

If you look at the regulatory landscape, we can see some regulatory ideas coming up in North America and other continents. In China, there are some regulatory propositions. But I would say that the European approach is the most comprehensive. Chances are it will be taken as a basis by many companies.

 

Balancing Innovation and Compliance

 

What do you say to concerns that this is just another set of regulations to comply with in a landscape that is constantly innovating at speed?

Gregor: I’ve been working with technology for more than 20 years. I also have experience with regulated analog technology, like building construction.

What we’re dealing with here is not just regulation for regulation’s sake; it benefits corporations in the long run because it disperses the risks and consequences of their liabilities and creates a more predictable environment.

There are many elements of regulation that have been proposed for AI that have been agreed to by different stakeholders in the process. We must consider that the industry was involved in preparing both these regulatory instruments I’ve mentioned.

Some issues, like data governance, are already regulated. There are, of course, disagreements on elements like transparency, because regulation may affect business advantages. On the other hand, technology does not allow for everything. There are still open questions about what needs to be done to ensure higher quality in the development process and so mitigate risk.

 

So there needs to be a balance between regulation, competitiveness, and the speed of innovation. How can we be assured that AI regulation does not harm competitiveness in business?

Gregor: The regulation proposed by the European Commission is just one element in the basket of proposals of the so-called Digital Agenda. There are, of course, some other proposals on content moderation that came into existence just recently and are binding. But there are also several instruments that address the promotion and development of AI systems, both in terms of subsidies for companies and individuals to develop digital skills and in terms of creating a comprehensive and stable environment for IT technology in Europe. Billions are being poured into subsidies for companies and innovators. There is a big carrot; the stick is in preparation, but it is not here yet.

Aleksandr: I must also underline that there are things in place to facilitate the upcoming EU regulation, such as regulatory sandboxes. You may have seen an example of this in Spain. Businesses will be able to test their hypotheses about how they want to operate AI systems that could potentially be harmful.

It’s important to understand that the scope of the regulation is not overly extensive. I would say it largely covers only genuinely high-risk systems, and some lower-risk systems only where it’s important. For example, there are transparency obligations for lower-risk systems when it comes to deepfakes. Then there are meaningful rules for high-risk systems that affect people’s lives, like government aid or the use of AI in law enforcement or hiring.

It’s important to have proper data governance and risk management in place for systems that affect people on a massive scale.

Also, if you look at mature organizations that already have this technology in the market, they are making sure that the data used to train their AI systems is good enough. They do it themselves because they don’t want to get in trouble with their clients. Regulations are not so unusual.

 

In that case, will innovation be faster than the regulations can keep up with?

Gregor: That’s a pertinent question when it comes to technology. It is imprudent, from the position of a policymaker, to try to regulate future developments as that would impede innovation.

I don’t think there’s any impediment to innovation happening at this moment. Perhaps you could categorize tying subsidies to compliance with ethical recommendations as such, but it’s not really an impediment.

In the future, there will be limits on AI innovation, to the same degree as in biotechnology, for example, where there are clear limits on what is allowed and under what conditions in order to prevent harm. That is narrowly defined. The general purpose, of course, is to increase the quality of these products and to create a safe environment and a predictable playing field for customers in the market.

 

Business Focus: AI-Risk Management

 

What’s coming up next on AI governance that business leaders should consider?

Gregor: At this point, what’s coming next in policy development is the pushback from those who do not want such legislation. It’s something we’ve already seen this year. Many think we had an AI revolution only this year. No. It’s a technology that’s been around for a few years, and there have been calls to regulate AI on the basis of existential threats.

If we take those calls seriously, we must completely backtrack and change the direction of what is already being developed.

But I do think if we follow through with what has been proposed to ensure the safety and security of this technology, we will also solve the problem of a so-called super intelligence taking over humanity. First, we need to ensure correct application of existing rules to human players.

 

With all this in mind, what advice do you have for business leaders when it comes to regulations and compliance in the field of AI? What can they start with tomorrow?

Aleksandr: Technical standards will be the main thing. I would advise all those developing this technology to take part in the technical committees of their national standards-setting bodies, which can then feed into work on standards at the European level.

Take into account your practical concerns and considerations so that these technical standards can address business concerns in terms of product development. It is important to follow and participate in this work on regulation development for the AI ecosystem.

Another thing is to consider risk management frameworks that address AI-specific risks. The NIST and ForHumanity risk management frameworks are practical tools for organizations to control how they operate and deploy AI systems in a safe and efficient manner. Business leaders can also begin to appoint people who will be responsible for setting up these processes.

There will be a transitional period, as there was with the GDPR. If companies can demonstrate that they are compliant with European standards that are still under development, they will automatically be considered compliant with the EU AI Act. But this is ongoing work.

Start considering broader risk management frameworks as a first step to get the ball rolling in organizations.

Gregor: Technical development skills alone are not sufficient to build a competitive and scalable organization, especially as not only Europe but other regions are preparing to introduce regulatory measures. My advice is similar to Aleksandr’s: build up your capacities for risk and compliance management. I think it will pay off quite soon.