Perspective

Mergers and acquisitions (M&A) and their impact on data

Surajit Mitra

Vice President-Technology

Published: May 5, 2023

The merging of banks and financial institutions poses significant challenges to their information technology (IT) infrastructure and applications. It is therefore crucial to address the difficulties arising from mergers between large financial institutions with overlapping business areas and to identify effective strategies to mitigate them. The focus must be on understanding how these enterprises manage the data underlying various applications during rationalization, and on the valuable insights that consolidation can offer. However, it is not just about data management. These events have a far-reaching impact on internal applications, such as enterprise resource planning (ERP), customer relationship management (CRM), and human resource (HR) systems, as well as on the external-facing applications that facilitate the daily business operations of the organization's divisions.

During a merger and acquisition (M&A) at banks and financial institutions, internal applications must transition users to a unified platform shared by both enterprises. This process may require substantial customization to preserve the functionality of each system, and achieving a uniform landscape for all stakeholders may take months or even years. Business applications, meanwhile, may immediately impact external stakeholders and therefore demand prompt attention. This article explores the integration strategy for banks and financial institutions, focusing specifically on the impact of M&A on the data owned by the organizations involved.

Taking inventory of all applications

An essential first step is to inventory all applications, an activity that may have been partially completed before the merger deal. Post-M&A, however, the exercise needs to be extended by appointing a central committee that collects candid feedback on all applications: pain points, challenges, metrics on application behavior, quality of service, and potential synergies that could lead to a better target state than either entity had before the merger. The participation of enterprise architecture (EA) groups is of prime importance here. Gathering correct data for applications across the enterprise, bucketed by similar and overlapping functionality, goes a long way toward making the right EA decisions.
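For illustration, the minimal Python sketch below shows one way such an inventory might be structured and bucketed by business capability so that overlaps between the two entities surface quickly. The field names, metrics, and sample applications are hypothetical, not a standard.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class AppRecord:
    """One inventory entry; the fields are illustrative."""
    name: str
    entity: str                    # which pre-merger organization owns it
    capability: str                # e.g., "payments", "risk", "crm"
    pain_points: list = field(default_factory=list)
    uptime_pct: float = 0.0        # quality-of-service metric
    monthly_active_users: int = 0  # application-behavior metric

def bucket_by_capability(inventory: list[AppRecord]) -> dict[str, list[AppRecord]]:
    """Group applications from both entities under the same business
    capability so the EA committee can spot overlapping functionality."""
    buckets: dict[str, list[AppRecord]] = defaultdict(list)
    for app in inventory:
        buckets[app.capability].append(app)
    return buckets

inventory = [
    AppRecord("PayFlow", "BankA", "payments", ["batch delays"], 99.2, 1200),
    AppRecord("TreasuryPay", "BankB", "payments", ["legacy UI"], 99.7, 800),
    AppRecord("RiskView", "BankA", "risk", [], 99.9, 300),
]

for capability, apps in bucket_by_capability(inventory).items():
    entities = {a.entity for a in apps}
    note = "overlaps across entities" if len(entities) > 1 else "single entity"
    print(capability, "->", note, [a.name for a in apps])
```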

Shallow integration of applications

A series of activities, starting with new logos and re-branding of websites, application screens, and reports, may need to begin immediately after the deal. The next wave may involve a shallow integration of applications whose user interfaces (browser and mobile) are exposed to customers and other stakeholders in the supply chain. Consolidation and integration of some downstream applications and reporting may become necessary for internal operations. However, a deep, intrinsic integration may not be needed across all applications, or all parts thereof, and may not be achievable for years. Such integrations may require architectural and tech-stack changes, for which the return on investment (ROI) needs to be carefully determined.

For shallow integration, the backend (server side) must be separable into a three-tier architecture. Banks have mainframes that are hard to change. However, there is almost always a façade before or after the data is processed in the mainframe. For example, a mainframe payment system in treasury operations may be fed normalized data by two distributed applications in the merged entities. Similarly, a browser-based application may interact with a Java application underneath rather than with the mainframe directly.
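As a minimal sketch of such a façade, assuming hypothetical record layouts from the two entities, the normalization step ahead of the mainframe feed might look like this:

```python
def normalize_bank_a(txn: dict) -> dict:
    """Assumed layout: BankA's app emits amounts in cents under 'acct'."""
    return {"account_id": txn["acct"],
            "amount": txn["amount_cents"] / 100,
            "currency": txn.get("ccy", "USD")}

def normalize_bank_b(txn: dict) -> dict:
    """Assumed layout: BankB uses decimal amounts and different field names."""
    return {"account_id": txn["account"],
            "amount": float(txn["amt"]),
            "currency": txn["currency"]}

def feed_mainframe(source: str, txn: dict) -> dict:
    """Facade: both entities' payment feeds converge on one record layout
    before the mainframe payment system ever sees them."""
    normalizer = {"bank_a": normalize_bank_a, "bank_b": normalize_bank_b}[source]
    record = normalizer(txn)
    # In production this record would be written to the mainframe's input
    # queue or batch file; here we simply return it.
    return record
```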

Shallow integration involves modifying the processing layer in application programming interfaces (APIs) and representational state transfer (REST) services to lend a uniform view to the customer. Some of these changes may deeply impact underlying transactional applications and data models.
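A minimal sketch of such a processing-layer change, with hypothetical backends, account ranges, and response shapes, might route each request to the owning legacy system and map its response into one uniform, customer-facing shape:

```python
# Hypothetical routing table: which legacy backend owns each account range.
LEGACY_BACKEND = {"1": "bank_a_core", "2": "bank_b_core"}

def call_legacy(backend: str, account_id: str) -> dict:
    """Stand-in for the real service call; the response shapes are illustrative."""
    return {"bal": 105000} if backend == "bank_a_core" else {"balance": "1050.00"}

def get_account_view(account_id: str) -> dict:
    """Processing-layer change only: route to the owning legacy system,
    then map its response into one uniform shape for the customer."""
    backend = LEGACY_BACKEND[account_id[0]]
    raw = call_legacy(backend, account_id)
    if backend == "bank_a_core":
        return {"accountId": account_id, "balance": raw["bal"] / 100}
    return {"accountId": account_id, "balance": float(raw["balance"])}

print(get_account_view("1000123"))  # same shape regardless of backend
```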

Transactional (OLTP) applications and underlying data

Transactional applications at banks are hard to change and are prone to significant regression, especially legacy monoliths. The ROI of changing online transaction processing (OLTP) applications is meager. Hence, the M&A committee may decide on a shallow integration driven by mapping tables and limited code changes, leaving most of the underlying code in its previous state. These are not band-aid fixes; this is more of a service-oriented architecture (SOA) integration approach. Applications built on microservices may accomplish a more intrinsic integration, but it is rare to find two applications with similar functionality that have historically evolved into microservices with the same domains and a bounded context for each domain.
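A mapping-table-driven translation at the boundary might look like the sketch below; the product codes and entity names are illustrative, and in practice the crosswalk would live in a database table rather than in code.

```python
# Illustrative crosswalk: BankB product codes mapped onto BankA's code set.
PRODUCT_CODE_MAP = {
    "SAV-STD": "0101",   # BankB standard savings -> BankA code
    "CHK-PRM": "0203",   # BankB premium checking -> BankA code
}

def translate_product_code(entity: str, code: str) -> str:
    """Limited code change: translate at the boundary and leave the
    underlying OLTP application untouched."""
    if entity == "bank_b":
        return PRODUCT_CODE_MAP.get(code, code)  # pass unmapped codes through
    return code
```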

Since OLTP applications, data models, and data go hand-in-hand, most banks would refrain from investing heavily in transforming these systems intrinsically. Instead, the merger EA committee may want to address the low-hanging fruit of integrating data after its transactional lifecycle, i.e., in the operational and analytical data stores.

Data stores and BigData pipelines

It is easier to merge and integrate operational and analytical data stores and BigData pipelines primarily because, compared to OLTP systems, the coupling between components is looser and per-operation latency requirements are not on the order of milliseconds. For the same reason, such applications (essentially extract, transform, and load (ETL) and BigData/machine learning (ML) workloads) and their data are much more amenable to moving to the cloud.

So, let's assume that after the merger, one entity has an operational data store (ODS) or warehouse with risk and finance data on an on-prem Oracle database, while the other entity has recently migrated its data to a cloud data warehouse (DWH) on Snowflake, Redshift, BigQuery, or Azure Synapse. It would make sense to initiate a project for the first entity to migrate its data to the same instance of the DWH using the tools and strategies of the second entity.
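A minimal sketch of such a migration is shown below, assuming Snowflake as the target and the python-oracledb and snowflake-connector-python packages; the connection details, table, and stage names are placeholders, and a real migration would use bulk extract tooling rather than row-by-row CSV export.

```python
import csv
import oracledb                 # pip install oracledb
import snowflake.connector      # pip install snowflake-connector-python

# 1. Extract from the on-prem Oracle ODS (credentials are placeholders).
src = oracledb.connect(user="ods_reader", password="***",
                       dsn="onprem-db:1521/ODS")
with src.cursor() as cur, open("risk_positions.csv", "w", newline="") as f:
    cur.execute("SELECT position_id, book, exposure, as_of_date "
                "FROM risk_positions")
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])  # header row
    writer.writerows(cur)

# 2. Load into the second entity's existing Snowflake DWH via a table stage.
dwh = snowflake.connector.connect(account="merged-entity", user="loader",
                                  password="***", warehouse="LOAD_WH",
                                  database="RISK", schema="CONFORMED")
cur = dwh.cursor()
cur.execute("PUT file://risk_positions.csv @%risk_positions")
cur.execute("COPY INTO risk_positions "
            "FILE_FORMAT=(TYPE=CSV SKIP_HEADER=1)")
```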

Different parts of the data or applications are indeed correlated or coupled. However, the code coupling (e.g., across various ETL jobs) is far less than in OLTP applications, making this a low-hanging fruit. The benefits are substantial, to list a few: consolidated data in one place to drive insights across the entire new enterprise, new reporting and business intelligence (BI) standards, and easier adoption of a single tech stack that reduces license costs and upskilling requirements.

The entities in the M&A may have adopted two different cloud providers. While the new company may choose to remain multi-cloud, adopting one of the clouds for operational and analytical data is not a monumental task. The new entity may adopt one of the instances based on factors such as modernity of the stack, data volumes, cost, performance, and customer satisfaction.
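One simple, illustrative way to structure that decision is a weighted scoring matrix; the weights and scores below are hypothetical inputs the EA committee would supply.

```python
# Illustrative weighted scoring of the two entities' cloud data platforms.
WEIGHTS = {"modern_stack": 0.25, "data_volume_fit": 0.20, "cost": 0.25,
           "performance": 0.15, "customer_satisfaction": 0.15}

candidates = {  # scores on a 1-5 scale, hypothetical
    "entity_a_cloud": {"modern_stack": 3, "data_volume_fit": 4, "cost": 4,
                       "performance": 3, "customer_satisfaction": 4},
    "entity_b_cloud": {"modern_stack": 5, "data_volume_fit": 3, "cost": 3,
                       "performance": 4, "customer_satisfaction": 4},
}

for name, scores in candidates.items():
    total = sum(WEIGHTS[k] * v for k, v in scores.items())
    print(f"{name}: {total:.2f}")
```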

Analytical data migrations: challenges and mitigation

The challenge of merging two analytical systems and data stores from two enterprises, and its solution, is fundamentally no different from merging two disparate sources within the same enterprise. This issue is common and may already have been resolved within one of the data lake or warehouse implementations. It is essential to create a data zone (or stage) of conformed dimensions, using mapping tables and lookups that standardize the data. Without this step, the resulting analytics may yield false or misleading results.
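The sketch below illustrates conforming a customer dimension from the two entities using a key crosswalk and a code lookup; it assumes pandas, and the keys, codes, and segment values are hypothetical.

```python
import pandas as pd

# Hypothetical customer dimensions from the two entities, with conflicting keys.
dim_a = pd.DataFrame({"cust_id": ["A100", "A101"], "segment": ["Retail", "SME"]})
dim_b = pd.DataFrame({"client_no": [9001, 9002], "seg_cd": ["R", "S"]})

# Mapping/lookup tables that conform both sources to one standard.
segment_lookup = {"R": "Retail", "S": "SME"}
key_crosswalk = pd.DataFrame({"client_no": [9001, 9002],
                              "conformed_id": ["C-0001", "C-0002"]})
dim_a["conformed_id"] = dim_a["cust_id"].map({"A100": "C-0001",
                                              "A101": "C-0003"})

conformed_b = (dim_b.merge(key_crosswalk, on="client_no")
                    .assign(segment=lambda d: d["seg_cd"].map(segment_lookup))
                    [["conformed_id", "segment"]])
conformed_a = dim_a[["conformed_id", "segment"]]

# One conformed customer dimension; facts from either entity now join to it.
conformed_dim = (pd.concat([conformed_a, conformed_b])
                   .drop_duplicates("conformed_id"))
print(conformed_dim)
```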

The EA committee needs to carry out an exercise of rationalizing the data models, the tools, the mechanisms to share and distribute data, and, importantly, all reports shared internally and externally. The usual strategy is to create a sandbox on the cloud for a parallel run of the ETL tools, the BigData/ML workloads (mostly batch-driven in banks), and the new reports, such that a production parallel may be conducted before switching to the new implementation. Creating a data store with a truly unified data model may be accomplished in a phased approach, split by functional areas and user groups.
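A production parallel ultimately comes down to reconciling report outputs. The following sketch, again assuming pandas and with hypothetical file and column names, flags rows that are missing on either side or whose amounts diverge beyond a tolerance:

```python
import pandas as pd

def reconcile(legacy_csv: str, new_csv: str, key: str, amount_col: str,
              tolerance: float = 0.01) -> pd.DataFrame:
    """Production-parallel check: compare a legacy report against the same
    report from the new cloud pipeline before cutover. The key and amount
    columns are parameters because report layouts differ by bank."""
    legacy = pd.read_csv(legacy_csv).set_index(key)
    new = pd.read_csv(new_csv).set_index(key)
    joined = legacy[[amount_col]].join(new[[amount_col]], how="outer",
                                       lsuffix="_legacy", rsuffix="_new")
    diff = (joined[f"{amount_col}_legacy"] - joined[f"{amount_col}_new"]).abs()
    # NaN diffs mark rows present in only one run; large diffs mark mismatches.
    return joined[diff.isna() | (diff > tolerance)]

# Example: breaks = reconcile("legacy_pnl.csv", "cloud_pnl.csv", "book", "pnl")
```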

How can Virtusa help?

Virtusa's data practice boasts extensive experience in creating data lakes and warehouses, with a particular focus on cloud-based solutions. With our world-class centers of excellence (CoEs) and a team of over 1,000 certified cloud and data specialists, we have the expertise and resources to accelerate your journey toward a unified data architecture and strategy for your merged enterprise.

We collaborate closely with our clients' business and enterprise architecture teams to streamline the merging of analytical data, leveraging our extensive experience to rationalize and optimize data practices. Our approach combines cutting-edge technologies like artificial intelligence and machine learning with industry best practices to create a highly scalable and secure data infrastructure.

Our team of experts works with businesses like yours to understand your unique business needs, tailoring our solutions to your specific requirements. Partner with Virtusa to benefit from our proven track record of delivering successful data projects for some of the world's leading enterprises.


Author

Surajit Mitra

Vice President-Technology

Surajit Mitra, Virtusa's VP of technology, leads the data practice for banking and financial services clients. Focusing on data and cloud adoption, he has spearheaded modernization and digital transformation programs at several global corporations. He has extensive experience with data warehouses, lakes, and a variety of services on AWS, Azure, and GCP. Surajit is an avid night and weekend coder and a part-time payments and fintech blogger.
