
Driving digital transformation efforts in your organization can be pretty difficult. Still, it’s necessary if you want to stay competitive and gain access to new capabilities and, potentially, new business models. In this article, we highlight the root causes of the problems companies face and tell you how to avoid them – based on our software development experience in the telecom industry.

Untangling legacy: A journey into the past

Let’s start with a look at the history of software development. It’ll be a pretty substantial analysis, so get ready for a long read.

In the late 1990s and early 2000s, the IT industry experienced unprecedented growth. Hardware advancements were accelerating rapidly, with each passing year bringing faster and more powerful technologies. 

Simultaneously, new programming languages and libraries emerged at a staggering pace, opening up a world of possibilities for implementing innovative systems and applications.

Programmers knew how to implement the applications that supported particular processes. However, these applications soon became tightly intertwined and interconnected. At that time, the options for connecting and integrating them at the technological level were limited and, as we would soon discover, far from ideal. 

This article delves into the IT world of the early 2000s, exploring the methodologies, architectures, and paradigms that ruled the industry at the time.

The millennial programming breakthrough


In the early 90s, programmers primarily used Fortran, COBOL, C, and C++. Some of you may still have an old Fortran system running somewhere in the server room that can somehow never be rewritten. Back then, people still used text editors for programming, and learning to code required… a book.

In the early 2000s, the programming landscape changed completely. Java, PHP, and C# rose to prominence, along with IDEs (Integrated Development Environments) and the wealth of resources offered by the World Wide Web. Software development was changed forever, with the object-oriented programming paradigm becoming the de facto standard. Around the same time VMware emerged, mature versions of PostgreSQL and MySQL were already available, and IDEs such as NetBeans, Visual Studio, and, later, Eclipse were widely used.

The change was tremendous. It allowed for faster software delivery and improved quality, enabling mid-sized companies to afford tailored software solutions. In turn, that software allowed them to gain an edge over their competitors. Given the huge success, these companies decided to implement more software and integrate it with their existing systems.

With all those breakthroughs, there was one thing that did not change. Applications were still built as monoliths: large, siloed, independent, and self-sufficient. As time passed and features were added, they turned out to be hard to scale and maintain. We’ll explore that in detail later.

Changes in the IT industry during the 2000s


More and more applications were implemented to support more and more processes. And these processes became dependent on each other.

It’s noteworthy that by the mid-2000s, the exponential growth of hardware capabilities had largely stopped. Previously, processor speeds and RAM capacity would roughly double every two years, but they reached a plateau around 2004, with clock speeds peaking at approximately 3.8 GHz. Afterward, progress in performance came mostly from more advanced hardware architectures, faster bus speeds and, simply, more cores. However, not all applications could utilize more than one core at a time, and the capped single-core speed caused significant issues.

In the year 2000, there were around 400 million Internet users. By 2005, that number had more than doubled, reaching 1 billion, and by 2010 it was approaching 2 billion. The number of users of banking applications, news outlets, search engines, corporate websites, CRM, CMS, self-care, sales, and many other applications also rose.

In a way, all the applications of that time fell victim to their own success. The number of users continued to grow exponentially, and the hardware advancements slowed down. Applications were not designed for scalability, and the number of users and processes they had to handle soon exceeded the hardware’s capabilities.

The software and solutions had to undergo fundamental changes – a paradigm shift was needed. Unfortunately, it would arrive only about 10 years later, in the form of what we now know as the microservices architecture.

The Agile Manifesto

The Manifesto for Agile Software Development was a turning point in software development. While non-waterfall methodologies, such as Scrum, had existed before, it was the clear and relatable principles outlined in the manifesto that propelled its widespread adoption. 

The primary beneficiaries of the new development paradigm were startups and small companies. For various reasons, large corporations, responsible for the software discussed in this article, were reluctant to embrace Agile practices and instead maintained their reliance on the waterfall methodology.

It would take another 10 to 15 years for Agile to spread more widely to large corporations. Read more about the viability of such an approach in another article on the Pretius blog: Large scale Agile – is agile software development always the best option for big companies?

Application integration standards in the mid-2000s


At that time, there were two predominant patterns of application integration: the database and the Enterprise Service Bus.

Database-centric integration

Now, in the 2020s, the paradigm is to hide the database behind an API layer, but that wasn’t the case in the 2000s. So, how is a database an integration method? Multiple database instances, Oracle in particular, were able to communicate with each other via a dblink, which, from the point of view of a programmer, made all the tables, views, and procedures available as if they were in a single DB instance.

It was very simple to configure and very convenient for both programmers and businesses. All the systems had access to data from all other systems without additional cost! Sounds too good to be true!
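
To illustrate the programmer’s perspective, here’s a minimal, hypothetical sketch of such a cross-database join as seen from a Java application – the connection details, table names, and the crm_link dblink are made up for illustration, and the Oracle JDBC driver is assumed to be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DbLinkExample {

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for the "local" billing database.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//billing-db:1521/BILLING", "app_user", "secret");
             Statement stmt = conn.createStatement();
             // The customers table lives in a different database (the CRM), but thanks to
             // the crm_link dblink it can be joined as if it were local.
             ResultSet rs = stmt.executeQuery(
                     "SELECT c.customer_name, o.order_total " +
                     "FROM orders o JOIN customers@crm_link c ON c.customer_id = o.customer_id")) {
            while (rs.next()) {
                System.out.println(rs.getString("customer_name") + " -> " + rs.getDouble("order_total"));
            }
        }
    }
}
```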

In truth, database integration, especially if you’re not using modern data integration tools, is, in my opinion, the worst, most troublesome, unmaintainable, and non-debuggable method of integration.

First of all, let’s list the issues of implementing business logic on the database layer, taking an Oracle database and PL/SQL as a reference:

  • Limited features: SQL is designed to operate on data, and even PL/SQL is somewhat limited (compared to Java or C#) in terms of language features and available libraries.
  • Limited testing and debugging: there is very little support for automated testing of PL/SQL code; it’s very difficult to mock only some of the data because procedures are tightly coupled to database tables or views, and testing specific scenarios often requires dozens of database operations to prepare the data and then clean it up. Debugging support is also poor – inspecting variable values during execution or changing the code on the fly is hardly possible.
  • Difficult to read and maintain code: PL/SQL lacks the syntax of more modern programming languages, and data access and business logic are intertwined. Quite often, data is stored in dynamic data structures with weak typing.
  • Difficult to trace side effects: usually, the operations that fetch data, perform business logic, and change data are mixed, without a clear fetch-perform-update separation (see the sketch after this list); once you add possible trigger operations and logic hidden in views or SQL queries, the particular place where a business operation took place is difficult to pinpoint.
  • Tight coupling: placing business logic in the database tightly couples it to the data layer, without any abstraction in between, so a change in one place requires a change in another, sometimes causing a cascade of changes.
  • Challenges in version control: SQL and PL/SQL are not well suited to storing code in a version control system, and merging and deploying the code to higher environments tends to cause issues. Because the code runs on the database instance rather than on the developer’s computer, debugging on higher environments is difficult and risky – the code being changed is the very code others on that environment are running.
  • Limited scalability: ultimately, database resources don’t scale well and are subject to hardware-based limitations; PL/SQL also lacks the level of support for parallelization that more modern languages like Java or C# have, not to mention the parallelization that microservice-based architectures provide.
  • Inhibited segregation of responsibility: organizing the code into business and data access layers, separating domains, and introducing architectural boundaries was inhibited by the lack of support from the language and its tooling.
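
For contrast, here’s a minimal, hypothetical Java sketch of the fetch-perform-update separation mentioned in the list above – the types and the business rule are invented purely for illustration:

```java
import java.util.List;

// Hypothetical data-access abstractions: they only read and write data.
interface CustomerRepository {
    boolean isActive(long customerId);
    void updateDiscount(long customerId, double discount);
}

interface OrderRepository {
    List<Long> findOrderIds(long customerId);
}

// The service holds the business rule, clearly separated from data access.
class DiscountService {
    private final CustomerRepository customers;
    private final OrderRepository orders;

    DiscountService(CustomerRepository customers, OrderRepository orders) {
        this.customers = customers;
        this.orders = orders;
    }

    void applyLoyaltyDiscount(long customerId) {
        // 1. Fetch: read everything the business rule needs.
        boolean active = customers.isActive(customerId);
        int orderCount = orders.findOrderIds(customerId).size();

        // 2. Perform: pure business logic, no database access, easy to unit-test.
        boolean eligible = active && orderCount >= 10;

        // 3. Update: persist the result in one clearly visible place.
        if (eligible) {
            customers.updateDiscount(customerId, 0.10);
        }
    }
}
```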

Now, let’s list the issues of integrating via databases.

  • All the points mentioned above are exponentially more troublesome in this scenario; for example, while debugging on a local database was challenging, debugging code on a database you neither own nor know is nearly impossible.
  • Extremely tight coupling: with no architectural boundaries and no abstractions, as every application is free to access any data or call any procedure of any other application, it is virtually impossible to make a change that is isolated to a single application.
  • Decreased performance: executing a query distributed over several databases is very costly; the more complex the query, the higher the cost – often exponentially so.
  • Data validity and caching: performance issues are often solved with materialized views or local copies of foreign tables, which in turn cause issues with data validity; cache invalidation is one of the proverbial two hard things in computer science, which adds to the already extreme complexity of a database-integrated solution.

Given the sheer number of points listed above, it should be clear that this integration method is unmaintainable in the long term. What’s worse, any attempt to overcome the issues of database integration tends to make them even worse.

Companies that relied on database integration as their primary integration method often found themselves in a less favorable position. Migrating from database integration to more modern solutions is usually considerably more expensive and fraught with risks than transitioning from direct integrations or service bus integrations.

On the other hand, there ARE some ways to mitigate some of the above-mentioned issues. Despite the limitations of SQL, some abstraction layers can be implemented, and some decoupling strategies can be employed. You can also use additional tools like Liquibase to deal with version control and various other issues – my colleague Rafał Grzegorczyk has written many popular articles about this solution on the Pretius blog.

Enterprise Service Bus integration

An Enterprise Service Bus (ESB) is a communication system that enables the implementation of Service-Oriented Architecture (SOA). Contrary to the previously described database integration, ESB and SOA are valid patterns that are still used in some enterprise architectures.

By the mid-2000s, there were a few enterprise-grade solutions available: Sonic ESB, IBM WebSphere Message Broker, and Apache ServiceMix.

There were also message broker implementations that could decouple services in a solution. IBM MQ had been available since the 90s, and Tibco Enterprise Message Service was also around by the mid-2000s. They weren’t full ESBs, just message brokers, but they could play the role of the communication layer in an SOA-based solution.
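
To give a rough flavour of that decoupling, here’s a minimal, hypothetical JMS sketch – the queue name, message content, and class names are made up, and a ConnectionFactory obtained from the broker (e.g., IBM MQ via JNDI) is assumed:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class OrderPublisher {

    // Typically obtained from JNDI or the broker's client library.
    private final ConnectionFactory connectionFactory;

    public OrderPublisher(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public void publishOrderCreated(String orderId) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("ORDERS.CREATED");   // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage("<order><id>" + orderId + "</id></order>");
            message.setJMSCorrelationID(orderId);                  // traceability field

            // The sender doesn't know (or care) which systems consume this message.
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```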

ESB-based solutions had many strong points:

  • Standardization: a central ESB introduces and enforces a set of rules, such as protocols and message formats; with the introduction of ESBs, many architects defined data rules – adherence to ISO standards, a Message-ID format, a set of common fields, field naming rules, traceability/debug fields, and more.
  • Loose coupling: an ESB was an architectural boundary that made systems unaware of each other; sometimes even modules of the same application communicated via the ESB in certain situations. Such an approach reduced dependencies and solution complexity, allowed independent development and releases, and even enabled switching a service implementation altogether without affecting other integrated services.
  • Abstraction: the service APIs were, theoretically, defined separately from their implementations; one could consider that aspect of ESBs as the groundwork for the later widely adopted API-first approach.
  • Message routing: the same message could be routed to multiple recipients, and some ESB implementations could also guarantee that each message was delivered exactly once to each recipient.
  • Message transformation and enrichment: some business logic could be put into the ESB to build a message from multiple data sources, conditionally execute some logic, or even dynamically determine the list of recipients; this capability, however, was the source of a common pitfall – putting too much logic into a centralized, independent component, which made development, testing, and debugging more complex.

ESBs also had some disadvantages:

  • Single point of failure: the tool used as the ESB was central to the entire IT solution; if it did not work, for whatever reason, no application could perform its function, and virtually no process could be carried out.
  • Divided ownership: ESB implementations typically involved a dedicated team responsible for managing and changing the ESB infrastructure, which often introduced bottlenecks to the development process or meant that outages in some functional areas were not handled with due haste; no team was able to deliver any feature without the help of that central team. Overall, this dependency reduced the company’s agility.
  • Entry threshold: using an ESB comes with the costs and effort required to meet its requirements and adhere to its established guidelines.

Overall, the ESB was likely the best integration choice in the 2000s. Solutions that used the SOA pattern were a good starting point for future migration to more modern architectures, including microservices and event-driven architectures.

The business impact: problems that drive digital transformation


All these technical problems would perhaps remain a topic of discussion for programmers, if not for the fact that they directly translated into business problems. Let’s take a step back and summarize the most important business impacts caused by old integration standards and legacy technologies.

Long time to market for new business products

A complex architecture based on monolithic systems, interconnected by a network of many, often redundant, database links, increases the waiting time for the implementation of new products, which in a dynamic market with intense competition poses a threat to the company’s revenues. In many cases, products enter the market late, after the release of comparable offers from competitors, which drastically affects sales.

The above is due to the scope of changes needed to introduce a new product. First of all, development is always required in multiple dependent systems.

An important factor is the existing technical debt found in earlier products (resulting from similar problems in previous projects), such as parameters “hardcoded” instead of configured. It forces teams to reuse existing solutions and bend them to the new product rather than introduce new, more versatile tools that would better suit its needs.

The whole thing causes technical debt to spiral, making the ecosystem increasingly unmanageable, hard to maintain, and highly vulnerable to failure.

Moreover, the caches implemented to handle increased load also increase the data’s time to market. Depending on the size and complexity of the data, the caches need to be rebuilt anywhere from a few times a day to only once a week, during the weekend, which in turn gives some data an extremely long time to market. A monthly report can be generated on the 3rd day of the month with no negative consequences, but displaying product availability that is two days out of date can have significant business implications for the sales department and the company’s customers.

The pressure exerted by the business team on product delivery dates was also significant: it blocked debt reduction and the introduction of new, generic products, further cementing an increasingly complex monolith.

Deployments

Implementing new products in a monolithic architecture is always risky and time-consuming. Because of the dependencies between systems with different integration methods, including database integration, all systems must be deployed simultaneously or one after the other in a specific sequence. This results in the need for extended production unavailability. Moreover, a problem in deploying one of the systems often makes it necessary to withdraw the whole set of changes and repeat the entire deployment.

Due to the enormity of the undertaking, the number of such deployments per year has to be limited by grouping changes into so-called releases. As a result, the business often waits even longer until the product is in production and commercially available.

You can mitigate these problems to some degree by employing CI/CD (continuous integration/continuous delivery). Below, I share with you a great Oracle APEX CI/CD guide written by our colleague Matt Mulvaney. Of course, you can search the internet for more in-depth CI/CD guides for technologies other than APEX.

Failures

The downtime in case of a failure or critical error is also longer, and its effect on production is more extensive than in microservices-based solutions. Due to inter-system dependencies and the concentration of capabilities in large systems, a single error often causes the unavailability of many business functions, which can lead, for example, to the complete suspension of sales for many minutes or even hours.

Building deployment packages

Given the size of the systems, even a minor fix that takes a few minutes to develop often requires rebuilding the entire system – the process of compiling the code and building a deployment package can take more than an hour, which in the case of a business-critical system is a delay that is difficult for the business to accept.

Expensive maintenance/licenses

Monolithic software often comes in the form of boxed solutions requiring high license fees and extensive hardware resources. At the same time, the organization’s actual business processes use only part of the system’s capabilities, while the whole system must be paid for in full.

Even custom solutions prepared for a company’s needs are often based on commercial platforms such as Oracle, which carries high maintenance costs related to support and license fees.

Scalability

Monolithic systems, databases, and large applications running on uniform servers are difficult to scale, and it’s almost always impossible to scale them dynamically or automatically. The tight coupling of applications typical for such systems means any change to a single application requires a meticulous analysis of its impact on all connected applications. This often leads to a cascade of changes, making any modification expensive and introducing a significant risk of regressions.

The end result is that scaling the technology often requires unnecessary additional investments in hardware and increases in license fees (when, for example, the license model is based on the number of CPUs). Microservices solutions, by contrast, are in most cases based on openly licensed, open-source components.

Digital transformation strategy


Getting rid of a large system that has been operating for many years and is used on a daily basis is impossible in the current business landscape. The only sensible solution is a well-thought-out digital transformation, i.e., building a new system in parallel and gradually transferring to it. This requires a well-planned strategy.

Now that we’ve identified the problems caused by old technologies and approaches, let’s focus on what you must keep in mind, good practices to follow, and decisions you’ll need to make while developing such a plan for digital change. 

The following information is based on our extensive experience with digital transformation projects in over a dozen big companies, with a focus on market-leading enterprises from the telecom sector.

Digital transformation efforts: Understanding within the organization


The digital transformation efforts discussed in this article challenge the entire organization – the IT department can’t carry them out alone. The organization as a whole needs to understand the goal and the benefits that can be achieved through it. This understanding shouldn’t be limited to IT, Marketing, and Sales. Other teams and departments need to be on board too – most importantly business leaders, such as the members of the Board of Directors, who should support the process at every stage.

In addition to the challenges it poses and the obvious benefits for developing products based on IT systems, the digital transformation of IT architecture is also an opportunity for the entire organization to grow.

First, it allows us to change how we think about the product. Products based on a digital platform can be simpler and more understandable, resulting in a better customer experience. They can be prepared to respond as quickly as possible to changing customer demands, consumer behavior, trends, and competitors’ actions.

The second aspect that can be addressed alongside architectural changes is how work is managed – a product rather than a project approach. By dividing work among product teams, it’s possible to introduce agile product management, bringing small increments into production. Managing the roadmap and priorities across the organization can also become more agile and responsive to current needs. It also provides an excellent opportunity to abandon ideas and products that are not promising at a relatively early stage of their development, reducing the cost of failure significantly.

The most important thing, however, is a change in mindset. A microservice architecture combined with agile management makes it possible to bridge the gap that often separates business and IT in organizations, putting them on opposite sides of the barricade. In the new paradigm, both sides should work together on new products at each stage of their development.

Another competitive advantage is the tools marketing gets in its hands. It’s possible to create product samples, test products, or make certain features available to a limited number of users – all through flexible configuration and versioning management.

Approaching the new architecture: API first

If we want to achieve such advantages of digital (microservice) architecture as independent development of components by independent teams, we have to make one conscious decision: we base the design of the new architecture solely on APIs. We decide what components with what capabilities will be created and what business objects they manage.

On this basis, we prepare the API definition of these components. The domain model of business-managed entities boils down to the definition of individual APIs of modules like Customer Management, Product Catalog, Product Inventory, and Document Management.

The internal structure of the module data model, the implementation of the business logic, and the interior architecture of the modules are left to the discretion of the module developers.

It’s only necessary that the modules meet specific non-functional requirements that will allow for unified module management, scalability, or deployment on the same runtime platforms. 
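
To make this more concrete, here’s a minimal, hypothetical sketch of what an API-first contract for a Product Catalog module could look like, expressed as a JAX-RS (javax.ws.rs) interface – the paths, names, and fields are purely illustrative, and in practice the contract would usually be captured first in an OpenAPI specification:

```java
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical contract for a Product Catalog module.
// Consumers depend only on this interface, never on the module's internal data model.
@Path("/productCatalog/v1")
public interface ProductCatalogApi {

    @GET
    @Path("/productOffering")
    @Produces(MediaType.APPLICATION_JSON)
    List<ProductOffering> listProductOfferings();

    @GET
    @Path("/productOffering/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    ProductOffering getProductOffering(@PathParam("id") String id);
}

// A deliberately small DTO – the internal catalog model can be far richer.
class ProductOffering {
    public String id;
    public String name;
    public boolean sellable;
}
```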

Order of decoupling 

When creating a staggered strategy divided into digital transformation stages, it’s essential to determine the order in which capabilities or objects will be decoupled from the monolith and implemented as independent modules or microservices with well-defined APIs.

The best approach is to arrange the product roadmap so that you can implement some new products based primarily on new components. 

The ideal solution would be to base a new product solely on new components. However, this isn’t possible, so it’s necessary to first decouple the elements critical to the new product’s operation. Such modules will undoubtedly include:

  • Product catalog – where the product configuration will be maintained
  • Order Inventory – a module responsible for storing information about orders for the activation of a given product
  • Product Inventory – the module responsible for storing information about customers’ current products and their parameters.

Depending on the method of product delivery and activation, it may also be possible to implement:

  • A logistics module
  • Gateways for communication with partners responsible for maintaining the product

Building Order Management is a complex task, so in the first stage you can either let the existing solutions process orders from the Order Inventory, or build a simple Order Manager that initially processes only one type of order, for this new product alone.

Subsequent products should be chosen in such a way that introducing them requires deploying new components, pulling further capabilities out of the monolith.

After building a sufficiently mature architecture, you can consider migrating old products out of the legacy systems. We discuss this matter later in the article.

The advantages of this approach are: 

  • Work on the new architecture is linked to product development. There are no dedicated technical projects, which often get very low priority in the organization
  • New architecture elements go straight into production and are subject to an ongoing verification, development, and debugging process – they don’t sit on a shelf and age
  • The organization will quickly see the benefits of the transformation, and individual stakeholders and sponsors will know that it is worth investing in
  • Unnecessary modules and functions will not be created (in traditional boxed solutions, up to 70% of capabilities can be unused or rarely used)
  • It’ll be easier to define new needs or notice the opportunities the new architecture provides, and to use this advantage as early as the design/development stage of business products in marketing

Legacy gateway

The “new” architecture components should communicate with the rest of the IT ecosystem in the same standardized way: through APIs. Therefore, for the functions necessary for the operation of the product that are still located on the legacy monolith’s side, you need to prepare an API that will “cover” the old interfaces, making the required functions (e.g., customer management – CRM software, billing) available to the new architecture. From the point of view of the new components, the entire ecosystem will be available through a standardized API.
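
As a sketch of this idea (all names here are hypothetical – the point is only to show the shape of such a gateway), a thin adapter can expose the standardized API while delegating to the legacy interface underneath:

```java
// Hypothetical example of a legacy gateway: new components call CustomerApi,
// while the implementation hides the legacy CRM behind it.

// The standardized API the new architecture sees.
interface CustomerApi {
    CustomerDto getCustomer(String customerId);
}

// A simplified, modern representation of a customer.
class CustomerDto {
    final String id;
    final String displayName;

    CustomerDto(String id, String displayName) {
        this.id = id;
        this.displayName = displayName;
    }
}

// Whatever access the legacy CRM exposes today (direct SQL, SOAP, a PL/SQL API, etc.).
interface LegacyCrmClient {
    String[] fetchCustomerRecord(String internalId);   // e.g. [id, firstName, lastName]
}

// The gateway translates between the two worlds in one place.
class LegacyCrmGateway implements CustomerApi {
    private final LegacyCrmClient legacyCrm;

    LegacyCrmGateway(LegacyCrmClient legacyCrm) {
        this.legacyCrm = legacyCrm;
    }

    @Override
    public CustomerDto getCustomer(String customerId) {
        String[] record = legacyCrm.fetchCustomerRecord(customerId);
        return new CustomerDto(record[0], record[1] + " " + record[2]);
    }
}
```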

You can also think about communication in the other direction. The legacy systems still being developed during the early stage of the transformation should use the new components through APIs instead of their counterparts from the legacy world. This will make it easier to switch them off or move them to the new architecture later.

Legacy STOP

Given the above approach to building the new architecture, it’s possible to develop simpler products that don’t require many integrations or the management of many resources. However, for the transformation to be successful, at some point – when the digital architecture is mature enough (but still incomplete) – a decision must be made to stop the development of legacy systems. This means that from that point on, all new products, regardless of their type and complexity, will be developed exclusively within the microservice architecture.

This has a significant impact on a business and product roadmap, but it’s essential to the venture’s success. 

Since the implementation of such products may require the creation of more complex microservices (we are talking about billing, a full order manager, or service provisioning mechanisms), the moment for this decision should be chosen carefully – so that there is time to prepare the relevant modules and put them into production properly.

This may cause some slowdown in delivering new products to the market, so the decision must be a compromise between the IT, Marketing, and Sales departments – they need to make it together. Despite the possible pressure, it’s essential to avoid creating exceptions to the rule later.

It may be that specific systems will remain legacy forever due to their complexity (ERP, for example). In that case, be sure to cover them with APIs as much as possible and eliminate all dependencies and integrations with other elements of the architecture that bypass those APIs (e.g., database links). The pros and cons of this approach should be weighed each time.

Digital technologies and the migration of legacy products

To complete the transformation process and turn off the legacy systems, in the final step you need to “get rid” of the products still supported on the legacy side. There are several ways to do this:

  • Preparation of a new version of equivalent products – more modern and easier to implement on the new architecture – and an attempt to get customers to migrate consciously
  • Preparation of equivalent products on the new architecture with the same or better parameters, and carrying out a technical migration without changing the commercial arrangements with the customer (from the customer’s point of view, it should be the same product/service)
  • For products that cannot be transferred, the only option is to wait until active products naturally expire and customers migrate, e.g., as a result of signing annexes or extending contracts on new commercial terms

Data sources/master system

In the case of a transitional, hybrid architecture, it’s crucial to decide which system (legacy or a new module/microservice) is the data master, and then design data flows and background data synchronization accordingly. This applies, for example, to customer data when the legacy CRM and the Customer Management module in the new architecture run in parallel.
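
Below is a minimal, hypothetical sketch of such a one-way flow, assuming the legacy CRM remains the master for customer data – the interface and field names are invented for illustration:

```java
// Hypothetical one-way synchronization: the legacy CRM is the data master for customers,
// and every change is propagated to the new Customer Management module.
interface NewCustomerManagementApi {
    void upsertCustomer(String customerId, String name, String segment);
}

class CustomerSyncJob {
    private final NewCustomerManagementApi target;

    CustomerSyncJob(NewCustomerManagementApi target) {
        this.target = target;
    }

    // Called for every change captured on the master side
    // (e.g. from a change-log table, a trigger, or a message published by the CRM).
    void onMasterCustomerChanged(String customerId, String name, String segment) {
        // The replica never accepts direct edits; it only mirrors the master.
        target.upsertCustomer(customerId, name, segment);
    }
}
```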

API: Custom solution or a framework?

Another decision is whether to design the API for your organization based on an existing framework of ready-made APIs, or to design your own API definition that better suits your organization and domain model. An example of an existing, ready-made API design (framework) is the TM Forum organization’s Open API, which fully covers the domain model for a telecommunications company.

Both approaches have advantages and disadvantages, and a proper analysis should be conducted before deciding.

For the framework:

  • We get a ready-made API, reduced design time
  • We are in line with other companies in the industry, including service providers, that use this model. This shortens service implementation and generally makes it easier to establish cooperation with such providers.
  • We get support and training in using APIs

For the custom model:

  • The API we design better reflects our organization’s specifics, products, and processes.
  • If we need to extend the API, we don’t have to wait for the framework to develop. We are responsible for developing the API and expanding the model ourselves.
  • Above all, a custom API is much simpler, because generic solutions have to cover many exceptional cases that will never occur in our organization.

Threats posed by the hybrid model

A company’s offerings are often hybrid, convergent ones – they bundle different products together. When dealing with a hybrid model – where systems from the new architecture and legacy systems work together – a situation can arise where products from the new and the old architecture end up in the same bundle.

This includes the source of product configuration (product catalogs) and order processing for such product bundles (order management).

It is necessary to approach the creation of the product roadmap and transformation strategy in a way that minimizes such situations. If one does occur, however, the solution should be designed so that the dependencies between these two worlds are as few as possible, and so that the additional functions related to merging the architectures can later be easily removed without affecting the core elements of the ecosystem.

Summary: Digital success is within your grasp

Concluding today’s article, we trust that our insights into the challenges posed by legacy technologies, their underlying causes, and the benefits offered by emerging digital technologies and transformation have proven informative. Moreover, the practical advice and guidance provided, drawing upon our extensive knowledge, are designed to assist you in circumventing the most significant hurdles on your digital path. The problems we highlighted here are, indeed, significant, but they’re not unsolvable. Nevertheless, it is crucial to approach these recommendations with a critical eye, taking into consideration the unique circumstances of your company, business models, and projects, as every scenario presents distinctive factors to be considered.

If you need any help with joining the ranks of digital transformation leaders, you can always reach out to us at hello@pretius.com or via the contact form provided below. Pretius has considerable experience with digital technology and such projects – both in the telco industry and other fields – and we can use this expertise to the benefit of your company.
