
Refactor Legacy Systems Without Risks

Strangler fig pattern

The strangler fig pattern is about incrementally migrating from a legacy system to a modern one. We gradually replace the functional blocks of a system with new blocks, strangle the older system’s functionalities one by one, and eventually replace everything with a new system. Once the older system is entirely strangled, it can be decommissioned.

When to use a strangler fig pattern

You can use this pattern when gradually migrating from a back-end application to a new architecture. The strangler fig pattern may not be suitable if the requests to the back-end system cannot be intercepted. When dealing with smaller systems without much complexity, it’s better to go for a wholesale replacement rather than choosing this pattern.

Here’s how you can implement the strangler fig pattern:

Transform

Given below is an example of legacy code and architecture in which all the modules are tightly coupled, making them hard to maintain and difficult to evolve alongside modern technology.
Here, the loan service contains three services: personal, partner, and business loans.

Code Snippet:
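
The original snippet is not reproduced here, so the following is a minimal, hypothetical sketch of what such a tightly coupled loan service might look like; the class name, method names, and shared database helper are illustrative assumptions, not the article's actual code.

```python
# Hypothetical sketch of the tightly coupled legacy service (names are illustrative).
class LegacyLoanService:
    def __init__(self, shared_db):
        # One shared database serves every loan type.
        self.db = shared_db

    def personal_loan(self, customer_id, amount):
        # Personal-loan rules are mixed in with the shared persistence logic.
        rate = self.db.lookup_rate("personal")
        return self.db.save_loan(customer_id, "personal", amount, rate)

    def partner_loan(self, partner_id, amount):
        rate = self.db.lookup_rate("partner")
        return self.db.save_loan(partner_id, "partner", amount, rate)

    def business_loan(self, business_id, amount):
        rate = self.db.lookup_rate("business")
        return self.db.save_loan(business_id, "business", amount, rate)
```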

We choose the personal loan method as the first one to refactor. It could be unreliable, inefficient, or outdated, and it may even be hard to find skilled developers willing to maintain the legacy code.

Let’s make a new implementation for the personal loan service and divide it into separate classes or even a microservice. After correcting or refactoring our code snippet, we can create a Proxy class for handling both new and old approaches.

We have developed a new API with a separate database. Initially, all services used a shared database, but as part of code refactoring, the personal loan module had to be decoupled from the existing database. The same applies to the API endpoint.
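
As a rough illustration, the extracted service might look like the sketch below: it owns its own repository (standing in for the separate database) and exposes its own endpoint. The class names, route, and Flask framework choice are assumptions made for this sketch.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


class InMemoryPersonalLoanRepository:
    """Stand-in for the new, separate personal-loan database."""

    def __init__(self):
        self.loans = []

    def lookup_rate(self):
        return 0.07  # illustrative fixed rate

    def save_loan(self, customer_id, amount, rate):
        loan = {"customer_id": customer_id, "amount": amount, "rate": rate}
        self.loans.append(loan)
        return loan


class PersonalLoanService:
    """New personal loan implementation, decoupled from the shared legacy database."""

    def __init__(self, repository):
        self.repository = repository

    def personal_loan(self, customer_id, amount):
        rate = self.repository.lookup_rate()
        return self.repository.save_loan(customer_id, amount, rate)


service = PersonalLoanService(InMemoryPersonalLoanRepository())


@app.route("/api/v2/personal-loans", methods=["POST"])
def create_personal_loan():
    # New API endpoint for the extracted personal loan service.
    payload = request.get_json()
    loan = service.personal_loan(payload["customer_id"], payload["amount"])
    return jsonify(loan), 201
```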

Coexist: the stage where the two implementations run side by side.

Here we have extracted another abstraction layer from the loan service and made it coexist with the original service. Both services provide the same functionality, and their signatures are identical. The proxy decides whether a call goes to the new API or the existing service.

Run the new service with the old service as a fallback: in this scenario, we always have a safety buffer if the new one fails.

Run both services and compare their outputs to confirm that the new one works correctly. In this scenario, to avoid duplicate loans or transactions, we should turn off all external references and accept loan data from only one source.

Use a feature flag to decide whether to use the new API or the old one. In the configuration, we can set a flag that determines how the call is routed: to the legacy API or to the newly developed service.
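
To tie these coexistence options together, here is a minimal sketch of what such a proxy might look like. The configuration keys, logging, and routing rules are assumptions for illustration only, not the article's actual implementation.

```python
import logging

logger = logging.getLogger("loan-service-proxy")


class LoanServiceProxy:
    """Routes personal loan calls to the new or the legacy implementation.

    Hypothetical sketch: config keys and routing rules are illustrative.
    """

    def __init__(self, legacy_service, new_service, config):
        self.legacy = legacy_service
        self.new = new_service
        self.config = config  # e.g. loaded from a configuration file or flag service

    def personal_loan(self, customer_id, amount):
        # Feature flag: decide whether the call goes to the new API at all.
        if not self.config.get("use_new_personal_loan_api", False):
            return self.legacy.personal_loan(customer_id, amount)

        # Optional comparison mode: run both and compare outputs (external
        # side effects must be switched off so a loan is recorded only once).
        if self.config.get("compare_both", False):
            old_result = self.legacy.personal_loan(customer_id, amount)
            new_result = self.new.personal_loan(customer_id, amount)
            if old_result != new_result:
                logger.warning("Old and new results differ: %s vs %s",
                               old_result, new_result)
            return new_result

        # Default: run the new service with the legacy one as a fallback safety net.
        try:
            return self.new.personal_loan(customer_id, amount)
        except Exception:
            logger.exception("New personal loan API failed; falling back to legacy")
            return self.legacy.personal_loan(customer_id, amount)
```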

Tests

While refactoring, we might miss some tests, or the existing tests might be too heavy to maintain. The proxy service we created helps us ensure that the new implementation fully covers the old one.

Let’s implement integration or even e2e tests, where we test observable behaviors instead of implementation details. Remember that anything we refactor or change should have a certain level of test coverage: the better the coverage, the more confidence we have in how the code works. If the code being refactored does not have enough tests, we should start by covering it, writing test cases for all features, behaviors, and edge cases.

If unit tests cannot cover the code, we should start by testing the Proxy class. Sometimes e2e tests might be the only way to cover both implementations.
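
As an illustration only, a characterization-style test might compare the observable output of the two implementations for the same input. In a real suite, the fakes below would be replaced by the actual legacy service and a client for the new API; the names and values are invented for this sketch.

```python
import unittest


class FakeLegacyPersonalLoanService:
    """In-memory stand-in for the legacy implementation."""

    def personal_loan(self, customer_id, amount):
        return {"customer_id": customer_id, "amount": amount, "product": "personal"}


class FakeNewPersonalLoanService:
    """In-memory stand-in for the new API client."""

    def personal_loan(self, customer_id, amount):
        return {"customer_id": customer_id, "amount": amount, "product": "personal"}


class PersonalLoanBehaviourTest(unittest.TestCase):
    def test_new_service_matches_legacy_observable_behaviour(self):
        legacy = FakeLegacyPersonalLoanService()
        new = FakeNewPersonalLoanService()
        # Assert only on the observable output, not on how either
        # implementation produces it.
        self.assertEqual(
            legacy.personal_loan("c-42", 10_000),
            new.personal_loan("c-42", 10_000),
        )


if __name__ == "__main__":
    unittest.main()
```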

Eliminate

Once we have caught and mitigated all the problems in the new approach, we can eliminate the legacy code. From then on, the personal loan module calls the new API directly.

Summary

We can split code refactoring into multiple iterations and releases. If we are uncertain whether the new approach will work exactly as expected, we can prepare a fallback mechanism, so that if the new approach doesn’t work, we can quickly switch back to the legacy one. This method gives us a safety buffer for refactoring without risk.

How Cloud Adoption Elevates Customer Experience

Customer experience (CX) or customer satisfaction is a critical driver in today’s business success and is directly proportional to product or service quality. CX is so important that many companies focus more on this than on their products and services. Leading organizations are constantly fine-tuning and enriching their customer journeys. Some have already begun leveraging data they’ve collected over the years to build customer journeys driven by experiences.

Sometimes businesses with good customer satisfaction and retention rates prefer to continue using traditional methods to deliver customer experiences. But when these businesses scale up and try to meet increasing demand, providing a good customer experience through conventional methods will be a hard lift. Since traditional methods cannot adapt fast enough to meet new-age customer needs, businesses should be flexible to meet the expectations of modern customers in this digital-first world.

Providing quick and fulfilling customer service across the customer life cycle requires omnichannel engagement. This rapid transformation is not possible without the help of the cloud.

A good business understands that data drives customer experience. When information is in silos, it prevents innovation in customer service. Delivering new products and services is nearly impossible without sharing data within a business’s different units. There’s an urgent need to predict customer service demands and act accordingly, and that cannot be done without data.

Role of Cloud Computing

Integrations

A well-designed and structured cloud ecosystem empowers businesses to cater continuously to evolving customer expectations. Additionally, it enables the seamless integration of software tools into a company’s existing technology. Such integration is valuable because it eliminates the challenges caused by data silos and disconnected applications.

Agile resources

Cloud computing enables agility of resources, speeds up innovation, and is economical. Instead of maintaining large infrastructure on their premises, businesses can leverage agile resources from cloud providers, mix and match them to create an ecosystem, and pay only for what they use.

Data analytics

All of the customers’ data is stored in the cloud. Businesses can leverage this massive amount of data with various data analysis tools to make informed decisions in real time.

Seamless omnichannel

There are multiple ways customers can interact with businesses, such as email, websites, phone calls, and chatbots. The cloud can bridge the communication gaps between these channels and provide access from a single platform without switching between apps. Thus, customer service representatives can focus better and be more productive.

Customer experience management

With the help of decision and recommendation engines, employees can take appropriate actions to improve the customer journey. Operational data can be used to tailor experiences directly to customers’ preferences.

Customer focus

Constantly monitoring customer needs and following up with customers is crucial to understanding customer demands. Cloud-based support helps cater to this and offers quick service according to demand.

Software development life cycle

Traditionally, software is developed on local machines and then deployed to QA, staging, and production environments. This approach carries a risk of deployment failures due to compatibility issues, and rectifying them is time-consuming and costly. With cloud-based applications, by contrast, development, testing, and deployment all happen in the cloud environment. Cloud-based application development also removes the burden of maintaining the infrastructure of servers and environments, including their backups and disaster recovery schedules.


Benefits of leveraging cloud services

Conclusion

In cloud-based application development, cloud service providers are responsible for hosting and managing the hardware, data, and everything else you use in the cloud environment. As a business, you pay only for the resources you use, and you can quickly scale those resources up and down based on your needs. This ensures cost effectiveness, better availability, and faster release cycles for your applications. A faster release cycle means shipping experience-improving features sooner, which in turn improves the customer experience.


Every business needs to focus on providing the best customer experience in order to survive and grow in a competitive market. For the enterprise of the future, the cloud is an essential foundation for delivering exceptional customer experiences.

How To Reengineer Data To Make Better Decisions

Alex Thompson · Data and AI · January 5, 2023

What is Data Reengineering?

In any organization, the primary purpose of data is to extract insights from it. Those insights drive management decisions that move the company forward. As businesses have digitalized rapidly, the data generated by business applications has also snowballed.

With these changes in the way business is done, and with data arriving in ever more forms and volumes, many data applications have become outdated and now hinder decision-making.

So, the process of changing existing data applications to accommodate the vast volume and variety of data at a rapid velocity is called Data Reengineering.

Why Data Reengineering?

There can be several scenarios in which you need to reengineer an existing application.

How to Reengineer Data Projects

Here’s how you can reengineer data:

Choose the Right Infrastructure Setup

This is an important decision that the engineering team has to make. Choosing the right infrastructure will make the newly reengineered application capable of storing and processing data more effectively than the legacy application.

AWS, Azure, and GCP provide Infrastructure-as-a-Service (IaaS), so companies can dynamically scale configurations up or down to meet changing requirements and are billed only for the services they use.

For example, we had an Azure Data Factory pipeline that populated about 200 million records into an Azure SQL DB configured on the standard service tier. We observed that inserts took a long time, and the pipeline ran for almost a day. The solution was to scale the Azure SQL DB up to the premium service tier for the load and scale it back down once the load completed.

So, we configured REST API calls in the pipelines to dynamically scale up to the premium tier before the load starts and scale back down to the standard service tier once the load is completed.
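
As a rough illustration of the same idea outside Data Factory, the scale-up and scale-down calls can be made against the Azure management REST API. The subscription, resource group, server, database names, API version, and SKU values below are placeholders, not values from the article.

```python
# Hypothetical sketch: scale an Azure SQL database up before a heavy load and
# back down afterwards, via the Azure management REST API.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
SERVER = "<sql-server-name>"
DATABASE = "<database-name>"
API_VERSION = "2021-11-01"  # assumed; use the version supported in your environment

DB_URL = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Sql"
    f"/servers/{SERVER}/databases/{DATABASE}?api-version={API_VERSION}"
)


def set_service_tier(access_token: str, sku_name: str, tier: str) -> None:
    """Issue a PATCH to change the database SKU (e.g. Premium before the load)."""
    response = requests.patch(
        DB_URL,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        json={"sku": {"name": sku_name, "tier": tier}},
        timeout=60,
    )
    response.raise_for_status()


# Example: scale up before the 200-million-row load, scale down afterwards.
# set_service_tier(token, "P2", "Premium")   # before the load starts
# set_service_tier(token, "S3", "Standard")  # once the load is complete
```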

Select the Right Technology

The technology stack needs to be chosen based on the kind of reengineering your company is doing. You can choose from various technologies based on the type and volume of data your organization processes. Below are some examples:

  • If the change is from a mainframe to another technology, you can choose Oracle on-premises or in the cloud. Here, Informatica or similar tools can handle ingestion and orchestration, and Oracle’s in-house language, PL/SQL, can be used for the business logic.
  • If the change is from on-premises to the cloud, AWS, Azure, or GCP provide Software-as-a-Service (SaaS) offerings.

Design the Right Data Model

During this reengineering phase, you must determine how best the existing data model can accommodate the new types and volume of data flowing in.

Identify the functional and technical gaps and requirements. When you analyze and understand your data, it can result in one of two scenarios.

  • You identify new columns to add to existing tables to provide additional value to the business.
  • You identify new tables and relate them to the existing tables in the data model. Leverage these new tables to build reports that help your business leaders make more effective decisions. Both kinds of changes are sketched below.
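
As a rough, hypothetical illustration of both scenarios, the snippet below applies such changes to a throwaway SQLite database. The table and column names are invented, and in practice these changes would go through your own migration tooling and target database.

```python
import sqlite3  # stand-in for the real target database driver

ddl_statements = [
    # Stand-in for a table that already exists in the data model.
    "CREATE TABLE IF NOT EXISTS customer_orders ("
    "    order_id INTEGER PRIMARY KEY,"
    "    amount   REAL"
    ")",
    # Scenario 1: a new column added to an existing table.
    "ALTER TABLE customer_orders ADD COLUMN channel TEXT",
    # Scenario 2: a new table related to existing tables for richer reporting.
    "CREATE TABLE IF NOT EXISTS order_feedback ("
    "    feedback_id  INTEGER PRIMARY KEY,"
    "    order_id     INTEGER REFERENCES customer_orders(order_id),"
    "    rating       INTEGER,"
    "    submitted_at TEXT"
    ")",
]

with sqlite3.connect("reengineered_model.db") as conn:
    for ddl in ddl_statements:
        # Real projects would track these migrations with dedicated tooling.
        conn.execute(ddl)
```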

Design the Right ETL/ELT Process

This process involves reconstructing the legacy code to be compatible with the chosen infrastructure, technology stack, and the redesigned data model.

To populate data into the changed data model, your development team needs to incorporate appropriate extract and load strategies so that data can flow through the schemas at high velocity and users can access reports with lower latency.

Designing the ETL/ELT is not just a code-and-complete job; you must properly track the development progress and the versions of the code. Create information sources to track these, like the ones shown below:

  • Milestone Tracker: The reengineering project needs to be split into development tasks, and these tasks can be tracked using any project management tool.
  • Deployment Tracker: This can be used to track physical schema changes and code changes.

Once the development effort is complete, plan a pilot phase to integrate all code changes and new code objects. Run end-to-end loads for both historical and incremental data to confirm that your code does not break during the load process.

Validation and Verification

This phase ensures that data originating from existing and new sources is populated according to the business logic.

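As one concrete illustration, a simple reconciliation check can compare record counts and a key aggregate between the legacy and the reengineered stores. The connection objects, table names, and column names below are assumptions for this sketch, not part of the article.

```python
# Hypothetical validation sketch: reconcile the legacy and reengineered stores.
def reconcile(legacy_conn, new_conn, table: str, amount_column: str) -> bool:
    """Compare row counts and a key aggregate between the two databases."""
    query = f"SELECT COUNT(*), COALESCE(SUM({amount_column}), 0) FROM {table}"

    legacy_count, legacy_total = legacy_conn.execute(query).fetchone()
    new_count, new_total = new_conn.execute(query).fetchone()

    if (legacy_count, legacy_total) != (new_count, new_total):
        print(f"{table}: mismatch "
              f"(legacy {legacy_count}/{legacy_total}, new {new_count}/{new_total})")
        return False
    return True


# Example usage with DB-API style connections (placeholders):
# all_ok = all(reconcile(legacy_db, new_db, t, "amount") for t in ["loans", "payments"])
```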

Conclusion

Once all the above steps are completed, you will get to Day Zero. Day Zero is when you take the reengineered solution live to production and do your sanity checks. If everything is working as expected, sunset the legacy solution. Now you can rest assured knowing that your data infrastructure empowers your leaders to make the right decisions on time and accelerate the growth of your business.
