What to Monitor to Ensure Cloud Performance?

To ensure cloud performance, it’s not enough to follow best practices such as selecting the right cloud services provider, architecting your cloud system properly, and building industry-leading security measures into your implementation.  You also need 24×7 cloud performance monitoring to prevent downtime and make sure you are optimizing costs along the way. 

It’s also important to be able to respond rapidly to any critical cloud event.  Proper cloud performance monitoring ensures all of this and more.  We offer the following monitoring guidelines and best practices to ensure cloud system health.

Cloud Performance

5 Critical Cloud Performance Monitoring Tips

Monitoring various key metrics, logs, and events will tell you how your cloud infrastructure is performing.  For most cloud systems, metrics worth capturing and analyzing can be found in the following areas: 

1.   Cloud security

One of the top concerns of CTOs and CIOs today is the threat of a cyber-attack.   According to the 2022 Fortinet Cloud Security Report, a full 95% of companies are concerned about cloud security. 

The top three concerns were:

  1. Misconfiguration of the cloud platform/wrong setup (62%)
  2. Insecure interfaces/APIs (52%)
  3. Exfiltration of sensitive data (51%)

Other concerns include:

  • Unauthorized access (50%)
  • Insecure interfaces/APIs (44%)
  • Hijacking of accounts, services, or traffic (44%)
  • External sharing of data (39%)

The key to identifying suspicious activity before it becomes an all-out attack is cloud security monitoring. 

Monitoring cloud security can uncover security breaches such as:

  • New user accounts deleting existing users
  • Unusual, simultaneous instances that start and stop, seemingly programmatically
  • Temporary security credentials being used for a lengthy period
  • Erasure of security logs and events

The right way to monitor cloud security is to use a service that provides full end-to-end audit logging of all activities performed by a cloud user. AWS CloudTrail and Azure Monitor are examples of such services. The goal should be to answer “who did what, where, and when?”. This can also aid in regulatory compliance.
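The breach patterns above can be screened for programmatically once audit logs are collected. Below is a minimal, hedged sketch in plain Python; the event names and the 12-hour credential threshold are illustrative assumptions, not actual CloudTrail or Azure Monitor values.

```python
from datetime import timedelta

# Illustrative assumptions: real audit event names and sensible credential
# lifetimes vary by cloud provider, service, and your security policy.
MAX_TEMP_CRED_AGE = timedelta(hours=12)
LOG_TAMPERING_EVENTS = {"StopLogging", "DeleteTrail", "DeleteLogGroup"}

def flag_suspicious(events):
    """Scan audit-log events (dicts) for the breach patterns listed above."""
    findings = []
    for e in events:
        # Erasure or disabling of security logs is a classic attacker move.
        if e["event_name"] in LOG_TAMPERING_EVENTS:
            findings.append((e["event_name"], "security logging tampered with"))
        # Temporary credentials in use far beyond their expected lifetime.
        if e.get("credential_type") == "temporary":
            age = e["event_time"] - e["credential_issued_at"]
            if age > MAX_TEMP_CRED_AGE:
                findings.append((e["event_name"], "long-lived temporary credentials"))
    return findings
```

In practice you would feed this kind of rule engine from your provider's audit trail rather than hand-built dicts, and alert on any finding.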

2.   Cloud Application Performance Monitoring (APM)

Monitoring application performance is key to system health. Cloud infrastructures can generate countless logs, metrics, and alerts, and APM tools with monitoring and analytics capabilities make this voluminous data easy to understand.  Monitoring DevOps metrics also tracks the performance of the underlying infrastructure. 

Key cloud performance metrics worth tracking include:

  • MTTR (mean time to repair)
  • MTBF (mean time between failures)
  • Throughput
  • Response time
  • Latency
  • Scalability

Many APM tools allow you to track these metrics in real time so you can proactively optimize application performance in the cloud.  
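Two of the metrics above, MTTR and MTBF, are simple to derive from an incident history. The sketch below is a minimal illustration (the incident tuples are hypothetical; real APM tools compute these for you):

```python
from datetime import datetime, timedelta

def mttr_mtbf(incidents):
    """Compute MTTR and MTBF in hours from (start, end) incident tuples,
    assuming incidents are sorted by start time and non-overlapping."""
    hours = lambda delta: delta.total_seconds() / 3600
    # MTTR: mean duration of an incident, from detection to repair.
    mttr = sum(hours(end - start) for start, end in incidents) / len(incidents)
    # MTBF: mean uptime between the end of one incident and the next start.
    gaps = [hours(incidents[i + 1][0] - incidents[i][1])
            for i in range(len(incidents) - 1)]
    mtbf = sum(gaps) / len(gaps) if gaps else float("inf")
    return mttr, mtbf
```

For example, two incidents lasting one and two hours with a 24-hour gap between them yield an MTTR of 1.5 hours and an MTBF of 24 hours.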

3.   Application/Service Availability

Downtime is the bane of many cloud applications, especially for companies with SaaS models.  User requests are fulfilled by cloud-based servers, so monitoring the health of your SaaS environment and its components is key to preventing overloading and other issues that disrupt service delivery.   

Cloud-based services are typically tightly coupled and highly integrated, depending on other services to function. So when one cloud infrastructure component is not monitored, issues often surface in the components that depend on it.  This ripple effect can cause serious performance problems.  Since these issues can pop up during frequent software updates, real-time monitoring is key. 
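The ripple effect described above can be made concrete with a small sketch: a service is only healthy if it and every service it (transitively) depends on passes its health probe. The service names and dependency map below are purely illustrative.

```python
def evaluate_health(statuses, dependencies):
    """Given direct probe results {service: bool} and a dependency map
    {service: [services it depends on]}, mark a service unhealthy when it
    or any transitive dependency fails -- the ripple effect in action."""
    def healthy(svc, seen=frozenset()):
        if svc in seen:                      # guard against dependency cycles
            return True
        if not statuses.get(svc, False):
            return False
        return all(healthy(dep, seen | {svc})
                   for dep in dependencies.get(svc, []))
    return {svc: healthy(svc) for svc in statuses}
```

With a failing database, both the API that reads from it and the web tier in front of the API show as unhealthy, even though their own probes pass.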

4.   Infrastructure Monitoring

Cloud infrastructure best practices require monitoring the health and dependencies of storage, databases, virtual machines, and Kubernetes. This will help you track and react to changes that could affect your environment’s security, performance, and availability.

The importance of being able to react quickly to critical events cannot be emphasized enough; 24/7 monitoring is only as good as your ability to respond.  To respond quickly to critical cloud services events, you will need the appropriate tools, notifications, and rapid-response team in place. 

Monitoring your infrastructure can also help discover which services, products, and customers you spend the most on and whether that spend is justified. 

5. Incident Response Capability

Critical production incidents can cost millions of dollars and do significant damage to an organization’s reputation. Thus, it is imperative to establish robust, reliable incident response infrastructure and response teams for critical production systems. A robust incident response infrastructure may include the following:

  • Real-time detection of critical incidents
  • Real-time alerts and notifications across one or more established channels (pager, phone, mobile apps, Slack/Teams, etc.)
  • On-call team setup and streamlined escalation procedures
  • Reporting and analytics for continuous learning and improvement
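The on-call and escalation items above can be sketched as a simple loop over escalation levels. This is a toy model, not a real paging API; the level names and callbacks are placeholders, and a production implementation would wait an acknowledgement timeout between levels rather than polling immediately.

```python
class Escalator:
    """Minimal on-call escalation sketch: page each level in turn until
    someone acknowledges the incident."""
    def __init__(self, levels, notify):
        self.levels = levels          # e.g. ["primary", "secondary", "manager"]
        self.notify = notify          # callable(level, incident): send the page

    def page(self, incident, acknowledged):
        """Notify each level; stop at the first acknowledgement."""
        for level in self.levels:
            self.notify(level, incident)
            if acknowledged(level):
                return level
        return None                   # nobody acknowledged: surface loudly
```

The `acknowledged` callback stands in for whatever your alerting channel reports back (a pager ack, a Slack reaction, etc.).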

The Role of DevOps in Monitoring Cloud Performance

DevOps is a service delivery enabler. Not only can DevOps be used to automate software development and deployment, it is also essential to monitoring and maintaining the system.  Be sure to include DevOps engineers in the implementation of your cloud monitoring strategy.

Hire a DevOps Team from Cloud App Developers and get FREE 24/7 Monitoring and Support.

The team will provide your development teams with day-to-day DevOps support and architectural guidance. You are free to use their dedicated hours when and where you need them most.

In addition, at no additional cost, the same team will provide 24×7 production system monitoring and on-call support with SLA-based responses.

Also, you will get free access to our automated cloud security tool that continuously monitors your cloud environments.


Every cloud system is different.  There are many other possible metrics to monitor and track, but by monitoring these 5 areas, you can greatly improve your cloud performance. 

Legacy Banking Application Modernization

The future of fintech involves wholesale application modernization of legacy banking platforms to modern microservices architectures. Myriad banks & financial institutions are modernizing their monolithic architectures to accelerate fintech innovation, seeking benefits such as reduced payment latency and streamlined regulatory compliance. Competition from platform banking innovators is forcing established banks to adapt quickly.  In this article we will examine a 3-tier roadmap for migrating from monolith to microservices, incorporating both a service mesh and an API gateway into the architecture. 

Figure 1, Monolith to Microservices: 3-Tier Roadmap

Challenges Facing Banks

Clearly, not all banks are facing the same set of innovation challenges. For some, the starting point is a modern services-based core that can be more readily modernized to offer platform banking services, perhaps with a big-bang approach. Other banks with legacy monolithic application architectures will need to modernize in a more measured fashion, refactoring their core application over time & piece-by-piece. 

In Part 2 of our Microservices Series, Cloud Migration Strategy: Monolith to Microservices, we outlined several application modernization & migration strategies to phase out parts of monolithic legacy apps as microservices are added piece-by-piece. 

Regardless of which approach is taken, financial institutions with legacy monolithic cores will eventually need to re-engineer their core banking architecture to keep up with fast-paced platform banking trends. We offer a 3-tiered roadmap for migrating legacy applications from monolithic to microservices. This roadmap includes incorporating a service mesh, API gateway & eventual legacy core modernization.  (See Figure 1.)

Monolith to Microservices

A transition from monolithic to microservices has its challenges, even when a phased approach is taken. Managing the increased operational overhead and escalating complexity during the transition is critical. We offer several strategies to help manage this chaotic transition. 

By adopting the proper strategy, banks can start offering some leading platform banking products & services almost immediately, even those with monolithic legacy platforms. Key to this strategy is the addition of a Service Mesh, combined with an API Gateway. 

Service Mesh: Near-Term Solution

Although microservices are well suited to most banking applications, there are challenges at scale. By deploying a service mesh early in the application modernization process, dev teams can address increasingly complex communication between services, a strategy that pays off later as the architecture scales. 

What is a service mesh? A service mesh is a configurable, low-latency infrastructure layer that manages the high volume of communication between microservices.  In a microservices architecture, one service must request data from many other services. As microservices scale, this can become a challenge to manage. A properly designed service mesh architecture automatically routes requests between services & optimizes the interactions.   

Why Service Mesh?

As the complexity of a microservices architecture increases, the root cause of problems can be difficult to pinpoint. A service mesh enhances problem identification & mitigation. Furthermore, service meshes measure service-to-service communication quality, so rules for effective communication between microservices can be established & proliferated throughout the platform. This increases efficiency & reliability of the entire platform. 

Service meshes also allow multiple software development teams to work in the same infrastructure more independently. Perhaps the biggest drawback of microservices architectures is the continuous need to integrate with many other microservices even when the simplest features are introduced. Service meshes solve this issue by providing a standard format for the communication infrastructure, so developers don’t have to worry about these tedious integration tasks. The code ends up being simplified as well. In a large financial company where there might be dozens (or hundreds) of developers, this advantage is significant.

Service Mesh Implementations

There are several implementations of a service mesh. The most common involves a sidecar proxy attached to each microservice, which serves as its contact point. Routing service requests through these proxies simplifies the data path between microservices. (See Figure 2 below.)

Figure 2, Service Mesh: Sidecar Proxy
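The sidecar idea can be illustrated with a toy, in-process stand-in: a wrapper that sits between a caller and a service, retrying transient failures and recording latency, much as a real sidecar proxy (e.g. Envoy) does out of process. The retry count and error type here are illustrative assumptions.

```python
import time

class Sidecar:
    """Toy sidecar proxy: wraps a service callable, retries transient
    failures, and records per-call latency. A simplified, in-process
    stand-in for what a real out-of-process sidecar provides."""
    def __init__(self, service, retries=2):
        self.service = service
        self.retries = retries
        self.latencies = []           # one entry per attempt, in seconds

    def call(self, request):
        last_err = None
        for _ in range(self.retries + 1):
            start = time.perf_counter()
            try:
                result = self.service(request)
                self.latencies.append(time.perf_counter() - start)
                return result
            except ConnectionError as err:   # retry only transient errors
                self.latencies.append(time.perf_counter() - start)
                last_err = err
        raise last_err
```

Because every request flows through the proxy, retries, metrics, and (in real meshes) mTLS and traffic policy all live in one place instead of being re-implemented in each service.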

You may ask, “What about my Kubernetes service mesh?”  To be sure, container orchestration platforms like Kubernetes offer basic management capabilities that are more than adequate for some applications; in a way, they offer primitive service meshes. However, a more robust service mesh deployed alongside Kubernetes’ built-in services extends these capabilities & offers additional functionality, such as management of security policies & load balancing, which is critical for complex banking/fintech applications. 

API Gateway: Added for Innovation Speed & Security

The combination of an API gateway with a service mesh can provide a powerful blend of speed, security, agility & manageability. As microservices scale, the number of endpoints keeps increasing & each endpoint must be secured. An API gateway creates a security proxy layer, allowing threats to be detected before they reach your applications & data. In addition, APIs can be exposed to external partners & developers to accelerate the development of services. 

This does not solve the inherent scalability problem of a legacy monolithic core architecture, but new services & features can be added by internal & external teams using a service mesh & API Gateway. Most importantly, platform banking features can be developed & deployed while still relying on a legacy core, until the timing is right for the complete legacy core modernization. 
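The gateway’s two roles described above, a security proxy in front of every endpoint plus a single routing front door, can be sketched minimally as follows. The API keys, routes, and status codes are illustrative placeholders, not a real gateway product’s API.

```python
class ApiGateway:
    """Minimal API-gateway sketch: authenticate by API key, then route by
    path prefix to a backend handler."""
    def __init__(self, api_keys, routes):
        self.api_keys = api_keys      # set of valid keys
        self.routes = routes          # {"/payments": handler, ...}

    def handle(self, path, api_key):
        if api_key not in self.api_keys:
            return 401, "unauthorized"     # stop threats at the proxy layer
        for prefix, handler in self.routes.items():
            if path.startswith(prefix):
                return 200, handler(path)
        return 404, "no route"
```

Real gateways add rate limiting, request transformation, and threat detection on top of this basic authenticate-then-route flow.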

Keep in mind: if the communication infrastructure is built so that every public request must go through the API gateway, you will need to specify these routing rules. This can become a serious bottleneck, so communication must be fluid between your various teams. Most importantly, the team responsible for creating these API rules must scale along with the developers introducing new features to the architecture; otherwise it’s chaos. Resource planning and PM teams need to be up to the task.

Microservices Core: A Future Necessity

The main goal of converting the banking core application to a microservices architecture is to offer leading-edge services to customers. For this to happen, the speed & agility of microservices are needed. 

Banks may also wish to offer services from third parties, rather than re-invent the wheel for each new service. Although some of this can be done with the “near-term architecture” outlined in this article, there are limitations that may become severe. 

Fintech innovation from startups, along with ever-increasing customer expectations, means established financial services players must adapt & change the way they do business with their customers. Delivering on these new requirements will be difficult with most legacy systems. In the long run, banks will likely move to a next-generation microservices-based core platform in combination with a service mesh + API gateway.

Cloud App Developers, LLC offers Legacy Application Modernization Services.  We are Microservices Experts with a mastery of Microservices Design Patterns. To assist in this effort, we also have domain experts in fintech, telecom, insurtech and many other industries.  To learn more about our Microservices Expertise, visit Cloud App Developers, LLC or contact wes@cloudappdevelopers.com

Copyright © 2021 Cloud App Developers, LLC. All Rights Reserved.

Cloud Migration Strategy: Monolith to Microservices

If your application is “cloud-ready”, then cloud migration can be quite painless. However, this is not the case for most legacy monolithic applications.  Several cloud migration strategies have emerged to handle each type of scenario, with best practices evolving every day. Not every strategy reviewed here will work for every monolithic application, and cloud migration consulting may be needed to ensure proper planning.  The benefits of cloud migration are profound, but the costs can be high.  For some, the cost of not migrating to the cloud will prove to be even higher.  

The term “Legacy Application” conjures up visions of COBOL, C, or some other arcane programming language. Ironically, these legacy systems are often a business’s mission-critical apps and can be difficult to replace. For these companies, a cloud-native rewrite of their application is either too risky or impossible. However, several app modernization strategies can partially leverage the advantages of microservices and enable the integration of new technologies.


Cloud Migration Strategies

  1. Lift & Shift:  Also known as “Rehosting”, this can be a good option for migrating applications that are cloud-ready to some degree.   
  2. Lift, Tinker & Shift:  Making a few technology stack upgrades before migrating to the cloud (without changing the application’s core architecture) is also known as “replatforming”.  This can provide accelerated cloud migration and tangible cost savings.
  3. Partial Refactoring:  Partial refactoring is when specific portions of an application are modified to take advantage of the cloud platform.  This enables some of the new functionality of microservices without the cost & complexity of a complete refactor or rewrite.
  4.   Complete Refactoring:  Short of a complete rebuild of your application in cloud-native form, “refactoring” can be a viable option for moving significant functionality to the cloud.  A gradual approach is possible (and advised): new microservices can be tested quickly without impacting the reliability of the existing monolithic application, and you can use microservices to add new features through the legacy API as you refactor the legacy platform one piece at a time.  This is the least measured of these strategies, but still far less effort than a complete rewrite. 
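The gradual approaches above (partial and complete refactoring) commonly rely on strangler-fig routing: a thin routing layer sends already-migrated request paths to new microservices while everything else still hits the legacy monolith. A minimal sketch, with illustrative path prefixes and handlers:

```python
def route(path, migrated_prefixes, microservice, legacy):
    """Strangler-fig routing sketch: requests whose path has been migrated
    go to the new microservice; everything else goes to the legacy app.
    As more prefixes migrate, the legacy monolith handles less and less."""
    if any(path.startswith(prefix) for prefix in migrated_prefixes):
        return microservice(path)
    return legacy(path)
```

Growing `migrated_prefixes` over time is what lets you retire the monolith piece by piece without a risky big-bang cutover.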

Application Migration to Cloud: There is No Need to Hurry

Regardless of which application modernization technique you use, retiring parts of your legacy monolithic application can be done thoughtfully over time, making it easier to implement within your organization.  You can determine which parts of your application are easiest to refactor, and execute a little bit at a time. Also, the critical parts of your application not suitable for the cloud can be left on-premise and accessed through well-defined APIs.  Finally, you may decide to “retire” rarely used functionality to lower your total cost of ownership (TCO). 

The right approach for you will likely depend on several factors, including:

  • Cost & time constraints
  • How well-suited your application is to cloud migration (see “When Not To Do It” below)
  • Scalability requirements
  • A strong business need for functionality not possible with the existing application
  • Agility requirements

Application Modernization: When Not To Do It

Not all applications are right for the cloud. This is especially true when you consider containerizing and service-enabling the applications. Below are a few guidelines:

  1. The more technical debt you have, the harder it will be to get your application “cloud-ready”. Containers and services leverage a specific set of microservices patterns, and it may be easier and cheaper to start anew if your application does not incorporate these patterns.  This is often the case with companies that have grown by acquisition, having stitched multiple platforms together with countless patches and complex APIs.
  2. Tightly coupled monolithic applications are typically a poor choice for cloud migration.  Decoupling the data from the application layer is required to benefit from microservices, and this often requires a rewrite of most of the application.  
  3. Modernizing outdated applications built on old languages and databases may also be more trouble than it’s worth.  It may be cheaper and less risky to do a cloud-native rewrite in these instances. Although new tools are being developed to “easily modernize” these types of applications, proceed with caution, as they have significant limitations you should consider.

Cloud Migration for monolithic applications can be daunting, but with the right strategy and thoughtful planning you can mitigate risks, make incremental improvements, and get upper-management support throughout the cloud migration journey. Rehosting, Replatforming and Refactoring are each viable options, depending on your situation. 

Cloud App Developers, LLC offers Cloud Migration Services, as well as Legacy Application Modernization.  We are Microservices Experts with a mastery of Microservices Design Patterns.  To learn more, visit Cloud App Developers, LLC or contact wes@cloudappdevelopers.com

Microservices Solution To The Monolithic Problem

Microservices are still the buzz in the software development world.  Why are so many companies migrating to microservices based architectures?  What is the Microservices Solution To The Monolithic Problem? We begin by analyzing the weaknesses and limitations of monolithic architectures.

Software components are tightly coupled inside monolithic architectures, and changes to a single line of code can affect the entire application.  Minor system modifications can require re-deployment of the entire system, turning small, incremental releases and bug fixes into complex, time-consuming efforts, with manual testing of the entire application taking several weeks for each release.  Also, if a small part of the system with specific functionality needs scaling, you may need to scale the whole application.  Finally, as all your code lives in one place, the resource consumption of your most resource-hungry functionality drives up total costs: peak load requirements for one function may be massive overkill for others, making the whole system much less efficient.  Cross-team coordination of these efforts is very challenging. 

In summary, the weaknesses of monolithic architectures include:

  • Difficult to innovate
  • Difficult (and expensive) to scale
  • Difficult to test
  • Low release velocity
  • Difficult to coordinate across teams

Microservices architectures solve these problems by breaking large applications down into small blocks of code that are segmented by specific areas of business logic (or application functionality). These blocks communicate through simplified APIs and look like a single application to end-users.

Typically, code blocks are stored separately, which means they can be created, deployed, tested and updated independently. If one block fails, a “known good” version can be swapped in to restore app functionality. This “hot swap” capability greatly enhances app stability during updates.
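The “hot swap” idea above can be sketched as a tiny service registry that remembers the last known-good version of each service. The service names, versions, and handlers below are illustrative placeholders; real deployments would do this via their orchestrator’s rollback machinery.

```python
class ServiceRegistry:
    """Sketch of the 'hot swap': each service remembers its last known-good
    version, which can replace a failing deployment without touching the
    other services in the application."""
    def __init__(self):
        self.active = {}              # service name -> (version, handler)
        self.known_good = {}

    def deploy(self, name, version, handler):
        if name in self.active:       # previous version becomes known-good
            self.known_good[name] = self.active[name]
        self.active[name] = (version, handler)

    def rollback(self, name):
        """Swap the known-good version back in; returns the restored version."""
        self.active[name] = self.known_good[name]
        return self.active[name][0]
```

Because each service rolls back independently, a bad deployment of one block never forces a redeployment of the whole application, unlike the monolithic case.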

Because code is in smaller blocks, it is easier to predict failure scenarios and to create more comprehensive testing. Regression testing of changes is typically limited to a handful of function points, greatly improving release velocity (by as much as 90%). 

Microservices provide real flexibility, as myriad programming languages, databases, and hardware and software environments can be used in the creation of your application.

In summary, the benefits of Microservices architectures include:

  • Easier deployment and maintenance
  • Increased release velocity
  • Increased application quality
  • Reduced downtime
  • Reduced cost at scale
  • Flexible tech stack and infrastructure

If you require a rapidly scalable, easily deployed, resilient application to compete in today’s dynamic application environment, Microservices may be the solution. Hybrid solutions (where you can use key blocks from your monolithic app) are also feasible if you need to take a more measured migration to microservices. Of course, there are challenges. Ultimately, the benefits are significant, especially at scale. We hope you have found this article, Microservices Solution To The Monolithic Problem, interesting and helpful. Our subject matter experts are happy to answer any questions you might have about realizing your Microservices vision.