The benefits offered by the Microservice chassis pattern
The microservice architecture enables rapid, frequent and reliable releases of large, complex applications. It also makes it easier for an organisation to keep evolving its technology stack. There are many good reasons to use microservices, but there are drawbacks too: their benefits come with the added complexity of distributed systems.
When creating a microservices-based solution, we must handle cross-cutting concerns such as configuration management, logging, health checks, metrics, service registration and discovery, circuit breakers and so on.
Coordinating distributed systems involves a lot of boilerplate, and leveraging a Microservice chassis framework to handle cross-cutting concerns allows developers to stand up microservice-based applications without the burden of implementing those patterns themselves.
Below, we explore some of the capabilities offered by the Microservice chassis pattern.
As the number of microservices grows, so does the number of separate configuration files, even more so when we take into consideration all the required environments, such as development, testing, staging and production. Keeping hundreds of configuration files in sync quickly becomes a daunting task.
A Configuration Server is an externalised application configuration service, which gives us a central place to manage microservices’ external properties across all environments. As an application moves through the deployment pipeline from development to test and into production, we can leverage the Config Server to manage the configuration between environments, ensuring the application has everything it needs to run whenever it is promoted.
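To make this concrete, here is a minimal sketch of what externalised configuration could look like with Spring Cloud Config; the service name, port and repository URL are illustrative assumptions, not prescriptions:

```yaml
# application.yml of a client microservice (illustrative names)
spring:
  application:
    name: order-service            # used to look up order-service.yml in the config repo
  config:
    import: "configserver:http://localhost:8888"   # where to reach the Config Server
---
# application.yml of the Config Server itself (hypothetical Git repo)
server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo   # per-environment properties live here
```

With a set-up along these lines, promoting the application between environments is a matter of the server handing out a different profile of the same named configuration, rather than baking properties into each deployable.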
An important feature of cloud-native applications is auto-scaling, i.e. the number of instances of a microservice should vary based on its workload. However, this raises a question: how does each microservice know how many instances of the other microservices are running, and where, at any given time? Hardcoding URLs within microservices is clearly not an option.
Using Service Discovery
Ideally, we want to change the number of instances of microservices based on the workload, and make them dynamically aware of each other. That’s where the concept of Service Discovery comes to the rescue.
Service Registry provides our microservices with an implementation of the Service Discovery pattern, one of the key tenets of a microservices-based architecture, or any service-oriented architecture for that matter.
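To illustrate the mechanics, here is a minimal in-memory sketch of the register/deregister/discover life-cycle; real registries such as Eureka or Consul add heartbeats, health checks and replication on top of this, and all names and addresses below are illustrative:

```java
import java.util.*;
import java.util.concurrent.*;

// A minimal sketch of the Service Registry pattern: instances register on
// startup, deregister on shutdown, and consumers query for live locations.
public class ServiceRegistry {
    private final Map<String, Set<String>> instances = new ConcurrentHashMap<>();

    // Called by a service instance on startup.
    public void register(String serviceName, String location) {
        instances.computeIfAbsent(serviceName, k -> ConcurrentHashMap.newKeySet())
                 .add(location);
    }

    // Called by a service instance on graceful shutdown.
    public void deregister(String serviceName, String location) {
        Set<String> known = instances.get(serviceName);
        if (known != null) {
            known.remove(location);
        }
    }

    // Service discovery: consumers ask for all known locations of a service.
    public List<String> discover(String serviceName) {
        return List.copyOf(instances.getOrDefault(serviceName, Set.of()));
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("order-service", "10.0.0.1:8080");
        registry.register("order-service", "10.0.0.2:8080");
        registry.deregister("order-service", "10.0.0.1:8080");
        System.out.println(registry.discover("order-service")); // only the surviving instance
    }
}
```

Note how consumers never hardcode a peer's address: they hold only the service name, which is what gives the architecture its location transparency.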
A gateway is a special node in a computer network, a key stopping point for data on its way to or from other networks. When a server acts as a gateway, it operates as a Reverse Proxy. When implementing complex applications, we can organise our solution in tiers, and the same principle applies to microservice architecture: we can extract cross-cutting concerns from all microservices and implement them in a separate tier.
An API Gateway comes in handy to implement and enforce many non-functional requirements (NFRs) such as:
- authentication and authorisation
- rate limiting or throttling
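As an illustration of the second point, rate limiting is often implemented with a token bucket per client. The sketch below is a deliberately simplified, single-node version; production gateways typically keep these counters in a shared store so that every gateway instance enforces the same limit:

```java
// A minimal token-bucket sketch of the throttling an API Gateway might
// enforce per client: a bucket holds up to 'capacity' tokens, refilled at a
// steady rate; each request spends one token or gets rejected.
public class TokenBucket {
    private final int capacity;
    private final double refillPerSecond;
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(int capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity;              // start with a full burst allowance
        this.lastRefillNanos = System.nanoTime();
    }

    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Top the bucket up in proportion to the time elapsed, capped at capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefillNanos) / 1e9 * refillPerSecond);
        lastRefillNanos = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false; // over the limit: the gateway would answer 429 Too Many Requests
    }

    public static void main(String[] args) {
        TokenBucket limit = new TokenBucket(3, 1.0); // 3-request burst, 1 req/s refill
        for (int i = 1; i <= 5; i++) {
            System.out.println("request " + i + " allowed: " + limit.tryAcquire());
        }
    }
}
```

Because this runs at the gateway tier, individual microservices stay free of throttling logic, which is exactly the separation the chassis pattern is after.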
API Gateway vs Service Registry
A lot of people seem to confuse these two different concepts in microservice architecture. As I mentioned before, an API Gateway might work as a single entry point for all external clients. It handles requests in one of two ways:
- Some requests are simply routed to the appropriate service — the Proxy design pattern
- Requests can be fanned out to multiple services — the Façade design pattern
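To make the routing case concrete, here is a sketch of what such routes could look like in Spring Cloud Gateway's YAML configuration; the route ids, service names and paths are assumptions made up for the example:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: orders
          uri: lb://order-service      # lb:// resolves instances via the Service Registry
          predicates:
            - Path=/orders/**
        - id: payments
          uri: lb://payment-service
          predicates:
            - Path=/payments/**
```

Each route here is a straight proxy: a path predicate matched, then the request forwarded to one downstream service. The façade case, composing responses from several services, is typically implemented in gateway code rather than declarative routes.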
From a security standpoint, handling all ingress network traffic means that an API Gateway can act as a Policy Decision Point (PDP) as well as a Policy Enforcement Point (PEP).
A Service Registry contains a list of services and their instances, along with their locations. Service instances are registered with the service registry on startup and deregistered on shutdown. Service consumers query the service registry to find available instances of a service — service discovery. The registry is therefore instrumental in providing location transparency.
It is worth noting that the use of a Service Registry does not mean that all requests must flow through it. Services can interact among themselves using a technique called client-side load balancing after consulting the Service Registry. Even with server-side load balancing, the load balancer might be implemented by a separate component in the architecture rather than by the registry itself, consulting the registry only periodically for performance reasons. In this set-up, the load balancer acts as the PEP and the registry as the PDP.
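A minimal sketch of the client-side load balancing technique just mentioned: the consumer obtains the instance list from the registry and then picks an instance locally, here with a simple round-robin policy (the instance addresses are illustrative):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Client-side load balancing sketch: the calling service, not a central
// balancer, decides which instance receives the next request.
public class RoundRobinBalancer {
    private final AtomicInteger counter = new AtomicInteger();

    // 'instances' would come from a Service Registry lookup,
    // typically cached and refreshed between calls.
    public String choose(List<String> instances) {
        if (instances.isEmpty()) {
            throw new IllegalStateException("no instances available");
        }
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }

    public static void main(String[] args) {
        List<String> instances = List.of("10.0.0.1:8080", "10.0.0.2:8080");
        RoundRobinBalancer balancer = new RoundRobinBalancer();
        for (int i = 0; i < 4; i++) {
            System.out.println(balancer.choose(instances)); // alternates between the two
        }
    }
}
```

Libraries such as Spring Cloud LoadBalancer package this idea up for you; the point of the sketch is only that the selection happens inside the consumer, so inter-service traffic never transits the registry itself.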
In summary, an API Gateway is responsible for routing all external requests to our microservices; this type of network traffic tends to be North-South, although APIs might also be exposed to different teams within the data centre or private cloud. A Service Registry is responsible for service discovery. When implementing server-side load balancing, a registry might handle all inter-service communication; this type of network traffic is always East-West.
Of course, one can decide to allocate both capabilities to the same component when designing a particular solution, but that is an architectural decision rather than a constraint of the microservice architecture.
Wrapping up — Spring Cloud
The good news is that Java developers do not need to bother with all this complexity as all the above patterns are readily available through the brilliant “umbrella” project called Spring Cloud.
As stated on its page, using Spring Cloud developers can quickly stand up microservices that work well in any distributed environment, from the developer’s laptop, through bare-metal data centres, to managed platforms such as Cloud Foundry and Kubernetes.