There are a number of purported benefits of microservices. The problem isn't so much that microservices don't provide these benefits; rather, they aren't addressed exclusively by microservices. Most, if not all, of these things can be achieved, to some extent, with a monolith. When someone lays out the pros of a microservice architecture, the implication is that in order to solve these problems, you must adopt a microservice model. The reality is that microservices allow you to buy into an additional layer and degree of flexibility to address these concerns. They are notoriously expensive to build and maintain.

If you need that additional flexibility? Great. Maybe that additional cost, on balance, is going to pay off. Maybe you should consider them seriously. If you don't? Well, you just over-engineered your stack and have seriously hampered your team's ability to deliver value to customers.

Let's look at some of the most commonly put forward benefits of microservices and consider how we can mitigate these concerns with a monolith.

Running microservices means each function of your application is running on its own resources that can be scaled independently of each other. Of course, this is going to give you a high level of control over exactly how many and precisely which type of resources are assigned to each function. But do you really need that level of control? Do the different functions of your application experience different levels of load? Do they tend to scale at different rates? Do they have different requirements in terms of CPU, memory, storage, and GPU?

Throw Boxes at It

For a lot of teams, it is going to be cheaper to cover the disparities in these resourcing concerns by simply increasing the size or number of boxes available across the board. Most of the time, it is more cost-effective not to optimize infrastructure to within an inch of its life. Odds are it is easier to resolve performance issues and bottlenecks in your monolith than it is to transition to a new architecture pattern. The details of this are stack specific, but you shouldn't have to go too far to find ideas on how to tighten up your app.

Route Traffic to Independently Scalable Clusters

If you have more than a single server, you are going to be running some kind of load balancer in front of it. You can use the configuration of this load balancer to route traffic to independently scalable clusters of your app instances. You can also pull asynchronous tasks into background jobs with independently scalable queues. Ensure that you have enough queues to give you granular control over the number of boxes necessary to keep your queues down and your infrastructure costs reasonable.

The ability for a single feature of your application to go down without taking the rest down with it is a big bonus of a properly designed microservices architecture. The idea of routing different kinds of traffic to clusters for scale can also provide a measure of protection against faults. Think about where most of your historical faults come from. This will give you an indication of how best to route your traffic.

If you struggle with "noisy neighbors", you might want to route based on account ID. If you have a brittle but secondary function that has a habit of bringing everything down, maybe route its endpoint to its own cluster of app instances. If you have problematic background jobs, stick them in their own queue that doesn't impact the other jobs. If you can prevent or at least lower the number of faults, you have to worry a lot less about isolating them. Nurturing a healthy test culture to ensure confidence in the quality of what you are shipping is key: not only in its ability to reliably deliver the required functionality, but also in its ability to do so under load in production.

Programming Language and Technology Agnostic
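To make the traffic-routing idea concrete, here is a minimal sketch of the kind of routing rule a load balancer encodes: requests for a brittle or heavy feature go to their own cluster of app instances, and everything else goes to the main cluster. The path prefixes, cluster names, and hosts are hypothetical, and in practice this logic lives in your load balancer's configuration rather than application code.

```python
# Hypothetical routing table: each cluster is a pool of app instances
# that can be scaled independently of the others.
CLUSTERS = {
    "web": ["web-1:8000", "web-2:8000"],
    "reports": ["reports-1:8000"],  # the brittle feature, isolated
}

# Checked in order; first matching prefix wins.
ROUTES = [
    ("/reports", "reports"),  # route the problematic endpoint away
    ("/", "web"),             # default: everything else
]

def pick_cluster(path: str) -> str:
    """Return the name of the cluster that should serve this request path."""
    for prefix, cluster in ROUTES:
        if path.startswith(prefix):
            return cluster
    return "web"
```

If the reports feature falls over, only the `reports` cluster degrades; the main `web` cluster keeps serving traffic.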
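The "route based on account ID" tactic for noisy neighbors can be sketched with a stable hash, so a given tenant always lands on the same cluster and a noisy one can only degrade its own neighborhood. The function and cluster names here are illustrative, not from the original text.

```python
import hashlib

def cluster_for_account(account_id: str, clusters: list) -> str:
    """Deterministically map an account to one cluster.

    Hash-based rather than random, so the same tenant always hits the
    same cluster and a noisy neighbor's load stays contained there.
    """
    digest = hashlib.sha256(account_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(clusters)
    return clusters[index]
```

A known-noisy account could also be pinned to a dedicated cluster with an explicit override table before falling through to the hash.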
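The same isolation idea applies to background jobs: named queues, each drained by its own independently scalable pool of workers, so a slow or problematic job class cannot starve the rest. This toy in-memory broker is only a sketch of the concept; real systems would use a proper job backend, and the queue names are made up.

```python
from collections import defaultdict, deque

class QueueBroker:
    """Toy broker: one deque per named queue.

    Because each queue is separate, you can point more (or fewer)
    workers at any one of them without touching the others.
    """

    def __init__(self):
        self.queues = defaultdict(deque)

    def enqueue(self, queue_name, job):
        self.queues[queue_name].append(job)

    def dequeue(self, queue_name):
        """Pop the oldest job from a queue, or None if it is empty."""
        q = self.queues[queue_name]
        return q.popleft() if q else None
```

Problematic jobs go in their own queue (say, `"slow-exports"`), while quick ones (say, `"mailers"`) keep flowing unimpeded.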