I’ve written something like this on my old (defunct) blog before, but new thoughts and realizations required that I revisit it. The topic? Microservices.
The trend of building every single application as a series of tiny web services that talk to each other is growing in prominence, and it is being picked up by a lot of new professional developers. The problem is, many people picking up this idea have not seen what came before, and just assume it is a “best practice”.
Just as I said earlier that waterfall was actually great, I should also say that 3-tier architectures and the dreaded ‘monolith’ were not things to be afraid of. Microservices are, in essence, a religious belief that we should approach with skepticism - as is true with many things in software. Yes, they avoid some difficult people problems in software that people dislike. No, we shouldn’t try to avoid those things - they are important.
Just like the Scrum movement demonized waterfall, microservices proponents have unfairly demonized the idea of the monolith in their excitement for something new. Instead of hating it, realize that the clean and shiny monolith in 2001: A Space Odyssey is responsible for much of human evolution, and Oh My God It Is Full Of Stars. Meanwhile, microservices are more like HAL 9000: often just bad ideas that sound good, where we only find out what we have done later, when it is too late - like maybe after it killed Frank.
A Technical Solution To A People Problem
My belief is that microservices are mostly present because teams want to make their own choices, dislike code review or overbearing ‘architects’ above them, and, to a lesser extent, want to use different and newer languages. This feels good, and while folks may realize there is nothing smart about this architecture choice for the whole system, many developers are content to care only about their own services and not the company or stack as a whole. They ostrich and accept what comes, because the autonomy is nicer than the alternative.
Now, it’s absolutely apt to use the right tool for the right job, but if a company only uses 4 programming languages, it would really only need 4 services. It’s not uncommon for relatively focused application stacks to have dozens or even a hundred microservices today, and we must ask why.
Frankly, code review is contentious when ego and self-esteem get in the way. Centralized architecture and standards, when put in the hands of the wrong few, can be very counterproductive, and can feel unfair to everyone else who enjoys software design and doesn’t get to play in the sandbox. This is true. However, isn’t this an argument for smaller teams, less engineering bloat, and better hiring instead? On the other hand, a good architect and technical leader can keep code from becoming unmaintainable goo and ensure it remains modular, enabling efficient future development. These are not things we want to give up.
As we seek to avoid code review, we also create a world where a change in one microservice may require hundreds of different teams to make a corresponding change. Code sharing across languages is completely lost, as basic functions must be reinvented - sometimes with unfortunately differing business logic!
The Technical Solution Is A Step Back
If we think about the fastest way to execute some code, that is a function call, not a web request.
If we think about the best way to detect problems at compile time, that is by using a library in a compiled language.
If we think about the best way to understand why something failed, that is a full stack trace.
If we think about resilient systems, the most resilient systems are the ones with the least number of moving parts. The same is true for the fastest systems.
If we think about deployment and management and upgrades, the simplest systems to deploy and maintain also have the least number of moving parts.
Microservices reverse all of these things, leaving us with inefficient communications, less error detection during development, more ways to introduce errors, a massively worse debugging experience that relies on searching distributed log-analysis tools, systems that crash more often, and ever more management software to deploy.
A quote I read once was “we replaced our monolith with microservices so every outage could be like a murder mystery”. While this rings true, it is also true that I see many friends swept up by constant systems outages in microservices environments - outages that were absolutely not the norm in the N-tier deployments I used to work on. Those were rock solid, even when the code was not. We have new cascade failures that weren’t things before. We have to think about “circuit breakers”, which we didn’t before.
They also experience more crunch time in the development process, which I attribute to those changes in engineering culture. All these tiny code bases doing their own thing don’t work as well as one strong code base working together. Interpersonal communication gets exponentially harder, and correctness becomes something you must know (am I using this service correctly?) rather than something the code just enforces by giving you only one way to do it.
On the communications front, internal web services are often doubly inefficient by using REST rather than binary transmissions. There’s no reason for any of this, and if a request passes through multiple microservice hops, it all adds up and slows down the system. Even just a conversion to JSON and back is wasted effort, more so if done dozens of times.
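To make that overhead concrete, here is a small sketch (the payload and function names are hypothetical, stdlib only) comparing an in-process function call with the JSON encode/decode work a single internal REST hop adds, before any network latency is even counted:

```python
import json
import timeit

def total_price(items):
    """The actual business logic: trivially cheap in-process."""
    return sum(i["qty"] * i["unit_price"] for i in items)

# Hypothetical order payload, the kind that crosses a service boundary.
order = {"items": [{"qty": n, "unit_price": 9.99} for n in range(50)]}

def direct_call():
    # A library call inside one process: no serialization at all.
    return total_price(order["items"])

def simulated_rest_hop():
    # What every internal REST hop pays, even on localhost:
    wire = json.dumps(order)             # serialize on the caller
    received = json.loads(wire)          # parse on the callee
    result = total_price(received["items"])
    # ...and the response makes the same round trip back.
    return json.loads(json.dumps({"total": result}))["total"]

n = 10_000
t_direct = timeit.timeit(direct_call, number=n)
t_hop = timeit.timeit(simulated_rest_hop, number=n)
print(f"direct: {t_direct:.3f}s  json round-trip: {t_hop:.3f}s")
```

On typical hardware the JSON version is several times slower than the direct call - and this sketch ignores sockets, TLS, load balancers, and retries entirely. Multiply by a dozen hops per user request and it stops being noise.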
Microservices also risk security problems by introducing more remote access points or chances that service teams use outdated and different library versions.
Perhaps people say it is done for utilization? It is important to remember that containerization became popular because of a pivot by a platform-as-a-service vendor (dotCloud, which became Docker): they hosted a lot of applications with almost no utilization, so they needed a way to cram services together on single instances. In the usual web application, this is not a problem, because load testing ensures each VM is tuned to its autoscaling parameters, and then it grows. Containerization as a solution to utilization problems is thus largely a myth for possibly 99% of all workloads; the service tier(s) can be autoscaled or just scaled manually.
With all of this, I can’t think of any reason we got here other than that we didn’t want to communicate as people. Turning our computer architectures from streamlined, simple things into a spider web of organic goo that just happened is not something we would do on purpose.
We pretend we designed these cloud architectures, but the truth is clear: we designed nothing and just let things happen - and then it is very hard to go back.
Pretty soon you end up with 8 different datastores and things duplicated in weird places, and soon after you don’t know what you have at all.
Microservices allow you to keep adding staff without suffering the consequences of what you have added. If it was hard to manage the monolith and keep adding to it, maybe that difficulty was keeping you at the right size until you got your work figured out.
Maybe you should be subtracting and improving, not just adding. Microservices let us break from that inertia by letting each team pretend the system is small and elegant, but this is an illusion - the whole is growing toward the opposite.
An Elegant Weapon For A More Civilized Age
While microservices talk likes to pretend the alternative is some horrific “monolith”, we never really had “monoliths” in the development I experienced. What we had were tiered architectures of various kinds. The number of tiers does not really matter, but in a world of 200 microservices, imagine instead that there are only a few.
Web requests can be managed by one type of instance - one EC2 image or whatever. Anything that can be handled within the lifecycle of one request is handled there, and these instances are horizontally scaled behind a load balancer.
Asynchronous tasks are managed by a service tier, often connected to a message bus. There may be one of these for each programming language, or maybe a few more, and that would be ok. But because these are horizontally scaled and only ask for new jobs when they have free compute resources, one type of instance can contain code for any manner of asynchronous jobs.
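That pull model can be sketched like this, with Python’s stdlib queue standing in for the message bus and made-up job names for illustration: workers ask for the next job only when they have free capacity, so one instance type can carry the code for any manner of asynchronous work.

```python
import queue
import threading

# Stand-in for the message bus; in production this would be
# something like RabbitMQ or SQS.
bus = queue.Queue()

# Hypothetical job handlers - any job type lives in the same code base.
def send_email(to): return f"emailed {to}"
def resize_image(name): return f"resized {name}"

HANDLERS = {"send_email": send_email, "resize_image": resize_image}
results = []

def worker():
    while True:
        job = bus.get()          # pull the next job only when free
        if job is None:          # sentinel: shut down cleanly
            break
        kind, arg = job
        results.append(HANDLERS[kind](arg))
        bus.task_done()

# One instance type, horizontally scaled: just start more workers.
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()

bus.put(("send_email", "alice@example.com"))
bus.put(("resize_image", "header.png"))
for _ in threads: bus.put(None)   # one sentinel per worker
for t in threads: t.join()
print(sorted(results))
```

Nothing here needs to know another server’s address - producers and workers only know the bus, which is why service discovery never comes up in this architecture.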
Code that needs to be shared between the asynchronous services and the web tier should be kept in libraries used by both - not exposed as a service call.
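A minimal sketch of that idea (module and function names are made up for illustration, with comments marking where the file boundaries would be): both tiers import the same library, so there is exactly one copy of the business rule and no network call to reach it.

```python
# pricing.py -- the shared library, versioned with the code
def quote(subtotal, tax_rate=0.08):
    """One copy of the business rule, used by every tier."""
    return round(subtotal * (1 + tax_rate), 2)

# web_tier.py -- synchronous request handler
def handle_checkout_request(cart_total):
    return {"quote": quote(cart_total)}     # a function call, not an RPC

# worker_tier.py -- asynchronous invoice job
def generate_invoice_job(cart_total):
    return f"invoice total: {quote(cart_total)}"

print(handle_checkout_request(100.0))
print(generate_invoice_job(100.0))
```

If `quote` ever changes its signature, both callers fail the build or the test suite immediately - there is no chance of two services quietly computing tax two different ways.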
Errors are reduced because they can be caught earlier, at compile time and with unit tests.
When changes are made, they can be made in a library, and if the API of a function changes, it is impossible to build/deploy the code until it is fixed.
Do we have to release often? No, we do not, because the release process almost never changes and there are so few components.
Do we need to keep up to date with the latest in container-wrangling software, upgrading it and adopting service discovery, ingress solutions, service meshes, and the rest? We do not. These are problems created by overcomplicated architectures. The service tiers can communicate over a message bus, and there is no need to know the address of any server.
So people wanted choice, to be able to pick whatever library they wanted, and so on - and for some, this was the path of least resistance.
However, in doing so, we overcomplicated our IT infrastructures, lost efficiency in our cloud deployments, and made debugging an utter hellscape.
Yet, it was new, and novel, and it felt like minimalism - but it wasn’t. Minimalism is about elegance, clean architectures that are easy to draw and explain, and the right number of moving parts.
We let one idea attack another as old and dated and push itself as the better way, when it was just the opposite.
It’s an example where when people push ideas - “microservices, continuous deployment, yay!!!!” - we should always question the statements. Frequently our conference formats present ideas without tradeoffs and context, and we crave newness, and eventually - as we hire new people at increasing rates - we get folks that believe the “new” way was always the better way.
The future is not always a step forward, and it’s important to remember that. We’ve created an environment that has supported billions of dollars in vendor solutions and moved us away from good software architecture - both distributed architecture and internal code architecture - and allows us to play too fast and loose.
I’ve hit a lot of technical-trends/antipattern topics lately, so we’re going to change it up for a little while.
I’ve got some mental health topics on the radar and want to get back into some technical marketing suggestions, all done in a fairly positive way - how to think about slogans, homepages, product pages, and so on. I also have some thoughts on hiring that I want to share too.
If you like this newsletter, subscriptions will always be 100% free, and you get an email when new posts come out, so feel free to hit subscribe if you want.
If you have ideas on things you’d like to hear about, also let me know!