The Case Against Microservices (Sometimes)
Somewhere around 2018, "microservices" became the default answer to every architecture question. Starting a new project? Microservices. Rewriting a legacy system? Microservices. Building a CRUD app for 50 users? Believe it or not, also microservices. The industry collectively decided that monoliths were bad and distributed systems were good. We've been paying for that decision ever since.
To be clear: microservices solve real problems at real scale. Netflix needs them. Google needs them. Your Series A startup with four engineers almost certainly does not. This isn't a blanket condemnation of microservices. It's a framework for deciding when they make sense.
The Costs Nobody Talks About
The pitch for microservices covers independent deployability, technology diversity, and team autonomy. It skips the operational tax you're signing up for.
Each microservice needs its own CI/CD pipeline, its own monitoring, its own alerting, its own logging configuration, its own health checks, its own deployment strategy, and its own on-call runbook. If you have 20 services, you have 20 of each of those things to maintain. And the cost doesn't grow linearly: it grows with the number of service-to-service interactions, because every service also has to handle the failure of every other service it communicates with.
We worked with an organisation that had decomposed a moderately complex business application into 14 microservices. They had six engineers. Those six engineers spent roughly 40% of their time on infrastructure and inter-service communication concerns that wouldn't exist in a monolith. The remaining 60% was split between features and debugging distributed systems issues: race conditions, eventual consistency bugs, and cascade failures. They were shipping features slower than they had with the monolith.
"A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." — Leslie Lamport, 1987. This hasn't changed.
Conditions That Justify Microservices
Microservices are the right call when you have scaling bottlenecks that can't be solved by scaling a monolith vertically. If one component of your system needs 100x the compute of everything else, extracting it makes sense. If different components have fundamentally different scaling characteristics (a write-heavy ingest pipeline and a read-heavy API, for instance), separating them lets you scale each independently.
They also make sense when you have large teams (50+ engineers) working on a single system. At that scale, a monolith becomes a coordination bottleneck: deployment queues, merge conflicts, and teams stepping on each other's code. Service boundaries aligned to team boundaries (the inverse Conway manoeuvre) genuinely reduce coordination costs.
And they make sense when you have genuinely different runtime requirements. A component that needs GPUs for ML inference, another that needs to be close to a database for low-latency queries, and a third that handles bursty event processing: those have legitimate reasons to be deployed separately.
The Modular Monolith Alternative
Most of the benefits people attribute to microservices are benefits of modularity, and you can get modularity without distribution. A well-structured monolith with clear module boundaries, explicit interfaces between modules, and enforced dependency rules gives you most of the architectural benefits without the operational overhead.
In practice, we build these using a few patterns. In TypeScript/Node.js, we use a monorepo with packages that enforce import boundaries — module A can call module B's public API, but can't import its internal types. In Java or Kotlin, ArchUnit tests enforce module dependency rules at build time. In Go, internal packages provide natural module boundaries.
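To make the enforcement concrete, here's a minimal sketch of what that tooling can look like in a TypeScript/Node.js monorepo, using ESLint's built-in no-restricted-imports rule. The @acme/* package scope and the internal/ directory convention are assumptions for the example, not a prescribed layout; a tool like dependency-cruiser can express the same rule.

```js
// .eslintrc.cjs (hypothetical) — shared root config for the monorepo.
// TypeScript parser setup (typescript-eslint) omitted for brevity.
module.exports = {
  rules: {
    'no-restricted-imports': [
      'error',
      {
        patterns: [
          {
            // Block deep imports into another package's internals;
            // only the public entry point (@acme/<module>) is allowed.
            group: ['@acme/*/internal/*', '@acme/*/src/*'],
            message: 'Import other modules only through their public entry point.',
          },
        ],
      },
    ],
  },
};
```

The point is that the boundary is checked by the build, not by code review.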
The key discipline is this: every module communicates with other modules through defined interfaces, never through shared database tables or global state. If module A needs data from module B, it calls module B's function — not module B's database. This means that if you ever do need to extract a service, the boundary is already clean. You're replacing a function call with an HTTP call, not untangling years of shared mutable state.
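Here's a minimal TypeScript sketch of that discipline. The billing and customers modules, and every name in them, are hypothetical; what matters is that billing depends only on the interface the customers module exports.

```ts
// customers/index.ts — the customers module's public API.
export interface Customer {
  id: string;
  name: string;
  tier: 'free' | 'paid';
}

export interface CustomerService {
  getCustomer(id: string): Promise<Customer | null>;
}
// The implementation lives in customers/internal/ and is the only code
// that touches the customers tables.

// billing/discounts.ts — module A calling module B through its interface.
import type { Customer, CustomerService } from '@acme/customers';

export async function applyDiscount(
  customers: CustomerService,
  customerId: string,
  amount: number
): Promise<number> {
  // Ask the customers module, never its database.
  const customer: Customer | null = await customers.getCustomer(customerId);
  // Hypothetical rule: paid customers get 10% off.
  return customer?.tier === 'paid' ? amount * 0.9 : amount;
}
```

If billing is ever extracted into its own service, CustomerService gets an HTTP-backed implementation and applyDiscount doesn't change; the function call becomes a network call behind the same interface.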
The Distributed Monolith Anti-Pattern
The worst outcome, and it's shockingly common, is the distributed monolith: a system decomposed into microservices that remain tightly coupled. Services that must be deployed together. Services that share databases. Services with synchronous call chains five layers deep. You get all the operational complexity of microservices with none of the benefits.
The telltale signs: you can't deploy service A without also deploying services B and C. A schema change in one service requires coordinated changes in four others. Your "microservices" have shared libraries that change weekly. If any of this sounds familiar, you don't have microservices. You have a monolith that's harder to debug.
A Decision Framework
Our advice to clients: start with a monolith. Make it modular from day one. Enforce module boundaries with tooling, not discipline (discipline degrades under deadline pressure). Monitor which modules are scaling bottlenecks.
Extract a service only when you have a concrete, measurable reason: this module needs to scale independently, this module has a fundamentally different deployment cadence, or this module is owned by a team that needs full autonomy over its release cycle. "It feels cleaner as a separate service" is not a valid reason. "I read a blog post about microservices" is definitely not a valid reason.
When you do extract, invest heavily in the infrastructure first: service mesh (Istio or Linkerd), distributed tracing (Jaeger or Tempo), centralised logging (Loki or the ELK stack), and contract testing (Pact) between services. If you can't afford this investment, you can't afford microservices.
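As one example of that last item, here's roughly what a consumer-driven contract test looks like with pact-js (@pact-foundation/pact) and a Jest-style runner. The service names, provider state, and endpoint are invented for illustration.

```ts
import { PactV3, MatchersV3 } from '@pact-foundation/pact';

// Hypothetical consumer (billing) declaring what it needs from the
// customers service. The generated pact file is later verified against
// the real provider in its own pipeline.
const provider = new PactV3({
  consumer: 'billing-service',
  provider: 'customer-service',
});

describe('customer API contract', () => {
  it('returns a customer by id', () => {
    provider
      .given('a customer with id 42 exists')
      .uponReceiving('a request for customer 42')
      .withRequest({ method: 'GET', path: '/customers/42' })
      .willRespondWith({
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: MatchersV3.like({ id: '42', tier: 'paid' }),
      });

    // Pact spins up a mock provider; the consumer code runs against it.
    return provider.executeTest(async (mockServer) => {
      const res = await fetch(`${mockServer.url}/customers/42`);
      expect(res.status).toBe(200);
    });
  });
});
```

If a provider change breaks the contract, it fails in CI rather than as a coordinated production incident.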
The Litmus Test
Ask yourself: if you drew your service boundaries on a whiteboard, could each service be developed, deployed, and operated by a single team without coordinating with other teams? If the answer is no, your service boundaries are wrong. Either redraw them or go back to a monolith.
The goal is a system architecture that lets your team ship reliable software at a sustainable pace. Sometimes that's microservices. Often it's not. The mature engineering decision is to pick the architecture that fits your actual constraints — team size, scale requirements, deployment environment — not the one that looks best on a conference slide.