Architecture · 10 min read

Choosing enterprise platform architecture: monolith vs modular vs microservices

The architecture decision you make today will shape your engineering velocity for years. Pick too simple and you'll hit walls. Pick too complex and you'll drown in operational overhead. Here's how to choose the right fit for where your organization actually is — not where you hope it will be.
The Options

Three architectural patterns, three sets of trade-offs

There is no universally superior architecture. Each pattern optimizes for different constraints and team realities.

Monolith

A single deployable unit containing all business logic. Simple to develop, test, and deploy early on. Becomes harder to scale and maintain as the codebase and team grow.


Modular Monolith

A monolith with strict internal module boundaries. Each module owns its domain, data, and API surface. Deploys as one unit but can be extracted into services later.


Microservices

Independently deployable services communicating over the network. Maximum flexibility and scalability, but significant operational overhead in networking, observability, and data consistency.

Side by Side

How the three architectures compare

Deployment
Monolith: Single artifact, all-or-nothing
Modular: Single artifact, module-aware
Microservices: Independent per service

Scaling
Monolith: Vertical — scale the whole app
Modular: Vertical with targeted optimization
Microservices: Horizontal — scale each service independently

Team Coupling
Monolith: High — shared codebase, merge conflicts
Modular: Medium — clear module ownership
Microservices: Low — autonomous teams per service

Data Model
Monolith: Single shared database
Modular: Shared database with schema boundaries
Microservices: Database per service, eventual consistency

Complexity
Monolith: Low initially, grows with size
Modular: Medium — requires discipline
Microservices: High — distributed systems challenges

Best For
Monolith: Small teams, early-stage products
Modular: Growing teams, evolving domains
Microservices: Large organizations, independent scaling
By the Numbers

What the data says about architecture decisions

72% of teams that start with microservices wish they had started simpler
3-5 engineers is the minimum viable team size for a single microservice
18 months is the average time before a monolith needs architectural boundaries
40% of microservice adoption is driven by org structure, not technical need
Decision Framework

Four questions to guide your architecture choice

Walk through these questions in order. Each answer narrows the options until the right architecture becomes obvious.
01

Assess team size

Under 10 engineers? A well-structured monolith will outperform microservices: distributed systems demand dedicated platform capacity that a team this size cannot spare.

02

Evaluate scale requirements

Do different parts of your system need to scale independently? If your API tier handles 100x the load of your admin panel, service boundaries make sense.

03

Measure deployment frequency

Deploying once a week? A monolith is fine. Deploying 50 times a day across teams? Independent deployability becomes essential.

04

Map data coupling

If every feature needs data from five other domains, splitting into services creates a distributed monolith — the worst of both worlds.

Implementation Patterns

Making the modular monolith work in practice

The modular monolith is the most misunderstood option. Teams think they are building one simply because they have folders named after domains. Real modularity requires enforceable boundaries, not conventions.

Enforce module boundaries at the compiler level

Folder conventions break the moment a deadline arrives. Instead, use language-level enforcement. In Java, this means separate Gradle or Maven modules with explicit dependency declarations. In .NET, separate projects within a solution. In TypeScript, use project references with composite builds. The principle is the same everywhere: a module should not be able to import another module's internals without a deliberate configuration change that shows up in code review.

Define explicit public APIs per module

Each module exposes a public interface — a set of commands, queries, and events — and hides everything else. Think of it like a microservice API contract, but enforced at compile time rather than over HTTP. This gives you the design discipline of service boundaries without the operational cost of network calls. When module A needs data from module B, it calls B's public query interface, not B's repository directly. This indirection is what makes future extraction possible.
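A minimal sketch of this idea in TypeScript, with hypothetical module names (billing, orders) chosen for illustration: the billing module exposes only a narrow query interface, while its records and repository stay internal, so the orders module can never reach past the contract.

```typescript
// Hypothetical sketch: the billing module's data and entities stay internal;
// other modules see only the public query interface.

// billing/internal — NOT importable by other modules
interface InvoiceRecord {
  id: string;
  customerId: string;
  amountCents: number;
}

// billing/public-api — the ONLY surface other modules may import
interface BillingQueries {
  getOutstandingBalanceCents(customerId: string): number;
}

function createBillingModule(records: InvoiceRecord[]): BillingQueries {
  // The records array stands in for the module's private data store.
  return {
    getOutstandingBalanceCents(customerId: string): number {
      return records
        .filter((r) => r.customerId === customerId)
        .reduce((sum, r) => sum + r.amountCents, 0);
    },
  };
}

// orders module: depends on billing's public queries, never its repository
function canPlaceOrder(billing: BillingQueries, customerId: string): boolean {
  // Illustrative business rule: block orders over a 500.00 outstanding balance.
  return billing.getOutstandingBalanceCents(customerId) < 50_000;
}
```

Because `canPlaceOrder` only knows the `BillingQueries` interface, extracting billing into a service later means swapping the in-process implementation for an HTTP client behind the same interface.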

Use in-process messaging for cross-module communication

Rather than direct method calls between modules, route cross-module communication through an in-process message bus or mediator pattern. Libraries like MediatR (.NET), Spring Events (Java), or a simple EventEmitter abstraction work well. This pattern mirrors the asynchronous messaging you would use in a microservices architecture, making the eventual extraction path smoother. It also makes cross-module interactions explicit and auditable — you can log every inter-module message to understand coupling patterns.
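As a sketch of what such an abstraction can look like (not any specific library's API), a typed in-process bus needs only a topic-to-handlers map, plus an audit log so every inter-module message is recorded:

```typescript
// Minimal in-process message bus sketch. Topic and payload names in any
// usage are illustrative, not a prescribed event vocabulary.

type Handler<T> = (payload: T) => void;

class InProcessBus {
  private handlers = new Map<string, Handler<unknown>[]>();
  // Every cross-module message is recorded, making coupling auditable.
  readonly auditLog: { topic: string; payload: unknown }[] = [];

  subscribe<T>(topic: string, handler: Handler<T>): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(topic, list);
  }

  publish<T>(topic: string, payload: T): void {
    this.auditLog.push({ topic, payload });
    for (const handler of this.handlers.get(topic) ?? []) {
      handler(payload);
    }
  }
}
```

A module subscribes to the topics it cares about and publishes its own; neither side holds a reference to the other, which is exactly the shape a broker-backed version would take after extraction.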

Maintain a module dependency matrix

Create a simple table that maps which modules depend on which. Review it monthly. If any module depends on more than three others, it is becoming a gravity well that will resist future decomposition. Watch for cycles — if module A depends on B and B depends on A, you have a design problem that will only get worse. Automated tools like ArchUnit (Java), NetArchTest (.NET), or custom ESLint rules (TypeScript) can enforce these constraints in CI.
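The two checks described above are easy to script against a hand-maintained dependency map. This is a sketch with illustrative module names, not a replacement for tools like ArchUnit:

```typescript
// deps maps each module to the modules it depends on.
type DependencyMap = Record<string, string[]>;

// Flag modules that depend on more than `max` others (gravity wells).
function overloadedModules(deps: DependencyMap, max = 3): string[] {
  return Object.entries(deps)
    .filter(([, targets]) => targets.length > max)
    .map(([name]) => name);
}

// Depth-first search: a back edge to a node still being visited is a cycle.
function hasCycle(deps: DependencyMap): boolean {
  const visiting = new Set<string>();
  const done = new Set<string>();
  const visit = (node: string): boolean => {
    if (visiting.has(node)) return true; // back edge found
    if (done.has(node)) return false;
    visiting.add(node);
    const found = (deps[node] ?? []).some(visit);
    visiting.delete(node);
    done.add(node);
    return found;
  };
  return Object.keys(deps).some(visit);
}
```

Run in CI, either failed check blocks the merge that would have introduced the coupling.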

Boundary Design

Identifying service boundaries before you need them

Whether you start modular or plan to extract services later, drawing boundaries in the right places is the highest-leverage architectural decision you will make.

Start with domain events, not entities

Most teams try to identify boundaries by looking at data entities — “orders go here, customers go there.” This leads to artificial splits because entities are often shared across domains. Instead, map the business events: OrderPlaced, PaymentReceived, ShipmentDispatched. The systems that produce and consume these events naturally cluster into bounded contexts. Run an event storming workshop with domain experts and engineers together. The sticky notes on the wall will reveal boundaries that no amount of code analysis can surface.

Apply the team cognitive load test

A service boundary is only useful if a single team can own everything within it — the code, the data, the deployment, the on-call rotation. If a proposed boundary requires three teams to coordinate on every change, it is not a real boundary. Ask the question: “Can one team of five to eight people understand, build, test, and deploy this service without waiting on anyone else?” If the answer is no, the boundary is in the wrong place.

Measure change coupling in version control

Your git history contains empirical evidence of how your system is actually coupled. Analyze which files change together across commits. If files in two proposed services consistently change in the same PR, splitting them into separate services will create a distributed monolith where every feature requires coordinated deployments. Tools like CodeScene, git-of-theseus, or simple co-change scripts can surface these patterns. Only draw a service boundary where the change coupling between the two sides is low.
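A simple co-change script of the kind mentioned above can be sketched as follows, assuming you have already parsed commit file lists out of `git log --name-only`. File names here are illustrative:

```typescript
// Given the list of files touched by each commit, count how often each
// pair of files changes together. High-count pairs that straddle a
// proposed service boundary are a warning sign.
function coChangeCounts(commits: string[][]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const files of commits) {
    const unique = Array.from(new Set(files)).sort();
    for (let i = 0; i < unique.length; i++) {
      for (let j = i + 1; j < unique.length; j++) {
        const key = `${unique[i]} <-> ${unique[j]}`;
        counts.set(key, (counts.get(key) ?? 0) + 1);
      }
    }
  }
  return counts;
}
```

Sorting the pair key makes the count order-independent, so `a <-> b` and `b <-> a` accumulate together.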

Data Consistency

Managing data across service boundaries

The hardest part of any distributed architecture is data. Once you split databases between services, you lose transactions and gain a set of consistency challenges that require deliberate strategy.

Embrace eventual consistency — but define “eventual”

“Eventually consistent” is not a free pass to ignore data integrity. For every cross-service data flow, define the maximum acceptable staleness. An inventory count that lags by five seconds is fine for a product listing page. An inventory count that lags by five minutes is not fine when accepting payment for the last unit in stock. Document these SLAs per data flow, and build monitoring that alerts when propagation exceeds the threshold.
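One way such per-flow monitoring could be wired up, sketched with invented flow names and thresholds, is a staleness table checked against the last propagation timestamp of each flow:

```typescript
// Documented staleness SLA per cross-service data flow (illustrative values).
const stalenessSlaMs: Record<string, number> = {
  "inventory->product-listing": 5_000, // 5 s lag acceptable on a listing page
  "inventory->checkout": 500,          // checkout needs near-real-time counts
};

// Return the flows whose propagation lag currently exceeds their SLA;
// in a real system this would feed an alerting pipeline.
function flowsExceedingSla(
  lastPropagatedAtMs: Record<string, number>,
  nowMs: number
): string[] {
  return Object.entries(stalenessSlaMs)
    .filter(([flow, maxLag]) => nowMs - (lastPropagatedAtMs[flow] ?? 0) > maxLag)
    .map(([flow]) => flow);
}
```

The point is less the code than the discipline: every flow in the table has an owner who agreed to a number.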

Use the Saga pattern for cross-service transactions

When a business process spans multiple services — place order, reserve inventory, charge payment — you cannot use a distributed transaction without destroying availability. Instead, implement a saga: a sequence of local transactions where each step publishes an event that triggers the next. If any step fails, compensating transactions undo the previous steps. Orchestrated sagas use a central coordinator that manages the sequence. Choreographed sagas let each service react to events independently. Choreography keeps services decoupled and stays manageable for simple flows; orchestration is easier to debug and reason about as flows grow. Choose based on how many services participate — with two or three, choreograph; beyond that, orchestrate so the overall flow is visible in one place.
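A minimal orchestrated-saga sketch: a coordinator runs local steps in order and, on failure, runs the compensations for the steps that already committed, in reverse. Step names are illustrative; in practice each step would be a call into a separate service.

```typescript
interface SagaStep {
  name: string;
  execute: () => void;    // the local transaction in one service
  compensate: () => void; // the undo for that local transaction
}

function runSaga(steps: SagaStep[]): { ok: boolean; compensated: string[] } {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.execute();
      completed.push(step);
    } catch {
      // Roll back already-committed steps in reverse order.
      const compensated = completed.reverse().map((s) => {
        s.compensate();
        return s.name;
      });
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}
```

Note that compensation is a business-level undo (release the reservation, refund the charge), not a database rollback; each step must be designed with its inverse in mind.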

Implement the outbox pattern for reliable event publishing

The most common data consistency bug in microservices is the dual-write problem: a service updates its database and publishes an event, but one succeeds and the other fails. The transactional outbox pattern solves this by writing the event to an outbox table within the same database transaction as the business data change. A separate process (or change data capture pipeline) reads the outbox and publishes to the message broker. This guarantees at-least-once delivery without distributed transactions. Debezium with Kafka Connect is the most battle-tested implementation of this pattern.
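The mechanics of the pattern can be sketched with an in-memory stand-in for the database: the business row and the outbox row are written together (as they would be inside one transaction), and a separate relay drains unpublished rows to the broker. Store, table, and event names here are invented for illustration.

```typescript
interface OutboxEvent {
  id: number;
  type: string;
  payload: unknown;
  published: boolean;
}

class OrderStore {
  orders: { id: string; status: string }[] = [];
  outbox: OutboxEvent[] = [];
  private nextEventId = 1;

  // Both writes happen together, standing in for a single DB transaction:
  // either the order and its event both exist, or neither does.
  placeOrder(orderId: string): void {
    this.orders.push({ id: orderId, status: "placed" });
    this.outbox.push({
      id: this.nextEventId++,
      type: "OrderPlaced",
      payload: { orderId },
      published: false,
    });
  }
}

// Relay process (or CDC pipeline): reads unpublished rows, hands them to the
// broker, and marks them published only after the publish succeeds. Delivery
// is at-least-once, so consumers must deduplicate by event id.
function drainOutbox(store: OrderStore, publish: (e: OutboxEvent) => void): number {
  const pending = store.outbox.filter((e) => !e.published);
  for (const e of pending) {
    publish(e);
    e.published = true;
  }
  return pending.length;
}
```

If the relay crashes between publishing and marking, the event is re-sent on the next pass, which is why the guarantee is at-least-once rather than exactly-once.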

CQRS for read-heavy cross-domain queries

When a dashboard or report needs data from five different services, making five synchronous API calls is fragile and slow. Instead, build a dedicated read model that subscribes to events from each service and maintains a denormalized projection optimized for the query. This is Command Query Responsibility Segregation (CQRS) at the architecture level. The write path stays clean — each service owns its domain data. The read path is purpose-built for specific use cases. The trade-off is additional infrastructure and the need to handle eventual consistency in the read model, but for any non-trivial reporting requirement, this pattern pays for itself in performance and reliability.
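A small sketch of such a read model, with invented event shapes: the projection subscribes to events from several services and maintains one denormalized row per customer, so the dashboard query becomes a single lookup.

```typescript
// Illustrative events from two different services (orders, support).
type DomainEvent =
  | { type: "OrderPlaced"; customerId: string; amountCents: number }
  | { type: "TicketOpened"; customerId: string };

interface CustomerDashboardRow {
  orderCount: number;
  totalSpendCents: number;
  openTickets: number;
}

class CustomerDashboardProjection {
  private rows = new Map<string, CustomerDashboardRow>();

  // Called for every event consumed from the services' streams.
  apply(event: DomainEvent): void {
    const row = this.rows.get(event.customerId) ??
      { orderCount: 0, totalSpendCents: 0, openTickets: 0 };
    if (event.type === "OrderPlaced") {
      row.orderCount += 1;
      row.totalSpendCents += event.amountCents;
    } else {
      row.openTickets += 1;
    }
    this.rows.set(event.customerId, row);
  }

  // The read path: one lookup instead of five cross-service API calls.
  get(customerId: string): CustomerDashboardRow | undefined {
    return this.rows.get(customerId);
  }
}
```

The projection is disposable by design: if its shape needs to change, rebuild it by replaying the event streams rather than migrating it in place.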

Team Topology

Platform teams vs. stream-aligned teams

Architecture decisions are inseparable from team design. The way you organize people will shape the system you build — Conway's Law is not a suggestion, it is an observed phenomenon.

When to invest in a platform team

A platform team builds internal tools, CI/CD pipelines, observability infrastructure, and shared libraries so that stream-aligned (product) teams do not have to. The investment makes sense when you have at least four stream-aligned teams and they are duplicating infrastructure work. Below that threshold, a platform team is overhead — one senior engineer maintaining shared tooling part-time is sufficient. The litmus test: if product teams spend more than 20% of their time on infrastructure concerns rather than business logic, a platform team will accelerate everyone.

The platform-as-a-product mindset

The most common failure mode for platform teams is building tools nobody uses. Treat your internal platform as a product with real users (your engineering teams) who have a choice (they can build their own if your platform is bad). This means: conduct user research with product teams, maintain documentation, provide migration guides, measure adoption, and iterate based on feedback. A platform that requires a Slack message to the platform team for every deployment is not a platform — it is a bottleneck with a fancy name.

Thin interaction layer between platform and product

Define a clear contract between the platform team and stream-aligned teams. The platform provides capabilities: “deploy a containerized service,” “provision a database,” “configure alerting for an SLO.” Stream-aligned teams consume these capabilities through self-service interfaces — CLI tools, Terraform modules, or internal developer portals. The platform team should never be on the critical path for a product team's deployment. If your platform team has a ticket queue for routine operations, the self-service abstraction is incomplete.

Scaling the model: enabling teams and complicated-subsystem teams

As your organization grows beyond eight to ten teams, you may need two additional team types from the Team Topologies framework. Enabling teams act as consultants — they embed temporarily with a stream-aligned team to help them adopt a new practice (like observability or performance testing) and then move on. Complicated-subsystem teams own a particularly gnarly piece of technology — a real-time data pipeline, a machine learning inference engine, a custom protocol implementation — that requires deep specialist knowledge most product engineers should not need. Keep these teams small and their interfaces clean.

The Bottom Line

The best architecture is the one your team can operate.

Microservices at Netflix work because Netflix has thousands of engineers and a decade of platform investment. Your 15-person team doesn't need that complexity. Start with the simplest architecture that meets your current needs, enforce clean module boundaries, and evolve when the pain points are real — not theoretical.

Choosing your platform architecture?

We help organizations evaluate architectural options, design domain boundaries, and build platforms that scale with the team — not against it.
Start Your Project

Let's discuss what we can build together

Whether you're modernizing legacy systems, launching a new product, or solving a complex technical challenge, we'd welcome the opportunity to understand your needs.