Modular Monoliths

9 min read · Updated 2026-04-25

Modular monoliths combine the modularity and clear boundaries of microservices with the operational simplicity of a monolithic deployment. Shopify, GitHub, and Basecamp run massive systems on this pattern — getting team autonomy and clear architectural boundaries without taking on distributed-system complexity.

What Makes a Monolith Modular

In a modular monolith, modules aren’t just folders or packages — they’re architectural boundaries. Each module represents a separate business capability with its own data model, business logic, and a clear interface to the rest of the system.

Traditional monolith: a single deployable with no internal walls. Shared database, shared models, code reaches across "modules" freely. Looks fine until the team grows and changes start cascading.
Modular monolith: a single deployable with hard internal walls. Each module owns its data and exposes a defined interface. Cross-module communication goes through that interface — never direct DB access or internal calls.

Modules talk through well-defined interfaces, not direct method calls or shared database access. When the orders module needs user info, it doesn’t query the user database — it calls the user module’s API. The interface might compile down to in-process method calls for performance, but the architectural boundary is explicit.
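As a sketch, that boundary might look like this in Python — module, type, and function names here are illustrative, not from any particular codebase:

```python
from dataclasses import dataclass

# --- users module: owns its data, exposes a narrow public API ---

@dataclass(frozen=True)
class UserSummary:
    user_id: int
    email: str

# Stand-in for the users module's private storage; other modules never touch it.
_USERS = {1: UserSummary(user_id=1, email="ada@example.com")}

def get_user(user_id: int) -> UserSummary:
    """The only way other modules may read user data."""
    return _USERS[user_id]

# --- orders module: depends on the users *interface*, not its tables ---

def confirmation_email_for(order_user_id: int) -> str:
    user = get_user(order_user_id)  # in-process call, but an explicit boundary
    return user.email
```

The call is still a cheap in-process function call; what changes is that the orders module never sees the users module's storage.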

Enforcing Boundaries

The hardest problem with modular monoliths is enforcing boundaries without the network barrier microservices give you for free. Discipline alone doesn’t scale; tooling does.

Spring Modulith (Java): compile-time boundary verification. Modules expose explicit public APIs; cross-module access to non-public components fails at build time.
C# / .NET: internal access modifiers and assembly-level access control. Wolverine and MediatR add event-driven communication patterns.
Rust: module visibility (pub, pub(crate)) makes module boundaries first-class language constructs.
Python: lacks built-in enforcement. Use static analyzers like import-linter to catch violations in CI.
TypeScript: ESLint rules, path aliasing, and dependency-cruiser detect cross-module imports that shouldn't exist.
Build tools: Bazel, Nx, and Turborepo enforce module boundaries via build dependency graphs.
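For the Python case, an import-linter contract file might look roughly like this (the `myapp.*` package names are hypothetical; `lint-imports` runs the check in CI):

```ini
[importlinter]
root_package = myapp

[importlinter:contract:module-internals]
name = Modules may not reach into each other's internals
type = forbidden
source_modules =
    myapp.orders
forbidden_modules =
    myapp.users.internal
    myapp.inventory.internal
```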

DDD as the Foundation

The most successful modular monoliths align module boundaries with DDD bounded contexts. Each module represents a bounded context — an area with its own consistent business concepts, rules, and terminology.

For an e-commerce platform: customer management, order processing, inventory, payments, shipping. Each is a module with its own model of shared concepts. The customer-management module has detailed customer profiles. The shipping module has only delivery addresses and preferences.

Implementation Patterns

Hexagonal modules

Each module structures itself with hexagonal architecture (ports and adapters). Core business logic at the center, surrounded by ports that define interfaces for external communication, and adapters that implement those interfaces.

This pattern shines in modular monoliths because boundaries become explicit. Other modules interact only through defined ports, never with internal implementations.
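A minimal Python sketch of a port and adapter, with illustrative names — the port is an abstract interface the core depends on, and any adapter satisfying it can be plugged in:

```python
from abc import ABC, abstractmethod

# Port: the interface the module's core logic depends on.
class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

# Core business logic depends only on the port, never on a concrete gateway.
def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    return "paid" if gateway.charge(amount_cents) else "declined"

# Adapter: one concrete implementation of the port, trivially swappable in tests.
class FakeGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        return amount_cents <= 10_000  # pretend the card has a $100 limit
```

Other modules see only `PaymentGateway` and `checkout`; the adapter is an internal detail.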

Event-driven communication inside the monolith

Even in a single process, modules can communicate through events rather than direct calls. When an order completes, the Orders module publishes an OrderCompleted event. Inventory, Notifications, and Analytics modules consume it independently.

This produces the same loose-coupling benefits as microservices, while keeping the simplicity of in-process communication. Events can be processed synchronously for immediate consistency or asynchronously for performance.
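A hand-rolled synchronous bus is enough to get this pattern inside one process. In this sketch (the bus API and module names are illustrative), Orders publishes and two other modules react without Orders knowing they exist:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

# Minimal in-process event bus: a map from event type to handlers.
_subscribers: dict[type, list[Callable]] = defaultdict(list)

def subscribe(event_type: type, handler: Callable) -> None:
    _subscribers[event_type].append(handler)

def publish(event) -> None:
    for handler in _subscribers[type(event)]:
        handler(event)  # synchronous: runs in the publisher's call stack

@dataclass(frozen=True)
class OrderCompleted:
    order_id: int

# Inventory and Notifications react independently; Orders never calls them.
reserved, emails = [], []
subscribe(OrderCompleted, lambda e: reserved.append(e.order_id))
subscribe(OrderCompleted, lambda e: emails.append(f"order {e.order_id} shipped soon"))

publish(OrderCompleted(order_id=42))
```

Swapping the synchronous loop for a queue worker turns the same code asynchronous without touching publishers or subscribers.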

Database patterns

Managing data is one of the trickier problems with modular monoliths. Useful patterns:

1. Schema per module: each module has its own database schema. Foreign keys exist only within a module. Cross-module data access goes through APIs, not direct queries.
2. Views for read integration: read-only views provide controlled cross-module data access for reporting and analytics, while writes still go through module APIs.
3. Domain events for integration: events serve as both integration points and audit trails. Modules emit state changes that other modules subscribe to.
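Patterns 1 and 2 can be sketched with SQLite (which lacks named schemas, so table prefixes stand in for per-module schemas; in Postgres these would be real `orders.*` / `users.*` schemas, and all names here are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Each module's tables live in its own "schema" (prefix).
    CREATE TABLE users_accounts (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE orders_orders (id INTEGER PRIMARY KEY, user_id INTEGER, total INTEGER);
    -- Note: no foreign key from orders to users; cross-module links are by id only.

    -- Read-only view for reporting: controlled cross-module read access.
    CREATE VIEW reporting_order_emails AS
        SELECT o.id AS order_id, u.email
        FROM orders_orders o JOIN users_accounts u ON u.id = o.user_id;
""")
db.execute("INSERT INTO users_accounts VALUES (1, 'ada@example.com')")
db.execute("INSERT INTO orders_orders VALUES (10, 1, 2500)")
rows = db.execute("SELECT order_id, email FROM reporting_order_emails").fetchall()
```

Writes still go through each module's API; only the reporting view is allowed to join across the boundary.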

Evolution: Both Directions

One of the most powerful aspects of modular monoliths is evolutionary flexibility. They can grow from simple monoliths and, if needed, split into microservices — or consolidate back from microservices when operational complexity becomes excessive.

Grow into (traditional monolith → modular): identify domain boundaries inside the existing codebase. Use the Strangler Fig pattern — gradually extract functionality into well-defined modules, set clear interfaces, eliminate direct dependencies.
Grow out of, or back to (modules ↔ microservices): well-designed modules can be extracted as services when scale or team structure justifies it. Or — like Prime Video and Segment — services can be consolidated back into a modular monolith when distributed complexity outweighs the benefits.

A well-designed modular monolith is an excellent launchpad for microservices. Teams experiment with service boundaries, refine interfaces, and build operational maturity — all while keeping deployment simple. When extraction makes sense, the architectural foundation is already there.

Why Smart Teams Choose This Pattern

Operational simplicity: one CI/CD pipeline, one rollback procedure, one runtime to monitor. Architecture stays clean even as the deployment stays simple.
No network tax: in-process method calls instead of HTTP/gRPC round-trips. What costs 5 ms across a network takes microseconds in-process.
Transactional consistency: when a business operation spans modules, ACID semantics still work. No saga patterns, no distributed transactions.
Lower cognitive load: developers reason about module interfaces, not network partitions, eventual consistency, and service discovery.
Team autonomy: clear ownership of modules. Teams ship within their module without cross-team coordination for routine work.
Local dev that works: the whole system runs on a laptop. No mocking 30 dependencies or running half a Kubernetes cluster locally.
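The transactional-consistency point deserves a concrete sketch: because both modules share one database connection, a cross-module operation either fully commits or fully rolls back (table and function names here are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE inventory_stock (sku TEXT PRIMARY KEY, qty INTEGER);
    CREATE TABLE orders_orders (id INTEGER PRIMARY KEY, sku TEXT);
    INSERT INTO inventory_stock VALUES ('WIDGET', 1);
""")

def place_order(order_id: int, sku: str) -> bool:
    try:
        with db:  # one ACID transaction spanning both modules' writes
            cur = db.execute(
                "UPDATE inventory_stock SET qty = qty - 1 WHERE sku = ? AND qty > 0",
                (sku,))
            if cur.rowcount == 0:
                raise ValueError("out of stock")  # exception rolls back everything
            db.execute("INSERT INTO orders_orders VALUES (?, ?)", (order_id, sku))
        return True
    except ValueError:
        return False

ok1 = place_order(1, "WIDGET")   # succeeds; stock drops to 0
ok2 = place_order(2, "WIDGET")   # fails atomically; no orphan order row is left behind
```

In a microservice split, the same failure path would need a saga or compensating action; here `sqlite3`'s connection context manager (commit on success, rollback on exception) does all the work.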

Where They Struggle

The main risk is boundary erosion: without the enforcement tooling described above, nothing physically stops an in-process shortcut, and module walls can quietly decay back into a tangled monolith.

Other limits: modules scale and deploy together rather than independently; the system shares one runtime, one language, and one failure domain; and a single bad deploy can take down every module at once.

Recap