13 Microservices Best Practices

An application with microservices architecture consists of a collection of small, independent services. Each service is self-contained, handling a specific function and communicating with other services through clearly defined APIs. Unlike traditional monolithic applications—which bundle all functionality into one large codebase—microservices allow individual services to be developed, deployed, and scaled independently.

(Figure: monolith vs. microservices)

When Should You Use Microservices?

The microservices architecture has strengths—particularly if you expect your application will scale rapidly or experience varying workloads. This is because it allows precise control over resource allocation and scaling. It’s also useful if you want independent engineering teams to develop and deploy their services without requiring constant cross-team coordination.

Benefits of Microservices Architecture

  1. Easier development: Each service is responsible for a small slice of business functionality. This enables developers to be productive without requiring them to understand the full architecture.
  2. Faster deployment: Each service is relatively small and simple, making testing and building faster.
  3. Greater flexibility: A siloed service allows development teams to choose the tools and languages that enable them to be most productive without affecting other teams.
  4. Improved scalability: You can run more instances of heavily used services or allocate more CPU to computationally intensive services without affecting other services.

Microservices do introduce additional complexity, though. Without careful planning, these complexities can overshadow the benefits. This article will dive into 13 best practices for designing and managing a microservices-based application to ensure you get the most out of the investment. We will highlight the importance of maintaining clear service boundaries, using dedicated databases, and employing API gateways to facilitate external interactions.

Additionally, we’ll cover the use of containerization and standardized authentication strategies to ensure scalability and security across services, providing a roadmap to deploy microservices in diverse operational environments effectively.

1. Follow the Single-Responsibility Principle (SRP)

The single-responsibility principle is a core tenet of microservices development. It states that each microservice should be responsible for one and only one well-defined slice of business logic. In other words, there should be clear and consistent service boundaries. By extension, most bug fixes and features should require changes to only one microservice.

The single responsibility principle helps your development teams ship code faster by ensuring developers can work independently within their area of expertise.

Features that require collaboration between multiple teams are at higher risk of delay due to technical and organizational issues. When teams can move independently, the likelihood of one team being blocked by another is low, and you can ensure your teams make steady progress.

To highlight this, let’s walk through two scenarios: one that follows the single-responsibility principle and one that violates it:

Example of Following the Single-Responsibility Principle:

A food delivery app splits functionality clearly into separate microservices:

  • Order management service: Handles order creation, status tracking, and customer notifications
  • Restaurant service: Manages restaurant details, menus, and availability
  • Payment service: Handles payments, refunds, and receipts

When the app needs to update the refund logic (e.g., to support new payment gateways), developers only need to update the payment service. As long as the developers don’t modify the API signature of the payment service, the order management or restaurant service won’t be impacted. This allows the payment team to work independently and ship quickly without being blocked by other teams or creating unintended bugs in unrelated parts of the system.

Example of Violating the Single-Responsibility Principle:

Suppose developers added payment processing logic directly into the order management service, thinking it would be simpler or quicker initially. Over time, this microservice becomes increasingly complicated—handling orders, payments, and customer notifications in the same codebase.

When the payments team later needs to implement a new payment gateway, they have to work within the order management code, potentially impacting order functionality. To avoid this, they must now coordinate closely with the order management team, causing delays. A change intended only for payments could accidentally break order tracking or notifications, causing confusion and disruption to multiple parts of the business.

2. Do Not Share Databases Between Services

To follow microservices database best practices, services should not share data stores. Sharing a database between services can be tempting because it reduces operational overhead, but it makes it harder to follow the single-responsibility principle and makes independent scaling more difficult.

If two services share one PostgreSQL deployment, it becomes tempting to have them communicate directly via the database. One service can just read the other’s data from the shared database, right? The issue is that this creates tight coupling: schema changes in the database now affect both services.

Using an API allows developers to change a service as needed without fear of affecting any other service. As long as what the API returns doesn’t change, consumers of the service don’t need to worry about its implementation details. Now assume you didn’t use an API and any consumer can pull your data from the database directly. If you decide you need to change the shape of the data or modify the database, you now need to coordinate with every team that accesses that database. That coordination might be doable if you know who’s accessing your database, but it’s often hard to keep track of who’s using your data and why. And even if you could coordinate the database change across teams, you’d be defeating the entire purpose of building microservices.

Expert Insight:

While the recommended best practice is for each microservice to have its own database to prevent tight coupling, there might be specific contexts—such as closely related services within a single bounded context—where limited database sharing is acceptable. For instance, if you have both “User Management” and “Account Management” microservices that deal with overlapping user data, you could justify a shared database to reduce duplication—provided you maintain strict separation at the schema or namespace level. If choosing to share, ensure clear logical separation (such as schemas or namespaces) and strict enforcement of data access to maintain clear service boundaries and data integrity.

Sometimes, smaller teams (or teams migrating from a monolith) start with a shared database for convenience and use separate schemas/tables for each microservice. However, this is generally seen as a transitional approach. As microservices mature, most teams push toward isolating each service’s data.

3. Clearly Define Data Transfer Object (DTO) Usage

Data Transfer Objects (DTOs) are used to send data between services. Using clearly defined DTOs makes communication between services easier, keeps services loosely coupled, and simplifies version management.

To achieve this, first separate your internal domain models from external DTOs. Doing this prevents tight coupling between your internal structures and your external APIs. That way, you can change your internal data structures without needing to update your APIs every time.

Next, clearly define your DTO contracts. Contracts are explicit schemas that clearly state the data format and content. Tools such as OpenAPI or Protocol Buffers can help you create these schemas, which improve clarity, simplify data validation, and make team collaboration easier.

Lastly, version your DTOs carefully. Whenever the structure of your data significantly changes, create a new DTO version. This approach allows other services to adapt gradually, preventing breaks in existing functionality. It's important to note that if multiple services share the same database, DTO versioning becomes difficult.
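As an illustrative sketch (all names here are hypothetical), separating the internal model from a versioned DTO might look like this in Python:

```python
from dataclasses import dataclass

# Internal domain model: free to change without affecting consumers.
@dataclass
class User:
    id: int
    email: str
    password_hash: str  # internal detail, never exposed externally

# Versioned DTO: the explicit contract that other services depend on.
@dataclass
class UserDTOv1:
    id: int
    email: str

def to_dto(user: User) -> UserDTOv1:
    """Map the internal model to the external contract."""
    return UserDTOv1(id=user.id, email=user.email)

user = User(id=7, email="ada@example.com", password_hash="hashed")
print(to_dto(user))  # UserDTOv1(id=7, email='ada@example.com')
```

If the contract later needs to change shape, you would add a `UserDTOv2` alongside `UserDTOv1` rather than mutating the existing one, letting consumers migrate at their own pace.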

4. Use Centralized Observability Tools

Centralized observability tools are crucial for monitoring and troubleshooting microservices. These tools ensure that logs and events from all your services are accessible in a single location. Having your logs in one place means you don’t need to stitch together data from multiple logging services. This simplifies the identification and resolution of issues.

Centralized tools such as Amazon CloudWatch, HyperDX, or Honeycomb are popular choices. These tools also provide distributed tracing with correlation IDs, which greatly enhances observability. Tracing enables you to track requests end-to-end across multiple services, facilitating faster and more precise troubleshooting.

Let's say you're trying to identify the root cause of a performance spike in your system. You notice that messages in your queue are taking longer to process. As the queue grows, upstream services start experiencing timeouts, creating a feedback loop where delays further worsen the backlog. By using a centralized logging system, you can quickly visualize relationships between queues in one service and timeouts in another. This top-down view makes it easy to pinpoint the root cause, accelerating resolutions.
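The correlation-ID mechanic that makes this cross-service view possible can be sketched in a few lines of Python. The field names and the `contextvars` approach are illustrative; real systems typically get this from a tracing library such as OpenTelemetry:

```python
import json
import uuid
from contextvars import ContextVar

# Correlation ID for the current request, propagated implicitly.
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

def log(service: str, message: str) -> str:
    """Emit one structured log line that a central sink can index."""
    line = json.dumps({
        "service": service,
        "correlation_id": correlation_id.get(),
        "message": message,
    })
    print(line)
    return line

# At the edge (e.g., the API gateway), assign one ID per request...
correlation_id.set(str(uuid.uuid4()))
# ...then every service logs under the same ID, so the central tool
# can reconstruct the request's full path end-to-end.
log("orders", "order received")
log("payments", "charge authorized")
```

Because both lines carry the same `correlation_id`, the centralized tool can group them into a single trace even though they came from different services.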

At first glance, centralizing observability data might appear to clash with not sharing databases across microservices. The difference is that operational databases should remain independent to maintain loose coupling, whereas observability data is write-only from each service’s perspective: no service reads another’s logs to do its work, so consolidating it into one holistic view of the system doesn’t create coupling.

Expert Insight:

Internally with my team, I talk about the topic of “shared concerns”: notions that span service boundaries in decomposed applications. System health is one of those. Even though you’ve broken your app up into multiple services, it’s still one app, and its health is a composite property. Centralizing the observability allows you to view system health in composite rather than having to stitch it together yourself from separate service-level observability systems.

Authorization is another shared concern. You can split your authorization logic up across services, but there’s still one policy. My access to a file might depend on my global role in the org, my position on a team, the folder that contains the file, whether I’m currently on shift, etc. That could span 2 or 3 services. We’ll touch on this next.

5. Carefully Consider Your Authorization Options

Authorization in microservice architectures is complex.

Authorization rules often require data from multiple different services. For example, a basic question like "can this user edit this document?" may depend on the user's team and role from a user service, and the folder hierarchy of the file from a document service.

There have typically been three high-level patterns for using data from multiple services in authorization decisions:

  • Leave data where it is. Each service is responsible for authorization in its domain. For example, a documents service is responsible for determining whether a given user is allowed to edit a document. If it needs data from a user service, it gets it through an API call.
  • Use a gateway to attach the data to all requests. The API gateway decodes a user's roles and permissions from a JWT, and that data is sent along with every request to every microservice. For example, the documents service receives a request which indicates that the given user is an admin, so they are allowed to edit any document.
  • Centralize authorization data. Create an authorization service that is responsible for determining whether a user can perform an action on a resource. Add any data that is needed to answer authorization questions to the service.

At Oso, we recently launched support for a new pattern: local authorization. With local authorization, there is still a centralized authorization service that stores the authorization logic, but it no longer needs to store the authorization data.

Each approach comes with tradeoffs. Letting each service be responsible for authorization in its own domain can be better for simpler applications. Applications that only rely on role-based access control can do well with the API gateway approach. Centralizing authorization data can take substantial upfront work, but can be much better for applications with complex authorization models.
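As a rough sketch of the API gateway pattern above, here is what a role-based check might look like inside a service, assuming the gateway has already validated the JWT and attached the claims to the request (all field names here are hypothetical):

```python
def can_edit_document(claims: dict, doc: dict) -> bool:
    """Role-based check using only the claims carried with the request.

    `claims` is the decoded JWT payload the gateway attached;
    `doc` is the document record the service already owns.
    """
    # Admins may edit any document.
    if "admin" in claims.get("roles", []):
        return True
    # Non-admins may edit only documents they own.
    return claims.get("user_id") == doc.get("owner_id")

claims = {"user_id": "u42", "roles": ["editor"]}
print(can_edit_document(claims, {"owner_id": "u42"}))  # True
print(can_edit_document(claims, {"owner_id": "u99"}))  # False
```

Note the limitation this illustrates: the check can only use data that fits in the token. Rules that depend on folder hierarchies or team membership held by other services push you toward the centralized or local authorization patterns instead.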

6. Use an API Gateway for HTTP

An API gateway acts as a single point of entry for external HTTP requests to your microservices. It simplifies interactions by providing a clean, consistent interface for web and mobile apps, hiding the complexity of your backend services.

Use your API gateway to route external requests to the correct microservices. It should manage authentication and authorization to secure interactions. It also handles HTTP logging for easier monitoring and troubleshooting. Finally, it applies rate-limiting to protect your services from excessive load.

Avoid using the API gateway for internal microservice-to-microservice communication. That is best handled by direct service calls or a service mesh.
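One of the gateway responsibilities above, rate limiting, is commonly implemented as a token bucket. Real gateways expose this as configuration rather than code, but a minimal sketch of the mechanism looks like this:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of three requests against a bucket that allows bursts of two:
bucket = TokenBucket(rate=5, capacity=2)
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```

In a gateway you would keep one bucket per client (e.g., per API key) and return HTTP 429 when `allow()` is false.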

Expert Insight:

"I use an API Gateway primarily to abstract external requests. For example, mapping endpoints like myapp.com/users to my user service, while enforcing authentication, rate-limiting, and logging. Internal calls between microservices don't need to go through the gateway; instead, communicate directly or via a service mesh."

7. Use the Right Communication Protocol Between Services

Use the right communication protocols for interactions between your microservices. API gateways are suitable for external access, but internally, choose protocols that match your specific needs.

HTTP/REST is ideal for synchronous communication. It is simple, widely supported, and easy to implement, making it perfect for typical request-response scenarios like fetching user profiles.

For efficient, high-performance communication, consider using gRPC. It supports binary communication with automatic schema validation and code generation. This makes gRPC particularly suitable for internal services that need rapid data transfer or streaming, such as log streaming or handling large datasets.

Message queues, like Kafka or RabbitMQ, are excellent for asynchronous communication. Publishers send messages to the queue, and subscribers listen for new messages in the queue and process them accordingly. This helps decouple services, enabling each service to process messages at its own pace. Message queues effectively manage backpressure. They are especially useful in event-driven architectures and real-time processing scenarios, like order processing workflows.
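The publisher/subscriber decoupling can be illustrated with Python's in-process `queue.Queue` standing in for a real broker like Kafka or RabbitMQ (the sentinel-based shutdown is just a convenience for this sketch):

```python
import queue
import threading

# In-process stand-in for a broker like RabbitMQ or Kafka.
orders = queue.Queue()
processed = []

def worker():
    """Subscriber: consumes messages at its own pace."""
    while True:
        msg = orders.get()
        if msg is None:  # shutdown sentinel for this demo
            break
        processed.append(f"processed {msg}")
        orders.task_done()

t = threading.Thread(target=worker)
t.start()

# Publisher: fire-and-forget; it never waits on the consumer.
for order_id in ("A1", "A2", "A3"):
    orders.put(order_id)

orders.put(None)
t.join()
print(processed)  # ['processed A1', 'processed A2', 'processed A3']
```

The key property is the same as with a real broker: the publisher returns immediately, and the queue absorbs backpressure when the consumer falls behind.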

Expert Insight:

Use message queues when strong decoupling and scalability are priorities. If your primary concern is quickly transferring large amounts of data, then gRPC is typically the better choice.

8. Adopt a Consistent Authentication Strategy

Authentication can be tricky in a microservices architecture. Not only do services need to authenticate the users that are making requests, they also need to authenticate with other services if they are communicating with those services directly.

If you are using an API gateway, your API gateway should handle authenticating users. JSON web tokens (JWTs) are a common pattern for authentication in HTTP. You can also use access tokens, but you would need an access token service.

If your microservices communicate with each other via HTTP or some other protocol that doesn't explicitly require authentication, services should also authenticate incoming requests to verify they come from other services, not from potentially malicious users.
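One common way to authenticate service-to-service requests is to sign request bodies with a shared secret. Here is a minimal HMAC sketch using Python's standard library (the header name and secret handling are illustrative; mTLS or service-issued JWTs are common alternatives):

```python
import hashlib
import hmac

SHARED_SECRET = b"rotate-me-regularly"  # in production: from a secrets manager

def sign(body: bytes) -> str:
    """Caller attaches this signature, e.g. in an X-Signature header."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Receiver rejects requests that weren't signed by a peer service."""
    expected = sign(body)
    # compare_digest avoids leaking timing information.
    return hmac.compare_digest(expected, signature)

body = b'{"order_id": "A1"}'
sig = sign(body)
print(verify(body, sig))         # True
print(verify(b"tampered", sig))  # False
```

Whichever mechanism you pick, apply it uniformly: a single unauthenticated internal endpoint undermines the rest.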

9. Use Containers and a Container Orchestration Framework

Each instance of a service should be deployed as a container. Containers ensure services are isolated and allow you to constrain the CPU and memory that a service uses. Containers also provide a way to consistently build and deploy services regardless of which language they are written in.

Orchestration frameworks make it easier to manage containers. They let you easily deploy new services, and increase or decrease the number of instances of a service. Kubernetes has long been the de facto container orchestrator, but managed offerings like ECS on Fargate and Google Cloud Run enable you to easily deploy your microservices architecture to a cloud provider’s infrastructure with much less complexity. They provide UIs and CLIs to help you manage and monitor all your microservices. Container orchestration frameworks give you a lot of logging, monitoring, and deployment tools, which can substantially reduce the complexity of deploying microservices architectures.

10. Run Health Checks on Your Services

To better support centralized monitoring and orchestration frameworks, each service should have a health check that returns the high-level health of the service. For example, a /status or /health HTTP API endpoint might return whether the service is responsive. A health check client then periodically runs the health check, and triggers alerts if a service is down.

Health checks help monitoring and alerting. You can see the health of all your microservices on one screen and receive alerts if a service is unhealthy. Combined with patterns like a service registry, health checks can enable your architecture to avoid sending requests to unhealthy services.
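A health endpoint often aggregates checks on the service's own dependencies. Here is a minimal sketch of what a /health handler might compute (the payload shape is illustrative, not a standard):

```python
def health_status(checks: dict) -> dict:
    """Aggregate dependency checks into one /health payload.

    `checks` maps a dependency name to a zero-argument callable
    that returns True when the dependency is reachable.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = "ok" if check() else "fail"
        except Exception:
            results[name] = "fail"  # an unreachable dependency must not crash /health
    status = "healthy" if all(v == "ok" for v in results.values()) else "unhealthy"
    return {"status": status, "checks": results}

print(health_status({"db": lambda: True, "cache": lambda: False}))
# {'status': 'unhealthy', 'checks': {'db': 'ok', 'cache': 'fail'}}
```

An HTTP handler would serialize this dict and return 200 when healthy, 503 otherwise, which is the signal orchestrators like Kubernetes use to restart or stop routing to the instance.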

11. Maintain Consistent Practices Across Your Microservices

The biggest misconception about a microservices architecture is that each service can do whatever it wants. While hypothetically true, microservices require consistent practices to remain effective. Some, like the single responsibility principle, apply to all microservices architectures. Others, like how to handle authorization, may vary between implementations, but should be consistent within a given microservices architecture. For example, if you decide each microservice is responsible for updating a centralized authorization service, you need to carefully ensure that every microservice is sending updates and authorization requests to that service. Similarly, each microservice should log using a consistent format to all your architecture’s log sinks, and define a consistent health check that ties in to your orchestration framework.

Ensuring that every service abides by your microservices architecture's best practices will help your team experience the benefits of microservices and avoid potential pitfalls.

12. Apply Resiliency and Reliability Patterns

There are several patterns you can use to minimize the impact of failures and maintain system stability.

Circuit Breaker Pattern

The Circuit Breaker pattern helps prevent cascading failures by temporarily stopping requests to services that are failing or slow. Common tools like Resilience4j or Polly can handle this automatically, ensuring that one faulty service doesn’t disrupt your entire system.
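A minimal, illustrative circuit breaker might look like the following; production code should use a maintained library like the ones above rather than hand-rolling this:

```python
import time

class CircuitBreaker:
    """Opens after `max_failures` consecutive failures, then fails fast
    until `reset_after` seconds have passed (a simplified state machine)."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Open: reject immediately instead of hammering the
                # failing downstream service.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call

        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Wrapping every outbound call to a given service in one shared breaker is what stops a single slow dependency from tying up threads across the whole system.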

Retry Mechanisms

A Retry Mechanism automatically retries failed operations, usually employing exponential backoff. This is especially useful for handling temporary issues such as network glitches or brief outages without manual intervention.
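A retry helper with exponential backoff can be sketched in a few lines (in practice you would also add random jitter so many clients don't retry in lockstep):

```python
import time

def retry(fn, attempts: int = 3, base_delay: float = 0.1):
    """Retry `fn` with exponential backoff: 0.1s, 0.2s, 0.4s, ..."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# A flaky call that succeeds on the third try, simulating a network glitch.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient glitch")
    return "ok"

print(retry(flaky))  # ok
```

As the next section discusses, only retry operations that are safe to repeat.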

Bulkhead Isolation

Bulkhead Isolation is another important technique. It allocates dedicated resources to individual services, ensuring that if one service becomes overloaded or fails, it won’t negatively impact other services. This isolation keeps your system stable even during unexpected issues.

Finally, implement clear Timeouts and Fallbacks to define how long services should wait for responses and what alternative responses should be provided when delays or errors occur. This ensures users experience graceful degradation rather than complete failure.

13. Ensure Idempotency of Microservices Operations

You only want to implement retries if your operations are idempotent. An idempotent service is one where performing the same operation multiple times will always produce the same result. Without idempotency, retries can result in unintended side effects like duplicated transactions or inconsistent data.

One way to achieve idempotency is by using idempotency keys: unique identifiers attached to each operation. These keys allow services to recognize and safely ignore duplicate operations. When coupled with a message queue like RabbitMQ, this can be a great way to prevent duplicate processing.

For example, consider an order-processing service that receives multiple “Create Order” messages due to network retries. By including an idempotency key, the service can recognize and discard repeated messages, ensuring the order is created only once.
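The idempotency-key check can be sketched like this (the in-memory dict stands in for a shared store such as Redis, and the names are illustrative):

```python
processed_keys: dict = {}  # in production: a shared store like Redis

def create_order(idempotency_key: str, payload: dict) -> dict:
    """Process each order exactly once, keyed by the client-supplied key."""
    if idempotency_key in processed_keys:
        # Duplicate delivery: return the original result, create nothing new.
        return processed_keys[idempotency_key]
    order = {"order_id": len(processed_keys) + 1, **payload}
    processed_keys[idempotency_key] = order
    return order

first = create_order("key-123", {"item": "pizza"})
again = create_order("key-123", {"item": "pizza"})  # network retry
print(first == again)  # True: the duplicate did not create a second order
```

Note that the duplicate returns the *same* response as the original, so the retrying caller can't tell the difference, which is exactly the idempotency property the retry mechanisms in the previous section depend on.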

FAQ: Implementing and managing microservices

1. How to create microservices?

Creating microservices involves designing small, isolated services that communicate over well-defined APIs. Each microservice should be developed around a single business function, using the technology stack that best suits its requirements. Ensure that each microservice has its own database to avoid data coupling and maintain a decentralized data management approach, following microservices database best practices.

2. How to implement microservices?

Implementing microservices involves breaking down an application into small, independently deployable services, each responsible for a specific function. Start by defining clear service boundaries based on business capabilities, ensuring each microservice adheres to the Single Responsibility Principle. Use containers for consistent deployment environments and orchestrate them with tools like Kubernetes or ECS on Fargate for managing their lifecycle.

3. How to deploy microservices?

Deploying microservices effectively requires a combination of containerization and an appropriate orchestration platform. Containers encapsulate the microservice in a lightweight, portable environment, making them ideal for consistent deployments across different infrastructures. Use orchestration tools like Kubernetes to automate deployment, scaling, and management of your containerized microservices, ensuring they are monitored, maintain performance standards, and can be scaled dynamically in response to varying loads.

4. How to secure microservices?

Securing microservices requires implementing robust authentication and authorization strategies to manage access to services and data. Utilize API gateways to handle external requests securely and ensure internal communications are authenticated using standards like JSON Web Tokens (JWTs). Adopt authorization models that manage permissions effectively across different services without compromising the scalability and independence of each microservice.
