In this post, I’ll show how to build authorization in microservices. I’ve talked to dozens of engineering teams who have approached this problem in different ways (and I’ve written about some of them). It can be hard to know which approach to use.
Authorization is always hard to get right, but it’s particularly tricky when you’re building with microservices. In a microservices backend, authorization decisions often depend on bits of data from different systems. User roles, object relationships, and resource attributes all contribute to authorization decisions, but each piece of information might live in a separate place.
Authorization in microservices requires you to share certain data between services, often roles or group assignments. But you don't want to centralize every piece of data that might affect an authorization decision. Authorization in microservices comes down to sharing some data while keeping other data local to the services that use it.
In this tutorial I’ll show the approach I’ve seen work best. I’ll break it down into three steps:
- Write authorization logic outside of any one microservice. In this step, we assume that we have access to all of the data that we need to perform a permission check (regardless of where it comes from).
- Store any shared data that multiple microservices need for authorization. If there are roles or relationships needed by multiple services, we need to put those in a central place.
- Enforce authorization by passing local data as context. “Local” data is not needed by multiple services. Enforcement combines the local data with the logic from step (1) and the shared data from step (2).
To demonstrate this process, we’ll use a specific and hopefully illustrative example. We’ll also use Oso Cloud, an authorization-as-a-service product that I work on at Oso.
Oso Cloud lets us store roles in one place and use those roles to make authorization decisions from any app using an HTTP API. Importantly, Oso Cloud also lets us use contextual data while evaluating permissions, which means we can avoid sharing our “local” data needed by only one service.
To follow along, you’ll need an Oso Cloud API key, which you can get after you sign up for a free sandbox account.
Our example: GitHub Actions
Imagine we’re building a GitHub clone with a microservices backend. So far, we’ve got just two services:
- A Repositories service, for handling interactions with repositories.
- An Actions service, for creating, viewing, and running CI actions (a la GitHub Actions).
We’ll add a feature to the system, and discuss how to authorize access when the backend is built as separate microservices.
We’re on the Actions team, and we’re implementing a feature allowing users to “re-run” CI actions. As a part of that we need to check a permission: can the current user re-run this action?
Our product requirements state that a user may re-run a CI action if the action belongs to a repository where the user has at least a “maintainer” role.
Checking this permission depends on answering the following two questions:
- What repository does the action belong to?
- What role does the user have on that repository?
In a monolith, the answer to both of these questions would live in one database. An authorization system could use some SQL joins to load the necessary information, and make a decision based on that.
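As a sketch, the monolith version of the check might look like this. Plain `Map`s stand in for the two database tables, and all names here are illustrative:

```typescript
// Hypothetical monolith check: both tables live in one database, so a
// single "join" answers the whole question. Maps stand in for SQL
// tables here; all names are illustrative.

// actions table: action id -> repository id
const actionRepos = new Map<string, string>([["action_1", "repo_1"]]);

// repository_roles table: "userId/repoId" -> role
const repoRoles = new Map<string, string>([["alice/repo_1", "maintainer"]]);

function canRerun(userId: string, actionId: string): boolean {
  // "JOIN actions ..." -- which repository does the action belong to?
  const repoId = actionRepos.get(actionId);
  if (!repoId) return false;
  // "JOIN repository_roles ..." -- what role does the user have there?
  return repoRoles.get(`${userId}/${repoId}`) === "maintainer";
}
```

Once the tables live in different services, no single query can do this join, which is exactly the problem we're about to work through.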
But in our microservices setup, the ownership of that data doesn’t clearly reside in one service: the Repositories service controls all user roles on repositories, and the Actions service controls the relationships between actions and their repositories. So we’ll need some way to make authorization decisions using data from both services.
💡 In reality, we’d probably have a more complex model — repositories might belong to organizations and users’ roles on organizations affect their repository permissions. We’ll keep it simple for this tutorial, but the same principles apply.
Step one: write our authorization logic outside of the microservices environment
Our requirements state that users can re-run actions if they have a maintainer role on the action’s repository. Somewhere we need to express that requirement as code, which we’ll do in this step. We start by completely avoiding the complexity of getting the data from both our services: that comes in the following steps.
We use the Polar language to write the logic — it’s a declarative language purpose-built for defining authorization policies. We’ll start by creating a file `policy.polar` with the following contents:
```polar
actor User {}

resource Repository {
  roles = ["maintainer"];
}

resource Action {
  permissions = ["rerun"];
  relations = { repository: Repository };

  # Users can rerun an action if they are a
  # maintainer on the repository
  "rerun" if "maintainer" on "repository";
}
```
I won’t go into the details of the syntax here, but the important bit is `"rerun" if "maintainer" on "repository"`: it tells Oso that if a CI action belongs to a repository, and a user has a `maintainer` role on that repository, then the user can re-run that action.
💡 In real life, we’d probably have a much longer policy describing other pieces of our authorization system. This code would describe permissions throughout our applications, and would have many more resources (like Organizations, Teams, Issues, etc). But we’re keeping it simple in this post so that we can focus on how authorization data moves around.
To load the policy, we’ll upload it using the Oso Cloud CLI:
```
▶ oso-cloud policy policy.polar
Policy successfully loaded.
```
Testing the policy
Before touching our application code, we’ll want to test that our authorization logic in `policy.polar` is implemented correctly. For now, we’ll use test data. When we plug everything together in the next steps, we’ll use data from the Repositories and Actions microservices to make authorization decisions.
We can test an authorization check using the Oso Cloud CLI: can the User with ID `test_user` re-run the Action with ID `test_action`? As context, we’ll tell Oso that:

- `test_action` belongs to `test_repo`, and
- `test_user` has a `maintainer` role on `test_repo`.
```
▶ oso-cloud authorize User:test_user rerun Action:test_action \
    --context "has_relation Action:test_action repository Repository:test_repo" \
    --context "has_role User:test_user maintainer Repository:test_repo"
Allowed
```
That action is allowed. Our policy works.
But we passed all the data as context, including the user’s role on the repository. Remember, we’re working on the Actions service — we don’t necessarily know what role the user has on an action’s repository!
💡 The extra `--context` arguments tell Oso Cloud about contextual information to include for the duration of the request. The information we pass here is not stored in Oso Cloud — it’s just used for this authorization check.
Step two: store shared data needed by multiple microservices
In our case, the “shared data” that multiple services need for authorization are repository roles.
The Repositories service controls repository roles (i.e. the list of who maintains which repositories): users update repository roles inside of the “Settings” page of a particular repository. Other services (including our Actions service) depend on those roles to check permissions. Because of this dependency, it’s necessary to put repository roles in a central place (in our case, Oso Cloud).
To add a role assignment to Oso Cloud, we’ll add a bit of code to the Repositories service wherever we set users as maintainers:
```typescript
// add_maintainer.ts
// Store the role assignment in Oso using oso.tell()
await oso.tell(
  "has_role",
  { type: "User", id: user.id },
  "maintainer",
  { type: "Repository", id: repo.id }
);
```
We’ll call that code from the code for our Settings endpoint: whenever a user adds another user as a maintainer, that role assignment should be added to Oso Cloud using the `oso.tell()` method.
💡 We’re writing new roles to Oso Cloud. If our app has existing repository role assignments, we’d want to migrate them all into the central place before shipping this feature to production.
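A one-off backfill might look like the sketch below. The `OsoLike` interface and the row shape are assumptions made so the migration logic is self-contained; in production you’d pass the real Oso Cloud client and read rows from the Repositories database:

```typescript
// Hypothetical backfill: copy every existing repository role assignment
// into the central store. `OsoLike` stubs the one SDK method we use so
// the sketch is self-contained; the row shape is an assumption.

type OsoValue = string | { type: string; id: string };

interface OsoLike {
  tell(predicate: string, ...args: OsoValue[]): Promise<void>;
}

interface RoleRow {
  userId: string;
  repoId: string;
  role: string;
}

async function backfillRepoRoles(oso: OsoLike, rows: RoleRow[]): Promise<number> {
  let migrated = 0;
  for (const row of rows) {
    // Same fact shape we write in add_maintainer.ts
    await oso.tell(
      "has_role",
      { type: "User", id: row.userId },
      row.role,
      { type: "Repository", id: row.repoId }
    );
    migrated++;
  }
  return migrated;
}
```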
After this step, repository role assignments are centralized in Oso Cloud, while everything else stays in the services that own it.
Step three: enforce authorization using local data as context
Now that we have roles stored in a central place, we have to use those roles to check permissions in the Actions service. To reiterate, a user’s permission to re-run an action in our service depends on two pieces of information:
- What repository does the action belong to?
- What role does the user have on that repository?
The second question (what role does the user have?) is answered by information we inserted into Oso Cloud in the previous step. We’ll need to answer the first question with data from the Actions service itself.
We’ll assume that no other services care about the relationships between actions and repositories — if they did, then we’d also need to store the relationship so that it’s shared between services (just like we did with repository roles). Given that action → repository relationships are not shared between services, we can pass those relationships as context during an authorization request, much like we did in step 1 when we were testing the policy.
To check a user’s permission to `rerun` an action, we’ll need to call `oso.authorize()` from our “re-run actions” endpoint within the service itself. Note that we pass the relationship between the CI action and its repository as context:
```typescript
// rerun_action.ts
// Send the action -> repository relation as context
const actionRepoRelation = [
  "has_relation",
  { type: "Action", id: action.id },
  "repository",
  { type: "Repository", id: action.repositoryId }
];

// This oso.authorize() call asks:
// Given that the Action with ID = action.id belongs to
// the Repository with ID = action.repositoryId, can the
// User with ID = user.id perform "rerun" on that Action?
if (!await oso.authorize(
  { type: "User", id: user.id },
  "rerun",
  { type: "Action", id: action.id },
  [actionRepoRelation]
)) {
  throw new Error("Unauthorized");
}

// ... code that re-runs an action ...
```
With that check in place, our “re-run CI actions” feature is secure: Oso Cloud checks the user’s permission using a combination of contextual information (the action → repository relationship) and stored information (user → repository role).
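To make that data flow concrete, here’s a toy evaluation. This is not how Oso Cloud works internally, just an illustration: the decision needs one stored fact (the role, written by the Repositories service) and one context fact (the relation, supplied per request by the Actions service):

```typescript
// Toy illustration of the data flow (the real evaluation happens inside
// Oso Cloud): union the stored facts with the per-request context facts,
// then answer the two questions from earlier.

type Fact = [string, string, string, string];

// Stored centrally by the Repositories service (step two)
const storedFacts: Fact[] = [
  ["has_role", "User:alice", "maintainer", "Repository:repo_1"],
];

function canRerun(user: string, action: string, contextFacts: Fact[]): boolean {
  const facts = [...storedFacts, ...contextFacts];
  // Question 1: what repository does the action belong to? (context fact)
  const repo = facts.find(
    ([p, a, rel]) => p === "has_relation" && a === `Action:${action}` && rel === "repository"
  )?.[3];
  if (!repo) return false;
  // Question 2: what role does the user have on that repository? (stored fact)
  return facts.some(
    ([p, u, role, r]) =>
      p === "has_role" && u === `User:${user}` && role === "maintainer" && r === repo
  );
}
```

Without the context fact, the check fails even though the role is stored — both pieces are required, which is why the Actions service must send its local data on every request.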
Authorization in microservices: sharing some data and not other data
The challenge with authorization in microservices is balancing the need to share certain data (like roles) with the desire not to put all your data in one database, which would violate the separation of concerns (and defeat the purpose of using microservices!).
The trick is centralizing the minimum: only the bits of authorization-relevant data that multiple services actually need. In our case, we centralized repository roles. In other, more realistic and complex systems, that might include organization roles, relationships between organizations and repositories, and certain authorization settings.
Importantly, we didn’t need to centralize the relationships between CI Actions and their repositories — that data only matters for authorization in the Actions service, so we can send it as context during each authorization request.
Note that it’s important to use a system that can combine shared data with local data. Here, Oso Cloud handles this for us; without an off-the-shelf mechanism like Oso Cloud, you’ll end up building your own. Some teams use authentication tokens (e.g. JWTs) to share this data, but that becomes a limitation when the shared data is too complex to fit in a token. Other teams share everything, putting all authorization-relevant data into one system. This can work, but it asks a lot of the engineers on each team: they have to synchronize arbitrary data (relationships, attributes, and more) with a central system responsible for permission checks.
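To see where the token approach strains, here’s a sketch of roles embedded in JWT claims. The claim shape is purely illustrative, and no specific library’s API is assumed (signature verification is omitted — that belongs to your auth middleware):

```typescript
// Sketch of the JWT approach: role assignments are embedded in the
// token's claims at login, and each service reads them locally. The
// claim shape is an illustrative assumption.

function decodeJwtPayload(token: string): Record<string, unknown> {
  const payload = token.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// An unsigned demo token whose claims carry repository roles. The
// limitation shows up here: every role the user holds must fit in the
// token, which breaks down for users with many repositories.
const claims = { sub: "alice", repo_roles: { repo_1: "maintainer" } };
const demoToken = [
  Buffer.from(JSON.stringify({ alg: "none", typ: "JWT" })).toString("base64url"),
  Buffer.from(JSON.stringify(claims)).toString("base64url"),
  "",
].join(".");

const repoRoles = decodeJwtPayload(demoToken).repo_roles as Record<string, string>;
```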
It doesn’t end here. Perhaps we’ll want to define a `read` permission for CI actions. Perhaps all repository `reader`s should get that permission. Perhaps all repository `maintainer`s should be considered `reader`s. Or perhaps we do want to add Organizations and Teams into the mix. There’s always more to build, but we should always follow the same steps:
- Write the authorization logic and test it in isolation (without yet integrating it into the services themselves).
- Identify which data is needed by multiple microservices, and share it centrally.
- Add authorization enforcement to the services that need it, passing as context whatever local data is necessary for those permission checks.
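As a sketch of step (1) for those follow-ups, the policy might grow like this, assuming maintainers should inherit everything readers can do:

```polar
actor User {}

resource Repository {
  roles = ["reader", "maintainer"];

  # Every maintainer is also a reader
  "reader" if "maintainer";
}

resource Action {
  permissions = ["rerun", "read"];
  relations = { repository: Repository };

  "rerun" if "maintainer" on "repository";
  "read" if "reader" on "repository";
}
```

The enforcement pattern stays the same: shared role data lives in Oso Cloud, and each service passes its local relationships as context.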