CI/CD
Continuous Integration and Continuous Delivery (CI/CD) complement a robust local development workflow by automatically testing your code after you push it to your shared repository and deploying it after merging to main. This guide defines the core elements of a CI process for authorization code using Oso Cloud.
A typical CI/CD workflow looks something like this:
- Push your branch to your remote repository
- Validate policy syntax
- Run policy tests
- Open a Pull Request
- Run functional application tests against the new policy
- Merge the branch to main
- Deploy the new code to production
We'll illustrate the configuration with GitHub Actions, using straightforward shell commands to make it easier to translate to other platforms.
An introduction to GitHub Actions is outside the scope of this document. You can read their quickstart to learn the basics.
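To give a sense of how the pieces fit together, here is a minimal workflow skeleton. The file path, workflow name, and trigger choices are illustrative; the jobs themselves are defined in the sections that follow.

```yaml
# .github/workflows/authorization-ci.yml (illustrative skeleton)
name: authorization-ci
on: [push, pull_request]
jobs:
  validate-policy-syntax:
    # see "Validate policy syntax" below
  test-policy-with-oso-dev-server:
    # see "Run policy tests" below
```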
Policy-level testing
Any time you push a branch to your remote repository, you should validate the syntax and run the policy tests (policy tests are analogous to unit tests). These tests are lightweight and don't require additional infrastructure, so running them frequently can give you fast feedback without creating a lot of extra overhead.
The examples below always install the latest versions of the Oso Cloud CLI and Oso Dev Server. If you want to pin the tools to a specific version, modify the installation steps as follows.
Oso Cloud CLI
```shell
curl -L https://cloud.osohq.com/install.sh | OSO_CLI_VERSION="x.y.z" bash
```
Oso Dev Server
```shell
curl https://oso-local-development-binary.s3.amazonaws.com/x.y.z/oso-local-development-binary-linux-x86_64.tar.gz --output oso-dev-server.tar.gz
```
In each case, replace "x.y.z" with the version that you want to install.
The examples below all use a single policy file. If your policy is split into multiple files, pass them all to the appropriate oso-cloud command in a single call. The client will concatenate the files into a single policy to ensure that they are consistent with one another.

```shell
oso-cloud validate policy/policy.polar policy/policy2.polar policy/policy3.polar
oso-cloud test policy/policy.polar policy/policy2.polar policy/policy3.polar
oso-cloud policy policy/policy.polar policy/policy2.polar policy/policy3.polar
```
Validate policy syntax
To validate policy syntax, you need to:
- Check out your code
- Install the oso-cloud CLI
- Run oso-cloud validate against your policy files
Here's a GitHub Actions job that validates policy syntax for a policy file located at policy/policy.polar in a repository.
```yaml
validate-policy-syntax:
  runs-on: ubuntu-latest
  steps:
    - name: Check out repository code
      uses: actions/checkout@v4
    - name: Install Oso Cloud CLI
      run: |
        curl -L https://cloud.osohq.com/install.sh | bash
    - name: Validate Polar syntax
      run: |
        oso-cloud validate policy/policy.polar
```
Run policy tests
There are two ways to run policy tests in CI:
- Run the tests against the Oso Dev Server
- Run the tests against an Oso Cloud environment
Running against the Oso Dev Server requires a bit more setup, but doesn't require any external dependencies (other than installing the tools). Running against an Oso Cloud environment requires less setup, but makes your tests dependent on an external service.
It's good practice to minimize external dependencies in CI, so we recommend running against the Oso Dev Server whenever practical.
Against the Oso Dev Server
To run tests against the Oso Dev Server, your CI job will:
- Check out your code
- Install the oso-cloud CLI
- Install the Oso Dev Server
- Point the oso-cloud CLI at the Oso Dev Server
- Run oso-cloud test against your policy files
Here's an example GitHub Actions job.
```yaml
test-policy-with-oso-dev-server:
  runs-on: ubuntu-latest
  env:
    ARCHIVE_URL: "https://oso-local-development-binary.s3.amazonaws.com/latest/oso-local-development-binary-linux-x86_64.tar.gz"
    OUTPUT_FILENAME: "oso-dev-server.tar.gz"
    OSO_URL: "http://localhost:8080"
    OSO_AUTH: "e_0123456789_12345_osotesttoken01xiIn"
  steps:
    - name: Check out repository code
      uses: actions/checkout@v4
    - name: Install Oso Cloud CLI
      run: |
        curl -L https://cloud.osohq.com/install.sh | bash
    - name: Install Oso Dev Server
      run: |
        curl ${ARCHIVE_URL} --output ${OUTPUT_FILENAME}
        tar -xzf ${OUTPUT_FILENAME}
        rm ${OUTPUT_FILENAME}
        chmod 0700 standalone
    - name: Start Oso Dev Server in the background
      run: |
        ./standalone &
    - name: Run policy tests against the Oso Dev Server
      run: |
        oso-cloud test policy/policy.polar
```
Against an Oso Cloud environment
To run tests against an Oso Cloud environment, your CI job will:
- Check out your code
- Install the oso-cloud CLI
- Point the oso-cloud CLI at your Oso Cloud environment
- Run oso-cloud test against your policy files
Here's an example GitHub Actions job.
```yaml
test-policy-on-oso-cloud:
  runs-on: ubuntu-latest
  env:
    OSO_AUTH: ${{ secrets.OSO_CLOUD_READ_ONLY_KEY }}
  steps:
    - name: Check out repository code
      uses: actions/checkout@v4
    - name: Install Oso Cloud CLI
      run: |
        curl -L https://cloud.osohq.com/install.sh | bash
    - name: Run policy tests against Oso Cloud
      run: |
        oso-cloud test policy/policy.polar
```
Your Oso Cloud API keys grant access to your Oso Cloud environments, so they should be treated as sensitive data. Don't store them as plaintext in your CI configuration. Instead, use the secrets mechanism of your CI provider, for example:
```yaml
env:
  OSO_AUTH: ${{ secrets.OSO_CLOUD_READ_ONLY_KEY }}
```
Because policy tests don't update your policy, you should use a read-only key to run policy tests.
Functional application tests
You should incorporate your policy code into your functional application tests so that your latest application code is tested against your latest authorization code. There are two ways to do this:
- Use a shared testing environment in Oso Cloud.
- Use the Oso Dev Server to create isolated testing environments.
A shared testing environment in Oso Cloud doesn't require you to manage any additional infrastructure, but you will have to set up a mechanism to ensure that the policy isn't updated in the middle of someone's run. Using isolated testing environments in the Oso Dev Server allows you to run multiple functional test suites in parallel without contention, but requires that you set up a server that your test application deployment can reach.
Against an Oso Cloud environment
The easiest way to run functional acceptance tests against your policy is to use an Oso Cloud environment. Currently, it is not possible to create Oso Cloud environments programmatically, so you should set up a dedicated persistent environment for this purpose in the Oso Cloud UI.
Because this environment will be shared by any jobs that run functional tests, you should make sure that those jobs are queued so they don't clobber one another. If bob pushes up a change to his PR while alice has a functional test running, you don't want to update the shared environment with bob's policy until alice's run is finished. Otherwise, alice will see false failures.
Different CI providers provide different ways of doing this. Check the documentation for yours to configure your functional tests so that they don't run in parallel.
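In GitHub Actions, for example, the built-in concurrency setting can serialize these runs. A sketch (the group name is arbitrary):

```yaml
# Queue functional test runs so only one touches the shared environment at a time.
concurrency:
  group: shared-oso-test-environment
  cancel-in-progress: false
```

With cancel-in-progress set to false, a newly triggered run waits for the current one to finish instead of killing it mid-test.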
Once you've put a queuing mechanism in place, adding the policy to your functional test suite consists of the following steps:
- Check out your code
- Install the Oso Cloud CLI
- Point the Oso Cloud CLI at your testing Oso Cloud environment
- Deploy the policy to the test environment
Here's a GitHub Actions job that does this.
```yaml
push-policy-to-oso-cloud:
  runs-on: ubuntu-latest
  env:
    OSO_AUTH: ${{ secrets.OSO_CLOUD_READ_WRITE_KEY }}
  steps:
    - name: Check out repository code
      uses: actions/checkout@v4
    - name: Install Oso Cloud CLI
      run: |
        curl -L https://cloud.osohq.com/install.sh | bash
    - name: Push policy to Oso Cloud
      run: |
        oso-cloud policy policy/policy.polar
```
If your policy is split into multiple files, then pass them all in a single oso-cloud policy call:

```yaml
- name: Push policy to Oso Cloud
  run: |
    oso-cloud policy policy/policy.polar policy/policy2.polar ... policy/policyN.polar
```
After the policy is updated, you can run your functional tests. The app deployment in your functional test environment should also be configured to point to the testing Oso Cloud environment. You can do that by defining the correct API key when you initialize the Oso Cloud SDK in your application. See the relevant client docs for more details.
You should use a different API key for your application than you use for updating the policy in CI. That way, if one is compromised, you don't have to update both places.
Against an isolated Oso Dev Server environment
As your team grows and you have more people contributing to your codebase, you may find that a single shared test environment no longer meets your needs. The test queue can grow prohibitively long. If you don't use a queue, then tests may fail frequently because the policy is overwritten.
When this happens, you can use the Oso Dev Server to create isolated environments for policy tests. The most straightforward way to do this is to deploy a persistent Oso Dev Server to an internally accessible location and use the test_environment endpoint to create a new test environment for each run. The steps are:
- Check out your code
- Set OSO_URL to point to the deployed Oso Dev Server (you don't need an API key to create a test environment)
- Call the test_environment endpoint to create a new environment and capture the API key in OSO_AUTH
- Push your policy to the new environment
Here's how this looks in GitHub Actions, with a locally deployed Oso Dev Server for illustration.
```yaml
create-new-environment-on-oso-dev-server:
  runs-on: ubuntu-latest
  env:
    ARCHIVE_URL: "https://oso-local-development-binary.s3.amazonaws.com/latest/oso-local-development-binary-linux-x86_64.tar.gz"
    OUTPUT_FILENAME: "oso-dev-server.tar.gz"
    OSO_URL: "http://localhost:8080"
  steps:
    - name: Check out repository code
      uses: actions/checkout@v4
    - name: Install Oso Cloud CLI
      run: |
        curl -L https://cloud.osohq.com/install.sh | bash
    - name: Install Oso Dev Server
      run: |
        curl ${ARCHIVE_URL} --output ${OUTPUT_FILENAME}
        tar -xzf ${OUTPUT_FILENAME}
        rm ${OUTPUT_FILENAME}
        chmod 0700 standalone
    - name: Start Oso Dev Server in the background
      run: |
        ./standalone &
    - name: Create new environment and store API key in environment
      run: |
        echo "OSO_AUTH=$(curl -s -X POST http://localhost:8080/test_environment\?copy\=false | jq -r '.token')" >> "$GITHUB_ENV"
    - name: Push the policy to the new environment
      run: |
        oso-cloud policy policy/policy.polar
```
Advanced: Deploy a new Oso Dev Server for each run
Some teams spin up isolated on-demand test environments from scratch for each functional test run. If your organization does this, then you can deploy an Oso Dev Server to those environments by using a Dockerfile similar to the following:
```dockerfile
# start with a minimal image
FROM debian:bookworm-slim

# install base dependencies
RUN apt-get update && apt-get install -y curl ca-certificates && rm -rf /var/lib/apt/lists/*

# create the runtime environment for the app
RUN useradd -ms /bin/bash app
RUN mkdir -p /app && chown app:app /app

# create data directory
RUN mkdir -p /data && chown app:app /data

USER app
WORKDIR /app

# fetch the Oso Dev Server
RUN curl https://oso-local-development-binary.s3.amazonaws.com/latest/oso-local-development-binary-linux-x86_64.tar.gz --output oso-dev-server.tar.gz && tar -xzf oso-dev-server.tar.gz && rm oso-dev-server.tar.gz
RUN chmod +x ./standalone

ENV OSO_DIRECTORY=/data
ENV OSO_PORT=8080

ENTRYPOINT ["/app/standalone"]
```
If your Dockerfile is in the same repository as your policy code, you can copy the policy into the container at build time and load it at startup. That way, you can run tests against it without a separate CI step that loads the policy. To do that, you'd modify the end of the Dockerfile similarly to this:
```dockerfile
# copy the policy into the container
COPY policy/policy.polar /app/
ENTRYPOINT ["/app/standalone", "/app/policy.polar"]
```
You can then point your clients to the address of the deployed Oso Dev Server and use the default credentials to connect to it.
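For example, a client deployment might be configured with environment variables like the following. The host placeholder is illustrative; the token shown is the shared Oso Dev Server test token used earlier in this guide.

```yaml
env:
  OSO_URL: "http://<oso-dev-server-host>:8080"
  OSO_AUTH: "e_0123456789_12345_osotesttoken01xiIn"
```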
Managing Facts
As a rule, you shouldn't preload facts in your functional test pipeline. Instead, you should allow the application to generate them the way it would in production. This will allow you to catch errors in the logic that creates facts in Oso Cloud when your application data changes.
The exception to this is for seed data that your application needs on startup in a new environment. For example, you might have some standard roles that are assumed to exist. For this data, you should write a script that generates facts for that data and seeds the appropriate Oso Cloud environment.
If you do this, you probably already have some sort of utility to seed a test database with the same data. You can seed Oso Cloud from the same script to make it easier to keep the two in sync.
Here's an example of how this could look in GitHub Actions.
```yaml
seed-oso-cloud-facts:
  runs-on: ubuntu-latest
  steps:
    - name: Check out repository code
      uses: actions/checkout@v4
    - name: Install Oso Cloud CLI
      run: |
        curl -L https://cloud.osohq.com/install.sh | bash
    - name: Run seed script
      run: ./bin/seed-oso-cloud.sh
```
In this example, ./bin/seed-oso-cloud.sh is a shell script that uses the oso-cloud tell command to populate an Oso Cloud environment with an initial set of facts.
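As a sketch, such a script might generate one oso-cloud tell command per seed fact. The facts and file layout here are hypothetical; in CI you would pipe the output to sh (or call oso-cloud tell directly instead of echoing).

```shell
#!/usr/bin/env bash
# bin/seed-oso-cloud.sh (hypothetical): emit one `oso-cloud tell` command
# per seed fact. Apply them with: ./bin/seed-oso-cloud.sh | sh
set -euo pipefail

# Hypothetical seed facts: standard roles the application assumes exist.
seed_commands=$(
while read -r fact; do
  echo "oso-cloud tell $fact"
done <<'EOF'
has_role User:root admin Organization:default
has_role User:support viewer Organization:default
EOF
)

echo "$seed_commands"
```

Keeping the fact list in one place like this makes it easier to seed your test database and Oso Cloud from the same data.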
(Optional) Validate Local Authorization configuration
If your deployment uses Local Authorization, you should validate your Local Authorization configuration using the oso-cloud CLI. Here's a sample GitHub Actions job that illustrates this.
```yaml
validate-local-authorization-config:
  runs-on: ubuntu-latest
  steps:
    - name: Check out repository code
      uses: actions/checkout@v4
    - name: Install Oso Cloud CLI
      run: |
        curl -L https://cloud.osohq.com/install.sh | bash
    - name: Validate Local Authorization configuration
      run: >
        oso-cloud validate-local-authorization-config
        ./local_authorization_config.yaml
        -c postgres://localhost
        -p policy/policy.polar
```
Local Authorization validation runs against a database in order to confirm that the queries you use for Local Authorization are valid with respect to the current database schema. Because of this, you will need to run this validation in a setting that has direct access to an instance of your application database.
You can either set up a new database with a blank schema in your CI job for this purpose, or run the job in a test environment where the PostgreSQL port is exposed.
If you choose the second approach, take care to ensure that your test environment isn't exposed to the public internet.
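If you take the first approach, GitHub Actions can start a disposable PostgreSQL instance for the job via a service container. A sketch (image tag and password are illustrative; run your schema migrations against it before the validation step):

```yaml
services:
  postgres:
    image: postgres:16
    env:
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
```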
Deploy
When your tests pass and you are ready to deploy your new policy to production, you can set the OSO_AUTH variable to your production API key and upload it. This can be done as part of the same CI job that deploys the rest of your code. Again, remember not to store the key in plaintext.
```yaml
push-policy-to-oso-cloud-production:
  runs-on: ubuntu-latest
  env:
    OSO_AUTH: ${{ secrets.OSO_CLOUD_PRODUCTION_READ_WRITE_KEY }}
  steps:
    - name: Check out repository code
      uses: actions/checkout@v4
    - name: Install Oso Cloud CLI
      run: |
        curl -L https://cloud.osohq.com/install.sh | bash
    - name: Push policy to Oso Cloud
      run: |
        oso-cloud policy policy/policy.polar
```
Talk to an Oso engineer
If you need further assistance with testing, creating your environments, or using them in your CI setup, schedule a 1:1 with an Oso engineer. We're happy to help.