Using Oso Cloud With Kubernetes Authorization Webhooks

Introduction

I’ve spent the last several years working as an SRE at organizations like PlanetScale and the United States Digital Service (USDS). I’ve spent a lot of time working with Kubernetes, and authorizing access to a pod is not always pain-free. While Kubernetes does have built-in authorization methods, they offer only a few options, are specific to one cluster, and require maintaining a set of rules for Kubernetes access separately from all your other authorization rules.

While doing some work with the Oso team recently, I wanted to see what it would be like to use Oso Cloud to authorize access to Kubernetes. I hoped using Oso Cloud could resolve some of these pain points.

Oso Cloud is typically used for application authorization. In my first couple of weeks, I learned that you can use it to implement any wild authorization pattern you might want in your app (plus, of course, the standard ones: groups, RBAC, ABAC, and so on).

But can I actually use it with Kubernetes? Yes, it turns out, and it's surprisingly straightforward.

Using the Kubernetes authorization webhook, I had Oso control access to resources in a Kubernetes cluster. That meant I could suddenly use a tool dedicated to the authorization domain, rather than relying on whatever Kubernetes supports out of the box. This gave me more flexibility, and it also made authorization more intuitive: I could think about my authorization model without worrying about how to fit it into Kubernetes’ built-in authorization methods.

Minikube Setup And Webhook Configuration

Webhook Server

I’ll demonstrate this integration using Minikube and a locally running Kubernetes cluster. My Kubernetes setup comes from this excellent post and code example. In the interest of keeping this post self-contained, all the necessary commands are repeated here.

To get started, first clone the webhook example server:

git clone git@github.com:dinumathai/auth-webhook-sample.git

This project comes with two webhook handlers. The authenticate handler does some basic bearer token authentication, which is out of scope for this post. The authorize handler is a stub that returns “allow” for every authorization request. My focus here will be on integrating that handler with Oso.

Configuration and Certificates

In production you would have your webhook running on a real server somewhere, but for this example, I’ll be running everything locally. To make this work, you need to edit two configuration files to replace Minikube’s webhook IP with your local IP. This is how I did it on Ubuntu:

MY_IP="$(ip route get 8.8.8.8 | awk '{print $(NF-2); exit}')"
echo "$MY_IP"
sed -ri "s/([0-9]{1,3}\.){3}[0-9]{1,3}/$MY_IP/" deploy/auth-webhook-conf.yaml
sed -ri "s/([0-9]{1,3}\.){3}[0-9]{1,3}/$MY_IP/" deploy/ca/server.conf

Kubernetes requires HTTPS for webhooks, so you need to sign a certificate and tell Kubernetes to trust the webhook server (which, for development, is your computer):

openssl req \
    -new \
    -key deploy/ca/server.key \
    -out deploy/ca/server.csr \
    -config deploy/ca/server.conf
openssl x509 \
    -req \
    -in deploy/ca/server.csr \
    -CA deploy/ca/ca.crt \
    -CAkey deploy/ca/ca.key \
    -CAcreateserial \
    -out deploy/ca/server.crt \
    -days 100000 \
    -extensions v3_req \
    -extfile deploy/ca/server.conf

Then, set the environment variables for the webhook server:

export AUTH_CERT_TLS_CRT=$(cat deploy/ca/server.crt)
export AUTH_CERT_TLS_KEY=$(cat deploy/ca/server.key)

Next, put your configuration files in a place where Minikube can access them. Minikube copies anything in [.minikube/files in your home directory](https://minikube.sigs.k8s.io/docs/handbook/filesync/) into the VM at the corresponding path:

mkdir -p ~/.minikube/files/var/lib/minikube/certs/auth
cp deploy/ca/ca.crt ~/.minikube/files/var/lib/minikube/certs/auth/
cp deploy/auth* ~/.minikube/files/var/lib/minikube/certs/auth/

Running the Webhook Server and Minikube

Now that you’ve set up your configuration, you can start your cluster and your webhook handler, and they should be able to talk to each other.

In another terminal tab, build and run the Go webhook server:

docker build -t dmathai/auth-webhook-sample:latest -f Dockerfile .
docker run \
    --env "AUTH_CERT_TLS_KEY=$AUTH_CERT_TLS_KEY" \
    --env "AUTH_CERT_TLS_CRT=$AUTH_CERT_TLS_CRT" \
    --env "LOG_LEVEL=DEBUG" -p 8443:8443 dmathai/auth-webhook-sample:latest

Finally, start Minikube:

minikube start --extra-config apiserver.authorization-webhook-cache-unauthorized-ttl=5s --extra-config apiserver.authorization-webhook-cache-authorized-ttl=5s --extra-config apiserver.authorization-mode=Webhook --extra-config apiserver.authentication-token-webhook-config-file=/var/lib/minikube/certs/auth/auth-webhook-conf.yaml --extra-config apiserver.authorization-webhook-config-file=/var/lib/minikube/certs/auth/authorize-webhook-conf.yaml

These --extra-config options correspond to the [kube-apiserver flags](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/). They pass our custom webhook configuration, tell Kubernetes to use Webhook authorization mode, and reduce the cache TTLs for authorization decisions so it’s easier to see changes take effect.

Note: I ran into the issue that --extra-config does not take effect on an existing cluster, so to pass extra configuration options to kube-apiserver I had to recreate the Minikube cluster (minikube delete followed by minikube start) rather than just restarting it.

Note: If your webhook server is not receiving requests, run minikube ssh and then ps aux | grep apiserver to check whether the API server was actually passed the extra options. There are also various logs in /var/log. You can find other debugging tips here.

Now everything should be connected! You can test that things are working like this:

kubectl config set-credentials admin \
    --token="$(curl -X POST --insecure https://localhost:8443/v0/login -u admin:admin | jq -r .token)"
kubectl config set-context minikube --user=admin
kubectl get pods --all-namespaces

Next, I’ll talk about how to add Oso to all this.

Adding Oso to the Authorization Webhook

To add Oso, first install Oso’s Golang Client Library:

go get github.com/osohq/go-oso-cloud

Then, go to the Oso Cloud Sandbox to get your API key and export it:

export OSO_AUTH="<your_api_key>"

Then, apply this patch:

diff --git a/api/authorization_v0_handle.go b/api/authorization_v0_handle.go
index be3bc14..d3129e2 100644
--- a/api/authorization_v0_handle.go
+++ b/api/authorization_v0_handle.go
@@ -2,18 +2,76 @@ package api

 import (
    "encoding/json"
+   "fmt"
    "io/ioutil"
    "net/http"
+   "os"
    "time"

    "github.com/dinumathai/auth-webhook-sample/auth"
    "github.com/dinumathai/auth-webhook-sample/log"
    "github.com/dinumathai/auth-webhook-sample/types"
+   oso "github.com/osohq/go-oso-cloud"
 )

+// User wraps a Kubernetes user as an Oso actor
+type User struct {
+   id string
+}
+
+// Id returns the user id as a string
+func (u User) Id() string {
+   return fmt.Sprint(u.id)
+}
+
+// Type returns the Oso type name for users
+func (u User) Type() string {
+   return "User"
+}
+
+// Resource wraps a Kubernetes resource
+type Resource struct {
+   id string
+}
+
+// Id returns the resource id as a string
+func (u Resource) Id() string {
+   return fmt.Sprint(u.id)
+}
+
+// Type returns the Oso type name, matching the policy's KubernetesResource block
+func (u Resource) Type() string {
+   return "KubernetesResource"
+}
+
+// AuthorizationRequest is the request body we get from Kubernetes
+type AuthorizationRequest struct {
+   Kind       string `json:"kind"`
+   APIVersion string `json:"apiVersion"`
+   Metadata   struct {
+       CreationTimestamp interface{} `json:"creationTimestamp"`
+   } `json:"metadata"`
+   Spec struct {
+       ResourceAttributes struct {
+           Verb     string `json:"verb"`
+           Group    string `json:"group"`
+           Version  string `json:"version"`
+           Resource string `json:"resource"`
+           Name     string `json:"name"`
+       } `json:"resourceAttributes"`
+       User  string   `json:"user"`
+       Group []string `json:"group"`
+   } `json:"spec"`
+   Status struct {
+       Allowed bool `json:"allowed"`
+   } `json:"status"`
+}
+
 // AuthorizeV0Handler -- Handle authentication using property file. For testing only
 func AuthorizeV0Handler(apiVersion auth.Version) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
+       osoClient := oso.NewClient("https://cloud.osohq.com", os.Getenv("OSO_AUTH"))
+
        start := time.Now()
        defer log.Debugf("AuthorizeV0Handler Elapsed - %s", time.Since(start))

@@ -23,11 +81,20 @@ func AuthorizeV0Handler(apiVersion auth.Version) http.HandlerFunc {
            sentAuthorizationResponse(w, false, "Error in Read of request body")
            return
        }
-       rawContent := json.RawMessage(string(content))
-       log.Debugf("Request body : %s", rawContent)
        log.Debugf("Request headers : %v", r.Header)
-
-       sentAuthorizationResponse(w, true, "")
+       var authorizationRequest AuthorizationRequest
+       if jsonErr := json.Unmarshal(content, &authorizationRequest); jsonErr != nil {
+           log.Errorf("Error deserializing json %v", jsonErr)
+           sentAuthorizationResponse(w, false, "Error deserializing authorization request")
+       } else {
+           log.Debugf("Got authorizationRequest %+v", authorizationRequest)
+           allowed, e := osoClient.Authorize(User{id: authorizationRequest.Spec.User},
+               authorizationRequest.Spec.ResourceAttributes.Verb, Resource{id: authorizationRequest.Spec.ResourceAttributes.Resource})
+           if e != nil {
+               log.Errorf("Error getting authorization decision: %v", e)
+           }
+           sentAuthorizationResponse(w, allowed, "")
+       }
    }
 }

If you save this file as oso-patch.patch, you can apply it using git:

git apply < oso-patch.patch
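The heart of the patch is a single question per request: the handler pulls spec.user, the verb, and the resource out of the SubjectAccessReview and asks Oso whether that actor may perform that action on that resource. As a standalone sketch of the mapping (values are illustrative; the real handler gets them from the parsed request, and the KubernetesResource type name matches the policy and facts shown below):

```go
package main

import "fmt"

// check mirrors the (actor, action, resource) triple that the patched
// handler hands to osoClient.Authorize.
type check struct {
	actor    string // e.g. "User:admin"
	action   string // the Kubernetes verb
	resource string // e.g. "KubernetesResource:pods"
}

// fromReview maps SubjectAccessReview spec fields onto an Oso check.
func fromReview(user, verb, resource string) check {
	return check{
		actor:    "User:" + user,
		action:   verb,
		resource: "KubernetesResource:" + resource,
	}
}

func main() {
	// kubectl get pods --all-namespaces arrives as verb "list" on "pods".
	c := fromReview("admin", "list", "pods")
	fmt.Printf("may %s %q %s?\n", c.actor, c.action, c.resource)
	// Output: may User:admin "list" KubernetesResource:pods?
}
```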

This server is ready to use Oso! Now, rebuild and run the server, making sure to pass the OSO_AUTH environment variable:

docker build -t dmathai/auth-webhook-sample:latest -f Dockerfile .
docker run \
    --env "OSO_AUTH=$OSO_AUTH" \
    --env "AUTH_CERT_TLS_KEY=$AUTH_CERT_TLS_KEY" \
    --env "AUTH_CERT_TLS_CRT=$AUTH_CERT_TLS_CRT" \
    --env "LOG_LEVEL=DEBUG" -p 8443:8443 dmathai/auth-webhook-sample:latest

Now when you try to get pods, you should get access denied, since Oso hasn’t been configured yet:

$ kubectl get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "admin" cannot list resource "pods" in API group "" at the cluster scope

My demo Polar policy is quite short:

actor User {}

resource KubernetesResource {
  permissions = ["list", "impersonate"];
  roles = ["reader", "admin"];

  "list" if "reader";
  "impersonate" if "admin";
}
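To build intuition for what this evaluates: the “reader” role grants “list”, the “admin” role grants “impersonate”, and each role is held on a specific resource. The toy Go model below is not how Oso evaluates Polar internally; it just replays the same decision table, with a facts slice standing in for the has_role facts added with oso-cloud tell later in the post:

```go
package main

import "fmt"

// roleFor maps each permission to the role that grants it, mirroring
// the two rules in the policy above.
var roleFor = map[string]string{
	"list":        "reader",
	"impersonate": "admin",
}

// fact is a has_role(user, role, resource) fact, as added with
// `oso-cloud tell`.
type fact struct{ user, role, resource string }

var facts = []fact{
	{"admin", "reader", "pods"}, // has_role User:admin "reader" KubernetesResource:pods
}

// allowed answers: does some fact give this user the role that grants
// this verb on this resource?
func allowed(user, verb, resource string) bool {
	role, ok := roleFor[verb]
	if !ok {
		return false
	}
	for _, f := range facts {
		if f.user == user && f.role == role && f.resource == resource {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(allowed("admin", "list", "pods"))         // true: reader grants list
	fmt.Println(allowed("admin", "impersonate", "users")) // false: no admin role on users yet
}
```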

To set this policy, you first need to set up the Oso CLI:

$ curl https://cloud.osohq.com/install.sh | bash
$ oso-cloud inspect Actor:hello
(no results)

Then you can run this command to set the policy, assuming you’ve saved the above policy as policy.polar:

$ oso-cloud policy policy.polar
Policy successfully loaded.

Now, give your admin user permission to list pods:

$ oso-cloud tell has_role User:admin "reader" KubernetesResource:pods
Writing has_role(User:admin, String:reader, KubernetesResource:pods) to Oso at https://cloud.osohq.com/
Success
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-ssqzv           1/1     Running   0          19m
kube-system   etcd-minikube                      1/1     Running   0          18m
kube-system   kube-apiserver-minikube            1/1     Running   0          18m
kube-system   kube-controller-manager-minikube   1/1     Running   0          18m
kube-system   kube-proxy-rfn9b                   1/1     Running   0          19m
kube-system   kube-scheduler-minikube            1/1     Running   0          18m
kube-system   storage-provisioner                1/1     Running   1          19m

Great! Now try some user impersonation:

$ kubectl auth can-i list secrets --as alice
Error from server (Forbidden): users "alice" is forbidden: User "admin" cannot impersonate resource "users" in API group "" at the cluster scope

The admin user doesn’t have permission to impersonate yet, so give it the admin role on the users resource:

$ oso-cloud tell has_role User:admin "admin" KubernetesResource:users
Writing has_role(User:admin, String:admin, KubernetesResource:users) to Oso at https://cloud.osohq.com/
Success
$ kubectl auth can-i list secrets --as alice
no
$ kubectl auth can-i list secrets --as bob
no

Now that you can impersonate users, you can play around with some permissions and see the results. Give Alice permission to read secrets:

$ oso-cloud tell has_role User:alice "reader" KubernetesResource:secrets
Writing has_role(User:alice, String:reader, KubernetesResource:secrets) to Oso at https://cloud.osohq.com/
Success
$ kubectl auth can-i list secrets --as alice
yes
$ kubectl auth can-i list secrets --as bob
no

There you go! Oso now controls access to your Kubernetes resources. You can configure your policy however you want, and even use the same central service to control access to your non-Kubernetes resources.

Conclusion

Thanks for reading to the end! This post showed how to use Kubernetes authorization webhooks to defer authorization decisions to Oso. If you’re interested in this integration or in Oso in general, please get in touch. The team is happy to help, even if you just want someone to help you model your authorization.

If you want to learn more about authorization in general, check out Authorization Academy. It’s been helpful to many people trying to understand how to think about authorization, and none of that content is Oso-specific.
