How to apply policies in Kubernetes using Open Policy Agent Gatekeeper

Harinderjit Singh · Published in ITNEXT · May 4, 2023 · 15 min read

Purpose

This article is an introduction to Gatekeeper. It shows you how to use OPA Gatekeeper to create and enforce policies and governance for your Kubernetes clusters, so that the resources you apply comply with those policies.

Kubernetes admission webhooks

Before we dive into how OPA Gatekeeper works under the hood, we first need to learn about Kubernetes admission webhooks.

When a request comes into the Kubernetes API, it passes through a series of steps before it’s executed.

  1. The request is authenticated and authorized.
  2. The request is processed by a list of special Kubernetes webhooks collections called admission controllers that can mutate, modify, and validate the objects in the request.
  3. The request is persisted to etcd.

Kubernetes admission controllers are the cluster’s middleware. They control what can proceed into the cluster. Admission controllers manage deployments requesting too many resources, enforce pod security policies, and even block vulnerable images from being deployed.

Under the hood of an admission controller is a collection of predefined HTTP callbacks (i.e., webhooks), which intercept the Kubernetes API and process requests after they have been authenticated and authorized.

There are two types of admission webhooks:

  • MutatingAdmissionWebhook
  • ValidatingAdmissionWebhook

Mutating admission controllers are invoked first because their job is to enforce custom defaults and, if necessary, to modify the objects sent to the API server. After all the modifications are completed and the incoming object has been validated, the validating admission controllers are invoked and can reject requests to enforce custom policies. Note that some controllers are both validating and mutating. If one of these rejects the request, the request will fail.

Several admission controllers come preconfigured out of the box, and you probably already use them. LimitRanger, for example, is a built-in admission controller that applies the defaults and enforces the limits defined by LimitRange objects in a namespace, rejecting pods that exceed them. For further reading about MutatingAdmissionWebhook, see "Diving into Kubernetes MutatingAdmissionWebhooks".

Dynamic admission control

Why are admission controllers implemented with webhooks? This is where admission controllers shine and where dynamic admission control comes in.

Webhooks give developers the freedom and flexibility to customize the admission logic to actions like Create, Update, or Delete on any resource. This is extremely useful because almost every organization will need to add/adjust its policies and best practices.

Key issues arise from the way built-in admission controllers operate: modifying them requires recompiling kube-apiserver, and they can only be enabled when the apiserver starts up. Implementing admission control with webhooks lets administrators create customized webhooks and add mutating or validating admission webhooks to the admission webhook chain without recompiling the apiserver; the apiserver simply calls registered webhooks over a standard interface.
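For illustration, registering a validating webhook is just another Kubernetes object. Here is a minimal, hypothetical ValidatingWebhookConfiguration; every name in it is made up, and the caBundle is elided:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-validating-webhook
webhooks:
  - name: demo.example.com
    rules:                              # which requests the apiserver sends to the webhook
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:                          # the in-cluster service the apiserver calls
        namespace: demo
        name: demo-webhook-svc
        path: /validate
      # caBundle: <elided>
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                 # reject requests if the webhook is unreachable
    timeoutSeconds: 5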

What is OPA?

Open Policy Agent (OPA) is an open-source, general-purpose policy engine that enables unified, context-aware policy enforcement across the entire stack. OPA gives you a high-level declarative language to author and enforce policies across your stack.

With OPA, you define rules that govern how your system should behave. You integrate services with OPA so that these kinds of policy decisions do not have to be hardcoded into your service. Services integrate with OPA by executing queries when policy decisions are needed.
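To make this concrete, here is a minimal, illustrative Rego rule; the package name and input shape are assumptions for the example, not from the article:

package authz

# deny by default; allow only members of the "admins" group
default allow = false

allow {
  input.user.groups[_] == "admins"
}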

OPA Gatekeeper

OPA Gatekeeper, a subproject of Open Policy Agent, is specifically designed to implement OPA into a Kubernetes cluster. Gatekeeper is a validating and mutating webhook that enforces CRD-based policies executed by Open Policy Agent, a policy engine for Cloud Native environments hosted by CNCF as a graduated project.

OPA Gatekeeper provides you with two critical abilities:

  • Control what the end user can do on the cluster
  • Enforce company policies in the cluster

Gatekeeper helps reduce the dependency between DevOps admins and the developers themselves. Enforcement of your organization’s policies can be automated, which frees DevOps engineers from worrying about developers making mistakes. It also provides developers with instant feedback about what went wrong and what they need to change.

How OPA Gatekeeper works

Kubernetes allows decoupling policy decisions from the inner workings of the API server by means of admission controller webhooks, which are executed whenever a resource is created, updated, or deleted. As described above, Gatekeeper is such a webhook, with the policy decisions delegated to OPA.

Gatekeeper acts as a bridge between the Kubernetes API server and OPA. In practice, this means that Gatekeeper checks every request that comes into the cluster to see if it violates any of the predefined policies; if so, the apiserver rejects it.

Under the hood, Gatekeeper integrates with Kubernetes using the dynamic admission control API and is installed as customizable ValidatingAdmission and MutatingAdmission webhooks. Once it’s installed, the apiserver triggers it whenever a resource in the cluster is created, updated, or deleted.


Since Gatekeeper operates through OPA, all policies must be written in Rego. Fortunately, Gatekeeper has that covered through the OPA Constraint Framework.

A constraint is a CRD representing the policy we want to enforce on a specific kind of resource. When the ValidatingAdmission controller is invoked, the Gatekeeper webhook evaluates all constraints and sends OPA the request together with the policies to enforce. The constraints are evaluated as a logical AND: if any constraint is not satisfied, the whole request is rejected.

Constraint Templates

Before you can define a constraint, you must first define a ConstraintTemplate, which describes both the Rego that enforces the constraint and the schema of the constraint. The schema of the constraint allows an admin to fine-tune the behavior of a constraint, much like arguments to a function.
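For example, the required-labels template from the Gatekeeper documentation defines a K8sRequiredLabels constraint kind, its parameter schema, and the Rego that enforces it:

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # schema for the `parameters` field of constraints of this kind
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }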

Constraints

Constraints are then used to inform Gatekeeper that the admin wants a ConstraintTemplate to be enforced, and how. The constraint below uses the K8sRequiredLabels constraint template above to make sure the gatekeeper label is defined on all namespaces.
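Again following the example from the Gatekeeper documentation:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]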

Validating Admission Examples

Prerequisites

Installation on Kubernetes Cluster

We use Helm to install Gatekeeper on the Kubernetes cluster in the namespace "gatekeeper-system":

helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper/gatekeeper --name-template=gatekeeper --namespace gatekeeper-system --create-namespace

You can observe that the resources mutatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-mutating-webhook-configuration and validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration are created:

hsingh@SRCD-PF2YETP8:~$ kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations -A
NAME WEBHOOKS AGE
mutatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-mutating-webhook-configuration 1 14m
mutatingwebhookconfiguration.admissionregistration.k8s.io/neg-annotation.config.common-webhooks.networking.gke.io 1 16m
mutatingwebhookconfiguration.admissionregistration.k8s.io/pod-ready.config.common-webhooks.networking.gke.io 1 16m

NAME WEBHOOKS AGE
validatingwebhookconfiguration.admissionregistration.k8s.io/flowcontrol-guardrails.config.common-webhooks.networking.gke.io 1 16m
validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration 2 14m
validatingwebhookconfiguration.admissionregistration.k8s.io/gkepolicy.config.common-webhooks.networking.gke.io 1 16m
validatingwebhookconfiguration.admissionregistration.k8s.io/nodelimit.config.common-webhooks.networking.gke.io 1 16m
validatingwebhookconfiguration.admissionregistration.k8s.io/validation-webhook.snapshot.storage.k8s.io 1 16m
hsingh@SRCD-PF2YETP8:~$

You can inspect both webhook configurations in detail:

kubectl get mutatingwebhookconfigurations gatekeeper-mutating-webhook-configuration -o yaml
kubectl get validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration -o yaml

Example #1

We implement a policy to make sure resource limits are defined for pod containers and fall within allowed maximums. Install the constraint template and the sample constraint:

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/containerlimits/template.yaml
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/containerlimits/samples/container-must-have-limits/constraint.yaml
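For reference, the sample constraint caps container CPU and memory limits. Its shape is roughly the following; the parameter values are assumptions inferred from the sample and the denial message below:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
  name: container-must-have-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    cpu: "200m"     # assumed sample value
    memory: "1Gi"   # matches the "maximum allowed of <1Gi>" denial below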
kubectl create ns applications
namespace/applications created

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/containerlimits/samples/container-must-have-limits/example_disallowed.yaml -n applications
Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/containerlimits/samples/container-must-have-limits/example_disallowed.yaml":
admission webhook "validation.gatekeeper.sh" denied the request: [container-must-have-limits] container <opa> memory limit <2Gi> is higher than the maximum allowed of <1Gi>
hsingh@SRCD-PF2YETP8:~$
  • Admission webhook “validation.gatekeeper.sh” validates and denies the request to create the pod.
  • Next, we test creating a pod whose resource limits are within what is defined in the constraint.
hsingh@SRCD-PF2YETP8:~$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/containerlimits/samples/container-must-have-limits/example_allowed.yaml -n applications
pod/opa-allowed created
hsingh@SRCD-PF2YETP8:~$
  • Admission webhook “validation.gatekeeper.sh” validates and lets you create the pod.

Example #2

A policy to make sure deployment replica counts fall within defined limits.

  • Install the constraint template, which contains the Rego code to validate that a deployment's replica count is within the specified range.
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/replicalimits/template.yaml
constrainttemplate.templates.gatekeeper.sh/k8sreplicalimits created
  • Create a constraint of kind "K8sReplicaLimits" called replica-limits that defines the allowed replica range (min: 3, max: 50) against which the constraint template validates the target resource kind, i.e. Deployment.
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/replicalimits/samples/replicalimits/constraint.yaml
k8sreplicalimits.constraints.gatekeeper.sh/replica-limits created
  • When we test with a deployment manifest that has a replica count of 3, it passes validation against the constraint template and constraint, and the deployment is created.
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/replicalimits/samples/replicalimits/example_allowed.yaml
deployment.apps/allowed-deployment created
  • When we test with a deployment manifest that has a replica count of 100, validation fails and the request is denied.
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/replicalimits/samples/replicalimits/example_disallowed.yaml
Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/replicalimits/samples/replicalimits/example_disallowed.yaml":
admission webhook "validation.gatekeeper.sh" denied the request:
[replica-limits] The provided number of replicas is not allowed for Deployment: disallowed-deployment. Allowed ranges: {"ranges": [{"max_replicas": 50, "min_replicas": 3}]}
  • Create a second constraint, replica-limit, of kind "K8sReplicaLimits", defining a different replica range (min: 2, max: 15), against which the constraint template also validates Deployments.
cat constraint2.yaml

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sReplicaLimits
metadata:
  name: replica-limit
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    ranges:
      - min_replicas: 2
        max_replicas: 15

kubectl apply -f ./constraint2.yaml
k8sreplicalimits.constraints.gatekeeper.sh/replica-limit created

kubectl get K8sReplicaLimits
NAME             ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
replica-limit                         8
replica-limits                        8
  • This means we can have multiple constraints of the same kind using the same constraint template.
  • Validation is done against both constraints, one after another.
  • Testing with replica count 16, which is allowed by constraint "replica-limits" but violates the limits defined in constraint "replica-limit":
wget https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/replicalimits/samples/replicalimits/example_disallowed.yaml
sed -i 's/replicas: 100/replicas: 16/g' example_disallowed.yaml

kubectl apply -f example_disallowed.yaml

Error from server (Forbidden): error when creating "example_disallowed.yaml":
admission webhook "validation.gatekeeper.sh" denied the request:
[replica-limit] The provided number of replicas is not allowed for Deployment: disallowed-deployment.
Allowed ranges: {"ranges": [{"max_replicas": 15, "min_replicas": 2}]}
  • If any constraint validation fails, the request to create or update the resource is denied.
  • Testing with replica count 5, which satisfies both constraint "replica-limits" and constraint "replica-limit":
sed -i 's/replicas: 16/replicas: 5/g' example_disallowed.yaml
kubectl apply -f example_disallowed.yaml
deployment.apps/disallowed-deployment created
  • You can gather information about constraints and ConstraintTemplates as below:
kubectl get constraints,ConstraintTemplate
NAME                                                        ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sreplicalimits.constraints.gatekeeper.sh/replica-limit                         8
k8sreplicalimits.constraints.gatekeeper.sh/replica-limits                        8

NAME                                                                       ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8scontainerlimits.constraints.gatekeeper.sh/container-must-have-limits                         23

NAME                                                            AGE
constrainttemplate.templates.gatekeeper.sh/k8scontainerlimits   43m
constrainttemplate.templates.gatekeeper.sh/k8sreplicalimits     26m

Audit

Audit performs periodic evaluations of existing resources against constraints, detecting pre-existing misconfigurations.

Using Constraint Status

Violations of constraints are listed in the status field of the corresponding constraint. Only violations from the most recent audit run are reported, and there is a cap on the number of individual violations reported on the constraint itself. If the number of current violations exceeds this cap, the excess violations are not reported (though they are still included in the totalViolations count), because Kubernetes limits how large individual API objects can grow, which makes unbounded growth a bad idea. The cap can be configured via the --constraint-violations-limit flag.

kubectl get k8sreplicalimits.constraints.gatekeeper.sh/replica-limit -o json | jq .status.violations
[
  {
    "enforcementAction": "deny",
    "group": "apps",
    "kind": "Deployment",
    "message": "The provided number of replicas is not allowed for Deployment: kube-dns-autoscaler. Allowed ranges: {\"ranges\": [{\"max_replicas\": 15, \"min_replicas\": 2}]}",
    "name": "kube-dns-autoscaler",
    "namespace": "kube-system",
    "version": "v1"
  },
  {
    "enforcementAction": "deny",
    "group": "apps",
    "kind": "Deployment",
    "message": "The provided number of replicas is not allowed for Deployment: kube-dns. Allowed ranges: {\"ranges\": [{\"max_replicas\": 15, \"min_replicas\": 2}]}",
    "name": "kube-dns",
    "namespace": "kube-system",
    "version": "v1"
  },
  ...
]

Handling Constraint Violations

Log denies

Set the --log-denies flag to log all deny, dryrun, and warn failures. This is useful for seeing what is being denied or failing dry run, and for keeping a debugging log of cluster problems without enabling syncing or trawling through the status of every constraint.

Dry Run enforcement action

When rolling out new constraints to running clusters, the dry run functionality can be helpful as it enables constraints to be deployed in the cluster without making actual changes. This allows constraints to be tested in a running cluster without enforcing them. Cluster resources that are impacted by the dry run constraint are surfaced as violations in the status field of the constraint.

To use the dry run feature, add enforcementAction: dryrun to the constraint spec to ensure no actual changes are made as a result of the constraint. By default, enforcementAction is set to deny as the default behavior is to deny admission requests with any violation.
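For example, the replica-limit constraint from earlier could be rolled out in dry run first:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sReplicaLimits
metadata:
  name: replica-limit
spec:
  enforcementAction: dryrun   # record violations in the constraint's status instead of denying requests
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    ranges:
      - min_replicas: 2
        max_replicas: 15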

Warn enforcement action

Warn enforcement action offers the same benefits as dry run, such as testing constraints without enforcing them. In addition to this, it will also provide immediate feedback on why that constraint would have been denied.

Replicating Data

The “Config” resource must be named config for it to be reconciled by Gatekeeper. Gatekeeper will ignore the resource if you do not name it config.

Some constraints are impossible to write without access to more states than just the object under test. For example, it is impossible to know if an ingress’s hostname is unique among all ingresses unless a rule has access to all other ingresses. To make such rules possible, we enable syncing of data into OPA.

The audit feature does not require replication by default. However, when the audit-from-cache flag is set to true, the audit informer cache will be used as the source of truth for audit queries; thus, an object must first be cached before it can be audited for constraint violations.

Kubernetes data can be replicated into the audit cache via the sync config resource. Currently, resources defined in syncOnly will be synced into OPA. Updating syncOnly should dynamically update what objects are synced. Below is an example:

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: ""
        version: "v1"
        kind: "Pod"

You can install this config with the following command:

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/sync.yaml

Once data is synced into OPA, rules can access the cached data under the data.inventory document.

The data.inventory document has the following format:

  • For cluster-scoped objects: data.inventory.cluster[<groupVersion>][<kind>][<name>]
  • Example referencing the Gatekeeper namespace: data.inventory.cluster["v1"].Namespace["gatekeeper"]
  • For namespace-scoped objects: data.inventory.namespace[<namespace>][<groupVersion>][<kind>][<name>]
  • Example referencing the Gatekeeper pod: data.inventory.namespace["gatekeeper"]["v1"]["Pod"]["gatekeeper-controller-manager-d4c98b788-j7d92"]
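As a sketch of how a rule might use the cache to enforce the unique-ingress-host idea mentioned above (simplified relative to the gatekeeper-library version; all names here are illustrative):

package k8suniqueingresshost

violation[{"msg": msg}] {
  # host requested by the Ingress under review
  host := input.review.object.spec.rules[_].host

  # every Ingress already replicated into the data.inventory cache
  other := data.inventory.namespace[ns][apiversion]["Ingress"][name]
  other.spec.rules[_].host == host

  # do not flag the object against itself
  not identical(other, input.review.object)

  msg := sprintf("ingress host %v conflicts with %v/%v", [host, ns, name])
}

identical(a, b) {
  a.metadata.namespace == b.metadata.namespace
  a.metadata.name == b.metadata.name
}

Note that this only works if Ingress objects are included in syncOnly, as with Namespace and Pod in the config above.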

Exempting Namespaces from Gatekeeper using config resource

The “Config” resource must be named config for it to be reconciled by Gatekeeper. Gatekeeper will ignore the resource if you do not name it config.

The config resource can be used as follows to exclude namespaces from certain processes for all constraints in the cluster. An asterisk can be used for wildcard matching (e.g. kube-*). To exclude namespaces at a constraint level, use excludedNamespaces in the constraint instead.

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  match:
    - excludedNamespaces: ["kube-*", "my-namespace"]
      processes: ["*"]
    - excludedNamespaces: ["audit-excluded-ns"]
      processes: ["audit"]

Available processes:

  • audit process exclusion will exclude resources from the specified namespace(s) in audit results.
  • webhook process exclusion will exclude resources from the specified namespace(s) from the admission webhook.
  • sync process exclusion will exclude resources from the specified namespace(s) from being synced into OPA.
  • mutation-webhook process exclusion will exclude resources from the specified namespace(s) from the mutation webhook.
  • * includes all current processes above and includes any future processes.

Configuring the admission behavior

Gatekeeper is a Kubernetes admission webhook whose default configuration can be found in the gatekeeper.yaml manifest file. By default, it is a ValidatingWebhookConfiguration resource named gatekeeper-validating-webhook-configuration.

Currently, the configuration specifies two webhooks: one for checking a request against the installed constraints and a second webhook for checking labels on namespace requests that would result in bypassing constraints for the namespace. The namespace-label webhook is necessary to prevent a privilege escalation where the permission to add a label to a namespace is equivalent to the ability to bypass all constraints for that namespace. You can read more about the ability to exempt namespaces by label in the Gatekeeper documentation.

Because Kubernetes adds features with each version, if you want to know how the webhook can be configured, it is best to look at the official Kubernetes documentation on dynamic admission control. However, two particularly important configuration options deserve special mention: timeouts and failure policy.

Timeouts allow you to configure how long the API server will wait for a response from the admission webhook before it considers the request to have failed. Note that setting the timeout longer than the overall request timeout means that the main request will time out before the webhook’s failure policy is invoked, causing the request to fail.

Failure policy controls what happens when a webhook fails for whatever reason. Common failure scenarios include timeouts, a 5xx error from the server, or the webhook being unavailable. You have the option to ignore errors (Ignore), allowing the request through, or to fail closed (Fail), rejecting the request. This is a direct tradeoff between availability and enforcement.

Currently, Gatekeeper defaults to Ignore for the constraint-checking webhook, which means constraints will not be enforced at admission time if the webhook is down or otherwise inaccessible. This is because the project cannot know the operational details of the cluster Gatekeeper is running on and how those might affect webhook uptime. For a more detailed treatment of this topic, see the Gatekeeper docs on failing closed.

The namespace label webhook defaults to Fail; this helps ensure that policies preventing bypass-labels from being applied are actually enforced. Because this webhook only gets called for namespace modification requests, the impact of downtime is mitigated, making availability less of a concern.

Because the manifest is available for customization, the webhook configuration can be tuned to meet your specific needs if they differ from the defaults.

Enable Validation of Delete Operations

Deletes are not Auditable

Once a resource is deleted, it is gone. This means that non-compliant deletes cannot be audited via Gatekeeper’s audit mechanism, and increases the importance of webhook-based enforcement.

Policies Against DELETE May Not be Perfectly Enforced

Since the webhook fails open by default (as described earlier), admission enforcement can be imperfect: some non-compliant deletes may still go through despite the policy. Normally such failures of webhook enforcement could be caught by audit, but deletes are not auditable.

It is possible to improve the likelihood of enforcement by configuring the webhook to fail closed.

How to Enable Validation of Delete Operations

To enable validation of Delete operations by the validation.gatekeeper.sh admission webhook, add "DELETE" to the list of operations in the gatekeeper-validating-webhook-configuration ValidatingWebhookConfiguration, as seen in the Gatekeeper deployment manifest.

So the rule's operations list becomes:

operations:
- CREATE
- UPDATE
- DELETE

Gatekeeper will now validate delete requests as well.

Mutating the resources using Gatekeeper

Mutation policies are defined using mutation-specific CRDs, called mutators:

  • AssignMetadata — defines changes to the metadata section of a resource
  • Assign — any change outside the metadata section
  • ModifySet — adds or removes entries from a list, such as the arguments to a container
  • AssignImage — defines changes to the components of an image string

The rules for mutating metadata are stricter than for mutating the rest of the resource.

Each mutation CRD can be divided into 3 distinct sections:

  • extent of changes — describes which resources will be mutated. It allows selecting resources to be mutated using the same match criteria as constraints.
  • intent — specifies what should be changed in the resource. The location element specifies the path to be modified; it can point either at a simple subelement or at an element in a list. The parameters.assign.value element specifies the value to set at that location. The value can be either a simple string or a composite value.
  • conditional — the conditions under which the mutation will be applied. Mutations support path tests, so a resource is only mutated if a specified path exists or does not exist. This is useful for setting a default value when a field is undeclared, or for avoiding creating a field when a parent is missing. All three sections are illustrated in the sketch below.
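A sketch of a hypothetical Assign mutator that defaults imagePullPolicy, annotated with the three sections (the name, path, and values here are illustrative, not from this article's examples):

apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: demo-image-pull-policy
spec:
  applyTo:                  # extent of changes: which group/version/kinds may be mutated
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  match:                    # extent of changes: same match criteria as constraints
    scope: Namespaced
  location: "spec.containers[name:*].imagePullPolicy"   # intent: path to modify
  parameters:
    assign:
      value: "IfNotPresent" # intent: value to set
    pathTests:              # conditional: mutate only if the field is not already set
      - subPath: "spec.containers[name:*].imagePullPolicy"
        condition: MustNotExist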

Example #1

A mutation of kind AssignMetadata: it adds the annotation seccomp.security.alpha.kubernetes.io/pod with the value "runtime/default" if the annotation doesn't already exist (AssignMetadata only adds metadata; it does not overwrite existing values).

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/1d9a419f2094f8cc215867e84bd5f7e56b7d4824/mutation/pod-security-policy/seccomp/samples/mutation.yaml
assignmetadata.mutations.gatekeeper.sh/k8spspseccomp created

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/1d9a419f2094f8cc215867e84bd5f7e56b7d4824/mutation/pod-security-policy/seccomp/samples/default-seccomp/example.yaml
pod/nginx-default-seccomp created
  • If you look at the manifest used to create the pod, no annotation is defined there, yet the pod has it, because the resource was mutated by the Gatekeeper mutation webhook.
kubectl get pod/nginx-default-seccomp -o json | jq -r .metadata.annotations.\"seccomp\.security\.alpha\.kubernetes\.io\/pod\"
runtime/default

Example #2

A mutation of kind Assign: it updates the pod definition to set spec.containers[name:*].securityContext.runAsUser to user ID 1000 if it is not already defined.

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/mutation/pod-security-policy/users/samples/mutation-runAsUser.yaml
assign.mutations.gatekeeper.sh/k8spsprunasuser created
assign.mutations.gatekeeper.sh/k8spsprunasuser-init created
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/mutation/pod-security-policy/users/samples/users/example-nomutation.yaml
pod/nginx-run-as-root created

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/mutation/pod-security-policy/users/samples/users/example.yaml
pod/nginx-users created
  • On checking the security context of both pods, we find that both have container security contexts; the one that didn't define runAsUser now has it set to user ID 1000, because it was mutated by the Gatekeeper mutation webhook.
kubectl get pod/nginx-users -o json | jq -r .spec.containers[0].securityContext
{
  "runAsUser": 1000
}
kubectl get pod/nginx-run-as-root -o json | jq -r .spec.containers[0].securityContext
{
  "runAsNonRoot": false,
  "runAsUser": 0
}
[15:39:20] hsingh@SRCD-PF2YETP8:~$

In a Nutshell

  • You can implement validating and mutating policies in Kubernetes using OPA Gatekeeper.
  • Gatekeeper constraint templates have their policy logic written in Rego; learning Rego adds some overhead and can be intimidating at first.
  • Rego enables you to define complex logic for your policies.
  • The Gatekeeper Library is a helpful source of ready-made policies to start with, but it has a limited set of examples.
  • You can use OPA to enforce policies in microservices, Envoy proxy, CI/CD pipelines, API gateways, Terraform, and more. If you already use OPA for any of these, it will be easier to adopt OPA Gatekeeper for your Kubernetes clusters, whether they are on-prem or AKS/GKE/EKS.
  • You can test your policies' ConstraintTemplates (Rego code) and Constraints on your local desktop using the gator CLI, as sketched below.
  • You can create a Constraint of a kind only if the ConstraintTemplate of that kind is already defined.
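For instance, a local check might look like this; the file names are assumed, and the exact flags may differ by gator version, so check the gator docs:

gator test \
  --filename=template.yaml \
  --filename=constraint.yaml \
  --filename=deployment.yaml

gator prints any violations the constraint would raise against the supplied manifests, without needing a cluster.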

Please read my other articles as well and share your feedback. If you like the content shared, please like, comment, and subscribe for new articles.
