Automatic import/deletion of Kubernetes workload projects
This feature is currently in beta. We would appreciate any feedback you might have.
You can configure the Snyk controller to automatically import scanned workloads into Snyk as projects, keep them updated, and test and monitor them for vulnerabilities. You can also automatically delete imported projects once their workloads are deleted from the cluster. The controller evaluates policy decisions using a policy file written in the Rego policy language.

Enabling workload auto-import and auto-delete

The Snyk controller's Helm chart ships with a default Rego policy that processes events for any workload except Jobs. To enable this feature, provide your Snyk Organization public ID when installing the Helm chart.
helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
  --namespace snyk-monitor \
  --set clusterName="Production cluster" \
  --set policyOrgs={19982df2-0ed5-4a16-b355-e6535cfc41ef}
Note that policyOrgs is a list of Organization public IDs; you can add more than one Organization to use the auto-import and auto-delete capabilities. You can find this public ID on your Organization's settings page.
You can only use organizations that share the same Kubernetes integration ID used to provision the Snyk controller.
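For example, to enable the feature for two Organizations, pass both public IDs to policyOrgs (the second ID below is a hypothetical placeholder). Note the quotes, which stop the shell from brace-expanding the list:

helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
  --namespace snyk-monitor \
  --set clusterName="Production cluster" \
  --set policyOrgs="{19982df2-0ed5-4a16-b355-e6535cfc41ef,aaaabbbb-cccc-dddd-eeee-ffff00001111}"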

Policy syntax

Provide the policy file to the Snyk controller in a ConfigMap. The policy syntax looks like this:
package snyk

orgs := []

default workload_events = false
You can flip the default value to true to automatically import and delete everything in the cluster. Tip: exclude Jobs from auto-import, as they can be noisy and generate lots of workload imports in your Snyk Organization.
Both package snyk and the key workload_events are required by the Snyk controller.
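For instance, a minimal policy that turns on workload events for everything in the cluster (including the noisy Pods and Jobs mentioned above) looks like this:

package snyk

orgs := ["19982df2-0ed5-4a16-b355-e6535cfc41ef"]

# Import and delete projects for every workload in the cluster
default workload_events = true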

Defining rules

To define your own rules, set a condition on the workload_events key and provide your Organization public ID. For example, to import workloads from the default namespace and automatically delete them on the Snyk side once they are deleted from the cluster, the policy would look like this:
package snyk

orgs := ["19982df2-0ed5-4a16-b355-e6535cfc41ef"]

default workload_events = false

workload_events {
    input.metadata.namespace == "default"
}
Here, input refers to the Kubernetes metadata of the workload scanned by the Snyk controller.
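As a rough illustration, a Deployment named my-app (a hypothetical name) in the default namespace would produce an input document along these lines; the policies in this section reference its kind, metadata.namespace, and metadata.annotations fields:

{
  "kind": "Deployment",
  "metadata": {
    "name": "my-app",
    "namespace": "default",
    "annotations": {
      "team": "apollo"
    }
  }
}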
You can also create a policy for workload events (creation/deletion) with a specific annotation:
package snyk

orgs := ["19982df2-0ed5-4a16-b355-e6535cfc41ef"]

default workload_events = false

workload_events {
    input.metadata.annotations.team == "apollo"
}
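For this rule to match, the workload itself must carry the annotation. Assuming a hypothetical Deployment named my-app, you could add it with:

kubectl annotate deployment my-app team=apollo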

Excluding workload types

As a best practice, we recommend excluding specific workload types such as Pods and Jobs from workload events (creation/deletion), as they can be noisy and generate lots of workload imports in your Snyk Organization. You can do this with the following example policy:
package snyk

orgs := ["19982df2-0ed5-4a16-b355-e6535cfc41ef"]

default workload_events = false

workload_events {
    input.kind != "Job"
    input.kind != "Pod"
}
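All expressions inside a single rule body must hold for the rule to match (a logical AND); to express OR, define multiple workload_events rules. As a sketch, the following policy imports workloads that are in the default namespace and are neither Jobs nor Pods, or that carry the team=apollo annotation:

package snyk

orgs := ["19982df2-0ed5-4a16-b355-e6535cfc41ef"]

default workload_events = false

# AND: all three conditions must hold
workload_events {
    input.metadata.namespace == "default"
    input.kind != "Job"
    input.kind != "Pod"
}

# OR: a second rule body matches independently
workload_events {
    input.metadata.annotations.team == "apollo"
}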

Configuring the Snyk controller to use the policy

kubectl create configmap snyk-monitor-custom-policies \
  -n snyk-monitor \
  --from-file=workload-events.rego # This name is hardcoded

helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
  --namespace snyk-monitor \
  --set clusterName="Production cluster" \
  --set workloadPoliciesMap=snyk-monitor-custom-policies
NOTE: Ensure the file is named workload-events.rego.
Now you can deploy the Snyk controller, or restart it if it is already running, so that it picks up the policy.
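If the controller is already running, you can trigger a restart with a rollout restart (assuming the Deployment is named snyk-monitor, as in the default Helm installation):

kubectl rollout restart deployment/snyk-monitor -n snyk-monitor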