Install the Snyk controller with Helm
To get vulnerability details about your Kubernetes workloads, a Snyk admin must first install the Snyk controller onto your cluster. The Snyk controller is published in Helm Hub.
This section covers:
  • Snyk integration for most Kubernetes platforms
  • Additional configuration steps for integration when using Amazon Elastic Container Registry (ECR) with your Amazon Elastic Kubernetes Service (EKS) clusters
Prerequisites
Feature availability: This feature is available with all paid plans. See pricing plans for more details.
  • An administrator account for your Snyk organization.
  • A minimum of 50 GB of storage must be available in the form of an emptyDir on the cluster.
  • Your Kubernetes cluster needs to be able to communicate with Snyk outbound over HTTPS (an optional connectivity check is shown after this list).
  • When configuring Snyk to integrate with an Amazon Elastic Kubernetes Service (EKS) cluster, if you wish to scan images hosted on your Amazon Elastic Container Registry (ECR), you must first follow the prerequisites outlined in the AWS documentation.
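As a quick sanity check of the outbound HTTPS requirement, you can run a short-lived pod that makes an HTTPS request from inside the cluster. This is only an illustrative check; the curlimages/curl image and the api.snyk.io host are assumptions, and the controller may contact other Snyk endpoints:

  # Any HTTP response (even 4xx) proves outbound HTTPS works; a timeout suggests egress is blocked
  kubectl run snyk-egress-test --rm -it --restart=Never \
    --image=curlimages/curl -- curl -sSI https://api.snyk.io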
Steps
  1. Access your Kubernetes environment and run the following command in order to add the Snyk Charts repository to Helm:

     helm repo add snyk-charts https://snyk.github.io/kubernetes-monitor
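     Optionally, refresh your local chart cache so that the latest published version of the chart is pulled; this is standard Helm usage rather than anything Snyk-specific:

     helm repo update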
  2. Once the repository is added, create a unique namespace for the Snyk controller:

     kubectl create namespace snyk-monitor

     Tip: Use a unique namespace to isolate the controller resources more easily; this is generally good practice for Kubernetes applications. Note that the namespace is called snyk-monitor: you will need this name later when configuring other resources.
  3. Now, log in to your Snyk account and navigate to Integrations.
  4. Search for and click Kubernetes.
  5. On the page that loads, click Connect and copy the Integration ID. The Snyk Integration ID is a UUID, similar to this format: abcd1234-abcd-1234-abcd-1234abcd1234. Save it for use from your Kubernetes environment in the next step.
  6. The Snyk monitor runs using your Snyk Integration ID and a dockercfg file. If you are not using any private registries, create a Kubernetes secret called snyk-monitor containing the Snyk Integration ID from the previous step by running the following command:

     kubectl create secret generic snyk-monitor -n snyk-monitor \
       --from-literal=dockercfg.json={} \
       --from-literal=integrationId=abcd1234-abcd-1234-abcd-1234abcd1234

     Note: The secret must be called snyk-monitor in order for the integration to work.
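     To double-check that the secret was created with the expected keys, you can describe it; the output lists key names and sizes but not the secret values:

     kubectl describe secret snyk-monitor -n snyk-monitor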
  7. If any of the images you need to scan are located in private registries, you must provide credentials to access those registries by creating a secret (which must also be called snyk-monitor) from both the Snyk Integration ID and a dockercfg.json file. The dockercfg.json file is necessary to allow the monitor to look up images in private registries. Usually, your credentials reside in $HOME/.docker/config.json; these credentials must also be added to the dockercfg.json file.
     1. Create a file named dockercfg.json and store your credentials in it; it should look like this:
        {
          // If your cluster does not run on GKE, or it runs on GKE and pulls
          // images from other private registries, add the following:
          "auths": {
            "gcr.io": {
              "auth": "BASE64-ENCODED-AUTH-DETAILS"
            }
            // Add other registries as necessary
          },

          // If your cluster runs on GKE and you are using GCR, add the following:
          "credHelpers": {
            "us.gcr.io": "gcloud",
            "asia.gcr.io": "gcloud",
            "marketplace.gcr.io": "gcloud",
            "gcr.io": "gcloud",
            "eu.gcr.io": "gcloud",
            "staging-k8s.gcr.io": "gcloud"
          },

          // If your cluster runs on EKS and you are using ECR, add the following:
          "credsStore": "ecr-login"
        }

        With Docker 1.13.0 or greater, you can configure Docker to use different credential helpers for different registries. To use the ECR credential helper for a specific ECR registry only, create a credHelpers section with the URI of your ECR registry:

        {
          "credHelpers": {
            "public.ecr.aws": "ecr-login",
            "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
          }
        }
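        If you are unsure what to put in an auth entry, it is typically the base64 encoding of username:password for that registry. A minimal way to produce it (the credentials shown are placeholders):

        # Outputs the value to paste into the "auth" field for a registry
        echo -n 'MY_USER:MY_PASSWORD' | base64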
     2. Create the secret with that file included:

        kubectl create secret generic snyk-monitor \
          -n snyk-monitor --from-file=dockercfg.json \
          --from-literal=integrationId=abcd1234-abcd-1234-abcd-1234abcd1234
  8. If your registry uses self-signed or other additional certificates, you must make those available to the Snyk monitor. First place the .crt, .cert, and/or .key files in a directory and create a ConfigMap from it:

     kubectl create configmap snyk-monitor-certs \
       -n snyk-monitor --from-file=<path_to_certs_folder>
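     To confirm the certificates were picked up, you can list the ConfigMap's keys (each file name becomes a key); this is an optional sanity check:

     kubectl describe configmap snyk-monitor-certs -n snyk-monitor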
  9. If you are using an insecure registry, or your registry uses unqualified images, you can provide a registries.conf file, for example:

     [[registry]]
     location = "internal-registry-for-example.net/bar"
     insecure = true
     See the documentation for information on the format and further examples. Once you have created the file, use it to create the following ConfigMap:

     kubectl create configmap snyk-monitor-registries-conf \
       -n snyk-monitor \
       --from-file=<path_to_registries_conf_file>
  10. Install the Snyk Helm chart:

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set clusterName="Production cluster"

      If you are running your own instance of Snyk, you need to specify the API endpoint when installing the controller. Replace <server> below with the full hostname of your Snyk instance.
      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set clusterName="Production cluster" \
        --set integrationApi=https://<server>/kubernetes-upstream

      Tip: Replace the name Production cluster with a name based on the cluster you are monitoring; you will use this label to find workloads in Snyk later. Note that the / character is not allowed in cluster names and any / will be removed. Also, to avoid naming the cluster on every update, you can use Helm's --reuse-values option: when upgrading, Helm reuses the last release's values and merges in any overrides from the command line via --set and -f. If --reset-values is specified, --reuse-values is ignored.
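      After the install completes, you can confirm the controller is up before continuing (the pod name prefix snyk-monitor is an assumption based on the release name used above):

      # The snyk-monitor pod should reach STATUS Running and READY 1/1
      kubectl get pods -n snyk-monitor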
  11. If you are using a proxy for the outbound connection to Snyk, you need to configure the integration to use that proxy by setting the following values provided in the Helm chart:
      • http_proxy
      • https_proxy
      • no_proxy
      For instance:
      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set clusterName="Production cluster" \
        --set https_proxy=http://192.168.99.100:8080
    Note that the integration does not support CIDR address ranges or wildcards in the no_proxy value. Only exact matches are supported.
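      For example, to send traffic through the proxy while bypassing it for one specific host, you could combine https_proxy with an exact-match no_proxy entry (the hostname below is purely illustrative):

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set clusterName="Production cluster" \
        --set https_proxy=http://192.168.99.100:8080 \
        --set no_proxy=internal-registry.example.com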
  12. If you would like to alter the logging verbosity, you can do so as follows. Valid levels are INFO, WARN and ERROR. We default to INFO.

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set clusterName="Production cluster" \
        --set log_level="WARN"
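      To see the effect of the chosen level, you can tail the controller's logs (this assumes the chart names the deployment snyk-monitor; adjust if yours differs):

      kubectl logs -n snyk-monitor deployment/snyk-monitor --follow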
  13. By default the controller will run without a Pod Security Policy. However, this can be enabled by passing a setting:

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set clusterName="Production cluster" \
        --set psp.enabled=true
      You can reuse an existing Pod Security Policy by specifying its name. If you don't specify a name, a new policy will be created automatically.

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set clusterName="Production cluster" \
        --set psp.enabled=true \
        --set psp.name=something
  14. You can configure the Snyk controller to use a PersistentVolumeClaim (PVC) instead of the default emptyDir storage medium for temporarily pulling images. The PVC can either be created by the Helm template provided in the Snyk chart, or you can use an already provisioned PVC.
      Use the following flags to control the PVC:
      • pvc.enabled - instructs the Helm chart to use a PVC instead of an emptyDir
      • pvc.create - whether to create the PVC; useful when provisioning for the first time
      • pvc.storageClassName - controls the StorageClass of the PVC
      • pvc.name - the name of the PVC to use in Kubernetes
      For example, you can run the following command on installation to provision and create the PVC:
      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set pvc.enabled=true \
        --set pvc.create=true \
        --set pvc.name="snyk-monitor-pvc"
      On subsequent upgrades you can drop the pvc.create flag, because the PVC already exists:

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set pvc.enabled=true \
        --set pvc.name="snyk-monitor-pvc"
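      If your cluster requires a specific StorageClass for the claim, you can also pass pvc.storageClassName; the class name below is only an example and depends on your cluster:

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set pvc.enabled=true \
        --set pvc.create=true \
        --set pvc.name="snyk-monitor-pvc" \
        --set pvc.storageClassName="standard"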
  15. By default, we purposely do not scan certain namespaces that we consider internal to Kubernetes (any namespace starting with kube-; the full list can be found here). If you wish to change this, you can configure the excluded namespaces: by providing your own list with the excludedNamespaces setting, you override our default settings and only the namespaces you provide are excluded.

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set excludedNamespaces="{kube-node-lease,local-path-storage,some_namespace}"
  16. If more resources are required to deploy the controller, configure the Helm chart's default values for requests and limits with the --set flag:

      helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
        --namespace snyk-monitor \
        --set requests."ephemeral-storage"="50Gi" \
        --set limits."ephemeral-storage"="50Gi"