Snyk Container-specific CI/CD strategies

The best stage to implement Snyk Container in your pipeline is after the container image is built (after running the equivalent of “docker build”), and before your image is either pushed into your registry (“docker push”) or deployed to your running infrastructure (“helm install”, “kubectl apply” and so on).
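
For example, a minimal pipeline script (with a placeholder registry, image name, and manifest) might order the steps like this:

    # Build the image first
    docker build -t registry.example.com/myapp:$BUILD_TAG .

    # Scan the freshly built image; a non-zero exit code fails the build here,
    # before the image reaches the registry or the cluster
    snyk container test registry.example.com/myapp:$BUILD_TAG --file=Dockerfile

    # Push and deploy only if the scan passed
    docker push registry.example.com/myapp:$BUILD_TAG
    kubectl apply -f deployment.yaml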

How you run your container build-test-deploy pipeline typically depends on whether a Docker daemon is available to the build agent.

Running the pipeline if a Docker daemon is available

If any of the following circumstances apply:

  • You are running your build tooling (such as Jenkins) directly on a host that has Docker natively installed.

  • Your pipeline tasks run inside containers that have the Docker socket (/var/run/docker.sock) bind-mounted from the host.

  • You are running a Docker-in-Docker setup.

Snyk can help as follows:

  • When you run snyk container test $IMAGE_NAME, Snyk looks for that image in your local daemon storage, and if the image does not exist, does the equivalent of a docker pull to download it from your upstream registry.

  • For registry authentication, Snyk uses the credentials you already configured (with something like docker login).

  • You can specify --file=Dockerfile on the command line to link the image vulnerability results to the Dockerfile that built the image, and receive inline fix advice and alternate base image suggestions (see the example after this list).
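
As a sketch, a daemon-based scan step might combine these points as follows (the registry host and $IMAGE_NAME are placeholders):

    # Authenticate to the upstream registry; Snyk reuses these credentials
    # if it needs to pull the image
    docker login registry.example.com

    # Test the image from local daemon storage (pulled automatically if
    # absent), linking results to the Dockerfile for inline fix advice
    # and base image suggestions
    snyk container test $IMAGE_NAME --file=Dockerfile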

Running the pipeline if a Docker daemon is not available

If any of the following circumstances apply:

  • You containerize each build task but do not mount the Docker socket for security and performance reasons.

  • Pipeline tasks are split across hosts (or even clusters) and rely on artifacts being handed off through a central volume or an intermediate registry/object store.

  • You work in an ecosystem that uses only OCI-compliant container images.

Snyk can help as follows:

  • Run either snyk container test docker-archive:archive.tar or snyk container test oci-archive:archive.tar to get Snyk vulnerability results against tar-formatted container images (either in Docker or OCI format) without relying on the Docker daemon.

  • The tar archive can be generated by your build process using the equivalent of docker save and stored in a shared volume or object store, where the build agent container running the Snyk binary can access it with no other dependencies required (see the example after this list).
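
For example, the hand-off between build and scan stages might look like the following sketch (the shared path and $IMAGE_NAME are illustrative):

    # Build stage: export the image as a Docker-format tar archive
    # into a volume or object store shared with the scan stage
    docker save $IMAGE_NAME -o /shared/archive.tar

    # Scan stage (potentially a different host or container): test the
    # archive directly, with no Docker daemon required
    snyk container test docker-archive:/shared/archive.tar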

Good practice recommendations for integration with container images

  • Regardless of how you integrate with container images during CI, run your Snyk Container scan as a build step separate from your Snyk Open Source (application SCA) test. This lets you isolate build failures to vulnerabilities in either the container/OS layer or the application layer, and it makes each build task easier to containerize.

  • Use CLI flags such as --fail-on and --severity-threshold to customize the failure status of the build task. For more advanced usage, pass --json to generate a JSON file containing the full vulnerability report, and set your own build failure status based on the JSON data (see the sketch following this list).

  • Pass --exclude-base-image-vulns to report only vulnerabilities introduced by your user layers, rather than vulnerabilities that come from the base image of the container (the image you specify in the FROM line in the Dockerfile).

  • Run snyk container monitor after snyk container test (or simply select the Monitor option in your plugin settings) to keep a record of this container's bill of materials in the Snyk UI and proactively monitor for new vulnerabilities daily. This is useful when pushing new releases into production environments. Use --project-name to give each release a unique identifier, ensuring production containers are tracked separately from others in your build process.
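
Putting several of these recommendations together, a hardened scan step might look like this sketch (the thresholds and the jq filter are illustrative choices, not required settings):

    # Fail the build only on high-severity vulnerabilities with an available
    # upgrade, ignoring anything inherited from the base image
    snyk container test $IMAGE_NAME \
      --file=Dockerfile \
      --severity-threshold=high \
      --fail-on=upgradable \
      --exclude-base-image-vulns

    # For more advanced gating, capture the full report as JSON and apply
    # your own failure logic (here: any critical-severity vulnerability);
    # "|| true" keeps the scan's exit code from ending the script early
    snyk container test $IMAGE_NAME --json > snyk-report.json || true
    CRITICALS=$(jq '[.vulnerabilities[] | select(.severity == "critical")] | length' snyk-report.json)
    if [ "$CRITICALS" -gt 0 ]; then
      echo "Failing build: $CRITICALS critical vulnerabilities found"
      exit 1
    fi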
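
After a passing test, a monitor step might then record the release; the project name shown is just one possible naming convention:

    # Record this container's bill of materials in the Snyk UI and
    # monitor it daily for newly disclosed vulnerabilities
    snyk container monitor $IMAGE_NAME \
      --file=Dockerfile \
      --project-name=myapp-prod-$RELEASE_TAG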
