The following conditions must be met:
You need access to build machines running the desired architectures (running kaniko in an emulator, e.g. QEMU, should also be possible but is beyond the scope of this documentation). Keep this in mind when using SaaS build tools such as github.com or gitlab.com: at the time of writing, neither supports any non-x86_64 SaaS runners (GitHub, GitLab), so be prepared to bring your own machines.
Kaniko needs to be able to run on the desired architectures. At the time of writing, the official Kaniko container supports linux/amd64, linux/arm64, linux/s390x and linux/ppc64le (not on *-debug images).
The container registry of your choice must be OCIv1 or Docker v2.2 compatible.
It is up to you to find an automation tool that suits your needs best. We recommend using a modern CI/CD system such as GitHub workflows or GitLab CI. As we (the authors) happen to use GitLab CI, the following examples are tailored to this specific platform, but the underlying principles should apply anywhere else, and the examples are kept simple enough that you should be able to follow along even without any previous experience with this specific platform. When in doubt, visit the .gitlab-ci.yml reference page for a comprehensive overview of the GitLab CI keywords.
gitlab-ci.yml:
```yaml
# define a job for building the containers
build-container:
  stage: container-build
  # run parallel builds for the desired architectures
  parallel:
    matrix:
      - ARCH: amd64
      - ARCH: arm64
  tags:
    # run each build on a suitable, preconfigured runner
    # (must match the target architecture)
    - runner-${ARCH}
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # build the container image for the current arch using kaniko and push it
    # to the GitLab container registry, tagged with the current arch
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${ARCH}"
```
gitlab-ci.yml:
```yaml
# define a job for creating and pushing a merged manifest
merge-manifests:
  stage: container-build
  # all containers must be built before merging them;
  # alternatively the job may be configured to run in a later stage
  needs:
    - job: build-container
      artifacts: false
  tags:
    # may run on any architecture supported by the manifest-tool image
    - runner-xyz
  image:
    name: mplatform/manifest-tool:alpine
    entrypoint: [""]
  script:
    # authorize against your container registry and push a merged manifest
    # for the defined architectures; "ARCH" in the template is replaced by
    # manifest-tool with each arch from the platform definitions, and the
    # final, combined image is pushed to the --target reference
    - >-
      manifest-tool
      --username=${CI_REGISTRY_USER}
      --password=${CI_REGISTRY_PASSWORD}
      push from-args
      --platforms linux/amd64,linux/arm64
      --template ${CI_REGISTRY_IMAGE}:ARCH
      --target ${CI_REGISTRY_IMAGE}
```
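What manifest-tool pushes is a manifest list (Docker v2.2) or OCI image index: a small JSON document mapping each platform to the digest of its per-arch manifest. A minimal sketch of such an index follows; the digests and sizes are placeholders, not real values:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<amd64-manifest-digest>",
      "size": 1234,
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<arm64-manifest-digest>",
      "size": 1234,
      "platform": { "architecture": "arm64", "os": "linux" }
    }
  ]
}
```

Clients pulling the combined image resolve this index and fetch the manifest matching their own platform.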
For simplicity's sake, we deliberately refrained from using versioned, tagged images in the previous examples (all builds are tagged as "latest"), as we feel this adds too much platform- and workflow-specific code.
Nevertheless, for anyone interested in how we handle (dynamic) versioning in GitLab, here is a short rundown:
If you are only interested in building tagged releases, you can simply use the GitLab predefined `CI_COMMIT_TAG` variable when running a tag pipeline.
When you (like us) want to additionally build container images outside of releases, things get a bit messier. In our case, we added an additional job which runs before the build and merge jobs (don't forget to extend the `needs` section of the build and merge jobs accordingly). It sets the tag to `latest` when running on the default branch, to the commit hash when running on other branches, and to the release tag when running in a tag pipeline.
gitlab-ci.yml:
```yaml
container-get-tag:
  stage: pre-container-build-stage
  tags:
    - runner-xyz
  image: busybox
  script:
    - |
      # If the pipeline runs on the default branch: set the tag to "latest"
      if test "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH"; then
        tag="latest"
      # If the pipeline is a tag pipeline: set the tag to the git commit tag
      elif test -n "$CI_COMMIT_TAG"; then
        tag="$CI_COMMIT_TAG"
      # Else: set the tag to the currently built git commit SHA
      else
        tag="$CI_COMMIT_SHA"
      fi
    - echo "tag=$tag" > build.env
  # pass the tag to the build and merge jobs.
  # See: https://docs.gitlab.com/ee/ci/variables/#pass-an-environment-variable-to-another-job
  artifacts:
    reports:
      dotenv: build.env
```
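The branch/tag/SHA decision above can be factored into a small shell function for local experimentation. `select_tag` is a hypothetical helper, not part of the job; it mirrors the same precedence (default branch, then tag pipeline, then commit SHA):

```shell
# hypothetical helper mirroring the job's tag-selection logic
select_tag() {
  branch="$1"; default_branch="$2"; commit_tag="$3"; commit_sha="$4"
  if [ "$branch" = "$default_branch" ]; then
    echo "latest"          # default branch -> "latest"
  elif [ -n "$commit_tag" ]; then
    echo "$commit_tag"     # tag pipeline -> the git tag
  else
    echo "$commit_sha"     # any other branch -> the commit SHA
  fi
}

select_tag "main" "main" "" "abc123"     # latest
select_tag "feature" "main" "" "abc123"  # abc123
select_tag "" "main" "v1.0.0" "abc123"   # v1.0.0
```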
Similar tools include:
BuildKit
img
orca-build
umoci
buildah
FTL
Bazel rules_docker
All of these tools build container images with different approaches.
`BuildKit` (and `img`) can perform as a non-root user from within a container, but this requires seccomp and AppArmor to be disabled to create nested containers. `kaniko` does not actually create nested containers, so it does not require seccomp and AppArmor to be disabled. BuildKit supports "cross-building" multi-arch containers by leveraging QEMU.
`orca-build` depends on `runc` to build images from Dockerfiles, which cannot run inside a container (for similar reasons to `img` above). `kaniko` doesn't use `runc`, so it doesn't require the use of kernel namespacing techniques. However, `orca-build` does not require Docker or any privileged daemon (so builds can be done entirely without privilege).
`umoci` works without any privileges, and also has no restrictions on the root filesystem being extracted (though it requires additional handling if your filesystem is sufficiently complicated). However, it has no `Dockerfile`-like build tooling (it's a slightly lower-level tool that can be used to build such builders, such as `orca-build`).
Buildah
specializes in building OCI images. Buildah's commands replicate all
of the commands that are found in a Dockerfile. This allows building images with
and without Dockerfiles while not requiring any root privileges. Buildah’s
ultimate goal is to provide a lower-level coreutils interface to build images.
The flexibility of building images without Dockerfiles allows for the
integration of other scripting languages into the build process. Buildah follows
a simple fork-exec model and does not run as a daemon but it is based on a
comprehensive API in golang, which can be vendored into other tools.
FTL
and Bazel
aim to achieve the fastest possible creation of Docker images
for a subset of images. These can be thought of as a special-case "fast path"
that can be used in conjunction with the support for general Dockerfiles kaniko
provides.
Community: kaniko-users Google group
To contribute to kaniko, see DEVELOPMENT.md and CONTRIBUTING.md.
When taking a snapshot, kaniko's hashing algorithms include (or in the case of `--snapshot-mode=time`, only use) a file's `mtime` to determine if the file has changed. Unfortunately, there is a delay between when changes to a file are made and when the `mtime` is updated. This means:
With the time-only snapshot mode (`--snapshot-mode=time`), kaniko may miss changes introduced by `RUN` commands entirely.
With the default snapshot mode (`--snapshot-mode=full`), whether or not kaniko will add a layer in the case where a `RUN` command modifies a file but the contents do not change is theoretically non-deterministic. This does not affect the contents, which will still be correct, but it does affect the number of layers.
Note that these issues are currently theoretical only. If you see this issue occur, please open an issue.
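The time-only blind spot can be made concrete with a contrived sketch: here `touch -r` stands in for the mtime-update delay by restoring the old timestamp after the contents have changed (GNU coreutils `stat` assumed):

```shell
f=$(mktemp)
ref=$(mktemp)
echo "v1" > "$f"
touch -r "$f" "$ref"   # remember the original mtime in a reference file
echo "v2" > "$f"       # a RUN command rewrites the file...
touch -r "$ref" "$f"   # ...but its mtime ends up looking unchanged
if [ "$(stat -c %Y "$f")" = "$(stat -c %Y "$ref")" ]; then
  # a time-only comparison concludes the file did not change,
  # even though its contents are now different
  echo "mtime identical; content: $(cat "$f")"
fi
```

A snapshotter that compares only `mtime` records no change here, which is exactly the failure mode described above.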
`--chown` support
Kaniko currently supports the `COPY --chown` and `ADD --chown` Dockerfile commands. It does not support `RUN --chown`.
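For illustration, a minimal Dockerfile using the supported forms (the file names and owners are placeholders):

```dockerfile
FROM alpine
# supported: --chown on COPY and ADD
COPY --chown=1000:1000 app/ /app/
ADD --chown=guest:users files.tar.gz /srv/
```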