after_script commands are not run after a multi-stage Dockerfile build with kaniko
Summary
The `after_script` commands are not run after a kaniko build of a multi-stage Docker image. They run as expected when the Dockerfile has a single stage.
Steps to reproduce
The bug can be reproduced with the following simple `.gitlab-ci.yml` file; the `build` job works as expected, but the `build_multi_stage` one does not.

`.gitlab-ci.yml`:
```yaml
stages:
  - build

.build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.9.1-debug
    entrypoint: ['']
  script:
    - /kaniko/executor
      --context $CONTEXT
      --dockerfile ${DOCKERFILE_PATH}
      --no-push
  after_script:
    - echo "applesauce"
  variables:
    CONTEXT: "."

build:
  extends: .build
  variables:
    DOCKERFILE_PATH: Dockerfile

build_multi_stage:
  extends: .build
  variables:
    DOCKERFILE_PATH: Dockerfile_multistage
```
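For clarity, with the `build_multi_stage` job's variables substituted, the one-item `script` list expands to a single kaniko invocation. The sketch below only shows the shell expansion (it does not actually run kaniko):

```shell
# Values taken from the .gitlab-ci.yml above for the failing job.
CONTEXT="."
DOCKERFILE_PATH="Dockerfile_multistage"

# The `script` entry becomes this single command line:
CMD="/kaniko/executor --context $CONTEXT --dockerfile ${DOCKERFILE_PATH} --no-push"
echo "$CMD"
# → /kaniko/executor --context . --dockerfile Dockerfile_multistage --no-push
```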
The file `Dockerfile`:

```dockerfile
FROM alpine:3.16
RUN apk upgrade --no-cache
```
The file `Dockerfile_multistage` (yes, it's a silly one):

```dockerfile
FROM alpine:3.16 AS builder

FROM alpine:3.16
RUN apk upgrade --no-cache
```
I tried with:
- different versions of kaniko (1.9.1, 1.9.0, 1.8.0, 1.7.0)
- different Dockerfiles (as soon as there are multiple stages, the `after_script` is not run)
Actual behavior
The `after_script` is skipped / not run after my kaniko build of a multi-stage image.
Expected behavior
The `after_script` should run after my kaniko build of a multi-stage image.
Relevant logs and/or screenshots
- Logs of the job `build`
- Logs of the job `build_multi_stage`, where the `after_script` is not run
Environment description
config.toml contents
```toml
listen_address = "0.0.0.0:9252"

[session_server]
  session_timeout = 1800

[[runners]]
  request_concurrency = 20
  url = "<REDACTED>"
  executor = "kubernetes"
  output_limit = 10000
  environment = ["K8S_AUTH_KUBECONFIG=~/.kube/config", "FF_GITLAB_REGISTRY_HELPER_IMAGE=1"]
  [runners.custom_build_dir]
  [runners.cache]
    Type = "azure"
    [runners.cache.azure]
      StorageDomain = "blob.core.windows.net"
      AccountName = "<REDACTED>"
      AccountKey = "<REDACTED>"
      ContainerName = "gitlab-cache"
  [runners.kubernetes]
    namespace = "<REDACTED>"
    namespace_overwrite_allowed = ""
    image_pull_secrets = ["<REDACTED>"]
    cpu_request = "200m"
    cpu_request_overwrite_max_allowed = "2"
    memory_request = "200Mi"
    memory_request_overwrite_max_allowed = "4Gi"
    privileged = false
    poll_interval = 5
    poll_timeout = 360
    host = ""
    bearer_token_overwrite_allowed = false
    image = ""
    pull_policy = "if-not-present"
    service_account = ""
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    cpu_limit = "2"
    cpu_limit_overwrite_max_allowed = "4"
    memory_limit = "3Gi"
    memory_limit_overwrite_max_allowed = "8Gi"
```
Used GitLab Runner version
GitLab Runner version 15.9.1 with the Kubernetes executor (GitLab version 15.9.2):

```
Running with gitlab-runner 15.9.1 (d540b510)
...
Using Kubernetes executor with image gcr.io/kaniko-project/executor:v1.9.1-debug ...
```
Possible fixes
No fix in sight!