[release/2.2 backport] Fix AppArmor bug disallowing unix domain sockets on newer kernels #12897

Merged

mxpv merged 1 commit into containerd:release/2.2 from thaJeztah:2.2_backport_fix-12726 on Feb 18, 2026

Conversation

@thaJeztah (Member)

This change sets the AppArmor policy used by containerd to indicate it is `abi/3.0`. This was chosen based on some code archeology, which indicated that containerd 1.7 came out in March 2023, before the AppArmor 4.0 ABI. The AppArmor policies themselves are much older; the last AppArmor version checks were removed in 4baa187 and c990e3f, and both were looking for AppArmor 2.8.96 or older, pointing to `abi/3.0` being the "correct" one to pick.

Nothing is preventing containerd from migrating to a newer AppArmor ABI; note, however, that anything newer than `abi/4.0` will need modifications to preserve UNIX domain sockets.
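For context, an AppArmor policy opts into a pinned ABI with an `abi` rule at the top of the profile. A minimal sketch of what such a preamble looks like (the profile name and rules here are illustrative, not containerd's actual profile):

```
abi <abi/3.0>,

#include <tunables/global>

profile example-profile flags=(attach_disconnected) {
  #include <abstractions/base>

  # Under abi/4.0 and newer, unix domain sockets are mediated and a
  # profile would additionally need an explicit rule such as:
  #   unix,
}
```

Pinning `abi/3.0` keeps unix domain sockets unmediated regardless of the parser and kernel version, which is what this change relies on.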

This was tested by building a custom k3s v1.35.0+k3s3, with the following modification:

```
diff --git a/go.mod b/go.mod
index 4e7bacd204..0fcaf76b8f 100644
--- a/go.mod
+++ b/go.mod
@@ -8,7 +8,7 @@ replace (
        github.com/cilium/ebpf => github.com/cilium/ebpf v0.12.3
        github.com/cloudnativelabs/kube-router/v2 => github.com/k3s-io/kube-router/v2 v2.6.3-k3s1
        github.com/containerd/containerd/api => github.com/containerd/containerd/api v1.9.0
-       github.com/containerd/containerd/v2 => github.com/k3s-io/containerd/v2 v2.1.5-k3s1
+       github.com/containerd/containerd/v2 => github.com/achernya/containerd/v2 v2.0.0-20260206214308-5e0dce89c422
        github.com/containerd/imgcrypt => github.com/containerd/imgcrypt v1.1.11
        github.com/containerd/stargz-snapshotter => github.com/k3s-io/stargz-snapshotter v0.17.0-k3s1
        github.com/docker/distribution => github.com/docker/distribution v2.8.3+incompatible
```

to use a precursor to this commit.

Once built, the resulting k3s was tested on a brand-new Proxmox installation:

```
root@containerd-test:~# uname -a
Linux containerd-test 6.17.2-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.2-1 (2025-10-21T11:55Z) x86_64 GNU/Linux
root@containerd-test:~# pveversion
pve-manager/9.1.1/42db4a6cf33dac83 (running kernel: 6.17.2-1-pve)
```

Files were copied over:

```
achernya@achernya-dev:~/src/k3s$ scp -r dist/artifacts/ root@containerd-test:
```

and installed:

```
root@containerd-test:~# mkdir -p /var/lib/rancher/k3s/agent/images/ /usr/local/bin
root@containerd-test:~# cp artifacts/k3s /usr/local/bin/
root@containerd-test:~# cp artifacts/k3s-airgap-images-amd64.tar.zst /var/lib/rancher/k3s/agent/images/
```

then finally started with `k3s server`. Argo CD was then installed:

```
root@containerd-test:~# k3s kubectl create namespace argocd
namespace/argocd created
root@containerd-test:~# k3s kubectl apply -n argocd --server-side --force-conflicts -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
[elided]
root@containerd-test:~# k3s kubectl get pods -A
NAMESPACE     NAME                                               READY   STATUS      RESTARTS   AGE
argocd        argocd-application-controller-0                    1/1     Running     0          31s
argocd        argocd-applicationset-controller-77475dfcf-6b4cb   1/1     Running     0          32s
argocd        argocd-dex-server-6485c5ddf5-ckp5s                 1/1     Running     0          32s
argocd        argocd-notifications-controller-758f795776-djx69   1/1     Running     0          32s
argocd        argocd-redis-6cc4bb5db5-lt9fh                      1/1     Running     0          32s
argocd        argocd-repo-server-c76cf57cd-mr4mc                 1/1     Running     0          32s
argocd        argocd-server-6f85b59c87-w6cns                     0/1     Running     0          32s
kube-system   coredns-6b4688786f-pnds2                           1/1     Running     0          4m1s
kube-system   helm-install-traefik-crd-cn28g                     0/1     Completed   0          4m1s
kube-system   helm-install-traefik-hc9gp                         0/1     Completed   2          4m1s
kube-system   local-path-provisioner-6bc6568469-7wglx            1/1     Running     0          4m1s
kube-system   metrics-server-77dbbf84b-nqzsc                     1/1     Running     0          4m1s
kube-system   svclb-traefik-fe6d3a0b-z7jsp                       2/2     Running     0          3m14s
kube-system   traefik-5fdc878c8d-cjhx5                           1/1     Running     0          3m15s
```

Fixes: #12726

(cherry picked from commit a6f03a7)

Signed-off-by: Alex Chernyakhovsky <[email protected]>
Signed-off-by: Sebastiaan van Stijn <[email protected]>
thaJeztah force-pushed the 2.2_backport_fix-12726 branch from 818ab4a to 6c05047 on February 16, 2026
github-project-automation bot moved this from Needs Triage to Review In Progress in Pull Request Review on Feb 18, 2026
mxpv merged commit 3661d86 into containerd:release/2.2 on Feb 18, 2026
52 checks passed
github-project-automation bot moved this from Review In Progress to Done in Pull Request Review on Feb 18, 2026
thaJeztah deleted the 2.2_backport_fix-12726 branch on February 19, 2026
samuelkarp changed the title from "[release/2.2 backport] apparmor: explicitly set abi/3.0" to "[release/2.2 backport] Fix AppArmor bug disallowing unix domain sockets on newer kernels" on Mar 9, 2026
@brandond (Contributor) commented Mar 10, 2026

FYI this has broken containerd on SLE/OpenSUSE 15:

```
load apparmor profile /tmp/cri-containerd.apparmor.d95257929: parser error(\"AppArmor parser error for /tmp/cri-containerd.apparmor.d95257929 in profile /tmp/cri-containerd.apparmor.d95257929 at line 2: Could not open 'abi/3.0': No such file or directory\")
```

Just discovered this while trying to update K3s to containerd v2.2.2.

@samuelkarp (Member)

What AppArmor version does SLE/OpenSUSE 15 have? Are the ABI files located in a different place?

@brandond (Contributor) commented Mar 10, 2026

Ugh, no, but apparently we need to install the `apparmor-abstractions` package; for some reason the base system only comes with `libapparmor1` and `apparmor-parser`.
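A quick way to confirm whether the ABI definition is actually present before assuming a given package layout; the path below is the usual upstream location, and `check_abi` is just an illustrative helper, not anything containerd ships:

```shell
# check_abi: print "present" or "missing" for a given AppArmor ABI file.
check_abi() {
    if [ -f "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

# The usual upstream location of the 3.0 ABI definition:
check_abi /etc/apparmor.d/abi/3.0
```

On openSUSE/SLE this reports `missing` until `apparmor-abstractions` is installed.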

@brandond (Contributor) commented Mar 10, 2026

Is there no other fix for the original issue? I'd like to update all the active release branches of K3s to containerd 2.2 but that's going to be a breaking change if it suddenly requires users to install a new package on their nodes.

Would it be possible to check for the ABI files before requiring them in the template?

@samuelkarp (Member)

I'm not sure how you have containerd packaged, but can you add apparmor-abstractions as a dependency? That's typically how something like this would be handled in a DEB/RPM world. Otherwise you're welcome to use your own AppArmor profile or patch the default in your packaging to add the unix term instead.

We could also look at using the macroExists pattern that was in #12729, though we'd want to stick with 3.0 instead of preferring newer ones.

@brandond (Contributor) commented Mar 10, 2026

This is for K3s, the same thing the original author of #12864 is using. We do not distribute as OS packages with dependencies; except for security module support packages (selinux/apparmor) we are completely self-contained.

Apparently Proxmox (and Ubuntu, and most other distros with AppArmor support) includes the abstractions as part of the base OS; OpenSUSE and SLE are unique in including apparmor-parser without the abstractions. Prior to this patch everything worked fine, but now that a specific ABI is set, additional packages will be required for containerd to function on these distros.

The approach at #12729 seems more portable than unconditionally expecting all distros to have the ABIs available. Naively, something like this perhaps? brandond@281ac8b

@achernya (Contributor)

Huh, I didn't know there were systems that shipped libapparmor and apparmor-parser but not apparmor-abstractions; that seems like a recipe for breaking things. The reason I didn't include the macro test is that, on a newer apparmor+kernel, you would inherit the new ABI and end up with the problem #12726 describes. I would argue it's probably better to turn off apparmor entirely if you can't load the ABI rather than configure apparmor policies in a way that has not been tested.

@brandond (Contributor) commented Mar 11, 2026

I'm not sure what you mean by "not tested", it's been working that way on SLE/OpenSUSE for years.

We can definitely recommend that folks install the abstractions if they run into problems with sockets being blocked, but having apparmor block a few things that it shouldn't until you install an optional package seems like much less of a problem than containers not running at all. The cure is worse than the disease, as it were.

@achernya (Contributor) commented Mar 11, 2026

It's been working by chance. And the moment you get a 6.17 kernel, it will break with the issue I described in the previous comment. So realistically the choices are:

  1. Put the guard in, and accept that things will silently break for SUSE users once they get a new enough kernel.
  2. Detect that the ABI files are missing and disable apparmor.
  3. Do nothing in containerd and require users on SUSE to modify their installations.

I'm hearing that you believe (3) is a nonstarter, so that brings us to (1) or (2). Sounds like you prefer (1)?

@brandond (Contributor) commented Mar 11, 2026

  1. can be handled with documentation; putting up with limited amounts of unwanted behavior ONLY on newer kernels until an optional package is installed seems reasonable to me.
  2. is much less preferable; I'm not sure why anyone would want to disable apparmor entirely if the abi files are missing, as it clearly works fine on any kernel released prior to September '25, which is the vast majority of those currently running in production.
  3. would constitute a breaking change for us; we have not ever historically required users to install or upgrade distribution packages to use new versions of K3s.

So yes, 1 would be my personal preference.
