{"_kubernetes_io":{"feed_refresh_job":"https://testgrid.k8s.io/sig-security-cve-feed#auto-refreshing-official-cve-feed","updated_at":"2024-11-30T05:06:28Z"},"authors":[{"name":"Kubernetes Community","url":"https://www.kubernetes.dev"}],"description":"Auto-refreshing official CVE feed for Kubernetes repository","feed_url":"https://kubernetes.io/docs/reference/issues-security/official-cve-feed/index.json","home_page_url":"https://kubernetes.io","items":[{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2024-10220","issue_number":128885},"content_text":"A security vulnerability was discovered in Kubernetes that could allow a user with the ability to create a pod and associate a gitRepo volume to execute arbitrary commands beyond the container boundary. This vulnerability leverages the hooks folder in the target repository to run arbitrary commands outside of the container's boundary.\r\n\r\nPlease note that this issue was originally publicly disclosed with a fix in July (#124531), and we are retroactively assigning it a CVE to assist in awareness and tracking.\r\n\r\nThis issue has been rated High ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N)) (score: 8.1), and assigned CVE-2024-10220.\r\n\r\n### Am I vulnerable?\r\n\r\nThis CVE affects Kubernetes clusters where pods use the in-tree gitRepo volume to clone a repository to a subdirectory. If the Kubernetes cluster is running one of the affected versions listed below, then it is vulnerable to this issue.\r\n\r\n#### Affected Versions\r\n\r\n- kubelet v1.30.0 to v1.30.2\r\n- kubelet v1.29.0 to v1.29.6\r\n- kubelet \u003c= v1.28.11\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nTo mitigate this vulnerability, you must upgrade your Kubernetes cluster to one of the fixed versions listed below. \r\n\r\nAdditionally, since the gitRepo volume has been deprecated, the recommended solution is to perform the Git clone operation using an init container and then mount the directory into the Pod's container. An example of this approach is provided [here](https://gist.github.com/tallclair/849601a16cebeee581ef2be50c351841).\r\n\r\n#### Fixed Versions\r\n\r\n* kubelet master/v1.31.0 - fixed by #124531\r\n* kubelet v1.30.3 - fixed by #125988\r\n* kubelet v1.29.7 - fixed by #125989\r\n* kubelet v1.28.12 - fixed by #125990\r\n\r\n### Detection\r\n\r\nTo detect whether this vulnerability has been exploited, you can use the following command to list all pods that use the in-tree gitRepo volume and clones to a .git subdirectory. 
\r\n\r\n```\r\nkubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.volumes[].gitRepo.directory | endswith(\"/.git\")) | {name: .metadata.name, namespace: .metadata.namespace}'\r\n```\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported and mitigated by Imre Rad.\r\n\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig node\r\n/area kubelet","date_published":"2024-11-20T15:30:44Z","external_url":"https://www.cve.org/cverecord?id=CVE-2024-10220","id":"CVE-2024-10220","status":"fixed","summary":"Arbitrary command execution through gitRepo volume","url":"https://github.com/kubernetes/kubernetes/issues/128885"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2024-9594","issue_number":128007},"content_text":"CVSS Rating: [CVSS:3.1/AV:A/AC:H/PR:H/UI:R/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:A/AC:H/PR:H/UI:R/S:U/C:H/I:H/A:H)\r\n\r\nA security issue was discovered in the Kubernetes Image Builder where default credentials are enabled during the image build process when using the Nutanix, OVA, QEMU or raw providers. The credentials can be used to gain root access. The credentials are disabled at the conclusion of the image build process. Kubernetes clusters are only affected if their nodes use VM images created via the Image Builder project. \r\n\r\n### Am I vulnerable?\r\n\r\nClusters using virtual machine images built with Kubernetes Image Builder (https://github.com/kubernetes-sigs/image-builder) version v0.1.37 or earlier are affected if built with the Nutanix, OVA, QEMU or raw providers. These images were vulnerable during the image build process and are affected only if an attacker was able to reach the VM where the image build was happening and used the vulnerability to modify the image while the build was occurring.\r\n\r\nVMs using images built with the Proxmox provider are affected by a related, but much more serious vulnerability (see #128006).\r\n\r\nVMs using images built with all other providers are not affected by this issue.\r\n\r\nTo determine the version of Image Builder you are using, use one of the following methods:\r\n- For git clones of the image builder repository:\r\n```\r\n cd \u003clocal path to image builder repo\u003e\r\n make version\r\n```\r\n- For installations using a tarball download:\r\n```\r\n cd \u003clocal path to install location\u003e\r\n grep -o v0\\\\.[0-9.]* RELEASE.md | head -1\r\n```\r\n- For a container image release:\r\n `docker run --rm \u003cimage pull spec\u003e version`\r\n or\r\n `podman run --rm \u003cimage pull spec\u003e version`\r\n or look at the image tag specified, in the case of an official image such as `registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.37`\r\n\r\n\r\n#### Affected Versions\r\n\r\n- Kubernetes Image Builder versions \u003c= v0.1.37\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nRebuild any affected images using a fixed version of Image Builder. 
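\r\n\r\nFor example, if you consume the official container image release, one way to confirm you are now building with a fixed version (v0.1.38 or later) is to re-run the version check shown above against the updated tag; a minimal sketch using the image path from this advisory:\r\n\r\n```\r\ndocker run --rm registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.38 version\r\n```\r\n\r\n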
Re-deploy the fixed images to any affected VMs.\r\n\r\n#### Fixed Versions\r\n\r\n- Kubernetes Image Builder master - fixed by https://github.com/kubernetes-sigs/image-builder/pull/1596\r\n- Fixed in Kubernetes Image Builder release v0.1.38\r\n\r\n### Detection\r\n\r\nThe Linux command `last builder` can be used to view logins to the affected `builder` account.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n## Additional Details\r\n\r\nThe fixed version sets a randomly-generated password for the duration of the image build.\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Nicolai Rybnikar @rybnico from Rybnikar Enterprises GmbH.\r\n\r\nThe issue was fixed and coordinated by Marcus Noble of the Image Builder project.\r\n\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig cluster-lifecycle","date_published":"2024-10-11T18:04:50Z","external_url":"https://www.cve.org/cverecord?id=CVE-2024-9594","id":"CVE-2024-9594","status":"fixed","summary":"VM images built with Image Builder with some providers use default credentials during builds","url":"https://github.com/kubernetes/kubernetes/issues/128007"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2024-9486","issue_number":128006},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)\r\n\r\nA security issue was discovered in the Kubernetes Image Builder where default credentials are enabled during the image build process. Additionally, virtual machine images built using the Proxmox provider do not disable these default credentials, and nodes using the resulting images may be accessible via these default credentials. The credentials can be used to gain root access. Kubernetes clusters are only affected if their nodes use VM images created via the Image Builder project with its Proxmox provider. \r\n\r\n### Am I vulnerable?\r\n\r\nClusters using virtual machine images built with Kubernetes Image Builder (https://github.com/kubernetes-sigs/image-builder) version v0.1.37 or earlier are affected if built with the Proxmox provider.\r\n\r\nVMs using images built with all other providers are not affected by this issue. See #128007 for a related issue which affects some other providers.\r\n\r\nTo determine the version of Image Builder you are using, use one of the following methods:\r\n- For git clones of the image builder repository:\r\n```\r\n cd \u003clocal path to image builder repo\u003e\r\n make version\r\n```\r\n- For installations using a tarball download:\r\n```\r\n cd \u003clocal path to install location\u003e\r\n grep -o v0\\\\.[0-9.]* RELEASE.md | head -1\r\n```\r\n- For a container image release:\r\n `docker run --rm \u003cimage pull spec\u003e version`\r\n or\r\n `podman run --rm \u003cimage pull spec\u003e version`\r\n or look at the image tag specified, in the case of an official image such as `registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.37`\r\n\r\n\r\n#### Affected Versions\r\n\r\n- Kubernetes Image Builder versions \u003c= v0.1.37\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nRebuild any affected images using a fixed version of Image Builder. 
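\r\n\r\nIf you cannot rebuild immediately, you can disable the `builder` account as described in the mitigation note below and then verify the lock took effect; a hedged sketch (the exact `passwd -S` output format varies by distribution):\r\n\r\n```\r\n# lock the builder account, then confirm the lock took effect\r\nsudo usermod -L builder\r\nsudo passwd -S builder   # a status field of \"L\" (or \"LK\") indicates a locked password\r\n```\r\n\r\n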
Re-deploy the fixed images to any affected VMs.\r\n\r\nPrior to upgrading, this vulnerability can be mitigated by disabling the builder account on affected VMs:\r\n\r\n```\r\nusermod -L builder\r\n```\r\n\r\n#### Fixed Versions\r\n\r\n- Kubernetes Image Builder master - fixed by https://github.com/kubernetes-sigs/image-builder/pull/1595\r\n- Fixed in Kubernetes Image Builder release v0.1.38\r\n\r\n### Detection\r\n\r\nThe Linux command `last builder` can be used to view logins to the affected `builder` account.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n## Additional Details\r\n\r\nThe fixed version makes two changes to remedy this bug:\r\n- It sets a randomly-generated password for the duration of the image build\r\n- It disables the builder account at the conclusion of the image build\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Nicolai Rybnikar @rybnico from Rybnikar Enterprises GmbH.\r\n\r\nThe issue was fixed and coordinated by Marcus Noble of the Image Builder project.\r\n\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig cluster-lifecycle\r\n","date_published":"2024-10-11T18:04:31Z","external_url":"https://www.cve.org/cverecord?id=CVE-2024-9486","id":"CVE-2024-9486","status":"fixed","summary":"VM images built with Image Builder and Proxmox provider use default credentials","url":"https://github.com/kubernetes/kubernetes/issues/128006"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2024-7646","issue_number":126744},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H)\r\n\r\nA security issue was discovered in ingress-nginx where an actor with permission to create Ingress objects (in the `networking.k8s.io` or `extensions` API group) can bypass annotation validation to inject arbitrary commands and obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster.\r\n\r\nThis issue has been rated **High** (8.8) [CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H) and assigned **CVE-2024-7646**.\r\n\r\n### Am I vulnerable?\r\n\r\nThis bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. You can check this by running `kubectl get po -A` and looking for `ingress-nginx-controller`.\r\n\r\nMulti-tenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue.\r\n\r\n### Affected Versions\r\n\r\ningress-nginx controller \u003c v1.11.2\r\ningress-nginx controller \u003c v1.10.4\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nThis issue can be mitigated by upgrading to the fixed version. \r\n\r\n### Fixed Versions\r\n\r\ningress-nginx controller v1.11.2 - fixed by https://github.com/kubernetes/ingress-nginx/pull/11719 and https://github.com/kubernetes/ingress-nginx/pull/11721\r\ningress-nginx controller v1.10.4 - fixed by https://github.com/kubernetes/ingress-nginx/pull/11718 and https://github.com/kubernetes/ingress-nginx/pull/11722\r\n\r\n### Detection\r\n\r\nReview your Kubernetes audit logs for Ingress objects created with annotations (e.g. 
`nginx.ingress.kubernetes.io/auth-tls-verify-client`) that contain carriage returns (`\\r`).\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)\r\n\r\n### Additional Details\r\n\r\nSee the GitHub issue for more details: \r\nhttps://github.com/kubernetes/kubernetes/issues/126744 \r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by André Storfjord Kristiansen @dev-bio. \r\n\r\nThe issue was fixed and coordinated by the fix team:\r\nAndré Storfjord Kristiansen @dev-bio\r\nJintao Zhang @tao12345666333\r\nMarco Ebert @Gacko\r\n\r\n/triage accepted\r\n/lifecycle frozen\r\n/area security\r\n/kind bug\r\n/committee security-response","date_published":"2024-08-16T16:10:31Z","external_url":"https://www.cve.org/cverecord?id=CVE-2024-7646","id":"CVE-2024-7646","status":"fixed","summary":"Ingress-nginx Annotation Validation Bypass","url":"https://github.com/kubernetes/kubernetes/issues/126744"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2024-5321","issue_number":126161},"content_text":"CVSS Rating: [CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:N) - **MEDIUM** (6.1)\r\n\r\nA security issue was discovered in Kubernetes clusters with Windows nodes where `BUILTIN\\Users` may be able to read container logs and `NT AUTHORITY\\Authenticated Users` may be able to modify container logs.\r\n\r\nThis issue has been rated **Medium** ([CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:N)), and assigned **CVE-2024-5321**.\r\n\r\n### Am I vulnerable?\r\n\r\nAny Kubernetes environment with Windows nodes is affected. Run `kubectl get nodes -l kubernetes.io/os=windows` to see if any Windows nodes are in use.\r\n\r\n#### Affected Versions\r\n\r\n- kubelet \u003c= 1.27.15\r\n- kubelet \u003c= 1.28.11\r\n- kubelet \u003c= 1.29.6\r\n- kubelet \u003c= 1.30.2 \r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nThis issue can be mitigated by applying the patch provided. The patch includes changes to `pkg/util/filesystem` that set file permissions on Windows and harden the permissions for container logs for containers running on Windows.\r\n\r\n#### Fixed Versions\r\n\r\n- kubelet 1.27.16\r\n- kubelet 1.28.12\r\n- kubelet 1.29.7\r\n- kubelet 1.30.3 \r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/ \r\n\r\n### Detection\r\n\r\nAny Kubernetes environment with Windows nodes is affected. 
Run `kubectl get nodes -l kubernetes.io/os=windows` to see if any Windows nodes are in use.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Paulo Gomes @pjbgf from SUSE.\r\n\r\nThe issue was fixed and coordinated by the fix team: \r\nMark Rossetti @marosset \r\nJames Sturtevant @jsturtevant \r\nCraig Ingram @cji \r\nRita Zhang @ritazh\r\n\r\nand release managers:\r\nSascha Grunert @saschagrunert\r\nJeremy Rickard @jeremyrickard\r\nCarlos Panato @cpanato\r\nJim Angel @jimangel\r\n\r\n\u003c!-- labels --\u003e\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/sig windows\r\n/area kubelet\r\n/triage accepted\r\n/lifecycle frozen\r\n","date_published":"2024-07-17T13:06:48Z","external_url":"https://www.cve.org/cverecord?id=CVE-2024-5321","id":"CVE-2024-5321","status":"fixed","summary":"Incorrect permissions on Windows containers logs","url":"https://github.com/kubernetes/kubernetes/issues/126161"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2024-3744","issue_number":124759},"content_text":"CVSS Rating: [CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:N/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:N/A:N) - **MEDIUM** (6.5)\r\n\r\nA security issue was discovered in azure-file-csi-driver where an actor with access to the driver logs could observe service account tokens. These tokens could then potentially be exchanged with external cloud providers to access secrets stored in cloud vault solutions. Tokens are only logged when [TokenRequests is configured in the CSIDriver object](https://kubernetes-csi.github.io/docs/token-requests.html) and the driver is set to run at log level 2 or greater via the -v flag.\r\n\r\nThis issue has been rated **MEDIUM** [CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:N/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:N/A:N) (6.5), and assigned **CVE-2024-3744**\r\n\r\n### Am I vulnerable?\r\n\r\nYou may be vulnerable if [TokenRequests is configured in the CSIDriver object](https://kubernetes-csi.github.io/docs/token-requests.html) and the driver is set to run at log level 2 or greater via the -v flag.\r\n\r\nTo check if token requests are configured, run the following command:\r\n\r\n```shell\r\nkubectl get csidriver file.csi.azure.com -o jsonpath=\"{.spec.tokenRequests}\"\r\n```\r\n\r\nTo check if tokens are being logged, examine the azurefile container log:\r\n\r\n```shell\r\nkubectl logs csi-azurefile-controller-56bfddd689-dh5tk -c azurefile -f | grep --line-buffered \"csi.storage.k8s.io/serviceAccount.tokens\"\r\n```\r\n\r\n#### Affected Versions\r\n\r\n- azure-file-csi-driver \u003c= v1.29.3\r\n- azure-file-csi-driver v1.30.0\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nPrior to upgrading, this vulnerability can be mitigated by running azure-file-csi-driver at log level 0 or 1 via the -v flag.\r\n\r\n#### Fixed Versions\r\n\r\n- azure-file-csi-driver v1.29.4\r\n- azure-file-csi-driver v1.30.1\r\n\r\nTo upgrade, refer to the documentation: https://github.com/kubernetes-sigs/azurefile-csi-driver?tab=readme-ov-file#install-driver-on-a-kubernetes-cluster \r\n\r\n### Detection\r\n\r\nExamine cloud provider logs for unexpected token exchanges, as well as unexpected access to cloud resources.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n#### Acknowledgements\r\n\r\nThis 
vulnerability was patched by Weizhi Chen @cvvz from Microsoft.\r\n\r\nThank You,\r\nRita Zhang on behalf of the Kubernetes Security Response Committee\r\n\r\n/triage accepted\r\n/lifecycle frozen\r\n/area security\r\n/kind bug\r\n/committee security-response","date_published":"2024-05-08T16:02:57Z","external_url":"https://www.cve.org/cverecord?id=CVE-2024-3744","id":"CVE-2024-3744","status":"fixed","summary":"azure-file-csi-driver discloses service account tokens in logs","url":"https://github.com/kubernetes/kubernetes/issues/124759"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2024-3177","issue_number":124336},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N) - **Low** (2.7)\r\n\r\nA security issue was discovered in Kubernetes where users may be able to launch containers that bypass the mountable secrets policy enforced by the ServiceAccount admission plugin when using containers, init containers, and ephemeral containers with the envFrom field populated. The policy ensures pods running with a service account may only reference secrets specified in the service account's secrets field. Kubernetes clusters are only affected if the ServiceAccount admission plugin and the `kubernetes.io/enforce-mountable-secrets` annotation are used together with containers, init containers, and ephemeral containers with the envFrom field populated. \r\n\r\n### Am I vulnerable?\r\n\r\nClusters are impacted by this vulnerability if all of the following are true:\r\n\r\n1. The ServiceAccount admission plugin is used. Most clusters should have this on by default as recommended in https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#serviceaccount\r\n2. The `kubernetes.io/enforce-mountable-secrets` annotation is used by a service account. This annotation is not added by default.\r\n3. Pods are using containers, init containers, or ephemeral containers with the envFrom field populated.\r\n\r\n#### Affected Versions\r\n\r\n- kube-apiserver v1.29.0 - v1.29.3\r\n- kube-apiserver v1.28.0 - v1.28.8\r\n- kube-apiserver \u003c= v1.27.12\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nThis issue can be mitigated by applying the patch provided for the kube-apiserver component. The patch prevents containers, init containers, and ephemeral containers with the envFrom field populated from bypassing the mountable secrets policy enforced by the ServiceAccount admission plugin.\r\n\r\nOutside of applying the provided patch, there are no known mitigations to this vulnerability.\r\n\r\n#### Fixed Versions\r\n\r\n- kube-apiserver master - fixed by #124322\r\n- kube-apiserver v1.29.4 - fixed by #124325\r\n- kube-apiserver v1.28.9 - fixed by #124326\r\n- kube-apiserver v1.27.13 - fixed by #124327\r\n\r\nTo upgrade, refer to the documentation:\r\nhttps://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/ \r\n\r\n### Detection\r\n\r\nPod update requests that use a container, init container, or ephemeral container with the envFrom field populated to reference an unintended secret will be captured in API audit logs. You can also use the following kubectl command to find service accounts that use the `kubernetes.io/enforce-mountable-secrets` annotation. 
\r\n\r\n`kubectl get serviceaccounts --all-namespaces -o jsonpath=\"{range .items[?(@.metadata.annotations['kubernetes\\.io/enforce-mountable-secrets']=='true')]}{.metadata.namespace}{'\\t'}{.metadata.name}{'\\n'}{end}\"` \r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by tha3e1vl. \r\n\r\nThe issue was fixed and coordinated by the fix team: \r\n\r\nRita Zhang @ritazh\r\nJoel Smith @joelsmith\r\nMo Khan @enj\r\n\r\nand release managers:\r\nSascha Grunert @saschagrunert\r\nJeremy Rickard @jeremyrickard\r\n\r\n/triage accepted\r\n/lifecycle frozen\r\n/area security\r\n/kind bug\r\n/committee security-response","date_published":"2024-04-16T14:04:09Z","external_url":"https://www.cve.org/cverecord?id=CVE-2024-3177","id":"CVE-2024-3177","status":"fixed","summary":"Bypassing mountable secrets policy imposed by the ServiceAccount admission plugin","url":"https://github.com/kubernetes/kubernetes/issues/124336"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-5528","issue_number":121879},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H) - **HIGH** (7.2)\r\n\r\nA security issue was discovered in Kubernetes where a user that can create pods and persistent volumes on Windows nodes may be able to escalate to admin privileges on those nodes. Kubernetes clusters are only affected if they are using an in-tree storage plugin for Windows nodes.\r\n\r\n### Am I vulnerable?\r\n\r\nAny Kubernetes environment with Windows nodes is impacted. Run `kubectl get nodes -l kubernetes.io/os=windows` to see if any Windows nodes are in use.\r\n\r\n#### Affected Versions\r\n\r\n- kubelet \u003e= v1.8.0 (including all later minor versions)\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nThe provided patch fully mitigates the vulnerability.\r\n\r\nOutside of applying the provided patch, there are no known mitigations to this vulnerability.\r\n\r\n#### Fixed Versions\r\n\r\n- kubelet master - fixed by #121881\r\n- kubelet v1.28.4 - fixed by #121882\r\n- kubelet v1.27.8 - fixed by #121883\r\n- kubelet v1.26.11 - fixed by #121884\r\n- kubelet v1.25.16 - fixed by #121885\r\n\r\nTo upgrade, refer to the documentation:\r\nhttps://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n### Detection\r\n\r\nKubernetes audit logs can be used to detect if this vulnerability is being exploited. 
Persistent Volume create events with local path fields containing special characters are a strong indication of exploitation.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Tomer Peled [@tomerpeled92](https://github.com/tomerpeled92)\r\n\r\nThe issue was fixed and coordinated by the fix team: \r\n\r\nJames Sturtevant @jsturtevant\r\nMark Rossetti @marosset\r\nMichelle Au @msau42 \r\nJan Šafránek @jsafrane \r\nMo Khan @enj \r\nRita Zhang @ritazh\r\nMicah Hausler @micahhausler\r\nSri Saran Balaji @SaranBalaji90\r\nCraig Ingram @cji \r\n\r\nand release managers:\r\nJeremy Rickard @jeremyrickard\r\nMarko Mudrinić @xmudrii \r\n\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig windows\r\n/sig storage\r\n/area kubelet","date_published":"2023-11-14T15:54:16Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-5528","id":"CVE-2023-5528","status":"fixed","summary":"Insufficient input sanitization in in-tree storage plugin leads to privilege escalation on Windows nodes","url":"https://github.com/kubernetes/kubernetes/issues/121879"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-5044","issue_number":126817},"content_text":"### Issue Details\r\nA security issue was identified in [ingress-nginx](https://github.com/kubernetes/ingress-nginx) where the nginx.ingress.kubernetes.io/permanent-redirect annotation on an Ingress object (in the `networking.k8s.io` or `extensions` API group) can be used to inject arbitrary commands, and obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster.\r\n\r\nThis issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L)), and assigned **CVE-2023-5044**.\r\n\r\n### Affected Components and Configurations\r\n\r\nThis bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. 
You can check this by running `kubectl get po -n ingress-nginx`.\r\n\r\nIf you are running the “chrooted” ingress-nginx controller introduced in v1.2.0 (gcr.io/k8s-staging-ingress-nginx/controller-chroot), command execution is possible but credential extraction is not, so the High severity does not apply.\r\n\r\nMulti-tenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue.\r\n\r\n#### Affected Versions\r\n\r\n- \u003cv1.9.0\r\n\r\n#### Versions allowing mitigation\r\n\r\n- v1.9.0\r\n\r\n### Mitigation\r\n\r\nIngress Administrators should set the --enable-annotation-validation flag to enforce restrictions on the contents of ingress-nginx annotation fields.\r\n\r\n### Detection\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)\r\n\r\n### Additional Details\r\n\r\nSee ingress-nginx Issue [#10572](https://github.com/kubernetes/kubernetes/issues/126817) for more details.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Jan-Otto Kröpke (Cloudeteer GmbH).\r\n\r\nThank You,\r\nCJ Cullen on behalf of the Kubernetes Security Response Committee","date_published":"2023-10-25T15:48:28Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-5044","id":"CVE-2023-5044","status":"fixed","summary":"Code injection via nginx.ingress.kubernetes.io/permanent-redirect annotation","url":"https://github.com/kubernetes/kubernetes/issues/126817"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-5043","issue_number":126816},"content_text":"### Issue Details\r\n\r\nA security issue was identified in [ingress-nginx](https://github.com/kubernetes/ingress-nginx) where the nginx.ingress.kubernetes.io/configuration-snippet annotation on an Ingress object (in the `networking.k8s.io` or `extensions` API group) can be used to inject arbitrary commands, and obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster.\r\n\r\nThis issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L)), and assigned **CVE-2023-5043**.\r\n\r\n### Affected Components and Configurations\r\nThis bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. 
You can check this by running `kubectl get po -n ingress-nginx`.\r\n\r\nIf you are running the “chrooted” ingress-nginx controller introduced in v1.2.0 (gcr.io/k8s-staging-ingress-nginx/controller-chroot), command execution is possible but credential extraction is not, so the High severity does not apply.\r\n\r\nMulti-tenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue.\r\n\r\n#### Affected Versions\r\n\r\n- \u003cv1.9.0\r\n\r\n#### Versions allowing mitigation\r\n\r\n- v1.9.0\r\n\r\n### Mitigation\r\nIngress Administrators should set the --enable-annotation-validation flag to enforce restrictions on the contents of ingress-nginx annotation fields.\r\n\r\n### Detection\r\nIf you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)\r\n\r\n### Additional Details\r\nSee ingress-nginx Issue [kubernetes/kubernetes#126816](https://github.com/kubernetes/kubernetes/issues/126816) for more details.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by suanve.\r\n\r\nThank You,\r\nCJ Cullen on behalf of the Kubernetes Security Response Committee","date_published":"2023-10-25T15:48:20Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-5043","id":"CVE-2023-5043","status":"fixed","summary":"Ingress nginx annotation injection causes arbitrary command execution","url":"https://github.com/kubernetes/kubernetes/issues/126816"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2022-4886","issue_number":126815},"content_text":"### Issue Details\r\nA security issue was discovered in [ingress-nginx](https://github.com/kubernetes/ingress-nginx) where a user that can create or update ingress objects can use directives to bypass the sanitization of the `spec.rules[].http.paths[].path` field of an Ingress object (in the `networking.k8s.io` or `extensions` API group) to obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster.\r\n\r\nThis issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H)), and assigned CVE-2022-4886.\r\n\r\n### Affected Components and Configurations\r\nThis bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. You can check this by running `kubectl get po -n ingress-nginx`.\r\n\r\nIf you are running the “chrooted” ingress-nginx controller introduced in v1.2.0 (gcr.io/k8s-staging-ingress-nginx/controller-chroot), command execution is possible but credential extraction is not, so the High severity does not apply.\r\n\r\nMulti-tenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue.\r\n\r\n#### Affected Versions\r\n\r\n- \u003cv1.8.0\r\n\r\n#### Versions allowing mitigation\r\n\r\n- v1.8.0\r\n\r\n### Mitigation\r\nIngress objects contain a field called pathType that defines the proxy behavior. 
It can be Exact, Prefix, or ImplementationSpecific.\r\n\r\nWhen pathType is configured as Exact or Prefix, there is stricter validation, allowing only paths that start with \"/\" and contain only alphanumeric characters plus \"-\", \"_\", and additional \"/\" characters.\r\n\r\nWhen the strict-validate-path-type option is enabled, the validation happens in the Admission Webhook, denying creation of any Ingress containing invalid characters (unless pathType is ImplementationSpecific).\r\n\r\nhttps://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#strict-validate-path-type\r\n\r\nIngress Admins should enable this validation by default. If you still need to allow implementation specific paths due to the usage of features like Regex/rewrite on path, we recommend implementing countermeasures to allow just trusted users to consume this feature, as an example with OPA: https://kubernetes.github.io/ingress-nginx/examples/openpolicyagent/\r\n\r\n### Detection\r\nIf you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)\r\n\r\n### Additional Details\r\nSee ingress-nginx Issue [#10570](https://github.com/kubernetes/kubernetes/issues/126815) for more details.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Ginoah, working with the DEVCORE Internship Program.\r\n\r\nThank You,\r\nCJ Cullen on behalf of the Kubernetes Security Response Committee","date_published":"2023-10-25T15:48:08Z","external_url":"https://www.cve.org/cverecord?id=CVE-2022-4886","id":"CVE-2022-4886","status":"fixed","summary":"ingress-nginx path sanitization can be bypassed","url":"https://github.com/kubernetes/kubernetes/issues/126815"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-3955","issue_number":119595},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H) - **HIGH** (8.8)\r\n\r\nA security issue was discovered in Kubernetes where a user that can create pods on Windows nodes may be able to escalate to admin privileges on those nodes. Kubernetes clusters are only affected if they include Windows nodes.\r\n\r\n### Am I vulnerable?\r\n\r\nAny Kubernetes environment with Windows nodes is impacted. Run `kubectl get nodes -l kubernetes.io/os=windows` to see if any Windows nodes are in use.\r\n\r\n#### Affected Versions\r\n\r\n- kubelet \u003c= v1.28.0\r\n- kubelet \u003c= v1.27.4\r\n- kubelet \u003c= v1.26.7\r\n- kubelet \u003c= v1.25.12\r\n- kubelet \u003c= v1.24.16\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nThe provided patch fully mitigates the vulnerability (see fix impact below). Full mitigation for this class of issues requires patches applied for CVE-2023-3676, CVE-2023-3955, and CVE-2023-3893.\r\n\r\nOutside of applying the provided patch, there are no known mitigations to this vulnerability.\r\n\r\n#### Fixed Versions\r\n\r\n- kubelet master - fixed by #120128\r\n- kubelet v1.28.1 - fixed by #120134\r\n- kubelet v1.27.5 - fixed by #120135\r\n- kubelet v1.26.8 - fixed by #120136\r\n- kubelet v1.25.13 - fixed by #120137\r\n- kubelet v1.24.17 - fixed by #120138\r\n\r\n**Fix impact:** Passing Windows PowerShell disk format options to in-tree volume plugins will result in an error during volume provisioning on the node. 
There are no known use cases for this functionality, nor is this functionality supported by any known out-of-tree CSI driver.\r\n\r\nTo upgrade, refer to the documentation:\r\nhttps://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/\r\n\r\n### Detection\r\n\r\nKubernetes audit logs can be used to detect if this vulnerability is being exploited. Pod create events with embedded PowerShell commands are a strong indication of exploitation.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was discovered by James Sturtevant @jsturtevant and Mark Rossetti @marosset during the process of fixing CVE-2023-3676 (that original CVE was reported by Tomer Peled @tomerpeled92).\r\n\r\nThe issue was fixed and coordinated by the fix team: \r\n\r\nJames Sturtevant @jsturtevant\r\nMark Rossetti @marosset\r\nAndy Zhang @andyzhangx\r\nJustin Terry @jterry75\r\nKulwant Singh @KlwntSingh\r\nMicah Hausler @micahhausler\r\nRita Zhang @ritazh\r\n\r\nand release managers:\r\n\r\nJeremy Rickard @jeremyrickard","date_published":"2023-07-26T15:30:50Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-3955","id":"CVE-2023-3955","status":"fixed","summary":"Insufficient input sanitization on Windows nodes leads to privilege escalation","url":"https://github.com/kubernetes/kubernetes/issues/119595"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-3893","issue_number":119594},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H) - **HIGH** (8.8)\r\n\r\nA security issue was discovered in Kubernetes where a user that can create pods on Windows nodes running kubernetes-csi-proxy may be able to escalate to admin privileges on those nodes. Kubernetes clusters are only affected if they include Windows nodes running kubernetes-csi-proxy.\r\n\r\n### Am I vulnerable?\r\n\r\nAny Kubernetes environment with Windows nodes that are running kubernetes-csi-proxy is impacted. This is a common default configuration on Windows nodes. Run `kubectl get nodes -l kubernetes.io/os=windows` to see if any Windows nodes are in use.\r\n\r\n#### Affected Versions\r\n\r\n- kubernetes-csi-proxy \u003c= v2.0.0-alpha.0\r\n- kubernetes-csi-proxy \u003c= v1.1.2\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nThe provided patch fully mitigates the vulnerability and has no known side effects. Full mitigation for this class of issues requires patches applied for CVE-2023-3676, CVE-2023-3955, and CVE-2023-3893.\r\n\r\nOutside of applying the provided patch, there are no known mitigations to this vulnerability.\r\n\r\n#### Fixed Versions\r\n\r\n- kubernetes-csi-proxy master - fixed by https://github.com/kubernetes-csi/csi-proxy/pull/306\r\n- kubernetes-csi-proxy v2.0.0-alpha.1 - fixed by https://github.com/kubernetes-csi/csi-proxy/pull/307\r\n- kubernetes-csi-proxy v1.1.3 - fixed by https://github.com/kubernetes-csi/csi-proxy/pull/306\r\n\r\nTo upgrade: cordon the node, stop the associated Windows service, replace the csi-proxy.exe binary, restart the associated Windows service, and un-cordon the node. 
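\r\n\r\nA hedged sketch of that sequence, run from an administrator workstation (the Windows service name and binary location are assumptions that depend on how csi-proxy was installed):\r\n\r\n```\r\nkubectl cordon \u003cwindows-node-name\u003e\r\n# on the node: stop the csi-proxy Windows service, replace csi-proxy.exe\r\n# with a fixed binary, and start the service again (for example via\r\n# Stop-Service / Start-Service in PowerShell)\r\nkubectl uncordon \u003cwindows-node-name\u003e\r\n```\r\n\r\n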
See the installation docs for more details: https://github.com/kubernetes-csi/csi-proxy#installation\r\n\r\nIf a Windows HostProcess DaemonSet is used to run kubernetes-csi-proxy, such as https://github.com/kubernetes-csi/csi-driver-smb/blob/master/charts/latest/csi-driver-smb/templates/csi-proxy-windows.yaml, simply upgrade the image to a fixed version such as ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.1.3\r\n\r\n### Detection\r\n\r\nKubernetes audit logs can be used to detect if this vulnerability is being exploited. Pod create events with embedded PowerShell commands are a strong indication of exploitation.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was discovered by James Sturtevant @jsturtevant and Mark Rossetti @marosset during the process of fixing CVE-2023-3676 (that original CVE was reported by Tomer Peled @tomerpeled92).\r\n\r\nThe issue was fixed and coordinated by the fix team: \r\n\r\nJames Sturtevant @jsturtevant\r\nMark Rossetti @marosset\r\nAndy Zhang @andyzhangx\r\nJustin Terry @jterry75\r\nKulwant Singh @KlwntSingh\r\nMicah Hausler @micahhausler\r\nRita Zhang @ritazh\r\n\r\nand release managers:\r\n\r\nMauricio Poppe @mauriciopoppe","date_published":"2023-07-26T15:30:26Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-3893","id":"CVE-2023-3893","status":"fixed","summary":"Insufficient input sanitization on kubernetes-csi-proxy leads to privilege escalation","url":"https://github.com/kubernetes/kubernetes/issues/119594"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-3676","issue_number":119339},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H) - **HIGH** (8.8)\r\n\r\nA security issue was discovered in Kubernetes where a user that can create pods on Windows nodes may be able to escalate to admin privileges on those nodes. Kubernetes clusters are only affected if they include Windows nodes.\r\n\r\n### Am I vulnerable?\r\n\r\nAny Kubernetes environment with Windows nodes is impacted. Run `kubectl get nodes -l kubernetes.io/os=windows` to see if any Windows nodes are in use.\r\n\r\n#### Affected Versions\r\n\r\n- kubelet \u003c= v1.28.0\r\n- kubelet \u003c= v1.27.4\r\n- kubelet \u003c= v1.26.7\r\n- kubelet \u003c= v1.25.12\r\n- kubelet \u003c= v1.24.16\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nThe provided patch fully mitigates the vulnerability and has no known side effects. Full mitigation for this class of issues requires patches applied for CVE-2023-3676, CVE-2023-3955, and CVE-2023-3893.\r\n\r\nOutside of applying the provided patch, there are no known mitigations to this vulnerability.\r\n\r\n#### Fixed Versions\r\n\r\n- kubelet master - fixed by #120127\r\n- kubelet v1.28.1 - fixed by #120129\r\n- kubelet v1.27.5 - fixed by #120130\r\n- kubelet v1.26.8 - fixed by #120131\r\n- kubelet v1.25.13 - fixed by #120132\r\n- kubelet v1.24.17 - fixed by #120133\r\n\r\nTo upgrade, refer to the documentation:\r\nhttps://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/\r\n\r\n### Detection\r\n\r\nKubernetes audit logs can be used to detect if this vulnerability is being exploited. Pod create events with embedded PowerShell commands are a strong indication of exploitation. 
Config maps and secrets that contain embedded PowerShell commands and are mounted into pods are also a strong indication of exploitation.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Tomer Peled @tomerpeled92\r\n\r\nThe issue was fixed and coordinated by the fix team: \r\n\r\nJames Sturtevant @jsturtevant\r\nMark Rossetti @marosset\r\nAndy Zhang @andyzhangx\r\nJustin Terry @jterry75\r\nKulwant Singh @KlwntSingh\r\nMicah Hausler @micahhausler\r\nRita Zhang @ritazh\r\n\r\nand release managers:\r\n\r\nJeremy Rickard @jeremyrickard\r\n\r\n\r\n/triage accepted\r\n/lifecycle frozen\r\n/area security\r\n/kind bug\r\n/committee security-response","date_published":"2023-07-14T18:27:48Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-3676","id":"CVE-2023-3676","status":"fixed","summary":"Insufficient input sanitization on Windows nodes leads to privilege escalation","url":"https://github.com/kubernetes/kubernetes/issues/119339"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-2431","issue_number":118690},"content_text":"### What happened?\r\n\r\nA security issue was discovered in Kubelet that allows pods to bypass the seccomp profile enforcement. This issue has been rated LOW ([CVSS:3.1/AV:L/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N)) (score: 3.4).\r\n\r\nIf you have pods in your cluster that use the [localhost type](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#seccompprofile-v1-core) for the seccomp profile but specify an empty profile field, then you are affected by this issue. In this scenario, this vulnerability allows the pod to run in “unconfined” (seccomp disabled) mode. 
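\r\n\r\nAs a hedged aid for finding affected pods (this only inspects the pod-level securityContext; containers can also set their own seccompProfile field, which would need a similar check):\r\n\r\n```\r\nkubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.securityContext.seccompProfile.type == \"Localhost\" and .spec.securityContext.seccompProfile.localhostProfile == \"\") | {name: .metadata.name, namespace: .metadata.namespace}'\r\n```\r\n\r\n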
This bug affects Kubelet.\r\n\r\n### How can we reproduce it (as minimally and precisely as possible)?\r\n\r\nThis can be reproduced by creating a pod with the following sample seccomp Localhost profile:\r\n\r\n```\r\n type: Localhost\r\n localhostProfile: \"\"\r\n```\r\n\r\nhttps://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#seccompprofile-v1-core\r\n\r\n### Kubernetes version\r\n\r\n**Affected Versions**\r\nv1.27.0 - v1.27.1\r\nv1.26.0 - v1.26.4\r\nv1.25.0 - v1.25.9\r\n\u003c= v1.24.13\r\n\r\n**Fixed Versions**\r\nv1.27.2\r\nv1.26.5\r\nv1.25.10\r\nv1.24.14\r\n\r\n### Anything else we need to know?\r\n\r\nHow do I remediate this vulnerability?\r\nTo remediate this vulnerability you should upgrade your Kubelet to one of the fixed versions listed above.\r\n\r\nAcknowledgements\r\nThis vulnerability was reported by Tim Allclair, and fixed by Craig Ingram.","date_published":"2023-06-15T14:42:32Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-2431","id":"CVE-2023-2431","status":"fixed","summary":"Bypass of seccomp profile enforcement ","url":"https://github.com/kubernetes/kubernetes/issues/118690"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-2728","issue_number":118640},"content_text":"### CVE-2023-2727: Bypassing policies imposed by the ImagePolicyWebhook admission plugin\r\nCVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:N)\r\n\r\nA security issue was discovered in Kubernetes where users may be able to launch containers using images that are restricted by ImagePolicyWebhook when using ephemeral containers. Kubernetes clusters are only affected if the ImagePolicyWebhook admission plugin is used together with ephemeral containers.\r\n\r\n### Am I vulnerable?\r\nClusters are impacted by this vulnerability if all of the following are true:\r\n\r\n1. The ImagePolicyWebhook admission plugin is used to restrict use of certain images\r\n2. Pods are using ephemeral containers.\r\n\r\n### Affected Versions\r\n\r\n- kube-apiserver v1.27.0 - v1.27.2\r\n- kube-apiserver v1.26.0 - v1.26.5\r\n- kube-apiserver v1.25.0 - v1.25.10\r\n- kube-apiserver \u003c= v1.24.14\r\n\r\n### How do I mitigate this vulnerability?\r\nThis issue can be mitigated by applying the patch provided for the kube-apiserver component. This patch prevents ephemeral containers from using an image that is restricted by ImagePolicyWebhook. \r\n\r\nNote: Validation webhooks (such as [Gatekeeper](https://open-policy-agent.github.io/gatekeeper-library/website/validation/allowedrepos/) and [Kyverno](https://kyverno.io/policies/other/allowed-image-repos/allowed-image-repos/)) can also be used to enforce the same restrictions.\r\n\r\n### Fixed Versions\r\n\r\n- kube-apiserver v1.27.3\r\n- kube-apiserver v1.26.6\r\n- kube-apiserver v1.25.11\r\n- kube-apiserver v1.24.15\r\n\r\n### Detection\r\nPod update requests using an ephemeral container with an image that should have been restricted by an ImagePolicyWebhook will be captured in API audit logs. 
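\r\n\r\nAs a hedged starting point, a jq query along these lines lists pods that have ephemeral containers together with their images, which you can then compare against the images your ImagePolicyWebhook policy restricts:\r\n\r\n```\r\nkubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.ephemeralContainers != null) | {name: .metadata.name, namespace: .metadata.namespace, images: [.spec.ephemeralContainers[].image]}'\r\n```\r\n\r\n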
You can also use `kubectl get pods` to find active pods with ephemeral containers running an image that should have been restricted in your cluster.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Stanislav Láznička, and fixed by Rita Zhang.\r\n\r\n### CVE-2023-2728: Bypassing enforce mountable secrets policy imposed by the ServiceAccount admission plugin\r\nCVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:N)\r\n\r\nA security issue was discovered in Kubernetes where users may be able to launch containers that bypass the mountable secrets policy enforced by the ServiceAccount admission plugin when using ephemeral containers. The policy ensures pods running with a service account may only reference secrets specified in the service account's secrets field. Kubernetes clusters are only affected if the ServiceAccount admission plugin and the `kubernetes.io/enforce-mountable-secrets` annotation are used together with ephemeral containers.\r\n\r\n### Am I vulnerable?\r\nClusters are impacted by this vulnerability if all of the following are true:\r\n\r\n1. The ServiceAccount admission plugin is used. Most clusters should have this on by default as recommended in [https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#serviceaccount](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#serviceaccount)\r\n2. The `kubernetes.io/enforce-mountable-secrets` annotation is used by a service account. This annotation is not added by default.\r\n3. Pods are using ephemeral containers.\r\n\r\n### Affected Versions\r\n\r\n- kube-apiserver v1.27.0 - v1.27.2\r\n- kube-apiserver v1.26.0 - v1.26.5\r\n- kube-apiserver v1.25.0 - v1.25.10\r\n- kube-apiserver \u003c= v1.24.14\r\n\r\n### How do I mitigate this vulnerability?\r\nThis issue can be mitigated by applying the patch provided for the kube-apiserver component. The patch prevents ephemeral containers from bypassing the mountable secrets policy enforced by the ServiceAccount admission plugin.\r\n\r\n### Fixed Versions\r\n- kube-apiserver v1.27.3\r\n- kube-apiserver v1.26.6\r\n- kube-apiserver v1.25.11\r\n- kube-apiserver v1.24.15\r\n\r\n### Detection\r\nPod update requests using an ephemeral container that exploits this vulnerability to reference an unintended secret will be captured in API audit logs. 
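\r\n\r\nAs a hedged sketch (it only covers secrets referenced via the envFrom field; ephemeral containers can also reference secrets through individual env entries), you can list the envFrom secret references made by ephemeral containers for review:\r\n\r\n```\r\nkubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.ephemeralContainers != null) | {name: .metadata.name, namespace: .metadata.namespace, secretRefs: [.spec.ephemeralContainers[].envFrom[]?.secretRef.name | select(. != null)]}'\r\n```\r\n\r\n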
You can also use `kubectl get pods` to find active pods with ephemeral containers running with a secret that is not referenced by the service account in your cluster.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Rita Zhang, and fixed by Rita Zhang.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)\r\n\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig auth\r\n/area apiserver\r\n","date_published":"2023-06-13T14:42:06Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-2728","id":"CVE-2023-2728","status":"fixed","summary":"Bypassing policies imposed by the ImagePolicyWebhook and bypassing mountable secrets policy imposed by the ServiceAccount admission plugin","url":"https://github.com/kubernetes/kubernetes/issues/118640"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-2727","issue_number":118640},"content_text":"### CVE-2023-2727: Bypassing policies imposed by the ImagePolicyWebhook admission plugin\r\nCVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:N)\r\n\r\nA security issue was discovered in Kubernetes where users may be able to launch containers using images that are restricted by ImagePolicyWebhook when using ephemeral containers. Kubernetes clusters are only affected if the ImagePolicyWebhook admission plugin is used together with ephemeral containers.\r\n\r\n### Am I vulnerable?\r\nClusters are impacted by this vulnerability if all of the following are true:\r\n\r\n1. The ImagePolicyWebhook admission plugin is used to restrict use of certain images\r\n2. Pods are using ephemeral containers.\r\n\r\n### Affected Versions\r\n\r\n- kube-apiserver v1.27.0 - v1.27.2\r\n- kube-apiserver v1.26.0 - v1.26.5\r\n- kube-apiserver v1.25.0 - v1.25.10\r\n- kube-apiserver \u003c= v1.24.14\r\n\r\n### How do I mitigate this vulnerability?\r\nThis issue can be mitigated by applying the patch provided for the kube-apiserver component. This patch prevents ephemeral containers from using an image that is restricted by ImagePolicyWebhook. \r\n\r\nNote: Validation webhooks (such as [Gatekeeper](https://open-policy-agent.github.io/gatekeeper-library/website/validation/allowedrepos/) and [Kyverno](https://kyverno.io/policies/other/allowed-image-repos/allowed-image-repos/)) can also be used to enforce the same restrictions.\r\n\r\n### Fixed Versions\r\n\r\n- kube-apiserver v1.27.3\r\n- kube-apiserver v1.26.6\r\n- kube-apiserver v1.25.11\r\n- kube-apiserver v1.24.15\r\n\r\n### Detection\r\nPod update requests using an ephemeral container with an image that should have been restricted by an ImagePolicyWebhook will be captured in API audit logs. 
You can also use `kubectl get pods` to find active pods with ephemeral containers running an image that should have been restricted in your cluster.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Stanislav Láznička, and fixed by Rita Zhang.\r\n\r\n### CVE-2023-2728: Bypassing enforce mountable secrets policy imposed by the ServiceAccount admission plugin\r\nCVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:N)\r\n\r\nA security issue was discovered in Kubernetes where users may be able to launch containers that bypass the mountable secrets policy enforced by the ServiceAccount admission plugin when using ephemeral containers. The policy ensures pods running with a service account may only reference secrets specified in the service account's secrets field. Kubernetes clusters are only affected if the ServiceAccount admission plugin and the `kubernetes.io/enforce-mountable-secrets` annotation are used together with ephemeral containers.\r\n\r\n### Am I vulnerable?\r\nClusters are impacted by this vulnerability if all of the following are true:\r\n\r\n1. The ServiceAccount admission plugin is used. Most clusters should have this on by default as recommended in [https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#serviceaccount](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#serviceaccount)\r\n2. The `kubernetes.io/enforce-mountable-secrets` annotation is used by a service account. This annotation is not added by default.\r\n3. Pods are using ephemeral containers.\r\n\r\n### Affected Versions\r\n\r\n- kube-apiserver v1.27.0 - v1.27.2\r\n- kube-apiserver v1.26.0 - v1.26.5\r\n- kube-apiserver v1.25.0 - v1.25.10\r\n- kube-apiserver \u003c= v1.24.14\r\n\r\n### How do I mitigate this vulnerability?\r\nThis issue can be mitigated by applying the patch provided for the kube-apiserver component. The patch prevents ephemeral containers from bypassing the mountable secrets policy enforced by the ServiceAccount admission plugin.\r\n\r\n### Fixed Versions\r\n- kube-apiserver v1.27.3\r\n- kube-apiserver v1.26.6\r\n- kube-apiserver v1.25.11\r\n- kube-apiserver v1.24.15\r\n\r\n### Detection\r\nPod update requests using an ephemeral container that exploits this vulnerability to reference an unintended secret will be captured in API audit logs. 
You can also use `kubectl get pods` to find active pods with ephemeral containers running with a secret that is not referenced by the service account in your cluster.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Rita Zhang, and fixed by Rita Zhang.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)\r\n\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig auth\r\n/area apiserver\r\n","date_published":"2023-06-13T14:42:06Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-2727","id":"CVE-2023-2727","status":"fixed","summary":"Bypassing policies imposed by the ImagePolicyWebhook and bypassing mountable secrets policy imposed by the ServiceAccount admission plugin","url":"https://github.com/kubernetes/kubernetes/issues/118640"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2023-2878","issue_number":118419},"content_text":"A security issue was discovered in [secrets-store-csi-driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver) where an actor with access to the driver logs could observe service account tokens. These tokens could then potentially be exchanged with external cloud providers to access secrets stored in cloud vault solutions. Tokens are only logged when [TokenRequests is configured in the CSIDriver object](https://kubernetes-csi.github.io/docs/token-requests.html) and the driver is set to run at log level 2 or greater via the -v flag.\r\n\r\nThis issue has been rated **MEDIUM** [CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:N/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:N/A:N) (6.5), and assigned **CVE-2023-2878**\r\n\r\n**Am I vulnerable?**\r\n\r\nYou may be vulnerable if [TokenRequests is configured in the CSIDriver object](https://kubernetes-csi.github.io/docs/token-requests.html) and the driver is set to run at log level 2 or greater via the -v flag.\r\n\r\nTo check if token requests are configured, run the following command:\r\n\r\n```shell\r\nkubectl get csidriver secrets-store.csi.k8s.io -o jsonpath=\"{.spec.tokenRequests}\"\r\n```\r\n\r\nTo check if tokens are being logged, examine the secrets-store container log:\r\n\r\n```shell\r\nkubectl logs -l app=secrets-store-csi-driver -c secrets-store -f | grep --line-buffered \"csi.storage.k8s.io/serviceAccount.tokens\"\r\n```\r\n\r\n**Affected Versions**\r\n\r\n- secrets-store-csi-driver \u003c 1.3.3\r\n\r\n**How do I mitigate this vulnerability?**\r\n\r\nPrior to upgrading, this vulnerability can be mitigated by running secrets-store-csi-driver at log level 0 or 1 via the -v flag.\r\n\r\n**Fixed Versions**\r\n\r\n- secrets-store-csi-driver \u003e= 1.3.3\r\n\r\nTo upgrade, refer to the documentation: [https://secrets-store-csi-driver.sigs.k8s.io/getting-started/upgrades.html#upgrades](https://secrets-store-csi-driver.sigs.k8s.io/getting-started/upgrades.html#upgrades)\r\n\r\n**Detection**\r\n\r\nExamine cloud provider logs for unexpected token exchanges, as well as unexpected access to cloud vault secrets.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)\r\n\r\n**Acknowledgements**\r\n\r\nThis vulnerability was reported by Tomer Shaiman `@tshaiman` from Microsoft.\r\n\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig 
auth","date_published":"2023-06-02T19:03:54Z","external_url":"https://www.cve.org/cverecord?id=CVE-2023-2878","id":"CVE-2023-2878","status":"fixed","summary":"secrets-store-csi-driver discloses service account tokens in logs","url":"https://github.com/kubernetes/kubernetes/issues/118419"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2022-3294","issue_number":113757},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H)\r\n\r\nA security issue was discovered in Kubernetes where users may have access to secure endpoints in the control plane network. Kubernetes clusters are only affected if an untrusted user can modify Node objects and send proxy requests to them.\r\n\r\nKubernetes supports node proxying, which allows clients of kube-apiserver to access endpoints of a Kubelet to establish connections to Pods, retrieve container logs, and more. While Kubernetes already validates the proxying address for Nodes, a bug in kube-apiserver made it possible to bypass this validation. Bypassing this validation could allow authenticated requests destined for Nodes to be redirected to the API server's private network.\r\n\r\n### Am I vulnerable?\r\n\r\nClusters are affected by this vulnerability if there are endpoints that the kube-apiserver has connectivity to that users should not be able to access. This includes:\r\n\r\n- kube-apiserver is in a separate network from worker nodes\r\n- localhost services\r\n\r\nmTLS services that accept the same client certificate as nodes may be affected. The severity of this issue depends on the privileges \u0026 sensitivity of the exploitable endpoints.\r\n\r\nClusters that configure the egress selector to use a proxy for cluster traffic may not be affected.\r\n\r\n#### Affected Versions\r\n\r\n- Kubernetes kube-apiserver \u003c= v1.25.3\r\n- Kubernetes kube-apiserver \u003c= v1.24.7\r\n- Kubernetes kube-apiserver \u003c= v1.23.13\r\n- Kubernetes kube-apiserver \u003c= v1.22.15\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nUpgrading the **kube-apiserver** to a fixed version mitigates this vulnerability.\r\n\r\nAside from upgrading, configuring an [egress proxy for egress to the cluster network](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/) can mitigate this vulnerability.\r\n\r\n#### Fixed Versions\r\n\r\n- Kubernetes kube-apiserver v1.25.4\r\n- Kubernetes kube-apiserver v1.24.8\r\n- Kubernetes kube-apiserver v1.23.14\r\n- Kubernetes kube-apiserver v1.22.16\r\n\r\n**Fix impact:** In some cases, the fix can break clients that depend on the nodes/proxy subresource, specifically if a kubelet advertises a localhost or link-local address to the Kubernetes control plane.\r\n\r\n### Detection\r\n\r\nNode create \u0026 update requests may be included in the Kubernetes audit log, and can be used to identify requests for IP addresses that should not be permitted. 
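\r\n\r\nFor example, a minimal sketch (assuming JSON-formatted audit logs at the hypothetical path /var/log/kubernetes/audit.log and jq installed) that surfaces Node create, update, and patch requests for review:\r\n\r\n```shell\r\n# Show who created or modified Node objects; review the addresses submitted in those requests\r\njq -r 'select(.objectRef.resource == \"nodes\" and (.verb == \"create\" or .verb == \"update\" or .verb == \"patch\")) | [.requestReceivedTimestamp, .user.username, .verb, .objectRef.name] | @tsv' /var/log/kubernetes/audit.log\r\n```\r\n\r\n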
Node proxy requests may also be included in audit logs.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Yuval Avrahami of Palo Alto Networks.\r\n\r\n\u003c!-- labels --\u003e\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig api-machinery\r\n/area apiserver","date_published":"2022-11-08T21:33:26Z","external_url":"https://www.cve.org/cverecord?id=CVE-2022-3294","id":"CVE-2022-3294","status":"fixed","summary":"Node address isn't always verified when proxying","url":"https://github.com/kubernetes/kubernetes/issues/113757"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2022-3162","issue_number":113756},"content_text":"CVSS Rating: [CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N)\r\n\r\nA security issue was discovered in Kubernetes where users authorized to list or watch one type of namespaced custom resource cluster-wide can read custom resources of a different type in the same API group without authorization.\r\n\r\n### Am I vulnerable?\r\n\r\nClusters are impacted by this vulnerability if all of the following are true:\r\n- There are 2+ CustomResourceDefinitions sharing the same API group\r\n- Users have cluster-wide list or watch authorization on one of those custom resources.\r\n- The same users are not authorized to read another custom resource in the same API group.\r\n\r\n#### Affected Versions\r\n\r\n- Kubernetes kube-apiserver \u003c= v1.25.3\r\n- Kubernetes kube-apiserver \u003c= v1.24.7\r\n- Kubernetes kube-apiserver \u003c= v1.23.13\r\n- Kubernetes kube-apiserver \u003c= v1.22.15\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nUpgrading the kube-apiserver to a fixed version mitigates this vulnerability.\r\n\r\nPrior to upgrading, this vulnerability can be mitigated by avoiding granting cluster-wide list and watch permissions.\r\n\r\n#### Fixed Versions\r\n\r\n- Kubernetes kube-apiserver v1.25.4\r\n- Kubernetes kube-apiserver v1.24.8\r\n- Kubernetes kube-apiserver v1.23.14\r\n- Kubernetes kube-apiserver v1.22.16\r\n\r\n### Detection\r\n\r\nRequests containing `..` in the request path are a likely indicator of exploitation. 
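\r\n\r\nA hedged example of such a search (the audit log path is a placeholder; adjust it to your environment):\r\n\r\n```shell\r\n# Flag audit events whose request URI contains a \"..\" segment\r\njq -r 'select(.requestURI != null and (.requestURI | contains(\"..\"))) | [.requestReceivedTimestamp, .user.username, .requestURI] | @tsv' /var/log/kubernetes/audit.log\r\n```\r\n\r\n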
Request paths may be captured in API audit logs, or in kube-apiserver HTTP logs.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Richard Turnbull of NCC Group as part of the Kubernetes Audit.\r\n\r\n\u003c!-- labels --\u003e\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig api-machinery\r\n/area apiserver\r\n","date_published":"2022-11-08T21:33:07Z","external_url":"https://www.cve.org/cverecord?id=CVE-2022-3162","id":"CVE-2022-3162","status":"fixed","summary":"Unauthorized read of Custom Resources","url":"https://github.com/kubernetes/kubernetes/issues/113756"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2022-3172","issue_number":112513},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:H/PR:H/UI:R/S:C/C:L/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:H/PR:H/UI:R/S:C/C:L/I:L/A:L) (5.1, medium)\r\n\r\nA security issue was discovered in kube-apiserver that allows an aggregated API server to redirect client traffic to any URL. This could lead to the client performing unexpected actions as well as forwarding the client's API server credentials to third parties.\r\n\r\nThis issue has been rated medium and assigned CVE-2022-3172\r\n\r\n### Am I vulnerable?\r\n\r\nAll Kubernetes clusters with the following versions that are running aggregated API servers are impacted. To identify if you have aggregated API servers configured, run the following command:\r\n\r\n```shell\r\nkubectl get apiservices.apiregistration.k8s.io -o=jsonpath='{range .items[?(@.spec.service)]}{.metadata.name}{\"\\n\"}{end}'\r\n```\r\n\r\n#### Affected Versions\r\n\r\n- kube-apiserver v1.25.0\r\n- kube-apiserver v1.24.0 - v1.24.4\r\n- kube-apiserver v1.23.0 - v1.23.10\r\n- kube-apiserver v1.22.0 - v1.22.13\r\n- kube-apiserver \u003c= v1.21.14\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nAside from upgrading, no direct mitigation is available.\r\n\r\nAggregated API servers are a trusted part of the Kubernetes control plane, and configuring them is a privileged administrative operation. Ensure that only trusted cluster administrators are allowed to create or modify `APIService` configuration, and follow security best practices with any aggregated API servers that may be in use.\r\n\r\n#### Fixed Versions\r\n\r\n- kube-apiserver v1.25.1 - fixed by #112330\r\n- kube-apiserver v1.24.5 - fixed by #112331\r\n- kube-apiserver v1.23.11 - fixed by #112358\r\n- kube-apiserver v1.22.14 - fixed by #112359\r\n\r\n**Fix impact:** The fix blocks all 3XX responses from aggregated API servers by default. This may disrupt an aggregated API server that relies on redirects as part of its normal function. If all current and future aggregated API servers are considered trustworthy and redirect functionality is required, set the `--aggregator-reject-forwarding-redirect` Kubernetes API server flag to `false` to restore the previous behavior.\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade\r\n\r\n### Detection\r\n\r\nKubernetes audit log events indicate the HTTP status code sent to the client via the `responseStatus.code` field. 
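\r\n\r\nFor example, a minimal jq sketch (the audit log location is an assumption) that lists requests answered with a 3XX code:\r\n\r\n```shell\r\n# Find audit events recording a redirect response\r\njq -r 'select(.responseStatus.code != null and .responseStatus.code \u003e= 300 and .responseStatus.code \u003c 400) | [.requestReceivedTimestamp, .user.username, .requestURI] | @tsv' /var/log/kubernetes/audit.log\r\n```\r\n\r\n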
These status codes can be used to detect whether an aggregated API server is redirecting clients.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Nicolas Joly \u0026 Weinong Wang @weinong from Microsoft.\r\n\r\nThe issue was fixed and coordinated by Di Jin @jindijamie @enj @liggitt @lavalamp @deads2k and @puerco.\r\n\r\n/area security\r\n/kind bug\r\n/committee security-response\r\n/label official-cve-feed\r\n/sig api-machinery\r\n/area apiserver\r\n/triage accepted","date_published":"2022-09-16T13:14:50Z","external_url":"https://www.cve.org/cverecord?id=CVE-2022-3172","id":"CVE-2022-3172","status":"fixed","summary":"Aggregated API server can cause clients to be redirected (SSRF)","url":"https://github.com/kubernetes/kubernetes/issues/112513"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-25749","issue_number":112192},"content_text":"A security issue was discovered in Kubernetes that could allow Windows workloads to run as `ContainerAdministrator` even when those workloads set the `runAsNonRoot` option to `true`.\r\n\r\nThis issue has been rated low and assigned CVE-2021-25749\r\n\r\n### Am I vulnerable?\r\n\r\nAll Kubernetes clusters with the following versions, running Windows workloads with `runAsNonRoot`, are impacted\r\n\r\n#### Affected Versions\r\n\r\n- kubelet v1.20 - v1.21\r\n- kubelet v1.22.0 - v1.22.13\r\n- kubelet v1.23.0 - v1.23.10\r\n- kubelet v1.24.0 - v1.24.4\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nThere are no known mitigations to this vulnerability.\r\n\r\n#### Fixed Versions\r\n\r\n- kubelet v1.22.14\r\n- kubelet v1.23.11\r\n- kubelet v1.24.5\r\n- kubelet v1.25.0\r\n\r\nTo upgrade, refer to the documentation for core Kubernetes: https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/\r\n\r\n### Detection\r\n\r\nKubernetes Audit logs may indicate if the user name was misspelled to bypass the restriction placed on which user a pod is allowed to run as.\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n\r\n#### Additional Details\r\n\r\nSee the GitHub issue for more details: https://github.com/kubernetes/kubernetes/issues/112192 \r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported and fixed by Mark Rosetti (@marosset)\r\n","date_published":"2022-09-01T21:02:01Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-25749","id":"CVE-2021-25749","status":"fixed","summary":"`runAsNonRoot` logic bypass for Windows containers","url":"https://github.com/kubernetes/kubernetes/issues/112192"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-25748","issue_number":126814},"content_text":"### Issue Details\r\nA security issue was discovered in [ingress-nginx](https://github.com/kubernetes/ingress-nginx) where a user that can create or update ingress objects can use a newline character to bypass the sanitization of the `spec.rules[].http.paths[].path` field of an Ingress object (in the `networking.k8s.io` or `extensions` API group) to obtain the credentials of the ingress-nginx controller. 
In the default configuration, that credential has access to all secrets in the cluster.\r\n\r\nThis issue has been rated High ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L)), and assigned CVE-2021-25748.\r\n\r\n### Affected Components and Configurations\r\nThis bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. You can check this by running `kubectl get po -n ingress-nginx`.\r\n\r\nIf you are running the \"chrooted\" ingress-nginx controller introduced in v1.2.0 (gcr.io/k8s-staging-ingress-nginx/controller-chroot), you are not affected.\r\n\r\nMultitenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue.\r\n\r\n#### Affected Versions\r\n\u003cv1.2.1\r\n\r\n#### Fixed Versions\r\nv1.2.1\r\n\r\n### Mitigation\r\nIf you are unable to roll out the fix, this vulnerability can be mitigated by implementing an admission policy that restricts the `spec.rules[].http.paths[].path` field on the networking.k8s.io/Ingress resource to known safe characters (see the newly added [rules](https://github.com/kubernetes/ingress-nginx/blame/main/internal/ingress/inspector/rules.go), or the suggested value for [annotation-value-word-blocklist](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist)).\r\n\r\n### Detection\r\nIf you find evidence that this vulnerability has been exploited, please contact [[email protected]](mailto:[email protected])\r\n\r\n### Additional Details\r\nSee ingress-nginx Issue [#8686](https://github.com/kubernetes/kubernetes/issues/126814) for more details.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Gafnit Amiga.\r\n\r\nThank You,\r\nCJ Cullen on behalf of the Kubernetes Security Response Committee","date_published":"2022-06-10T16:01:41Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-25748","id":"CVE-2021-25748","status":"fixed","summary":"Ingress-nginx `path` sanitization can be bypassed with newline character","url":"https://github.com/kubernetes/kubernetes/issues/126814"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-25746","issue_number":126813},"content_text":"### Issue Details\r\nA security issue was discovered in [ingress-nginx](https://github.com/kubernetes/ingress-nginx) where a user that can create or update ingress objects can use `.metadata.annotations` in an Ingress object (in the `networking.k8s.io` or `extensions` API group) to obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster.\r\n\r\nThis issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L)), and assigned **CVE-2021-25746**.\r\n\r\n### Affected Components and Configurations\r\nThis bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. 
You can check this by running `kubectl get po -n ingress-nginx`.\r\n\r\nMultitenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue.\r\n\r\n#### Affected Versions\r\n- \u003cv1.2.0\r\n\r\n#### Fixed Versions\r\n- v1.2.0-beta.0\r\n- v1.2.0\r\n\r\n### Mitigation\r\nIf you are unable to roll out the fix, this vulnerability can be mitigated by implementing an admission policy that restricts the `metadata.annotations` values to known safe values (see the newly added [rules](https://github.com/kubernetes/ingress-nginx/blame/main/internal/ingress/inspector/rules.go), or the suggested value for [annotation-value-word-blocklist](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist)).\r\n\r\n### Detection\r\nIf you find evidence that this vulnerability has been exploited, please contact [[email protected]](mailto:[email protected])\r\n\r\n### Additional Details\r\nSee ingress-nginx Issue [#8503](https://github.com/kubernetes/kubernetes/issues/126813) for more details.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Anthony Weems, and separately by jeffrey\u0026oliver.\r\n\r\nThank You,\r\nCJ Cullen on behalf of the Kubernetes Security Response Committee","date_published":"2022-04-22T16:18:27Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-25746","id":"CVE-2021-25746","status":"fixed","summary":"Ingress-nginx directive injection via annotations","url":"https://github.com/kubernetes/kubernetes/issues/126813"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-25745","issue_number":126812},"content_text":"### Issue Details\r\nA security issue was discovered in [ingress-nginx](https://github.com/kubernetes/ingress-nginx) where a user that can create or update ingress objects can use the `spec.rules[].http.paths[].path` field of an Ingress object (in the `networking.k8s.io` or `extensions` API group) to obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster.\r\n\r\nThis issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L)), and assigned **CVE-2021-25745**.\r\n\r\n### Affected Components and Configurations\r\nThis bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. 
You can check this by running `kubectl get po -n ingress-nginx`.\r\n\r\nMultitenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue.\r\n\r\n#### Affected Versions\r\n- \u003cv1.2.0\r\n\r\n#### Fixed Versions\r\n- v1.2.0-beta.0\r\n- v1.2.0\r\n\r\n### Mitigation\r\nIf you are unable to roll out the fix, this vulnerability can be mitigated by implementing an admission policy that restricts the `spec.rules[].http.paths[].path` field on the `networking.k8s.io/Ingress` resource to known safe characters (see the newly added [rules](https://github.com/kubernetes/ingress-nginx/blame/main/internal/ingress/inspector/rules.go), or the suggested value for [annotation-value-word-blocklist](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist)).\r\n\r\n### Detection\r\nIf you find evidence that this vulnerability has been exploited, please contact [[email protected]](mailto:[email protected])\r\n\r\n### Additional Details\r\nSee ingress-nginx Issue [#8502](https://github.com/kubernetes/kubernetes/issues/126812) for more details.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Gafnit Amiga.\r\n\r\nThank You,\r\nCJ Cullen on behalf of the Kubernetes Security Response Committee","date_published":"2022-04-22T16:18:21Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-25745","id":"CVE-2021-25745","status":"fixed","summary":"Ingress-nginx `path` can be pointed to service account token file","url":"https://github.com/kubernetes/kubernetes/issues/126812"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-25742","issue_number":126811},"content_text":"### Issue Details\r\nA security issue was discovered in ingress-nginx where a user that can create or update ingress objects can use the custom snippets feature to obtain all secrets in the cluster.\r\n\r\nThis issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L)), and assigned **CVE-2021-25742**.\r\n\r\n### Affected Components and Configurations\r\nThis bug affects ingress-nginx.\r\n\r\nMultitenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue.\r\n\r\n#### Affected Versions with no mitigation\r\n\r\n- v1.0.0\r\n- \u003c= v0.49.0\r\n\r\n#### Versions allowing mitigation\r\nThis issue cannot be fixed solely by upgrading ingress-nginx. It can be mitigated in the following versions:\r\n- v1.0.1\r\n- v0.49.1\r\n\r\n### Mitigation\r\nTo mitigate this vulnerability:\r\n1. Upgrade to a version that allows mitigation, (\u003e= v0.49.1 or \u003e= v1.0.1)\r\n2. 
Set [allow-snippet-annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#allow-snippet-annotations) to false in your ingress-nginx ConfigMap based on how you deploy ingress-nginx:\r\n\r\n **Static Deploy Files** \r\n Edit the ConfigMap for ingress-nginx **after** deployment:\r\n ```\r\n kubectl edit configmap -n ingress-nginx ingress-nginx-controller\r\n ```\r\n Add directive:\r\n ```\r\n data:\r\n allow-snippet-annotations: \"false\"\r\n ```\r\n More information on the ConfigMap [here](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/) \r\n\r\n **Deploying Via Helm**\r\n Set `controller.allowSnippetAnnotations` to `false` in the values.yaml or add the directive to the helm deploy:\r\n ```\r\n helm install [RELEASE_NAME] --set controller.allowSnippetAnnotations=false ingress-nginx/ingress-nginx\r\n ```\r\n\r\n [https://github.com/kubernetes/ingress-nginx/blob/controller-v1.0.1/charts/ingress-nginx/values.yaml#L76](https://github.com/kubernetes/ingress-nginx/blob/controller-v1.0.1/charts/ingress-nginx/values.yaml#L76)\r\n\r\n### Detection\r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n\r\n### Additional Details\r\nSee ingress-nginx Issue kubernetes/kubernetes#126811 for more details.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Mitch Hulscher.\r\n\r\nThank You,\r\nCJ Cullen on behalf of the Kubernetes Security Response Committee\r\n","date_published":"2021-10-21T16:08:21Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-25742","id":"CVE-2021-25742","status":"fixed","summary":"Ingress-nginx custom snippets allows retrieval of ingress-nginx serviceaccount token and secrets across all namespaces","url":"https://github.com/kubernetes/kubernetes/issues/126811"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-25741","issue_number":104980},"content_text":"A security issue was discovered in Kubernetes where a user may be able to create a container with subpath volume mounts to access files \u0026 directories outside of the volume, including on the host filesystem.\r\n\r\nThis issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H)), and assigned **CVE-2021-25741**.\r\n\r\n### Affected Components and Configurations\r\nThis bug affects kubelet.\r\n\r\nEnvironments where cluster administrators have restricted the ability to create hostPath mounts are the most seriously affected. Exploitation allows hostPath-like access without use of the hostPath feature, thus bypassing the restriction. 
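\r\n\r\nBecause exploitation requires a subpath volume mount, a hedged inventory of workloads using the feature can help scope exposure (the jq filter is an illustrative sketch that only inspects regular containers, and assumes jq is installed):\r\n\r\n```shell\r\n# List pods whose containers mount a volume with subPath (init containers are not covered here)\r\nkubectl get pods --all-namespaces -o json | jq '.items[] | select([.spec.containers[].volumeMounts[]?.subPath] | any) | {name: .metadata.name, namespace: .metadata.namespace}'\r\n```\r\n\r\n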
In a default Kubernetes environment, exploitation could be used to obscure misuse of already-granted privileges.\r\n\r\n#### Affected Versions\r\nv1.22.0 - v1.22.1\r\n\r\nv1.21.0 - v1.21.4\r\n\r\nv1.20.0 - v1.20.10\r\n\r\n\u003c= v1.19.14\r\n\r\n#### Fixed Versions\r\nThis issue is fixed in the following versions:\r\n\r\nv1.22.2\r\n\r\nv1.21.5\r\n\r\nv1.20.11\r\n\r\nv1.19.15\r\n\r\n### Mitigation\r\nTo mitigate this vulnerability without upgrading kubelet, you can disable the VolumeSubpath feature gate on kubelet and kube-apiserver, and remove any existing Pods making use of the feature.\r\n\r\nYou can also use admission control to prevent less-trusted users from running containers as root to reduce the impact of successful exploitation.\r\n\r\n### Detection\r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n\r\n### Additional Details\r\nSee Kubernetes Issue #104980 for more details.\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by Fabricio Voznika and Mark Wolters of Google.\r\n\r\nThanks as well to Ian Coldwater, Duffie Cooley, Brad Geesaman, and Rory McCune for the thorough security research that led to the discovery of this vulnerability.","date_published":"2021-09-13T20:58:56Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-25741","id":"CVE-2021-25741","status":"fixed","summary":"Symlink Exchange Can Allow Host Filesystem Access","url":"https://github.com/kubernetes/kubernetes/issues/104980"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-25737","issue_number":102106},"content_text":"#### Issue Details\r\nA security issue was discovered in Kubernetes where a user may be able to redirect pod traffic to private networks on a Node. Kubernetes already prevents creation of Endpoint IPs in the localhost or link-local range, but the same validation was not performed on EndpointSlice IPs. \r\nThis issue has been rated Low ([CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N)), and assigned CVE-2021-25737.\r\n\r\n#### Affected Component\r\nkube-apiserver\r\n\r\n#### Affected Versions\r\nv1.21.0\r\nv1.20.0 - v1.20.6\r\nv1.19.0 - v1.19.10\r\nv1.16.0 - v1.18.18 (Note: EndpointSlices were not enabled by default in 1.16-1.18)\r\n#### Fixed Versions\r\nThis issue is fixed in the following versions:\r\nv1.21.1\r\nv1.20.7\r\nv1.19.11\r\nv1.18.19\r\n#### Mitigation\r\nTo mitigate this vulnerability without upgrading kube-apiserver, you can create a validating admission webhook that prevents EndpointSlices with endpoint addresses in the 127.0.0.0/8 and 169.254.0.0/16 ranges. 
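\r\n\r\nAs a quick check of the predicate such a webhook would enforce, a hedged jq sketch over existing EndpointSlices (the prefix match is an approximation of the two CIDR ranges):\r\n\r\n```shell\r\n# List EndpointSlices that contain loopback or link-local endpoint addresses\r\nkubectl get endpointslices --all-namespaces -o json | jq '.items[] | select([.endpoints[]?.addresses[]? | startswith(\"127.\") or startswith(\"169.254.\")] | any) | {name: .metadata.name, namespace: .metadata.namespace}'\r\n```\r\n\r\n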
If you have an existing admission policy mechanism (like OPA Gatekeeper), you can create a policy that enforces this restriction.\r\n#### Detection\r\nTo detect whether this vulnerability has been exploited, you can list EndpointSlices and check for endpoint addresses in the 127.0.0.0/8 and 169.254.0.0/16 ranges.\r\n \r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n#### Acknowledgements\r\nThis vulnerability was reported by John Howard of Google.\r\n","date_published":"2021-05-18T19:14:27Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-25737","id":"CVE-2021-25737","status":"fixed","summary":"Holes in EndpointSlice Validation Enable Host Network Hijack","url":"https://github.com/kubernetes/kubernetes/issues/102106"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-3121","issue_number":101435},"content_text":"### Issue Details\r\n\r\nA security issue was discovered in code generated by the gogo protobuf compiler used by Kubernetes. The gogo protobuf compiler issue has been assigned CVE-2021-3121 and is also known as the \"skippy peanut butter bug\".\r\n\r\nA program which uses affected code to handle a malicious protobuf message could panic.\r\nThe Kubernetes Product Security Committee has tested the API server using a malicious message, and we believe that **there is no security impact to Kubernetes**. When an authenticated user sent the malicious message to the API server, a panic occurred. However, the panic handler recovered and the API server continued without interruption (except to the malicious requestor, who received no response).\r\n\r\nGenerated protobuf files are part of several Kubernetes repositories, and any downstream projects which vendor in these repos should evaluate whether there is any security impact to their project.\r\n\r\n### Affected Components and Configurations\r\n\r\nAny golang components which use handler code created by the gogo protobuf compiler, which accept protobuf messages and do not gracefully handle panics in the unmarshalling codepath may be affected.\r\n\r\nThe following Linux command can be used to detect affected generated code within a codebase:\r\n\r\n```\r\nfind . -name '*.pb.go' | \\\r\nxargs -r grep -l 'if skippy \u003c 0' | \\\r\nxargs -r awk -e '/if skippy \u003c 0/ {a=4} /if \\(iNdEx \\+ skippy\\) \u003e postIndex/ \u0026\u0026' \\\r\n \t -e 'a\u003e0 {print FILENAME \" \" FNR \": \" $0 \" // vulnerable to CVE-2021-3121\"} {a--}'\r\n```\r\n\r\nAlthough we do not believe there is any security impact to Kubernetes, we have updated all generated protobufs out of an abundance of caution and as a courtesy to any downstream consumers who may be affected. The following PRs addressed this issue in Kubernetes:\r\n\r\nMaster branch: #98477, #101306\r\n1.21 branch: #98477 (in 1.21.0), #101325 (in 1.21.1)\r\n1.20 branch: #100501 (in 1.20.6), #101326 (in 1.20.7)\r\n1.19 branch: #100515 (in 1.19.10), #101327 (in 1.19.11)\r\n1.18 branch: #100514 (in 1.18.18), #101335 (in 1.18.19)\r\n\r\nFor other generated protobuf go handlers, the issue can be remediated by upgrading the gogo protobuf compiler to a fixed version (v1.3.2 or later), then regenerating affected protobuf code with the updated protobuf compiler. 
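\r\n\r\nFor example, a hedged sketch of that remediation for a Go-modules project (requires Go 1.16+; the .proto path is a placeholder):\r\n\r\n```shell\r\n# Install the fixed gogo compiler plugin, then regenerate the affected code\r\ngo install github.com/gogo/protobuf/protoc-gen-gogo@v1.3.2\r\nprotoc --gogo_out=. path/to/your.proto\r\n```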
\r\n\r\n### Mitigations\r\n\r\nDisabling support for protobuf messages may be one possible mitigation for any affected product.\r\n\r\nAlso, graceful panic handling in message handlers mitigates the bug.\r\n\r\n### Detection\r\n\r\nIf you use generated protobuf code in a product and you observe a process exiting with messages similar to the following, a malicious user may be exploiting this defect:\r\n\r\n```\r\npanic: runtime error: index out of range [-9223372036854775804]\r\n \r\ngoroutine 1 [running]:\r\nv1.(*MessageName).Unmarshal(0xc000057ef8, 0xc0000161a0, 0xa, 0x10, 0xc000057ec8, 0x1)\r\n .../protofile.pb.go:250 +0xb86\r\n```\r\n\r\n### References\r\n- CVE-2021-3121: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121\r\n- GoGo Protobuf v1.3.2: https://github.com/gogo/protobuf/releases/tag/v1.3.2\r\n","date_published":"2021-04-23T18:07:32Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-3121","id":"CVE-2021-3121","status":"fixed","summary":"Processes may panic upon receipt of malicious protobuf messages","url":"https://github.com/kubernetes/kubernetes/issues/101435"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2021-25735","issue_number":100096},"content_text":"A security issue was discovered in kube-apiserver that could allow node updates to bypass a Validating Admission Webhook. You are only affected by this vulnerability if you run a Validating Admission Webhook for Nodes that denies admission based at least partially on the old state of the Node object.\r\n\r\nThis issue has been rated **Medium** ([CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:H/A:H](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:H/A:H)), and assigned **CVE-2021-25735**.\r\n\r\n**Note:** This only impacts validating admission plugins that rely on old values in certain fields, and does not impact calls from kubelets that go through the built-in NodeRestriction admission plugin.\r\n\r\n#### Affected Versions\r\n\r\n- kube-apiserver v1.20.0 - v1.20.5\r\n- kube-apiserver v1.19.0 - v1.19.9\r\n- kube-apiserver \u003c= v1.18.17\r\n\r\n#### Fixed Versions\r\n\r\nThis issue is fixed in the following versions:\r\n- kube-apiserver v1.21.0 - Fixed by https://github.com/kubernetes/kubernetes/pull/99946\r\n- kube-apiserver v1.20.6 - Fixed by https://github.com/kubernetes/kubernetes/pull/100315\r\n- kube-apiserver v1.19.10 - Fixed by https://github.com/kubernetes/kubernetes/pull/100316\r\n- kube-apiserver v1.18.18 - Fixed by https://github.com/kubernetes/kubernetes/pull/100317\r\n\r\n#### Detection\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Rogerio Bastos \u0026 Ari Lima from RedHat\r\n","date_published":"2021-03-10T18:18:01Z","external_url":"https://www.cve.org/cverecord?id=CVE-2021-25735","id":"CVE-2021-25735","status":"fixed","summary":"Validating Admission Webhook does not observe some previous fields","url":"https://github.com/kubernetes/kubernetes/issues/100096"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8554","issue_number":97076},"content_text":"CVSS Rating: **Medium** ([CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L))\r\n \r\nThis issue affects multitenant clusters. 
If a potential attacker can already create or edit services and pods, then they may be able to intercept traffic from other pods (or nodes) in the cluster.\r\n \r\nAn attacker that is able to create a ClusterIP service and set the spec.externalIPs field can intercept traffic to that IP. An attacker that is able to patch the status (which is considered a privileged operation and should not typically be granted to users) of a LoadBalancer service can set the status.loadBalancer.ingress.ip to similar effect.\r\nThis issue is a design flaw that cannot be mitigated without user-facing changes.\r\n### Affected Components and Configurations\r\n\r\nAll Kubernetes versions are affected. Multi-tenant clusters that grant tenants the ability to create and update services and pods are most vulnerable.\r\n### Mitigations\r\n\r\nThere is no patch for this issue, and it can currently only be mitigated by restricting access to the vulnerable features. Because an in-tree fix would require a breaking change, we will open a conversation about a longer-term fix or built-in mitigation after the embargo is lifted.\r\n\r\nTo restrict the use of external IPs we are providing an admission webhook container: k8s.gcr.io/multitenancy/externalip-webhook:v1.0.0. The source code and deployment instructions are published at https://github.com/kubernetes-sigs/externalip-webhook.\r\n\r\nAlternatively, external IPs can be restricted using [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper). A sample ConstraintTemplate and Constraint can be found here: https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/general/externalip.\r\n\r\nNo mitigations are provided for LoadBalancer IPs since we do not recommend granting users *patch service/status* permission. If LoadBalancer IP restrictions are required, the approach for the external IP mitigations can be copied.\r\n### Detection\r\n\r\nExternalIP services are not widely used, so we recommend manually auditing any external IP usage. Users should not patch service status, so audit events for patch service status requests authenticated to a user may be suspicious.\r\n \r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Etienne Champetier (@champtar) of Anevia.\r\n \r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig network\r\n","date_published":"2020-12-04T20:02:15Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8554","id":"CVE-2020-8554","status":"fixed","summary":"Man in the middle using LoadBalancer or ExternalIPs","url":"https://github.com/kubernetes/kubernetes/issues/97076"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8566","issue_number":95624},"content_text":"CVSS Rating: 4.7 CVSS:3.0/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:N/A:N (Medium)\r\n\r\nIn Kubernetes clusters using Ceph RBD as a storage provisioner, with a logging level of at least 4, Ceph RBD admin secrets can be written to logs. 
This occurs in kube-controller-manager's logs during provisioning of Ceph RBD persistent volume claims.\r\n\r\n### Am I vulnerable?\r\nIf Ceph RBD volumes are in use and kube-controller-manager is using a log level of at least 4.\r\n\r\n#### Affected Versions\r\nkubernetes v1.19.0 - v1.19.2\r\nkubernetes v1.18.0 - v1.18.9\r\nkubernetes v1.17.0 - v1.17.12\r\n\r\n### How do I mitigate this vulnerability?\r\nDo not enable verbose logging in production, and limit access to logs.\r\n\r\n#### Fixed Versions\r\nv1.19.3\r\nv1.18.10\r\nv1.17.13\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by: Kaizhe Huang (derek0405)\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n","date_published":"2020-10-15T22:07:53Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8566","id":"CVE-2020-8566","status":"fixed","summary":"Ceph RBD adminSecrets exposed in logs when loglevel \u003e= 4","url":"https://github.com/kubernetes/kubernetes/issues/95624"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8565","issue_number":95623},"content_text":"CVSS Rating: 4.7 CVSS:3.0/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:N/A:N (Medium)\r\n\r\nIn Kubernetes, if the logging level is set to at least 9, authorization and bearer tokens will be written to log files. This can occur both in API server logs and client tool output like `kubectl`.\r\n\r\n### Am I vulnerable?\r\nIf kube-apiserver is using a log level of at least 9.\r\n\r\n#### Affected Versions\r\nkubernetes v1.19.0 - v1.19.5\r\nkubernetes v1.18.0 - v1.18.13\r\nkubernetes v1.17.0 - v1.17.15\r\n\r\n### How do I mitigate this vulnerability?\r\nDo not enable verbose logging in production, and limit access to logs.\r\n\r\n#### Fixed Versions\r\nkubernetes v1.20.0\r\nkubernetes v1.19.6 \r\nkubernetes v1.18.14 \r\nkubernetes v1.17.16\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by: Patrick Rhomberg (purelyapplied)\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n","date_published":"2020-10-15T22:05:32Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8565","id":"CVE-2020-8565","status":"fixed","summary":"Incomplete fix for CVE-2019-11250 allows for token leak in logs when logLevel \u003e= 9","url":"https://github.com/kubernetes/kubernetes/issues/95623"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8564","issue_number":95622},"content_text":"CVSS Rating: 4.7 CVSS:3.0/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:N/A:N (Medium)\r\n\r\nIn Kubernetes clusters using a logging level of at least 4, processing a malformed docker config file will result in the contents of the docker config file being leaked, which can include pull secrets or other registry credentials.\r\n\r\n### Am I vulnerable?\r\nIf kubernetes.io/dockerconfigjson type secrets are used, and a log level of 4 or higher is used. 
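\r\n\r\nTo see whether such secrets exist at all, a minimal check (secret `type` is a supported field selector):\r\n\r\n```shell\r\n# List docker config secrets across all namespaces\r\nkubectl get secrets --all-namespaces --field-selector type=kubernetes.io/dockerconfigjson\r\n```\r\n\r\n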
Third party tools using k8s.io/kubernetes/pkg/credentialprovider to read docker config files may also be vulnerable.\r\n\r\n#### Affected Versions\r\nkubernetes v1.19.0 - v1.19.2\r\nkubernetes v1.18.0 - v1.18.9\r\nkubernetes v1.17.0 - v1.17.12\r\n\r\n### How do I mitigate this vulnerability?\r\nDo not enable verbose logging in production, and limit access to logs.\r\n\r\n#### Fixed Versions\r\nv1.19.3\r\nv1.18.10\r\nv1.17.13\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by: Nikolaos Moraitis (Red Hat)\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n","date_published":"2020-10-15T22:03:19Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8564","id":"CVE-2020-8564","status":"fixed","summary":"Docker config secrets leaked when file is malformed and log level \u003e= 4","url":"https://github.com/kubernetes/kubernetes/issues/95622"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8563","issue_number":95621},"content_text":"CVSS Rating: 5.6 CVSS:3.0/AV:L/AC:H/PR:L/UI:N/S:C/C:H/I:N/A:N (Medium)\r\n\r\nIn Kubernetes clusters using VSphere as a cloud provider, with a logging level set to 4 or above, VSphere cloud credentials will be leaked in the cloud controller manager's log.\r\n\r\n### Am I vulnerable?\r\nIf you are using VSphere as a cloud provider, have verbose logging enabled, and an attacker can access cluster logs, then you may be vulnerable to this.\r\n\r\n#### Affected Versions\r\nkube-controller-manager v1.19.0 - v1.19.2\r\n\r\n### How do I mitigate this vulnerability?\r\nDo not enable verbose logging in production, and limit access to cluster logs.\r\n\r\n#### Fixed Versions\r\nv1.19.3\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n### Acknowledgements\r\nThis vulnerability was reported by: Kaizhe Huang (derek0405)\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n","date_published":"2020-10-15T22:00:44Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8563","id":"CVE-2020-8563","status":"fixed","summary":"Secret leaks in kube-controller-manager when using vSphere provider","url":"https://github.com/kubernetes/kubernetes/issues/95621"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8557","issue_number":93032},"content_text":"CVSS Rating: Medium (5.5) [CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H/CR:H/IR:H/AR:M](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H/CR:H/IR:H/AR:M)\r\n\r\nThe `/etc/hosts` file mounted in a pod by kubelet is not included by the kubelet eviction manager when calculating ephemeral storage usage by a pod. If a pod writes a large amount of data to the `/etc/hosts` file, it could fill the storage space of the node and cause the node to fail.\r\n\r\n### Am I vulnerable?\r\n\r\nAny clusters allowing pods with sufficient privileges to write to their own `/etc/hosts` files are affected. 
This includes containers running with `CAP_DAC_OVERRIDE` in their capabilities bounding set (true by default) and either UID 0 (root) or a security context with `allowPrivilegeEscalation: true` (true by default).\r\n\r\n#### Affected Versions\r\n\r\n- kubelet v1.18.0-1.18.5\r\n- kubelet v1.17.0-1.17.8\r\n- kubelet \u003c v1.16.13\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nPrior to upgrading, this vulnerability can be mitigated by using PodSecurityPolicies or other admission webhooks to force containers to drop CAP_DAC_OVERRIDE or to prohibit privilege escalation and running as root, but these measures may break existing workloads that rely upon these privileges to function properly.\r\n\r\n#### Fixed Versions\r\n\r\n- kubelet master - fixed by #92916\r\n- kubelet v1.18.6 - fixed by #92921\r\n- kubelet v1.17.9 - fixed by #92923\r\n- kubelet v1.16.13 - fixed by #92924\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n### Detection\r\n\r\nLarge pod `etc-hosts` files may indicate that a pod is attempting to perform a Denial of Service attack using this bug. A command such as\r\n\r\n```\r\nfind /var/lib/kubelet/pods/*/etc-hosts -size +1M\r\n```\r\n\r\nrun on a node can be used to find abnormally large pod etc-hosts files.\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Kebe Liu of DaoCloud, via the Kubernetes bug bounty program.\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig node\r\n/area kubelet","date_published":"2020-07-13T18:39:08Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8557","id":"CVE-2020-8557","status":"fixed","summary":"Node disk DOS by writing to container /etc/hosts","url":"https://github.com/kubernetes/kubernetes/issues/93032"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8559","issue_number":92914},"content_text":"CVSS Rating: Medium (6.4) [CVSS:3.1/AV:N/AC:H/PR:H/UI:R/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:H/PR:H/UI:R/S:U/C:H/I:H/A:H)\r\n\r\nIf an attacker is able to intercept certain requests to the Kubelet, they can send a redirect response that may be followed by a client using the credentials from the original request. This can lead to compromise of other nodes.\r\n\r\nIf multiple clusters share the same certificate authority trusted by the client, and the same authentication credentials, this vulnerability may allow an attacker to redirect the client to another cluster. 
In this configuration, this vulnerability should be considered **High** severity.\r\n\r\n### Am I vulnerable?\r\n\r\nYou are only affected by this vulnerability if you treat the node as a security boundary, or if clusters share certificate authorities and authentication credentials.\r\n\r\nNote that this vulnerability requires an attacker to first compromise a node through separate means.\r\n\r\n#### Affected Versions\r\n\r\n- kube-apiserver v1.18.0-1.18.5\r\n- kube-apiserver v1.17.0-1.17.8\r\n- kube-apiserver v1.16.0-1.16.12\r\n- all kube-apiserver versions prior to v1.16.0\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nTo mitigate this vulnerability you must upgrade the kube-apiserver to a patched version.\r\n\r\n#### Fixed Versions\r\n\r\n- kube-apiserver master - fixed by https://github.com/kubernetes/kubernetes/pull/92941\r\n- kube-apiserver v1.18.6 - fixed by https://github.com/kubernetes/kubernetes/pull/92969\r\n- kube-apiserver v1.17.9 - fixed by https://github.com/kubernetes/kubernetes/pull/92970\r\n- kube-apiserver v1.16.13 - fixed by https://github.com/kubernetes/kubernetes/pull/92971\r\n\r\n\r\n**Fix impact:** Proxied backends (such as an extension API server) that respond to upgrade requests with a non-101 response code may be broken by this patch.\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n### Detection\r\n\r\nUpgrade requests should never respond with a redirect. If any of the following requests have a response code in the 300-399 range, it may be evidence of exploitation. This information can be found in the Kubernetes audit logs.\r\n\r\n- pods/exec\r\n- pods/attach\r\n- pods/portforward\r\n- any resource: proxy\r\n\r\nIf you find evidence that this vulnerability has been exploited, please contact [email protected]\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Wouter ter Maat of Offensi, via the Kubernetes bug bounty.\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig api-machinery\r\n/area apiserver","date_published":"2020-07-08T17:03:16Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8559","id":"CVE-2020-8559","status":"fixed","summary":"Privilege escalation from compromised node to cluster","url":"https://github.com/kubernetes/kubernetes/issues/92914"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8558","issue_number":92315},"content_text":"CVSS Rating:\r\n\r\nIn typical clusters: medium (5.4) [CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N)\r\n\r\nIn clusters where API server insecure port has not been disabled: high (8.8) [CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)\r\n\r\nA security issue was discovered in `kube-proxy` which allows adjacent hosts to reach TCP and UDP services bound to `127.0.0.1` running on the node or in the node's network namespace. For example, if a cluster administrator runs a TCP service on a node that listens on `127.0.0.1:1234`, because of this bug, that service would be potentially reachable by other hosts on the same LAN as the node, or by containers running on the same node as the service. 
If the example service on port `1234` required no additional authentication (because it assumed that only other localhost processes could reach it), then it could be vulnerable to attacks that make use of this bug.\r\n\r\nThe Kubernetes API Server's default insecure port setting causes the API server to listen on `127.0.0.1:8080` where it will accept requests without authentication. Many Kubernetes installers explicitly disable the API Server's insecure port, but in clusters where it is not disabled, an attacker with access to another system on the same LAN or with control of a container running on the master may be able to reach the API server and execute arbitrary API requests on the cluster. This port is deprecated, and will be removed in Kubernetes v1.20.\r\n\r\n### Am I vulnerable?\r\n\r\nYou may be vulnerable if:\r\n\r\n- You are running a vulnerable version (see below)\r\n- Your cluster nodes run in an environment where untrusted hosts share the same layer 2 domain (i.e. same LAN) as nodes\r\n- Your cluster allows untrusted pods to run containers with `CAP_NET_RAW` (the Kubernetes default is to allow this capability).\r\n- Your nodes (or hostnetwork pods) run any localhost-only services which do not require any further authentication. To list services that are potentially affected, run the following commands on nodes:\r\n  - `lsof +c 15 -P -n -i4TCP@127.0.0.1 -sTCP:LISTEN`\r\n  - `lsof +c 15 -P -n -i4UDP@127.0.0.1`\r\n\r\n On a master node, an lsof entry like this indicates that the API server may be listening with an insecure port:\r\n\r\n```\r\nCOMMAND    PID USER FD  TYPE DEVICE SIZE/OFF NODE NAME\r\nkube-apiserver 123 root 7u IPv4 26799   0t0 TCP 127.0.0.1:8080 (LISTEN)\r\n```\r\n\r\n#### Affected Versions\r\n- kubelet/kube-proxy v1.18.0-1.18.3\r\n- kubelet/kube-proxy v1.17.0-1.17.6\r\n- kubelet/kube-proxy \u003c=1.16.10\r\n\r\n### How do I mitigate this vulnerability?\r\nPrior to upgrading, this vulnerability can be mitigated by manually adding an iptables rule on nodes. This rule will reject traffic to 127.0.0.1 which does not originate on the node.\r\n\r\n`iptables -I INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP`\r\n\r\nAdditionally, if your cluster does not already have the API Server insecure port disabled, we strongly suggest that you disable it. Add the following flag to your Kubernetes API server command line: `--insecure-port=0`\r\n#### Detection\r\nPackets on the wire with an IPv4 destination in the range 127.0.0.0/8 and a layer-2 destination MAC address of a node may indicate that an attack is targeting this vulnerability.\r\n\r\n#### Fixed Versions\r\nAlthough the issue is caused by `kube-proxy`, the current fix for the issue is in `kubelet` (although future versions may have the fix in `kube-proxy` instead). 
We recommend updating both `kubelet` and `kube-proxy` to be sure the issue is addressed.\r\n\r\nThe following versions contain the fix:\r\n \r\n- kubelet/kube-proxy master - fixed by #91569\r\n- kubelet/kube-proxy v1.18.4+ - fixed by #92038\r\n- kubelet/kube-proxy v1.17.7+ - fixed by #92039\r\n- kubelet/kube-proxy v1.16.11+ - fixed by #92040\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n\r\n## Additional Details\r\nThis issue was originally raised in issue #90259 which details how the `kube-proxy` sets `net.ipv4.conf.all.route_localnet=1` which causes the system not to reject traffic to localhost which originates on other hosts.\r\n\r\nIPv6-only services that bind to a `localhost` address are not affected. \r\n\r\nThere may be additional attack vectors possible in addition to those fixed by #91569 and its cherry-picks. For those attacks to succeed, the target service would need to be UDP and the attack could only rely upon sending UDP datagrams since it wouldn't receive any replies. Finally, the target node would need to have reverse-path filtering disabled for an attack to have any effect. Work is ongoing to determine whether and how this issue should be fixed. See #91666 for up-to-date status on this issue.  \r\n\r\n#### Acknowledgements\r\nThis vulnerability was reported by János Kövér, Ericsson with additional impacts reported by Rory McCune, NCC Group and Yuval Avrahami and Ariel Zelivansky, Palo Alto Networks.\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig network\r\n/sig node\r\n/area kubelet","date_published":"2020-06-19T18:38:58Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8558","id":"CVE-2020-8558","status":"fixed","summary":"Node setting allows for neighboring hosts to bypass localhost boundary","url":"https://github.com/kubernetes/kubernetes/issues/92315"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8555","issue_number":91542},"content_text":"CVSS Rating: [CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:N/A:N](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:N/A:N)\r\n\r\nThere exists a Server Side Request Forgery (SSRF) vulnerability in kube-controller-manager that allows certain authorized users to leak up to 500 bytes of arbitrary information from unprotected endpoints within the master's host network (such as link-local or loopback services).\r\n \r\nAn attacker with permissions to create a pod with certain built-in Volume types (GlusterFS, Quobyte, StorageOS, ScaleIO) or permissions to create a StorageClass can cause kube-controller-manager to make GET requests or POST requests without an attacker controlled request body from the master's host network.\r\n \r\n### Am I vulnerable?\r\n\r\nYou may be vulnerable if:\r\n\r\n- You are running a vulnerable version (see below)\r\n- There are unprotected endpoints normally only visible from the Kubernetes master (including link-local metadata endpoints, unauthenticated services listening on localhost, or other services in the master's private network)\r\n- Untrusted users can create pods with an affected volume type or modify storage classes.\r\n\r\n#### Affected Versions\r\n\r\n- kube-controller-manager v1.18.0\r\n- kube-controller-manager v1.17.0 - v1.17.4\r\n- kube-controller-manager v1.16.0 - v1.16.8\r\n- kube-controller-manager \u003c= v1.15.11\r\n \r\nThe affected volume types are: GlusterFS, Quobyte, 
StorageOS, ScaleIO\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nPrior to upgrading, this vulnerability can be mitigated by adding endpoint protections on the master or restricting usage of the vulnerable volume types (for example by constraining usage with a [PodSecurityPolicy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems) or third-party admission controller such as [Gatekeeper](https://github.com/open-policy-agent/gatekeeper)) and restricting StorageClass write permissions through RBAC.\r\n\r\n#### Fixed Versions\r\n\r\nThe information leak was patched in the following versions:\r\n\r\n- kube-controller-manager master - fixed by https://github.com/kubernetes/kubernetes/pull/89794\r\n- kube-controller-manager v1.18.1+ - fixed by https://github.com/kubernetes/kubernetes/pull/89796\r\n- kube-controller-manager v1.17.5+ - fixed by https://github.com/kubernetes/kubernetes/pull/89837\r\n- kube-controller-manager v1.16.9+ - fixed by https://github.com/kubernetes/kubernetes/pull/89838\r\n- kube-controller-manager v1.15.12+ - fixed by https://github.com/kubernetes/kubernetes/pull/89839\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n## Additional Details\r\n\r\nExploitation of this vulnerability causes the kube-controller-manager to make a request to a user-supplied, unvalidated URL. The request does not include any kube-controller-manager client credentials.\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Brice Augras from Groupe-Asten and Christophe Hauquiert from Nokia.\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig storage\r\n/area controller-manager","date_published":"2020-05-28T16:13:34Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8555","id":"CVE-2020-8555","status":"fixed","summary":"Half-Blind SSRF in kube-controller-manager","url":"https://github.com/kubernetes/kubernetes/issues/91542"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-10749","issue_number":91507},"content_text":"CVSS Rating: [CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:C/C:L/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:C/C:L/I:L/A:L) (6.0 Medium)\r\n\r\nA cluster configured to use an affected container networking implementation is susceptible to man-in-the-middle (MitM) attacks. By sending \"rogue\" router advertisements, a malicious container can reconfigure the host to redirect part or all of the IPv6 traffic of the host to the attacker-controlled container. Even if there was no IPv6 traffic before, if the DNS returns A (IPv4) and AAAA (IPv6) records, many HTTP libraries will try to connect via IPv6 first then fall back to IPv4, giving an opportunity to the attacker to respond.\r\n\r\n### Am I vulnerable?\r\n\r\nKubernetes itself is not vulnerable. 
A Kubernetes cluster using an affected networking implementation is vulnerable.\r\n\r\nBinary releases of the kubelet installed from upstream Kubernetes Community repositories hosted at https://packages.cloud.google.com/ may have also installed the `kubernetes-cni` package containing the [containernetworking CNI plugins](https://github.com/containernetworking/plugins), which are affected by CVE-2020-10749.\r\n\r\n#### Affected Versions\r\nThe following official kubelet package versions have an affected `kubernetes-cni` package as a dependency: \r\n- kubelet v1.18.0-v1.18.3\r\n- kubelet v1.17.0-v1.17.6\r\n- kubelet \u003c v1.16.11\r\n\r\nA cluster having an affected `kubernetes-cni` package installed is only affected if configured to use it.\r\n#### Third-party components and versions\r\nMany container networking implementations are affected, including:\r\n\r\n- CNI Plugins maintained by the containernetworking team, prior to version 0.8.6 (CVE-2020-10749) (See https://github.com/containernetworking/plugins/pull/484)\r\n- Calico and Calico Enterprise (CVE-2020-13597) Please refer to the Tigera Advisory TTA-2020-001 at https://www.projectcalico.org/security-bulletins/ for details\r\n- Docker versions prior to 19.03.11 (see https://github.com/docker/docker-ce/releases/v19.03.11) (CVE-2020-13401)\r\n- Flannel, all current versions\r\n- Weave Net, prior to version 2.6.3\r\n\r\nIt is believed that the following are not affected:\r\n\r\n- Cilium\r\n- Juniper Contrail Networking\r\n- OpenShift SDN\r\n- OVN-Kubernetes\r\n- Tungsten Fabric\r\n\r\nInformation about the vulnerability status of any plugins or implementations not listed above is currently unavailable. Please contact the provider directly with questions about their implementation.\r\n\r\n### How do I mitigate this vulnerability?\r\n- Set the host default to reject router advertisements. This should prevent attacks from succeeding, but may break legitimate traffic, depending upon the networking implementation and the network where the cluster is running. To change this setting, set the sysctl `net.ipv6.conf.all.accept_ra` to 0.\r\n- Use TLS with proper certificate validation\r\n- Disallow `CAP_NET_RAW` for untrusted workloads or users. For example, a Pod Security Policy with a `RequiredDropCapabilities` that includes `NET_RAW` will prevent this attack for controlled workloads.\r\n\r\n#### Fixed Versions\r\n\r\nThe following packages will bundle fixed versions of the containernetworking CNI plugins that were formerly installed via the `kubernetes-cni` package.\r\n- kubelet v1.19.0+ (master branch #91370)\r\n- kubelet v1.18.4+ (#91387)\r\n- kubelet v1.17.7+ (#91386)\r\n- kubelet v1.16.11+ (#91388)\r\n\r\nBecause these versions are not yet available, cluster administrators using packages from the Kubernetes repositories may choose to manually upgrade CNI plugins by retrieving the relevant arch tarball from the containernetworking/plugins [v0.8.6 release](https://github.com/containernetworking/plugins/releases/tag/v0.8.6). The patch versions are [expected to be released on June 17th](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md#timelines), subject to change.\r\n\r\n## Additional Details\r\n#### Detection\r\n\r\n- The IPv6 routing table on nodes will show any attacker-created entries. For example, a host with IPv6 disabled might show no default route when running `ip -6 route` but the same host with an attack in progress might show an updated default route or a route to the target address(es). 
Any IPv6 route with a destination interface of a host-side container network interface should be investigated.\r\n- The host-side of a container network interface may show additional configured IPv6 addresses after receiving a rogue RA packet. For example, given a host-side interface of `cbr0`, which might normally have no IPv6 address, a dynamically configured address on the interface may signal an attack in progress. Use this command to view interface addresses: `ip a show dynamic cbr0`\r\n\r\n#### Affected configurations\r\n\r\n- Clusters using an affected networking implementation and allowing workloads to run with `CAP_NET_RAW` privileges. The default Kubernetes security context runs workloads with a capabilities bounding set that includes `CAP_NET_RAW`. \r\n\r\n#### Vulnerability impact\r\n\r\n- A user able to create containers with `CAP_NET_RAW` privileges on an affected cluster can intercept traffic from other containers on the host or from the host itself.\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Etienne Champetier (@champtar).\r\n\r\nThe issue was fixed by Casey Callendrello (@squeed) and maintainers of various container networking implementations. Updates to Kubernetes builds were coordinated by Stephen Augustus (@justaugustus) and Tim Pepper (@tpepper).\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig network\r\n","date_published":"2020-05-27T19:32:29Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-10749","id":"CVE-2020-10749","status":"fixed","summary":"IPv4 only clusters susceptible to MitM attacks via IPv6 rogue router advertisements","url":"https://github.com/kubernetes/kubernetes/issues/91507"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11254","issue_number":89535},"content_text":"CVE-2019-11254 is a denial of service vulnerability in the kube-apiserver, allowing authorized users sending malicious YAML payloads to cause kube-apiserver to consume excessive CPU cycles while parsing YAML.\r\n \r\nThe issue was discovered via the fuzz test kubernetes/kubernetes#83750.\r\n \r\n**Affected components:**\r\nKubernetes API server\r\n \r\n**Affected versions:**\r\n\u003c= v1.15.9, resolved in 1.15.10 by https://github.com/kubernetes/kubernetes/pull/87640\r\nv1.16.0-v1.16.7, resolved in 1.16.8 by https://github.com/kubernetes/kubernetes/pull/87639\r\nv1.17.0-v1.17.2, resolved in 1.17.3 by https://github.com/kubernetes/kubernetes/pull/87637\r\nFixed in master by https://github.com/kubernetes/kubernetes/pull/87467\r\n \r\n**How do I mitigate this vulnerability?**\r\nPrior to upgrading, this vulnerability can be mitigated by preventing unauthenticated or unauthorized access to kube-apiserver.","date_published":"2020-03-26T18:55:26Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11254","id":"CVE-2019-11254","status":"fixed","summary":"kube-apiserver Denial of Service vulnerability from malicious YAML payloads","url":"https://github.com/kubernetes/kubernetes/issues/89535"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8552","issue_number":89378},"content_text":"CVSS Rating: [CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L) (Medium)\r\n\r\nThe Kubernetes API server has been found to be vulnerable to a denial of service attack via authorized API requests.\r\n\r\n### Am I vulnerable?\r\n\r\nIf an attacker can make an authorized resource request to an unpatched API server (see below), you are vulnerable. Prior to v1.14, this was possible via unauthenticated requests by default.\r\n\r\n#### Affected Versions\r\n\r\n- kube-apiserver v1.17.0 - v1.17.2\r\n- kube-apiserver v1.16.0 - v1.16.6\r\n- kube-apiserver \u003c v1.15.10\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nPrior to upgrading, this vulnerability can be mitigated by:\r\n- Preventing unauthenticated or unauthorized access to all APIs\r\n- Ensuring the apiserver automatically restarts if it runs out of memory (OOMs)\r\n\r\n#### Fixed Versions\r\n\r\n- v1.17.3\r\n- v1.16.7\r\n- v1.15.10\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by: Gus Lees (Amazon)\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig api-machinery","date_published":"2020-03-23T18:35:34Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8552","id":"CVE-2020-8552","status":"fixed","summary":"apiserver DoS (oom)","url":"https://github.com/kubernetes/kubernetes/issues/89378"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8551","issue_number":89377},"content_text":"CVSS Rating: [CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L) (Medium)\r\n\r\nThe Kubelet has been found to be vulnerable to a denial of service attack via the kubelet API, including the unauthenticated HTTP read-only API typically served on port 10255, and the authenticated HTTPS API typically served on port 10250.\r\n\r\n### Am I vulnerable?\r\n\r\nIf an attacker can make a request to an unpatched kubelet, you may be vulnerable.\r\n\r\n#### Affected Versions\r\n\r\n- kubelet v1.17.0 - v1.17.2\r\n- kubelet v1.16.0 - v1.16.6\r\n- kubelet v1.15.0 - v1.15.9\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nLimit access to the Kubelet API or patch the Kubelet.\r\n\r\n#### Fixed Versions\r\n\r\n- v1.17.3\r\n- v1.16.7\r\n- v1.15.10\r\n\r\nTo upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by: Henrik Schmidt\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig node\r\n/area kubelet","date_published":"2020-03-23T18:34:40Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8551","id":"CVE-2020-8551","status":"fixed","summary":"Kubelet DoS via API","url":"https://github.com/kubernetes/kubernetes/issues/89377"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2020-8553","issue_number":126818},"content_text":"A security issue was discovered in ingress-nginx versions older than v0.28.0.
The issue is of medium severity, and upgrading is encouraged to fix the vulnerability.\r\n\r\n**Am I vulnerable?**\r\n\r\nThe vulnerability exists only if the annotation [nginx.ingress.kubernetes.io/auth-type: basic](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#authentication) is used.\r\n\r\n**How do I upgrade?**\r\n\r\nFollow installation instructions [here](https://kubernetes.github.io/ingress-nginx/deploy/upgrade/)\r\n\r\n**Vulnerability Details**\r\n\r\nA vulnerability has been discovered where a malicious user could create a new Ingress definition resulting in the replacement of the password file. The vulnerability requires that the victim namespace and/or secret use a hyphen in the name.\r\n\r\nThis scenario requires privileges in the cluster to create and read ingresses and also create secrets.\r\n\r\nThis issue is filed as CVE-2020-8553.\r\n\r\n/close\r\n","date_published":"2020-02-19T19:00:32Z","external_url":"https://www.cve.org/cverecord?id=CVE-2020-8553","id":"CVE-2020-8553","status":"fixed","summary":"ingress-nginx auth-type basic annotation vulnerability","url":"https://github.com/kubernetes/kubernetes/issues/126818"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11251","issue_number":87773},"content_text":"A security issue was discovered in kubectl versions v1.13.10, v1.14.6, and v1.15.3. The issue is of medium severity, and upgrading kubectl is encouraged to fix the vulnerability.\r\n\r\n**Am I vulnerable?**\r\n\r\nRun `kubectl version --client`; if it returns v1.13.10, v1.14.6, or v1.15.3, you are running a vulnerable version.\r\n\r\n**How do I upgrade?**\r\n\r\nFollow installation instructions [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/)\r\n\r\n**Vulnerability Details**\r\n\r\nThe details for this vulnerability are very similar to CVE-2019-1002101 and CVE-2019-11246.\r\nA vulnerability has been discovered in kubectl cp that allows a combination of two symlinks to copy a file outside of its destination directory. This could allow an attacker to use a symlink to place a nefarious file outside of the destination tree.\r\n\r\nThis issue is filed as CVE-2019-11251.\r\n\r\nTwo fixes were formulated: one that removes symlink support going forward, and one with cherry picks to preserve backwards compatibility.\r\n\r\nSee https://github.com/kubernetes/kubernetes/pull/82143 for the primary fix in v1.16.0, which removes symlink support from kubectl cp. From version 1.16.0 onward, it is recommended to use a combination of exec+tar instead (see the sketch below).\r\n\r\nA second fix has been made to 1.15.4 and backported to 1.14.7 and 1.13.11. This changes the kubectl cp un-tar symlink logic by unpacking the symlinks after all the regular files have been unpacked, which guarantees that a file can't be written through a symlink.\r\n\r\nSee https://github.com/kubernetes/kubernetes/pull/82384 for the fix to version 1.15.4.
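\r\n\r\nAs an illustration of the exec+tar alternative mentioned above, here is a minimal sketch (not an official replacement; `my-pod`, the source path, and the destination directory are placeholders, and `tar` must be available inside the container image):\r\n\r\n```sh\r\n# Sketch: stream a tar archive out of the container and unpack it locally,\r\n# avoiding kubectl cp's client-side symlink handling.\r\nmkdir -p ./pod-logs\r\nkubectl exec my-pod -- tar cf - /var/log | tar xf - -C ./pod-logs\r\n```\r\n\r\n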
The following cherry picks of this fix were made for the earlier versions v1.14.7 and v1.13.11:\r\n\r\nSee https://github.com/kubernetes/kubernetes/pull/82502 for version 1.14.7\r\nSee https://github.com/kubernetes/kubernetes/pull/82503 for version 1.13.11\r\n\r\nThank you to Erik Sjölund (@eriksjolund) for discovering this issue, to Tim Allclair and Maciej Szulik for both fixes, and to the patch release managers for including the fix in their releases.\r\n\r\n/close","date_published":"2020-02-03T15:12:22Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11251","id":"CVE-2019-11251","status":"fixed","summary":"kubectl cp symlink vulnerability","url":"https://github.com/kubernetes/kubernetes/issues/87773"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2018-1002102","issue_number":85867},"content_text":"CVSS Rating: [CVSS:3.0/AV:N/AC:H/PR:H/UI:R/S:C/C:L/I:N/A:N/E:F (Low)](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:H/PR:H/UI:R/S:C/C:L/I:N/A:N/E:F)\r\n\r\nAn attacker-controlled Kubelet can return an arbitrary redirect when responding to certain apiserver requests. Impacted kube-apiservers will follow the redirect as a GET request with client-cert credentials for authenticating to the Kubelet.\r\n\r\n### Am I vulnerable?\r\n\r\nKubernetes API servers with the `StreamingProxyRedirects` [feature](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) enabled AND without the `ValidateProxyRedirects` feature are affected.\r\n\r\nAPI servers using SSH tunnels (--ssh-user / --ssh-keyfile) are not affected.\r\n\r\nUsing the default feature gate values, kube-apiserver versions before v1.14 are affected.\r\n\r\n### How do I mitigate this vulnerability?\r\n\r\nFor Kubernetes versions \u003e= v1.10.0, the `ValidateProxyRedirects` feature can be manually enabled with the `kube-apiserver` flag `--feature-gates=ValidateProxyRedirects=true`.\r\n\r\n#### Fix impact\r\nThe `ValidateProxyRedirects` feature will cause the kube-apiserver to check that redirects go to the same host. If nodes are configured to respond to CRI streaming requests on a different host interface than what the apiserver makes requests on (only the case if not using the built-in dockershim \u0026 setting the kubelet flag `--redirect-container-streaming=true`), then these requests will be broken. In that case, the feature can be temporarily disabled until the node configuration is corrected. We suggest setting `--redirect-container-streaming=false` on the kubelet to avoid issues.\r\n\r\n#### Fixed Versions\r\n\r\n- Kubernetes v1.14+ - Fixed by default in https://github.com/kubernetes/kubernetes/pull/72552\r\n- Kubernetes v1.10-v1.14 - Fix available as alpha in https://github.com/kubernetes/kubernetes/pull/66516\r\n\r\n## Additional Details\r\n\r\nIn a future release, we plan to deprecate the `StreamingProxyRedirects` feature, instead opting to handle the redirection locally through the Kubelet.
Once the deprecation is complete, we can completely remove apiserver redirect handling (at least for Kubelet requests).\r\n\r\n#### Acknowledgements\r\n\r\nThis vulnerability was reported by Alban Crequy.\r\n\r\n/area security\r\n/kind bug\r\n/committee product-security\r\n/sig api-machinery node\r\n/area apiserver\r\n\r\n/close","date_published":"2019-12-03T22:58:37Z","external_url":"https://www.cve.org/cverecord?id=CVE-2018-1002102","id":"CVE-2018-1002102","status":"fixed","summary":"Unvalidated redirect","url":"https://github.com/kubernetes/kubernetes/issues/85867"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11255","issue_number":85233},"content_text":"**Am I vulnerable?**\r\n\r\nCSI snapshot, cloning and resizing features are affected. Prior to Kubernetes 1.16, these features were all alpha and disabled by default. Starting in Kubernetes 1.16, CSI cloning and resizing features are beta and enabled by default.\r\n\r\nThese features also require CSI drivers to be installed in a Kubernetes cluster and the CSI driver also has to support those features. An unofficial list of CSI drivers and their supported features is available [here](https://kubernetes-csi.github.io/docs/drivers.html); however, it is best to check with the CSI driver vendor for the latest information.\r\n\r\nCheck if you have the following Kubernetes feature gates enabled:\r\n\r\n```\r\nVolumeSnapshotDataSource: alpha starting with K8s 1.12\r\nExpandCSIVolumes: alpha starting with K8s 1.14, beta starting with K8s 1.16\r\nVolumePVCDataSource: alpha starting with K8s 1.15, beta starting with K8s 1.16\r\n```\r\n\r\nCheck if you are using CSI drivers in your cluster. If so, the following command's output will be non-empty:\r\n\r\n```\r\n$ kubectl get nodes -o jsonpath='{.items[*].metadata.annotations.csi\\.volume\\.kubernetes\\.io\\/nodeid}'\r\n {\"my-csi-plugin\":\"kubernetes-minion-group-433q\"}\r\n```\r\n\r\nThen, check the CSI driver's pod specifications to see if they are using the following vulnerable versions of sidecars:\r\n\r\n```\r\nexternal-provisioner: v0.4.1-0.4.2, v1.0.0-1.0.1, v1.1.0-1.2.1, v1.3.0\r\nexternal-snapshotter: v0.4.0-0.4.1, v1.0.0-1.0.1, v1.1.0-v1.2.1\r\nexternal-resizer: v0.1.0-0.2.0\r\n```\r\n\r\nAn example query:\r\n```\r\n$ kubectl get pods --all-namespaces -o jsonpath='{..image}' | tr ' ' $'\\n' | grep \"csi-provisioner\\|csi-snapshotter\\|csi-resizer\"\r\n image: quay.io/k8scsi/csi-provisioner:v1.2.0\r\n```\r\n\r\nNote that the exact container image name may vary across CSI driver vendors. It is recommended to inspect the Pod specifications directly.\r\n\r\n**How do I mitigate the vulnerability?**\r\n\r\nAs a short-term mitigation, disable the `VolumeSnapshotDataSource`, `ExpandCSIVolumes`, and `VolumePVCDataSource` Kubernetes feature gates in kube-apiserver and kube-controller-manager (see the sketch below). This will cause new PersistentVolumeClaims to be provisioned ignoring the DataSource, and resizing requests will also be ignored.
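\r\n\r\nA minimal sketch of what disabling these gates can look like (an illustration under assumptions, not distribution-specific guidance; how you edit the flags depends on how your control plane is deployed, e.g. static pod manifests or systemd units, and `...` stands for each component's existing flags):\r\n\r\n```sh\r\n# Sketch: disable the affected feature gates on both control-plane components.\r\nkube-apiserver \\\r\n  --feature-gates=VolumeSnapshotDataSource=false,ExpandCSIVolumes=false,VolumePVCDataSource=false \\\r\n  ...\r\nkube-controller-manager \\\r\n  --feature-gates=VolumeSnapshotDataSource=false,ExpandCSIVolumes=false,VolumePVCDataSource=false \\\r\n  ...\r\n```\r\n\r\n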
Note that this will cause new PVCs that are intended to be provisioned from a snapshot or clone to instead provision a blank disk.\r\n\r\nAlso, to disable taking volume snapshots, either remove the external-snapshotter sidecar from any CSI drivers or revoke the CSI driver's RBAC permissions on the `snapshot.storage.k8s.io` API group.\r\n\r\nLonger term, upgrade your CSI driver with patched versions of the affected sidecars. Fixes are available in the following sidecar versions:\r\n\r\nexternal-provisioner:\r\nv0.4.3\r\nv1.0.2\r\nv1.2.2\r\nv1.3.1\r\nv1.4.0\r\n\r\nexternal-snapshotter:\r\nv0.4.2\r\nv1.0.2\r\nv1.2.2\r\n\r\nexternal-resizer:\r\nv0.3.0\r\n\r\nFixes for each of the sidecars can be tracked by:\r\nhttps://github.com/kubernetes-csi/external-provisioner/issues/380\r\nhttps://github.com/kubernetes-csi/external-snapshotter/issues/193\r\nhttps://github.com/kubernetes-csi/external-resizer/issues/63\r\n\r\n**How do I upgrade?**\r\n\r\nCheck with your CSI driver vendor for upgrade instructions. No Kubernetes control plane or node upgrades are required unless the CSI driver is bundled into the Kubernetes distribution.\r\n\r\n**Vulnerability details**\r\n\r\nThere are two different vulnerabilities impacting the same features.\r\n\r\nWhen PersistentVolumeClaim and PersistentVolume objects are bound, they have bidirectional references to each other. When dereferencing a PersistentVolumeClaim to get a PersistentVolume, the impacted sidecar controllers were not validating that the PersistentVolume referenced back to the same PersistentVolumeClaim, potentially operating on unauthorized PersistentVolumes for snapshot, cloning and resizing operations.\r\n\r\nA similar issue exists for VolumeSnapshot and VolumeSnapshotContent objects when creating a new PersistentVolumeClaim from a snapshot.\r\n\r\nThe second issue is related to the property that CSI volume and snapshot IDs are only required to be unique within a single CSI driver. Impacted sidecar controllers were not validating that the requested source VolumeSnapshot or PersistentVolumeClaim came from the same driver that was processing the request, potentially operating on unauthorized volumes during snapshot, restore from snapshot, or cloning operations.\r\n\r\n","date_published":"2019-11-13T20:57:31Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11255","id":"CVE-2019-11255","status":"fixed","summary":"CSI volume snapshot, cloning and resizing features can result in unauthorized volume data access or mutation","url":"https://github.com/kubernetes/kubernetes/issues/85233"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11253","issue_number":83253},"content_text":"CVE-2019-11253 is a denial of service vulnerability in the kube-apiserver, allowing authorized users sending malicious YAML or JSON payloads to cause kube-apiserver to consume excessive CPU or memory, potentially crashing and becoming unavailable. This vulnerability has been given an initial severity of High, with a score of 7.5 ([CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H)).\r\n\r\nPrior to v1.14.0, default RBAC policy authorized anonymous users to submit requests that could trigger this vulnerability. Clusters upgraded from a version prior to v1.14.0 keep the more permissive policy by default for backwards compatibility.
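\r\n\r\nOne hedged way to check whether your cluster still grants that more permissive access (a sketch; it assumes RBAC authorization and that your own account is allowed to impersonate users):\r\n\r\n```sh\r\n# Sketch: ask the apiserver whether the anonymous user may create\r\n# selfsubjectaccessreviews, one of the requests covered by the permissive\r\n# pre-1.14 basic-user policy.\r\nkubectl auth can-i create selfsubjectaccessreviews --as=system:anonymous\r\n```\r\n\r\n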
See the mitigation section below for instructions on how to install the more restrictive v1.14+ policy.\r\n\r\n**Affected versions:**\r\n* Kubernetes v1.0.0-1.12.x\r\n* Kubernetes v1.13.0-1.13.11, resolved in v1.13.12 by https://github.com/kubernetes/kubernetes/pull/83436\r\n* Kubernetes v1.14.0-1.14.7, resolved in v1.14.8 by https://github.com/kubernetes/kubernetes/pull/83435\r\n* Kubernetes v1.15.0-1.15.4, resolved in v1.15.5 by https://github.com/kubernetes/kubernetes/pull/83434\r\n* Kubernetes v1.16.0-1.16.1, resolved in v1.16.2 by https://github.com/kubernetes/kubernetes/pull/83433\r\n\r\nAll four patch releases are now available.\r\n\r\nFixed in master by #83261\r\n\r\n**Mitigation:**\r\n\r\nRequests that are rejected by authorization do not trigger the vulnerability, so managing authorization rules and/or access to the Kubernetes API server limits which users are able to trigger this vulnerability.\r\n\r\nTo manually apply the more restrictive v1.14.x+ policy, either as a pre-upgrade mitigation, or as an additional protection for an upgraded cluster, save the [attached file](https://github.com/kubernetes/kubernetes/files/3735508/rbac.yaml.txt) as `rbac.yaml`, and run:\r\n\r\n```sh\r\nkubectl auth reconcile -f rbac.yaml --remove-extra-subjects --remove-extra-permissions\r\n```\r\n\r\n**Note: this removes the ability for unauthenticated users to use `kubectl auth can-i`**\r\n\r\nIf you are running a version prior to v1.14.0:\r\n* in addition to installing the restrictive policy, turn off autoupdate for this clusterrolebinding so your changes aren't replaced on an API server restart:\r\n ```sh\r\n kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=false\r\n ```\r\n* after upgrading to v1.14.0 or greater, you can remove this annotation to reenable autoupdate:\r\n ```sh\r\n kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=true\r\n ```\r\n\r\n=============\r\n\r\n**Original description follows:**\r\n\r\n**Introduction** \r\n\r\nPosting this as an issue following a report to the security list, who suggested putting it here as it's already public in a Stack Overflow question [here](https://stackoverflow.com/questions/58129150/security-yaml-bomb-user-can-restart-kube-api-by-sending-configmap/58133282#58133282)\r\n\r\n**What happened**:\r\n\r\nWhen creating a ConfigMap object which has recursive references contained in it, excessive CPU usage can occur.
This appears to be an instance of a [\"Billion Laughs\" attack](https://en.wikipedia.org/wiki/Billion_laughs_attack) which is quite well known as an XML parsing issue.\r\n\r\nApplying this manifest to a cluster causes the client to hang for some time with considerable CPU usage.\r\n\r\n```\r\napiVersion: v1\r\ndata:\r\n a: \u0026a [\"web\",\"web\",\"web\",\"web\",\"web\",\"web\",\"web\",\"web\",\"web\"]\r\n b: \u0026b [*a,*a,*a,*a,*a,*a,*a,*a,*a]\r\n c: \u0026c [*b,*b,*b,*b,*b,*b,*b,*b,*b]\r\n d: \u0026d [*c,*c,*c,*c,*c,*c,*c,*c,*c]\r\n e: \u0026e [*d,*d,*d,*d,*d,*d,*d,*d,*d]\r\n f: \u0026f [*e,*e,*e,*e,*e,*e,*e,*e,*e]\r\n g: \u0026g [*f,*f,*f,*f,*f,*f,*f,*f,*f]\r\n h: \u0026h [*g,*g,*g,*g,*g,*g,*g,*g,*g]\r\n i: \u0026i [*h,*h,*h,*h,*h,*h,*h,*h,*h]\r\nkind: ConfigMap\r\nmetadata:\r\n name: yaml-bomb\r\n namespace: default\r\n```\r\n**What you expected to happen**:\r\n\r\nIdeally it would be good for a maximum size of entity to be defined, or perhaps some limit on recursive references in YAML parsed by kubectl.\r\n\r\nOne note is that the original poster on Stackoverflow indicated that the resource consumption was in `kube-apiserver` but both tests I did (1.16 client against 1.15 Kubeadm cluster and 1.16 client against 1.16 kubeadm cluster) showed the CPU usage client-side.\r\n\r\n**How to reproduce it (as minimally and precisely as possible)**:\r\n\r\nGet the manifest above and apply to a cluster as normal with `kubectl create -f \u003cmanifest\u003e`. Use `top` or another CPU monitor to observe the quantity of CPU time used.\r\n\r\n**Anything else we need to know?**:\r\n\r\n**Environment**:\r\n- Kubernetes version (use `kubectl version`):\r\n\r\n**test 1** (linux AMD64 client, Kubeadm cluster running in kind)\r\n```\r\nClient Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.0\", GitCommit:\"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77\", GitTreeState:\"clean\", BuildDate:\"2019-09-18T14:36:53Z\", GoVersion:\"go1.12.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\r\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.0\", GitCommit:\"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529\", GitTreeState:\"clean\", BuildDate:\"2019-06-25T23:41:27Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\r\n```\r\n\r\n**test 2** (Linux AMD64 client, Kubeadm cluster running in VMWare Workstation)\r\n```\r\nClient Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.0\", GitCommit:\"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77\", GitTreeState:\"clean\", BuildDate:\"2019-09-18T14:36:53Z\", GoVersion:\"go1.12.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\r\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.0\", GitCommit:\"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77\", GitTreeState:\"clean\", BuildDate:\"2019-09-18T14:27:17Z\", GoVersion:\"go1.12.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\r\n```\r\n\r\n","date_published":"2019-09-27T16:53:31Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11253","id":"CVE-2019-11253","status":"fixed","summary":"Kubernetes API Server JSON/YAML parsing vulnerable to resource exhaustion attack","url":"https://github.com/kubernetes/kubernetes/issues/83253"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11250","issue_number":81114},"content_text":"This issue was reported in the [Kubernetes Security Audit 
Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)\r\n\r\n**Description**\r\nKubernetes requires an authentication mechanism to enforce users' privileges. One method of authentication, the bearer token, is an opaque string used to associate a user with a previous successful authentication. Any user with possession of this token may masquerade as the original user (the \"bearer\") without further authentication.\r\n\r\nWithin Kubernetes, the bearer token is captured within the hyperkube kube-apiserver system logs at high verbosity levels (--v 10). A malicious user with access to the system logs on such a system could masquerade as any user who has previously logged into the system.\r\n\r\n**Exploit Scenario**\r\nAlice logs into a Kubernetes cluster and is issued a Bearer token. The system logs her token. Eve, who has access to the logs but not the production Kubernetes cluster, replays Alice's Bearer token, and can masquerade as Alice to the cluster.\r\n\r\n**Recommendation**\r\nShort term, remove the Bearer token from the log. Do not log any authentication credentials within the system, including tokens, private keys, or passwords that may be used to authenticate to the production Kubernetes cluster, regardless of the logging level.\r\n\r\nLong term, either implement policies that enforce code review to ensure that sensitive data is not exposed in logs, or implement logging filters that check for sensitive data and remove it prior to outputting the log. In either case, ensure that sensitive data cannot be trivially stored in logs.\r\n\r\n**Anything else we need to know?**:\r\n\r\nSee #81146 for current status of all issues created from these findings.\r\n\r\nThe vendor gave this issue an ID of TOB-K8S-001 and it was finding 6 of the report.\r\n\r\nThe vendor considers this issue Medium Severity.\r\n\r\nTo view the original finding, begin on page 31 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)\r\n\r\n**Environment**:\r\n\r\n- Kubernetes version: 1.13.4","date_published":"2019-08-08T02:03:04Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11250","id":"CVE-2019-11250","status":"fixed","summary":"Bearer tokens are revealed in logs","url":"https://github.com/kubernetes/kubernetes/issues/81114"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11248","issue_number":81023},"content_text":"The debugging endpoint `/debug/pprof` is exposed over the unauthenticated Kubelet healthz port. Versions prior to 1.15.0, 1.14.4, 1.13.8, and 1.12.10 are affected. The issue is of medium severity, but not exposed by the default configuration. If you are exposed, we recommend upgrading to at least one of the versions listed.\r\n\r\n**Am I vulnerable?**\r\nBy default, the Kubelet exposes unauthenticated healthz endpoints on port :10248, but only over localhost. If your nodes are using a non-localhost healthzBindAddress (--healthz-bind-address), and an older version, you may be vulnerable.
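\r\n\r\nFor a quick, hedged probe (a sketch; `NODE_IP` is a placeholder for a node address reachable from wherever you run this, and 10248 is the default healthz port):\r\n\r\n```sh\r\n# Sketch: an exposed healthz port will serve the pprof index here;\r\n# a node with the default localhost binding should refuse or time out.\r\ncurl -s --max-time 5 \"http://NODE_IP:10248/debug/pprof/\" | head -n 5\r\n```\r\n\r\n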
If your nodes are using the default localhost healthzBindAddress, it is only exposed to pods or processes running in the host network namespace.\r\n\r\nRun `kubectl get nodes` to see whether nodes are running a vulnerable version.\r\n\r\nRun `kubectl get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz` to check whether the \"healthzBindAddress\" is non-local.\r\n\r\n**How do I mitigate the vulnerability?**\r\n* Upgrade to a patched version (1.15.0+, 1.14.4+, 1.13.8+, or 1.12.10+)\r\n* or, update node configurations to set the \"healthzBindAddress\" to \"127.0.0.1\".\r\n\r\nhttps://github.com/kubernetes/kubernetes/pull/79184 fixed in 1.12.10\r\nhttps://github.com/kubernetes/kubernetes/pull/79183 fixed in 1.13.8\r\nhttps://github.com/kubernetes/kubernetes/pull/79182 fixed in 1.14.4\r\nhttps://github.com/kubernetes/kubernetes/pull/78313 fixed in 1.15.0\r\n\r\n**Vulnerability Details**\r\nThe `go pprof` endpoint is exposed over the Kubelet's healthz port. This debugging endpoint can potentially leak sensitive information such as internal Kubelet memory addresses and configuration, or be used for limited denial of service.\r\n\r\nThanks to Jordan Zebor of F5 Networks for reporting this problem.\r\n\r\n/area security\r\n/close","date_published":"2019-08-06T14:34:33Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11248","id":"CVE-2019-11248","status":"fixed","summary":"/debug/pprof exposed on kubelet's healthz port","url":"https://github.com/kubernetes/kubernetes/issues/81023"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11249","issue_number":80984},"content_text":"[CVSS:3.0/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:H/A:N](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:H/A:N)\r\n\r\nA third issue was discovered with the Kubernetes `kubectl cp` command that could enable a directory traversal such that a malicious container could replace or create files on a user's workstation. The vulnerability is a client-side defect and requires user interaction to be exploited.\r\n \r\n**Vulnerable versions:**\r\nKubernetes 1.0.x-1.12.x\r\nKubernetes 1.13.0-1.13.8\r\nKubernetes 1.14.0-1.14.4\r\nKubernetes 1.15.0-1.15.1\r\n \r\n**Vulnerable configurations:**\r\nAll `kubectl` clients running a vulnerable version and using the `cp` operation.\r\n \r\n**Vulnerability impact:**\r\nA malicious user can potentially create or overwrite files outside of the destination directory of the `kubectl cp` operation.\r\n \r\n**Mitigations prior to upgrading:**\r\nAvoid using `kubectl cp` with any untrusted workloads.\r\n \r\n**Fixed versions:**\r\nFixed in v1.13.9 by #80871\r\nFixed in v1.14.5 by #80870\r\nFixed in v1.15.2 by #80869\r\nFixed in master by #80436\r\n \r\n**Fix impact:**\r\nThe `kubectl cp` function is prevented from creating or modifying files outside the destination directory.\r\n\r\n**Acknowledgements:**\r\nThis issue was discovered by Yang Yang of Amazon, who also provided a patch.
Thanks also to the release managers for creating the security releases.","date_published":"2019-08-05T12:44:23Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11249","id":"CVE-2019-11249","status":"fixed","summary":"Incomplete fixes for CVE-2019-1002101 and CVE-2019-11246, kubectl cp potential directory traversal","url":"https://github.com/kubernetes/kubernetes/issues/80984"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11247","issue_number":80983},"content_text":"[CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:L](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:L)\r\n \r\nThe API server mistakenly allows access to a cluster-scoped custom resource if the request is made as if the resource were namespaced. Authorizations for the resource accessed in this manner are enforced using roles and role bindings within the namespace, meaning that a user with access only to a resource in one namespace could create, view, update, or delete the cluster-scoped resource (according to their namespace role privileges).\r\n \r\n**Vulnerable versions:**\r\nKubernetes 1.7.x-1.12.x\r\nKubernetes 1.13.0-1.13.8\r\nKubernetes 1.14.0-1.14.4\r\nKubernetes 1.15.0-1.15.1\r\n \r\n**Vulnerable configurations:**\r\nAll clusters that have rolebindings to roles and clusterroles that include authorization rules for cluster-scoped custom resources.\r\n \r\n**Vulnerability impact:**\r\nA user with access to custom resources in a single namespace can access custom resources with cluster scope.\r\n \r\n**Mitigations prior to upgrading:**\r\nTo mitigate, remove authorization rules that grant access to cluster-scoped resources within namespaces. For example, RBAC roles and clusterroles intended to be referenced by namespaced rolebindings should not grant access to `resources:[*], apiGroups:[*]`, or grant access to cluster-scoped custom resources.\r\n \r\n \r\n**Fixed versions:**\r\nFixed in v1.13.9 by #80852\r\nFixed in v1.14.5 by #80851\r\nFixed in v1.15.2 by #80850\r\nFixed in master by #80750\r\n \r\n**Fix impact:**\r\nPermission to the correct scope will be required to access cluster-scoped custom resources.\r\n\r\n**Acknowledgements:**\r\nThis issue was discovered by Prabu Shyam of Verizon Media. Thanks to Stefan Schimanski for the fix, to David Eads for the fix review, and to the release managers for creating the security releases.\r\n","date_published":"2019-08-05T12:44:08Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11247","id":"CVE-2019-11247","status":"fixed","summary":"API server allows access to custom resources via wrong scope","url":"https://github.com/kubernetes/kubernetes/issues/80983"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11245","issue_number":78308},"content_text":"[CVSS:3.0/AV:L/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:L/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L), 4.9 (medium)\r\n\r\nIn kubelet v1.13.6 and v1.14.2, containers for pods that do not specify an explicit `runAsUser` attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node. If the pod specified `mustRunAsNonRoot: true`, the kubelet will refuse to start the container as root.
If the pod did not specify `mustRunAsNonRoot: true`, the kubelet will run the container as uid 0.\r\n\r\nCVE-2019-11245 will be **fixed** in the following Kubernetes releases:\r\n\r\n* v1.13.7 in https://github.com/kubernetes/kubernetes/pull/78320\r\n* v1.14.3 in https://github.com/kubernetes/kubernetes/pull/78316\r\n\r\nFixed by #78261 in master\r\n\r\n### Affected components:\r\n\r\n* Kubelet\r\n\r\n### Affected versions:\r\n\r\n* Kubelet v1.13.6\r\n* Kubelet v1.14.2\r\n\r\n### Affected configurations:\r\n\r\nClusters with:\r\n* Kubelet versions v1.13.6 or v1.14.2\r\n* Pods that do not specify an explicit `runAsUser: \u003cuid\u003e` or `mustRunAsNonRoot:true`\r\n\r\n### Impact:\r\n\r\nIf a pod is run without any user controls specified in the pod spec (like `runAsUser: \u003cuid\u003e` or `mustRunAsNonRoot:true`), a container in that pod that would normally run as the USER specified in the container image manifest can sometimes be run as root instead (on container restart, or if the image was previously pulled to the node).\r\n\r\n* pods that specify an explicit `runAsUser` are unaffected and continue to work properly\r\n* podSecurityPolicies that force a `runAsUser` setting are unaffected and continue to work properly\r\n* pods that specify `mustRunAsNonRoot:true` will refuse to start the container as uid 0, which can affect availability\r\n* pods that do not specify `runAsUser` or `mustRunAsNonRoot:true` will run as uid 0 on restart or if the image was previously pulled to the node\r\n\r\n### Mitigations:\r\n\r\nThis section lists possible mitigations to use prior to upgrading.\r\n\r\n* Specify `runAsUser` directives in pods to control the uid a container runs as\r\n* Specify `mustRunAsNonRoot:true` directives in pods to prevent starting as root (note this means the attempt to start the container will fail on affected kubelet versions)\r\n* Downgrade kubelets to v1.14.1 or v1.13.5 as instructed by your Kubernetes distribution.\r\n\r\n**original issue description follows**\r\n\r\n**What happened**:\r\n\r\nWhen I launch a pod from a docker image that specifies a USER in the Dockerfile, the container only runs as that user on its first launch. After that the container runs as UID=0.\r\n\r\n**What you expected to happen**:\r\nI expect the container to act consistently every launch, and probably with the USER specified in the container.\r\n\r\n**How to reproduce it (as minimally and precisely as possible)**:\r\nTesting with minikube (same test specifying v1.14.1, `kubectl logs test` always returns 11211):\r\n```\r\n$ minikube start --kubernetes-version v1.14.2\r\nminikube v1.1.0 on linux (amd64)\r\nDownloading Minikube ISO ...\r\n 131.28 MB / 131.28 MB [============================================] 100.00% 0s\r\nCreating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...\r\nConfiguring environment for Kubernetes v1.14.2 on Docker 18.09.6\r\nDownloading kubeadm v1.14.2\r\nDownloading kubelet v1.14.2\r\nPulling images ...\r\nLaunching Kubernetes ... \r\nVerifying: apiserver proxy etcd scheduler controller dns\r\nDone!
kubectl is now configured to use \"minikube\"\r\n$ cat test.yaml\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n name: test\r\nspec:\r\n containers:\r\n - name: test\r\n image: memcached:latest\r\n imagePullPolicy: IfNotPresent\r\n command: [\"/bin/bash\"]\r\n args:\r\n - -c\r\n - 'id -u; sleep 30'\r\n$ kubectl apply -f test.yaml \r\npod/test created\r\n\r\n# as soon as pod starts\r\n$ kubectl logs test\r\n11211\r\n# Wait 30 seconds for container to restart\r\n$ kubectl logs test\r\n0\r\n# Try deleting/recreating the pod\r\n$ kubectl delete pod test\r\npod \"test\" deleted\r\n$ kubectl apply -f test.yaml \r\npod/test created\r\n$ kubectl logs test\r\n0\r\n```\r\n\r\n**Anything else we need to know?**:\r\n\r\n**Environment**:\r\n- Kubernetes version (use `kubectl version`): I get the results I expect in v1.13.5 and v1.14.1. The problem exists in v1.13.6 and v1.14.2\r\n- Cloud provider or hardware configuration: minikube v1.1.0 using VirtualBox\r\n- OS (e.g: `cat /etc/os-release`):\r\n- Kernel (e.g. `uname -a`):\r\n- Install tools:\r\n- Network plugin and version (if this is a network-related bug):\r\n- Others:\r\n","date_published":"2019-05-24T16:14:49Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11245","id":"CVE-2019-11245","status":"fixed","summary":"container uid changes to root after first restart or if image is already pulled to the node","url":"https://github.com/kubernetes/kubernetes/issues/78308"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11243","issue_number":76797},"content_text":"CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:U/C:L/I:N/A:N\r\n\r\nThe `rest.AnonymousClientConfig()` method returns a copy of the provided config, with credentials removed (bearer token, username/password, and client certificate/key data).\r\n\r\nIn the following versions, `rest.AnonymousClientConfig()` did not effectively clear service account credentials loaded using `rest.InClusterConfig()`:\r\n* v1.12.0-v1.12.4\r\n* v1.13.0\r\n\r\n**What is the impact?**\r\n* `k8s.io/client-go` users that use the `rest.AnonymousClientConfig()` method directly with client config loaded with `rest.InClusterConfig()` receive back a client config which can still send the loaded service account token with requests.\r\n\r\n**How was the issue fixed?**\r\n* In 1.12.5+ and 1.13.1+, `rest.InClusterConfig()` was modified to return a client config that is safe to use with the `rest.AnonymousClientConfig()` method (https://github.com/kubernetes/kubernetes/pull/71713)\r\n* In v1.15.0, the `rest.AnonymousClientConfig()` will also exclude the `config.Transport` and `config.WrapTransport` fields, in addition to the explicit credential-carrying fields. 
(https://github.com/kubernetes/kubernetes/pull/75771)\r\n\r\n**How do I resolve the issue?**\r\n* Upgrade `k8s.io/client-go` to `kubernetes-1.12.5`, `kubernetes-1.13.1`, `kubernetes-1.14.0`, or higher\r\n* or manually clear the `config.WrapTransport` and `config.Transport` fields in addition to calling `rest.AnonymousClientConfig()`\r\n\r\nThanks to Oleg Bulatov of Red Hat for reporting this issue.\r\n\r\n/area security\r\n/kind bug\r\n/sig auth\r\n/sig api-machinery\r\n/assign\r\n/close","date_published":"2019-04-18T21:31:53Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11243","id":"CVE-2019-11243","status":"fixed","summary":"rest.AnonymousClientConfig() does not remove the serviceaccount credentials from config created by rest.InClusterConfig()","url":"https://github.com/kubernetes/kubernetes/issues/76797"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-11244","issue_number":76676},"content_text":"In kubectl v1.8.0+, schema info is cached in the location specified by `--cache-dir` (defaulting to `$HOME/.kube/http-cache`), written with world-writeable permissions (rw-rw-rw-).\r\n\r\nIf `--cache-dir` is specified and pointed at a different location accessible to other users/groups, the written files may be modified by other users/groups and disrupt the kubectl invocation.\r\n\r\nCVSS score: CVSS:3.0/AV:L/AC:H/PR:L/UI:R/S:U/C:L/I:L/A:N (3.3, low)\r\n\r\n**What versions are affected?**\r\nkubectl v1.8.0+\r\n\r\n**What configurations are affected?**\r\nInvocations that point `--cache-dir` at world-writeable locations\r\n\r\n**Impact**\r\nMalformed responses written to the cache directory can disrupt the kubectl invocation\r\n\r\n**Workaround**\r\nUse the default `--cache-dir` location in the $HOME directory or point it at a directory that is only accessible to desired users/groups.\r\n\r\n\r\n\r\n(original description follows) ====\r\nWhat happened: The files inside of \".kube/http-cache\" are world writeable (rw-rw-rw-). While the default for these files appears to be the home directory, using the \"--cache-dir\" flag could put these files into a place where world-writeable files would allow any user / process to modify the cache files.
Modification of the cache files could influence the kubectl utility in a negative way for other users.\r\n\r\nWhat you expected to happen: Apply stricter file permissions to the http-cache files.\r\n\r\nHow to reproduce it (as minimally and precisely as possible): Run any generic kubectl command which is successful and then list the cache directory ~/.kube/http-cache/*\r\n \r\n$ kubectl get pods --all-namespaces\r\n$ ls -la ~/.kube/http-cache/*\r\n\r\nAnything else we need to know?: I estimate this is a low severity security issue with a CVSS score of \"3.3 / CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N\" - https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N\r\n\r\nEnvironment: Linux\r\n \r\nKubernetes version (use kubectl version): Client Version: version.Info{Major:\"1\", Minor:\"12\", GitVersion:\"v1.12.6\", GitCommit:\"ab91afd7062d4240e95e51ac00a18bd58fddd365\", GitTreeState:\"clean\", BuildDate:\"2019-02-26T12:49:28Z\", GoVersion:\"go1.10.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\r\nServer Version: version.Info{Major:\"1\", Minor:\"12\", GitVersion:\"v1.12.6\", GitCommit:\"ab91afd7062d4240e95e51ac00a18bd58fddd365\", GitTreeState:\"clean\", BuildDate:\"2019-02-26T12:49:28Z\", GoVersion:\"go1.10.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\r\n\r\nCloud provider or hardware configuration: AWS. Running kube api server in hyperkube.\r\n\r\nOS (e.g: cat /etc/os-release):\r\nNAME=\"CentOS Linux\"\r\nVERSION=\"7.1808 (Core)\"\r\nID=\"centos\"\r\nID_LIKE=\"rhel fedora\"\r\nVERSION_ID=\"7\"\r\nPRETTY_NAME=\"CentOS Linux 7.1808 (Core)\"\r\nANSI_COLOR=\"0;31\"\r\nCPE_NAME=\"cpe:/o:centos:centos:7\"\r\nHOME_URL=\"https://www.centos.org/\"\r\nBUG_REPORT_URL=\"https://bugs.centos.org/\"\r\nCENTOS_MANTISBT_PROJECT=\"CentOS-7\"\r\nCENTOS_MANTISBT_PROJECT_VERSION=\"7\"\r\nREDHAT_SUPPORT_PRODUCT=\"centos\"\r\nREDHAT_SUPPORT_PRODUCT_VERSION=\"7\"\r\nOSTREE_VERSION=7.1808\r\n \r\nKernel (e.g. uname -a): Linux hackit.internal 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n \r\nInstall tools: Manual installation.\r\n \r\nOthers: n/a\r\n","date_published":"2019-04-16T20:14:25Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-11244","id":"CVE-2019-11244","status":"fixed","summary":"`kubectl:-http-cache=\u003cworld-accessible dir\u003e` creates world-writeable cached schema files","url":"https://github.com/kubernetes/kubernetes/issues/76676"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2019-1002100","issue_number":74534},"content_text":"[CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H) (6.5, medium)\r\n\r\nUsers that are authorized to make patch requests to the Kubernetes API Server can send a specially crafted patch of type \"json-patch\" (e.g.
`kubectl patch --type json` or `\"Content-Type: application/json-patch+json\"`) that consumes excessive resources while processing, causing a Denial of Service on the API Server.\r\n\r\nThanks to Carl Henrik Lunde for reporting this problem.\r\n\r\nCVE-2019-1002100 is **fixed** in the following Kubernetes releases:\r\n\r\n* [v1.11.8](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md/#v1118)\r\n* [v1.12.6](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md/#v1126)\r\n* [v1.13.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md/#v1134)\r\n\r\n### Affected components:\r\n* Kubernetes API server\r\n\r\n### Affected versions:\r\n* Kubernetes v1.0.x-1.10.x\r\n* Kubernetes v1.11.0-1.11.7 (fixed in [v1.11.8](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md/#v1118))\r\n* Kubernetes v1.12.0-1.12.5 (fixed in [v1.12.6](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md/#v1126))\r\n* Kubernetes v1.13.0-1.13.3 (fixed in [v1.13.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md/#v1134))\r\n\r\n### Mitigations:\r\n* Remove \"patch\" permissions from untrusted users.\r\n\r\nNote: If you are using binaries or packages provided by a distributor (not the ones provided in the open source release artifacts), you should contact them to determine what versions resolve this CVE. Distributors may choose to provide support for older releases beyond the ones maintained by the open source project.\r\n\r\n### Post-mortem:\r\n* [Document](https://github.com/kubernetes/kubernetes/files/3005552/PM-CVE-2019-1002100.pdf)\r\n","date_published":"2019-02-25T19:39:09Z","external_url":"https://www.cve.org/cverecord?id=CVE-2019-1002100","id":"CVE-2019-1002100","status":"fixed","summary":"json-patch requests can exhaust apiserver resources","url":"https://github.com/kubernetes/kubernetes/issues/74534"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2018-1002105","issue_number":71411},"content_text":"[CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H) (9.8, critical)\r\n\r\nWith a specially crafted request, users that are authorized to establish a connection through the Kubernetes API server to a backend server can then send arbitrary requests over the same connection directly to that backend, authenticated with the Kubernetes API server's TLS credentials used to establish the backend connection.\r\n\r\nThanks to Darren Shepherd for reporting this problem.\r\n\r\nCVE-2018-1002105 is **fixed** in the following Kubernetes releases:\r\n\r\n* [v1.10.11](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md/#v11011)\r\n* [v1.11.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md/#v1115)\r\n* [v1.12.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md/#v1123)\r\n* [v1.13.0-rc.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md/#v1130-rc1)\r\n\r\n### Affected components:\r\n\r\n* Kubernetes API server\r\n\r\n### Affected versions:\r\n\r\n* Kubernetes v1.0.x-1.9.x\r\n* Kubernetes v1.10.0-1.10.10 (fixed in [v1.10.11](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md/#v11011))\r\n* Kubernetes v1.11.0-1.11.4 (fixed in [v1.11.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md/#v1115))\r\n* Kubernetes v1.12.0-1.12.2 (fixed in
[v1.12.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md/#v1123))\r\n\r\nNote: If you are using binaries or packages provided by a distributor (not the ones provided in the open source release artifacts), you should contact them to determine what versions resolve this CVE. Distributors may choose to provide support for older releases beyond the ones maintained by the open source project.\r\n\r\n### Affected configurations:\r\n\r\n* Clusters \u003e= 1.6.x that run aggregated API servers (like the metrics server) that are directly accessible from the Kubernetes API server's network. If there are aggregated API servers configured in a cluster, the following command will return the names of the associated APIService objects (if no names are listed, or the kube-apiserver is an older version that does not have the apiservices API, then the cluster has no aggregated API servers configured):\r\n ```\r\n kubectl get apiservices \\\r\n -o 'jsonpath={range .items[?(@.spec.service.name!=\"\")]}{.metadata.name}{\"\\n\"}{end}'\r\n ```\r\n* Clusters \u003e= 1.0.x that grant pod exec/attach/portforward permissions to users that are not expected to have full access to kubelet APIs\r\n\r\n### Vulnerability impact:\r\n\r\n* An API call to any aggregated API server endpoint can be escalated to perform any API request against that aggregated API server, as long as that aggregated API server is directly accessible from the Kubernetes API server's network. **[Default RBAC policy](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles) allows all users (authenticated and unauthenticated) to perform discovery API calls that allow this escalation against any aggregated API servers configured in the cluster.**\r\n* A pod exec/attach/portforward API call can be escalated to perform any API request against the kubelet API on the node specified in the pod spec (e.g. listing all pods on the node, running arbitrary commands inside those pods, and obtaining the command output). **Pod exec/attach/portforward permissions are included in the admin/edit RBAC roles intended for namespace-constrained users.**\r\n\r\n### Mitigations:\r\n\r\nThis section lists possible mitigations to use prior to upgrading.
Note that many of the mitigations are likely to be disruptive, and upgrading to a fixed version is strongly recommended.\r\n\r\n#### Mitigations for the anonymous user -\u003e aggregated API server escalation include:\r\n* suspend use of aggregated API servers (note that this will disrupt users of the APIs provided by the aggregated server)\r\n* disable anonymous requests by passing `--anonymous-auth=false` to the kube-apiserver (note that this may disrupt load balancer or kubelet health checks of the kube-apiserver, and breaks `kubeadm join` setup flows)\r\n* remove *all* anonymous access to *all* aggregated APIs (including discovery permissions granted by the [default discovery role bindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles))\r\n\r\n#### Mitigations for the authenticated user -\u003e aggregated API server escalation include:\r\n* suspend use of aggregated API servers (note that this will disrupt users of the APIs provided by the aggregated server)\r\n* remove *all* access to *all* aggregated APIs (including discovery permissions granted by the [default discovery role bindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles)) from users that should not have full access to the aggregated APIs (note that this may disrupt users and controllers that make use of discovery information to map API types to URLs)\r\n\r\n#### Mitigation for the authorized pod exec/attach/portforward -\u003e kubelet API escalation:\r\n* Remove pod exec/attach/portforward permissions from users that should not have full access to the kubelet API\r\n\r\n### Detection:\r\n\r\nThere is no simple way to detect whether this vulnerability has been used. Because the unauthorized requests are made over an established connection, they do not appear in the Kubernetes API server audit logs or server log. 
\r\n\r\n### Detection:\r\n\r\nThere is no simple way to detect whether this vulnerability has been used. Because the unauthorized requests are made over an established connection, they do not appear in the Kubernetes API server audit logs or server log. The requests do appear in the kubelet or aggregated API server logs, but are indistinguishable from correctly authorized and proxied requests via the Kubernetes API server.\r\n\r\n### Post-mortem:\r\n\r\n* [Document](https://github.com/kubernetes/kubernetes/files/2700818/PM-CVE-2018-1002105.pdf)\r\n* [Recorded meeting](https://youtu.be/1M4oXPgxYyE)","date_published":"2018-11-26T11:07:36Z","external_url":"https://www.cve.org/cverecord?id=CVE-2018-1002105","id":"CVE-2018-1002105","status":"fixed","summary":"proxy request handling in kube-apiserver can leave vulnerable TCP connections","url":"https://github.com/kubernetes/kubernetes/issues/71411"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2018-1002101","issue_number":65750},"content_text":"This issue is tracked under CVE-2018-1002101\r\n\r\n**Is this a BUG REPORT or FEATURE REQUEST?**:\r\n/kind bug\r\n\r\n**What happened**:\r\nUse PowerShell environment variables to store user-supplied input strings, preventing command-line injection: an environment variable in PowerShell is treated as a literal value rather than executable code, so attacker-controlled input cannot inject additional commands. This kind of fix is common for command-line injection issues (the \"parameterized\" approach).
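\r\n\r\nA POSIX-shell analogue of the pattern, for illustration only (the actual fix applies to PowerShell on Windows nodes; the command and variable names here are hypothetical):\r\n```\r\n# Vulnerable pattern: user input is spliced into a command string and re-parsed,\r\n# so input such as '; rm -rf /' executes as code.\r\nsh -c \"mount -o username=$USER_INPUT //server/share /mnt\"\r\n# Parameterized pattern: pass the value via the environment; the child shell\r\n# expands it as literal data and never re-parses it as code.\r\nSMB_USER=\"$USER_INPUT\" sh -c 'mount -o \"username=$SMB_USER\" //server/share /mnt'\r\n```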
\r\n\r\n/sig windows\r\n/sig storage\r\n/assign","date_published":"2018-07-03T08:06:15Z","external_url":"https://www.cve.org/cverecord?id=CVE-2018-1002101","id":"CVE-2018-1002101","status":"fixed","summary":"smb mount security issue","url":"https://github.com/kubernetes/kubernetes/issues/65750"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2018-1002100","issue_number":61297},"content_text":"**Is this a BUG REPORT or FEATURE REQUEST?**: Bug\r\n\r\n/kind bug\r\n\r\n**What happened**:\r\nkubectl cp \u003cpod-name\u003e:/some/remote/dir /some/local/dir\r\n\r\nIf the container returns a malformed tarfile with paths like:\r\n\r\n'/some/remote/dir/../../../../tmp/foo' kubectl writes this to `/tmp/foo` instead of `/some/local/dir/tmp/foo`\r\n\r\n**What you expected to happen**:\r\n\r\nI expect kubectl to clean up the path and write to `/some/local/dir/tmp/foo`\r\n\r\n**Notes**\r\nCredit to @hansmi (Michael Hanselmann) for originally reporting the bug.\r\n\r\nTracked as CVE-2018-1002100\r\n","date_published":"2018-03-16T19:24:46Z","external_url":"https://www.cve.org/cverecord?id=CVE-2018-1002100","id":"CVE-2018-1002100","status":"fixed","summary":"Kubectl copy doesn't check for paths outside of its destination directory.","url":"https://github.com/kubernetes/kubernetes/issues/61297"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2017-1002102","issue_number":60814},"content_text":"[CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:H](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:H)\r\n\r\nThis vulnerability allows containers using a secret, configMap, projected or downwardAPI volume to trigger deletion of arbitrary files and directories on the nodes where they are running.\r\n\r\nThanks to Joel Smith of Red Hat for reporting this problem.\r\n\r\n**Vulnerable versions:**\r\n* Kubernetes 1.3.x-1.6.x\r\n* Kubernetes 1.7.0-1.7.13\r\n* Kubernetes 1.8.0-1.8.8\r\n* Kubernetes 1.9.0-1.9.3\r\n\r\n**Vulnerable configurations:**\r\n* Clusters that run untrusted containers with secret, configMap, downwardAPI or projected volumes mounted (including auto-added service account token mounts).\r\n\r\n**Vulnerability impact:**\r\nA malicious container running in a pod with a secret, configMap, downwardAPI or projected volume mounted (including auto-added service account token mounts) can cause the Kubelet to remove any file or directory on the host filesystem.\r\n\r\n**Mitigations prior to upgrading:**\r\nDo not allow containers to be run with secret, configMap, downwardAPI or projected volumes (note that this prevents use of service account tokens in pods, and requires use of [`automountServiceAccountToken: false`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server))\r\n\r\n**Fixed versions:**\r\n* Fixed in v1.7.14 by #60516\r\n* Fixed in v1.8.9 by #60515\r\n* Fixed in v1.9.4 by #60258\r\n* Fixed in master by #58720 (included in v1.10.0-beta.1 and up, will be in v1.10.0)\r\n\r\n**Fix impact:**\r\nSecret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location.
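\r\n\r\nFor example, after upgrading, a write into such a volume fails and should be redirected to a writable mount (illustrative commands; the pod name and mount paths are hypothetical):\r\n```\r\n# Writing into a secret/configMap/downwardAPI/projected mount now fails with a\r\n# read-only filesystem error:\r\nkubectl exec mypod -- sh -c 'echo derived \u003e /etc/secret-volume/out'\r\n# Write derived files to a writable volume (e.g. an emptyDir mounted at\r\n# /scratch) instead:\r\nkubectl exec mypod -- sh -c 'echo derived \u003e /scratch/out'\r\n```\r\n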
","date_published":"2018-03-05T20:55:20Z","external_url":"https://www.cve.org/cverecord?id=CVE-2017-1002102","id":"CVE-2017-1002102","status":"fixed","summary":"atomic writer volume handling allows arbitrary file deletion in host filesystem","url":"https://github.com/kubernetes/kubernetes/issues/60814"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2017-1002101","issue_number":60813},"content_text":"[CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H)\r\n\r\nThis vulnerability allows containers using [subpath volume mounts](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath) with any volume type (including non-privileged pods, subject to file permissions) to access files/directories outside of the volume, including the host's filesystem.\r\n\r\nThanks to Maxim Ivanov for reporting this problem.\r\n\r\n**Vulnerable versions:**\r\n* Kubernetes 1.3.x-1.6.x\r\n* Kubernetes 1.7.0-1.7.13\r\n* Kubernetes 1.8.0-1.8.8\r\n* Kubernetes 1.9.0-1.9.3\r\n\r\n**Vulnerable configurations:**\r\n* Clusters that allow untrusted users to control pod spec content, and prevent host filesystem access via hostPath volumes (or other volume types) using PodSecurityPolicy (or custom admission plugins)\r\n* Clusters that make use of [subpath volume mounts](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath) with untrusted containers or containers that can be compromised (a sketch for enumerating such pods follows the fixed-version list below)\r\n\r\n**Vulnerability impact:**\r\nA specially crafted pod spec combined with malicious container behavior can allow read/write access to arbitrary files outside volumes specified in the pod, including the host's filesystem. This can be accomplished with any volume type, including emptyDir, and can be accomplished with a non-privileged pod (subject to file permissions).\r\n\r\n**Mitigations prior to upgrading:**\r\nPrevent untrusted users from creating pods (and pod-creating objects like deployments, replicasets, etc), or [disable all volume types](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems) with PodSecurityPolicy (note that this prevents use of service account tokens in pods, and requires use of [`automountServiceAccountToken: false`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server))\r\n\r\n**Fixed versions:**\r\n* Fixed in v1.7.14 by #61047\r\n* Fixed in v1.8.9 by #61046\r\n* Fixed in v1.9.4 by #61045\r\n* Fixed in master by #61044 (included in v1.10.0-beta.3, will be in v1.10.0)
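\r\n\r\nA minimal sketch for enumerating pods that use subPath volume mounts, assuming jq is available (illustrative only; init containers are not checked):\r\n```\r\n# Lists namespace/name for pods whose containers declare a subPath mount.\r\nkubectl get pods --all-namespaces -o json | jq -r '.items[] | select([.spec.containers[].volumeMounts[]?.subPath] | any(. != null)) | .metadata.namespace + \"/\" + .metadata.name'\r\n```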
\r\n\r\n**Action Required:**\r\nIn addition to upgrading, PodSecurityPolicy objects designed to limit container permissions must completely [disable hostPath volumes](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems), as the allowedHostPaths feature does not restrict symlink creation and traversal. Future enhancements (tracked in issue #61043) are required to limit hostPath use to read-only volumes or exact path matches before a PodSecurityPolicy can effectively restrict hostPath usage to a given subpath.\r\n\r\n**Known issues:**\r\n* Status and availability of fixes for regressions in subPath volume mount handling are tracked in https://github.com/kubernetes/kubernetes/issues/61563","date_published":"2018-03-05T20:53:58Z","external_url":"https://www.cve.org/cverecord?id=CVE-2017-1002101","id":"CVE-2017-1002101","status":"fixed","summary":"subpath volume mount handling allows arbitrary file access in host filesystem","url":"https://github.com/kubernetes/kubernetes/issues/60813"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2017-1002100","issue_number":47611},"content_text":"","date_published":"2017-06-15T18:59:13Z","external_url":"https://www.cve.org/cverecord?id=CVE-2017-1002100","id":"CVE-2017-1002100","status":"fixed","summary":"Azure PV should be Private scope not Container scope","url":"https://github.com/kubernetes/kubernetes/issues/47611"},{"_kubernetes_io":{"google_group_url":"https://groups.google.com/g/kubernetes-announce/search?q=CVE-2017-1000056","issue_number":43459},"content_text":"A PodSecurityPolicy admission plugin vulnerability allows users to make use of any PodSecurityPolicy object, even ones they are not authorized to use.\r\n\r\nCVE: CVE-2017-1000056\r\n\r\n* Fixed in [v1.5.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.5.5) in https://github.com/kubernetes/kubernetes/commit/7fef0a4f6a44ea36f166c39fdade5324eff2dd5e\r\n* Fixed in release-1.5 branch in https://github.com/kubernetes/kubernetes/pull/43491\r\n* Fixed in master in https://github.com/kubernetes/kubernetes/pull/43489\r\n\r\n**Who is affected?**\r\nOnly Kubernetes 1.5.0-1.5.4 installations that do _all_ of the following:\r\n* Enable the PodSecurityPolicy API (which is not enabled by default):\r\n `--runtime-config=extensions/v1beta1/podsecuritypolicy=true`\r\n* Enable the PodSecurityPolicy admission plugin (which is not enabled by default):\r\n `--admission-control=...,PodSecurityPolicy,...`\r\n* Use authorization to limit users' ability to use specific PodSecurityPolicy objects\r\n\r\nkubeadm and GKE do not allow enabling PodSecurityPolicy in 1.5, so are not affected by this vulnerability.\r\n\r\nkube-up.sh and kops do not enable PodSecurityPolicy by default, so are not affected by this vulnerability. A modified kube-up.sh or kops deployment could have enabled it.\r\n\r\n**What is the impact?**\r\nA user that is authorized to create pods can make use of any existing PodSecurityPolicy, even ones they are not authorized to use.\r\n\r\n**How can I mitigate this prior to installing 1.5.5?**\r\n1. Export existing PodSecurityPolicy objects:\r\n `kubectl get podsecuritypolicies -o yaml \u003e psp.yaml`\r\n\r\n2. Review and delete any PodSecurityPolicy objects you do not want all pod-creating users to be able to use (NOTE: Privileged users that were making use of those policies will also lose access to those policies). For example:\r\n `kubectl delete podsecuritypolicies/my-privileged-policy`\r\n\r\n3. 
After upgrading to 1.5.5, re-create the exported PodSecurityPolicy objects:\r\n `kubectl create -f psp.yaml`","date_published":"2017-03-21T15:22:29Z","external_url":"https://www.cve.org/cverecord?id=CVE-2017-1000056","id":"CVE-2017-1000056","status":"fixed","summary":"PodSecurityPolicy admission plugin authorizes incorrectly","url":"https://github.com/kubernetes/kubernetes/issues/43459"}],"title":"Kubernetes Vulnerability Announcements - CVE Feed","version":"https://jsonfeed.org/version/1.1"}