kyaml is not respecting $patch replace|retainKeys directives #2037
I know it can be achieved by applying a JSON patch replace op twice, like this:

```yaml
# patch1.yaml
- op: replace
  path: /spec/template/spec/volumes/0
  value:
    name: kafka-broker01
    persistentVolumeClaim:
      claimName: kafka-broker01
```

```yaml
# patch2.yaml
- op: replace
  path: /spec/template/spec/volumes/0
  value:
    name: kafka-broker02
    persistentVolumeClaim:
      claimName: kafka-broker02
```

Is there a more convenient way?
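For context, wiring those two JSON patches into a build would look something like the sketch below; the kustomization layout and target names are assumptions based on the snippet above, not taken from the original report.

```yaml
# kustomization.yaml (assumed wiring for patch1.yaml / patch2.yaml)
resources:
- ../base
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: kafka-broker01
  path: patch1.yaml
- target:
    group: apps
    version: v1
    kind: Deployment
    name: kafka-broker02
  path: patch2.yaml
```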
After searching for information and testing, I found two methods:

```yaml
# patch.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker01
spec:
  template:
    spec:
      volumes:
      - name: kafka-broker01
        emptyDir: null # method 1
        persistentVolumeClaim:
          claimName: kafka-broker01
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker02
spec:
  template:
    spec:
      volumes:
      - name: kafka-broker02
        $patch: delete # method 2
      - name: kafka-broker02
        persistentVolumeClaim:
          claimName: kafka-broker02
```
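Applying that patch file is then a matter of listing it under patchesStrategicMerge; a minimal kustomization sketch, assuming the layout below (some kustomize versions may want each Deployment patch in its own file):

```yaml
# kustomization.yaml (assumed layout)
resources:
- ../base
patchesStrategicMerge:
- patch.yaml
```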
I did some more experimenting... `$patch: replace` also has an unexpected outcome:

```yaml
# patch.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker02
spec:
  template:
    spec:
      volumes:
      - name: kafka-broker02
        $patch: replace
        persistentVolumeClaim:
          claimName: kafka-broker02
```

```yaml
# output.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-broker02
spec:
  replicas: 1
  template:
    spec:
      ...
      volumes: [] # all volumes gone (both base and patch)
```
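For comparison, the strategic-merge-patch docs describe `$patch: replace` as replacing the whole list with the patch's entries, so the expected result would look roughly like this (a sketch of the intended behaviour, not actual kustomize output):

```yaml
# expected.yaml (sketch): the volumes list replaced wholesale by the patch's entry
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-broker02
spec:
  replicas: 1
  template:
    spec:
      ...
      volumes:
      - name: kafka-broker02
        persistentVolumeClaim:
          claimName: kafka-broker02
```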
Experiencing similar behavior with 3.5.4.
From the k8s docs: …

That directive is broken for me too; I'm getting weird behaviour where it's deleting some of the other objects in the volumes list (but not all of them). Version: …

Here's a repo with a stripped-down repro scenario: https://github.com/paultiplady/kustomize-replace-directive-bug

I can confirm that manually removing the base data with …
Same. I have also worked around the problem with …
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale

Not on my watch.
This is very much an issue, I was able to reproduce it in Kustomize 3.8.1.

Raw Deployment:

```yaml
---
# Source: rancher/templates/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.4.6
    heritage: Helm
    release: rancher
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rancher
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rancher
        release: rancher
    spec:
      serviceAccountName: rancher
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rancher
              topologyKey: kubernetes.io/hostname
      containers:
      - image: rancher/rancher:v2.4.6
        imagePullPolicy: IfNotPresent
        name: rancher
        ports:
        - containerPort: 80
          protocol: TCP
        args:
        # Private CA - don't clear ca certs
        - "--http-listen-port=80"
        - "--https-listen-port=443"
        - "--add-local=auto"
        env:
        - name: CATTLE_NAMESPACE
          value: rancher-system
        - name: CATTLE_PEER_SERVICE
          value: rancher
        - name: AUDIT_LEVEL
          value: "1"
        - name: AUDIT_LOG_MAXAGE
          value: "1"
        - name: AUDIT_LOG_MAXBACKUP
          value: "1"
        - name: AUDIT_LOG_MAXSIZE
          value: "100"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 30
        resources:
          {}
        volumeMounts:
        # Pass CA cert into rancher for private CA
        - mountPath: /etc/rancher/ssl/cacerts.pem
          name: tls-ca-volume
          subPath: cacerts.pem
          readOnly: true
        - mountPath: /var/log/auditlog
          name: audit-log
      # Make audit logs available for Rancher log collector tools.
      - image: busybox
        name: rancher-audit-log
        command: ["tail"]
        args: ["-F", "/var/log/auditlog/rancher-api-audit.log"]
        volumeMounts:
        - mountPath: /var/log/auditlog
          name: audit-log
      volumes:
      - name: tls-ca-volume
        secret:
          defaultMode: 0400
          secretName: tls-ca
      - name: audit-log
        emptyDir: {}
```

Patch:

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
  # namespace: rancher-system
spec:
  template:
    spec:
      containers:
      - name: rancher
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-tls"
          nodePublishSecretRef:
            name: secrets-store-creds
      - name: tls-ca-volume
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-root-ca"
          nodePublishSecretRef:
            name: secrets-store-creds
```

Unexpected output:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rancher
    chart: rancher-2.4.6
    heritage: Helm
    release: rancher
  name: rancher
  namespace: rancher-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rancher
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rancher
        release: rancher
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rancher
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - --http-listen-port=80
        - --https-listen-port=443
        - --add-local=auto
        env:
        - name: CATTLE_NAMESPACE
          value: rancher-system
        - name: CATTLE_PEER_SERVICE
          value: rancher
        - name: AUDIT_LEVEL
          value: "1"
        - name: AUDIT_LOG_MAXAGE
          value: "1"
        - name: AUDIT_LOG_MAXBACKUP
          value: "1"
        - name: AUDIT_LOG_MAXSIZE
          value: "100"
        image: rancher/rancher:v2.4.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 30
        name: rancher
        ports:
        - containerPort: 80
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 30
        volumeMounts:
        - mountPath: /etc/rancher/ssl/cacerts.pem
          name: tls-ca-volume
          readOnly: true
          subPath: cacerts.pem
        - mountPath: /var/log/auditlog
          name: audit-log
        - mountPath: /mnt/secrets-store
          name: secrets-store-inline
          readOnly: true
      - args:
        - -F
        - /var/log/auditlog/rancher-api-audit.log
        command:
        - tail
        image: busybox
        name: rancher-audit-log
        volumeMounts:
        - mountPath: /var/log/auditlog
          name: audit-log
      serviceAccountName: rancher
      volumes:
      - csi:
          driver: secrets-store.csi.k8s.io
          nodePublishSecretRef:
            name: secrets-store-creds
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-root-ca
        name: tls-ca-volume
        secret:
          defaultMode: 256
          secretName: tls-ca
      - name: audit-log
      - csi:
          driver: secrets-store.csi.k8s.io
          nodePublishSecretRef:
            name: secrets-store-creds
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-tls
        name: secrets-store-inline
```

Which produces the following error: …
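For what it's worth, the null-out workaround from earlier in the thread could presumably be applied here as well; a sketch (not verified against this exact base) that nulls the secret source so only the csi source survives the merge:

```yaml
# patch.yaml (sketch): explicitly null out the base's secret source for tls-ca-volume
kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
spec:
  template:
    spec:
      volumes:
      - name: tls-ca-volume
        secret: null
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-root-ca"
          nodePublishSecretRef:
            name: secrets-store-creds
```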
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale

Try again
@StevenLiekens can you please clarify the solution you are looking for? It sounds like you've found multiple solutions, and the first one in fact has test coverage as of #3727. In other words, is this issue tracking the fact that to remove the emptyDir you need to do this:

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker01
spec:
  template:
    spec:
      volumes:
      - name: kafka-broker01
        emptyDir: null # method 1
        persistentVolumeClaim:
          claimName: kafka-broker01
```

rather than this?

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker01
spec:
  template:
    spec:
      volumes:
      - name: kafka-broker01
        $patch: replace
        persistentVolumeClaim:
          claimName: kafka-broker01
```

Or is there something else you're looking for?
@KnVerey yep, this is about not being able to replace the entire object graph without nulling out the emptyDir. I did not realize you can null out emptyDir and set other properties in a single patch, but I'm still unsure if that's what you want.
It seems …
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: …

You can: …

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: …

You can: …

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: …

You can: …

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The secret to success is …
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: …

You can: …

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: …

You can: …

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
This issue has not been updated in over 1 year, and should be re-triaged. You can: …

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted
tree: …

base content: …

overlay contents: …

```sh
cd overlays && kustomize build . > output.yaml
```

In the output, both the emptyDir and persistentVolumeClaim fields exist. How can I change a volume from emptyDir to a PVC using kustomize?
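A minimal, hypothetical layout matching the commands and question above (directory and file names are assumptions for illustration):

```
.
├── base
│   ├── deployment.yaml      # Deployment whose volume uses emptyDir
│   └── kustomization.yaml
└── overlays
    ├── kustomization.yaml   # references ../base and patch.yaml
    └── patch.yaml           # e.g. the emptyDir: null patch shown earlier in the thread
```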