Using server-side-apply for an Ingress failed when updating IngressBackend from port number to port name #125869

Closed
ahus1 opened this issue Jul 3, 2024 · 9 comments · Fixed by #126207
Assignees: thockin
Labels: kind/bug, sig/network, triage/accepted

Comments

ahus1 commented Jul 3, 2024

What happened?

I created an Ingress resource with a port number manually, and then tried to use an Operator with server-side apply to update the resource from a port number to a port name. This failed with the error message "cannot set both port name & port number".

What did you expect to happen?

I expected the port number to be cleared, since the operator would take over ownership of the ServiceBackendPort.

How can we reproduce it (as minimally and precisely as possible)?

To simulate the operator's behavior with a minimal example, I use kubectl:

ingress-v1.yaml:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: myingress
spec:
  defaultBackend:
    service:
      name: myservice
      port:
        number: 8443

ingress-v2.yaml:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: myingress
spec:
  defaultBackend:
    service:
      name: myservice
      port:
        name: http

Then apply both files with different field managers:

kubectl apply -f ingress-v1.yaml --server-side --field-manager=app-1 --validate=false
kubectl apply -f ingress-v2.yaml --server-side --field-manager=app-2 --validate=false --force-conflicts

This will print:

The Ingress "myingress" is invalid: spec.defaultBackend: Invalid value: "": cannot set both port name & port number
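
For context, the rejection comes from the API server's validation of the Ingress backend. A simplified Go sketch of that check (an illustration only, not the exact upstream code) shows why the merged object fails: app-1's number is still present next to app-2's name after the field-by-field merge.

package main

import (
    "errors"
    "fmt"
)

// validateBackendPort is a simplified sketch of the check the API server runs
// on an Ingress backend port; the real validation in k8s.io/kubernetes is more
// thorough, but the mutual-exclusion rule is the part that matters here.
func validateBackendPort(name string, number int32) error {
    if name != "" && number != 0 {
        return errors.New("cannot set both port name & port number")
    }
    if name == "" && number == 0 {
        return errors.New("port name or port number is required")
    }
    return nil
}

func main() {
    // After the granular SSA merge, app-1's number (8443) is still set
    // alongside app-2's name ("http"), so the update is rejected.
    fmt.Println(validateBackendPort("http", 8443))
}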

Anything else we need to know?

It seems to me that the merge type for the struct ServiceBackendPort should be set to "atomic" according to this doc, so that an operator updating the port takes over the whole struct and two applications can't each own individual fields in it.

type IngressServiceBackend struct {
    // name is the referenced service. The service must exist in
    // the same namespace as the Ingress object.
    Name string `json:"name" protobuf:"bytes,1,opt,name=name"`

    // port of the referenced service. A port name or port number
    // is required for a IngressServiceBackend.
    Port ServiceBackendPort `json:"port,omitempty" protobuf:"bytes,2,opt,name=port"`
}

// ServiceBackendPort is the service port being referenced.
type ServiceBackendPort struct {
    // name is the name of the port on the Service.
    // This is a mutually exclusive setting with "Number".
    // +optional
    Name string `json:"name,omitempty" protobuf:"bytes,1,opt,name=name"`

    // number is the numerical port number (e.g. 80) on the Service.
    // This is a mutually exclusive setting with "Name".
    // +optional
    Number int32 `json:"number,omitempty" protobuf:"bytes,2,opt,name=number"`
}

Kubernetes version

$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.9+416ecaf

Cloud provider

Red Hat OpenShift on AWS

OS version

OS version of the client:

NAME="Fedora Linux"
VERSION="40 (Workstation Edition)"

Linux fedora.fritz.box 6.9.6-200.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jun 21 15:48:21 UTC 2024 x86_64 GNU/Linux


ahus1 added the kind/bug label on Jul 3, 2024
k8s-ci-robot added the needs-sig and needs-triage labels on Jul 3, 2024
ahus1 (Author) commented Jul 3, 2024

/sig network

k8s-ci-robot added the sig/network label and removed the needs-sig label on Jul 3, 2024
thockin self-assigned this on Jul 18, 2024
thockin (Member) commented Jul 18, 2024

So, on one hand you created the problem by using two different owners to manage one aspect of the object. On the other hand, I think you're right that it SHOULD work. Probably.
+structType is not a super common tag, and Ingress long predates it, so it's no surprise it isn't used here.

I ACK the issue and reproduced it.

$ git diff
diff --git a/staging/src/k8s.io/api/networking/v1/types.go b/staging/src/k8s.io/api/networking/v1/types.go
index 4ae0be1a366..5db47530fe9 100644
--- a/staging/src/k8s.io/api/networking/v1/types.go
+++ b/staging/src/k8s.io/api/networking/v1/types.go
@@ -531,6 +531,7 @@ type IngressServiceBackend struct {
 }
 
 // ServiceBackendPort is the service port being referenced.
+// +structType=atomic
 type ServiceBackendPort struct {
        // name is the name of the port on the Service.
        // This is a mutually exclusive setting with "Number".

I ran update-codegen and built a KinD cluster from HEAD.

$ k apply -f /tmp/1.yaml --server-side --field-manager=app-1 --validate=false -o yaml --show-managed-fields
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2024-07-18T17:28:03Z"
  generation: 1
  managedFields:
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:defaultBackend:
          f:service:
            f:name: {}
            f:port: {}
    manager: app-1
    operation: Apply
    time: "2024-07-18T17:28:03Z"
  name: myingress
  namespace: default
  resourceVersion: "483"
  uid: eb2741c9-69cc-4626-a22f-fe1caca9607f
spec:
  defaultBackend:
    service:
      name: myservice
      port:
        number: 8443
status:
  loadBalancer: {}

$ k apply -f /tmp/2.yaml --server-side --field-manager=app-2 --validate=false -o yaml --show-managed-fields --force-conflicts
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2024-07-18T17:28:03Z"
  generation: 2
  managedFields:
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:defaultBackend:
          f:service:
            f:name: {}
    manager: app-1
    operation: Apply
    time: "2024-07-18T17:28:03Z"
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:defaultBackend:
          f:service:
            f:name: {}
            f:port: {}
    manager: app-2
    operation: Apply
    time: "2024-07-18T17:28:28Z"
  name: myingress
  namespace: default
  resourceVersion: "517"
  uid: eb2741c9-69cc-4626-a22f-fe1caca9607f
spec:
  defaultBackend:
    service:
      name: myservice
      port:
        name: http
status:
  loadBalancer: {}

thockin (Member) commented Jul 18, 2024

@pohly this may be something you want to look at for DRA API - I didn't even realize this was a thing until today :)

jpbetz (Contributor) commented Jul 18, 2024

The API uses +structType=atomic for EndpointPort, which is very similar. So using it for ServiceBackendPort LGTM.

SSA will auto-migrate existing resources using kubernetes-sigs/structured-merge-diff#170 once the +structType=atomic change is released.

aojea (Member) commented Jul 18, 2024

/triage accepted

k8s-ci-robot added the triage/accepted label and removed the needs-triage label on Jul 18, 2024
pohly (Contributor) commented Jul 19, 2024

> this may be something you want to look at for DRA API

"this" being "add +structType=atomic"?

I can see where this might be useful, probably on all one-of structs and perhaps on those containing a discriminator, because otherwise SSA leaves fields set that should be cleared when some other owner takes over.
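
As an illustration of that pattern, here is a made-up one-of struct (not an actual Kubernetes or DRA type) with the marker applied:

package example

// WidgetSource is a hypothetical one-of: exactly one field may be set.
// Marking it atomic means a server-side apply replaces the whole struct,
// so a new owner's choice clears whatever the previous owner had set.
// +structType=atomic
type WidgetSource struct {
    // configMapName and secretName are mutually exclusive.
    // +optional
    ConfigMapName string `json:"configMapName,omitempty"`
    // +optional
    SecretName string `json:"secretName,omitempty"`
}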

Is this important enough to put into #125488? I don't see a big need for SSA in general and even less for SSA with different owners at the moment - this will change once we do more with status.

thockin (Member) commented Jul 19, 2024

No, not urgent. This is the first time it has come up in my awareness ever. But now that I know about it, I could see how it applies to DRA.

ahus1 (Author) commented Jul 20, 2024

Thanks to everyone involved here for resolving this! This will make things simpler in the Keycloak operator once the fix is released.

thockin (Member) commented Jul 20, 2024

Thanks for filing a good (and easy to fix :) bug. Easy repro means we can jump on it.
