
[BUG] Job finishes without the pods being ready #600

Open
josecastillolema opened this issue Feb 26, 2024 · 11 comments
Labels
bug Something isn't working

Comments

@josecastillolema
Contributor

Bug Description

Job finishes without the pods being ready.
This looks like a regression: kube-burner 1.7.12 is fine, but 1.9.3 is affected.

Output of kube-burner version

  • 1.7.12
  • 1.9.3

Describe the bug

With kube-burner 1.9.3:

time="2024-02-26 09:58:12" level=info msg="serving-job: PodScheduled 50th: 0 99th: 0 max: 0 avg: 0" file="pod_latency.go:242"
time="2024-02-26 09:58:12" level=info msg="serving-job: ContainersReady 50th: 7000 99th: 7000 max: 7000 avg: 7000" file="pod_latency.go:242"
time="2024-02-26 09:58:12" level=info msg="serving-job: Initialized 50th: 0 99th: 0 max: 0 avg: 0" file="pod_latency.go:242"
time="2024-02-26 09:58:12" level=info msg="serving-job: Ready 50th: 7000 99th: 7000 max: 7000 avg: 7000" file="pod_latency.go:242"
time="2024-02-26 09:58:12" level=info msg="serving-job: PodScheduled 50th: 0 99th: 0 max: 0 avg: 0" file="pod_latency.go:242"
time="2024-02-26 09:58:12" level=info msg="serving-job: ContainersReady 50th: 7000 99th: 7000 max: 7000 avg: 7000" file="pod_latency.go:242"
time="2024-02-26 09:58:12" level=info msg="serving-job: Initialized 50th: 0 99th: 0 max: 0 avg: 0" file="pod_latency.go:242"
time="2024-02-26 09:58:12" level=info msg="serving-job: Ready 50th: 7000 99th: 7000 max: 7000 avg: 7000" file="pod_latency.go:242"
time="2024-02-26 09:58:12" level=info msg="Finished execution with UUID: 1234" file="job.go:265"
time="2024-02-26 09:58:12" level=info msg="👋 Exiting kube-burner 1234" file="kube-burner.go:87"
NAMESPACE            NAME                                                  READY   STATUS              RESTARTS   AGE     IP              NODE                NOMINATED NODE   READINESS GATES
serving-ns-0         dep-serving-0-1-serving-job-69bfddf86d-rd6nh          0/1     Pending             0          0s      <none>          ovn-worker          <none>           <none>
serving-ns-0         dep-serving-0-2-serving-job-588756db56-hmr9t          0/1     Pending             0          0s      <none>          ovn-worker          <none>           <none>
serving-ns-0         dep-serving-0-3-serving-job-5f68f8f6d9-mlspr          0/1     Pending             0          0s      <none>          ovn-worker          <none>           <none>
serving-ns-0         dep-serving-0-4-serving-job-6dfb65c666-gp955          0/1     ContainerCreating   0          0s      <none>          ovn-worker          <none>           <none>

Expected behavior

With kube-burner 1.7.12:

time="2024-02-26 03:10:29" level=info msg="serving-job: Initialized 50th: 0 99th: 0 max: 0 avg: 0" file="pod_latency.go:181"
time="2024-02-26 03:10:29" level=info msg="serving-job: Ready 50th: 6000 99th: 6000 max: 6000 avg: 5750" file="pod_latency.go:181"
time="2024-02-26 03:10:29" level=info msg="serving-job: PodScheduled 50th: 0 99th: 0 max: 0 avg: 0" file="pod_latency.go:181"
time="2024-02-26 03:10:29" level=info msg="serving-job: ContainersReady 50th: 6000 99th: 6000 max: 6000 avg: 5750" file="pod_latency.go:181"
time="2024-02-26 03:10:29" level=info msg="Pod latencies error rate was: 0.00" file="pod_latency.go:184"
time="2024-02-26 03:10:29" level=info msg="Finished execution with UUID: 1234" file="job.go:247"
time="2024-02-26 03:10:29" level=info msg="👋 Exiting kube-burner 1234" file="kube-burner.go:98"
NAMESPACE            NAME                                                   READY   STATUS    RESTARTS       AGE     IP              NODE                NOMINATED NODE   READINESS GATES
serving-ns-0         dep-serving-0-1-serving-job-74d85dc964-g6j7n           1/1     Running   0              7s      10.244.1.3      ovn-worker          <none>           <none>
serving-ns-0         dep-serving-0-2-serving-job-75c96944b4-fg4pd           1/1     Running   0              7s      10.244.1.4      ovn-worker          <none>           <none>
serving-ns-0         dep-serving-0-3-serving-job-744fbd4fc9-ww62s           1/1     Running   0              7s      10.244.1.5      ovn-worker          <none>           <none>
serving-ns-0         dep-serving-0-4-serving-job-56cfc86d4b-crxf7           1/1     Running   0              7s      10.244.1.6      ovn-worker          <none>           <none>
@josecastillolema josecastillolema added the bug Something isn't working label Feb 26, 2024
@rsevilla87
Member

What workload did you use to reproduce this? Can you paste the relevant lines?

@josecastillolema
Contributor Author

@rsevilla87
Member

https://github.com/redhat-performance/web-burner/blob/main/.github/workflows/ci.yml#L106

I think I've found the cause of this issue... it was introduced by #533.

The problem is that pod_serving specifies the namespace field (https://github.com/redhat-performance/web-burner/blob/main/objectTemplates/pod_serving.yml#L6), and the value of that field uses the template variable {{ .Iteration }}. With the changes introduced by the PR linked above, kube-burner waits for the objects in the namespace given by the metadata.namespace field when it exists. Since that field uses a template variable, it isn't rendered until the object is created, so kube-burner looked for the objects in the wrong namespace and concluded they were ready.
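For illustration, a minimal sketch of the problematic pattern; the field values here are assumptions inferred from the pod names in the output above, not the actual contents of pod_serving.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  # Hypothetical names, reconstructed from the pod listing above.
  name: dep-serving-{{ .Iteration }}-{{ .Replica }}-serving-job
  # Rendered only at creation time; any logic that reads this field before
  # rendering sees the literal template string rather than e.g. serving-ns-0.
  namespace: serving-ns-{{ .Iteration }}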

One thing you can do to fix this is to remove this line (https://github.com/redhat-performance/web-burner/blob/main/objectTemplates/pod_serving.yml#L6) from the object template, as I think it is redundant with the job's namespace field (https://github.com/redhat-performance/web-burner/blob/main/workload/cfg_icni2_serving_resource_init.yml#L179).
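A hedged sketch of how that job-level namespace field is typically used in a kube-burner workload config; the job name and values here are assumptions, not the actual contents of cfg_icni2_serving_resource_init.yml:

jobs:
  - name: serving-job
    # With namespacedIterations, kube-burner itself creates and tracks
    # namespaces named <namespace>-<iteration> (e.g. serving-ns-0), so the
    # object template does not need its own metadata.namespace field.
    namespace: serving-ns
    namespacedIterations: true
    jobIterations: 1
    objects:
      - objectTemplate: objectTemplates/pod_serving.yml
        replicas: 4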

@rsevilla87
Member

A medium-term solution is to improve the logic to figure out in which namespace the objects were actually created, rather than deducing it from the YAML definitions.

@josecastillolema
Contributor Author

@josecastillolema
Contributor Author

@vishnuchalla I think @rsevilla87 planned to keep this issue open to track the medium-term solution.

@vishnuchalla
Collaborator

Got it. Sorry, I thought it was stale.

@vishnuchalla vishnuchalla reopened this Mar 4, 2024

github-actions bot commented Jun 3, 2024

This issue has become stale and will be closed automatically within 7 days.

@github-actions github-actions bot added the stale Stale issue label Jun 3, 2024
@github-actions github-actions bot closed this as not planned Jun 11, 2024
@vishnuchalla vishnuchalla reopened this Jun 11, 2024
@github-actions github-actions bot removed the stale Stale issue label Jun 12, 2024
github-actions bot commented Sep 10, 2024

This issue has become stale and will be closed automatically within 7 days.

@github-actions github-actions bot added the stale Stale issue label Sep 10, 2024
@vishnuchalla
Collaborator

@josecastillolema @rsevilla87 Is this still applicable?

@github-actions github-actions bot removed the stale Stale issue label Sep 11, 2024
github-actions bot commented Dec 10, 2024

This issue has become stale and will be closed automatically within 7 days.

@github-actions github-actions bot added the stale Stale issue label Dec 10, 2024
@rsevilla87 rsevilla87 removed the stale Stale issue label Dec 10, 2024