A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | AA | AB | AC | AD | AE | AF | AG | AH | AI | AJ | AK | AL | AM | AN | AO | AP | AQ | AR | AS | AT | AU | AV | AW | AX | AY | AZ | BA | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | Instructions Please add a new column every week to track the testgrid for SIG Node CI meeting, notes: https://docs.google.com/document/d/1fb-ugvgdSVIkkuJ388_nhp2pBTy_4HEVg5848Xy7n5U | force system | |||||||||||||||||||||||||||||||||||||||||||||||||||
2 | 10/31/2022 | 10/18/2022 | 09/20/2022 | 09/14/2022 | 09/07/2022 | 08/30/2022 | 08/23/2022 | 08/08/2022 | 07/25/2022 | 07/18/2022 | 07/06/2022 | 06/28/2022 | 06/08/2022 | 06/01/2022 | 05/11/2022 | 05/04/2022 | 04/26/2022 | 04/20/2022 | 04/13/2022 | 04/04/2022 | 03/29/2022 | 03/23/2022 | 03/16/2022 | 03/09/2022 | 03/02/2022 | 02/23/2022 | 02/16/2022 | 02/09/2022 | 02/02/2022 | 01/26/2022 | 01/19/2022 | 01/12/2022 | 01/05/2022 | ||||||||||||||||||||
3 | SIG Node release blocking https://testgrid.k8s.io/sig-node-release-blocking | 🟢 | 🟢 - containerd tab is slightly flaky | 🟢 - containerd tab is slightly flaky | Failing device plugin test - https://github.com/kubernetes/kubernetes/issues/112612 | 🟢 - containerd tab is slightly flaky | 🟢 - a couple of tabs have successful tests but failing tasks - containerd tab is slightly flaky | 🟢 - node-kubelet-1.21 is removed - node-kubelet-serial-containerd are flaky but are passing most of the time | 🟢 (node-kubelet-1.21 and node-kubelet-serial-containerd are flaky but are passing most of the time) | 🟢 (node-kubelet-1.21 and node-kubelet-serial-containerd are flaky but are passing most of the time) | 🟢 | The node kubelet serial containerd tests are flakey | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 - One flake (E2eNode Suite: [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval in https://testgrid.k8s.io/sig-node-release-blocking#node-kubelet-serial-containerd) | 🟢 | 🟢 | 🟢 | 🟢 There were failures caused by https://github.com/kubernetes/kubernetes/issues/108774 But passing now as related changes were reverted | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | ||||||||||||||||||
4 | SIG Node critical https://testgrid.k8s.io/sig-node-critical | N/A | N/A | N/A - maybe we should delete this row? | N/A | N/A | N/A | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | https://testgrid.k8s.io/sig-node-release-blocking#node-kubelet-serial-containerd | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | Testgrid has been removed | https://testgrid.k8s.io/sig-node-release-blocking#node-kubelet-1.23-kubetest2 is red | https://testgrid.k8s.io/sig-node-critical#node-kubelet-1.23-kubetest2 is red | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | https://testgrid.k8s.io/sig-node-critical#containerd-NodeConformance failing | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | ||||||||||||||||||
5 | SIG Node kubelet https://testgrid.k8s.io/sig-node-kubelet | Fedora swap remains failing, pending https://github.com/kubernetes/test-infra/pull/27406 | Fedora swap remains failing even though https://github.com/kubernetes/test-infra/pull/27406 is merged (updated the comment to take another look) | Fedora swap remains failing, pending https://github.com/kubernetes/test-infra/pull/27406 | Fedora swap remains failing, pending https://github.com/kubernetes/test-infra/pull/27406 | Fedora swap remains failing, pending https://github.com/kubernetes/test-infra/pull/27406 | Fedora swap jobs are failing, created https://github.com/kubernetes/test-infra/pull/27406 to fix them. | Same as last week (failed since 07/01). Any existing issues to track? https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora-serial | Same as last week, tests are failing because they can't find an artifact: https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora-serial | Same as last week, tests are failing because they can't find an artifact: https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora eg: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora/1553682426050383872 W0731 10:16:15.133] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource: W0731 10:16:15.134] - The resource 'projects/k8s-infra-e2e-boskos-099/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found | Tests are failing because they can't find an artifact: https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora eg: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora/1553682426050383872 W0731 10:16:15.133] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource: W0731 10:16:15.134] - The resource 
'projects/k8s-infra-e2e-boskos-099/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found | Tests are failing because they can't find an artifact https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora/1549751579429572608 | - kubelet-gce-e2e-swap-ubuntu passes now - kubelet-gce-e2e-lock-contention passes now - kubelet-gce-e2e-swap-fedora and kubelet-gce-e2e-swap-fedora-serial have failed since 07/01 - TODO: create issues | Same as last week: Ubuntu-serial is green, but https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu ( https://github.com/kubernetes/kubernetes/issues/107412 ) and https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-lock-contention (https://github.com/kubernetes/kubernetes/issues/110439) are failing | Same as last week: Ubuntu-serial is green, but https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu ( https://github.com/kubernetes/kubernetes/issues/107412 ) and https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-lock-contention (https://github.com/kubernetes/kubernetes/issues/110439) are failing | Ubuntu-serial is green, but https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu ( https://github.com/kubernetes/kubernetes/issues/107412 ) and https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-lock-contention are failing | Still flaky: https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu-serial | https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora-serial - Tests running, not reported in Testgrid Still Flaky 1. https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu 2. 
https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu-serial https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora started passing | https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu is still as flaky as last week New Flaky: only a few cases 1. https://testgrid.k8s.io/sig-node-kubelet#kubeadm-kinder-kubelet-1-22-on-1-23 2. https://testgrid.k8s.io/sig-node-kubelet#kubeadm-kinder-kubelet-1-23-on-latest https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora started passing | kubelet-gce-e2e-swap-ubuntu is less flaky than last week Same as last week - https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora - https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-fedora-serial https://github.com/kubernetes/test-infra/pull/25886 fixed ubuntu but fedora has new "ssh" error | Failing (infra issue): - kubelet-gce-e2e-swap-fedora - kubelet-gce-e2e-swap-fedora-serial Summary API (memory missing - flake): - kubelet-gce-e2e-swap-ubuntu | kubelet-gce-e2e-swap-fedora-serial and kubelet-gce-e2e-swap-ubuntu-serial have been failing since 04/05. https://github.com/kubernetes/test-infra/pull/25886 | 🟢 but swap jobs affected by https://github.com/kubernetes/kubernetes/issues/109096 | 🟢 | 🟢 but flaky | https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-lock-contention is still failing. Tracking issue: https://github.com/kubernetes/kubernetes/issues/108348 One fix: https://github.com/kubernetes/test-infra/pull/25509 is merged but the testgrid is still failing, commented with new failure reason | The PR is merged and the CNI error is resolved https://github.com/kubernetes/test-infra/pull/25385 But https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-lock-contention is still failing. 
Tracking issue: https://github.com/kubernetes/kubernetes/issues/108348 | New: lock contention https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-lock-contention Fix PR: https://github.com/kubernetes/test-infra/pull/25385 | 🟢 except some missing results in the latest run https://testgrid.k8s.io/sig-node-kubelet#kubeadm-kinder-kubelet-1-22-on-latest should be fine | Same random failures. Overall green 🟢 | In addition to the flaky ubuntu test we have seen in the past weeks, three other flaky tests: https://testgrid.k8s.io/sig-node-kubelet#kubeadm-kinder-kubelet-1-20-on-1-21 https://testgrid.k8s.io/sig-node-kubelet#kubeadm-kinder-kubelet-1-21-on-1-22 https://testgrid.k8s.io/sig-node-kubelet#kubeadm-kinder-kubelet-1-22-on-1-23 All 3 failed with "kinder.test.workflow: task-02-create-cluster " | Increasing the timeout https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu https://github.com/kubernetes/kubernetes/issues/107412 | No progress, Mike is working on it https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu https://github.com/kubernetes/kubernetes/issues/107412 | https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu https://github.com/kubernetes/kubernetes/issues/107412 | https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu https://github.com/kubernetes/kubernetes/issues/107342 | ||||||||||||||||||
6 | SIG Node Containerd https://testgrid.k8s.io/sig-node-containerd | Same as before, three failures: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction - https://github.com/kubernetes/kubernetes/issues/107063 https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test - https://github.com/kubernetes/kubernetes/issues/109911 https://testgrid.k8s.io/sig-node-containerd#image-validation-cos-e2e - https://github.com/kubernetes/kubernetes/issues/113152 New failure: https://testgrid.k8s.io/sig-node-containerd#e2e-cos-device-plugin-gpu - https://github.com/kubernetes/kubernetes/issues/113480 | Same as before, two failures: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction - https://github.com/kubernetes/kubernetes/issues/107063 https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test - https://github.com/kubernetes/kubernetes/issues/109911 One new failure: https://testgrid.k8s.io/sig-node-containerd#image-validation-cos-e2e - https://github.com/kubernetes/kubernetes/issues/113152 | Same as before, two failures: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction - https://github.com/kubernetes/kubernetes/issues/107063 https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test - https://github.com/kubernetes/kubernetes/issues/109911 | Same as before, two failures: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test | Same as before, two failures: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test | Same as last week. Failed since June. 
https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test | Same as last week: two failures https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test | Two failures: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test | Three failures: https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test | https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu - The tests have been timing out most of the time since at least 07/14 https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-e2e - The tests are timing out; it may have started on 07/07/2022. 
It is unclear because we don't have data that goes further back https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-e2e-serial - Test timeouts https://testgrid.k8s.io/sig-node-containerd#image-validation-cos-e2e - Test intermittently times out https://testgrid.k8s.io/sig-node-containerd#image-validation-ubuntu-e2e - Test is timing out most of the time https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction - Eviction tests are flaky https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test - TensorFlow workload test failing | New Failing: - https://testgrid.k8s.io/sig-node-containerd#image-validation-ubuntu-e2e since 07/04 - https://testgrid.k8s.io/sig-node-containerd#node-e2e-unlabelled since 06/27 Same as last week: - Eviction test is failing: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction - Performance test is failing: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test (https://github.com/kubernetes/kubernetes/issues/109911) | New Failing: Eviction test is failing: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction Same as last week: Performance test is failing: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test (https://github.com/kubernetes/kubernetes/issues/109911) | Performance test is failing: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test (https://github.com/kubernetes/kubernetes/issues/109911) | Summary API test is flaky on https://testgrid.k8s.io/sig-node-containerd#containerd-node-e2e-1.6 E2eNode Suite.[sig-node] Restart [Serial] [Slow] [Disruptive] Kubelet should correctly account for terminated pods after restart is flaky - https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-e2e-serial Eviction: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction Performance test: 
https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test | Flakes: - cos-cgroupv2-containerd-node-e2e-serial: https://github.com/kubernetes/kubernetes/issues/106635 - node-kubelet-containerd-eviction: https://github.com/kubernetes/kubernetes/issues/107063 Failures: - node-kubelet-containerd-performance-test: https://github.com/kubernetes/kubernetes/issues/109911 | old failures: - node-kubelet-containerd-eviction - https://github.com/kubernetes/kubernetes/issues/107063 - node-kubelet-containerd-performance-test - https://github.com/kubernetes/test-infra/issues/25430 | new flaky (only one data point): - https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-e2e - https://testgrid.k8s.io/sig-node-containerd#image-validation-cos-e2e old failures: - node-kubelet-containerd-eviction - https://github.com/kubernetes/kubernetes/issues/107063 - node-kubelet-containerd-performance-test - https://github.com/kubernetes/test-infra/issues/25430 | containerd-node-e2e-features-1.4 is removed old failures: - node-kubelet-containerd-eviction - https://github.com/kubernetes/kubernetes/issues/107063 - node-kubelet-containerd-performance-test - https://github.com/kubernetes/test-infra/issues/25430 | Summary API (network) flaking: - pull-node-e2e - containerd-node-conformance - containerd-node-e2e-1.4 - cos-cgroupv2-containerd-node-e2e Seems to be old and abandoned: - pull-e2e-gci - pull-e2e-podutil E2eNode Suite.[sig-node] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess] flaking - containerd-node-e2e-features-1.4 old failures: - node-kubelet-containerd-eviction - node-kubelet-containerd-performance-test | Same as previous weeks: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction is failing https://github.com/kubernetes/kubernetes/issues/107063 A previously flaky test has been constantly failing since 04/01 
https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test Created https://github.com/kubernetes/kubernetes/issues/109295 | Some tests failing due to https://github.com/kubernetes/kubernetes/issues/109096 Some of the other tests failing like containerd-node-e2e-{1.4,1.5}. Test failing to start. https://github.com/kubernetes/kubernetes/issues/109127 | New failures from the registry name change. + old eviction tests failures | 1. https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test https://github.com/kubernetes/test-infra/issues/25372 2. https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction is failing https://github.com/kubernetes/kubernetes/issues/107063 | 1. https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test https://github.com/kubernetes/kubernetes/issues/107063 2. https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction is failing https://github.com/kubernetes/kubernetes/issues/107063 | Same as last week https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test still fails even though https://github.com/kubernetes/test-infra/pull/25385 is merged https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test https://github.com/kubernetes/kubernetes/issues/107063 | New: perf tests https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-performance-test Fix PR: https://github.com/kubernetes/test-infra/pull/25385 No change: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://github.com/kubernetes/kubernetes/issues/107063 containerd-eviction failures, other tests are good. | https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://github.com/kubernetes/kubernetes/issues/107063 containerd-eviction failures, other tests are good. 
| cgroupv2 failures, small advancement, but no major changes: https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-e2e https://github.com/kubernetes/kubernetes/issues/107830 Containerd failures - continued progress, dup jobs were cleaned up, new PR submitted https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu https://testgrid.k8s.io/sig-node-containerd#e2e-cos-device-plugin-gpu https://github.com/kubernetes/kubernetes/issues/107800 New PR: https://github.com/kubernetes/kubernetes/pull/107999 | cgroupv2 failures: https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-e2e https://github.com/kubernetes/kubernetes/issues/107830 https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu https://testgrid.k8s.io/sig-node-containerd#e2e-cos-device-plugin-gpu https://testgrid.k8s.io/sig-node-containerd#e2e-ubuntu https://github.com/kubernetes/kubernetes/issues/107800 Fix PR - https://github.com/kubernetes/kubernetes/pull/107832 open, needs to be merged | cgroupv2 failures resolved https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://github.com/kubernetes/kubernetes/issues/107063 new cgroupv2 failures - https://github.com/kubernetes/kubernetes/issues/107830 This PR seems to have broken many tests: https://github.com/kubernetes/test-infra/pull/24918#issuecomment-1021952307 Wouldn't start: https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu https://testgrid.k8s.io/sig-node-containerd#containerd-node-e2e-1.4 https://testgrid.k8s.io/sig-node-containerd#e2e-cos-device-plugin-gpu https://testgrid.k8s.io/sig-node-containerd#e2e-ubuntu https://github.com/kubernetes/kubernetes/issues/107800 Fix PR - https://github.com/kubernetes/kubernetes/pull/107832 open, needs to be merged Partially failing: https://testgrid.k8s.io/sig-node-containerd#containerd-node-e2e-features-1.4 https://testgrid.k8s.io/sig-node-containerd#containerd-node-e2e-features-1.5 
https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-e2e https://github.com/kubernetes/kubernetes/issues/107801 Stats API: https://testgrid.k8s.io/sig-node-containerd#containerd-node-e2e-1.5 | Fewer failures https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://github.com/kubernetes/kubernetes/issues/107063 https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-e2e-serial https://github.com/kubernetes/kubernetes/issues/107062 | https://testgrid.k8s.io/sig-node-containerd#containerd-node-features https://testgrid.k8s.io/sig-node-containerd#image-validation-node-features https://github.com/kubernetes/kubernetes/issues/107342 https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://github.com/kubernetes/kubernetes/issues/107063 https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-e2e-serial https://github.com/kubernetes/kubernetes/issues/107062 https://testgrid.k8s.io/sig-node-containerd#node-e2e-benchmark https://github.com/kubernetes/kubernetes/issues/36621 | https://testgrid.k8s.io/sig-node-containerd#containerd-node-features https://testgrid.k8s.io/sig-node-containerd#image-validation-node-features https://github.com/kubernetes/kubernetes/issues/107342 https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction https://github.com/kubernetes/kubernetes/issues/107063 https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-e2e-serial https://github.com/kubernetes/kubernetes/issues/107062 https://testgrid.k8s.io/sig-node-containerd#node-e2e-benchmark https://github.com/kubernetes/kubernetes/issues/36621 | |||||||||||||||||||
7 | SIG Node CRI-O https://testgrid.k8s.io/sig-node-cri-o | Same two failures in: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction - https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky - https://github.com/kubernetes/kubernetes/issues/109296 | Same two failures in: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction - https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky - https://github.com/kubernetes/kubernetes/issues/109296 | Two failures in: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction - https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky - https://github.com/kubernetes/kubernetes/issues/109296 | Two failures in: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky | Two failures in: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky | Same as last week: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky Similar to "SIG Node kubelet". 
The error message is: The resource 'projects/k8s-jkns-gke-ubuntu-1-6/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found | Same two failures as last week: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky | Four failures: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-conformance https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-conformance | crio-e2e-fedora, crio-e2e-rhel, ci-crio-cgroupv1-node-e2e-features, ci-crio-cgroupv1-node-e2e-resource-managers are flaky; the rest of the tests are failing | All failed except crio-e2e-fedora and crio-e2e-rhel, which are flaky | All failed except crio-e2e-fedora and crio-e2e-rhel | Same as last week: Failures: - ci-crio-cgroupv1-node-e2e-eviction: https://github.com/kubernetes/kubernetes/issues/107804 - ci-crio-cgroupv1-node-e2e-flaky: https://github.com/kubernetes/kubernetes/issues/109296 There are other newly flaky testgrids: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-unlabelled https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-conformance https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio | Same as last week: Failures: - ci-crio-cgroupv1-node-e2e-eviction: https://github.com/kubernetes/kubernetes/issues/107804 - ci-crio-cgroupv1-node-e2e-flaky: https://github.com/kubernetes/kubernetes/issues/109296 New failure since 06/06: - https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio: "Jun 7 07:21:47.208: too high pod startup latency 50th percentile: 6.013733129s" (https://github.com/kubernetes/kubernetes/issues/109911) | Failures: - ci-crio-cgroupv1-node-e2e-eviction: https://github.com/kubernetes/kubernetes/issues/107804 - ci-crio-cgroupv1-node-e2e-flaky: 
https://github.com/kubernetes/kubernetes/issues/109296 | Failures: - ci-crio-cgroupv1-node-e2e-eviction: https://github.com/kubernetes/kubernetes/issues/107804 - ci-crio-cgroupv1-node-e2e-flaky: https://github.com/kubernetes/kubernetes/issues/109296 | https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio started passing Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://github.com/kubernetes/kubernetes/issues/109296 | Has been failing for a while but we lost track of it https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://github.com/kubernetes/kubernetes/issues/109296 | Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://github.com/kubernetes/kubernetes/issues/109296 | Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky 
https://github.com/kubernetes/kubernetes/issues/109296 | Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky TODO: check or open a new issue for this | Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky TODO: check or open a new issue for this | Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky TODO: check or open a new issue for this | Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky TODO: check or open a new issue for this | Same as last week https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky | Serial fixed, others the same: 
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky | Same as last week: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio https://github.com/kubernetes/kubernetes/issues/107805 | Same as last week: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio https://github.com/kubernetes/kubernetes/issues/107805 | Same as last week: https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio https://github.com/kubernetes/kubernetes/issues/107805 | New Failures, old failure with Graceful termination is fixed https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://github.com/kubernetes/kubernetes/issues/107803 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction 
https://github.com/kubernetes/kubernetes/issues/107804 https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio https://github.com/kubernetes/kubernetes/issues/107805 | New failures; the old failure with Graceful termination is fixed https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-alpha https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-flaky https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio | https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio https://github.com/kubernetes/kubernetes/issues/107343 | https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio https://github.com/kubernetes/kubernetes/issues/107343 | ||||||||||||||||||||
8 | SIG Node COS https://testgrid.k8s.io/sig-node-cos | 🔴 (All failing) - https://github.com/kubernetes/kubernetes/issues/111876 | 🔴 (All failing) - https://github.com/kubernetes/kubernetes/issues/111876 | 🔴 (All failing) - https://github.com/kubernetes/kubernetes/issues/111876 | 🔴 | 🔴 | Same as last week and all tests are failing since 07/18. Any existing issues to track? | All tests are failing, seemingly due to a connection API error | All tests are failing | - https://testgrid.k8s.io/sig-node-cos#soak-cos-gce - Failed to delete namespaces - https://testgrid.k8s.io/sig-node-cos#e2e-cos - Tests timing out - https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky - Tests timing out - https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-flaky/1549762400482234368 - https://testgrid.k8s.io/sig-node-cos#e2e-cos-ip-alias - Test timing out csi-hostpath - https://testgrid.k8s.io/sig-node-cos#e2e-cos-proto Test Timeouts volume tests - https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot - Node failed to reboot | - https://testgrid.k8s.io/sig-node-cos#soak-cos-gce - Failed to delete namespaces - https://testgrid.k8s.io/sig-node-cos#e2e-cos - Tests timing out - https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky - Tests timing out - https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-cri-containerd-e2e-cos-gce-flaky/1549762400482234368 - https://testgrid.k8s.io/sig-node-cos#e2e-cos-ip-alias - Test timing out csi-hostpath - https://testgrid.k8s.io/sig-node-cos#e2e-cos-proto Test Timeouts volume tests - https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot - Node failed to reboot | New failure: - https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot Same as last week: - https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 - https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky https://github.com/kubernetes/kubernetes/issues/109221 | Same as last week: https://testgrid.k8s.io/sig-node-cos#soak-cos-gce 
https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky https://github.com/kubernetes/kubernetes/issues/109221 There are other new flaky testgrids: https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot https://testgrid.k8s.io/sig-node-cos#e2e-cos-serial https://testgrid.k8s.io/sig-node-cos#e2e-cos-slow | Same as last week: https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky https://github.com/kubernetes/kubernetes/issues/109221 Failure that has been there but not tracked before? https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot (https://github.com/kubernetes/kubernetes/issues/110440) | Same as last week https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://github.com/kubernetes/kubernetes/issues/107925 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky https://github.com/kubernetes/kubernetes/issues/109221 | Same as last week https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://github.com/kubernetes/kubernetes/issues/107925 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky https://github.com/kubernetes/kubernetes/issues/109221 | Same as last week https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://github.com/kubernetes/kubernetes/issues/107925 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky https://github.com/kubernetes/kubernetes/issues/109221 | containerd-e2e-cos-1.4 is removed soak-cos-gce and e2e- cos-flaky are the same as last week https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 
https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky (Need a new tracking issue? 109221 doesn't seem to be one) | Same as last week, making progress on soak-cos-gce https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://github.com/kubernetes/kubernetes/issues/107925 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky https://github.com/kubernetes/kubernetes/issues/109221 | Same as last week https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://github.com/kubernetes/kubernetes/issues/107925 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky https://github.com/kubernetes/kubernetes/issues/109221 | https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 failing kube-up https://testgrid.k8s.io/sig-node-cos#soak-cos-gce (NPD failure) https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky (is flaky) | Same as last week https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky Green but Flaky: https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot | Same as last week https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky Green but Flaky: https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot | Same as last week https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky Green but Flaky: https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot | Same as last week 
https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky New: https://testgrid.k8s.io/sig-node-cos#e2e-cos-reboot (does it breach the "flaky" threshold?) | Same as last week https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 Same as last week https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky | New failure with "error during ./hack/e2e-internal/e2e-up.sh: exit status 2" https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 Same as last week https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky | New failure with "error during ./hack/e2e-internal/e2e-up.sh: exit status 2" https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 Same as last week https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky | New failure with "error during ./hack/e2e-internal/e2e-up.sh: exit status 2" https://testgrid.k8s.io/sig-node-cos#containerd-e2e-cos-1.4 (Will create an issue) Same as last week https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky | Went green and then back to red. https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky | Went green and then back to red. 
https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky | https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky Ask @bsdnet | https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky Ask @bsdnet | ||||||||||||||||||||
9 | SIG Node cAdvisor https://testgrid.k8s.io/sig-node-cadvisor | 🟢 | 🟢 but flaky | 🟢 | 🟢 | kubetest.Node Tests failed. https://github.com/kubernetes/kubernetes/issues/112459 | 🟢 | 🟢 | 🟢 | Failed in the week of 07/27, but now all pass | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | kubetest.Node Tests failing since 04/15; 3 test cases consistently failed; created new tracking issue https://github.com/kubernetes/kubernetes/issues/109555 | 🟢 | kubetest.Node Tests failing: https://github.com/kubernetes/kubernetes/issues/109186 | failing now | failing now | 🟢 but flaky | 🟢 but flaky | 🟢 but flaky | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | ||||||||||||||||||||
10 | SIG Node CRI tools https://testgrid.k8s.io/sig-node-cri-tools | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | ||||||||||||||||||||
11 | SIG Node NPD https://testgrid.k8s.io/sig-node-node-problem-detector | 🟢 | 🟢 https://testgrid.k8s.io/sig-node-node-problem-detector#node-problem-detector-push-images https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-kubernetes-gce-gci-custom-flags are slightly flaky | 🟢 https://testgrid.k8s.io/sig-node-node-problem-detector#node-problem-detector-push-images https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-kubernetes-gce-gci-custom-flags are slightly flaky | 🟢 https://testgrid.k8s.io/sig-node-node-problem-detector#node-problem-detector-push-images | 🟢 https://testgrid.k8s.io/sig-node-node-problem-detector#node-problem-detector-push-images | 🟢 - node-problem-detector-push-images only ran once. Should we remove? | Same as last week. One failing: https://testgrid.k8s.io/sig-node-node-problem-detector#node-problem-detector-push-images | One failing: https://testgrid.k8s.io/sig-node-node-problem-detector#node-problem-detector-push-images | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | e2e tests were removed to fix the image. Push image fix: https://github.com/kubernetes/node-problem-detector/pull/639 | e2e tests were removed to fix the image. 
Push image fix: https://github.com/kubernetes/node-problem-detector/pull/639 | Same as week before: https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-node https://github.com/kubernetes/kubernetes/issues/107067 Green but Flaky https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-test https://github.com/kubernetes/kubernetes/issues/108166 | Two more PRs in the queue, same tests still fail https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-node https://github.com/kubernetes/kubernetes/issues/107067 https://github.com/kubernetes/test-infra/pull/24914 and https://github.com/kubernetes/test-infra/pull/25405 will fix it https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-test https://github.com/kubernetes/kubernetes/issues/108166 https://github.com/kubernetes/node-problem-detector/pull/647 will fix part of it | Still failing https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-node https://github.com/kubernetes/kubernetes/issues/107067 https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-test new: https://github.com/kubernetes/kubernetes/issues/108166 | Still failing https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-node https://github.com/kubernetes/kubernetes/issues/107067 https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-test new: https://github.com/kubernetes/kubernetes/issues/108166 | Same as last week https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky | Same as last week https://testgrid.k8s.io/sig-node-cos#soak-cos-gce https://github.com/kubernetes/kubernetes/issues/107802 https://testgrid.k8s.io/sig-node-cos#e2e-cos-flaky | Still failing https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-node https://github.com/kubernetes/kubernetes/issues/107067 | Some progress, but still failing 
https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-node https://github.com/kubernetes/kubernetes/issues/107067 | https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-node https://github.com/kubernetes/kubernetes/issues/107067 | https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-node https://github.com/kubernetes/kubernetes/issues/107067 | ||||||||||||||||||||
12 | Kubernetes Presubmits blocking https://testgrid.k8s.io/presubmits-kubernetes-blocking | New test presubmits-kubernetes-blocking#pull-kubernetes-unit, needs close monitoring | 🟢 (but flaky) | 🟢 (but flaky) | 🟢 (but flaky) | 🟢 (but flaky) | 🟢 (but flaky) | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | - https://testgrid.k8s.io/sig-node-presubmits#pr-kubelet-gce-e2e-swap-fedora-serial ( https://github.com/kubernetes/kubernetes/issues/110340 ) | 🟢 but flaky | 🟢 but flaky | 🟢 | 🟢 | 🟢 | New failures; maybe containerd config related | 🟢 but flaky | 🟢 but flaky | 🟢 but flaky | 🟢 but flaky | 🟢 but flaky | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | |||||||||||||||||||||