Description
What happened?
When we install our pods, the containers in them are observed to restart once:
wk64c
State: Running
Started: Mon, 13 Nov 2023 03:50:57 +0100
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Mon, 13 Nov 2023 03:48:52 +0100
Finished: Mon, 13 Nov 2023 03:50:55 +0100
Ready: True
Restart Count: 1
Could you help explain why Kubernetes let the container exit with exit code 137?
Normally the issue only happens during the first-time installation; a second installation works fine.
We are trying to understand how this exit code 137 with reason Error differs from the same exit code with reason OOMKilled.
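For background on the code itself: exit code 137 is 128 + 9, meaning the container's main process was terminated by SIGKILL. Kubernetes reports reason OOMKilled only when it can attribute that SIGKILL to the kernel's OOM killer; a SIGKILL from any other source (for example, the kubelet force-killing a container after a failed liveness probe and an expired termination grace period) surfaces as reason Error with the same exit code. A minimal sketch of the arithmetic, runnable in any POSIX shell:

```shell
# Exit code 137 = 128 + 9: the process was terminated by SIGKILL (signal 9).
# POSIX shells report a signal-terminated command as 128 + signal number.
sh -c 'kill -9 $$'        # child process sends SIGKILL to itself
echo "exit code: $?"      # prints "exit code: 137" (128 + 9)
```

So the 137 alone does not distinguish the two cases; the reason field (and, for OOM, the node's kernel log) carries the distinction.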
What did you expect to happen?
Application pods should never get restarted when we have sufficient mem/cpu resources.
How can we reproduce it (as minimally and precisely as possible)?
Install the application for the first time; the restart occurs only during the first-time installation (a second installation does not reproduce the issue).
Anything else we need to know?
No response
Kubernetes version
k8s version is 1.27.1
$ kubectl version
# paste output here
Cloud provider
OS version
$ cat /etc/os-release
NAME="SLES"
VERSION="15-SP4"
VERSION_ID="15.4"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP4"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15:sp4"
DOCUMENTATION_URL="https://documentation.suse.com/"
$ uname -a
#1 SMP PREEMPT_DYNAMIC Tue May 2 15:49:04 UTC 2023 (fd0cc4f) x86_64 x86_64 x86_64 GNU/Linux