No InternalIP when upgrading to kubernetes >= 1.6.5 with VSphere Cloud Provider #48760
@cbonte There are no sig labels on this issue. Please add a sig label by:
/sig cluster-ops
Performing some more tests on a mini instance outside of the production cluster, I think I've identified the issue. When a node IP is set and the cloud provider returns the same address with several types, the loop in `setNodeAddress` accepts only the first matching address and returns. Since PR kubernetes#45201 the vSphere cloud provider returns the ExternalIP first, so the node ends up without any InternalIP.
/sig node
ping @kerneltime
Modifying the code like this solved the issue:

```diff
diff --git a/pkg/kubelet/kubelet_node_status.go b/pkg/kubelet/kubelet_node_status.go
index a0da169f6b..ce347523e9 100644
--- a/pkg/kubelet/kubelet_node_status.go
+++ b/pkg/kubelet/kubelet_node_status.go
@@ -405,15 +405,16 @@ func (kl *Kubelet) setNodeAddress(node *v1.Node) error {
 			return fmt.Errorf("failed to get node address from cloud provider: %v", err)
 		}
 		if kl.nodeIP != nil {
+			node.Status.Addresses = []v1.NodeAddress{}
 			for _, nodeAddress := range nodeAddresses {
 				if nodeAddress.Address == kl.nodeIP.String() {
-					node.Status.Addresses = []v1.NodeAddress{
-						{Type: nodeAddress.Type, Address: nodeAddress.Address},
-						{Type: v1.NodeHostName, Address: kl.GetHostname()},
-					}
-					return nil
+					node.Status.Addresses = append(node.Status.Addresses, v1.NodeAddress{Type: nodeAddress.Type, Address: nodeAddress.Address})
 				}
 			}
+			if len(node.Status.Addresses) > 0 {
+				node.Status.Addresses = append(node.Status.Addresses, v1.NodeAddress{Type: v1.NodeHostName, Address: kl.GetHostname()})
+				return nil
+			}
 			return fmt.Errorf("failed to get node address from cloud provider that matches ip: %v", kl.nodeIP)
 		}
```

which returns:

```yaml
status:
  addresses:
  - address: debug.k8s.debug-01
    type: Hostname
  - address: 10.9.65.204
    type: ExternalIP
  - address: 10.9.65.204
    type: InternalIP
```

I guess it can be done in a cleaner way, but it was just a quick way to test the fix.
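As a side note, one generic way to double-check the addresses a node reports (using the node name from the output above) is a `kubectl` JSONPath query, for example:

```sh
kubectl get node debug.k8s.debug-01 \
  -o jsonpath='{range .status.addresses[*]}{.type}{"\t"}{.address}{"\n"}{end}'
```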
@cbonte
When a node IP is set and a cloud provider returns the same address with several types, only the first address was accepted. With the changes made in PR kubernetes#45201, the vSphere cloud provider returned the ExternalIP first, which led to a node without any InternalIP. The behaviour is modified to return all the address types for the specified node IP. Issue kubernetes#48760
Hi, what are the next steps to discuss the PR?
Automatic merge from submit-queue (batch tested with PRs 51728, 49202)

Fix setNodeAddress when a node IP and a cloud provider are set

**What this PR does / why we need it**: When a node IP is set and a cloud provider returns the same address with several types, only the first address was accepted. With the changes made in PR kubernetes#45201, the vSphere cloud provider returned the ExternalIP first, which led to a node without any InternalIP. The behaviour is modified to return all the address types for the specified node IP.

**Which issue this PR fixes**: fixes kubernetes#48760

**Special notes for your reviewer**:
* I'm not a golang expert, is it possible to mock `kubelet.validateNodeIP()` to avoid the need of real host interface addresses in the test?
* It would be great to have it backported for a next 1.6.8 release.

**Release note**:
```release-note
NONE
```
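On the reviewer question about mocking `kubelet.validateNodeIP()`: one common Go approach (a sketch under assumptions, not the kubelet's actual structure; the names `fakeKubelet` and `validateNodeIPFunc` are hypothetical) is to hold the validation step in a struct field so a test can replace it without needing real host interface addresses:

```go
// Illustrative sketch only: the usual Go pattern for making a validation step
// replaceable in tests by storing it as a struct field.
package main

import (
	"fmt"
	"net"
)

type fakeKubelet struct {
	nodeIP net.IP
	// validateNodeIPFunc is wired to the real check in production code
	// (which would inspect the host's interface addresses); tests swap in a stub.
	validateNodeIPFunc func(net.IP) error
}

func (kl *fakeKubelet) setNodeAddress() error {
	if kl.nodeIP != nil {
		if err := kl.validateNodeIPFunc(kl.nodeIP); err != nil {
			return fmt.Errorf("failed to validate nodeIP: %v", err)
		}
	}
	// ... the rest of the address handling would go here ...
	return nil
}

func main() {
	kl := &fakeKubelet{nodeIP: net.ParseIP("10.9.65.204")}

	// In a unit test, replace the validator so no real host interfaces are needed.
	kl.validateNodeIPFunc = func(net.IP) error { return nil }

	fmt.Println(kl.setNodeAddress()) // <nil>
}
```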
BUG REPORT:
/kind bug
What happened:
After upgrading a kubernetes cluster with the VSphere Cloud Provider enabled from 1.6.4 to 1.6.5, 1.6.6, or 1.6.7, the cluster nodes no longer have any InternalIPs.
Several issues follow from this.
For example, it breaks heapster with an error such as:
What you expected to happen:
The node should have an InternalIP.
How to reproduce it (as minimally and precisely as possible):
Deploy a 1 node kubernetes cluster in a VM, and enable the vsphere cloud provider.
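For context, enabling the provider generally means starting the kubelet (and controller manager/API server) with `--cloud-provider=vsphere --cloud-config=<path to vsphere.conf>`. The file below is only an illustrative sketch with placeholder values; check the vSphere cloud provider documentation for the exact fields supported by your release:

```ini
[Global]
        user = "administrator@vsphere.local"
        password = "changeme"
        server = "vcenter.example.com"
        port = "443"
        insecure-flag = "1"
        datacenter = "DC1"
        datastore = "datastore1"
        working-dir = "kubernetes"

[Disk]
        scsicontrollertype = pvscsi
```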
Anything else we need to know?:
With kubernetes 1.6.4 we get:
Downgrading to 1.6.4 fixes the issue.
Environment:
- Kubernetes version (use `kubectl version`):
- Kernel (e.g. `uname -a`):

@kubernetes/sig-cluster-ops-bugs @kubernetes/sig-network-bugs
/area platform/vsphere cloudprovider