
Bump to go1.8 and remove the edge GOROOT #41636

Merged
merged 3 commits into kubernetes:master on Apr 26, 2017

Conversation

luxas
Member

@luxas luxas commented Feb 17, 2017

What this PR does / why we need it:

Bumps to go1.8; we get:

  • performance improvements
  • build time improvements
  • the possibility to remove the hacky edge GOROOT for arm and ppc64le, which must use go1.8 (see the rough sketch after this list)
  • all other awesome features that are included in go1.8: https://golang.org/doc/go1.8
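
For context, the "edge" GOROOT was a workaround that pointed only the arm and ppc64le builds at a newer Go toolchain than the default one. A rough sketch of the kind of switch that goes away once go1.8 is the default toolchain (the variable names and paths below are illustrative, not the actual build-script contents):

```sh
# Hypothetical sketch of the per-arch "edge" GOROOT hack this PR removes.
# arm and ppc64le needed a go1.8 pre-release, so the cross-build selected an
# alternate GOROOT for just those platforms.
case "${PLATFORM}" in
  linux/arm|linux/ppc64le)
    export GOROOT="/usr/local/go_edge"   # go1.8 pre-release toolchain
    ;;
  *)
    export GOROOT="/usr/local/go"        # default go1.7.x toolchain
    ;;
esac
export PATH="${GOROOT}/bin:${PATH}"
```

With go1.8 as the default toolchain in the kube-cross image, every platform builds with the same GOROOT and the special case disappears.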

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #38228

Special notes for your reviewer:

@ixdy Please push the image ASAP so we can see if this passes all tests

Release note:

Upgrade go version to v1.8

cc @ixdy @bradfitz @jessfraz @wojtek-t @timothysc @spxtr @thockin @smarterclayton @bprashanth @gmarek

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 17, 2017
@k8s-github-robot k8s-github-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. release-note Denotes a PR that will be considered when it comes time to generate release notes. labels Feb 17, 2017
@k8s-reviewable

This change is Reviewable

@ixdy
Member

ixdy commented Feb 17, 2017

Can you try testing this locally before we push a new crossbuild image?
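
A rough sketch of what testing this locally could look like, assuming the go1.8 cross image is built from build/build-image/cross and that the build scripts resolve the image by its tag (the commands and tag below are illustrative, not a verified recipe):

```sh
# Build the go1.8-based kube-cross image locally (illustrative).
cd build/build-image/cross
docker build -t gcr.io/google_containers/kube-cross:v1.8.0-1 .
cd -

# Run the build and tests inside the dockerized build environment so they use
# the new toolchain rather than the host's Go installation.
build/run.sh make
build/run.sh make test
build/run.sh make test-integration
```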

@ixdy
Member

ixdy commented Feb 17, 2017

also we probably want to update the rules_go commit in WORKSPACE to pick up go1.8.0 in bazel.
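
For reference, the rules_go pin in the root WORKSPACE looks roughly like the excerpt below; bumping it means pointing `commit` at a rules_go revision that knows about the go1.8 SDK. The commit value and load symbols here are placeholders and depend on the rules_go version actually in use:

```python
# WORKSPACE excerpt (Starlark); values are illustrative placeholders.
git_repository(
    name = "io_bazel_rules_go",
    remote = "https://github.com/bazelbuild/rules_go.git",
    commit = "RULES_GO_COMMIT_WITH_GO_1_8_SUPPORT",  # bump this
)

load("@io_bazel_rules_go//go:def.bzl", "go_repositories")

go_repositories()  # fetches the Go SDK pinned by that rules_go revision
```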

@luxas
Member Author

luxas commented Feb 19, 2017

@ixdy Ok, tests have been run, see gist: https://gist.github.com/luxas/6c0b9ced17e60072a63ff7c150574e48

All binaries were compiled, but etcd errored out a lot of times in the unit/integration tests, example:

ok      k8s.io/kubernetes/federation/registry/cluster   0.168s
2017-02-19 11:23:57.693527 I | integration: launching node1893452839853692973 (unix://localhost:node1893452839853692973.sock.bridge)
2017-02-19 11:23:57.697701 I | etcdserver: name = node1893452839853692973
2017-02-19 11:23:57.697738 I | etcdserver: data dir = /tmp.k8s/etcd954994936
2017-02-19 11:23:57.697753 I | etcdserver: member dir = /tmp.k8s/etcd954994936/member
2017-02-19 11:23:57.697764 I | etcdserver: heartbeat = 10ms
2017-02-19 11:23:57.697775 I | etcdserver: election = 100ms
2017-02-19 11:23:57.697787 I | etcdserver: snapshot count = 0
2017-02-19 11:23:57.697821 I | etcdserver: advertise client URLs = unix://127.0.0.1:21002.6981.sock
2017-02-19 11:23:57.697836 I | etcdserver: initial advertise peer URLs = unix://127.0.0.1:21001.6981.sock
2017-02-19 11:23:57.697854 I | etcdserver: initial cluster = node1893452839853692973=unix://127.0.0.1:21001.6981.sock
2017-02-19 11:23:57.702867 I | etcdserver: starting member e9173b0babbaf3c5 in cluster 6c1491be9d5c7178
2017-02-19 11:23:57.702936 I | raft: e9173b0babbaf3c5 became follower at term 0
2017-02-19 11:23:57.702977 I | raft: newRaft e9173b0babbaf3c5 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017-02-19 11:23:57.702999 I | raft: e9173b0babbaf3c5 became follower at term 1
2017-02-19 11:23:57.714649 I | etcdserver: set snapshot count to default 10000
2017-02-19 11:23:57.714669 I | etcdserver: starting server... [version: 3.0.14, cluster version: to_be_decided]
2017-02-19 11:23:57.714990 I | integration: launched node1893452839853692973 (unix://localhost:node1893452839853692973.sock.bridge)
2017-02-19 11:23:57.715808 I | membership: added member e9173b0babbaf3c5 [unix://127.0.0.1:21001.6981.sock] to cluster 6c1491be9d5c7178
2017-02-19 11:23:57.723404 I | raft: e9173b0babbaf3c5 is starting a new election at term 1
2017-02-19 11:23:57.723428 I | raft: e9173b0babbaf3c5 became candidate at term 2
2017-02-19 11:23:57.723472 I | raft: e9173b0babbaf3c5 received vote from e9173b0babbaf3c5 at term 2
2017-02-19 11:23:57.723487 I | raft: e9173b0babbaf3c5 became leader at term 2
2017-02-19 11:23:57.723495 I | raft: raft.node: e9173b0babbaf3c5 elected leader e9173b0babbaf3c5 at term 2
2017-02-19 11:23:57.723718 I | etcdserver: setting up the initial cluster version to 3.0
2017-02-19 11:23:57.724420 N | membership: set the initial cluster version to 3.0
2017-02-19 11:23:57.724459 I | api: enabled capabilities for version 3.0
2017-02-19 11:23:57.724485 I | etcdserver: published {Name:node1893452839853692973 ClientURLs:[unix://127.0.0.1:21002.6981.sock]} to cluster 6c1491be9d5c7178
panic: test timed out after 2m0s

goroutine 11849 [running]:
testing.startAlarm.func1()
        /usr/local/go/src/testing/testing.go:1023 +0xf9
created by time.goFunc
        /usr/local/go/src/time/sleep.go:170 +0x44

goroutine 1 [chan receive, 2 minutes]:
testing.(*T).Run(0xc4201084e0, 0x19ec35b, 0xa, 0x1a64a00, 0xc420613d20)
        /usr/local/go/src/testing/testing.go:698 +0x2f4
testing.runTests.func1(0xc4201084e0)
        /usr/local/go/src/testing/testing.go:882 +0x67
testing.tRunner(0xc4201084e0, 0xc420613de0)
        /usr/local/go/src/testing/testing.go:657 +0x96
testing.runTests(0xc42017b240, 0x24770c0, 0x6, 0x6, 0x30)
        /usr/local/go/src/testing/testing.go:888 +0x2c1
testing.(*M).Run(0xc420a41f20, 0xc420613f20)
        /usr/local/go/src/testing/testing.go:822 +0xfc
main.main()
        k8s.io/kubernetes/federation/registry/cluster/etcd/_test/_testmain.go:52 +0xf7

goroutine 17 [syscall, 2 minutes, locked to thread]:
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:2197 +0x1

goroutine 5 [chan receive]:
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x24850a0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x7a
created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.1
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x21d

goroutine 85 [chan receive]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/logutil.(*MergeLogger).outputLoop(0xc42017ae60)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:174 +0x94
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/logutil.NewMergeLogger
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:92 +0xd4

goroutine 73 [chan receive]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/logutil.(*MergeLogger).outputLoop(0xc42017a5c0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:174 +0x94
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/logutil.NewMergeLogger
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:92 +0xd4

goroutine 51 [runnable]:
time.Sleep(0x989680)
        /usr/local/go/src/runtime/time.go:59 +0xf9
k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration.(*cluster).waitMembersMatch(0xc42017b260, 0xc4201085b0, 0xc4202f1540, 0x1, 0x1)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration/cluster.go:324 +0x252
k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration.(*cluster).Launch(0xc42017b260, 0xc4201085b0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration/cluster.go:168 +0x185
k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration.NewClusterV3(0xc4201085b0, 0xc42028a100, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration/cluster.go:755 +0xa4
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd/testing.NewUnsecuredEtcd3TestClientServer(0xc4201085b0, 0xc420363940, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd/testing/utils.go:316 +0x50
k8s.io/kubernetes/pkg/registry/registrytest.NewEtcdStorage(0xc4201085b0, 0x19ec6d5, 0xa, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/registry/registrytest/etcd.go:41 +0x3e
k8s.io/kubernetes/federation/registry/cluster/etcd.newStorage(0xc4201085b0, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/federation/registry/cluster/etcd/etcd_test.go:34 +0x55
k8s.io/kubernetes/federation/registry/cluster/etcd.TestCreate(0xc4201085b0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/federation/registry/cluster/etcd/etcd_test.go:70 +0x43
testing.tRunner(0xc4201085b0, 0x1a64a00)
        /usr/local/go/src/testing/testing.go:657 +0x96
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:697 +0x2ca

goroutine 52 [IO wait, 2 minutes]:
net.runtime_pollWait(0x7f52b45ac878, 0x72, 0x2405f40)
        /usr/local/go/src/runtime/netpoll.go:164 +0x59
net.(*pollDesc).wait(0xc4202646f8, 0x72, 0x23f9828, 0xc4201fa020)
        /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
net.(*pollDesc).waitRead(0xc4202646f8, 0xffffffffffffffff, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
net.(*netFD).accept(0xc420264690, 0x0, 0x24038c0, 0xc4201fa020)
        /usr/local/go/src/net/fd_unix.go:430 +0x1e5
net.(*UnixListener).accept(0xc4202ce1b0, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/unixsock_posix.go:162 +0x32
net.(*UnixListener).Accept(0xc4202ce1b0, 0x1371429, 0xc42046ff58, 0xc42046ff50, 0xc420480140)
        /usr/local/go/src/net/unixsock.go:237 +0x49
k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/transport.(*unixListener).Accept(0xc42029b0c0, 0x1a67f10, 0xc4203504e0, 0x58a9804d, 0x294e7f2b)
        <autogenerated>:94 +0x49
k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration.(*bridge).serveListen(0xc4203504e0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration/bridge.go:89 +0x6b
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration.newBridge
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration/bridge.go:53 +0x2ed

goroutine 108 [runnable]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*raftNode).start.func1(0xc420168928)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/raft.go:151 +0xd1e
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*raftNode).start
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/raft.go:254 +0x1e4

goroutine 54 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc/backend.(*backend).run(0xc4203509c0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:193 +0x1c2
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc/backend.newBackend
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:119 +0x26a

goroutine 55 [select, 2 minutes]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/wal.(*filePipeline).run(0xc42028ad00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/wal/file_pipeline.go:89 +0x197
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/wal.newFilePipeline
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/wal/file_pipeline.go:47 +0x134

goroutine 56 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/raft.(*node).run(0xc420350ea0, 0xc420109ba0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/raft/node.go:307 +0x10f6
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/raft.StartNode
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/raft/node.go:204 +0x6b3

goroutine 57 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/lease.(*lessor).runLoop(0xc4202f0ff0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/lease/lessor.go:379 +0x193
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/lease.newLessor
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/lease/lessor.go:168 +0x18c

goroutine 58 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/lease.(*lessor).runLoop(0xc4202f1040)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/lease/lessor.go:379 +0x193
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/lease.newLessor
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/lease/lessor.go:168 +0x18c

goroutine 59 [select, 2 minutes]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/schedule.(*fifo).run(0xc420351860)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/schedule/schedule.go:146 +0x35c
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/schedule.NewFIFOScheduler
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/schedule/schedule.go:71 +0x1ac

goroutine 60 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchersLoop(0xc420027200)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc/watchable_store.go:280 +0x201
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc.newWatchableStore
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc/watchable_store.go:84 +0x2ed

goroutine 61 [select, 2 minutes]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc.(*watchableStore).syncVictimsLoop(0xc420027200)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc/watchable_store.go:306 +0x1cc
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc.newWatchableStore
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/mvcc/watchable_store.go:85 +0x30f

goroutine 62 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).run(0xc420168900)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/server.go:595 +0x744
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).start
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/server.go:491 +0x2d4

goroutine 64 [select, 2 minutes]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).purgeFile(0xc420168900)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/server.go:502 +0x2a8
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).Start
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/server.go:468 +0x8f

goroutine 65 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.monitorFileDescriptor(0xc420351c80)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/metrics.go:89 +0x1ad
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).Start
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/server.go:469 +0xb8

goroutine 98 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).monitorVersions(0xc420168900)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/server.go:1234 +0x3c2
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).Start
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/server.go:470 +0xda

goroutine 99 [IO wait, 2 minutes]:
net.runtime_pollWait(0x7f52b45acab8, 0x72, 0x2405f40)
        /usr/local/go/src/runtime/netpoll.go:164 +0x59
net.(*pollDesc).wait(0xc4202645a8, 0x72, 0x23f9828, 0xc4201c95c0)
        /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
net.(*pollDesc).waitRead(0xc4202645a8, 0xffffffffffffffff, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
net.(*netFD).accept(0xc420264540, 0x0, 0x24038c0, 0xc4201c95c0)
        /usr/local/go/src/net/fd_unix.go:430 +0x1e5
net.(*UnixListener).accept(0xc420267740, 0xc42029dce0, 0xc42046aee8, 0x5ebd4d)
        /usr/local/go/src/net/unixsock_posix.go:162 +0x32
net.(*UnixListener).Accept(0xc420267740, 0xdfdfb9, 0xc420267740, 0x2408480, 0xc42017b340)
        /usr/local/go/src/net/unixsock.go:237 +0x49
k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/transport.(*unixListener).Accept(0xc42029afc0, 0xc42029dcb0, 0x17cb1a0, 0x246dc30, 0x18b4ac0)
        <autogenerated>:94 +0x49
net/http.(*Server).Serve(0xc420094580, 0x2412300, 0xc42029afc0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:2643 +0x228
net/http/httptest.(*Server).goServe.func1(0xc420351d40)
        /usr/local/go/src/net/http/httptest/server.go:235 +0x6d
created by net/http/httptest.(*Server).goServe
        /usr/local/go/src/net/http/httptest/server.go:236 +0x5c

goroutine 104 [IO wait, 2 minutes]:
net.runtime_pollWait(0x7f52b45ac9f8, 0x72, 0x2405f40)
        /usr/local/go/src/runtime/netpoll.go:164 +0x59
net.(*pollDesc).wait(0xc420264618, 0x72, 0x23f9828, 0xc4201c95e0)
        /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
net.(*pollDesc).waitRead(0xc420264618, 0xffffffffffffffff, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
net.(*netFD).accept(0xc4202645b0, 0x0, 0x24038c0, 0xc4201c95e0)
        /usr/local/go/src/net/fd_unix.go:430 +0x1e5
net.(*UnixListener).accept(0xc4202677a0, 0xc42029de00, 0xc42046b6e8, 0x5ebd4d)
        /usr/local/go/src/net/unixsock_posix.go:162 +0x32
net.(*UnixListener).Accept(0xc4202677a0, 0xdfdfb9, 0xc4202677a0, 0x2408480, 0xc42017b500)
        /usr/local/go/src/net/unixsock.go:237 +0x49
k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/transport.(*unixListener).Accept(0xc42029b000, 0xc42029dda0, 0x17cb1a0, 0x246dc30, 0x18b4ac0)
        <autogenerated>:94 +0x49
net/http.(*Server).Serve(0xc420095ef0, 0x2412300, 0xc42029b000, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:2643 +0x228
net/http/httptest.(*Server).goServe.func1(0xc420351f20)
        /usr/local/go/src/net/http/httptest/server.go:235 +0x6d
created by net/http/httptest.(*Server).goServe
        /usr/local/go/src/net/http/httptest/server.go:236 +0x5c

goroutine 105 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/api/v3rpc.monitorLeader.func1(0xc420168900, 0xc4203539c0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/api/v3rpc/interceptor.go:147 +0x43c
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/api/v3rpc.monitorLeader
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/api/v3rpc/interceptor.go:173 +0xbf

goroutine 106 [IO wait, 2 minutes]:
net.runtime_pollWait(0x7f52b45ac938, 0x72, 0x2405f40)
        /usr/local/go/src/runtime/netpoll.go:164 +0x59
net.(*pollDesc).wait(0xc420264688, 0x72, 0x23f9828, 0xc4201c9620)
        /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
net.(*pollDesc).waitRead(0xc420264688, 0xffffffffffffffff, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
net.(*netFD).accept(0xc420264620, 0x0, 0x24038c0, 0xc4201c9620)
        /usr/local/go/src/net/fd_unix.go:430 +0x1e5
net.(*UnixListener).accept(0xc420267e00, 0xf100000000000018, 0xc4201d9ea8, 0xf13ffb2a58f2b60f)
        /usr/local/go/src/net/unixsock_posix.go:162 +0x32
net.(*UnixListener).Accept(0xc420267e00, 0xe95081, 0xc4201d9f48, 0xc4201d9f40, 0xc4202f1db0)
        /usr/local/go/src/net/unixsock.go:237 +0x49
k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/transport.(*unixListener).Accept(0xc42029b090, 0x1a69fb8, 0xc42002c460, 0x2412300, 0xc42029b090)
        <autogenerated>:94 +0x49
k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).Serve(0xc42002c460, 0x2412300, 0xc42029b090, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:348 +0x153
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration.(*member).Launch
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/integration/cluster.go:609 +0x81f

goroutine 109 [select]:
k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/schedule.(*fifo).run(0xc4202e6c00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/schedule/schedule.go:146 +0x35c
created by k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/schedule.NewFIFOScheduler
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/pkg/schedule/schedule.go:71 +0x1ac
FAIL    k8s.io/kubernetes/federation/registry/cluster/etcd      120.197s
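
The panic is the go test timeout (2m0s here) firing while the in-process etcd cluster is still launching, rather than an outright compile or assertion failure. One way to tell a genuine hang apart from mere slowness under the new toolchain is to re-run just the failing package with a longer timeout and verbose output (the timeout value is arbitrary):

```sh
# Re-run only the failing package with a generous timeout (illustrative).
go test -v -timeout 10m ./federation/registry/cluster/etcd/...
```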

cc @kubernetes/sig-api-machinery-pr-reviews @wojtek-t @hongchaodeng @xiang90 @timothysc

@luxas
Member Author

luxas commented Feb 19, 2017

@ixdy However, there's nothing wrong with this image, so feel free to push it so we can run the pre-submit test suite on this PR; that will make it easier to evaluate and debug the failures.

@gmarek
Contributor

gmarek commented Feb 20, 2017

Can we please not merge this before 1.6 is out? There's enough churn already...

@timothysc
Member

Can we please not merge this before 1.6 is out? There's enough churn already...

+1, I'm good with first thing in 1.7.

@timothysc timothysc added this to the v1.7 milestone Feb 20, 2017
@timothysc timothysc added the sig/scalability Categorizes an issue or PR as relevant to SIG Scalability. label Feb 20, 2017
@luxas
Member Author

luxas commented Feb 20, 2017

+1, I'm good with first thing in 1.7.

Me too

@wojtek-t
Member

+1

@luxas
Member Author

luxas commented Feb 20, 2017

But the faster we have an image to experiment with in the CI, the faster we'll catch bugs and things that should be changed on our side

@gmarek gmarek added the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Feb 20, 2017
@ixdy
Member

ixdy commented Feb 21, 2017

I'm a little hesitant to push a cross-build image before it's ready to be checked in, but I guess we don't have a better way, really.

@luxas can you update the WORKSPACE file in root to bump the rules_go import? That should get the unit tests running against 1.8.

@ixdy
Member

ixdy commented Feb 21, 2017

Also, as #41771 demonstrates, there are a few other places we need to bump.
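
A quick, if blunt, way to find other spots where the Go version is still pinned (the pattern and paths are illustrative):

```sh
# Look for lingering references to the old Go version (illustrative).
git grep -nE "go1\.7(\.[0-9])?" -- build/ hack/ test/ WORKSPACE
```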

@ixdy
Member

ixdy commented Feb 21, 2017

gcr.io/google_containers/kube-cross:v1.8.0-1 pushed.

@k8s-bot test this

Things have been pretty unstable today, so I'm not sure how much signal we'll get.

@ixdy
Member

ixdy commented Feb 21, 2017

@k8s-bot cross build this

@ixdy
Member

ixdy commented Feb 22, 2017

@k8s-bot cross build this

I don't understand what's going on.

@ixdy
Member

ixdy commented Feb 22, 2017

@k8s-bot test this

@ixdy
Member

ixdy commented Feb 22, 2017

The cross-build job doesn't appear to be significantly faster (59m, which is still gross), though we probably need more data points.

k8s-github-robot pushed a commit that referenced this pull request Feb 23, 2017
Automatic merge from submit-queue (batch tested with PRs 41812, 41665, 40007, 41281, 41771)

Bump golang versions to 1.7.5

**What this PR does / why we need it**: While #41636 might not make it in until 1.7, this would bump current golang versions from 1.7.4 to 1.7.5 to integrate the fixes from that patch version. This would include, among other things, a fix to ensure cross-built binaries for darwin don't have certificate validation errors (golang/go#18688)

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: none

**Special notes for your reviewer**:

**Release note**:

```release-note
Upgrade golang versions to 1.7.5
```
@k8s-github-robot k8s-github-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 23, 2017
@zmerlynn zmerlynn assigned ixdy and unassigned zmerlynn Feb 28, 2017
@ixdy
Member

ixdy commented Apr 20, 2017

The verification tests run in gcr.io/k8s-testimages/kubekins-test:1.6-v20161205-ad918bc, which I think is based on go1.7.4.

Does the generated code really depend on the go version? That's gross.

@luxas
Member Author

luxas commented Apr 20, 2017

Does the generated code really depend on the go version? That's gross.

I'm very confident it does. Nearly every time we've bumped the Go version before, generated code has changed and we've had a rebase hell party ;)

@ixdy could you bump kubekins and try to get #44583 in today? That would unblock this in time for Friday (or maybe preferably Monday)

@liggitt
Member

liggitt commented Apr 20, 2017

Does the generated code really depend on the go version? That's gross.

I'm very confident it does. Nearly every time we've bumped the Go version before, generated code has changed and we've had a rebase hell party ;)

It's unfortunate this means all kube developers must immediately switch to compiling with go1.8 :-/

@cblecker
Member

Is it a big deal switching over to go1.8 though? Go's release policy deprecated support for 1.7 the moment 1.8 was out.

@ixdy
Member

ixdy commented Apr 20, 2017

I think a lot of our gogen scripts get around this issue by running in the kube-cross container (via build/run.sh). It looks like update-staging-client-go.sh and update-staging-godeps.sh just use whatever version of go is installed, which is almost certainly going to cause issues (like this).
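
For anyone following along, the distinction ixdy draws is between generation scripts that run inside the dockerized build (and therefore use the pinned kube-cross toolchain) and scripts that run directly on the host. Roughly, and with illustrative targets:

```sh
# Runs inside the kube-cross build container, so codegen sees the pinned Go
# toolchain regardless of what is installed on the host (illustrative target).
build/run.sh make generated_files

# Runs directly on the host and compiles with whatever `go` is on PATH,
# which is how version skew like this creeps in.
hack/update-staging-client-go.sh
```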

cc @caesarxuchao @sttts @deads2k @cblecker

@caesarxuchao
Member

@luxas it looks like you didn't run hack/update-staging-client-go.sh after you updated protobuf.

I don't think the update/verify-staging-client-go.sh scripts depend on the go version.

@deads2k
Contributor

deads2k commented Apr 21, 2017

@deads2k @sttts What go version/env is hack/verify-staging-client-go.sh using?
Seems like its env should be updated somehow...

This is the only golang usage I can think of inside that script: https://github.com/kubernetes/kubernetes/blob/master/hack/verify-staging-client-go.sh#L26. The copy step is just inspecting the dependency tree and making some copies.

@ixdy
Member

ixdy commented Apr 25, 2017

bazel PR is merged. @luxas can you rebase/regenerate stuff to see if that makes verification happy?

@luxas
Member Author

luxas commented Apr 25, 2017

@ixdy updated, let's see how it goes 👍

@ixdy ixdy removed the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Apr 25, 2017
@cblecker
Member

Looks like it worked 🎉 !

@ixdy
Member

ixdy commented Apr 25, 2017

/lgtm

sweet!
@thockin want to give this one top-level approval?

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Apr 25, 2017
@smarterclayton
Contributor

smarterclayton commented Apr 25, 2017 via email

@k8s-github-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ixdy, luxas, smarterclayton

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot k8s-github-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 25, 2017
@ixdy
Member

ixdy commented Apr 25, 2017

@smarterclayton thanks!

@luxas might want to email kubernetes-dev@ as soon as this merges to note that we've updated to go1.8.1.

@k8s-github-robot

Automatic merge from submit-queue (batch tested with PRs 41287, 41636, 44881, 44826)

@k8s-github-robot k8s-github-robot merged commit d03ca66 into kubernetes:master Apr 26, 2017
@wojtek-t
Member

Woohoo!

I haven't had time to look into the results yet, but at least we didn't see anything broken by it.

@mikedanese
Member

!!! thanks @luxas

@luxas
Member Author

luxas commented Apr 26, 2017

@ixdy Now announced; thanks for the reminder: https://groups.google.com/forum/#!topic/kubernetes-dev/0XRRz6UhhTM

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files.
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
lgtm "Looks good to me", indicates that a PR is ready to be merged.
release-note Denotes a PR that will be considered when it comes time to generate release notes.
sig/scalability Categorizes an issue or PR as relevant to SIG Scalability.
size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Update to Go 1.8; eval 1.8beta1