Validating docker 1.13.x and 17.03 #42926
xref: #40182

@dchen1107 is this a blocker for 1.6?

Hi @dchen1107, I'd like to help validate docker 1.13.x and 17.03.x. Can you help me understand what kind of validation currently runs?

Is this expected to be done for k8s 1.7? With an RC cut, I'm guessing probably not.
Automatic merge from submit-queue (batch tested with PRs 48082, 48815, 48901, 48824)

Add test image name to the OS image field of the perf metrics

I'd like to add the resource usage benchmarks for COS m60 (docker 1.13.1) but don't want to remove the existing m59 (docker 1.11.2) [ones](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/jenkins/benchmark/benchmark-config.yaml#L51-L71), in order to compare the results between the two docker versions.

The `image` reported in the metrics comes from `Node.Status.NodeInfo.OSImage`, which for COS is always "Container-Optimized OS from Google" (from `/etc/os-release`), so there's no way to differentiate the two milestones in the metrics. This PR attaches the [image name](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/jenkins/benchmark/benchmark-config.yaml#L52) to the `image` field of the metrics, so it becomes "Container-Optimized OS from Google (cos-stable-59-9460-64-0)".

See the results of the test run:
[performance-memory-containervm-resource1-resource_0.json](https://storage.googleapis.com/ygg-gke-dev-bucket/e2e-node-test/ci-kubernetes-node-kubelet-benchmark/13/artifacts/performance-memory-containervm-resource1-resource_0.json)
[performance-memory-coreos-resource1-resource_0.json](https://storage.googleapis.com/ygg-gke-dev-bucket/e2e-node-test/ci-kubernetes-node-kubelet-benchmark/13/artifacts/performance-memory-coreos-resource1-resource_0.json)
[performance-memory-gci-resource1-resource_0.json](https://storage.googleapis.com/ygg-gke-dev-bucket/e2e-node-test/ci-kubernetes-node-kubelet-benchmark/13/artifacts/performance-memory-gci-resource1-resource_0.json)

**Release note**:

```
None
```

Ref: #42926
/sig node
/area node-e2e
/assign @dchen1107
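For readers following along, here is a minimal Go sketch of the labeling this PR describes. The function name `metricsImageField` is hypothetical, not the actual PR code; in the real tests the first argument comes from `Node.Status.NodeInfo.OSImage` and the second from the benchmark config:

```go
package main

import "fmt"

// metricsImageField appends the configured test image name to the OS image
// reported by the node, so that two COS milestones (which both report the
// same OSImage string) can be told apart in the perf metrics.
func metricsImageField(nodeOSImage, imageName string) string {
	if imageName == "" {
		return nodeOSImage
	}
	return fmt.Sprintf("%s (%s)", nodeOSImage, imageName)
}

func main() {
	// Prints: Container-Optimized OS from Google (cos-stable-59-9460-64-0)
	fmt.Println(metricsImageField("Container-Optimized OS from Google", "cos-stable-59-9460-64-0"))
}
```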
Automatic merge from submit-queue (batch tested with PRs 48890, 46893, 48872, 48896)

Add cos-beta-60-9592-52-0 to the benchmark tests

This PR depends on #48824. It adds new resource usage tests for cos-beta-60-9592-52-0 (docker 1.13.1).

Ref: #42926

**Release note**:

```
None
```

/sig node
/area node-e2e
/assign @dchen1107
/cc @abgworrall
Automatic merge from submit-queue (batch tested with PRs 48377, 48940, 49144, 49062, 49148)

Add cos-dev-61-9733-0-0 to the benchmark tests

Ref: #42926

m60 has docker 1.13.1 while m61 has 17.03. This PR adds m61 to the benchmark tests so that we will have more data to compare.

PS: We will support fetching the latest image in an image family in the node e2e tests in the future.

**Release note**:

```
None
```

/assign @yujuhong
/cc @kewu1992 @abgworrall
Automatic merge from submit-queue (batch tested with PRs 48224, 45431, 45946, 48775, 49396)

Update cos-dev image in benchmark tests to cos-dev-61-9759-0-0

Ref: #42926

`cos-dev-61-9759-0-0` contains a fix in the Linux utility `du` that affects the measurement of docker performance in kubelet. I'd like to update the benchmark to use the new image.

**Release note**:

```
None
```

/assign @tallclair
/cc @kewu1992 @abgworrall
Automatic merge from submit-queue (batch tested with PRs 49916, 50050)

Update images used in the node e2e benchmark tests

Ref: #42926

- Update the cos-beta image, since the new version contains a `du` command fix that affects Docker performance.
- Add the coreos and ubuntu images that run Docker 1.12.6 so that we will have more data to compare.

**Release note**:

```
None
```
Automatic merge from submit-queue (batch tested with PRs 50087, 39587, 50042, 50241, 49914)

Add node e2e test for Docker's shared PID namespace

Ref: #42926

This PR adds a simple test for the shared PID namespace that's enabled when Docker is 1.13.1+.

/sig node
/area node-e2e
/assign @yujuhong

**Release note**:

```
None
```
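Roughly, the property under test is that with shared PID enabled (Docker 1.13.1+), PID 1 inside an app container is the pod's pause infra process rather than the container's own entrypoint. A hedged Go sketch of that check, with illustrative names rather than the actual test code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// looksLikeSharedPIDNamespace inspects /proc/1/cmdline from inside a
// container: in a pod-shared PID namespace, PID 1 is the pause process,
// not this container's entrypoint.
func looksLikeSharedPIDNamespace() (bool, error) {
	b, err := os.ReadFile("/proc/1/cmdline")
	if err != nil {
		return false, err
	}
	// Entries in /proc/*/cmdline are NUL-separated.
	argv0 := strings.Split(string(b), "\x00")[0]
	return strings.Contains(argv0, "pause"), nil
}

func main() {
	shared, err := looksLikeSharedPIDNamespace()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("shared PID namespace:", shared)
}
```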
Automatic merge from submit-queue (batch tested with PRs 49847, 49743, 49853, 50225, 50479)

Add node benchmark tests for cos-m60 with docker 1.12.6

Ref: #42926

This PR adds benchmark tests for cos-m60 with docker 1.12.6 on http://node-perf-dash.k8s.io. These tests are useful for docker validation: we can compare the performance of different docker versions on the same OS. cos-m60 comes with docker 1.13.1 by default, so we need to use cloud-init to downgrade the version to 1.12.6.

**Release note**:

```
None
```

/assign @dchen1107
Automatic merge from submit-queue (batch tested with PRs 49842, 50649)

Allow passing image description from e2e node test config

Ref: #42926

This is a follow-up to #50479, where we added the tests for cos-m60 with docker 1.12.6. Those tests use the same image name as the existing ones (cos-m60 with docker 1.13.1), so we cannot distinguish them on node-perf-dash, which categorizes the tests by image name. This PR fixes the issue by passing an image description to the test.

Examples:
https://storage.googleapis.com/ygg-gke-dev-bucket/e2e-node-test/ci-kubernetes-node-kubelet-benchmark/22/artifacts/performance-cpu-cos-docker112-resource2-resource_35.json
https://storage.googleapis.com/ygg-gke-dev-bucket/e2e-node-test/ci-kubernetes-node-kubelet-benchmark/22/artifacts/performance-cpu-cos-resource2-resource_35.json

**Release note**:

```
None
```

/assign @Random-Liu
EOL for docker 1.12 is 2017-11-14 according to their maintenance lifecycle page. Is this issue still active, or is there another one that is tracking the validation of newer versions of the docker engine?
Yes, it's active and being tracked in this issue (see the attached PRs). I will have a full summary no later than next Monday.
Automatic merge from submit-queue (batch tested with PRs 51337, 47080, 52646, 52635, 52666). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix: update system spec to support Docker 17.03

Docker 17.03 is 1.13 with bug fixes, so they are the same minor version release. We've validated them both in #42926. This PR changes the system spec to support Docker 17.03.

**This should be in 1.8.**

**Release note**:

```
Kubernetes 1.8 supports docker version 17.03.x.
```

/assign @Random-Liu
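As a sketch of what "update the system spec" means in practice: the node validator matches the reported docker version against a whitelist of patterns, so supporting 17.03 essentially amounts to adding one more pattern. The Go snippet below mirrors the versions validated in this issue; the names are illustrative, not the actual system-spec code:

```go
package main

import (
	"fmt"
	"regexp"
)

// validatedDockerVersions lists version patterns validated in this issue.
var validatedDockerVersions = []*regexp.Regexp{
	regexp.MustCompile(`^1\.1[1-3]\..*`), // 1.11.x - 1.13.x
	regexp.MustCompile(`^17\.03\..*`),    // 17.03.x (1.13 plus bug fixes)
}

// dockerVersionValidated reports whether the given engine version matches
// any validated pattern.
func dockerVersionValidated(version string) bool {
	for _, re := range validatedDockerVersions {
		if re.MatchString(version) {
			return true
		}
	}
	return false
}

func main() {
	for _, v := range []string{"1.13.1", "17.03.2-ce", "17.06.0-ce"} {
		fmt.Printf("%s validated: %v\n", v, dockerVersionValidated(v))
	}
}
```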
Unfortunately, at the time of the 1.8 release, docker 17.03-ce will not be a supported version. Docker CE releases are only supported for a 4-month period, so 17.03 was EOL'd by August. That means that Kubernetes still warns on every single version of docker-ce that is supported by upstream. It seems irresponsible to recommend users use software which will not be receiving security updates. I have no objection to, e.g., GKE's container-vm or RHEL or Tectonic choosing a version not updated by Docker Inc., because they'll apply updates themselves. However, for the open source project to recommend unsupported (aka insecure) software seems wrong to me. I can open a new issue for 17.06/17.09 specifically, but this doesn't seem like a scalable approach.
Are 1.11-1.13 still receiving security updates?
Only if you pay for the commercially supported edition.
Additionally, as of last night, 17.06 is no longer officially supported in the open source editions of Docker (only 17.07 and now 17.09). Edit: @cpuguy83 was kind enough to correct me privately on this -- 17.06 is supported for one more month (the "stable" releases overlap for one month; see https://blog.docker.com/2017/03/docker-enterprise-edition/ which has a graphic that includes CE to illustrate this) ❤️
You can also move away from docker to cri-o or cri-containerd.
Which basically means it's pretty hard to keep up with that pace from k8s' PoV? All validated versions (except 1.13) seem EOL'd at this point... (1.11, 1.12, 17.03)
@luxas all validated versions have been EOL'd. 1.13 CSE ("Commercially Supported Edition") might still be supported by Docker Inc., but that's a closed source fork of docker, not docker 1.13. Kubernetes probably needs to stop recommending specific versions of the engine, and instead support a version of the dockerd API and test against upstream to ensure things aren't breaking. As far as I can tell, for other components (like the Linux kernel, iptables, or cgroups) we don't declare compatibility with exact version numbers, but rather with an ABI or a minimum version. My complaint is about what Kubernetes marks as validated/supported.
Publishing the Docker API version sounds reasonable. On the other hand, most of the value of the Docker validation lies in discovering/fixing/working around engine-version-specific Docker bugs. I am not sure I'd compare docker to cgroups or iptables, and I think it's still valuable to publish this information in some form. Another option is that we could provide a setting (in dockershim) to use a specific docker API version. That way, users can continue upgrading their docker engine while staying on the same API version. Of course, they will have to deal with bugs themselves.
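A sketch of that "pin the docker API version" idea using the upstream Docker Go client (`github.com/docker/docker/client`). The pinned version "1.26" (roughly the API level served by Docker 1.13.1/17.03) is an example, not a recommendation, and this is not dockershim's actual code:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// client.WithVersion pins the client to a fixed API version instead of
	// negotiating the newest one the daemon offers; the engine underneath
	// can then be upgraded as long as it still serves this API version.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithVersion("1.26"))
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("daemon API version:", ping.APIVersion)
}
```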
I don't think Kubernetes recommends docker. It just happens to be the only feature-complete, production-ready integration we have now.
FYI, I filed a new issue for using the docker API version: #53221
Seeing this thread and the discussion just raises the question: why Docker at all? Should we maybe move toward another container engine that follows the Kubernetes release cycle more closely?
@roffe That’s what projects like cri-o, cri-containerd, and rktlet are working towards. They’re just not as mature as Docker at this point. (You’ll find all three projects under the kubernetes-incubator GitHub org.)
How long does kubernetes support its releases?

What speaks against following Docker via shorter patch release cycles?
@jimmycuadra I am already running rkt for my core components, but there have been some problems running it for the containers, with GC not working as expected and such. Regarding CRI-O, I would love to test it on CoreOS, but it requires me to build the OS myself from scratch to make the modifications needed to include the deps for CRI-O, and that is over my head. @wkornewald I would say Docker itself: regressions and breaking changes between versions are not rare.
Speaking as a project maintainer on moby/moby: if you find a breaking change, it's a bug. Please report it as quickly as possible so we can get it fixed in a patch release.
Would it be possible to run the (Docker-specific parts of the) Kubernetes test suite before each Docker release in order to ensure there are no breaking changes or regressions? After all, Kubernetes is an important use-case for Docker. Other projects building on top of Docker would surely benefit from the additional compatibility guarantees brought by the extra test suite, too.
@wkornewald docker provides release candidates, which allows other projects (like Kubernetes) to run such tests against releases before they start breaking users. It's rare for a project to take on the burden of running other projects' tests, for numerous reasons; in this case, it would be an especially large burden, since running the Kubernetes test suite is so involved. An RC period during which projects that care test and report issues is a more normal and, in my opinion, healthier way to handle that issue. The Linux kernel is probably the most notable user of that model, and it works fairly well for them. I would much rather argue that the Kubernetes project, or possibly community, should consume docker-ce RCs and alert them if/when regressions are found.
Now that Docker Enterprise has announced Kubernetes as an officially supported orchestration platform besides Swarm, wouldn't it be in Docker Inc.'s best interest to work on test suites and regression tests to ensure smooth integration as well? I presume Docker Inc. and/or their Enterprise customers will want to run Kubernetes with the latest Docker version instead of being stuck on past/patched Docker versions indefinitely.
I'd love to see the community pick up the work of validating Docker RCs and reporting issues. I think that's orthogonal to validating Docker (in kubernetes) against a fixed API version (#53221). I have no clue how Docker plans to validate Kubernetes releases. Perhaps @cpuguy83 can provide some insight?
I can't comment on immediate plans (the beta hasn't started yet), but definitely expect to see kubernetes running on top of containerd directly as part of the (Docker) platform instead of sitting on top of the platform. containerd, once out of beta, will be a much more stable base (much smaller scope) and have a much longer support cycle (https://github.com/containerd/containerd/blob/master/RELEASES.md#support-horizon). containerd is already being tested with the kube node e2e tests (https://k8s-testgrid.appspot.com/sig-node-containerd); more work remains for the cluster e2e tests (so I'm told).
Hi, I'm trying to locate Kubernetes documentation that recommends which docker version to run with kubernetes 1.8.x or 1.9.x on CentOS 7.4 and Ubuntu 16.04.
@bamb00 Looking for the same...
Check https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md, section "External Dependencies". But to be honest, we are using docker-ce 17.06.x and it's working OK for us.
What is the preferred storage driver to use with Kubernetes 1.9.x in a production environment on CentOS and RHEL? Docker lists overlay2 as the preferable storage driver, but I would like to hear from users what is actually being used and why. Thanks in advance.
@bamb00 I think this probably isn't the right issue to discuss that on 😉 (Maybe try the community on Slack?)