Add Dockerfiles, plain Docker deployment instructions and minor fixes #167
Conversation
Awesome! Thanks for doing this. I'll take a detailed look today. In the meantime, could you sign the CLA? Instructions are in CONTRIB.md. It's basically identical to the Apache CLA. Thanks again.
Wow! Thanks for the PR. Unfortunately, this overlaps with some work that I have in progress now too, so we'll have to figure out how to merge it. First off -- if you could break out the addr flag commit, the error commit and the client lib commit into separate PRs (or a single one), that'd be great -- we can land those right away once we clear the CLA. My plan is a little more ambitious than what you have here, but that is maybe why it is taking longer. I'm looking to pre-compile the binaries before packaging the runtime Dockerfiles. This will help to minimize download/upload size and will help cluster start time. I also want to rework the GCE cluster scripts to use these. That is obviously separable though. Things that overlap and are different:
That being said, having a single Docker image for the master and node does simplify start-up! Hopefully we can get the "manual set up" steps simple enough that it is easy to port Kubernetes to new environments. Hopefully I'll have a PR out today or tomorrow (I need to, as I'm going on vacation after that) that should make some of this stuff more concrete. You can see the start of what I'm doing in the …
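Roughly, the packaging half of that plan would look something like this (a sketch only; the base image, binary name and paths are illustrative, not from my branch):

```Dockerfile
# Sketch of the "pre-compile, then package" idea: build a static binary
# outside the image first, e.g.
#   CGO_ENABLED=0 go build -o apiserver ./cmd/apiserver
# then ship only that binary in a tiny runtime image.
FROM busybox
ADD apiserver /apiserver
ENTRYPOINT ["/apiserver"]
```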
First of all, I've split out #169.

Regarding using pre-built binaries: I would also prefer separating build from runtime, so the images would contain just the built binaries. But since Docker unfortunately doesn't support that (yet) and a Dockerfile is the canonical way to build Docker images, I went this way. Since Docker caches images layer by layer, the actual size that needs to be fetched on update, even if we keep all the build stuff around, shouldn't be a big problem.

Totally agree with "running one app per container" in general. But I'm being pragmatic here: instead of writing tons of bash scripts to build and run the various components on a vanilla Docker cluster, I decided to wrap up the required components in two easily deployable images. It's meant more as a quick start than something to be used as-is for production deployments. It will help people trying Kubernetes on their Docker hosts; without it, it took me quite some time to get things running. But I love the idea of using Kubernetes to host Kubernetes, so to me it depends on how fast we can get there and whether you want to provide something in the meanwhile so people can play with it.
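To illustrate, the build-inside-the-image approach boils down to something like this (a simplified sketch, not the actual Dockerfile in this branch; the base image, repo path and build script are illustrative):

```Dockerfile
# The Go toolchain and sources stay in the final image, but because Docker
# caches layer by layer, an update only fetches the layers that changed.
FROM golang
ADD . /go/src/github.com/GoogleCloudPlatform/kubernetes
WORKDIR /go/src/github.com/GoogleCloudPlatform/kubernetes
RUN ./hack/build-go.sh
```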
(I've just moved cdd6b3c back in here so this branch can be built/used)
This makes it easy to deploy kubernetes on a "native" Docker cluster.
We got your CLA today. Any chance you can extract the Docker API calls for image pulling (rather than shelling out) into a separate PR? Many thanks!
OK, I've created #351.
@jbeda FWIW, read replicas are part of the motivation for our work on an updated raft implementation. They are coming after we get that work in: etcd-io/etcd#874
Do you guys have interest in anything else from this PR? I would still like to see an easy way to deploy Kubernetes as Docker images itself. Happy to help if you have concrete suggestions.
I've taken a slightly different approach for running Kubernetes on CoreOS. I'm building all the binaries in a separate container and using docker cp to extract them. Then I upload the binaries to Google Cloud Storage. Next I create a Dockerfile, one per application, and build a container from the uploaded binaries. Details of this work and the related Dockerfiles can be found here:

I've chosen to build the individual containers from binaries to keep the images small. My goal is to build a generic set of containers for each Kubernetes component, usable on any platform.
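In outline, the pipeline is something like this (the image names, paths and Dockerfile contents here are illustrative, not my exact setup):

```sh
# 1. Run the build container; its default command compiles all the binaries.
docker build -t kubernetes-build ./build
docker run --name k8s-build kubernetes-build

# 2. Extract a compiled binary onto the host with docker cp.
mkdir -p kubelet-image
docker cp k8s-build:/go/bin/kubelet ./kubelet-image/
docker rm k8s-build

# 3. Package just that binary into a small per-component image, e.g. with
#    kubelet-image/Dockerfile containing:
#      FROM busybox
#      ADD kubelet /kubelet
#      ENTRYPOINT ["/kubelet"]
(cd kubelet-image && docker build -t kubelet .)
```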
Kelsey, it would be great to centralize on one version of this stuff. Brendan
@brendanburns Thanks, I did not see that. I wonder if we can now close out all of the Dockerfile-related issues. Maybe once there are builds available from the Docker Hub.
Yeah, there are two (related) tasks: one is to build everything as a Docker image, and the other is to make those builds available so people can run them. I think #1 is mostly done, but #2 is still in progress (though you're doing a lot of that work). Brendan
Closing this now, since it's a little out of date at this point, and the docker pull stuff was integrated in a different PR.
Hi,
I've put all this in one PR, hope that's okay. It's basically what I had to change to get it working in my everything-is-dockerized environment.
I've tried to come up with "good commits". Let me know if you prefer separate PRs for each of them.
The idea here is to run Kubernetes itself in Docker containers, so I created three Dockerfiles. The documentation assumes that these will be added as automated builds in the Google account. I guess that's up for discussion, so let me know if I should change that.
The Dockerfile in / will create a Kubernetes 'base image' which just includes the code/binaries and is used by kubernetes-node and kubernetes-server. This might not fit all deployments, but it's a good start and easy to understand.
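Conceptually, the node and server images then just layer on top of the base, roughly like this (sketch only; the base image name and binary path are illustrative, the real Dockerfiles differ):

```Dockerfile
# kubernetes-node, sketched: the code/binaries come from the shared base
# image, so this image mainly picks which component to run.
FROM kubernetes
ENTRYPOINT ["/kubernetes/bin/kubelet"]
```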
To make this possible, I added a `-docker` flag to point to the Docker address (since with the bind-mount we can't use /var/run/docker.sock in the container). Besides that, I made the kubelet use the client lib for pulling images instead of shelling out to the docker binary (which isn't installed in the container).
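For example, usage looks roughly like this (only the `-docker` flag itself comes from this PR; the image name, mount path and address format are assumptions):

```sh
# Bind-mount the host's Docker socket into the kubelet container at an
# alternate path, then point the kubelet at it via the new -docker flag.
docker run -d \
  -v /var/run/docker.sock:/docker.sock \
  kubernetes-node \
  kubelet -docker unix:///docker.sock
```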