
Add Dockerfiles, plain Docker deployment instructions and minor fixes #167

Closed

Conversation

discordianfish
Contributor

Hi,

I've put all this in one PR; I hope that's okay. It's basically what I had to change to get it working in my everything-is-dockerized environment. I've tried to come up with "good commits", but let me know if you'd prefer a separate PR for each of them.

The idea here is to run Kubernetes itself in Docker containers, so I created three Dockerfiles. The documentation assumes these will be added as automated builds in the Google account; I guess that's up for discussion, so let me know if I should change that.
The Dockerfile in / creates a Kubernetes 'base image' which just includes the code/binaries and is used by kubernetes-node and kubernetes-server. This might not fit all deployments, but it's a good start and easy to understand.
To make this possible I added a -docker flag that points the kubelet at the Docker address (since with the bind-mount setup we can't use /var/run/docker.sock inside the container). Besides that, I made the kubelet use the client library instead of the docker binary (which isn't installed in the container) for pulling images.
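To make that concrete, here is a minimal sketch of such a deployment. Only the -docker flag itself comes from this PR; the image name, addresses, ports, and the etcd flag are illustrative assumptions:

```
# Illustrative only: run the containerized kubelet against a TCP Docker
# endpoint, since the daemon's unix socket isn't usable from inside the
# container in this setup. Names and addresses below are assumptions.
docker run -d \
  -v /var/lib/docker:/var/lib/docker \
  kubernetes-node \
  kubelet -docker tcp://172.17.42.1:4243 -etcd_machines http://172.17.42.1:4001
```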

@brendanburns
Contributor

Awesome! Thanks for doing this. I'll take a detailed look today. In the meantime, can you sign our CLA?

Instructions are in CONTRIB.md. It's basically identical to the Apache CLA.

Thanks again.
Brendan


@proppy
Contributor

proppy commented Jun 19, 2014

Related: #19 and #141

@jbeda
Contributor

jbeda commented Jun 19, 2014

Wow! Thanks for the PR.

Unfortunately, this overlaps with some work that I have in progress now too. So we'll have to figure out how to merge it.

First off -- if you could break out the addr flag commit, the error commit, and the client lib commit into separate PRs (or a single one), that'd be great; we can land those right away once we clear the CLA.

My plan is a little more ambitious than what you have here -- but that is maybe why it is taking longer. I'm looking to pre-compile the binaries before packaging the runtime Dockerfiles. This will help minimize download/upload size and improve cluster start time. I also want to rework the GCE cluster scripts to use these, though that is obviously separable.

Things that overlap and are different:

  • I lean toward running one process per container instead of bundling multiple servers into a single container. This is the model we use internally at Google and it has worked out well. In fact, the pod concept was introduced to represent a bundle of containers that work together like this. Advantages here:
    • As we extend container services like log collection and monitoring, being able to pin these down to a binary instead of a whole container is super useful. If memory usage in a container starts to run away, you want to know which process/binary is doing it.
    • The kubelet will monitor for crashes and restart containers in that case. Over time, these restarts are another production metric that you'll want to track and alert on. If we bundle multiple processes in a container, we hide that.
    • In long-running production systems, being able to upgrade/push different components on different schedules is useful. We may want to push a new etcd server without taking down the rest of the binaries on a machine.
  • I'd love to start using the kubelet in a more "static" mode to handle the master containers. That would consist of getting the kubelet running on the master (without talking to etcd) and then having it run a pod for the master components by reading a manifest file on disk (a sketch follows this list).
  • While some do run etcd on every node, it just doesn't scale that well as a cluster. Once the CoreOS guys introduce a 'read replica' of etcd, this may make more sense. Details on etcd cluster size here.
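A rough sketch of what that static mode could look like, assuming a -config flag that points the kubelet at a manifest file (the flag name and the manifest fields are assumptions, not something this thread confirms):

```
# Illustrative only: run the master components from a manifest on disk,
# with no etcd involved. Flag name and manifest schema are assumed.
cat > /etc/kubernetes/master-manifest.json <<'EOF'
{
  "version": "v1beta1",
  "id": "kube-master",
  "containers": [
    { "name": "etcd",      "image": "coreos/etcd" },
    { "name": "apiserver", "image": "kubernetes/apiserver" }
  ]
}
EOF
kubelet -config /etc/kubernetes/master-manifest.json
```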

That being said, having a single Docker image for the master and node does simplify start up! Hopefully we can get the "manual set up" steps simple enough that it is easy to port Kubernetes to new environments.

Hopefully I'll have a PR out today or tomorrow (I need to as I'm going on vacation after that) that should get some of this stuff more concrete. You can see the start of what I'm doing in the build/ directory.

@discordianfish
Contributor Author

First of all, I've split out #169

Regarding using pre-built binaries: I would also prefer separating build from runtime so the images contain just the built binaries. But since Docker unfortunately doesn't support that (yet), and a Dockerfile is the canonical way to build Docker images, I went this way. Since Docker caches images layer by layer, the actual size that needs to be fetched on an update shouldn't be a big problem, even if we keep all the build artifacts around.
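That layer reuse is easy to inspect; for instance (the image name is a hypothetical stand-in for the base image from this PR):

```
# Show per-layer sizes; on update Docker only re-fetches layers whose
# content changed, so stable build layers aren't downloaded again.
docker history kubernetes-base
```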

Totally agree with "running one app per container" in general. But I'm being pragmatic here: instead of writing tons of bash scripts to build and run the various components on a vanilla Docker cluster, I decided to wrap up the required components in two easily deployable images. It's meant more as a quick start than as something to use as-is for production deployments; it will help people trying out Kubernetes on their Docker hosts. Without it, it took me quite some time to get things running.

But I love the idea of using Kubernetes to host Kubernetes, so to me it depends on how fast we can get there and whether you want to provide something in the meantime so people can play with it.

@discordianfish
Contributor Author

(I've just moved cdd6b3c back in here so this branch can be built/used)

@brendandburns
Contributor

We got your CLA today.

Any chance you can extract the Docker API calls for image pulling (rather than shelling out) into a separate PR?

Many thanks!
--brendan
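For context, the change being split out replaces shelling out to `docker pull` with calls against the Docker Remote API. A rough shell equivalent of that API call, assuming a daemon reachable over TCP (the endpoint and image are illustrative):

```
# POST /images/create pulls an image and streams progress as JSON lines.
curl -sS -X POST \
  "http://127.0.0.1:4243/images/create?fromImage=busybox&tag=latest"
```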

@discordianfish
Contributor Author

OK, I've created #351

@philips
Contributor

philips commented Jul 8, 2014

@jbeda FWIW, read replicas are part of the motivation for our work on an updated raft implementation. They are coming after we get that work in: etcd-io/etcd#874

@discordianfish
Contributor Author

Do you guys have any interest in anything else from this PR? I would still like to see an easy way to deploy Kubernetes itself as Docker images. Happy to help if you have concrete suggestions.

@kelseyhightower
Contributor

I've taken a slightly different approach for running Kubernetes on CoreOS. I'm building all the binaries in a separate container and using docker cp to extract them. Then I upload the binaries to Google Cloud Storage.

Next I create one Dockerfile per application and build a container from the uploaded binaries. Details of this work and the related Dockerfiles can be found here.

I've chosen to build the individual containers from binaries to keep the images small. My goal is to build a generic set of containers for each Kubernetes component, usable on any platform.
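A sketch of that workflow; the image names, binary path, and bucket below are illustrative assumptions:

```
# Build the binaries inside a container, copy one out, publish it.
docker build -t k8s-build .                          # compile everything in a container
docker run --name k8s-build-run k8s-build /bin/true  # instantiate so files can be copied out
docker cp k8s-build-run:/go/bin/kubelet .            # extract a binary (path assumed)
gsutil cp kubelet gs://example-bucket/kubelet        # upload to Google Cloud Storage

# A per-application Dockerfile then just adds the prebuilt binary, e.g.:
#   FROM busybox
#   ADD kubelet /kubelet
#   ENTRYPOINT ["/kubelet"]
```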

@brendandburns
Contributor

Kelsey,
Have a look at the contents of the ./build directory; it does exactly what you suggest.

It would be great to centralize on one version of this stuff.

Brendan

@kelseyhightower
Contributor

@brendanburns Thanks, I didn't see that. I wonder if we can now close out all of the Dockerfile-related issues. Maybe once there are builds available on the Docker Hub.

@brendandburns
Contributor

Yeah, there are two (related) tasks: one is to build everything as a Docker container, the other is to default the running of k8s to using those containers.

I think #1 is mostly done, but #2 is still in progress (though you're doing a bunch to close out #2 as well ;)

Brendan

@brendandburns
Contributor

Closing this now, since it's a little out of date at this point, and the docker pull stuff was integrated in a different PR.
