kube-proxy uses token to access port 443 of apiserver #7303
Conversation
Testing consisted of kube-up, followed by inspecting kube-proxy logs to ensure it was reading data.
You included changes for Vagrant & AWS. Have you tested those? /cc @justinsb & @derekwaynecarr
FYI, the file cluster/saltbase/salt/kube-proxy/kubeconfig has no changes and doesn't need to be included in your PR.
@@ -248,13 +248,40 @@ function create-salt-auth() {
   mkdir -p /srv/salt-overlay/salt/kube-apiserver
   (umask 077;
   echo "${KUBE_BEARER_TOKEN},admin,admin" > "${KNOWN_TOKENS_FILE}";
-  echo "${KUBELET_TOKEN},kubelet,kubelet" >> "${KNOWN_TOKENS_FILE}")
+  echo "${KUBELET_TOKEN},kubelet,kubelet" >> "${KNOWN_TOKENS_FILE}";
+  echo "${KUBE_PROXY_TOKEN},kubelet,kubelet" >> "${KNOWN_TOKENS_FILE}")
did we want this to be "${KUBE_PROXY_TOKEN},kube-proxy,kube-proxy"?
Looks like you had kube_proxy for AWS/Vagrant.
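For context on the review comment, here is a sketch (not the repo's actual script) of the known-tokens CSV the apiserver reads: one `token,user,uid` line per credential. The file path and token values are placeholders.

```shell
# Build a stand-in known-tokens file, one "token,user,uid" line per
# credential, restricted to the owner as in the cluster scripts.
KNOWN_TOKENS_FILE=/tmp/known_tokens.csv
(umask 077;
 echo "abc123,admin,admin" > "${KNOWN_TOKENS_FILE}";
 echo "def456,kubelet,kubelet" >> "${KNOWN_TOKENS_FILE}";
 echo "ghi789,kube-proxy,kube-proxy" >> "${KNOWN_TOKENS_FILE}")
# The review question above is about the 2nd and 3rd fields: with
# "kube-proxy" the proxy authenticates as its own user rather than
# sharing the "kubelet" identity.
grep -c "," "${KNOWN_TOKENS_FILE}"   # prints 3, one line per credential
```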
LGTM other than my 1 comment.
Tested on GCE. Includes untested modifications for AWS and Vagrant. No changes for any other distros. Probably will work on other up-to-date providers, but beware: the symptom would be that service proxying stops working.

1. Generates a kube-proxy token in the AWS, GCE, and Vagrant setup scripts.
1. Distributes the token via the salt-overlay, and via salt to /var/lib/kube-proxy/kubeconfig
1. Changes kube-proxy args:
   - use the --kubeconfig argument
   - changes the --master argument from http://MASTER:7080 to https://MASTER
     - http -> https
     - explicit port 7080 -> implied 443

Possible ways this might break other distros:

Mitigation: there is a default empty kubeconfig file. If the distro does not populate the salt-overlay, then it should get the empty file, which parses to an empty object, which, combined with the --master argument, should still work.

Mitigation:
- azure: Special case to use 7080 in
- rackspace: way out of date, so don't care.
- vsphere: way out of date, so don't care.
- other distros: not using salt.
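As a sketch of what the distributed file might look like: a minimal kubeconfig carrying just a bearer token for the kube-proxy user. The exact fields the cluster scripts emit may differ, and the token value here is a placeholder.

```shell
# Write a stand-in /var/lib/kube-proxy/kubeconfig (using /tmp here).
# Only the token is needed for bearer-token auth; cluster/server
# details can come from the --master flag.
KUBE_PROXY_TOKEN=ghi789
cat > /tmp/kube-proxy-kubeconfig <<EOF
apiVersion: v1
kind: Config
users:
- name: kube-proxy
  user:
    token: ${KUBE_PROXY_TOKEN}
EOF
# kube-proxy would then be started along the lines of:
#   kube-proxy --kubeconfig=/var/lib/kube-proxy/kubeconfig --master=https://MASTER
```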
Good catch. Fixed.
@cjcullen ptal
Content LGTM now. Can you remove the empty cluster/saltbase/salt/kube-proxy/kubeconfig file before merge?
That file is there to avoid a salt error on providers that don't generate this file.
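A sketch of the fallback being described: ship an empty kubeconfig so salt always has a file to install, and an empty file parses to an empty config, leaving kube-proxy to rely entirely on --master. The path here is illustrative.

```shell
# Create the empty placeholder kubeconfig (zero bytes).
KUBECONFIG_FILE=/tmp/empty-kube-proxy-kubeconfig
: > "${KUBECONFIG_FILE}"
# Providers that populate the salt-overlay overwrite this file with a
# real token-bearing kubeconfig; everyone else installs the empty one.
wc -c < "${KUBECONFIG_FILE}"
```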
Ah. Okay, LGTM then. |