Node authorizer #46076
Conversation
```go
const (
	ConfigMapNode nodeType = iota
	NodeNode      nodeType = iota
)
```
iota copypasta?
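For reference, `iota` increments implicitly on each line of a `const` block, so the repeated `= iota` is redundant (though harmless). A minimal sketch of the tidier form:

```go
type nodeType int

const (
	ConfigMapNode nodeType = iota // 0
	NodeNode                      // 1; iota increments implicitly
)
```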
```go
type GraphPopulator struct {
	graph *Graph

	podInformer cache.SharedIndexInformer
}
```
We generally haven't been storing refs to the underlying cache.SharedIndexInformer objects. If you're using a shared informer, it's generally safe to assume that something else (e.g. kube-controller-manager) will start it.
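For illustration, a rough sketch of that pattern: register event handlers on the shared informer and let whoever owns it start it, rather than retaining a reference. The `Graph` stub and handler names here are placeholders, not the PR's actual code:

```go
package node

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// Graph is a stand-in for the PR's dependency graph type.
type Graph struct{}

func (g *Graph) AddPod(obj interface{})    { fmt.Println("add pod:", obj) }
func (g *Graph) DeletePod(obj interface{}) { fmt.Println("delete pod:", obj) }

// addGraphEventHandlers registers handlers on an informer that is owned
// and started elsewhere (e.g. by the shared informer factory), so no
// reference to the informer itself needs to be kept.
func addGraphEventHandlers(graph *Graph, podInformer cache.SharedIndexInformer) {
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: graph.AddPod,
		UpdateFunc: func(oldObj, newObj interface{}) {
			graph.DeletePod(oldObj)
			graph.AddPod(newObj)
		},
		DeleteFunc: graph.DeletePod,
	})
}
```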
```go
func (g *GraphPopulator) Run(stopCh <-chan struct{}) {
	g.podInformer.GetController().Run(stopCh)
}
```
These 2 lines are likely no-ops (see my comment above):

kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer.go, lines 261 to 263 in 44127c5:

```go
func (s *sharedIndexInformer) GetController() Controller {
	return &dummyController{informer: s}
}

func (v *dummyController) Run(stopCh <-chan struct{}) {
}
```
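If the goal of that `Run` method was simply to block until the pod cache is populated, a hedged alternative sketch (using client-go's `cache.WaitForCacheSync`, and assuming the struct keeps the `podInformer` field shown above):

```go
func (g *GraphPopulator) Run(stopCh <-chan struct{}) {
	// Block until the shared informer's cache has synced, rather than
	// invoking the no-op dummy controller returned by GetController().
	if !cache.WaitForCacheSync(stopCh, g.podInformer.HasSynced) {
		return // stop channel closed before the cache synced
	}
	<-stopCh
}
```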
```go
if oldObj != nil && pod.Spec.NodeName == oldObj.(*api.Pod).Spec.NodeName {
	// Node is unchanged
	// TODO: do we need to check for uid changes?
	return
}
```
I would say yes
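A hedged sketch of what the suggested check might look like (illustrative only; `api.Pod` as in the diff above, with the UID coming from the embedded ObjectMeta):

```go
oldPod, _ := oldObj.(*api.Pod)
if oldPod != nil &&
	pod.Spec.NodeName == oldPod.Spec.NodeName &&
	pod.UID == oldPod.UID {
	// Same node and same pod identity (a delete+recreate with the same
	// name would change the UID), so nothing to update.
	return
}
```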
Force-pushed from 7d81814 to 9f152be.
Comments addressed. Still working on tests, and I'll have some follow-ups for performance/memory-related improvements to the graph, but the authorizer and graph-population code is ready for review.
```
@@ -0,0 +1 @@
```

Forked from gonum/graph@50b27dea7ebbfb052dfaf91681afc6fde28d8796 to support memory-use improvements to the simple graph
Any plans to upstream the changes?
Absolutely, I just don't want to block on the upstream PRs being accepted and vendorable prior to code freeze
Force-pushed from fc2aa23 to 90da41d.
Rebased, comments addressed. Will look more into graph space optimizations in the 1.8 timeframe.
No other comments from me. /approve Pending other reviewers' sign-off.
LGTM, but I'll avoid adding the label to let other reviewers respond. Feel free to self-apply once they have.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, ericchiang, liggitt, smarterclayton

Needs approval from an approver in each of these OWNERS files. You can indicate your approval by writing `/approve` in a comment.
Fixed lint issue, retagging.
@k8s-bot pull-kubernetes-e2e-kops-aws test this

@k8s-bot pull-kubernetes-federation-e2e-gce test this

Automatic merge from submit-queue
@liggitt: The following test(s) failed:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Automatic merge from submit-queue (batch tested with PRs 49619, 49598, 47267, 49597, 49638)

Remove default binding of system:node role to system:nodes group

Part of kubernetes/enhancements#279. Deprecation of this automatic binding was announced in 1.7 in #46076.

```release-note
RBAC: the `system:node` role is no longer automatically granted to the `system:nodes` group in new clusters. It is recommended that nodes be authorized using the `Node` authorization mode instead. Installations that wish to continue giving all members of the `system:nodes` group the `system:node` role (which grants broad read access, including all secrets and configmaps) must create an installation-specific `ClusterRoleBinding`.
```
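For installations that want to keep the pre-1.8 behavior, the release note's installation-specific `ClusterRoleBinding` could be created programmatically; a hypothetical sketch using client-go (the binding name and kubeconfig handling are illustrative, not prescribed by the release note):

```go
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Installation-specific binding granting the broad system:node role
	// to all members of the system:nodes group (the old default).
	// The binding name here is illustrative.
	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "legacy-node-role-binding"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "system:node",
		},
		Subjects: []rbacv1.Subject{{
			APIGroup: rbacv1.GroupName,
			Kind:     rbacv1.GroupKind,
			Name:     "system:nodes",
		}},
	}
	if _, err := client.RbacV1().ClusterRoleBindings().Create(
		context.TODO(), binding, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```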
This PR implements the authorization portion of https://github.com/kubernetes/community/blob/master/contributors/design-proposals/kubelet-authorizer.md and kubernetes/enhancements#279:

- Adds a new authorization mode (`Node`) that authorizes requests from nodes based on a graph of related pods, secrets, configmaps, pvcs, and pvs.
- The default binding of the `system:node` role to the `system:nodes` group is not automatically created.
- Updates the `NodeRestriction` admission plugin to require identifiable nodes for requests from users in the `system:nodes` group.

This authorization mode is intended to be used in combination with the `NodeRestriction` admission plugin, which limits the pods and nodes a node may modify. To enable it in combination with RBAC authorization and the NodeRestriction admission plugin:

`--authorization-mode=Node,RBAC --admission-control=...,NodeRestriction,...`

Nodes must identify themselves as members of the `system:nodes` group with a username of `system:node:<nodeName>` in order to be authorized by this mode.
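To illustrate that identity convention, a hedged sketch of deriving a node name from a request's username and groups (illustrative only; the PR's actual helper may differ):

```go
package main

import (
	"fmt"
	"strings"
)

const (
	nodeUserNamePrefix = "system:node:"
	nodesGroup         = "system:nodes"
)

// nodeNameFromUser returns the node name and true if the username and
// groups match the identity convention described above.
func nodeNameFromUser(username string, groups []string) (string, bool) {
	if !strings.HasPrefix(username, nodeUserNamePrefix) {
		return "", false
	}
	for _, g := range groups {
		if g == nodesGroup {
			return strings.TrimPrefix(username, nodeUserNamePrefix), true
		}
	}
	return "", false
}

func main() {
	name, ok := nodeNameFromUser("system:node:node1", []string{"system:nodes"})
	fmt.Println(name, ok) // node1 true
}
```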
Follow-up:

- enable e2e CI environment with admission and authorizer enabled (enable kubelet csr bootstrap in GCE/GKE #40760; enable Node authorizer and NodeRestriction admission controller #46796)
- enable this authorizer and admission plugin in kubeadm (kubeadm: Enable the Node Authorizer/Admission plugin in v1.7 #46879)
- enable this authorizer and admission plugin in kube-up (enable Node authorizer and NodeRestriction admission controller #46796)
- scale-test in kubemark (Enable Node authorizer and NodeRestriction admission in kubemark #46921)