
Fails to run on GKE because of permission issue #6

Closed
avnersorek opened this issue Jun 19, 2018 · 9 comments

Comments

@avnersorek

The container fails with:

Error from server (Forbidden): jobs.batch "init-data-job" is forbidden: User "system:serviceaccount:default:default" cannot get jobs.batch in the namespace "default": Unknown user "system:serviceaccount:default:default"

@groundnuty
Owner

groundnuty commented Jul 10, 2018

@avnersorek I do not have access to GKE; I use home-brewed k8s clusters.
I'm not an expert on RBAC or k8s permissions, but the script in this repo only uses kubectl get ... and kubectl describe ... commands. The error you are getting means that the default permissions injected into the container by your k8s on GKE (this happens automatically for every container) are not sufficient to invoke the command kubectl describe job init-data-job. I would say it's a cluster configuration problem, not an issue with the k8s-wait-for script.

@avnersorek
Author

@groundnuty
Totally agree. But since GKE is a very likely environment to use k8s-wait-for in, it might be good to mention this in the docs, along with how to avoid the issue.
I will try to come up with a solution, and if I do I will post it here.
Thanks

@avnersorek
Author

Hi, this is my solution to this issue (just an example).
It creates a ClusterRoleBinding for the ClusterRole cluster-admin, granted to the default ServiceAccount.
This works, but you might want to consider creating a narrower role (with fewer permissions than cluster-admin); that really depends on your security/access requirements.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: wait-for-rbac
subjects:
  - kind: ServiceAccount
    # The ServiceAccount being granted cluster-admin
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# this job will take about 10s
apiVersion: batch/v1
kind: Job
metadata:
  name: calc-pi
spec:
  template:
    spec:
      containers:
      - name: calc-pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] # calculate pi to 2000 digits
      restartPolicy: Never
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
  initContainers:
  - name: wait-for-job
    image: groundnuty/k8s-wait-for
    args: [ 'job', 'calc-pi']

@avnersorek
Author

@groundnuty LMK if you want me to open a PR to add a reference from the README to this issue.
Maybe the issue alone will be enough for people to find if they have this problem.

Thanks

@groundnuty
Owner

groundnuty commented Jul 16, 2018

@avnersorek you're more than welcome to create a PR. Shamefully, in our cluster I just use

kubectl create clusterrolebinding serviceaccounts-cluster-admin \
  --clusterrole=cluster-admin \
  --group=system:serviceaccounts

as this kind of non-security is enough for our needs. I have yet to master RBAC ;-(

@rally25rs
Contributor

rally25rs commented Nov 14, 2019

This also fails on a fairly stock AWS EKS cluster for the same reason.

@avnersorek 's solution should work, but I'm hesitant to grant access to the default account. The proper way would be to set up a new ServiceAccount for k8s-wait-for to use.

Unfortunately, initContainers can't use a different service account from the container that they are initializing.
reference: kubernetes/kubernetes#66020


This is what I ended up doing for a Helm chart. Thanks to @avnersorek for most of what I needed to figure this out! 👍 It currently grants the permissions to the default service account, which isn't ideal, but it seems to be the best we can do with Kubernetes at the moment.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Release.Name }}-k8s-wait-for
rules:
# services and pods live in the core ("") API group; jobs are in "batch"
- apiGroups: [""]
  resources: ["services","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get","watch","list"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Name }}-k8s-wait-for
subjects:
  - kind: ServiceAccount
    name: default
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ .Release.Name }}-k8s-wait-for
  apiGroup: rbac.authorization.k8s.io

For those who come across this later: granting access to the default service account is a pretty bad idea for security. Any secrets that you might have passed in environment variables, or coded into jobs, become readable by any service in the cluster.

I ended up making an entirely new tool that separates the Kubernetes API access into its own isolated server that can run with a properly configured ServiceAccount. The initContainers then need no special permissions: https://github.com/rally25rs/k8s-when-ready

@satazor

satazor commented Mar 1, 2021

For anyone coming here: you may create a ServiceAccount with limited permissions. Here's an example of one that only has access to pod1 and pod2.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-service-account
rules:
  - apiGroups: [""]
    resources: ["pods"]
    resourceNames: ["pod1", "pod2"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-service-account
subjects:
  - kind: ServiceAccount
    name: my-service-account
roleRef:
  kind: Role
  name: my-service-account
  apiGroup: rbac.authorization.k8s.io

Then, simply use the service account in your deployment template spec.
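For instance, a minimal sketch reusing the nginx pod from the earlier example (the my-service-account name comes from the manifests above; the calc-pi job name is assumed from the earlier example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  # The pod-level service account applies to init containers as well,
  # so k8s-wait-for runs with my-service-account's permissions.
  serviceAccountName: my-service-account
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
  initContainers:
  - name: wait-for-job
    image: groundnuty/k8s-wait-for
    args: [ 'job', 'calc-pi' ]
```

Because init containers cannot use a different service account from the main containers (kubernetes/kubernetes#66020), setting serviceAccountName at the pod level is the way to give k8s-wait-for these permissions.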

@groundnuty
Owner

Two years later, there are still no dedicated service accounts for init containers.
I like the idea presented here: create a service account and then manually inject a correct token.
What do you all think? I would like to add a complete example that takes the permissions problem into account.

@bpesics

bpesics commented May 17, 2021

Quite similar to the previous ones:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k8s-wait-for
rules:
  - apiGroups: [""] # the core API group is the empty string, not "core"
    resources: ["services","pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default
subjects:
  - kind: ServiceAccount
    name: default
roleRef:
  kind: Role
  name: k8s-wait-for
  apiGroup: rbac.authorization.k8s.io

I think, as a basic simple solution, it's OK to bind a generic role to the namespace's default service account. A fancier and more secure alternative would be time-limited service account token volume projection...
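A rough sketch of the token projection idea, assuming the init container is pointed at the projected token path (the pod, volume, and job names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
  - name: wait-for-token
    projected:
      sources:
      - serviceAccountToken:
          # The kubelet rotates this token and it expires after one hour,
          # unlike the legacy long-lived service account secret.
          expirationSeconds: 3600
          path: token
  initContainers:
  - name: wait-for-job
    image: groundnuty/k8s-wait-for
    args: [ 'job', 'calc-pi' ]
    volumeMounts:
    - name: wait-for-token
      mountPath: /var/run/secrets/tokens
      readOnly: true
  containers:
  - name: nginx
    image: nginx:1.7.9
```

Note that a projected serviceAccountToken is still issued for the pod's own service account, so the RBAC bindings above are still needed; the projection only limits the token's lifetime.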
