Do not modify ELB security group #49445

Closed
zihaoyu opened this issue Jul 22, 2017 · 12 comments · Fixed by #62774
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@zihaoyu

zihaoyu commented Jul 22, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:

We run Kubernetes on AWS. We are trying out the LoadBalancer service type.

We use Terraform heavily to manage our infrastructure on AWS. As a result we have several shared security groups with "perfect" rules; one of them is meant to be attached to ELBs so that our own IPs, partners, etc. are whitelisted. Right now there doesn't seem to be a way for Kubernetes to use that security group as is. There are several alternatives (a sketch follows the list):

  • Pass my existing security group via the service.beta.kubernetes.io/aws-load-balancer-extra-security-groups annotation. However, Kubernetes still tries to create its own security group first, or uses a global one, and if I don't provide any IP ranges it whitelists 0.0.0.0/0. This seems insecure.
  • Provide my own IP ranges via either the service.beta.kubernetes.io/load-balancer-source-ranges annotation or service.Spec.LoadBalancerSourceRanges. Kubernetes then creates a security group for this ELB with the given IPs, but I have to duplicate my IPs across all microservices.
  • Provide my own IP ranges as above and also use a global security group. Kubernetes then modifies the global SG with the given IPs, which is problematic when many microservices share one SG but some want different IPs whitelisted.
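
For concreteness, here is a minimal sketch combining the first two alternatives. The service name, selector, security group ID, and CIDR below are made up for illustration:

  apiVersion: v1
  kind: Service
  metadata:
    name: my-service
    annotations:
      # Attached in addition to the security group Kubernetes creates and manages
      service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-0123456789abcdef0
  spec:
    type: LoadBalancer
    # Kubernetes copies these CIDRs into the security group it manages for
    # this ELB; if they are omitted, it whitelists 0.0.0.0/0
    loadBalancerSourceRanges:
      - 203.0.113.0/24
    selector:
      app: my-app
    ports:
      - port: 443
        targetPort: 8443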

I want to entertain the idea below:

  • Allow users to specify another security group, via kube-controller-manager or the Service object, which Kubernetes takes and attaches to ELBs as is.
  • If such a security group is provided, skip the other IP whitelisting steps.

What you expected to happen:

I can give Kubernetes existing security group(s), and Kubernetes just attaches them to the managed ELBs without modifying their rules.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

It may already be possible to achieve this and I'm just not aware of it, or there may be other concerns that led Kubernetes not to implement this feature.

Environment:

  • Kubernetes version (use kubectl version): v1.7.1
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Jul 22, 2017
@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 22, 2017
@zihaoyu
Author

zihaoyu commented Jul 22, 2017

/sig aws

@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 22, 2017
@nbutton23
Contributor

PR #49805 may help with this...

@KIVagant

KIVagant commented Dec 12, 2017

This problem can be really critical if one has the Kubernetes dashboard installed. If for some reason the kubernetes-dashboard service recreates the AWS ELB, the related security group will be recreated as well with 0.0.0.0/0 ingress/egress rules, and botnets can then find the ELB and attack your cluster.
FYI, in our case, in one of our clusters running K8s 1.5, a deployment and a replication controller were installed from the dashboard using this attack vector. Special pods were then created with the root volume mounted inside; processes in these pods modified the system crontab, and a crypto miner appeared on all K8s nodes at the system level. After that, more pods were installed and the miner also ran inside Kubernetes pods, in parallel with the system-level one.

@winjer

winjer commented Dec 18, 2017

Just had some downtime on a production cluster because of this. I really need this too ;)

@yossip

yossip commented Mar 15, 2018

+1

@KIVagant

/kind feature

As for me, it isn't a feature; it's a kind of bug.

@justinsb
Member

Some comments on #62774 may be relevant. Please comment on whether an alternative annotation that replaced the default security groups (as opposed to adding to them) would or wouldn't meet your use case: #62774 (comment)
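
For context, under the approach discussed in #62774, a Service that hands Kubernetes a pre-built security group would look roughly like the sketch below (annotation name as discussed in that PR; the security group ID is illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: my-service
    annotations:
      # Replaces the Kubernetes-managed security group entirely: the listed
      # group is attached as is and its rules are left untouched
      service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
  spec:
    type: LoadBalancer
    selector:
      app: my-app
    ports:
      - port: 443
        targetPort: 8443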

@johnzheng1975

johnzheng1975 commented May 24, 2018

For me, I don't add any annotation; I just want to change the security group rule.
After Kubernetes creates an AWS ELB and a security group open to 0.0.0.0/0, I change 0.0.0.0/0 to xx.xx.xx.xx/32.
However, the security group rule changes back to 0.0.0.0/0 after a few minutes.

Is this by design?
Is there any way for me to keep my changes? Thanks.

@Raffo
Contributor

Raffo commented May 24, 2018

I guess this is part of how the controller works. If you look at #62774, that PR fixes this problem by allowing you to specify the behaviour you want with an annotation. The way Kubernetes works, the ELB is owned by Kubernetes, and you should never have to modify the resource manually; doing so ultimately will not work.
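
In other words, the supported route is declarative: instead of editing the security group by hand, put the allowed CIDR in the Service so the controller reconciles toward it. A minimal sketch, assuming the CIDR and names are illustrative:

  apiVersion: v1
  kind: Service
  metadata:
    name: my-service
  spec:
    type: LoadBalancer
    # The controller writes this CIDR into the ELB security group instead of
    # 0.0.0.0/0 and keeps it there on every sync, so there is nothing to
    # revert by hand
    loadBalancerSourceRanges:
      - 198.51.100.7/32
    selector:
      app: my-app
    ports:
      - port: 443
        targetPort: 8443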

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 22, 2018
@Raffo
Contributor

Raffo commented Aug 22, 2018

/remove-lifecycle stale

This is still relevant because #62774 is not merged yet.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 22, 2018
k8s-ci-robot added a commit that referenced this issue Oct 1, 2018
Fixes #49445 by not adding the default SG when using SG annotation (AWS)
@rehevkor5

This is a good change. The situation pre-1.13.0 is somewhat surprising because explicitly setting loadBalancerSourceRanges to an empty array like so:

  loadBalancerSourceRanges: []

has no effect, and the 0.0.0.0/0 Security Group Ingress is still added.
