Do not modify ELB security group #49445
Comments
/sig aws
PR #49805 may help with this.
This problem could be really critical if one has the Kubernetes dashboard installed. If for some reason the
Just had some downtime on a production cluster because of this. I really need it too ;)
+1 |
In my view, this isn't a feature; it's a kind of bug.
Some comments on #62774 may be relevant. Please comment on whether an alternative annotation that replaced the default security groups (as opposed to adding to them) would or wouldn't meet your use case: #62774 (comment)
In my case, I don't add any annotation; I just want to change the ELB's security group rules. Is this by design?
I guess this is part of how the controller works. If you look at #62774, the PR fixes this problem by allowing you to specify the behaviour you want with an annotation. The way Kubernetes works, the ELB is owned by Kubernetes and you should never need to modify the resource manually; doing so ultimately will not work anyway.
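For reference, a minimal sketch of what the annotation route looks like on a `Service` manifest. The `aws-load-balancer-security-groups` annotation is the replacing variant discussed around #62774 (as opposed to `aws-load-balancer-extra-security-groups`, which only appends); the security group ID below is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Replaces the controller-managed security groups instead of
    # adding to them (the behaviour discussed in #62774).
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-0123456789abcdef0"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```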
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
This is still relevant because #62774 is not merged yet.
Fixes #49445 by not adding the default SG when using SG annotation (AWS)
This is a good change. The situation pre-1.13.0 is somewhat surprising, because explicitly setting the annotation has no effect, and the
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:
We run Kubernetes on AWS. We are trying out the `LoadBalancer` service type. We use Terraform heavily for managing infrastructure on AWS, so we have several shared security groups with "perfect" rules. One of them is supposed to be attached to ELBs so that our own IPs, partners, etc. are whitelisted. Right now, there doesn't seem to be a way for Kubernetes to use that security group as is. There are several alternatives:

1. Use the `service.beta.kubernetes.io/aws-load-balancer-extra-security-groups` annotation. However, Kubernetes still tries to create a security group first, or use a global security group. If I don't provide any IP range, Kubernetes will whitelist `0.0.0.0/0`. This seems insecure.
2. Use `service.beta.kubernetes.io/load-balancer-source-ranges` or `service.Spec.LoadBalancerSourceRanges`. Then Kubernetes will create a security group for this ELB with the given IPs. This means I need to duplicate my IPs in all microservices.
3. Use `service.beta.kubernetes.io/load-balancer-source-ranges` or `service.Spec.LoadBalancerSourceRanges`, and use a global security group. Then Kubernetes will modify the global SG with the given IPs. This seems problematic when lots of microservices share one SG but some want different IPs whitelisted.

I want to entertain the idea below:
Accept existing security group ID(s), configured on `kube-controller-manager` or on the `Service` object, which Kubernetes will take and attach to ELBs as is.

What you expected to happen:
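To make the alternatives above concrete, here is a hedged sketch of a `Service` manifest combining the extra-security-groups annotation with source ranges; the security group ID and CIDR ranges are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Appends to (does not replace) the security group Kubernetes creates.
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-0abc123def456789a"
spec:
  type: LoadBalancer
  # Equivalent to the load-balancer-source-ranges annotation; without this,
  # the controller-created security group whitelists 0.0.0.0/0.
  loadBalancerSourceRanges:
    - "203.0.113.0/24"
    - "198.51.100.0/24"
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```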
I can give Kubernetes existing security group(s), and Kubernetes just attaches them to the managed ELBs without modifying their rules.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
It may already be possible to achieve this and I'm simply not aware of it. Or there may be other concerns that led Kubernetes not to implement this feature.
Environment:
- Kubernetes version (use `kubectl version`): v1.7.1
- OS / kernel (e.g. `uname -a`):