Labels: bug (Something isn't working)
Description
When using Helm on OpenShift (on AWS), I see the following behavior:
- I see the following warnings when I install:
```
W1129 14:23:48.829438 22854 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), allowPrivilegeEscalation != false (containers "sniffer", "tracer" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "sniffer", "tracer" must not include "CHECKPOINT_RESTORE", "DAC_OVERRIDE", "NET_ADMIN", "NET_RAW", "SYS_ADMIN", "SYS_MODULE", "SYS_PTRACE", "SYS_RESOURCE" in securityContext.capabilities.add), restricted volume types (volumes "proc", "sys" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "sniffer", "tracer" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "sniffer", "tracer" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
W1129 14:23:48.907067 22854 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kubeshark-hub" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "kubeshark-hub" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kubeshark-hub" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "kubeshark-hub" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
W1129 14:23:48.908988 22854 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kubeshark-front" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "kubeshark-front" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kubeshark-front" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "kubeshark-front" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
```
- When I try to uninstall with Helm, I see this:
```
helm uninstall kubeshark-os
Error: failed to delete release: kubeshark-os
```
- When I try to install again, using the same name, I see this:
```
helm install kubeshark-os . --set tap.proxy.worker.srvPort=30001
Error: INSTALLATION FAILED: cannot re-use a name that is still in use
```
Hence, I can't install Kubeshark twice in the same namespace.
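As a workaround for the name being stuck (this does not fix the underlying uninstall failure), Helm 3 records release state in Secrets labeled `owner=helm` in the release's namespace; deleting those Secrets frees the name. A sketch, assuming the release was installed into `default` (substitute your actual namespace):

```shell
# Helm 3 stores release state in Secrets labeled owner=helm; listing them
# shows what record of the stuck "kubeshark-os" release remains.
kubectl get secrets -n default -l "owner=helm,name=kubeshark-os"

# Deleting those Secrets removes Helm's record of the release, freeing the
# name for a fresh "helm install". Any leftover Kubeshark workloads and
# RBAC objects still have to be cleaned up separately.
kubectl delete secrets -n default -l "owner=helm,name=kubeshark-os"
```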
- Similar problems occur when installing with the CLI.
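For the `kubeshark-hub` and `kubeshark-front` containers, the PodSecurity warnings above spell out exactly which fields the "restricted" profile requires. A sketch of a compliant container-level `securityContext` (these are standard Kubernetes fields, not a confirmed option of the Kubeshark chart) would be:

```yaml
# Container-level securityContext satisfying the "restricted" PodSecurity
# profile, per the warnings for kubeshark-hub / kubeshark-front.
securityContext:
  runAsNonRoot: true                # refuse to run as UID 0
  allowPrivilegeEscalation: false   # no setuid-style privilege gain
  capabilities:
    drop: ["ALL"]                   # drop every Linux capability
  seccompProfile:
    type: RuntimeDefault            # runtime's default seccomp filter
```

The `sniffer` and `tracer` containers can't be made compliant this way: per the warnings they need `hostNetwork`, `hostPath` volumes, and capabilities such as `NET_ADMIN`, which the restricted profile forbids outright.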
TBD
Ensure eBPF/OpenShift support