Cluster information:

Kubernetes version: 1.28.8

Installation method: kubeadm

Host OS: AlmaLinux release 8.8

Hello. I am fairly new to Kubernetes and I am having trouble with Kubernetes Dashboard, installed via its Helm chart (chart version 7.5.0).

The commands I used to install the dashboard (the installation itself succeeded):

cat > dashboard-replace.yaml << EOF
auth:
  image:
    repository: some-proxy.com/docker.io/kubernetesui/dashboard-auth
    tag: "1.1.3"
api:
  image:
    repository: some-proxy.com/docker.io/kubernetesui/dashboard-api
    tag: "1.7.0"
web:
  image:
    repository: some-proxy.com/docker.io/kubernetesui/dashboard-web
    tag: "1.4.0"
metricsScraper:
  image:
    repository: some-proxy.com/docker.io/kubernetesui/dashboard-metrics-scraper
    tag: "1.1.1"
kong:
  enabled: true
  env:
    dns_order: LAST,A,CNAME,AAAA,SRV
    plugins: 'off'
    nginx_worker_processes: 1
  ingressController:
    enabled: false
  dblessConfig:
    configMap: kong-dbless-config
  image:
    repository: some-proxy.com/docker.io/library/kong
    tag: "3.6"
  proxy:
    type: NodePort
    http:
      enabled: true
      servicePort: 80
      nodePort: 30080
EOF

helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard --version 7.5.0 -f dashboard-replace.yaml
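
This assumes the chart repository was already added (helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/). To double-check that the overrides took effect, the release can be inspected afterwards with standard Helm subcommands:

# Show the user-supplied values and the overall release status
helm get values kubernetes-dashboard -n kubernetes-dashboard
helm status kubernetes-dashboard -n kubernetes-dashboard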

The relevant information is as follows:

[root@k8s-master ~]# kubectl get svc -o wide -n kubernetes-dashboard
NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE   SELECTOR
kubernetes-dashboard-api               ClusterIP   10.101.7.250     <none>        8000/TCP                        28h   app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-api,app.kubernetes.io/part-of=kubernetes-dashboard
kubernetes-dashboard-auth              ClusterIP   10.105.254.2     <none>        8000/TCP                        28h   app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-auth,app.kubernetes.io/part-of=kubernetes-dashboard
kubernetes-dashboard-kong-manager      NodePort    10.96.192.174    <none>        8002:31590/TCP,8445:32548/TCP   28h   app.kubernetes.io/component=app,app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kong
kubernetes-dashboard-kong-proxy        NodePort    10.105.179.142   <none>        80:30080/TCP,443:30771/TCP      28h   app.kubernetes.io/component=app,app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kong
kubernetes-dashboard-metrics-scraper   ClusterIP   10.104.158.240   <none>        8000/TCP                        28h   app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-metrics-scraper,app.kubernetes.io/part-of=kubernetes-dashboard
kubernetes-dashboard-web               ClusterIP   10.107.115.229   <none>        8000/TCP                        28h   app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-web,app.kubernetes.io/part-of=kubernetes-dashboard
[root@k8s-master ~]# kubectl get po -o wide -n kubernetes-dashboard
NAME                                                    READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
kubernetes-dashboard-api-77fbd6677b-2w4wj               1/1     Running   0          7h20m   10.244.107.204   k8s-node3   <none>           <none>
kubernetes-dashboard-auth-65fdd774d5-btr2w              1/1     Running   0          7h20m   10.244.169.143   k8s-node2   <none>           <none>
kubernetes-dashboard-kong-758655fdfb-qp6gl              1/1     Running   0          4m34s   10.244.36.83     k8s-node1   <none>           <none>
kubernetes-dashboard-metrics-scraper-76dbcb7ff5-kgzsr   1/1     Running   0          7h20m   10.244.36.82     k8s-node1   <none>           <none>
kubernetes-dashboard-web-56b6945778-bbwt6               1/1     Running   0          7h20m   10.244.169.144   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES           AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master   Ready    control-plane   130d   v1.28.9   192.168.110.165   <none>        AlmaLinux 8.8 (Sapphire Caracal)   5.4.273-1.el8.elrepo.x86_64   containerd://1.7.16
k8s-node1    Ready    <none>          130d   v1.28.9   192.168.110.166   <none>        AlmaLinux 8.8 (Sapphire Caracal)   5.4.273-1.el8.elrepo.x86_64   containerd://1.7.16
k8s-node2    Ready    <none>          130d   v1.28.9   192.168.110.167   <none>        AlmaLinux 8.8 (Sapphire Caracal)   5.4.273-1.el8.elrepo.x86_64   containerd://1.7.16
k8s-node3    Ready    <none>          130d   v1.28.9   192.168.110.168   <none>        AlmaLinux 8.8 (Sapphire Caracal)   5.4.273-1.el8.elrepo.x86_64   containerd://1.7.16
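
As an additional sanity check (output not shown here), the Service-to-pod wiring can be verified by listing the endpoints behind each Service:

kubectl get endpoints -n kubernetes-dashboard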

Then, when I visit either of the following pages:

http://192.168.110.166:30080

or

https://192.168.110.166:30771

Both return the same error message:

Error
An invalid response was received from the upstream server.
request_id: a05efe41b2c3765915ea8a38a1c21262
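
The Kong pod logs should show the failed upstream attempts. Assuming the Deployment is named kubernetes-dashboard-kong (matching the pod name above), they can be pulled with:

kubectl logs -n kubernetes-dashboard deployment/kubernetes-dashboard-kong --tail=50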

I checked the firewall configuration of each node as follows:

[root@k8s-master ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens160
  sources: 
  services: http https
  ports: 49156/tcp 53/tcp 53/udp 179/tcp 2379-2380/tcp 5473/tcp 6443/tcp 10250-10252/tcp 4789/udp 8285/udp 8472/udp 30000-32767/tcp
  protocols: 
  forward: yes
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

The output of firewall-cmd --list-all on the other three nodes is:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens160
  sources: 
  services: http https
  ports: 49156/tcp 53/tcp 53/udp 179/tcp 10250-10252/tcp 30000-32767/tcp 4789/udp 6443/tcp
  protocols: 
  forward: yes
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

The following commands produce the same output on all nodes:

[root@k8s-master ~]# firewall-cmd --get-active-zones
public
  interfaces: ens160
[root@k8s-master ~]# firewall-cmd --zone=public --list-interfaces
ens160
[root@k8s-master ~]# firewall-cmd --get-default-zone
public

The output of sysctl -p on all nodes is:

fs.suid_dumpable = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.sysrq = 0
kernel.randomize_va_space = 2
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_max_syn_backlog = 262144
net.core.somaxconn = 65535
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.ip_local_port_range = 1024 65535
vm.overcommit_memory = 1
vm.max_map_count = 655360
vm.zone_reclaim_mode = 0
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
fs.file-max = 52706963
fs.nr_open = 52706963
net.core.netdev_max_backlog = 32768
net.ipv4.tcp_max_orphans = 3276800
net.netfilter.nf_conntrack_max = 8388608
net.netfilter.nf_conntrack_tcp_timeout_established = 3600
fs.inotify.max_user_watches = 89100
vm.panic_on_oom = 0
vm.swappiness = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.forwarding = 1
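
To narrow down where traffic is being dropped, a throwaway pod can probe one of the ClusterIP services directly (the dashboard-web Service listens on port 8000 per the output above; curlimages/curl is just an example client image):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -m 5 -v http://kubernetes-dashboard-web.kubernetes-dashboard.svc.cluster.local:8000/

If this times out, cross-node pod or Service traffic is being dropped somewhere below the NodePort layer.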

If I temporarily stop the firewall service on all nodes:

systemctl stop firewalld

then I find that the dashboard can be accessed successfully at:
https://192.168.110.166:30771
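
Since stopping firewalld fixes it, I suspect the NodePort itself is fine (30000-32767/tcp is open) and that firewalld is instead dropping the cross-node pod or Service traffic that Kong needs to reach its upstreams. One candidate fix I have seen suggested (untested here) is to trust the cluster-internal CIDRs on every node; the pod IPs above fall in 10.244.0.0/16 and the ClusterIPs in 10.96.0.0/12, the kubeadm defaults:

# CIDRs below are assumptions inferred from the pod and Service IPs above
firewall-cmd --permanent --zone=trusted --add-source=10.244.0.0/16
firewall-cmd --permanent --zone=trusted --add-source=10.96.0.0/12
firewall-cmd --reload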

Where is the firewall misconfigured?

How can it be resolved?

Any help appreciated!