How to get a response from a Kubernetes Nginx ingress
I'm having trouble reaching the cluster through the external IP provided by the load balancer.
I set up k8s locally with kubeadm on CentOS 8. My control-plane nodes use keepalived and haproxy for high availability. I'm also using MetalLB and the Calico CNI.
The end goal is to run multiple Elasticsearch pods on each node.
keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 151
    priority 255
    mcast_src_ip 172.16.93.11
    authentication {
        auth_type PASS
        auth_pass test1234
    }
    unicast_peer {
        172.16.93.12
        172.16.93.13
    }
    virtual_ipaddress {
        172.16.93.14/24
    }
    track_script {
        check_apiserver
    }
}
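The check script referenced by `vrrp_script` above isn't reproduced here. A typical check_apiserver.sh, modeled on the kubeadm high-availability guide, looks roughly like this (the VIP 172.16.93.14 and port 6443 are taken from this setup; the exact script contents are an assumption):

```shell
#!/bin/sh
# Fail the VRRP health check when the apiserver stops answering locally,
# and also via the VIP when this node currently holds it.

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null \
    || errorExit "Error GET https://localhost:6443/"
if ip addr | grep -q 172.16.93.14; then
    curl --silent --max-time 2 --insecure https://172.16.93.14:6443/ -o /dev/null \
        || errorExit "Error GET https://172.16.93.14:6443/"
fi
```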
haproxy.cfg
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    #chroot     /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    #user       haproxy
    #group      haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# apiserver frontend which proxies to the masters
#---------------------------------------------------------------------
frontend http_stats
    bind *:8080
    mode tcp
    stats uri /haproxy?stats

frontend apiserver
    bind *:6443
    mode tcp
    option tcplog
    timeout client 10800s
    default_backend controlPlanes

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend controlPlanes
    option httpchk GET /healthz
    option ssl-hello-chk
    mode tcp
    timeout server 10800s
    balance roundrobin
    server master-node-1 172.16.93.11:6444 check
    server master-node-2 172.16.93.12:6444 check
    server master-node-3 172.16.93.13:6444 check
All control-plane and worker nodes were joined through the virtual IP defined in keepalived.conf (172.16.93.14) on port 6443.
For testing purposes, I have a deployment that creates 5 replicas of a basic nginx pod.
red-deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red-test
  labels:
    deploymentColor: red-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      podColor: red
  template:
    metadata:
      labels:
        podColor: red
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
The deployment was exposed on port 80 with:
kubectl expose deploy red-test --port 80
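To confirm the Service actually selects the nginx pods, the endpoints can be checked; each of the 5 replicas' pod IPs should be listed:

```shell
# Each ready replica should appear as an <podIP>:80 endpoint
kubectl get endpoints red-test
kubectl describe service red-test
```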
Repository for the nginx ingress controller: https://github.com/kubernetes/ingress-nginx
Installed the nginx-ingress controller via Helm with the '--rbac.create=true' flag.
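For reference, the install was roughly along these lines (the release name `my-release` matches the Service below; the exact repo URL and flag spelling are assumptions):

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install my-release ingress-nginx/ingress-nginx --set rbac.create=true
```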
service/ingress-nginx-controller.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: my-release
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2020-10-30T18:24:23Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.40.2
    helm.sh/chart: ingress-nginx-3.7.1
  name: my-release-ingress-nginx-controller
  namespace: default
  resourceVersion: "3456923"
  selfLink: /api/v1/namespaces/default/services/my-release-ingress-nginx-controller
  uid: 6f223a76-01e2-4bb2-b4d2-dd5f8b03c8b5
spec:
  clusterIP: 10.109.91.181
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 32275
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 32528
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 172.16.93.50
nginx-ingress version
NGINX Ingress controller
Release: v0.40.2
Build: fc4ccc5eb0e41be2436a978b01477fc354f31643
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.3
my-ingress-def.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: red-test-ing
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: red.test
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: red-test
            port:
              number: 80
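A useful intermediate test is to curl the controller's ClusterIP (10.109.91.181, from the Service above) from one of the nodes. This exercises the Ingress rule and the controller without going through MetalLB at all:

```shell
# Run on any cluster node; bypasses MetalLB and hits the controller Service directly.
# A 200 from the nginx pod means the Ingress rule itself is fine.
curl -v -H 'Host: red.test' http://10.109.91.181/
```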
metallb-configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.93.50-172.16.93.100
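In layer2 mode, MetalLB elects one node to answer ARP for each external IP, so the pool must sit on the same L2 segment as the nodes (it does here: everything is in 172.16.93.0/24). A quick sanity check from another host on that subnet (the interface name is an assumption):

```shell
# 172.16.93.50 should resolve to the MAC of whichever node MetalLB elected
arping -I eth0 172.16.93.50
ip neigh show 172.16.93.50
```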
kubectl get output
--svc--
service/my-release-ingress-nginx-controller LoadBalancer 10.109.91.181 172.16.93.50 80:32275/TCP,443:32528/TCP
service/my-release-ingress-nginx-controller-admission ClusterIP 10.106.163.96 <none> 443/TCP
service/red-test ClusterIP 10.108.176.214 <none> 80/TCP
--ing--
red-test-ing <none> red.test 172.16.93.50 80
red.test has been added to my /etc/hosts file. I've tried curl-ing every combination of hostname, port, nodePort, IP address, etc. I either get a timeout or "no route to host".
curl http://172.16.93.50:32275/ -v -H 'Host: red.test' - no route to host
curl http://red.test/ - timeout
curl http://172.16.93.50/ - timeout
etc, etc...
I have already run iptables -A FORWARD -j ACCEPT and iptables -P FORWARD ACCEPT on all nodes. I can reach the pods through a plain NodePort service.
I'm still fairly new to k8s and DevOps in general. Let me know if more info is needed. Thanks in advance.
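For completeness, the ingress controller can also be reached through its own NodePort (32275 for HTTP, from the Service above) directly on a node IP. This helps isolate whether the failure is in MetalLB's address announcement or in the ingress path itself (the node IP below is master-node-1 from the HA setup):

```shell
# Hits the controller on a node, bypassing the MetalLB-assigned external IP
curl -v -H 'Host: red.test' http://172.16.93.11:32275/
```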