I installed Primary-Remote across different networks, but cross-cluster pod access fails

Following the official documents, I deployed two Kubernetes clusters and installed Istio in the Primary-Remote model across different networks.

I found that the outbound endpoints on my pod are the pod IPs of both clusters. This seems to mean that cross-cluster requests are not flowing through the east-west gateway into the other cluster, which does not match the networking model described in the official documentation.

Can anyone tell me why this happens and how to fix it?

When installing on cluster2, I made some changes to the IstioOperator to fit my resource constraints. I used this command to export the istio-eastwestgateway from cluster1.

### cluster2's IstioOperator
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: remote
  values:
    global:
      imagePullPolicy: "IfNotPresent"
      proxy:
        resources:
          requests:
            cpu: 0m
            memory: 40Mi
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: 192.168.79.78
  components:
    ingressGateways:
    - name: istio-ingressgateway
      k8s:
        resources:
          requests:
            cpu: 0m
            memory: 40Mi
    pilot:
      k8s:
        env:
          - name: PILOT_TRACE_SAMPLING
            value: "100"
        resources:
          requests:
            cpu: 0m
            memory: 100Mi

### cluster2's eastwest-gateway
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: eastwest
spec:
  profile: empty
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network2
        enabled: true
        k8s:
          resources:
            requests:
              cpu: "0m"
          env:
            # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            # traffic through this gateway should be routed inside the network
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network2
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: mtls
                port: 15443
                targetPort: 15443
              - name: tcp-istiod
                port: 15012
                targetPort: 15012
              - name: tcp-webhook
                port: 15017
                targetPort: 15017
  values:
    global:
      #jwtPolicy: first-party-jwt
      meshID: mesh1
      network: network2
      multiCluster:
        clusterName: cluster2
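
Alongside these two IstioOperator files, the official multi-network guide also requires labeling cluster2's istio-system namespace with its network, registering cluster2 with cluster1's control plane, and exposing mesh services through the east-west gateway. A sketch of those steps for reference; the context names and the sample file path are assumptions based on this setup, matching the `${CTX_CLUSTER1}`/`${CTX_CLUSTER2}` variables used below:

```shell
# istiod decides whether to route to an endpoint through a gateway based
# on this namespace label; if it is missing, cross-network endpoints are
# programmed as raw pod IPs.
kubectl --context="${CTX_CLUSTER2}" label namespace istio-system \
  topology.istio.io/network=network2

# Primary-Remote: give cluster1's control plane API access to cluster2.
istioctl x create-remote-secret --context="${CTX_CLUSTER2}" --name=cluster2 | \
  kubectl apply -f - --context="${CTX_CLUSTER1}"

# Expose services on the east-west gateways; in Primary-Remote the Gateway
# resource is applied once in the primary (config) cluster.
kubectl --context="${CTX_CLUSTER1}" apply -n istio-system \
  -f samples/multicluster/expose-services.yaml
```

If any of these steps were skipped, endpoints end up as unreachable remote pod IPs rather than the gateway address.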

After installing both clusters, I followed the verification document and found that pods in different clusters cannot communicate with each other. The cross-cluster results are as follows:

[root@localhost k8s_ctx]# kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
>     "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
>     app=sleep -o jsonpath='{.items[0].metadata.name}')" \
>     -- curl helloworld.sample:5000/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    60  100    60    0     0    674      0 --:--:-- --:--:-- --:--:--   674
Hello version: v1,instance: helloworld-v1-5897696f47-5lsqp
[root@localhost k8s_ctx]# 
[root@localhost k8s_ctx]# 
[root@localhost k8s_ctx]# kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep     "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')"     -- curl helloworld.sample:5000/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    84  100    84    0     0      8      0  0:00:10  0:00:10 --:--:--    22
upstream connect error or disconnect/reset before headers. reset reason: local reset
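
When the cross-cluster curl fails like this with a local reset, a first check is whether the east-west gateway actually has an address reachable from the other network and whether the network label is in place. The gateway name and namespace below are assumptions taken from the configs above:

```shell
# The east-west gateway must expose an externally reachable address; a
# <pending> EXTERNAL-IP (common with LoadBalancer on bare metal) breaks
# cross-network routing.
kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system

# Confirm the network label the control plane keys off of is present.
kubectl --context="${CTX_CLUSTER2}" get namespace istio-system --show-labels
```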

Here is my cluster and endpoint information:

[root@localhost istio-1.8.1]# istioctl version
client version: 1.8.1
control plane version: 1.8.1
data plane version: 1.8.1 (8 proxies)
[root@localhost istio-1.8.1]# istioctl pc endpoint  sleep-854565cb79-77lt7 --port 5000 
ENDPOINT                 STATUS      OUTLIER CHECK     CLUSTER
192.167.102.190:5000     HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local
192.169.169.7:5000       HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local

[root@localhost istio-1.8.1]# kubectl --context cluster1 get po -o wide 
NAME                             READY   STATUS    RESTARTS   AGE   IP                NODE                    NOMINATED NODE   READINESS GATES
helloworld-v1-5897696f47-5lsqp   2/2     Running   0          73m   192.167.102.190   localhost.localdomain   <none>           <none>
sleep-854565cb79-77lt7           2/2     Running   0          73m   192.167.102.130   localhost.localdomain   <none>           <none>
[root@localhost istio-1.8.1]# kubectl --context cluster2 get po -o wide 
NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
helloworld-v2-7bbf4994d7-k577f   2/2     Running   0          73m   192.169.169.7    node-79-79   <none>           <none>
sleep-8f795f47d-74qgz            2/2     Running   0          73m   192.169.169.21   node-79-79   <none>           <none>
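
For comparison, when multi-network routing works, the endpoint listing for a service with remote instances should show the other cluster's east-west gateway address on port 15443 instead of the raw remote pod IP. A way to re-check from cluster1, reusing the pod name from the output above:

```shell
# Expected: one local pod IP on :5000, plus the remote cluster's
# east-west gateway address on :15443 for the cluster2 instance.
istioctl --context="${CTX_CLUSTER1}" proxy-config endpoint \
  sleep-854565cb79-77lt7 --port 5000
```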
