Solved: Nginx proxy_bind failure

All:

I am trying to configure nginx, using the listen and proxy_bind directives, to reverse proxy from the same internal host address it listens on (192.168.0.2:443), binding the outgoing proxy connections to that internal address on a separate port (192.168.0.2:12345).

# /opt/sbin/nginx -v
nginx version: nginx/1.19.2 (x86_64-pc-linux-gnu)

# ifconfig br0:0 192.168.0.2

# opkg install nginx-ssl
# opkg install ca-bundle
# cat /opt/etc/nginx/nginx.conf
user admin root;
#user nobody;
worker_processes  1;

events {
    worker_connections  64;
}

http {
    # HTTPS server

    server {
        listen       192.168.0.2:443 ssl;
        server_name  $host;

        ssl_certificate        /etc/cert.pem;
        ssl_certificate_key    /etc/key.pem;
        ssl_client_certificate /opt/etc/ssl/cert.pem;
        ssl_verify_client optional;
        ssl_verify_depth 2;

        proxy_ssl_server_name      on;

        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;

        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

        location / {
    #        root   html;
    #        index  index.html index.htm;
            resolver 103.86.99.100;
    #        proxy_bind 192.168.0.2:12345;
            proxy_bind $server_addr:12345;
    #        proxy_bind $remote_addr:12345 transparent;
            proxy_pass $scheme://$host;
        }
    }
}
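One of the commented-out variants above, proxy_bind $remote_addr transparent;, has extra requirements: per the nginx proxy_bind documentation, the transparent parameter usually requires the worker processes to run with superuser privileges, and the host must be configured to route the non-local source address. A minimal sketch of that variant (the https://example.com upstream is a placeholder, not from the original config):

```nginx
user  root;                  # transparent binding usually needs superuser workers
worker_processes  1;

events {
    worker_connections  64;
}

http {
    server {
        listen  192.168.0.2:443 ssl;
        ssl_certificate      /etc/cert.pem;
        ssl_certificate_key  /etc/key.pem;

        location / {
            # Use the client's address as the upstream connection source;
            # requires matching TPROXY-style routing rules on the host.
            proxy_bind  $remote_addr transparent;
            proxy_pass  https://example.com;
        }
    }
}
```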

I tried changing user admin root; (admin is the root-equivalent user on this router). I tried different combinations of proxy_bind 192.168.0.2;, proxy_bind 192.168.0.2 transparent;, proxy_bind $server_addr;, and proxy_bind $server_addr transparent;. Verifying with tcpdump, none of them appear to work: nginx always uses the external WAN address (100.64.8.236).
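Since nginx keeps using the WAN address, it can help to confirm which source address the kernel itself would pick for the upstream route; when a bind does not take effect, the connection inherits this default. A quick diagnostic, assuming the iproute2 `ip` tool is available on the router (104.27.161.206 is the upstream address seen in the captures below):

```shell
# Ask the kernel which route and source address it would use for the
# upstream. If proxy_bind is not taking effect, nginx's connections
# will carry this default source address.
ip route get 104.27.161.206
```

If the output shows `src 100.64.8.236`, the default route via vlan2 is choosing the WAN address, which matches what tcpdump shows.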

ifconfig output:

# ifconfig
br0       Link encap:Ethernet  HWaddr C0:56:27:D1:B8:A4
          inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:10243803 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5440860 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:14614392834 (13.6 GiB)  TX bytes:860977246 (821.0 MiB)

br0:0     Link encap:Ethernet  HWaddr C0:56:27:D1:B8:A4
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1

vlan2     Link encap:Ethernet  HWaddr C0:56:27:D1:B8:A4
          inet addr:100.64.8.236  Bcast:100.64.15.255  Mask:255.255.248.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1757588 errors:0 dropped:0 overruns:0 frame:0
          TX packets:613625 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2267961441 (2.1 GiB)  TX bytes:139435610 (132.9 MiB)

route output:

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.10.0.17      *               255.255.255.255 UH    0      0        0 tun12
89.38.98.142    100.64.8.1      255.255.255.255 UGH   0      0        0 vlan2
100.64.8.1      *               255.255.255.255 UH    0      0        0 vlan2
10.15.0.65      *               255.255.255.255 UH    0      0        0 tun11
192.168.2.1     *               255.255.255.255 UH    0      0        0 vlan3
51.68.180.4     100.64.8.1      255.255.255.255 UGH   0      0        0 vlan2
192.168.2.0     *               255.255.255.0   U     0      0        0 vlan3
192.168.0.0     *               255.255.255.0   U     0      0        0 br0
100.64.8.0      *               255.255.248.0   U     0      0        0 vlan2
127.0.0.0       *               255.0.0.0       U     0      0        0 lo
default         100.64.8.1      0.0.0.0         UG    0      0        0 vlan2

tcpdump output:

Client Remote_Addr (192.168.0.154:$port) == request => Nginx reverse proxy server, listener (192.168.0.2:443)

07:19:06.840468  In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0
07:19:06.840468  In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 0

Nginx reverse proxy server, listener (192.168.0.2:443) == response => Client Remote_Addr (192.168.0.154:$port)

07:19:06.841377 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0
07:19:06.841411 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 0

Nginx reverse proxy server, sender (100.64.8.236:12345) == request => Upstream destination server, listener (104.27.161.206:443)

07:19:11.885314 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 76: 100.64.8.236.12345 > 104.27.161.206.443: Flags [S], seq 3472185855, win 5840, options [mss 1460, sackOK, TS val 331214 ecr 0, nop, wscale 4], length 0

Upstream destination server, listener (104.27.161.206:443) == response => Nginx reverse proxy server, sender (100.64.8.236:12345)

07:19:11.887683  In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 68: 104.27.161.206.443 > 100.64.8.236.12345: Flags [S.], seq 2113436779, ack 3472185856, win 65535, options [mss 1400, wscale 10], length 0

Note: the Nginx reverse proxy server (listener) and Nginx reverse proxy server (sender) MAC addresses belong to the same hardware.

07:19:06.840468  In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 0
07:19:06.841377 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 0
07:19:11.885314 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 0
07:19:11.887683  In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 0
07:19:11.887948 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 100.64.8.236.12345 > 104.27.161.206.443: Flags [.], ack 1, win 365, length 0
07:19:11.888854 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 264: 100.64.8.236.12345 > 104.27.161.206.443: Flags [P.], seq 1:209, length 208
07:19:11.890844  In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 62: 104.27.161.206.443 > 100.64.8.236.12345: Flags [.], ack 209, win 66, length 0
07:19:11.893154  In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 1516: 104.27.161.206.443 > 100.64.8.236.12345: Flags [.], seq 1:1461, length 1460
07:19:11.893316 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), ack 1461, win 548, length 0
07:19:11.893161  In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 1000: 104.27.161.206.443 > 100.64.8.236.12345: Flags [P.], seq 1461:2405, length 944

iptables output:

# iptables -t mangle -I PREROUTING -i vlan2 -p tcp -m multiport --dport 12345 -j MARK --set-mark 0x2000/0x2000
# iptables -t mangle -I POSTROUTING -o vlan2 -p tcp -m multiport --sport 12345 -j MARK --set-mark 0x8000/0x8000

Note: the packets are matched and marked, but they are not routed to the appropriate interface. I suspect the marking may be happening too late in the chain.
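A MARK by itself only tags the packet; the kernel will not change its route unless a policy-routing rule consumes the fwmark. A minimal sketch of the companion iproute2 rules (table number 100 is an arbitrary assumption; the post does not show the poster's actual rule set):

```shell
# Route packets carrying the 0x2000 fwmark through a dedicated table
# whose default route points at the OpenVPN tunnel interface.
ip rule add fwmark 0x2000/0x2000 table 100
ip route add default dev tun12 table 100
ip route flush cache   # apply immediately on older kernels
```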

# iptables -t mangle -L -v -n
Chain PREROUTING (policy ACCEPT 5506K packets, 8051M bytes)
 pkts bytes target     prot opt in     out     source               destination
   33 15329 MARK       tcp  --  vlan2  *       0.0.0.0/0            0.0.0.0/0            multiport dports 12345 MARK or 0x2000

Chain POSTROUTING (policy ACCEPT 2832K packets, 171M bytes)
 pkts bytes target     prot opt in     out     source               destination
   30  4548 MARK       tcp  --  *      vlan2   0.0.0.0/0            0.0.0.0/0            multiport sports 12345 MARK or 0x8000

The reverse proxy request makes it to the destination and back, but it uses the external WAN address (100.64.8.236:12345) rather than the internal host address (192.168.0.2:12345).

Edit: when initiating traffic, I see a single SYN packet sent from 192.168.0.2:12345 (the nginx worker). There is no response and nothing further. I wonder: since 192.168.0.2:12345 never receives a SYN,ACK, is 100.64.8.236:12345 simply the next available interface address the nginx worker falls back to?

# netstat -anp|grep 12345
tcp 0 1 192.168.0.2:12345 172.64.163.36:443 SYN_SENT 14176/nginx: worker

My goal is to use iptables to mark the 192.168.0.2:12345 packets so they are routed over an established OpenVPN tunnel (tun12).

The proxy_bind directive appears to be failing.

Any ideas?

Thanks!

Gary

Solution

All:

After reviewing the workflow of the iptables chains, I discovered that the Nginx worker's traffic toward the external interface (192.168.0.2:12345) is handled by the OUTPUT chain.

                                                                                 (192.168.0.2:12345) OUTPUT ==>
    (192.168.0.154:$port) PREROUTING ==>                                   (100.64.8.236:12345) POSTROUTING ==>
Windows Client (192.168.0.154:$port) ==> Nginx Master (192.168.0.2:443) | Nginx Worker (100.64.8.236:12345) ==> Upstream Destination Server (104.27.161.206:443)
                                     <== POSTROUTING (192.168.0.2:443)                                      <== PREROUTING (104.27.161.206:443)

Once the appropriate iptables OUTPUT rule was added with the correct interface (vlan2), the packets leaving the Nginx worker (100.64.8.236:12345) were properly marked and routed into the OpenVPN tunnel.

# iptables -t mangle -I OUTPUT -o vlan2 -p tcp -m multiport --sport 12345 -j MARK --set-mark 0x2000/0x2000
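One way to confirm the fix with the same tools used above: capture on both interfaces and check that the port-12345 traffic now leaves via the tunnel rather than the WAN (interface names as in the post):

```shell
# Marked, re-routed upstream traffic should now appear on the tunnel...
tcpdump -ni tun12 'tcp port 12345'
# ...and no longer on the WAN interface:
tcpdump -ni vlan2 'tcp port 12345'
```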

Now I just need to figure out the Nginx SSL client CA trust configuration and we'll be in business.

Hope this helps someone down the road.

Cheers,

Gary
