Netty: best way to create a stateful, synchronous outbound pipeline?


Using Netty, I receive multiple asynchronous messages from a framework on multiple threads. I need to send these messages to a network device (over UDP) that uses a synchronous, stateful protocol. So I need a state variable and to allow only one message out at a time, and a send should only happen while the client is in the "idle" state.

In addition, the state machine will need to send its own internally generated messages (retries) ahead of whatever is waiting in the queue. For that use case I know how to inject a message into the pipeline, and that approach works as long as outbound messages can be held at the head of the pipeline.
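For reference, such an injection can be done by writing through the handler's own ChannelHandlerContext rather than through the Channel, so the outbound pass starts at that handler's position instead of at the tail of the pipeline. A minimal sketch, not the author's actual code; retryMessage and the listener body are placeholders:

// Writing via the handler's own context starts the outbound pass at this
// handler's position in the pipeline, so the retry is not queued behind
// application writes that enter the pipeline through channel.write().
ctx.writeAndFlush(retryMessage).addListener(future -> {
    if (!future.isSuccess()) {
        // placeholder: schedule another retry or surface the failure
    }
});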

Any ideas on how to gate the output based on the client state?

TIA

Solution

I have come up with a proof of concept / proposed solution, but I am hoping someone knows a better way. Although it works as intended, it introduces a number of undesirable side effects that still need to be addressed.

  • Run the blocking write handler on its own thread (a separate executor).
  • If the state is not idle, put that thread to sleep, and wake it up when the state becomes idle.
  • If the state is idle, or once it becomes idle, send the message on its way.

Here is the bootstrap I am using:

public class UDPConnector {

    // Assumption: EPOLL was a flag defined elsewhere; the usual idiom is shown here.
    private static final boolean EPOLL = Epoll.isAvailable();

    private EventLoopGroup workerGroup;
    private EventExecutor blockingExecutor;
    private Bootstrap bootstrap;

    public void init() {

        this.workerGroup = EPOLL ? new EpollEventLoopGroup() : new NioEventLoopGroup();
        this.blockingExecutor = new DefaultEventExecutor();

        bootstrap = new Bootstrap()
                .channel(EPOLL ? EpollDatagramChannel.class : NioDatagramChannel.class)
                .group(workerGroup)
                .handler(new ChannelInitializer<DatagramChannel>() {
                    @Override
                    public void initChannel(DatagramChannel ch) throws Exception {
                        ch.pipeline().addLast("logging", new LoggingHandler());
                        ch.pipeline().addLast("encode", new RequestEncoderNetty());
                        ch.pipeline().addLast("decode", new ResponseDecoderNetty());
                        ch.pipeline().addLast("ack", new AckHandler());
                        // Registered with its own EventExecutor so that blocking in
                        // write() does not stall the channel's I/O event loop.
                        ch.pipeline().addLast(blockingExecutor, "blocking", new BlockingOutboundHandler());
                    }
                });
    }
}

The blocking outbound handler looks like this:

public class BlockingOutboundHandler extends ChannelOutboundHandlerAdapter {

    private final Logger logger = LoggerFactory.getLogger(BlockingOutboundHandler.class);
    private final AtomicBoolean isIdle = new AtomicBoolean(true);

    /** Called by the state machine once the device has acknowledged the outstanding request. */
    public void setIdle() {
        synchronized (this.isIdle) {
            logger.debug("setIdle() called");
            this.isIdle.set(true);
            this.isIdle.notify();
        }
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        synchronized (isIdle) {
            // Loop rather than a single if so a spurious wakeup cannot let a
            // write slip through while the state is still busy.
            while (!isIdle.get()) {
                logger.debug("write(): I/O State not Idle, Waiting");
                isIdle.wait();
                logger.debug("write(): Finished waiting on I/O State");
            }
            isIdle.set(false);
        }
        logger.debug("write(): {}", msg);
        ctx.write(msg, promise);
    }
}

Finally, when the state machine transitions to the idle state, the block is released:

Optional.ofNullable((BlockingOutboundHandler) ctx.pipeline().get("blocking")).ifPresent(h -> h.setIdle());
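
For completeness, a minimal sketch of where that release could live, assuming the "ack" handler is the part of the state machine that sees the device's acknowledgement (the Response message type is a placeholder, not the author's actual class):

public class AckHandler extends SimpleChannelInboundHandler<Response> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Response msg) {
        // ...drive the protocol state machine with the acknowledgement...

        // Once the ACK moves the state machine back to idle, release the
        // blocked writer so the next queued outbound message can proceed.
        Optional.ofNullable((BlockingOutboundHandler) ctx.pipeline().get("blocking"))
                .ifPresent(BlockingOutboundHandler::setIdle);
    }
}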

All of this keeps the outbound messages synchronized with the synchronous, stateful responses coming back from the device.

Of course, I would prefer not to have to deal with the extra thread and the synchronization it brings along. I also don't know what kind of "yet to be discovered" problems I will run into with this approach. One side effect is that the main handler is now accessed from multiple threads, which is just one more new problem to solve.

Next, a timeout and retry with backoff still have to be implemented; the thread cannot be left blocked indefinitely.
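
One way to bound the wait (a sketch only: the timeout value, the TimeoutException, and failing the promise instead of retrying are assumptions, not part of the code above) is to replace the unconditional wait() in write() with a deadline:

// Requires java.util.concurrent.TimeoutException and io.netty.util.ReferenceCountUtil.
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    final long timeoutMillis = 2_000L;                       // placeholder timeout
    final long deadline = System.currentTimeMillis() + timeoutMillis;
    synchronized (isIdle) {
        while (!isIdle.get()) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                // Give up instead of blocking the executor forever; a real
                // version could schedule a backoff retry before failing.
                ReferenceCountUtil.release(msg);
                promise.setFailure(new TimeoutException("device did not return to idle"));
                return;
            }
            isIdle.wait(remaining);
        }
        isIdle.set(false);
    }
    ctx.write(msg, promise);
}

For reference, here is the debug output from a test run in which two requests were queued back to back: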

15:07:16.539 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2] REGISTERED
15:07:16.540 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2] CONNECT: portserver1.tedworld.net/192.168.2.173:2102
15:07:16.541 [DEBUG] [internal.connection.UDPConnectorNetty] - connect(): connect() complete
15:07:16.541 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] ACTIVE
15:07:16.542 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: creating test message
15:07:16.543 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: sending test message
15:07:16.544 [DEBUG] [internal.connection.UDPConnectorNetty] - sendRequest: Adding msg to queue { super={ messageType=21,channelId=test,data= } }
15:07:16.546 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: Finished
15:07:16.545 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write { super={ messageType=21,data= } }
15:07:16.547 [DEBUG] [internal.connection.UDPConnectorNetty] - sendRequest: Adding msg to queue { super={ messageType=3F,channelId=lamp,data= } }
15:07:16.548 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write(): I/O State not Idle,Waiting
15:07:16.548 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] WRITE: 6B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 21 89 01 00 00 0a                               |!.....          |
+--------+-------------------------------------------------+----------------+
15:07:16.550 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] FLUSH
15:07:16.567 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ: DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010,PooledUnsafeDirectByteBuf(ridx: 0,widx: 6,cap: 2048)),6B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 06 89 01 00 00 0a                               |......          |
+--------+-------------------------------------------------+----------------+
15:07:16.568 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010,cap: 2048))
15:07:16.569 [DEBUG] [rojector.internal.protocol.AckHandler] - channelRead0 { super={ messageType=06,data= } }
15:07:16.570 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - setIdle called
15:07:16.571 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write(): Finished waiting on I/O State
15:07:16.571 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE
15:07:16.571 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write { super={ messageType=3F,data= } }
15:07:16.573 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] WRITE: 6B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 3f 89 01 50 57 0a                               |?..PW.          |
+--------+-------------------------------------------------+----------------+
15:07:16.573 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] FLUSH
15:07:16.587 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,6B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 06 89 01 50 57 0a                               |...PW.          |
+--------+-------------------------------------------------+----------------+
15:07:16.588 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010,cap: 2048))
15:07:16.589 [DEBUG] [rojector.internal.protocol.AckHandler] - channelRead0 { super={ messageType=06,data= } }
15:07:16.590 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - setIdle called
15:07:16.591 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE
15:07:16.592 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,widx: 7,7B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 40 89 01 50 57 30 0a                            |@..PW0.         |
+--------+-------------------------------------------------+----------------+
15:07:16.593 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010,cap: 2048))
15:07:16.594 [DEBUG] [.netty.channel.DefaultChannelPipeline] - Discarded inbound message { super={ messageType=40,data=30 } } that reached at the tail of the pipeline. Please check your pipeline configuration.
15:07:16.595 [DEBUG] [.netty.channel.DefaultChannelPipeline] - Discarded message pipeline : [logging,encode,decode,ack,blocking,DefaultChannelPipeline$TailContext#0]. Channel : [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102].
15:07:16.596 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2,L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE
 
