The create process of TcpClient in reactor-netty

This article takes a look at the create process of TcpClient in reactor-netty.
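Before diving into the source, here is a minimal usage sketch of the entry point, assuming the 0.7.x options-consumer overload of create (host, port, and class names are illustrative):

import reactor.ipc.netty.tcp.TcpClient;

public class CreateDemo {
    public static void main(String[] args) {
        // Building the client triggers the resource wiring traced below;
        // no connection is made yet -- that happens later via newHandler(...).
        TcpClient client = TcpClient.create(opts -> opts.host("localhost").port(8080));
    }
}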

maven

<dependency>
    <groupId>io.projectreactor.ipc</groupId>
    <artifactId>reactor-netty</artifactId>
    <version>0.7.3.RELEASE</version>
</dependency>

TcpClient

reactor-netty-0.7.3.RELEASE-sources.jar!/reactor/ipc/netty/tcp/TcpClient.java

protected TcpClient(TcpClient.Builder builder) {
    ClientOptions.Builder<?> clientOptionsBuilder = ClientOptions.builder();
    if (Objects.nonNull(builder.options)) {
        builder.options.accept(clientOptionsBuilder);
    }
    if (!clientOptionsBuilder.isLoopAvailable()) {
        clientOptionsBuilder.loopResources(TcpResources.get());
    }
    if (!clientOptionsBuilder.isPoolAvailable() && !clientOptionsBuilder.isPoolDisabled()) {
        clientOptionsBuilder.poolResources(TcpResources.get());
    }
    this.options = clientOptionsBuilder.build();
}
loopResources and poolResources are both created via TcpResources: the first TcpResources.get() call (for loopResources) creates the shared instance, and the second call (for poolResources) simply returns it.
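As a hedged illustration of the two branches above: if the caller supplies its own resources through the options, isLoopAvailable()/isPoolAvailable() return true and the builder never falls back to TcpResources.get(). Host, port, and names are illustrative:

import reactor.ipc.netty.resources.LoopResources;
import reactor.ipc.netty.resources.PoolResources;
import reactor.ipc.netty.tcp.TcpClient;

public class CustomResourcesDemo {
    public static void main(String[] args) {
        // Dedicated resources for this client, bypassing the shared TcpResources
        LoopResources loops = LoopResources.create("demo-tcp");
        PoolResources pools = PoolResources.elastic("demo-pool");
        TcpClient client = TcpClient.create(opts -> opts
                .host("localhost")
                .port(8080)
                .loopResources(loops)
                .poolResources(pools));
    }
}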

clientOptionsBuilder.isLoopAvailable()

reactor-netty-0.7.3.RELEASE-sources.jar!/reactor/ipc/netty/options/NettyOptions.java

public final boolean isLoopAvailable() {
    return this.loopResources != null;
}
loopResources is null at first, so TcpResources.get() is called to create it.

TcpResources.get()

reactor-netty-0.7.3.RELEASE-sources.jar!/reactor/ipc/netty/tcp/TcpResources.java

/**
 * Return the global HTTP resources for event loops and pooling
 *
 * @return the global HTTP resources for event loops and pooling
 */
public static TcpResources get() {
    return getOrCreate(tcpResources, null, null, ON_TCP_NEW, "tcp");
}
/**
 * Safely check if existing resource exist and proceed to update/cleanup if new
 * resources references are passed.
 *
 * @param ref the resources atomic reference
 * @param loops the eventual new {@link LoopResources}
 * @param pools the eventual new {@link PoolResources}
 * @param onNew a {@link TcpResources} factory
 * @param name a name for resources
 * @param <T> the reified type of {@link TcpResources}
 *
 * @return an existing or new {@link TcpResources}
 */
protected static <T extends TcpResources> T getOrCreate(AtomicReference<T> ref,
        LoopResources loops,
        PoolResources pools,
        BiFunction<LoopResources, PoolResources, T> onNew,
        String name) {
    T update;
    for (; ; ) {
        T resources = ref.get();
        if (resources == null || loops != null || pools != null) {
            update = create(resources, loops, pools, name, onNew);
            if (ref.compareAndSet(resources, update)) {
                if (resources != null) {
                    if (loops != null) {
                        resources.defaultLoops.dispose();
                    }
                    if (pools != null) {
                        resources.defaultPools.dispose();
                    }
                }
                return update;
            }
            else {
                update._dispose();
            }
        }
        else {
            return resources;
        }
    }
}
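The loop in getOrCreate is a classic atomic publish-or-retry pattern. Below is a stripped-down, hypothetical sketch of the same idiom, independent of reactor-netty:

import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

final class LazySingleton<T> {
    private final AtomicReference<T> ref = new AtomicReference<>();

    T getOrCreate(Supplier<T> factory) {
        for (;;) {
            T existing = ref.get();
            if (existing != null) {
                return existing;             // already published
            }
            T candidate = factory.get();     // speculative creation
            if (ref.compareAndSet(null, candidate)) {
                return candidate;            // we won the race
            }
            // lost the race: loop and return the winner's instance
            // (TcpResources additionally disposes the losing candidate via _dispose())
        }
    }
}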
getOrCreate then proceeds into create, which builds the loops and the pools. The relevant fields and the create method:
static final AtomicReference<TcpResources>                           tcpResources;
static final BiFunction<LoopResources, PoolResources, TcpResources> ON_TCP_NEW;

static {
    ON_TCP_NEW = TcpResources::new;
    tcpResources = new AtomicReference<>();
}

final PoolResources defaultPools;
final LoopResources defaultLoops;

protected TcpResources(LoopResources defaultLoops, PoolResources defaultPools) {
    this.defaultLoops = defaultLoops;
    this.defaultPools = defaultPools;
}

static <T extends TcpResources> T create(T previous,
        LoopResources loops,
        PoolResources pools,
        String name,
        BiFunction<LoopResources, PoolResources, T> onNew) {
    if (previous == null) {
        loops = loops == null ? LoopResources.create("reactor-" + name) : loops;
        pools = pools == null ? PoolResources.elastic(name) : pools;
    }
    else {
        loops = loops == null ? previous.defaultLoops : loops;
        pools = pools == null ? previous.defaultPools : pools;
    }
    return onNew.apply(loops, pools);
}
The onNew factory here constructs the TcpResources, using the constructor TcpResources(LoopResources defaultLoops, PoolResources defaultPools).

LoopResources.create

reactor-netty-0.7.3.RELEASE-sources.jar!/reactor/ipc/netty/resources/LoopResources.java

/**
 * Default worker thread count, fallback to available processor
 */
int DEFAULT_IO_WORKER_COUNT = Integer.parseInt(System.getProperty(
        "reactor.ipc.netty.workerCount",
        "" + Math.max(Runtime.getRuntime().availableProcessors(), 4)));

/**
 * Default selector thread count, fallback to -1 (no selector thread)
 */
int DEFAULT_IO_SELECT_COUNT = Integer.parseInt(System.getProperty(
        "reactor.ipc.netty.selectCount", "" + -1));

/**
 * Create a simple {@link LoopResources} to provide automatically for {@link
 * EventLoopGroup} and {@link Channel} factories
 *
 * @param prefix the event loop thread name prefix
 *
 * @return a new {@link LoopResources} to provide automatically for {@link
 * EventLoopGroup} and {@link Channel} factories
 */
static LoopResources create(String prefix) {
    return new DefaultLoopResources(prefix, DEFAULT_IO_SELECT_COUNT,
            DEFAULT_IO_WORKER_COUNT, true);
}
Two parameters are involved here, the worker thread count and the selector thread count (a tuning sketch follows this list):
  • DEFAULT_IO_WORKER_COUNT: if the system property reactor.ipc.netty.workerCount is set, that value is used; otherwise it falls back to Math.max(Runtime.getRuntime().availableProcessors(), 4)
  • DEFAULT_IO_SELECT_COUNT: if the system property reactor.ipc.netty.selectCount is set, that value is used; otherwise it falls back to -1, meaning no dedicated selector threads
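A minimal tuning sketch, assuming the properties are set before the LoopResources interface is first loaded (the constants above are initialized at class-load time); the values and class name are illustrative:

public class TuneLoopCounts {
    public static void main(String[] args) {
        // Must happen before anything touches LoopResources / TcpResources
        System.setProperty("reactor.ipc.netty.workerCount", "8");
        System.setProperty("reactor.ipc.netty.selectCount", "1");
        // TcpResources created from here on gets 8 worker threads and a
        // dedicated single-threaded selector group.
    }
}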

DefaultLoopResources

reactor-netty-0.7.3.RELEASE-sources.jar!/reactor/ipc/netty/resources/DefaultLoopResources.java

DefaultLoopResources(String prefix, int selectCount, int workerCount, boolean daemon) {
    this.running = new AtomicBoolean(true);
    this.daemon = daemon;
    this.workerCount = workerCount;
    this.prefix = prefix;

    this.serverLoops = new NioEventLoopGroup(workerCount, threadFactory(this, "nio"));

    this.clientLoops = LoopResources.colocate(serverLoops);

    this.cacheNativeClientLoops = new AtomicReference<>();
    this.cacheNativeServerLoops = new AtomicReference<>();

    if (selectCount == -1) {
        this.selectCount = workerCount;
        this.serverSelectLoops = this.serverLoops;
        this.cacheNativeSelectLoops = this.cacheNativeServerLoops;
    }
    else {
        this.selectCount = selectCount;
        this.serverSelectLoops =
                new NioEventLoopGroup(selectCount, threadFactory(this, "select-nio"));
        this.cacheNativeSelectLoops = new AtomicReference<>();
    }
}
Here prefix is reactor-tcp, selectCount is -1, workerCount is 4 (the default computed above on a machine with four or fewer processors), and daemon is true.
A NioEventLoopGroup is created with workerCount threads; since selectCount is -1, no separate serverSelectLoops group is created and the worker group doubles as the select group.

NioEventLoopGroup

netty-transport-4.1.20.Final-sources.jar!/io/netty/channel/nio/NioEventLoopGroup.java

public NioEventLoopGroup(int nThreads, ThreadFactory threadFactory,
        final SelectorProvider selectorProvider, final SelectStrategyFactory selectStrategyFactory) {
    super(nThreads, threadFactory, selectorProvider, selectStrategyFactory,
            RejectedExecutionHandlers.reject());
}
Note that the rejectHandler here is RejectedExecutionHandlers.reject(), which throws a RejectedExecutionException when a task cannot be accepted.

netty-common-4.1.20.Final-sources.jar!/io/netty/util/concurrent/MultithreadEventExecutorGroup.java

/**
 * Create a new instance.
 *
 * @param nThreads          the number of threads that will be used by this instance.
 * @param threadFactory     the ThreadFactory to use, or {@code null} if the default should be used.
 * @param args              arguments which will passed to each {@link #newChild(Executor, Object...)} call
 */
protected MultithreadEventExecutorGroup(int nThreads, ThreadFactory threadFactory, Object... args) {
    this(nThreads, threadFactory == null ? null : new ThreadPerTaskExecutor(threadFactory), args);
}
Constructing a NioEventLoopGroup ends up in this MultithreadEventExecutorGroup constructor.
The threadFactory here is reactor.ipc.netty.resources.DefaultLoopResources$EventLoopSelectorFactory.
The executor is a ThreadPerTaskExecutor wrapping that factory.

netty-common-4.1.20.Final-sources.jar!/io/netty/util/concurrent/ThreadPerTaskExecutor.java

public final class ThreadPerTaskExecutor implements Executor {
    private final ThreadFactory threadFactory;

    public ThreadPerTaskExecutor(ThreadFactory threadFactory) {
        if (threadFactory == null) {
            throw new NullPointerException("threadFactory");
        }
        this.threadFactory = threadFactory;
    }

    @Override
    public void execute(Runnable command) {
        threadFactory.newThread(command).start();
    }
}
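A small self-contained illustration of this executor's behavior (the thread factory choice and class name are illustrative): every execute() call spawns a brand-new thread, with no pooling at this layer. The pooling effect comes later, from each NioEventLoop binding itself to the single thread it requests.

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import io.netty.util.concurrent.ThreadPerTaskExecutor;

public class ThreadPerTaskDemo {
    public static void main(String[] args) {
        Executor executor = new ThreadPerTaskExecutor(Executors.defaultThreadFactory());
        // Two tasks, two distinct freshly created threads
        executor.execute(() -> System.out.println(Thread.currentThread().getName()));
        executor.execute(() -> System.out.println(Thread.currentThread().getName()));
    }
}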

MultithreadEventExecutorGroup

/**
 * Create a new instance.
 *
 * @param nThreads          the number of threads that will be used by this instance.
 * @param executor          the Executor to use, or {@code null} if the default should be used.
 * @param chooserFactory    the {@link EventExecutorChooserFactory} to use.
 * @param args              arguments which will passed to each {@link #newChild(Executor, Object...)} call
 */
protected MultithreadEventExecutorGroup(int nThreads, Executor executor,
                                        EventExecutorChooserFactory chooserFactory, Object... args) {
        if (nThreads <= 0) {
            throw new IllegalArgumentException(String.format("nThreads: %d (expected: > 0)",nThreads));
        }

        if (executor == null) {
            executor = new ThreadPerTaskExecutor(newDefaultThreadFactory());
        }

        children = new EventExecutor[nThreads];

        for (int i = 0; i < nThreads; i ++) {
            boolean success = false;
            try {
                children[i] = newChild(executor,args);
                success = true;
            } catch (Exception e) {
                // TODO: Think about if this is a good exception type
                throw new IllegalStateException("failed to create a child event loop",e);
            } finally {
                if (!success) {
                    for (int j = 0; j < i; j ++) {
                        children[j].shutdownGracefully();
                    }

                    for (int j = 0; j < i; j ++) {
                        EventExecutor e = children[j];
                        try {
                            while (!e.isTerminated()) {
                                e.awaitTermination(Integer.MAX_VALUE,TimeUnit.SECONDS);
                            }
                        } catch (InterruptedException interrupted) {
                            // Let the caller handle the interruption.
                            Thread.currentThread().interrupt();
                            break;
                        }
                    }
                }
            }
        }

        chooser = chooserFactory.newChooser(children);

        final FutureListener<Object> terminationListener = new FutureListener<Object>() {
            @Override
            public void operationComplete(Future<Object> future) throws Exception {
                if (terminatedChildren.incrementAndGet() == children.length) {
                    terminationFuture.setSuccess(null);
                }
            }
        };

        for (EventExecutor e: children) {
            e.terminationFuture().addListener(terminationListener);
        }

        Set<EventExecutor> childrenSet = new LinkedHashSet<EventExecutor>(children.length);
        Collections.addAll(childrenSet,children);
        readonlyChildren = Collections.unmodifiableSet(childrenSet);
    }
Note the for loop here: newChild is called once per thread to populate the children array.

netty-transport-4.1.20.Final-sources.jar!/io/netty/channel/nio/NioEventLoopGroup.java

@Override
protected EventLoop newChild(Executor executor, Object... args) throws Exception {
    return new NioEventLoop(this, executor, (SelectorProvider) args[0],
            ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2]);
}
Each child is a NioEventLoop.

NioEventLoop

netty-transport-4.1.20.Final-sources.jar!/io/netty/channel/nio/NioEventLoop.java

protected static final int DEFAULT_MAX_PENDING_TASKS = Math.max(16,
        SystemPropertyUtil.getInt("io.netty.eventLoop.maxPendingTasks", Integer.MAX_VALUE));

NioEventLoop(NioEventLoopGroup parent, Executor executor, SelectorProvider selectorProvider,
             SelectStrategy strategy, RejectedExecutionHandler rejectedExecutionHandler) {
    super(parent, executor, false, DEFAULT_MAX_PENDING_TASKS, rejectedExecutionHandler);
    if (selectorProvider == null) {
        throw new NullPointerException("selectorProvider");
    }
    if (strategy == null) {
        throw new NullPointerException("selectStrategy");
    }
    provider = selectorProvider;
    final SelectorTuple selectorTuple = openSelector();
    selector = selectorTuple.selector;
    unwrappedSelector = selectorTuple.unwrappedSelector;
    selectStrategy = strategy;
}
Note the DEFAULT_MAX_PENDING_TASKS parameter, which determines the capacity of the task queue.
If the system property io.netty.eventLoop.maxPendingTasks is set, the maximum of that value and 16 is used; if not, it is Integer.MAX_VALUE.
It is not set here, so the default Integer.MAX_VALUE applies.
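A sketch replicating that arithmetic, with the JDK's Integer.getInteger standing in for netty's SystemPropertyUtil.getInt (class name is illustrative):

public class MaxPendingTasksDemo {
    public static void main(String[] args) {
        // Same computation as DEFAULT_MAX_PENDING_TASKS above
        int maxPendingTasks = Math.max(16,
                Integer.getInteger("io.netty.eventLoop.maxPendingTasks", Integer.MAX_VALUE));
        System.out.println(maxPendingTasks);
        // unset       -> Integer.MAX_VALUE
        // set to 1    -> 16 (the floor of 16 kicks in)
        // set to 1024 -> 1024
    }
}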

NioEventLoop extends SingleThreadEventLoop

netty-transport-4.1.20.Final-sources.jar!/io/netty/channel/SingleThreadEventLoop.java

protected SingleThreadEventLoop(EventLoopGroup parent, Executor executor,
                                boolean addTaskWakesUp, int maxPendingTasks,
                                RejectedExecutionHandler rejectedExecutionHandler) {
    super(parent, executor, addTaskWakesUp, maxPendingTasks, rejectedExecutionHandler);
    tailTasks = newTaskQueue(maxPendingTasks);
}
Here parent is the NioEventLoopGroup.
The executor is the ThreadPerTaskExecutor.
The rejectHandler is RejectedExecutionHandlers.reject().

SingleThreadEventLoop extends SingleThreadEventExecutor

/**
     * Create a new instance
     *
     * @param parent            the {@link EventExecutorGroup} which is the parent of this instance and belongs to it
     * @param executor          the {@link Executor} which will be used for executing
     * @param addTaskWakesUp    {@code true} if and only if invocation of {@link #addTask(Runnable)} will wake up the
     *                          executor thread
     * @param maxPendingTasks   the maximum number of pending tasks before new tasks will be rejected.
     * @param rejectedHandler   the {@link RejectedExecutionHandler} to use.
     */
protected SingleThreadEventExecutor(EventExecutorGroup parent, Executor executor,
                                    boolean addTaskWakesUp, int maxPendingTasks,
                                    RejectedExecutionHandler rejectedHandler) {
    super(parent);
    this.addTaskWakesUp = addTaskWakesUp;
    this.maxPendingTasks = Math.max(16, maxPendingTasks);
    this.executor = ObjectUtil.checkNotNull(executor, "executor");
    taskQueue = newTaskQueue(this.maxPendingTasks);
    rejectedExecutionHandler = ObjectUtil.checkNotNull(rejectedHandler, "rejectedHandler");
}

    /**
     * Create a new {@link Queue} which will holds the tasks to execute. This default implementation will return a
     * {@link LinkedBlockingQueue} but if your sub-class of {@link SingleThreadEventExecutor} will not do any blocking
     * calls on the this {@link Queue} it may make sense to {@code @Override} this and return some more performant
     * implementation that does not support blocking operations at all.
     */
    protected Queue<Runnable> newTaskQueue(int maxPendingTasks) {
        return new LinkedBlockingQueue<Runnable>(maxPendingTasks);
    }
Here maxPendingTasks is Integer.MAX_VALUE, so the taskQueue is a LinkedBlockingQueue with capacity Integer.MAX_VALUE.
addTaskWakesUp is false.

PoolResources.elastic(name)

reactor-netty-0.7.3.RELEASE-sources.jar!/reactor/ipc/netty/resources/PoolResources.java

/**
     * Create an uncapped {@link PoolResources} to provide automatically for {@link
     * ChannelPool}.
     * <p>An elastic {@link PoolResources} will never wait before opening a new
     * connection. The reuse window is limited but it cannot starve an undetermined volume
     * of clients using it.
     *
     * @param name the channel pool map name
     *
     * @return a new {@link PoolResources} to provide automatically for {@link
     * ChannelPool}
     */
static PoolResources elastic(String name) {
    return new DefaultPoolResources(name, SimpleChannelPool::new);
}

DefaultPoolResources

reactor-netty-0.7.3.RELEASE-sources.jar!/reactor/ipc/netty/resources/DefaultPoolResources.java

final ConcurrentMap<SocketAddress, Pool> channelPools;
final String                             name;
final PoolFactory                        provider;

DefaultPoolResources(String name, PoolFactory provider) {
    this.name = name;
    this.provider = provider;
    this.channelPools = PlatformDependent.newConcurrentHashMap();
}
This creates the channelPools map, keyed by SocketAddress with Pool values.
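A hedged sketch of what that keying implies: two clients that share one PoolResources and target the same remote address resolve to the same SocketAddress key, and so reuse the same ChannelPool entry. Host, port, and names are illustrative:

import reactor.ipc.netty.resources.PoolResources;
import reactor.ipc.netty.tcp.TcpClient;

public class SharedPoolDemo {
    public static void main(String[] args) {
        PoolResources shared = PoolResources.elastic("demo");
        // Both clients point at example.com:80, so DefaultPoolResources hands
        // them the same pool entry out of channelPools.
        TcpClient a = TcpClient.create(opts -> opts
                .host("example.com").port(80).poolResources(shared));
        TcpClient b = TcpClient.create(opts -> opts
                .host("example.com").port(80).poolResources(shared));
    }
}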

Summary

TcpClient's create process mainly creates the TcpResources, which in turn creates the loopResources and the poolResources.

loopResources

loopResources mainly creates the NioEventLoopGroup and the workerCount NioEventLoops under it (two thread-count parameters are involved, the worker thread count and the selector thread count):

  • DEFAULT_IO_WORKER_COUNT: if the system property reactor.ipc.netty.workerCount is set, that value is used; otherwise it falls back to Math.max(Runtime.getRuntime().availableProcessors(), 4)
  • DEFAULT_IO_SELECT_COUNT: if the system property reactor.ipc.netty.selectCount is set, that value is used; otherwise it falls back to -1, meaning no dedicated selector threads
  • DEFAULT_MAX_PENDING_TASKS: caps each NioEventLoop's taskQueue at Math.max(16, SystemPropertyUtil.getInt("io.netty.eventLoop.maxPendingTasks", Integer.MAX_VALUE)), i.e. Integer.MAX_VALUE by default
  • NioEventLoop extends SingleThreadEventLoop, which extends SingleThreadEventExecutor; the executor it delegates to is a ThreadPerTaskExecutor, the rejectHandler is RejectedExecutionHandlers.reject(), and the default taskQueue is a LinkedBlockingQueue with capacity Integer.MAX_VALUE

poolResources

This mainly creates the channelPools map, of type ConcurrentMap<SocketAddress, Pool>.
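To close, a minimal end-to-end sketch against the 0.7.x API, assuming the create(host, port) overload and newHandler for connecting (the address and class name are illustrative): create() only wires up the resources described above, and the actual connect happens when newHandler(...) is subscribed.

import reactor.core.publisher.Mono;
import reactor.ipc.netty.tcp.TcpClient;

public class EndToEndDemo {
    public static void main(String[] args) {
        TcpClient client = TcpClient.create("example.com", 80);
        client.newHandler((in, out) ->
                        out.sendString(Mono.just("ping")).then())
              .block()        // connects and yields a NettyContext
              .dispose();     // release the connection when done
    }
}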
