Union 2 collections as if they were one entity, then sort, skip and limit in Mongoose

How do I union 2 collections as if they were one entity, and then sort, skip and limit them in Mongoose?

I have two schemas, Schema A and Schema B, as follows:

const A = new Schema({
  paymentId: Number, date: Date, ...data
})

const B = new Schema({
  paidId: Number, ...data
})

I would like to return records from both A and B as if they were one table, where I can get documents from both A and B with .sort(), .skip() and .limit() functioning as intended.

I could do a .find() on both collections, concatenate the results and manually sort/skip/limit (roughly as sketched below), but I find that very inefficient.
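(A minimal sketch of that manual approach, assuming the schemas A and B above are compiled into Mongoose models and that both document shapes carry a comparable date field, as in the example further down:)

    const mongoose = require('mongoose');

    // Hypothetical models compiled from the schemas above.
    const ModelA = mongoose.model('A', A);
    const ModelB = mongoose.model('B', B);

    // Fetch everything from both collections, then sort/skip/limit in memory.
    async function manualUnion(skip, limit) {
      const [docsA, docsB] = await Promise.all([ModelA.find().lean(), ModelB.find().lean()]);
      return [...docsA, ...docsB]
        .sort((x, y) => new Date(x.date) - new Date(y.date))
        .slice(skip, skip + limit);
    }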

Edit: To clarify, it does not matter whether the two collections are related. I just want to query both collections as if they were a single collection.

For example, if I had the following documents:

// Documents in A
{ date: '2020-01-01', A_id: 1 },
{ date: '2020-01-03', A_id: 2 },
{ date: '2020-01-05', A_id: 3 },
// Documents in B
{ date: '2020-01-02', B_id: 1 },
{ date: '2020-01-04', B_id: 2 },
{ date: '2020-01-06', B_id: 3 },

querying with the options .sort('date').skip(0).limit(5) would result in the following:

{ date: '2020-01-01', A_id: 1 },
{ date: '2020-01-02', B_id: 1 },
{ date: '2020-01-03', A_id: 2 },
{ date: '2020-01-04', B_id: 2 },
{ date: '2020-01-05', A_id: 3 },

Solution

// Can I suggest using $merge to merge these two independent collections into another, separate collection,
// and then using aggregation to do the sort(), skip() and limit()?
> db.version();
4.2.6
> db.colA.find();
{ "_id" : ObjectId("5f76d969975ec8826bbcaab5"), "date" : "2020-01-01", "A_id" : 1 }
{ "_id" : ObjectId("5f76d969975ec8826bbcaab6"), "date" : "2020-01-03", "A_id" : 2 }
{ "_id" : ObjectId("5f76d969975ec8826bbcaab7"), "date" : "2020-01-05", "A_id" : 3 }
> db.colB.find();
{ "_id" : ObjectId("5f76d969975ec8826bbcaab8"), "date" : "2020-01-02", "B_id" : 1 }
{ "_id" : ObjectId("5f76d969975ec8826bbcaab9"), "date" : "2020-01-04", "B_id" : 2 }
{ "_id" : ObjectId("5f76d969975ec8826bbcaaba"), "date" : "2020-01-06", "B_id" : 3 }
> db.colA.aggregate([
    {$match: {}}, {$merge: {into: "colAB"}}
]);
> db.colB.aggregate([
    {$match: {}}, {$merge: {into: "colAB"}}
]);
> db.colAB.find();
{ "_id" : ObjectId("5f76d969975ec8826bbcaab5"), "A_id" : 1, "date" : "2020-01-01" }
{ "_id" : ObjectId("5f76d969975ec8826bbcaab6"), "A_id" : 2, "date" : "2020-01-03" }
{ "_id" : ObjectId("5f76d969975ec8826bbcaab7"), "A_id" : 3, "date" : "2020-01-05" }
{ "_id" : ObjectId("5f76d969975ec8826bbcaab8"), "B_id" : 1, "date" : "2020-01-02" }
{ "_id" : ObjectId("5f76d969975ec8826bbcaab9"), "B_id" : 2, "date" : "2020-01-04" }
{ "_id" : ObjectId("5f76d969975ec8826bbcaaba"), "B_id" : 3, "date" : "2020-01-06" }
> db.colAB.aggregate([
... {$project: {_id: 0}},
... {$sort: {date: 1}},
... {$skip: 0},
... {$limit: 5}
... ]);
{ "A_id" : 1, "date" : "2020-01-01" }
{ "B_id" : 1, "date" : "2020-01-02" }
{ "A_id" : 2, "date" : "2020-01-03" }
{ "B_id" : 2, "date" : "2020-01-04" }
{ "A_id" : 3, "date" : "2020-01-05" }
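Driving the same $merge-then-aggregate flow from Mongoose could look roughly like the sketch below. The model names ColA/ColB, the target collection name colAB and the paging function are assumptions chosen to match the shell example above, not code from the original answer.

    const mongoose = require('mongoose');
    const { Schema } = mongoose;

    // Hypothetical models bound to the colA and colB collections from the shell example.
    const ColA = mongoose.model('ColA', new Schema({ A_id: Number, date: String }), 'colA');
    const ColB = mongoose.model('ColB', new Schema({ B_id: Number, date: String }), 'colB');

    async function pagedUnion(skip, limit) {
      // Copy both collections into the combined collection (by default $merge upserts on _id).
      await ColA.aggregate([{ $match: {} }, { $merge: { into: 'colAB' } }]);
      await ColB.aggregate([{ $match: {} }, { $merge: { into: 'colAB' } }]);

      // Sort / skip / limit over the merged collection, as in the shell aggregation above.
      return mongoose.connection.db
        .collection('colAB')
        .aggregate([
          { $project: { _id: 0 } },
          { $sort: { date: 1 } },
          { $skip: skip },
          { $limit: limit },
        ])
        .toArray();
    }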

From https://stackoverflow.com/a/55289023/3793648:

It is possible to do a "SQL UNION"-style union in MongoDB in a single query, using aggregation and $lookup.

Something like this:

    db.getCollection("AnyCollectionThatContainsAtLeastOneDocument").aggregate(
    [
      { $limit: 1 }, // Reduce the result set to a single document.
      { $project: { _id: 1 } }, // Strip all fields except the Id.
      { $project: { _id: 0 } }, // Strip the id. The document is now empty.

      // Lookup all collections to union together.
      { $lookup: { from: 'collectionToUnion1', pipeline: [...], as: 'Collection1' } },
      { $lookup: { from: 'collectionToUnion2', pipeline: [...], as: 'Collection2' } },
      { $lookup: { from: 'collectionToUnion3', pipeline: [...], as: 'Collection3' } },

      // Merge the collections together.
      {
        $project:
        {
          Union: { $concatArrays: ["$Collection1", "$Collection2", "$Collection3"] }
        }
      },

      { $unwind: "$Union" }, // Unwind the union collection into a result set.
      { $replaceRoot: { newRoot: "$Union" } } // Replace the root to cleanup the resulting documents.
    ]);

More details are in the post linked above. Adding $sort, $skip and $limit is simply a matter of appending them to the aggregation pipeline. Many thanks to @sboisse!
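To make that last point concrete, here is a minimal sketch of the linked answer's $lookup-union pipeline applied to the colA/colB collections from the accepted solution, with $sort, $skip and $limit appended at the end. The collection names, the date field and the paging values are taken from the example above; the rest is an illustrative assumption, not code from the original answer.

    db.colA.aggregate([
      { $limit: 1 },              // reduce colA to a single document
      { $project: { _id: 1 } },   // strip all fields except _id
      { $project: { _id: 0 } },   // strip _id; the document is now empty

      // Look up every document from both collections (pipeline-style $lookup needs MongoDB 3.6+).
      { $lookup: { from: "colA", pipeline: [{ $match: {} }], as: "A" } },
      { $lookup: { from: "colB", pipeline: [{ $match: {} }], as: "B" } },

      // Union the two arrays and turn them back into individual documents.
      { $project: { Union: { $concatArrays: ["$A", "$B"] } } },
      { $unwind: "$Union" },
      { $replaceRoot: { newRoot: "$Union" } },

      // The paging the question asked for: .sort('date').skip(0).limit(5)
      { $sort: { date: 1 } },
      { $skip: 0 },
      { $limit: 5 }
    ]);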
