Pivoting long data to wide with nested categories


I'm trying to convert long data to wide format, but I have multiple value columns that need to be nested under each category. My current data looks like this:

YRTR    sub_cou SUBJ    PATH    path_count  pre_drop_count  freq
20173   ACCT 2251   ACCT    1051 -> 2251    1   235 0.40%
20183   ACCT 2251   ACCT    1051 -> 2251    1   217 0.50%
20203   ACCT 2251   ACCT    1051 -> 2251    1   248 0.40%
20213   ACCT 2251   ACCT    1051 -> 2251    1   219 0.50%
20213   ACCT 2251   ACCT    1051 and 2251 -> NA 1   219 0.50%
20173   ACCT 2251   ACCT    1853 -> 2251    2   235 0.90%
20183   ACCT 2251   ACCT    2251 -> 1051    1   217 0.50%
20173   ACCT 2251   ACCT    2251 -> 2251    224 235 95.30%
20183   ACCT 2251   ACCT    2251 -> 2251    210 217 96.80%
20193   ACCT 2251   ACCT    2251 -> 2251    240 258 93%
20203   ACCT 2251   ACCT    2251 -> 2251    223 248 89.90%
20213   ACCT 2251   ACCT    2251 -> 2251    204 219 93.20%
20173   ACCT 2251   ACCT    2251 -> NA  11  235 4.70%
20183   ACCT 2251   ACCT    2251 -> NA  6   217 2.80%
20193   ACCT 2251   ACCT    2251 -> NA  18  258 7.00%
20203   ACCT 2251   ACCT    2251 -> NA  25  248 10.10%
20213   ACCT 2251   ACCT    2251 -> NA  14  219 6.40%
20173   ACCT 2251   ACCT    NA -> 2251  17  235 7.20%
20183   ACCT 2251   ACCT    NA -> 2251  23  217 10.60%
20193   ACCT 2251   ACCT    NA -> 2251  29  258 11%
20203   ACCT 2251   ACCT    NA -> 2251  37  248 14.90%
20213   ACCT 2251   ACCT    NA -> 2251  40  219 18.30%


I'm trying to pivot this wide by YRTR, while also carrying along the values of path_count, pre_drop_count, and freq. Ideally it would look like this:

            20173           20183           20193           20203       
sub_cou     SUBJ    PATH    path_count  pre_drop_count  freq    path_count  pre_drop_count  freq    path_count  pre_drop_count  freq    path_count  pre_drop_count  freq
ACCT 2251   ACCT    1853 -> 2251    2   235 0.90%   NA  NA  NA  NA  NA  NA  NA  NA  NA
ACCT 2251   ACCT    NA -> 2251  17  235 7.20%   23  217 10.60%  29  258 11% 37  248 14.90%
ACCT 2251   ACCT    2251 -> NA  11  235 4.70%   6   217 2.80%   18  258 7.00%   25  248 10.10%
ACCT 2251   ACCT    2251 -> 2251    224 235 95.30%  210 217 96.80%  240 258 93% 223 248 89.90%
ACCT 2251   ACCT    1051 -> 2251    1   235 0.40%   1   217 0.50%   NA  NA  NA  1   248 0.40%
ACCT 2251   ACCT    2251 -> 1051    NA  NA  NA  1   217 0.50%   NA  NA  NA  NA  NA  NA

I've tried dcast, but it only seems to want to put YRTR first.
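For comparison, data.table's dcast() can spread several value columns in one call; the key is to keep all the identifier columns on the left-hand side of the formula and put only YRTR on the right. A minimal sketch, assuming path_agg2 is the data frame shown above:

```r
library(data.table)

# With a vector value.var, dcast() creates one set of YRTR-suffixed
# columns per value column (path_count_20173, pre_drop_count_20173, ...).
path_agg2_wide <- dcast(
  as.data.table(path_agg2),
  sub_cou + SUBJ + PATH ~ YRTR,
  value.var = c("path_count", "pre_drop_count", "freq")
)
```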

Edit: adding dput() output:

dput(path_agg2)
structure(list(YRTR = c(20173L, 20183L, 20203L, 20213L, 20213L, 
20173L, 20183L, 20173L, 20183L, 20193L, 20203L, 20213L, 20173L, 
20183L, 20193L, 20203L, 20213L, 20173L, 20183L, 20193L, 20203L, 
20213L), sub_cou = c("ACCT 2251", "ACCT 2251", "ACCT 2251", 
"ACCT 2251", "ACCT 2251", "ACCT 2251", "ACCT 2251", "ACCT 2251", 
"ACCT 2251", "ACCT 2251", "ACCT 2251", "ACCT 2251", "ACCT 2251", 
"ACCT 2251", "ACCT 2251", "ACCT 2251", "ACCT 2251", "ACCT 2251", 
"ACCT 2251", "ACCT 2251", "ACCT 2251", "ACCT 2251"), SUBJ = c("ACCT", 
"ACCT", "ACCT", "ACCT", "ACCT", "ACCT", "ACCT", "ACCT", "ACCT", 
"ACCT", "ACCT", "ACCT", "ACCT", "ACCT", "ACCT", "ACCT", "ACCT", 
"ACCT", "ACCT", "ACCT", "ACCT", "ACCT"), PATH = c("1051 -> 2251", 
"1051 -> 2251", "1051 -> 2251", "1051 -> 2251", "1051 and 2251 -> NA", 
"1853 -> 2251", "2251 -> 1051", "2251 -> 2251", "2251 -> 2251", 
"2251 -> 2251", "2251 -> 2251", "2251 -> 2251", "2251 -> NA", 
"2251 -> NA", "2251 -> NA", "2251 -> NA", "2251 -> NA", "NA -> 2251", 
"NA -> 2251", "NA -> 2251", "NA -> 2251", "NA -> 2251"), path_count = c(1L, 
1L, 1L, 1L, 1L, 2L, 1L, 224L, 210L, 240L, 223L, 204L, 11L, 6L, 
18L, 25L, 14L, 17L, 23L, 29L, 37L, 40L), pre_drop_count = c(235L, 
217L, 248L, 219L, 219L, 235L, 217L, 235L, 217L, 258L, 248L, 219L, 
235L, 217L, 258L, 248L, 219L, 235L, 217L, 258L, 248L, 219L), 
freq = c("0.4%", "0.5%", "0.4%", "0.5%", "0.5%", "0.9%", "0.5%", 
"95.3%", "96.8%", "93%", "89.9%", "93.2%", "4.7%", "2.8%", "7%", 
"10.1%", "6.4%", "7.2%", "10.6%", "11.2%", "14.9%", "18.3%")), 
row.names = c(NA, -22L), class = "data.frame")

Solution

Does this do what you're after?

> path_agg2_wider <- path_agg2 %>% pivot_wider(
+   names_from = YRTR,
+   values_from = c(path_count, pre_drop_count, freq)
+ )
> path_agg2_wider <- path_agg2_wider[c(1:3, 4, 9, 14, 5, 10, 15, 6, 11, 16, 7, 12, 17, 8, 13, 18)]
> path_agg2_wider
# A tibble: 7 x 18
  sub_cou SUBJ  PATH  path_count_20173 pre_drop_count_~ freq_20173 path_count_20183 pre_drop_count_~ freq_20183 path_count_20193 pre_drop_count_~ freq_20193
  <chr>   <chr> <chr>            <int>            <int> <chr>                 <int>            <int> <chr>                 <int>            <int> <chr>     
1 ACCT 2~ ACCT  1853~                2              235 0.9%                     NA               NA NA                       NA               NA NA        
2 ACCT 2~ ACCT  NA -~               17              235 7.2%                     23              217 10.6%                    29              258 11.2%     
3 ACCT 2~ ACCT  2251~               11              235 4.7%                      6              217 2.8%                     18              258 7%        
4 ACCT 2~ ACCT  2251~              224              235 95.3%                   210              217 96.8%                   240              258 93%       
5 ACCT 2~ ACCT  1051~                1              235 0.4%                      1              217 0.5%                     NA               NA NA        
6 ACCT 2~ ACCT  2251~               NA               NA NA                        1              217 0.5%                     NA               NA NA        
7 ACCT 2~ ACCT  1051~               NA               NA NA                       NA               NA NA                       NA               NA NA        
# ... with 6 more variables: path_count_20203 <int>, pre_drop_count_20203 <int>, freq_20203 <chr>, path_count_20213 <int>, pre_drop_count_20213 <int>,
#   freq_20213 <chr>
> 
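The positional reordering on the second line breaks as soon as a new YRTR value appears. If your tidyr is 1.2.0 or newer, pivot_wider()'s names_vary argument produces the year-grouped column order directly; a sketch under that assumption:

```r
library(dplyr)
library(tidyr)

# names_vary = "slowest" cycles through the YRTR values slowest, so the
# columns come out grouped by year: path_count_20173, pre_drop_count_20173,
# freq_20173, path_count_20183, ... -- no manual index shuffle needed.
path_agg2_wider <- path_agg2 %>%
  pivot_wider(
    names_from  = YRTR,
    values_from = c(path_count, pre_drop_count, freq),
    names_vary  = "slowest"
  )
```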

