GraphFrames: merging edge nodes with similar column values
tl;dr: How do I simplify a graph, removing edge nodes that share the same name value?
I have a graph defined as follows:
import graphframes
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
vertices = spark.createDataFrame([
    ('1', 'foo', '1'), ('2', 'bar', '2'), ('3', 'bar', '3'), ('4', 'bar', '5'),
    ('5', 'baz', '9'), ('6', 'blah', '0'), ('7', 'blah', '2'), ('8', 'blah', '3')
], ['id', 'name', 'value'])
edges = spark.createDataFrame([
    ('1', '2'), ('1', '3'), ('1', '4'), ('1', '5'),
    ('5', '6'), ('5', '7'), ('5', '8')
], ['src', 'dst'])
f = graphframes.GraphFrame(vertices, edges)
Starting from the vertex with ID 1, I want to simplify the graph so that nodes with the same name value are merged into a single node. The resulting graph would look something like this:
Note how we end up with only one foo (ID 1), one bar (ID 2), one baz (ID 5), and one blah (ID 6). The value of each vertex is irrelevant; it is only there to show that each vertex is unique.
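To make the goal concrete, here is a plain-Python sketch (no Spark) of the transformation I'm after, using my reading of the example data above — just dicts and sets, purely to illustrate the intended result:

```python
# Vertices and edges from the example, as plain Python structures.
vertices = [
    ('1', 'foo', '1'), ('2', 'bar', '2'), ('3', 'bar', '3'), ('4', 'bar', '5'),
    ('5', 'baz', '9'), ('6', 'blah', '0'), ('7', 'blah', '2'), ('8', 'blah', '3'),
]
edges = [('1', '2'), ('1', '3'), ('1', '4'), ('1', '5'),
         ('5', '6'), ('5', '7'), ('5', '8')]

# Keep the first vertex seen for each name; remap every other id onto it.
canonical = {}   # name -> representative id
remap = {}       # id -> representative id
for vid, name, value in vertices:
    canonical.setdefault(name, vid)
    remap[vid] = canonical[name]

merged_vertices = sorted(set(canonical.values()))
merged_edges = sorted({(remap[s], remap[d]) for s, d in edges})

print(merged_vertices)   # ['1', '2', '5', '6']
print(merged_edges)      # [('1', '2'), ('1', '5'), ('5', '6')]
```

This is exactly the condensed graph described above: one foo (1), one bar (2), one baz (5), one blah (6).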
I tried to implement a solution, but it is hacky and extremely inefficient, and I'm sure there is a better way (I also don't think it works):
f = graphframes.GraphFrame(vertices, edges)
# Get the out degrees for our nodes. Nodes that do not appear in
# this dataframe have zero out degrees.
outs = f.outDegrees
# Merge this with our nodes.
vertices = f.vertices
vertices = f.vertices.join(outs, outs.id == vertices.id, 'left').select(vertices.id, 'name', 'value', 'outDegree')
vertices.show()
# Create a new graph with our out degree nodes.
f = graphframes.GraphFrame(vertices, edges)
# Find paths to all edge vertices from our vertex ID = 1
# Can we make this one operation instead of two??? What if we have more than two hops?
one_hop = f.find('(a)-[e]->(b)').filter('b.outDegree is null').filter('a.id == "1"')
one_hop.show()
two_hop = f.find('(a)-[e1]->(b); (b)-[e2]->(c)').filter('c.outDegree is null').filter('a.id == "1"')
two_hop.show()
# Super ugly, but union the vertices from the `one_hop` and `two_hop` above, and unique
# on the name.
vertices = one_hop.select('a.*').union(one_hop.select('b.*'))
vertices = vertices.union(two_hop.select('a.*').union(two_hop.select('b.*')).union(two_hop.select('c.*')))
vertices = vertices.dropDuplicates(['name'])
vertices.show()
# Do the same for the edges.
edges = two_hop.select('e1.*').union(two_hop.select('e2.*')).union(one_hop.select('e.*')).distinct()
# We need to ensure that we have the respective nodes from our edges. We do this by
# ensuring the referenced vertex ID is in our `vertices` in both the `src` and the `dst`
# columns - this does NOT seem to work as I'd expect!
edges = edges.join(vertices, vertices.id == edges.src, 'left').select('src', 'dst')
edges = edges.join(vertices, vertices.id == edges.dst, 'left').select('src', 'dst')
edges.show()
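An aside on why the final two joins above don't behave as I expect: a left join keeps every edge row even when no matching vertex exists, so it doesn't filter anything. A plain-Python sketch of the difference between the two join semantics (hypothetical data, not the Spark API):

```python
# One edge points at a vertex ('9') that is not in the kept vertex set.
edges = [('1', '2'), ('1', '9')]
kept_ids = {'1', '2', '5', '6'}

# Left-join semantics: every edge row survives; a missing vertex only
# shows up as a null (None) on the joined column.
left_join = [(s, d, d if d in kept_ids else None) for s, d in edges]

# Inner-join semantics: edges whose dst has no matching vertex are
# dropped, which is the filtering actually wanted here.
inner_join = [(s, d) for s, d in edges if d in kept_ids]

print(left_join)    # [('1', '2', '2'), ('1', '9', None)]
print(inner_join)   # [('1', '2')]
```

So the left joins would need either an inner join or an explicit null filter afterwards to actually drop dangling edges.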
Is there a simpler way to remove nodes (and their corresponding edges) so that edge nodes are unique by their name?
Solution
Why not simply treat the name column as the new id?
import graphframes
vertices = spark.createDataFrame([
    ('1', 'foo', '1'), ('2', 'bar', '2'), ('3', 'bar', '3'), ('4', 'bar', '5'),
    ('5', 'baz', '9'), ('6', 'blah', '0'), ('7', 'blah', '2'), ('8', 'blah', '3')
], ['id', 'name', 'value'])
edges = spark.createDataFrame([
    ('1', '2'), ('1', '3'), ('1', '4'), ('1', '5'),
    ('5', '6'), ('5', '7'), ('5', '8')
], ['src', 'dst'])
# Create a dataframe with only one column: the distinct names become the new ids.
new_vertices = vertices.select(vertices.name.alias('id')).distinct()
# Replace the src ids with the name column.
new_edges = edges.join(vertices, edges.src == vertices.id, 'left')
new_edges = new_edges.select(new_edges.dst, new_edges.name.alias('src'))
# Replace the dst ids with the name column.
new_edges = new_edges.join(vertices, new_edges.dst == vertices.id, 'left')
new_edges = new_edges.select(new_edges.src, new_edges.name.alias('dst'))
# Drop duplicate edges.
new_edges = new_edges.dropDuplicates(['src', 'dst'])
new_edges.show()
new_vertices.show()
f = graphframes.GraphFrame(new_vertices,new_edges)
Output:
+---+----+
|src| dst|
+---+----+
|foo| baz|
|foo| bar|
|baz|blah|
+---+----+
+----+
| id|
+----+
|blah|
| bar|
| foo|
| baz|
+----+
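For reference, the same name-as-id relabeling can be sketched in plain Python (dicts and sets, not the GraphFrames API), assuming the example data above; it reproduces the output shown:

```python
vertices = [
    ('1', 'foo', '1'), ('2', 'bar', '2'), ('3', 'bar', '3'), ('4', 'bar', '5'),
    ('5', 'baz', '9'), ('6', 'blah', '0'), ('7', 'blah', '2'), ('8', 'blah', '3'),
]
edges = [('1', '2'), ('1', '3'), ('1', '4'), ('1', '5'),
         ('5', '6'), ('5', '7'), ('5', '8')]

# Look up each id's name, relabel both edge endpoints, and dedupe.
name_of = {vid: name for vid, name, _ in vertices}
new_vertices = sorted({name for _, name, _ in vertices})
new_edges = sorted({(name_of[s], name_of[d]) for s, d in edges})

print(new_vertices)   # ['bar', 'baz', 'blah', 'foo']
print(new_edges)      # [('baz', 'blah'), ('foo', 'bar'), ('foo', 'baz')]
```

Merging nodes then costs nothing more than a join per endpoint column and a distinct, which is what the Spark version above does.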