How do I texture an ARMeshGeometry from the ARKit camera frame?
This question builds somewhat on this post, in which the idea is to take the ARMeshGeometry from an iOS device with a LiDAR scanner, calculate the texture coordinates, and apply the sampled camera frame as the texture for a given mesh, thereby allowing a user to create a "photorealistic" 3D representation of their environment.

Following that post, I have adapted one of the responses to calculate the texture coordinates, like so;
func buildGeometry(meshAnchor: ARMeshAnchor, arFrame: ARFrame) -> SCNGeometry {
    let vertices = meshAnchor.geometry.vertices
    let faces = meshAnchor.geometry.faces
    let camera = arFrame.camera
    let size = arFrame.camera.imageResolution

    // Use the MTLBuffer that ARKit gives us
    let vertexSource = SCNGeometrySource(buffer: vertices.buffer,
                                         vertexFormat: vertices.format,
                                         semantic: .vertex,
                                         vertexCount: vertices.count,
                                         dataOffset: vertices.offset,
                                         dataStride: vertices.stride)

    // Set the camera matrix
    let modelMatrix = meshAnchor.transform
    var textCords = [CGPoint]()
    for index in 0..<vertices.count {
        let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset + vertices.stride * index)
        let vertex = vertexPointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee
        let vertex4 = SIMD4<Float>(vertex.0, vertex.1, vertex.2, 1)
        let world_vertex4 = simd_mul(modelMatrix, vertex4)
        let world_vector3 = simd_float3(x: world_vertex4.x, y: world_vertex4.y, z: world_vertex4.z)
        let pt = camera.projectPoint(world_vector3,
                                     orientation: .portrait,
                                     viewportSize: CGSize(width: CGFloat(size.height),
                                                          height: CGFloat(size.width)))
        let v = 1.0 - Float(pt.x) / Float(size.height)
        let u = Float(pt.y) / Float(size.width)
        let c = CGPoint(x: v, y: u)
        textCords.append(c)
    }

    // Set up the texture coordinates
    let textureSource = SCNGeometrySource(textureCoordinates: textCords)

    // Set up the normals
    let normalsSource = SCNGeometrySource(meshAnchor.geometry.normals, semantic: .normal)

    // Set up the geometry
    let faceData = Data(bytesNoCopy: faces.buffer.contents(), count: faces.buffer.length, deallocator: .none)
    let geometryElement = SCNGeometryElement(data: faceData,
                                             primitiveType: .triangles,
                                             primitiveCount: faces.count,
                                             bytesPerIndex: faces.bytesPerIndex)
    let nodeGeometry = SCNGeometry(sources: [vertexSource, textureSource, normalsSource],
                                   elements: [geometryElement])

    /* Set up the texture - THIS IS WHERE I AM STUCK
    let texture = textureConverter.makeTextureForMeshModel(frame: arFrame)
    */
    let imageMaterial = SCNMaterial()
    imageMaterial.isDoubleSided = false
    imageMaterial.diffuse.contents = texture!
    nodeGeometry.materials = [imageMaterial]
    return nodeGeometry
}
I am struggling to determine whether those texture coordinates are, in fact, being calculated properly, and subsequently, how I would sample the camera frame to apply the relevant frame image as the texture for that mesh.
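As a sanity check on that per-vertex math, the two steps the loop performs (transform the local vertex by the anchor's transform, then flip the projected point into texture space) can be exercised without ARKit. This is only a sketch in plain Swift, with hand-rolled matrix math and made-up numbers standing in for meshAnchor.transform and camera.imageResolution:

```swift
import Foundation

// A column-major 4x4 matrix times a 4-vector, mirroring
// simd_mul(modelMatrix, vertex4) in the loop (plain arrays, no simd).
typealias Vec4 = [Double]
typealias Mat4 = [Vec4]            // four columns of four rows

func mul(_ m: Mat4, _ v: Vec4) -> Vec4 {
    (0..<4).map { row in (0..<4).map { col in m[col][row] * v[col] }.reduce(0, +) }
}

// A mesh-anchor transform that is pure translation (hypothetical values).
let anchorTransform: Mat4 = [[1, 0, 0, 0],
                             [0, 1, 0, 0],
                             [0, 0, 1, 0],
                             [0.5, 1.0, -2.0, 1]]

// A local vertex at the anchor's origin should land at the anchor's
// world position - this is the point the loop hands to projectPoint.
let worldVertex = mul(anchorTransform, [0, 0, 0, 1])
// worldVertex == [0.5, 1.0, -2.0, 1.0]

// The portrait flip from the loop: projectPoint is asked for a viewport
// of (size.height x size.width), so the centre of that viewport should
// map to the centre of the texture, (u, v) == (0.5, 0.5).
let size = (width: 1920.0, height: 1440.0)   // stand-in imageResolution
let pt = (x: size.height / 2, y: size.width / 2)
let u = pt.y / size.width
let v = 1.0 - pt.x / size.height
```

If a vertex known to sit at the middle of the camera image does not come out near (0.5, 0.5), the orientation/viewport-size combination handed to projectPoint is the first thing I would suspect.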
The linked question indicated that converting the ARFrame's capturedImage property (which is a CVPixelBuffer) to a MTLTexture would be ideal for real-time performance, but it has become apparent to me that the capturedImage is a YCbCr image, whereas I believe I would need an RGB image.

In my textureConverter class, I am attempting to convert the CVPixelBuffer to a MTLTexture, but am unsure how to return an RGB MTLTexture;
func makeTextureForMeshModel(frame: ARFrame) -> MTLTexture? {
    if CVPixelBufferGetPlaneCount(frame.capturedImage) < 2 {
        return nil
    }
    let cameraImageTextureY = createTexture(fromPixelBuffer: frame.capturedImage, pixelFormat: .r8Unorm, planeIndex: 0)
    let cameraImageTextureCbCr = createTexture(fromPixelBuffer: frame.capturedImage, pixelFormat: .rg8Unorm, planeIndex: 1)
    // How do I blend the Y and CbCr textures, or return an RGB texture, to return a single MTLTexture?
    return ...
}

func createTexture(fromPixelBuffer pixelBuffer: CVPixelBuffer, pixelFormat: MTLPixelFormat, planeIndex: Int) -> CVMetalTexture? {
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)
    var texture: CVMetalTexture? = nil
    let status = CVMetalTextureCacheCreateTextureFromImage(nil, textureCache, pixelBuffer, nil, pixelFormat, width, height, planeIndex, &texture)
    if status != kCVReturnSuccess {
        texture = nil
    }
    return texture
}
Lastly, I am not entirely sure whether I actually need an RGB texture rather than a YCbCr texture, but either way I am still unsure how to return the proper image for texturing (my attempts to just return the CVPixelBuffer by manually setting a texture format, without worrying about the YCbCr color space, yield a very odd image).