Cannot copy from a TensorFlowLite tensor (Identity_1) with shape [1, 3087, 2] to a Java object with shape [1, 3087]

I'm trying to run a YoloV4 model, converted to .tflite, on Android. My input shape seems fine, [1, 224, 224, 4], but the app crashes on my output shapes. I'm using code from the Udacity course on tflite.

I get the error above when running the following code:

class TFLiteObjectDetectionAPIModel private constructor() : Classifier {
override val statString: String
    get() = TODO("not implemented")
private var isModelQuantized: Boolean = false
// Config values.
private var inputSize: Int = 0
// Pre-allocated buffers.
private val labels = Vector<String>()
private var intValues: IntArray? = null
// outputLocations: array of shape [Batchsize,NUM_DETECTIONS,4]
// contains the location of detected boxes
private var outputLocations: Array<Array<FloatArray>>? = null
// outputClasses: array of shape [Batchsize,NUM_DETECTIONS]
// contains the classes of detected boxes
private var outputClasses: Array<FloatArray>? = null
// outputScores: array of shape [Batchsize,NUM_DETECTIONS]
// contains the scores of detected boxes
private var outputScores: Array<FloatArray>? = null
// numDetections: array of shape [Batchsize]
// contains the number of detected boxes
private var numDetections: FloatArray? = null

private var imgData: ByteBuffer? = null

private var tfLite: Interpreter? = null

override fun recognizeImage(bitmap: Bitmap): List<Classifier.Recognition> {
    // Log this method so that it can be analyzed with systrace.
    Trace.beginSection("recognizeImage")

    Trace.beginSection("preprocessBitmap")
    // Preprocess the image data from 0-255 int to normalized float based
    // on the provided parameters.
    bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)

    imgData!!.rewind()
    for (i in 0 until inputSize) {
        for (j in 0 until inputSize) {
            val pixelValue = intValues!![i * inputSize + j]
            if (isModelQuantized) {
                // Quantized model
                imgData!!.put((pixelValue shr 16 and 0xFF).toByte())
                imgData!!.put((pixelValue shr 8 and 0xFF).toByte())
                imgData!!.put((pixelValue and 0xFF).toByte())
            } else { // Float model
                imgData!!.putFloat(((pixelValue shr 16 and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
                imgData!!.putFloat(((pixelValue shr 8 and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
                imgData!!.putFloat(((pixelValue and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
            }
        }
    }
    Trace.endSection() // preprocessBitmap

    // Copy the input data into TensorFlow.
    Trace.beginSection("feed")
    outputLocations = Array(1) { Array(NUM_DETECTIONS) { FloatArray(4) } }
    outputClasses = Array(1) { FloatArray(NUM_DETECTIONS) }
    outputScores = Array(1) { FloatArray(NUM_DETECTIONS) }
    numDetections = FloatArray(1)

    val inputArray = arrayOf<Any>(imgData!!)
    val outputMap = ArrayMap<Int, Any>()
    outputMap[0] = outputLocations!!
    outputMap[1] = outputClasses!!
    outputMap[2] = outputScores!!
    outputMap[3] = numDetections!!
    Trace.endSection()

    // Run the inference call.
    Trace.beginSection("run")
    tfLite!!.runForMultipleInputsOutputs(inputArray, outputMap)
    Trace.endSection()

    // Show the best detections.
    // after scaling them back to the input size.
    val recognitions = ArrayList<Classifier.Recognition>(NUM_DETECTIONS)
    for (i in 0 until NUM_DETECTIONS) {
        val detection = RectF(
                outputLocations!![0][i][1] * inputSize,
                outputLocations!![0][i][0] * inputSize,
                outputLocations!![0][i][3] * inputSize,
                outputLocations!![0][i][2] * inputSize)
        // SSD Mobilenet V1 Model assumes class 0 is the background class in the
        // label file, so class labels run from 1 to number_of_classes + 1,
        // while outputClasses correspond to class indices from 0 to number_of_classes.
        val labelOffset = 1
        recognitions.add(
                Classifier.Recognition(
                        "" + i,
                        labels[outputClasses!![0][i].toInt() + labelOffset],
                        outputScores!![0][i],
                        detection))
    }
    Trace.endSection() // "recognizeImage"
    return recognitions
}

override fun enableStatLogging(debug: Boolean) {
    //Not implemented
}

override fun close() {
    //Not needed.
}

override fun setNumThreads(numThreads: Int) {
    if (tfLite != null) tfLite!!.setNumThreads(numThreads)
}

override fun setUseNNAPI(isChecked: Boolean) {
    if (tfLite != null) tfLite!!.setUseNNAPI(isChecked)
}

companion object {

    // Only return this many results.
    private const val NUM_DETECTIONS = 3087
    // Float model
    private const val IMAGE_MEAN = 128.0f
    private const val IMAGE_STD = 128.0f

    /** Memory-map the model file in Assets.  */
    @Throws(IOException::class)
    private fun loadModelFile(assets: AssetManager, modelFilename: String): MappedByteBuffer {
        val fileDescriptor = assets.openFd(modelFilename)
        val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
        val fileChannel = inputStream.channel
        val startOffset = fileDescriptor.startOffset
        val declaredLength = fileDescriptor.declaredLength
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
    }

    /**
     * Initializes a native TensorFlow session for classifying images.
     *
     * @param assetManager The asset manager to be used to load assets.
     * @param modelFilename The filepath of the model GraphDef protocol buffer.
     * @param labelFilename The filepath of label file for classes.
     * @param inputSize The size of image input
     * @param isQuantized Boolean representing model is quantized or not
     */
    @Throws(IOException::class)
    fun create(
            assetManager: AssetManager,
            modelFilename: String,
            labelFilename: String,
            inputSize: Int,
            isQuantized: Boolean): Classifier {
        val d = TFLiteObjectDetectionAPIModel()

        val actualFilename = labelFilename.split("file:///android_asset/".toRegex())
                .dropLastWhile { it.isEmpty() }.toTypedArray()[1]
        val labelsInput: InputStream = assetManager.open(actualFilename)
        // Read one label per line into the labels vector.
        BufferedReader(InputStreamReader(labelsInput)).useLines { lines ->
            lines.forEach { d.labels.add(it) }
        }

        d.inputSize = inputSize

        try {
            val options = Interpreter.Options()
            options.setNumThreads(4)
            d.tfLite = Interpreter(loadModelFile(assetManager, modelFilename), options)
        } catch (e: Exception) {
            throw RuntimeException(e)
        }

        d.isModelQuantized = isQuantized
        // Pre-allocate buffers.
        val numBytesPerChannel: Int = if (isQuantized) {
            1 // Quantized
        } else {
            4 // Floating point
        }
        d.imgData = ByteBuffer.allocateDirect(1 * d.inputSize * d.inputSize * 3 * numBytesPerChannel)
        d.imgData!!.order(ByteOrder.nativeOrder())
        d.intValues = IntArray(d.inputSize * d.inputSize)

        d.outputLocations = Array(1) { Array(NUM_DETECTIONS) { FloatArray(2) } }
        d.outputClasses = Array(1) { FloatArray(NUM_DETECTIONS) }
        d.outputScores = Array(1) { FloatArray(NUM_DETECTIONS) }
        d.numDetections = FloatArray(1)
        return d
    }
}

When I change outputLocations to

outputLocations = Array(1) { Array(NUM_DETECTIONS) { FloatArray(2) } }

I get the following error instead: Cannot copy from a TensorFlowLite tensor (Identity) with shape [1, 3087, 4] to a Java object with shape [1, 3087, 2].
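If I understand these errors correctly, TFLite requires the Java-side output buffer to match the tensor shape dimension for dimension. A minimal sketch of the mismatch (Python, with a hypothetical `shape_of` helper, not from the Udacity code):

```python
def shape_of(x):
    """Recursively compute the shape of a rectangular nested list."""
    shape = []
    while isinstance(x, list):
        shape.append(len(x))
        if not x:
            break
        x = x[0]
    return shape

# Buffer shaped like my Java object: [1, 3087]
java_buffer = [[0.0] * 3087]
# Shape TFLite reports for the Identity_1 tensor: [1, 3087, 2]
tensor_shape = [1, 3087, 2]

print(shape_of(java_buffer))                  # [1, 3087]
print(shape_of(java_buffer) == tensor_shape)  # False -> TFLite raises the copy error
```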

What are Identity and Identity_1? I have looked at my model in Netron and can see both, but I'm not sure how to interpret the model.

Can anyone help? Is there something else I can change, or is my model just not suitable for a mobile platform?
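My guess (an assumption from the Netron graph, not something I've verified) is that Identity holds the boxes, shape [1, 3087, 4], and Identity_1 holds per-class probabilities, shape [1, 3087, 2] for my two classes. If that's right, the [1, 3087] class array the SSD-style code expects would come from an argmax over the last axis, sketched here in Python:

```python
def argmax_last(probs):
    """Reduce per-detection class probabilities [N, C] to class indices [N]."""
    return [max(range(len(row)), key=row.__getitem__) for row in probs]

# Hypothetical probabilities for 3 of the 3087 detections, 2 classes each.
class_probs = [[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]]
print(argmax_last(class_probs))  # [0, 1, 1]
```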
