tf-coreml: TensorFlow to CoreML Converter

Project name: tf-coreml

License: Apache

Operating system: Cross-platform

Development language: Python

tf-coreml Introduction

tfcoreml

TensorFlow (TF) to CoreML Converter

Dependencies

  • tensorflow >= 1.5.0

  • coremltools >= 0.8

  • numpy >= 1.6.2

  • protobuf >= 3.1.0

  • six >= 1.10.0

Installation

Install From Source

To get the latest version of the converter, clone this repo and install from
source. That is,

git clone https://github.com/tf-coreml/tf-coreml.git
cd tf-coreml

To install as a package with pip, either run (at the root directory):

pip install -e .

or run:

python setup.py bdist_wheel

This will generate a pip installable wheel inside the dist directory.

Install From PyPI

To install the PyPI package:

pip install -U tfcoreml

Usage

See the Jupyter notebooks in the examples/ directory for examples of how to use
the converter.

The following arguments are required by the CoreML converter:

  • path to the frozen .pb graph file to be converted

  • path where the .mlmodel should be written

  • a list of output tensor names present in the TF graph

  • a dictionary of input names and their shapes (as lists of integers). This is only required if the input tensors' shapes are not fully defined in the frozen .pb file (e.g. they contain None or ?)

Note that the frozen .pb file can be obtained from the checkpoint and graph
def files by using the tensorflow.python.tools.freeze_graph utility. For
details of freezing TF graphs, please refer to the TensorFlow documentation
and the notebooks in directory examples/ in this repo. There are scripts in
the utils/ directory for visualizing and writing out a text summary of a
given frozen TF graph. This could be useful in determining the input/output
names and shapes. Another useful tool for visualizing frozen TF graphs is
Netron.
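
As a rough illustration of the freezing step (not part of this repo), the
following TF 1.x snippet produces a frozen .pb using
tf.graph_util.convert_variables_to_constants, which is equivalent in spirit to
the freeze_graph utility mentioned above; the checkpoint paths and the output
node name 'softmax' are placeholders.

import tensorflow as tf

with tf.Session() as sess:
    # Load the graph structure and restore the trained weights.
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')

    # Replace variables with constants so the graph is self-contained.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['softmax'])

    # Write the frozen graph to disk for the converter to consume.
    with tf.gfile.GFile('my_model.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())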

There are additional arguments that the converter can take. For details, refer
to the full function definition here.

For example:

When input shapes are fully determined in the frozen .pb file:

import tfcoreml as tf_converter
tf_converter.convert(tf_model_path = 'my_model.pb',
                     mlmodel_path = 'my_model.mlmodel',
                     output_feature_names = ['softmax:0'])

When input shapes are not fully specified in the frozen .pb file:

import tfcoreml as tf_converter
tf_converter.convert(tf_model_path = 'my_model.pb',
                     mlmodel_path = 'my_model.mlmodel',
                     output_feature_names = ['softmax:0'],
                     input_name_shape_dict = {'input:0' : [1, 227, 227, 3]})
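
Once the .mlmodel has been written, coremltools (already a dependency) can be
used to double-check the generated interface. The following is a minimal
sketch outside of tfcoreml itself; feature names and shapes should be read
from the printed description rather than assumed.

import coremltools

# Load the converted model and print its input/output description.
mlmodel = coremltools.models.MLModel('my_model.mlmodel')
spec = mlmodel.get_spec()
print(spec.description)

# On macOS, mlmodel.predict({...}) can then be used to compare outputs
# against the original TensorFlow graph.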

The following topics are discussed in the Jupyter notebooks under the examples/
folder:

inception_v1_preprocessing_steps.ipynb : How to generate a classifier
model with image input types and the importance of properly setting the
preprocessing parameters (a brief conversion sketch covering these
arguments appears after this list).

inception_v3.ipynb : How to strip the “DecodeJpeg” op from the TF graph
to prepare it for CoreML conversion.

linear_mnist_example.ipynb : How to get a frozen graph from the
checkpoint and graph description files generated by training in TF.

ssd_example.ipynb : How to extract a portion of the TF graph that can be
converted, from the overall graph that may have unsupported ops.

style_transfer_example.ipynb : How to edit a CoreML model to get an image
output type (by default the outputs are MultiArrays).

custom_layer_examples.ipynb : A few examples to demonstrate the process
of adding custom CoreML layers for unsupported TF ops.
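
For the image-input and preprocessing topic covered in
inception_v1_preprocessing_steps.ipynb, a conversion call generally looks like
the sketch below. The arguments image_input_names, red_bias, green_bias,
blue_bias and image_scale are optional converter arguments used in that
notebook; the tensor names, shape and bias/scale values here are placeholders
(they correspond to the common x/127.5 - 1 normalization), not the notebook's
exact settings.

import tfcoreml as tf_converter

# Hedged sketch: image input with preprocessing folded into the CoreML model.
# Tensor names, shapes and normalization values are placeholders.
tf_converter.convert(tf_model_path = 'inception_v1_frozen.pb',
                     mlmodel_path = 'inception_v1.mlmodel',
                     output_feature_names = ['softmax:0'],
                     input_name_shape_dict = {'input:0' : [1, 224, 224, 3]},
                     image_input_names = ['input:0'],
                     red_bias = -1.0,
                     green_bias = -1.0,
                     blue_bias = -1.0,
                     image_scale = 2.0/255.0)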

Supported Ops

List of TensorFlow ops that are supported currently (see
tfcoreml/_ops_to_layers.py):

  • Abs

  • Add

  • ArgMax

  • AvgPool

  • BatchNormWithGlobalNormalization

  • BatchToSpaceND*

  • BiasAdd

  • ConcatV2, Concat

  • Const

  • Conv2D

  • Conv2DBackpropInput

  • CropAndResize*

  • DepthToSpace

  • DepthwiseConv2dNative

  • Elu

  • Exp

  • ExtractImagePatches

  • FusedBatchNorm

  • Identity

  • Log

  • LRN

  • MatMul

  • Max*

  • Maximum

  • MaxPool

  • Mean*

  • Min*

  • Minimum

  • MirrorPad

  • Mul

  • Neg

  • OneHot

  • Pad

  • Placeholder

  • Pow*

  • Prod*

  • RealDiv

  • Reciprocal

  • Relu

  • Relu6

  • Reshape*

  • ResizeNearestNeighbor

  • ResizeBilinear

  • Rsqrt

  • Sigmoid

  • Slice*

  • Softmax

  • SpaceToBatchND*

  • SpaceToDepth

  • Split*

  • Sqrt

  • Square

  • SquaredDifference

  • StridedSlice*

  • Sub

  • Sum*

  • Tanh

  • Transpose*

Note that certain parameterizations of these ops may not be fully supported.
For ops marked with an asterisk, only very specific usage patterns are
supported. In addition, there are several other ops (not listed above) that
are skipped by the converter as they generally have no effect during
inference. Refer to the files tfcoreml/_ops_to_layers.py and
tfcoreml/_layers.py for full details. For unsupported ops or configurations,
the custom layer feature of CoreML can be used. For details, refer to the
examples/custom_layer_examples.ipynb notebook; a minimal invocation is
sketched below.
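
As a rough sketch of that flow: passing add_custom_layers=True asks the
converter to insert a CoreML custom-layer placeholder for each unsupported op,
whose compute code must then be supplied on the app side; the notebook also
demonstrates a custom_conversion_functions argument for overriding how
specific ops are translated. The argument names follow the notebook, but
treat this snippet as illustrative rather than the definitive API.

import tfcoreml as tf_converter

# Unsupported ops become custom-layer placeholders in the .mlmodel instead
# of causing the conversion to fail. Paths and names are placeholders.
tf_converter.convert(tf_model_path = 'my_model.pb',
                     mlmodel_path = 'my_model.mlmodel',
                     output_feature_names = ['softmax:0'],
                     add_custom_layers = True)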

Scripts for converting several publicly available pretrained TensorFlow frozen
models can be found in tests/test_pretrained_models.py; other models with
similar structures and supported ops can also be converted. Converting some of
these models requires extra steps to extract convertible subgraphs from the TF
frozen graphs (see examples/ for details), and there are known issues running
the image stylization network on GPU (see Issue #26).

Limitations

The tfcoreml converter has the following constraints (a quick way to check a
frozen graph against them is sketched after this list):

  • TF graph must be cycle free (cycles are generally created due to control flow ops like if, while, map, etc.)

  • Must have NHWC ordering (Batch size, Height, Width, Channels) for image feature map tensors

  • Must have tensors with rank less than or equal to 4 (len(tensor.shape) <= 4)

  • The converter produces a CoreML model with float values. A quantized TF graph (such as the style transfer network mentioned above) gets converted to a float CoreML model
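
One way to check a frozen graph against these constraints and the supported-op
list above is to dump its op types before attempting conversion. The snippet
below is a generic TF 1.x sketch (independent of the scripts in utils/); the
.pb path is a placeholder.

from collections import Counter
import tensorflow as tf

# Parse the frozen graph and count the op types it contains. Control flow
# ops such as Switch, Merge, Enter, Exit or NextIteration indicate cycles
# that the converter cannot handle.
graph_def = tf.GraphDef()
with open('my_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

op_counts = Counter(node.op for node in graph_def.node)
for op_type, count in sorted(op_counts.items()):
    print('%-30s %d' % (op_type, count))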

Running Unit Tests

In order to run unit tests, you need pytest.

pip install pytest

To add a new unit test, add it to the tests/ folder. Make sure you name the
file with 'test' as the prefix. To run all unit tests, navigate to the
tests/ folder and run

pytest

Directories

  • “tfcoreml”: the tfcoreml package

  • “examples”: examples to use this converter

  • “tests”: unit tests

  • “utils”: general scripts for graph inspection

License

Apache License 2.0

tf-coreml Official Website

https://github.com/tf-coreml/tf-coreml
