How do I export an onsets_frames_transcription checkpoint to a TF Serving PB model?
I want to serve onsets_frames_transcription, but the preprocessing of the audio Example protos is tied to data.provide_batch, which returns a Dataset object.
def provide_batch(examples,
                  preprocess_examples,
                  params,
                  is_training,
                  shuffle_examples,
                  skip_n_initial_records):
  """Returns batches of tensors read from TFRecord files.

  Args:
    examples: A string path to a TFRecord file of examples, a python list of
      serialized examples, or a Tensor placeholder for serialized examples.
    preprocess_examples: Whether to preprocess examples. If False, assume they
      have already been preprocessed.
    params: HParams object specifying hyperparameters. Called 'params' here
      because that is the interface that TPUEstimator expects.
    is_training: Whether this is a training run.
    shuffle_examples: Whether examples should be shuffled.
    skip_n_initial_records: Skip this many records at first.

  Returns:
    Batched tensors in a TranscriptionData NamedTuple.
  """
  hparams = params
  input_dataset = read_examples(
      examples, skip_n_initial_records, hparams)
  if preprocess_examples:
    input_map_fn = functools.partial(
        preprocess_example, hparams=hparams, is_training=is_training)
  else:
    input_map_fn = parse_preprocessed_example
  input_tensors = input_dataset.map(input_map_fn)
  model_input = input_tensors.map(
      functools.partial(
          input_tensors_to_model_input, is_training=is_training))
  model_input = splice_examples(model_input, hparams, is_training)
  dataset = create_batch(model_input, is_training=is_training)
  return dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
How can I use tf.estimator.export to export the model checkpoint to a PB model with all of this preprocessing included, and how do I create the serving_input_receiver? Is there any way to run this preprocessing on tf.Example protos at serving time?
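The usual pattern is to put the preprocessing inside a serving_input_receiver_fn: take a placeholder of serialized tf.Example protos, build the same preprocessing ops on top of it, and hand both to tf.estimator.export.ServingInputReceiver so the preprocessing graph is baked into the exported SavedModel. The docstring above notes that provide_batch itself accepts "a Tensor placeholder for serialized examples", so in principle it can be called directly on the placeholder. Below is a minimal self-contained sketch of the pattern; the feature spec (a single 'audio' string feature) is an assumption standing in for the real provide_batch preprocessing, and the export call at the end assumes an already-built estimator:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def serving_input_receiver_fn():
  # Placeholder that TF Serving feeds with a batch of serialized tf.Example.
  serialized = tf.placeholder(tf.string, shape=[None], name='input_examples')

  # Stand-in preprocessing: parse the protos here, the same way
  # data.provide_batch would (or call provide_batch on `serialized`
  # directly, since it accepts a Tensor placeholder). Whatever ops are
  # built here become part of the exported SavedModel graph.
  feature_spec = {'audio': tf.FixedLenFeature([], tf.string)}  # assumed spec
  features = tf.parse_example(serialized, feature_spec)

  return tf.estimator.export.ServingInputReceiver(
      features=features, receiver_tensors={'examples': serialized})

# With a built tf.estimator.Estimator, the export step is then (hypothetical
# `estimator` and output directory):
#   estimator.export_saved_model('export_dir', serving_input_receiver_fn)
```

At serving time the model then accepts serialized tf.Example protos under the 'examples' signature key and runs the embedded preprocessing before the model body.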