【tensorflow2.0】Three Ways to Train a Model

There are three main ways to train a model: the built-in fit method, the built-in train_on_batch method, and a custom training loop.

Note: the fit_generator method is not recommended in tf.keras; its functionality is already covered by fit.
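For example, fit can consume a Python generator directly. The following toy sketch is an illustration added here, with all names (batch_gen, toy) hypothetical and not part of the original text:

# Toy sketch (hypothetical names): fit consuming a Python generator directly.
import numpy as np
import tensorflow as tf

def batch_gen(batch_size=32):
    while True:   # endless generator, so steps_per_epoch is required below
        yield (np.random.rand(batch_size, 10),
               np.random.randint(0, 2, (batch_size, 1)))

toy = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
toy.compile(optimizer="adam", loss="binary_crossentropy")
toy.fit(batch_gen(), steps_per_epoch=5, epochs=1)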

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import *

# Print a timestamped separator line
@tf.function
def printbar():
    ts = tf.timestamp()
    today_ts = ts%(24*60*60)

    hour = tf.cast(today_ts//3600+8,tf.int32)%tf.constant(24)
    minite = tf.cast((today_ts%3600)//60,tf.int32)
    second = tf.cast(tf.floor(today_ts%60),tf.int32)

    def timeformat(m):
        if tf.strings.length(tf.strings.format("{}",m))==1:
            return(tf.strings.format("0{}",m))
        else:
            return(tf.strings.format("{}",m))

    timestring = tf.strings.join([timeformat(hour),timeformat(minite),
                timeformat(second)],separator = ":")
    tf.print("=========="*8,end = "")
    tf.print(timestring)
 
MAX_LEN = 300
BATCH_SIZE = 32
(x_train,y_train),(x_test,y_test) = datasets.reuters.load_data()
x_train = preprocessing.sequence.pad_sequences(x_train,maxlen=MAX_LEN)
x_test = preprocessing.sequence.pad_sequences(x_test,maxlen=MAX_LEN)
 
MAX_WORDS = x_train.max()+1
CAT_NUM = y_train.max()+1
 
ds_train = tf.data.Dataset.from_tensor_slices((x_train,y_train)) \
          .shuffle(buffer_size = 1000).batch(BATCH_SIZE) \
          .prefetch(tf.data.experimental.AUTOTUNE).cache()
 
ds_test = tf.data.Dataset.from_tensor_slices((x_test,y_test)) \
          .shuffle(buffer_size = 1000).batch(BATCH_SIZE) \
          .prefetch(tf.data.experimental.AUTOTUNE).cache()
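Before training, it helps to sanity-check the pipeline. This one-batch peek is an addition, not in the original:

# Inspect one batch to confirm shapes: x is (32, 300), y is (32,).
for x, y in ds_train.take(1):
    print(x.shape, y.shape)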

1. The built-in fit method

This method is very powerful. It supports training on numpy arrays, tf.data.Dataset objects, and Python generators.

It can also implement complex control logic over the training process through callbacks.

tf.keras.backend.clear_session()

def create_model():
    model = models.Sequential()
    model.add(layers.Embedding(MAX_WORDS,7,input_length=MAX_LEN))
    model.add(layers.Conv1D(filters = 64,kernel_size = 5,activation = "relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Conv1D(filters = 32,kernel_size = 3,activation = "relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(CAT_NUM,activation = "softmax"))
    return(model)

def compile_model(model):
    model.compile(optimizer=optimizers.Nadam(),
                  loss=losses.SparseCategoricalCrossentropy(),
                  metrics=[metrics.SparseCategoricalAccuracy(),
                           metrics.SparseTopKCategoricalAccuracy(5)])
    return(model)

model = create_model()
model.summary()
model = compile_model(model)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 300, 7)            216874    
_________________________________________________________________
conv1d (Conv1D)              (None, 296, 64)           2304      
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 148, 64)           0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 146, 32)           6176      
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 73, 32)            0         
_________________________________________________________________
flatten (Flatten)            (None, 2336)              0         
_________________________________________________________________
dense (Dense)                (None, 46)                107502    
=================================================================
Total params: 332,856
Trainable params: 332,856
Non-trainable params: 0
_________________________________________________________________
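As a quick sanity check (an addition, not in the original), the parameter counts in the table decompose as follows; the vocabulary size 30982 is inferred from the embedding row (216874 / 7), since MAX_WORDS depends on the loaded data:

# Back-of-the-envelope check of the summary above (30982 inferred from 216874/7).
embedding = 30982 * 7            # 216874 = vocab_size * embedding_dim
conv1d    = 5 * 7 * 64 + 64      # 2304   = kernel * in_channels * filters + bias
conv1d_1  = 3 * 64 * 32 + 32     # 6176
dense     = 2336 * 46 + 46       # 107502 = flattened_size * CAT_NUM + bias
print(embedding + conv1d + conv1d_1 + dense)   # 332856, the total params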
history = model.fit(ds_train,validation_data = ds_test,epochs = 10)
Epoch 1/10
281/281 [==============================] - 8s 28ms/step - loss: 1.9854 - sparse_categorical_accuracy: 0.4876 - sparse_top_k_categorical_accuracy: 0.7488 - val_loss: 1.6438 - val_sparse_categorical_accuracy: 0.5841 - val_sparse_top_k_categorical_accuracy: 0.7636
Epoch 2/10
281/281 [==============================] - 8s 28ms/step - loss: 1.4446 - sparse_categorical_accuracy: 0.6294 - sparse_top_k_categorical_accuracy: 0.8037 - val_loss: 1.5316 - val_sparse_categorical_accuracy: 0.6126 - val_sparse_top_k_categorical_accuracy: 0.7925
Epoch 3/10
281/281 [==============================] - 8s 28ms/step - loss: 1.1883 - sparse_categorical_accuracy: 0.6906 - sparse_top_k_categorical_accuracy: 0.8549 - val_loss: 1.6185 - val_sparse_categorical_accuracy: 0.6278 - val_sparse_top_k_categorical_accuracy: 0.8019
Epoch 4/10
281/281 [==============================] - 8s 28ms/step - loss: 0.9406 - sparse_categorical_accuracy: 0.7546 - sparse_top_k_categorical_accuracy: 0.9057 - val_loss: 1.7211 - val_sparse_categorical_accuracy: 0.6153 - val_sparse_top_k_categorical_accuracy: 0.8041
Epoch 5/10
281/281 [==============================] - 8s 29ms/step - loss: 0.7207 - sparse_categorical_accuracy: 0.8108 - sparse_top_k_categorical_accuracy: 0.9404 - val_loss: 1.9749 - val_sparse_categorical_accuracy: 0.6233 - val_sparse_top_k_categorical_accuracy: 0.7996
Epoch 6/10
281/281 [==============================] - 8s 28ms/step - loss: 0.5558 - sparse_categorical_accuracy: 0.8540 - sparse_top_k_categorical_accuracy: 0.9643 - val_loss: 2.2560 - val_sparse_categorical_accuracy: 0.6269 - val_sparse_top_k_categorical_accuracy: 0.7947
Epoch 7/10
281/281 [==============================] - 8s 28ms/step - loss: 0.4438 - sparse_categorical_accuracy: 0.8916 - sparse_top_k_categorical_accuracy: 0.9781 - val_loss: 2.4731 - val_sparse_categorical_accuracy: 0.6238 - val_sparse_top_k_categorical_accuracy: 0.7965
Epoch 8/10
281/281 [==============================] - 8s 29ms/step - loss: 0.3710 - sparse_categorical_accuracy: 0.9086 - sparse_top_k_categorical_accuracy: 0.9837 - val_loss: 2.6960 - val_sparse_categorical_accuracy: 0.6175 - val_sparse_top_k_categorical_accuracy: 0.7939
Epoch 9/10
281/281 [==============================] - 8s 28ms/step - loss: 0.3201 - sparse_categorical_accuracy: 0.9203 - sparse_top_k_categorical_accuracy: 0.9894 - val_loss: 3.1160 - val_sparse_categorical_accuracy: 0.6193 - val_sparse_top_k_categorical_accuracy: 0.7898
Epoch 10/10
281/281 [==============================] - 8s 28ms/step - loss: 0.2827 - sparse_categorical_accuracy: 0.9262 - sparse_top_k_categorical_accuracy: 0.9922 - val_loss: 2.9516 - val_sparse_categorical_accuracy: 0.6264 - val_sparse_top_k_categorical_accuracy: 0.7974
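To make the callback hook concrete, here is a minimal sketch that reruns the same fit call with two standard tf.keras callbacks. The callback choices and the checkpoint file name are illustrative assumptions, not part of the original (and note that calling fit again continues training the already-fitted model):

# Illustrative sketch: fit with standard callbacks (the file path is assumed).
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_loss", save_best_only=True)

history = model.fit(ds_train, validation_data=ds_test, epochs=10,
                    callbacks=[early_stop, checkpoint])

EarlyStopping would halt training once val_loss stops improving and restore the best weights, which addresses the overfitting visible in the log above.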

2. The built-in train_on_batch method

This built-in method is more flexible than fit: it allows finer-grained, batch-level control of the training process without going through callbacks.

tf.keras.backend.clear_session()

def create_model():
    model = models.Sequential()
    model.add(layers.Embedding(MAX_WORDS,7,input_length=MAX_LEN))
    model.add(layers.Conv1D(filters = 64,kernel_size = 5,activation = "relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Conv1D(filters = 32,kernel_size = 3,activation = "relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(CAT_NUM,activation = "softmax"))
    return(model)

def compile_model(model):
    model.compile(optimizer=optimizers.Nadam(),
                  loss=losses.SparseCategoricalCrossentropy(),
                  metrics=[metrics.SparseCategoricalAccuracy(),
                           metrics.SparseTopKCategoricalAccuracy(5)])
    return(model)

model = create_model()
model.summary()
model = compile_model(model)

def train_model(model,ds_train,ds_valid,epoches):
    for epoch in tf.range(1,epoches+1):
        model.reset_metrics()

        # Lower the learning rate in the later stage of training
        if epoch == 5:
            model.optimizer.lr.assign(model.optimizer.lr/2.0)
            tf.print("Lowering optimizer Learning Rate...\n\n")

        for x, y in ds_train:
            train_result = model.train_on_batch(x, y)

        for x, y in ds_valid:
            valid_result = model.test_on_batch(x, y, reset_metrics=False)

        if epoch%1 == 0:
            printbar()
            tf.print("epoch = ",epoch)
            print("train:",dict(zip(model.metrics_names,train_result)))
            print("valid:",dict(zip(model.metrics_names,valid_result)))
            print("")

train_model(model,ds_train,ds_test,10)
================================================================================11:49:43
epoch =  1
train: {'loss': 2.0567171573638916, 'sparse_categorical_accuracy': 0.4545454680919647, 'sparse_top_k_categorical_accuracy': 0.6818181872367859}
valid: {'loss': 1.6894209384918213, 'sparse_categorical_accuracy': 0.5605521202087402, 'sparse_top_k_categorical_accuracy': 0.7617987394332886}

================================================================================11:49:53
epoch =  2
train: {'loss': 1.4644863605499268, 'sparse_categorical_accuracy': 0.6363636255264282, 'sparse_top_k_categorical_accuracy': 0.7727272510528564}
valid: {'loss': 1.5152910947799683, 'sparse_categorical_accuracy': 0.6157613396644592, 'sparse_top_k_categorical_accuracy': 0.7938557267189026}

================================================================================11:50:01
epoch =  3
train: {'loss': 1.0017579793930054, 'sparse_categorical_accuracy': 0.7727272510528564, 'sparse_top_k_categorical_accuracy': 0.9545454382896423}
valid: {'loss': 1.5588842630386353, 'sparse_categorical_accuracy': 0.6228851079940796, 'sparse_top_k_categorical_accuracy': 0.8058770895004272}

================================================================================11:50:10
epoch =  4
train: {'loss': 0.6004871726036072, 'sparse_categorical_accuracy': 0.9090909361839294, 'sparse_top_k_categorical_accuracy': 1.0}
valid: {'loss': 1.7447566986083984, 'sparse_categorical_accuracy': 0.6233303546905518, 'sparse_top_k_categorical_accuracy': 0.8174532651901245}

Lowering optimizer Learning Rate...


================================================================================11:50:19
epoch =  5
train: {'loss': 0.3866238594055176, 'sparse_categorical_accuracy': 0.9545454382896423, 'sparse_top_k_categorical_accuracy': 1.0}
valid: {'loss': 1.8871253728866577, 'sparse_categorical_accuracy': 0.6308993697166443, 'sparse_top_k_categorical_accuracy': 0.816117525100708}

================================================================================11:50:28
epoch =  6
train: {'loss': 0.27341774106025696, 'sparse_categorical_accuracy': 1.0, 'sparse_top_k_categorical_accuracy': 1.0}
valid: {'loss': 2.0595862865448, 'sparse_categorical_accuracy': 0.6273375153541565, 'sparse_top_k_categorical_accuracy': 0.8089937567710876}

================================================================================11:50:37
epoch =  7
train: {'loss': 0.1923554539680481, 'sparse_categorical_accuracy': 1.0, 'sparse_top_k_categorical_accuracy': 1.0}
valid: {'loss': 2.2238168716430664, 'sparse_categorical_accuracy': 0.6251112818717957, 'sparse_top_k_categorical_accuracy': 0.8085485100746155}

================================================================================11:50:46
epoch =  8
train: {'loss': 0.12688547372817993, 'sparse_categorical_accuracy': 1.0, 'sparse_top_k_categorical_accuracy': 1.0}
valid: {'loss': 2.3778438568115234, 'sparse_categorical_accuracy': 0.6175423264503479, 'sparse_top_k_categorical_accuracy': 0.8072128295898438}

================================================================================11:50:55
epoch =  9
train: {'loss': 0.08024053275585175, 'sparse_categorical_accuracy': 1.0, 'sparse_top_k_categorical_accuracy': 1.0}
valid: {'loss': 2.501840829849243, 'sparse_categorical_accuracy': 0.6135351657867432, 'sparse_top_k_categorical_accuracy': 0.8081033229827881}

================================================================================11:51:04
epoch =  10
train: {'loss': 0.05211604759097099, 'sparse_categorical_accuracy': 1.0, 'sparse_top_k_categorical_accuracy': 1.0}
valid: {'loss': 2.61771559715271, 'sparse_categorical_accuracy': 0.6126446723937988, 'sparse_top_k_categorical_accuracy': 0.8085485100746155}
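As a follow-up illustration (not in the original), the value returned by a single train_on_batch call lines up with model.metrics_names, which is exactly how the dicts in the log above were built. Note this extra call performs one more gradient update on the model:

# One extra step on a single batch; the result matches model.metrics_names.
x, y = next(iter(ds_train))
result = model.train_on_batch(x, y)
print(dict(zip(model.metrics_names, result)))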

3. Custom training loop

A custom training loop requires no model compilation. The optimizer is applied directly to backpropagate the loss and update the parameters, which gives the greatest flexibility.
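The core mechanism is tf.GradientTape: record the forward pass, differentiate the loss, apply the gradients. Here is a minimal self-contained sketch of that cycle (the tiny linear model is an illustrative assumption, added before the full example below):

# Minimal custom-training-step sketch: fit w*x + b to y = 2x + 1.
import tensorflow as tf

w, b = tf.Variable(0.0), tf.Variable(0.0)
opt = tf.keras.optimizers.Nadam()
x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([3.0, 5.0, 7.0])

for _ in range(100):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * x + b - y))  # forward pass + loss
    grads = tape.gradient(loss, [w, b])                  # backprop
    opt.apply_gradients(zip(grads, [w, b]))              # parameter update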

tf.keras.backend.clear_session()

model = create_model()   # same architecture as in the previous sections
model.summary()

optimizer = optimizers.Nadam()
loss_func = losses.SparseCategoricalCrossentropy()

train_loss = metrics.Mean(name='train_loss')
train_metric = metrics.SparseCategoricalAccuracy(name='train_accuracy')

valid_loss = metrics.Mean(name='valid_loss')
valid_metric = metrics.SparseCategoricalAccuracy(name='valid_accuracy')

@tf.function
def train_step(model, features, labels):
    with tf.GradientTape() as tape:
        predictions = model(features, training=True)
        loss = loss_func(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss.update_state(loss)
    train_metric.update_state(labels, predictions)


@tf.function
def valid_step(model, features, labels):
    predictions = model(features)
    batch_loss = loss_func(labels, predictions)
    valid_loss.update_state(batch_loss)
    valid_metric.update_state(labels, predictions)


def train_model(model, ds_train, ds_valid, epochs):
    for epoch in tf.range(1, epochs+1):

        for features, labels in ds_train:
            train_step(model, features, labels)

        for features, labels in ds_valid:
            valid_step(model, features, labels)

        logs = 'Epoch={},Loss:{},Accuracy:{},Valid Loss:{},Valid Accuracy:{}'

        if epoch%1 == 0:
            printbar()
            tf.print(tf.strings.format(logs,
                (epoch, train_loss.result(), train_metric.result(),
                 valid_loss.result(), valid_metric.result())))
            tf.print("")

        train_loss.reset_states()
        valid_loss.reset_states()
        train_metric.reset_states()
        valid_metric.reset_states()

train_model(model, ds_train, ds_test, 10)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 300, 7)            216874    
_________________________________________________________________
conv1d (Conv1D)              (None, 296, 64)           2304      
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 148, 64)           0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 146, 32)           6176      
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 73, 32)            0         
_________________________________________________________________
flatten (Flatten)            (None, 2336)              0         
_________________________________________________________________
dense (Dense)                (None, 46)                107502    
=================================================================
Total params: 332,856
Trainable params: 332,856
Non-trainable params: 0
_________________________________________________________________
================================================================================11:52:04
Epoch=1,Loss:2.02564383,Accuracy:0.464707196,Valid Loss:1.68035507,Valid Accuracy:0.55921638

================================================================================11:52:11
Epoch=2,Loss:1.48306167,Accuracy:0.612781107,Valid Loss:1.52322364,Valid Accuracy:0.606411397

================================================================================11:52:18
Epoch=3,Loss:1.20491719,Accuracy:0.677243352,Valid Loss:1.56225574,Valid Accuracy:0.624666095

================================================================================11:52:25
Epoch=4,Loss:0.944778264,Accuracy:0.749387681,Valid Loss:1.7202934,Valid Accuracy:0.620658934

================================================================================11:52:32
Epoch=5,Loss:0.701866329,Accuracy:0.817635298,Valid Loss:1.97179747,Valid Accuracy:0.61843276

================================================================================11:52:39
Epoch=6,Loss:0.531810164,Accuracy:0.866844773,Valid Loss:2.25338316,Valid Accuracy:0.605075717

================================================================================11:52:46
Epoch=7,Loss:0.425013304,Accuracy:0.896236897,Valid Loss:2.47035336,Valid Accuracy:0.601068556

================================================================================11:52:53
Epoch=8,Loss:0.355143964,Accuracy:0.915609,Valid Loss:2.67822,Valid Accuracy:0.591718614

================================================================================11:53:00
Epoch=9,Loss:0.30812338,Accuracy:0.92785573,Valid Loss:2.86121941,Valid Accuracy:0.583704352

================================================================================11:53:07
Epoch=10,Loss:0.275565386,Accuracy:0.934535742,Valid Loss:2.99354172,Valid Accuracy:0.579252

 

References:

Open-source e-book: https://lyhue1991.github.io/eat_tensorflow2_in_30_days/

GitHub repository: https://github.com/lyhue1991/eat_tensorflow2_in_30_days
