How to fix the Python error "ValueError: Data cardinality is ambiguous" when computing a deep learning model's validation loss
Although I have prior coding experience in R and Python, I am new to Jupyter Notebook, TensorFlow, and building deep learning models, so I am looking for someone who can help me diagnose this error. I am following a tutorial (https://www.youtube.com/watch?v=wQ8BIBpya2k&list=PLQVvvaa0QuDfhTox0AjmQ6tvTgMBZBEXN) that demonstrates how to classify images with a deep learning model. The model loads the mnist image dataset and classifies handwritten digits into ten classes. It trains for three epochs and reaches about 97% overall training accuracy.
#Import module
import tensorflow as tf
#Import image dataset
mnist = tf.keras.datasets.mnist
(x_train,y_train),(x_test,y_test) = mnist.load_data()
#Normalize the training and test data
x_train = tf.keras.utils.normalize(x_train,axis = 1)
x_test = tf.keras.utils.normalize(x_train,axis = 1)
#Define model
model = tf.keras.models.Sequential() #Feed-forward model
#Define input layer
model.add(tf.keras.layers.Flatten()) #Input layer
#Two hidden layers
model.add(tf.keras.layers.Dense(128,activation=tf.nn.relu)) #Rectified linear unit, a common default
model.add(tf.keras.layers.Dense(128,activation=tf.nn.relu))
#Output layer
#Corresponds to the number of classifications; ten in this case
#No relu because it is a probability distribution
model.add(tf.keras.layers.Dense(10,activation=tf.nn.softmax))
#Defining training parameters for the model
#Loss measures the error, i.e. what the model gets wrong
model.compile(optimizer = 'adam',loss = 'sparse_categorical_crossentropy', #binary crossentropy would suit a two-class problem such as cats vs. dogs
metrics = ['accuracy'])
#Training the model
#Epoch = one full pass over the training data
model.fit(x_train,y_train,epochs = 3)
The output is as follows:
Epoch 1/3
1875/1875 [==============================] - 1s 578us/step - loss: 0.2612 - accuracy: 0.9236
Epoch 2/3
1875/1875 [==============================] - 1s 571us/step - loss: 0.1068 - accuracy: 0.9668
Epoch 3/3
1875/1875 [==============================] - 1s 562us/step - loss: 0.0721 - accuracy: 0.9773
When I try to compute the model's loss...
#Printing the model loss
val_loss,val_acc = model.evaluate(x_test,y_test)
print(val_loss,val_acc)
...I get this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-6-3452f7a38776> in <module>
----> 1 val_loss,val_acc = model.evaluate(x_test,y_test)
      2 print(val_loss,val_acc)

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
    106   def _method_wrapper(self, *args, **kwargs):
    107     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
--> 108       return method(self, *args, **kwargs)
    109
    110   # Running inside `run_distribute_coordinator` already.

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict)
   1354         use_multiprocessing=use_multiprocessing,
   1355         model=self,
-> 1356         steps_per_execution=self._steps_per_execution)
   1357
   1358     # Container that configures and calls `tf.keras.Callback`s.

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, model, steps_per_execution)
   1115         use_multiprocessing=use_multiprocessing,
   1116         distribution_strategy=ds_context.get_strategy(),
-> 1117         model=model)
   1118
   1119     strategy = ds_context.get_strategy()

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, sample_weights, sample_weight_modes, **kwargs)
    280           label, ",".join(str(i.shape[0]) for i in nest.flatten(data)))
    281       msg += "Please provide data which shares the same first dimension."
--> 282       raise ValueError(msg)
    283     num_samples = num_samples.pop()
    284
ValueError: Data cardinality is ambiguous:
x sizes: 60000
y sizes: 10000
Please provide data which shares the same first dimension.
How do I fix this error and compute the loss?
Solution
As the error says, the number of samples in x_test does not match the number in y_test:
x sizes: 60000
y sizes: 10000
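A quick way to confirm a mismatch like this (a minimal check, using the same variables as in your code) is to print the first dimension of each array before calling evaluate:
#Sanity check: model.evaluate requires x and y to share the same first dimension
print(x_test.shape)  #(60000, 28, 28) here, because x_test was mistakenly built from x_train
print(y_test.shape)  #(10000,), the true number of test labels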
If you check your code, you will see that the line creating x_test should be:
x_test = tf.keras.utils.normalize(x_test,axis = 1)
not
x_test = tf.keras.utils.normalize(x_train,axis = 1)
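For completeness, here is a minimal corrected sketch of the preprocessing and evaluation steps. Once x_test is normalized from its own source array, both test arrays share a first dimension of 10,000 and the evaluation runs without the cardinality error:
#Normalize each split from its own source array
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)  #was x_train before, which caused the error

#x_test and y_test now share the first dimension (10,000 samples), so this works
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)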