How to fix a ValueError related to the size of the X dataset array in Jupyter
I'm fairly new to tensorflow and I've run into an error I don't know how to fix.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Dropout,Activation,Flatten
from tensorflow.keras.layers import Conv2D,MaxPooling2D
import pickle
import numpy as np
with open("X.pickle","rb") as pickle_in:
    X = pickle.load(pickle_in)
with open("y.pickle","rb") as pickle_in:
    y = pickle.load(pickle_in)
X = np.array(X)
y = np.array(y)
print(tf.size(X))
print(tf.size(y))
X = X/255.0
print(tf.size(X))
print(tf.size(y))
model = Sequential()
print(tf.size(X))
print(tf.size(y))
model.add(Conv2D(256,(3,3),input_shape=X.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
print(tf.size(X))
print(tf.size(y))
model.add(Conv2D(256,(3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
print(tf.size(X))
print(tf.size(y))
model.add(Flatten())
print(tf.size(X))
print(tf.size(y))
model.add(Dense(64))
print(tf.size(X))
print(tf.size(y))
model.add(Dense(1))
model.add(Activation('sigmoid'))
print(tf.size(X))
print(tf.size(y))
x_val = X[-40:]
y_val = y[-40:]
X = X[:-40]
y = y[:-40]
print(tf.size(X))
print(tf.size(y))
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
print(tf.size(X))
print(tf.size(y))
model.fit(X,y,batch_size = 10,epochs = 10,validation_data = (x_val,y_val))
When I try to run this code, I get the following error:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-24-151ffb1a84e3> in <module>
     45               metrics=['accuracy'])
     46 
---> 47 model.fit(X, y, batch_size=10, epochs=10, validation_data=(x_val, y_val))

/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1048         training_utils.RespectCompiledTrainableState(self):
   1049       # Creates a `tf.data.Dataset` and handles batch and epoch iteration.
-> 1050       data_handler = data_adapter.DataHandler(
   1051           x=x,
   1052           y=y,

/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, use_multiprocessing, model, steps_per_execution)
   1098 
   1099     adapter_cls = select_data_adapter(x, y)
-> 1100     self._adapter = adapter_cls(
   1101         x,
   1102         y, sample_weights, sample_weight_modes, steps, **kwargs)

    272 
    273     num_samples = set(int(i.shape[0]) for i in nest.flatten(inputs)).pop()
--> 274     _check_data_cardinality(inputs)
    275 
    276     # If batch_size is not passed but steps is, calculate from the input data.

/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py in _check_data_cardinality(data)
   1527           label, ", ".join(str(i.shape[0]) for i in nest.flatten(single_data)))
   1528   msg += "Make sure all arrays contain the same number of samples."
-> 1529   raise ValueError(msg)
   1530 
   1531 

ValueError: Data cardinality is ambiguous:
  x sizes: 2360
  y sizes: 760
Make sure all arrays contain the same number of samples.
It looks like at some point the x data has become 3 times the size of the y data, and that's what's causing the error. I believe both should be 800, but the x data currently appears to be 2400. When I subtracted 40 from each for the validation split, I tested subtracting 1600 from the data instead, and that fixed all the errors, so that seems to confirm the cause. Does anyone know why the x data is 3 times the size of the y data?
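As context for the 3× factor: a colour (RGB) image stores 3 values per pixel where a grayscale image stores 1, so reshaping colour data as if it were single-channel inflates the apparent sample count by exactly 3. A minimal sketch (the 75×75 image size and 800-image count are assumptions chosen only for illustration):

```python
import numpy as np

# Hypothetical image sizes, for illustration only.
gray = np.zeros((75, 75, 1))   # grayscale: 1 value per pixel
color = np.zeros((75, 75, 3))  # RGB: 3 values per pixel

print(gray.size)   # 5625
print(color.size)  # 16875 -- exactly 3x the grayscale element count

# Reshaping a batch of colour images as if they were grayscale
# multiplies the apparent number of "samples" by 3:
batch = np.zeros((800, 75, 75, 3))
as_gray = batch.reshape(-1, 75, 75, 1)
print(as_gray.shape[0])  # 2400 = 800 * 3
```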
Update: here is the code where I create the dataset:
import random
import pickle
import numpy as np
random.shuffle(training_data)
#for sample in training_data[:10]:
# print(sample[1])
X = []
y = []
for features,label in training_data:
    X.append(features)
    y.append(label)
# print(X[0].reshape(-1,IMG_SIZE,1))
X = np.array(X).reshape(-1,IMG_SIZE,IMG_SIZE,1)
y = np.array(y)
pickle_out = open("X.pickle","wb")
pickle.dump(X,pickle_out)
pickle_out.close()
pickle_out = open("y.pickle","wb")
pickle.dump(y,pickle_out)
pickle_out.close()
Update 2: It looks like I'm splitting the data correctly… could this have something to do with the way I'm using jpeg files? Also, I tried using tf.size() to find the size of each array. The y array seems to behave as expected, but the x array starts at a size of 13500000, and when I set aside 40 samples for validation it changes to 13275000.
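One note on those measurements: `tf.size()` returns the total number of scalar elements in a tensor, not the number of samples, so it multiplies the sample count by every pixel and channel. `X.shape[0]` (or `len(X)`) is the number that `model.fit` actually compares between X and y. A small sketch with assumed dimensions (800 colour images of 75×75, which happens to give 13,500,000 elements):

```python
import numpy as np

# Assumed shapes for illustration: 800 colour images of 75x75 pixels.
X = np.zeros((800, 75, 75, 3))

# tf.size(X) would report the same number as X.size: every scalar element.
print(X.size)      # 13500000 -- samples * pixels * channels
print(X.shape[0])  # 800      -- the sample count model.fit compares
print(len(X))      # 800      -- same thing, via the leading axis
```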
Solution
I've figured it out. The X values were 3 times what was expected because the images are colour rather than grayscale, so each pixel has 3 values instead of the expected 1.
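Two ways to make the sample count come out right, sketched below with assumed names and shapes (`IMG_SIZE` and the array sizes are illustrative, not taken from the original data): either keep all 3 channels in the reshape (and let `input_shape=X.shape[1:]` pick them up), or genuinely load the images as grayscale at read time, for example with `cv2.imread(path, cv2.IMREAD_GRAYSCALE)`, so that one channel per pixel is actually true.

```python
import numpy as np

IMG_SIZE = 75  # assumed image size, for illustration only
X_color = np.zeros((800, IMG_SIZE, IMG_SIZE, 3))  # 800 colour images

# Option 1: keep the colour channels, so the sample count is preserved.
X = X_color.reshape(-1, IMG_SIZE, IMG_SIZE, 3)
print(X.shape)  # (800, 75, 75, 3) -- input_shape=X.shape[1:] sees 3 channels

# Option 2: collapse to one channel (a simple channel-average stand-in for
# loading the files as grayscale in the first place).
X_gray = X_color.mean(axis=-1, keepdims=True)
print(X_gray.shape)  # (800, 75, 75, 1)
```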