How to define a manually ordered MNIST dataset with batch size 1 in PyTorch
[]: this denotes one batch. For example, with a batch size of 4, a batch would look like [1,4,7,2]; the length of [] is the batch size.
I want the training set to look like this:
[1] -> [1] -> [1] -> [1] -> [1] -> [7] -> [7] -> [7] -> [7] -> [7] -> [3] -> [3] -> [3] -> [3] -> [3] -> ... and so on
That means first five 1s (batch size = 1), then five 7s (batch size = 1), then five 3s (batch size = 1), and so on...
Can someone give me an idea?
It would be very helpful if someone could explain how to implement this in code.
Thanks! :)
Solution
If you want a DataLoader where you simply define the class label for each sample yourself, you can use the torch.utils.data.Subset class. Despite its name, it doesn't have to define a strict subset of the dataset. For example:
import torch
import torchvision
import torchvision.transforms as T
from itertools import cycle

mnist = torchvision.datasets.MNIST(root='./', train=True, transform=T.ToTensor())

# not sure what "...and so on" implies, but define this list however you like
target_classes = [1, 1, 7, 3, 3]

# create cyclic iterators of indices for each class in MNIST
indices = dict()
for label in torch.unique(mnist.targets).tolist():
    indices[label] = cycle(torch.nonzero(mnist.targets == label).flatten().tolist())

# define the order of indices in the new mnist subset based on target_classes
new_indices = []
for t in target_classes:
    new_indices.append(next(indices[t]))

# create a Subset of MNIST based on new_indices
mnist_modified = torch.utils.data.Subset(mnist, new_indices)

dataloader = torch.utils.data.DataLoader(mnist_modified, batch_size=1, shuffle=False)

for idx, (x, y) in enumerate(dataloader):
    # training loop
    print(f'Batch {idx+1} labels: {y.tolist()}')
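If you want target_classes to match the exact pattern from the question (five of each class in a row), the list can also be built programmatically rather than typed out; a minimal sketch, where class_order and repeat_count are hypothetical example values:

```python
# class_order is a hypothetical example; define it however you like
class_order = [1, 7, 3]
repeat_count = 5

# repeat each class repeat_count times, preserving order
target_classes = [c for c in class_order for _ in range(repeat_count)]

print(target_classes)  # five 1s, then five 7s, then five 3s
```

The resulting list can be passed straight to the Subset construction above.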
If you want a DataLoader that returns five samples of the same class in a row, but you don't want to manually define the class for every index, you can create a custom sampler. For example:
import torch
import torchvision
import torchvision.transforms as T
from itertools import cycle

class RepeatClassSampler(torch.utils.data.Sampler):
    def __init__(self, targets, repeat_count, length, shuffle=False):
        if not torch.is_tensor(targets):
            targets = torch.tensor(targets)
        self.targets = targets
        self.repeat_count = repeat_count
        self.length = length
        self.shuffle = shuffle
        self.classes = torch.unique(targets).tolist()
        self.class_indices = dict()
        for label in self.classes:
            self.class_indices[label] = torch.nonzero(targets == label).flatten()

    def __iter__(self):
        class_index_iters = dict()
        for label in self.classes:
            if self.shuffle:
                # permute each class's own index list before cycling over it
                class_index_iters[label] = cycle(self.class_indices[label][torch.randperm(len(self.class_indices[label]))].tolist())
            else:
                class_index_iters[label] = cycle(self.class_indices[label].tolist())
        if self.shuffle:
            target_iter = cycle(self.targets[torch.randperm(len(self.targets))].tolist())
        else:
            target_iter = cycle(self.targets.tolist())

        def index_generator():
            for i in range(self.length):
                if i % self.repeat_count == 0:
                    current_class = next(target_iter)
                yield next(class_index_iters[current_class])
        return index_generator()

    def __len__(self):
        return self.length

mnist = torchvision.datasets.MNIST(root='./', train=True, transform=T.ToTensor())

dataloader = torch.utils.data.DataLoader(
    mnist,
    sampler=RepeatClassSampler(
        targets=mnist.targets,
        repeat_count=5,
        length=15,  # how many samples in total to pick from your dataset
        shuffle=True))

for idx, (x, y) in enumerate(dataloader):
    # training loop
    print(f'Batch {idx+1} labels: {y.tolist()}')
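The index-generation logic inside __iter__ can be checked in isolation, without downloading MNIST, by running the same idea (with shuffle=False) on a toy label list; a minimal sketch, where the targets values are hypothetical stand-ins for mnist.targets:

```python
from itertools import cycle

# toy stand-in for mnist.targets (hypothetical labels, not real MNIST data)
targets = [0, 0, 1, 1, 1, 2]

# cyclic iterator of dataset indices for each class, as in RepeatClassSampler
class_index_iters = {
    label: cycle([i for i, t in enumerate(targets) if t == label])
    for label in sorted(set(targets))
}
target_iter = cycle(targets)

repeat_count, length = 2, 6
indices = []
for i in range(length):
    if i % repeat_count == 0:
        current_class = next(target_iter)
    indices.append(next(class_index_iters[current_class]))

# each class should now appear repeat_count times in a row
print([targets[i] for i in indices])
```

Note that the per-class index lists are built eagerly (list comprehensions) before being wrapped in cycle, so each iterator is bound to its own class.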