How to fix the 'bert-base-multilingual-uncased' DataLoader RuntimeError: stack expects each tensor to be equal size
I am a beginner in NLP. I am taking part in the competition https://www.kaggle.com/c/contradictory-my-dear-watson, using the model 'bert-base-multilingual-uncased' together with the BERT tokenizer from the same checkpoint, and training on a Kaggle TPU. This is the custom Dataset I created.
class SherlockDataset(torch.utils.data.Dataset):
    def __init__(self, premise, hypothesis, tokenizer, max_len, target=None):
        super(SherlockDataset, self).__init__()
        self.premise = premise
        self.hypothesis = hypothesis
        self.tokenizer = tokenizer
        self.max_len = max_len
        self.target = target

    def __len__(self):
        return len(self.premise)

    def __getitem__(self, item):
        sen1 = str(self.premise[item])
        sen2 = str(self.hypothesis[item])
        encode_dict = self.tokenizer.encode_plus(
            sen1,
            sen2,
            add_special_tokens=True,
            max_len=self.max_len,
            pad_to_max_len=True,
            return_attention_mask=True,
            return_tensors='pt',
        )
        input_ids = encode_dict["input_ids"][0]
        token_type_ids = encode_dict["token_type_ids"][0]
        att_mask = encode_dict["attention_mask"][0]
        if self.target is not None:
            sample = {
                "input_ids": input_ids,
                "token_type_ids": token_type_ids,
                "att_mask": att_mask,
                "targets": self.target[item],
            }
        else:
            sample = {
                "input_ids": input_ids,
                "att_mask": att_mask,
            }
        return sample
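For reference, I build the dataset and DataLoader roughly like this (a simplified sketch; the exact column names, max_len, and batch size in my notebook may differ):

from transformers import BertTokenizer
import pandas as pd
import torch

# Competition data and the tokenizer matching the model checkpoint
train_df = pd.read_csv("train.csv")
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")

train_dataset = SherlockDataset(
    premise=train_df["premise"].values,
    hypothesis=train_df["hypothesis"].values,
    tokenizer=tokenizer,
    max_len=128,
    target=train_df["label"].values,
)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=4)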
And this is how the batches from the DataLoader are consumed during training:
def train_fn(model, dataloader, optimizer, criterion, scheduler=None):
    model.train()
    print("train")
    for idx, sample in enumerate(dataloader):
        '''
        input_ids = sample["input_ids"].to(config.DEVICE)
        token_type_ids = sample["token_type_ids"].to(config.DEVICE)
        att_mask = sample["att_mask"].to(config.DEVICE)
        targets = sample["targets"].to(config.DEVICE)
        '''
        print("train_out")
        input_ids = sample[0].to(config.DEVICE)
        token_type_ids = sample[1].to(config.DEVICE)
        att_mask = sample[2].to(config.DEVICE)
        targets = sample[3].to(config.DEVICE)
        optimizer.zero_grad()
        output = model(input_ids, token_type_ids, att_mask)
        output = np.argmax(output, axis=1)
        loss = criterion(outputs, targets)
        accuracy = accuracy_score(output, targets)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        xm.optimizer_step(optimizer, barrier=True)
        if scheduler is not None:
            scheduler.step()
        if idx % 50 == 0:
            print(f"idx : {idx}, TRAIN LOSS : {loss}")
I keep running into this error again and again:
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 79, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", in
    return [default_collate(samples) for samples in transposed]
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [47] at entry 0 and [36] at entry 1
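As far as I can tell from the traceback, the failure happens in default_collate when it tries to torch.stack the per-sample tensors into one batch tensor and the samples have different lengths. The same error can be reproduced in isolation (a minimal sketch, not my actual data):

import torch

a = torch.zeros(47, dtype=torch.long)  # token ids of sample 0 (length 47)
b = torch.zeros(36, dtype=torch.long)  # token ids of sample 1 (length 36)
torch.stack([a, b])  # RuntimeError: stack expects each tensor to be equal size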
I have tried changing the num_workers value and the batch size. I have checked the data, and none of the texts are null, 0, or corrupted in any way. I have also tried changing max_len in the tokenizer, but I cannot find a way to fix this problem. Please take a look and tell me how I can fix it.
Solution
data_loader = torch.utils.data.DataLoader(batch_size=batch_size, dataset=data, shuffle=shuffle, num_workers=0, collate_fn=lambda x: x)
Using a collate_fn in the DataLoader should fix the problem.
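With collate_fn=lambda x: x the DataLoader returns each batch as a plain Python list of sample dictionaries, so the training loop has to stack or pad them itself. If you would rather keep batches as stacked tensors, you can instead pad inside a custom collate function. A sketch along those lines (the key names and padding value 0 follow the SherlockDataset samples above; everything else is an assumption, not tested on the competition data):

import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    # Pad every tensor field to the longest sequence in this batch, then stack
    # into [batch_size, max_len_in_batch] tensors; collect targets separately.
    out = {}
    for key in batch[0]:
        if key == "targets":
            out[key] = torch.tensor([sample[key] for sample in batch])
        else:
            out[key] = pad_sequence([sample[key] for sample in batch], batch_first=True, padding_value=0)
    return out

data_loader = torch.utils.data.DataLoader(dataset=data, batch_size=batch_size, shuffle=shuffle, num_workers=0, collate_fn=pad_collate)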