How to compute gradients for outputs taken from intermediate layers and update the weights with an optimizer
I am trying to implement the following architecture and I am not sure whether I am applying the gradient tape correctly.
In the architecture above, you can see that outputs are taken from multiple layers (the blue boxes). Each blue box is called a loss branch in the paper and contains two losses: cross entropy and L2 loss. I wrote the architecture in TensorFlow 2 and use a GradientTape for custom training. The one thing I am not sure about is how the losses should drive the weight updates through the gradient tape.
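For concreteness, a network with several such taps can be sketched with the Keras functional API. The layer counts, sizes, and names below are placeholders for illustration, not the paper's exact configuration:

```python
import tensorflow as tf

def build_multi_branch_net(num_branches=3):
    # Toy backbone: stacks of conv layers; after every stack we tap the
    # feature map and attach a loss branch consisting of a classification
    # head (2 channels: face / background) and a regression head (4 bbox offsets).
    inputs = tf.keras.Input(shape=(160, 160, 3))
    x = inputs
    outputs = []
    for b in range(num_branches):
        x = tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu')(x)
        x = tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu')(x)
        score = tf.keras.layers.Conv2D(2, 1, name=f'score_{b}')(x)
        bbox = tf.keras.layers.Conv2D(4, 1, name=f'bbox_{b}')(x)
        outputs += [score, bbox]
    return tf.keras.Model(inputs, outputs)

net = build_multi_branch_net()
preds = net(tf.zeros((1, 160, 160, 3)))
print(len(preds))  # 6 tensors: one (score, bbox) pair per branch
```

The model returns its outputs as a flat list, `[score_0, bbox_0, score_1, bbox_1, ...]`, which matches the `outputs[i * 2]` / `outputs[i * 2 + 1]` indexing used in the loss function further down.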
I have two questions:
- How should I use the gradient tape with multiple losses in this case? I am interested in seeing code for this!
- Take, for example, the third blue box (the third loss branch) in the figure above: it takes its input from the conv 13 layer and produces two outputs, one for classification and one for regression. After computing these losses, how should I update the weights? Should I update all the layers above (conv 1 through conv 13), or only the layers that feed into conv 13 (conv 11, 12 and 13)?
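On the second question: with reverse-mode autodiff you do not pick which layers to update by hand. If you compute a single scalar (e.g. the sum of all branch losses) inside the tape and differentiate it with respect to `net.trainable_variables`, each variable automatically receives gradient contributions only from the losses downstream of it: conv 1 gets contributions from every branch, while conv 11 to 13 only receive gradients from the third branch onward. A minimal sketch of such a step (function and variable names here are illustrative, not your exact code):

```python
import tensorflow as tf

def train_step(net, optimizer, images, targets, loss_criterion):
    with tf.GradientTape() as tape:
        outputs = net(images, training=True)
        # loss_criterion returns one scalar per branch; tf.add_n sums them
        # into a single scalar. Autodiff then routes each branch's gradient
        # back through exactly the layers that produced that branch's output.
        branch_losses = loss_criterion(outputs, targets)
        total_loss = tf.add_n(branch_losses)
    grads = tape.gradient(total_loss, net.trainable_variables)
    optimizer.apply_gradients(zip(grads, net.trainable_variables))
    return total_loss

# Tiny self-contained demo: a shared trunk with two heads and
# squared-error "branch losses".
inp = tf.keras.Input(shape=(3,))
h = tf.keras.layers.Dense(8, activation='relu')(inp)
head_a = tf.keras.layers.Dense(2)(h)
head_b = tf.keras.layers.Dense(4)(h)
demo_net = tf.keras.Model(inp, [head_a, head_b])

def demo_criterion(outputs, targets):
    return [tf.reduce_mean(tf.square(o - t)) for o, t in zip(outputs, targets)]

x = tf.random.normal((5, 3))
t = [tf.zeros((5, 2)), tf.zeros((5, 4))]
loss = train_step(demo_net, tf.keras.optimizers.SGD(0.01), x, t, demo_criterion)
```

Updating only conv 11 to 13 for the third branch would require either a `tf.stop_gradient` on the branch input or a restricted variable list, and that is not what multi-branch detectors of this kind normally do: all upstream layers are trained jointly.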
I am also attaching a link to a question I posted in more detail yesterday.
Below is the snippet where I attempt the gradient-descent step. Please correct me if I am wrong.
images = batch.data[0]
images = (images - 127.5) / 127.5
targets = batch.label

with tensorflow.GradientTape() as tape:
    outputs = self.net(images)
    loss = self.loss_criterion(outputs, targets)
    self.scheduler(i, self.optimizer)

grads = tape.gradient(loss, self.net.trainable_variables)
self.optimizer.apply_gradients(zip(grads, self.net.trainable_variables))
Below is the code of the custom loss function used as loss_criterion above.
losses = []
for i in range(self.num_output_scales):
    pred_score = outputs[i * 2]
    pred_bbox = outputs[i * 2 + 1]
    gt_mask = targets[i * 2]
    gt_label = targets[i * 2 + 1]

    pred_score_softmax = tensorflow.nn.softmax(pred_score, axis=1)
    # The mask must be boolean, since it is used as the condition of tf.where below.
    loss_mask = tensorflow.ones(pred_score_softmax.shape, tensorflow.bool)

    if self.hnm_ratio > 0:
        # Channel 0 of gt_label marks positive (face) locations, channel 1 negatives.
        pos_flag = (gt_label[:, 0, :] > 0.5)
        pos_num = tensorflow.math.reduce_sum(tensorflow.cast(pos_flag, dtype=tensorflow.float32))

        if pos_num > 0:
            neg_flag = (gt_label[:, 1, :] > 0.5)
            neg_num = tensorflow.math.reduce_sum(tensorflow.cast(neg_flag, dtype=tensorflow.float32))
            neg_num_selected = min(int(self.hnm_ratio * pos_num), int(neg_num))
            neg_prob = tensorflow.where(neg_flag, pred_score_softmax[:, 1, :],
                                        tensorflow.zeros_like(pred_score_softmax[:, 1, :]))
            neg_prob_sort = tensorflow.sort(tensorflow.reshape(neg_prob, shape=(1, -1)), direction='ASCENDING')
            prob_threshold = neg_prob_sort[0][int(neg_num_selected)]
            neg_grad_flag = (neg_prob <= prob_threshold)
            loss_mask = tensorflow.concat([tensorflow.expand_dims(pos_flag, axis=1),
                                           tensorflow.expand_dims(neg_grad_flag, axis=1)], axis=1)
        else:
            neg_choice_ratio = 0.1
            neg_num_selected = int(tensorflow.cast(tensorflow.size(pred_score_softmax[:, 1, :]), dtype=tensorflow.float32) * neg_choice_ratio)
            neg_prob = pred_score_softmax[:, 1, :]
            neg_prob_sort = tensorflow.sort(tensorflow.reshape(neg_prob, shape=(1, -1)), direction='ASCENDING')
            prob_threshold = neg_prob_sort[0][int(neg_num_selected)]
            neg_grad_flag = (neg_prob <= prob_threshold)
            loss_mask = tensorflow.concat([tensorflow.expand_dims(pos_flag, axis=1),
                                           tensorflow.expand_dims(neg_grad_flag, axis=1)], axis=1)

    pred_score_softmax_masked = tensorflow.where(loss_mask, pred_score_softmax,
                                                 tensorflow.zeros_like(pred_score_softmax, dtype=tensorflow.float32))
    pred_score_log = tensorflow.math.log(pred_score_softmax_masked)
    score_cross_entropy = - tensorflow.where(loss_mask, gt_label[:, :2, :],
                                             tensorflow.zeros_like(gt_label[:, :2, :], dtype=tensorflow.float32)) * pred_score_log
    loss_score = tensorflow.math.reduce_sum(score_cross_entropy) / \
                 tensorflow.cast(tensorflow.size(score_cross_entropy), tensorflow.float32)

    mask_bbox = gt_mask[:, 2:6, :]
    predict_bbox = pred_bbox * mask_bbox
    label_bbox = gt_label[:, 2:6, :] * mask_bbox
    # l2 loss of boxes
    # loss_bbox = tensorflow.math.reduce_sum(tensorflow.nn.l2_loss((label_bbox - predict_bbox)) ** 2) / 2
    loss_bbox = mse(label_bbox, predict_bbox) / tensorflow.math.reduce_sum(mask_bbox)

    # Adding only the losses relevant to this branch and sending them for backprop
    losses.append(loss_score + loss_bbox)
    # losses.append(loss_bbox)

    # Adding all losses together and sending to backprop (approach 1)
    # loss_cls += loss_score
    # loss_reg += loss_bbox
    # loss_branch.append(loss_score)
    # loss_branch.append(loss_bbox)
    # loss = loss_cls + loss_reg

return losses
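The hard-negative-mining step above is the part most worth testing in isolation. Below is a sketch with simplified 1-D shapes; `select_hard_negatives` and its arguments are illustrative names, and it assumes `neg_prob` holds predicted probabilities and `neg_flag` marks negative locations. It keeps the negatives the model scores lowest, matching the ascending sort plus threshold in the loss above, but uses `top_k` indices instead of an index-based threshold, so it cannot accidentally land the threshold inside the zero-filled non-negative positions:

```python
import tensorflow as tf

def select_hard_negatives(neg_prob, neg_flag, num_selected):
    # Mask out non-negative positions with +inf so they can never be
    # selected, then pick the num_selected smallest probabilities
    # (the "hardest" negatives) via top_k on the negated values.
    flat_prob = tf.reshape(neg_prob, [-1])
    flat_flag = tf.reshape(neg_flag, [-1])
    masked = tf.where(flat_flag, flat_prob,
                      tf.fill(tf.shape(flat_prob), float('inf')))
    _, idx = tf.math.top_k(-masked, k=num_selected)
    keep = tf.scatter_nd(tf.expand_dims(idx, 1),
                         tf.ones_like(idx, dtype=tf.float32),
                         tf.shape(flat_prob))
    return tf.cast(keep, tf.bool)

probs = tf.constant([0.9, 0.1, 0.4, 0.8, 0.2])
flags = tf.constant([True, True, False, True, True])
mask = select_hard_negatives(probs, flags, 2)
print(mask.numpy())  # keeps indices 1 and 4: the two lowest-prob negatives
```

One thing to verify in your own version: after zero-filling the non-negative positions and sorting ascending, the index `neg_num_selected` can fall inside the run of zeros, making `prob_threshold` 0 and selecting no real negatives, which would leave the classification loss dominated by positives and could explain a loss that stops decreasing.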
I don't get any errors, but the loss does not decrease to a minimum. Here is my training log.
Could someone please help me fix this?