Why doesn't the SGD loss match between my PyTorch code and my from-scratch Python code for linear regression on the wine dataset?
I am trying to implement multivariate linear regression on the wine dataset. However, when I compare the results of my PyTorch code with my from-scratch Python code, the losses are not the same.
My scratch code:
Functions:
def yinfer(X, beta):
    # prediction: intercept beta[0] plus dot product with the remaining coefficients
    return beta[0] + np.dot(X, beta[1:])

def cost(X, Y, beta):
    # mean squared error over all m samples
    total = 0
    m = len(Y)
    for i in range(m):
        total = total + (yinfer(X[i], beta) - Y[i]) * (yinfer(X[i], beta) - Y[i])
    return total / (1.0 * m)
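As a sanity check (my own addition, not part of the question, using randomly generated data in place of the wine dataset), the loop-based `cost` above should agree with a vectorized MSE:

```python
import numpy as np

def yinfer(X, beta):
    # prediction: intercept beta[0] plus dot product with the remaining coefficients
    return beta[0] + np.dot(X, beta[1:])

def cost(X, Y, beta):
    # mean squared error over all m samples, computed sample by sample
    total = 0
    m = len(Y)
    for i in range(m):
        total = total + (yinfer(X[i], beta) - Y[i]) ** 2
    return total / (1.0 * m)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 11))   # 5 samples, 11 features, like the wine data shape
Y = rng.normal(size=5)
beta = rng.normal(size=12)

# vectorized equivalent of the loop above
vec = np.mean((yinfer(X, beta) - Y) ** 2)
assert abs(cost(X, Y, beta) - vec) < 1e-12
```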
Main code (I used the same initial weights as in the PyTorch code):
alpha = 0.005
b = [0, 0.04086357, -0.02831656, 0.09622949, -0.15162516, 0.60188454,
     0.47528714, -0.6066466, -0.22995654, -0.58388734, 0.20954669, -0.67851365]
beta = np.array(b)
print(beta)
iterations = 1000
arr_cost = np.zeros((iterations, 2))
m = len(Y)
temp_beta = np.zeros(12)
for i in range(iterations):
    for k in range(m):  # one update per sample (SGD)
        temp_beta[0] = yinfer(X[k, :], beta) - Y[k]
        temp_beta[1:] = (yinfer(X[k, :], beta) - Y[k]) * X[k, :]
        beta = beta - alpha * temp_beta / (1.0 * m)  # (m*np.linalg.norm(temp_beta))
    arr_cost[i] = [i, cost(X, Y, beta)]
    # print(cost(X, Y, beta))
plt.scatter(arr_cost[0:iterations, 0], arr_cost[0:iterations, 1])
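One way to make sure the scratch updates point in the right direction is to compare the analytic gradient of the cost against a numerical finite-difference gradient. This is my own sketch (random data, not the wine dataset, and it checks the gradient of the full mean-squared cost rather than the per-sample update above):

```python
import numpy as np

def yinfer(X, beta):
    return beta[0] + np.dot(X, beta[1:])

def cost(X, Y, beta):
    return np.mean((yinfer(X, beta) - Y) ** 2)

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 11))
Y = rng.normal(size=6)
beta = rng.normal(size=12)

# analytic gradient of the mean-squared cost with respect to beta
resid = yinfer(X, beta) - Y
grad = np.empty(12)
grad[0] = 2 * np.mean(resid)                        # d/d(intercept)
grad[1:] = 2 * np.mean(resid[:, None] * X, axis=0)  # d/d(coefficients)

# central finite-difference check, coordinate by coordinate
eps = 1e-6
for j in range(12):
    e = np.zeros(12)
    e[j] = eps
    num = (cost(X, Y, beta + e) - cost(X, Y, beta - e)) / (2 * eps)
    assert abs(num - grad[j]) < 1e-5
```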
My PyTorch code:
class LinearRegression(nn.Module):
    def __init__(self, n_input_features):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(n_input_features, 1)
        # self.linear.weight.data = b.view(1, -1)
        self.linear.bias.data.fill_(0.0)
        nn.init.xavier_uniform_(self.linear.weight)
        # nn.init.xavier_normal_(self.linear.bias)

    def forward(self, x):
        y_predicted = self.linear(x)
        return y_predicted

model = LinearRegression(11)
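The commented-out `weight.data` line suggests the intent was to start the PyTorch model from the same weights `b` as the scratch loop. A minimal sketch of how that could be done (my own suggestion, not from the question, using the `b` list from above):

```python
import torch
import torch.nn as nn

b = [0, 0.04086357, -0.02831656, 0.09622949, -0.15162516, 0.60188454,
     0.47528714, -0.6066466, -0.22995654, -0.58388734, 0.20954669, -0.67851365]

linear = nn.Linear(11, 1)
with torch.no_grad():
    # b[0] is the intercept, b[1:] are the 11 feature coefficients
    linear.bias.fill_(b[0])
    linear.weight.copy_(torch.tensor(b[1:]).view(1, -1))
```

With identical starting weights, any remaining loss difference must come from the update rule or the loss computation, not from initialization.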
My DataLoader:
criterion = nn.MSELoss()
num_epochs = 1000
for epoch in range(num_epochs):
    for x, y in train_data:
        y_pred = model(x)
        loss = criterion(y, y_pred)
        # print(loss)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
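One common source of a constant-factor mismatch is the loss reduction: `nn.MSELoss` averages over the batch by default (`reduction='mean'`), so it only matches the scratch `cost` when both divide by the same number of samples. A small check (my own sketch, with random data standing in for the dataset):

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(1)
y_pred = rng.normal(size=8)
y_true = rng.normal(size=8)

# scratch-style MSE: mean of squared residuals
scratch = np.mean((y_pred - y_true) ** 2)

# PyTorch MSE with the default reduction='mean'
criterion = nn.MSELoss()
torch_loss = criterion(torch.tensor(y_pred), torch.tensor(y_true)).item()

assert abs(scratch - torch_loss) < 1e-9
```

Note that this check is only exact when the PyTorch loop sees the whole dataset as one batch; with mini-batches, the per-batch mean differs from the full-dataset mean in `cost`.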
Can someone tell me why this is happening, or point out any mistakes in my code?