How to fix Scikit-Learn's logistic regression severely overfitting the digit-classification training data
I am performing digit classification with Scikit-Learn's logistic regression. The dataset I am using is Scikit-Learn's load_digits.
Below is a simplified version of my code:
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import learning_curve
from sklearn.datasets import load_digits
digits = load_digits()

model = LogisticRegression(solver='lbfgs', penalty='none', max_iter=100000, multi_class='auto')
model.fit(digits.data, digits.target)
predictions = model.predict(digits.data)

df_cm = pd.DataFrame(confusion_matrix(digits.target, predictions))
ax = sns.heatmap(df_cm, annot=True, cbar=False, cmap='Blues_r', fmt='d', annot_kws={"size": 10})
ax.set_ylim(0, 10)
plt.title("Confusion Matrix")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()

train_size = [0.2, 0.4, 0.6, 0.8, 1]
training_size, training_score, validation_score = learning_curve(
    model, digits.data, digits.target, cv=5, train_sizes=train_size,
    scoring='neg_mean_squared_error')
training_scores_mean = -training_score.mean(axis=1)
validation_score_mean = -validation_score.mean(axis=1)
plt.plot(training_size, validation_score_mean)
plt.plot(training_size, training_scores_mean)
plt.legend(["Validation error", "Training error"])
plt.ylabel("MSE")
plt.xlabel("Training set size")
plt.show()
### EDIT ###
# With L2 regularization
model = LogisticRegression(solver='lbfgs', penalty='l2',  # changing penalty to l2
                           max_iter=100000, multi_class='auto')
model.fit(digits.data, digits.target)
predictions = model.predict(digits.data)

df_cm = pd.DataFrame(confusion_matrix(digits.target, predictions))
ax = sns.heatmap(df_cm, annot=True, cbar=False, cmap='Blues_r', fmt='d', annot_kws={"size": 10})
ax.set_ylim(0, 10)
plt.title("Confusion Matrix with L2 regularization")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()

training_size, training_score, validation_score = learning_curve(
    model, digits.data, digits.target, cv=5, train_sizes=train_size,
    scoring='neg_mean_squared_error')
training_scores_mean = -training_score.mean(axis=1)
validation_score_mean = -validation_score.mean(axis=1)
plt.plot(training_size, validation_score_mean)
plt.plot(training_size, training_scores_mean)
plt.legend(["Validation error", "Training error"])
plt.title("Learning curve with L2 regularization")
plt.ylabel("MSE")
plt.xlabel("Training set size")
plt.show()
# With L2 regularization and best C
from sklearn.model_selection import GridSearchCV

C = {'C': [1e-3, 1e-2, 1e-1, 1, 10]}
model_l2 = GridSearchCV(
    LogisticRegression(random_state=0, solver='lbfgs', multi_class='auto'),
    param_grid=C, scoring='neg_mean_squared_error')
model_l2.fit(digits.data, digits.target)
best_C = model_l2.best_params_.get("C")
print(best_C)

model_reg = LogisticRegression(solver='lbfgs', C=best_C, max_iter=100000, multi_class='auto')
model_reg.fit(digits.data, digits.target)
predictions = model_reg.predict(digits.data)

df_cm = pd.DataFrame(confusion_matrix(digits.target, predictions))
ax = sns.heatmap(df_cm, annot=True, cbar=False, cmap='Blues_r', fmt='d', annot_kws={"size": 10})
ax.set_ylim(0, 10)
plt.title("Confusion Matrix with L2 regularization and best C")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()

training_size, training_score, validation_score = learning_curve(
    model_reg, digits.data, digits.target, cv=5, train_sizes=train_size,
    scoring='neg_mean_squared_error')
training_scores_mean = -training_score.mean(axis=1)
validation_score_mean = -validation_score.mean(axis=1)
plt.plot(training_size, validation_score_mean)
plt.plot(training_size, training_scores_mean)
plt.legend(["Validation error", "Training error"])
plt.title("Learning curve with L2 regularization and best C")
plt.ylabel("MSE")
plt.xlabel("Training set size")
plt.show()
As can be seen from the confusion matrix on the training data, and from the last plot generated with learning_curve, the error on the training set is always 0.
It looks to me like the model is badly overfitting, and I can't understand why. I also tried this with the MNIST dataset, and the same thing happens.
How can I fix this?
--- EDIT ---

Added above the code with L2 regularization, and then with the best value found for the hyperparameter C.

With L2 regularization, the model still overfits the data:

[Learning curve with L2 regularization]

With the best C hyperparameter, the error on the training data is no longer zero, but the algorithm still overfits:

[Learning curve with L2 regularization and best C]

Still don't understand what's happening...
Solution

Use a regularization term (penalty) instead of 'none':

model = LogisticRegression(solver='lbfgs', penalty='l2', max_iter=100000, multi_class='auto')

You find the best value of C by plotting a validation curve.
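A minimal sketch of that C search with scikit-learn's validation_curve (the candidate range np.logspace(-4, 2, 7) and accuracy as the scoring metric are illustrative assumptions, not tuned choices):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import validation_curve

digits = load_digits()

# Candidate regularization strengths (illustrative, not tuned).
param_range = np.logspace(-4, 2, 7)

# Cross-validated train/validation scores for each candidate C.
train_scores, val_scores = validation_curve(
    LogisticRegression(solver='lbfgs', max_iter=10000),
    digits.data, digits.target,
    param_name='C', param_range=param_range,
    cv=5, scoring='accuracy')

# Best C = highest mean validation score across the folds.
best_C = param_range[val_scores.mean(axis=1).argmax()]
print(best_C)
```

Plotting train_scores.mean(axis=1) and val_scores.mean(axis=1) against param_range gives the validation curve itself: the overfitting region is where the training score stays near 1 while the validation score drops.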