How do I build a combined model to find the optimal product discounts?


I need to find the optimal discount for each product (e.g. A, B, C) so that I maximize total sales. I have existing random-forest models for each product that map discount and season to sales. How do I combine these models and feed them to an optimizer to find the optimal discount per product?

Reasons for the model choices:

  1. RF: able to capture the (non-linear) relationship between the predictors and the response (sales_uplift_norm) better than a linear model.
  2. PSO: suggested in many white papers (available on ResearchGate / IEEE), and a package is available in Python (pyswarm, used below).

The input sample data is used to build the models at the product level. A glance at the data:


Idea / steps followed by me:

  1. Build an RF model for each product:

    # pre-processed data
    products_pre_processed_data = {key: pre_process_data(df, key) for key, df in df_basepack_dict.items()}
    # rf models
    products_rf_model = {key: rf_fit(df) for key, df in products_pre_processed_data.items()}

  2. Pass the models to the optimizer:
     • Objective function: maximize sales_uplift_norm (the response variable of the RF models)
     • Constraints:
       • total spend (spend on A + B + C) within budget
       • lower bounds for products (A, B, C): [0.0, 0.0, 0.0]  # discount-percentage lower bounds
       • upper bounds for products (A, B, C): [0.3, 0.4, 0.4]  # discount-percentage upper bounds
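The "pass the models to the optimizer" step above can be sketched as follows. `StubModel` and its linear responses are hypothetical stand-ins for the fitted RF models; they only show how per-product predictions would flow into a single objective that an optimizer can minimize:

```python
class StubModel:
    """Hypothetical stand-in for a fitted RF model; predict() takes feature rows."""
    def __init__(self, slope):
        self.slope = slope

    def predict(self, rows):
        # toy linear response: uplift grows with the discount
        return [self.slope * row["discount_percentage"] for row in rows]

# one (stub) model per product, mirroring products_rf_model
product_models = {"A": StubModel(1.0), "B": StubModel(2.0), "C": StubModel(0.5)}

def objective(x, season="summer"):
    total = 0.0
    for i, product in enumerate(sorted(product_models)):
        row = {"season": season, "discount_percentage": x[i]}
        total += product_models[product].predict([row])[0]
    return -total  # negate: maximize uplift via a minimizer

print(objective([0.1, 0.2, 0.3]))  # ≈ -0.65
```

The key point is that the objective calls each model's `predict` on a feature row built from the candidate discounts `x`, rather than adding the model objects themselves.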

Pseudo / sample code below, since I could not find a way to pass product_models into the optimizer:

import numpy as np
from pyswarm import pso

def obj(x):
    model1 = products_rf_model.get('A')
    model2 = products_rf_model.get('B')
    model3 = products_rf_model.get('C')
    # pseudocode: this is the gap -- I don't know how to turn the models
    # plus the candidate discounts x into predicted sales here
    return -(model1 + model2 + model3) # -ve sign so as to maximize

def con(x):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    # spend budget of 20 (mrp_C was mistyped as spend_C originally)
    return np.sum(units_A*x1*mrp_A + units_B*x2*mrp_B + units_C*x3*mrp_C) - 20

lb = [0.0,0.0,0.0]
ub = [0.3,0.4,0.4]

xopt,fopt = pso(obj,lb,ub,f_ieqcons=con)

Dear SO experts, requesting your guidance (have been struggling to find any for a couple of weeks now) on how to use the PSO optimizer (or another optimizer, if I am not following the right approach) together with the RF models.

Adding the functions used for the models:

import pandas as pd
from sklearn.preprocessing import scale
from sklearn.ensemble import RandomForestRegressor

def pre_process_data(df,product):
    data = df.copy().reset_index()
#     print(data)
    bp = product
    print("----------product: {}----------".format(bp))
    # Pre-processing steps
    print("pre process df.shape {}".format(df.shape))
        #1. Response var transformation
    response = data.sales_uplift_norm # already transformed

        #2. predictor numeric var transformation 
    numeric_vars = ['discount_percentage'] # may include mrp,depth
    df_numeric = data[numeric_vars]
    df_norm = df_numeric.apply(lambda x: scale(x),axis = 0) # center and scale

        #3. char fields dummification
    #select category fields
    cat_cols = data.select_dtypes('category').columns
    #select string fields
    str_to_cat_cols = data.drop(['product'],axis = 1).select_dtypes('object').astype('category').columns
    # combine all categorical fields
    all_cat_cols = [*cat_cols,*str_to_cat_cols]
#     print(all_cat_cols)

    #convert cat to dummies
    df_dummies = pd.get_dummies(data[all_cat_cols])

        #4. combine num and char df together
    df_combined = pd.concat([df_dummies.reset_index(drop=True),df_norm.reset_index(drop=True)],axis=1)
    
    df_combined['sales_uplift_norm'] = response
    df_processed = df_combined.copy()
    print("post process df.shape {}".format(df_processed.shape))
#     print("model fields: {}".format(df_processed.columns))
    return(df_processed)


def rf_fit(df,random_state = 12):
    
    train_features = df.drop('sales_uplift_norm',axis = 1)
    train_labels = df['sales_uplift_norm']
    
    # Random Forest Regressor
    rf = RandomForestRegressor(n_estimators = 500,random_state = random_state,bootstrap = True,oob_score=True)
    # RF model
    rf_fit = rf.fit(train_features,train_labels)

    return(rf_fit)

Edit: updated the dataset to a simplified version.

Solution

You can find a complete solution below!

The fundamental differences from your approach are the following:

  1. Since the random forest models take the season feature as input, an optimal discount must be computed for every season.
  2. Checking pyswarm's documentation, the output of the con function must satisfy con(x) >= 0.0. The correct constraint is therefore 20 - sum(...), and not the other way around. Also, the units and mrp variables were not given; I simply assumed a value of 1 for each, you may want to change those values.
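That sign convention can be checked with a minimal, self-contained sketch (units and mrp here are the placeholder values of 1 assumed above):

```python
# pyswarm treats a candidate x as feasible when f_ieqcons(x) >= 0,
# so the budget cap "spend <= budget" is written as "budget - spend".
budget = 20
units = [1, 1, 1]  # placeholder values, as assumed in the answer
mrp = [1, 1, 1]    # placeholder values, as assumed in the answer

def con(x):
    spend = sum(u * m * xi for u, m, xi in zip(units, mrp, x))
    return budget - spend  # >= 0 means the budget constraint holds

print(con([0.3, 0.4, 0.4]))        # ≈ 18.9 -> feasible
print(con([30.0, 0.0, 0.0]) >= 0)  # False -> infeasible
```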

Other modifications to the original code include:

  1. sklearn preprocessing and Pipeline wrappers to simplify the preprocessing steps.
  2. The optimal parameters are stored in an output .xlsx file.
  3. The maxiter parameter of the PSO has been set to 5 to speed up debugging; you may want to set it to another value (default = 100).

Hence, the code is:

import pandas as pd 
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder,StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor 
from sklearn.base import clone

# ====================== RF TRAINING ======================
# Preprocessing
def build_sample(season,discount_percentage):
    return pd.DataFrame({
        'season': [season],
        'discount_percentage': [discount_percentage]
    })

columns_to_encode = ["season"]
columns_to_scale = ["discount_percentage"]
encoder = OneHotEncoder()
scaler = StandardScaler()
preproc = ColumnTransformer(
    transformers=[
        ("encoder", Pipeline([("OneHotEncoder", encoder)]), columns_to_encode),
        ("scaler", Pipeline([("StandardScaler", scaler)]), columns_to_scale)
    ]
)

# Model
myRFClassifier = RandomForestRegressor(
    n_estimators=500, random_state=12, bootstrap=True, oob_score=True)

pipeline_list = [
    ('preproc', preproc),
    ('clf', myRFClassifier)
]

pipe = Pipeline(pipeline_list)

# Dataset
df_tot = pd.read_excel("so_data.xlsx")
df_dict = {
    product: df_tot[df_tot['product'] == product].drop(columns=['product']) for product in pd.unique(df_tot['product'])
}

# Fit
print("Training ...")
pipe_dict = {
    product: clone(pipe) for product in df_dict.keys()
}

for product,df in df_dict.items():
    X = df.drop(columns=["sales_uplift_norm"])
    y = df["sales_uplift_norm"]
    pipe_dict[product].fit(X,y)

# ====================== OPTIMIZATION ====================== 
from pyswarm import pso
# Parameter of PSO
maxiter = 5

n_product = len(pipe_dict.keys())

# Constraints
budget = 20
units  = [1,1,1]
mrp    = [1,1,1]

lb = [0.0,0.0,0.0]
ub = [0.3,0.4,0.4]

# Must always remain >= 0
def con(x):
    s = 0
    for i in range(n_product):
        s += units[i] * mrp[i] * x[i]

    return budget - s

print("Optimization ...")

# Save optimal discounts for every product and every season
df_opti = pd.DataFrame(data=None,columns=df_tot.columns)
for season in pd.unique(df_tot['season']):

    # Objective function to minimize
    def obj(x):
        s = 0
        for i,product in enumerate(pipe_dict.keys()):
            s += pipe_dict[product].predict(build_sample(season,x[i]))
        
        return -s

    # PSO
    xopt,fopt = pso(obj,lb,ub,f_ieqcons=con,maxiter=maxiter)
    print("Season: {}\t xopt: {}".format(season,xopt))

    # Store result
    df_opti = pd.concat([
        df_opti,
        pd.DataFrame({
            'product': list(pipe_dict.keys()),
            'season': [season] * n_product,
            'discount_percentage': xopt,
            'sales_uplift_norm': [
                pipe_dict[product].predict(build_sample(season,xopt[i]))[0] for i,product in enumerate(pipe_dict.keys())
            ]
        })
    ])

# Save result
df_opti = df_opti.reset_index().drop(columns=['index'])
df_opti.to_excel("so_result.xlsx")
print("Summary")
print(df_opti)

It gives:

Training ...
Optimization ...
Stopping search: maximum iterations reached --> 5
Season: summer   xopt: [0.1941521  0.11233673 0.36548761]
Stopping search: maximum iterations reached --> 5
Season: winter   xopt: [0.18670604 0.37829516 0.21857777]
Stopping search: maximum iterations reached --> 5
Season: monsoon  xopt: [0.14898102 0.39847885 0.18889792]
Summary
  product   season  discount_percentage  sales_uplift_norm
0       A   summer             0.194152           0.175973
1       B   summer             0.112337           0.229735
2       C   summer             0.365488           0.374510
3       A   winter             0.186706          -0.028205
4       B   winter             0.378295           0.266675
5       C   winter             0.218578           0.146012
6       A  monsoon             0.148981           0.199073
7       B  monsoon             0.398479           0.307632
8       C  monsoon             0.188898           0.210134
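A quick sanity check on the printed summer optimum (with the assumed units = mrp = 1) confirms that it respects both the box bounds and the budget constraint:

```python
# Check the summer solution printed above against the box bounds and
# the budget constraint (units and mrp are all 1, as assumed above).
xopt_summer = [0.1941521, 0.11233673, 0.36548761]
lb = [0.0, 0.0, 0.0]
ub = [0.3, 0.4, 0.4]
budget = 20

within_bounds = all(l <= x <= u for l, x, u in zip(lb, xopt_summer, ub))
spend = sum(xopt_summer)  # units[i] * mrp[i] * x[i] with all factors 1
print(within_bounds, spend <= budget)  # True True
```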
