How to convert a model built with TensorFlow to the Keras API?

In my current project I have to modify the official code of this paper, and I am still quite new to TensorFlow and all of its concepts. I use tf.keras, but the actual code is written in plain TensorFlow (the older version 1.7...). I would like to know whether someone can explain how this model works, or has an idea for an equivalent written with a recent version of Keras and TensorFlow (TF 2). The original code is in the repo (ConvNCF.py); a tentative TF2/Keras sketch is included after the listing below.

# prediction model
class ConvNCF:
    def __init__(self,num_users,num_items,args):  # TODO: why are the other arguments useful?
        self.num_items = num_items
        self.num_users = num_users
        self.embedding_size = args.embed_size
        self.lr_embed = args.lr_embed
        self.lr_net = args.lr_net
        #TODO: what are these things?
        self.hidden_size = args.hidden_size  # number of layers, or number of neurons per layer?
        self.nc = eval(args.net_channel)  # number of feature maps per conv layer
        regs = eval(args.regs)
        self.lambda_bilinear = regs[0]
        self.gamma_bilinear = regs[1]
        self.lambda_weight = regs[2]
        self.dns = args.dns
        self.train_auc = args.train_auc
        self.prepared = False

    def _create_placeholders(self):
        with tf.name_scope("input_data"):
            self.user_input = tf.placeholder(tf.int32,shape = [None,1],name = "user_input")
            self.item_input_pos = tf.placeholder(tf.int32,name = "item_input_pos")  # expects shape (None,1), like user_input
            self.item_input_neg = tf.placeholder(tf.int32,name = "item_input_neg")  # expects shape (None,1)
            self.keep_prob = tf.placeholder(tf.float32,name = "keep_prob")  # dropout keep probability fed to tf.nn.dropout
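        # TF2 note: placeholders and feed_dicts are gone in TF2; these inputs
        # would become plain arguments of the model call / training step, and
        # keep_prob would become a Dropout layer's `training` flag.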

    def _conv_weight(self,isz,osz):
        return (weight_variable([2,2,isz,osz]),bias_variable([osz]))  # filter shape: (height,width,in_channels,out_channels)

    def _conv_layer(self,input,P):
        conv = tf.nn.conv2d(input,P[0],strides=[1,2,2,1],padding='SAME')  # 2x2 stride-2 convolution halves the map size
        return tf.nn.relu(conv + P[1])
    
    def _create_variables(self):
        with tf.name_scope("embedding"):
            self.embedding_P = tf.Variable(tf.truncated_normal(shape=[self.num_users,self.embedding_size],mean=0.0,stddev=0.01),name='embedding_P',dtype=tf.float32)  #(users,embedding_size)
            self.embedding_Q = tf.Variable(tf.truncated_normal(shape=[self.num_items,self.embedding_size],mean=0.0,stddev=0.01),name='embedding_Q',dtype=tf.float32)  #(items,embedding_size)

            # there should be 6 channel sizes here, since the 64x64 outer-product map is halved by each stride-2 layer (2^6 = 64)
            iszs = [1] + self.nc[:-1]
            oszs = self.nc
            self.P = []
            for isz,osz in zip(iszs,oszs):
                self.P.append(self._conv_weight(isz,osz))

            self.W = weight_variable([self.nc[-1],1])
            self.b = weight_variable([1])

    def _create_inference(self,item_input):
        with tf.name_scope("inference"):
            # embedding look up
            self.embedding_p = tf.nn.embedding_lookup(self.embedding_P,self.user_input)
            self.embedding_q = tf.nn.embedding_lookup(self.embedding_Q,item_input)

            # outer product of P_u and Q_i
            self.relation = tf.matmul(tf.transpose(self.embedding_p,perm=[0,2,1]),self.embedding_q)
            self.net_input = tf.expand_dims(self.relation,-1)
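            # shape walk-through: embedding_p is (batch,1,64), transposed to
            # (batch,64,1); matmul with embedding_q (batch,1,64) yields the
            # (batch,64,64) outer-product map, and expand_dims adds the
            # channel axis, giving the CNN input (batch,64,64,1)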

            # CNN
            self.layer = []
            input = self.net_input
            for p in self.P:
                self.layer.append(self._conv_layer(input,p))
                input = self.layer[-1]

            # prediction
            self.dropout = tf.nn.dropout(self.layer[-1],self.keep_prob)
            self.output_layer = tf.matmul(tf.reshape(self.dropout,[-1,self.nc[-1]]),self.W) + self.b
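            # layer[-1] has shape (batch,1,1,nc[-1]); the reshape flattens it
            # to (batch,nc[-1]) and W projects it to one score per example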

            return self.embedding_p,self.embedding_q,self.output_layer


    def _regular(self,params):
        res = 0
        for param in params:
            res += tf.reduce_sum(tf.square(param[0])) + tf.reduce_sum(tf.square(param[1]))
        return res

    def _create_loss(self):
        with tf.name_scope("loss"):
            # BPR loss for L(Theta)
            self.p1,self.q1,self.output = self._create_inference(self.item_input_pos)
            self.p2,self.q2,self.output_neg = self._create_inference(self.item_input_neg)
            self.result = self.output - self.output_neg
            self.loss = tf.reduce_sum(tf.log(1 + tf.exp(-self.result)))
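            # note: log(1 + exp(-x)) is softplus(-x), i.e. the standard BPR
            # objective -log(sigmoid(output - output_neg))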

            self.opt_loss = self.loss + self.lambda_bilinear * ( tf.reduce_sum(tf.square(self.p1)) \
                                    + tf.reduce_sum(tf.square(self.q2)) + tf.reduce_sum(tf.square(self.q1)))\
                                    + self.gamma_bilinear * self._regular([(self.W,self.b)]) \
                                    + self.lambda_weight * (self._regular(self.P) + self._regular([(self.W,self.b)]))

    # used the first time, when the embeddings are pretrained but the network
    # is randomly initialized; otherwise the parameters may become NaN.
    def _create_pre_optimizer(self):
        self.pre_opt = tf.train.AdagradOptimizer(learning_rate=0.01).minimize(self.loss)

    def _create_optimizer(self):
        # separate optimizers: one learning rate for the embeddings, another for the network weights
        var_list1 = [self.embedding_P,self.embedding_Q]
        #[self.W1,self.W2,self.W3,self.W4,self.b1,self.b2,self.b3,self.b4,self.P1,self.P2,self.P3]
        var_list2 = list(set(tf.trainable_variables()) - set(var_list1))
        opt1 = tf.train.AdagradOptimizer(self.lr_embed)
        opt2 = tf.train.AdagradOptimizer(self.lr_net)
        grads = tf.gradients(self.opt_loss,var_list1 + var_list2)
        grads1 = grads[:len(var_list1)]
        grads2 = grads[len(var_list1):]
        train_op1 = opt1.apply_gradients(list(zip(grads1,var_list1)))
        train_op2 = opt2.apply_gradients(list(zip(grads2,var_list2)))
        self.optimizer = tf.group(train_op1,train_op2)
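        # TF2 note: this corresponds to two tf.keras.optimizers.Adagrad
        # instances, each applying gradients to its own variable list
        # (see the sketch after the class)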


    def build_graph(self):
        self._create_placeholders()
        self._create_variables()
        self._create_loss()
        self._create_pre_optimizer()
        self._create_optimizer()

    def load_parameter_MF(self,sess,path):
        ps = np.load(path)
        ap = tf.assign(self.embedding_P,ps[0])
        aq = tf.assign(self.embedding_Q,ps[1])
        #ah = tf.assign(self.h,np.diag(ps[2][:,0]).reshape(4096,1))
        sess.run([ap,aq])
        print("parameter loaded")

    def load_parameter_logloss(self,sess,path):
        ps = np.load(path).tolist()
        ap = tf.assign(self.embedding_P,ps['P'])
        aq = tf.assign(self.embedding_Q,ps['Q'])
        sess.run([ap,aq])
        print("logloss parameter loaded")

    def save_net_parameters(self,path):
        pass

    def get_optimizer(self):
        if self.prepared:  # regular optimization
            return self.optimizer
        else:
            # run a first optimization pass without regularization when the
            # network is optimized for the first time; otherwise the
            # parameters may become NaN.
            return self.pre_opt
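For reference, below is a minimal TF2/Keras sketch of the same architecture. It is an assumption-laden translation, not the authors' official port: the class name ConvNCFKeras, the default net_channel, the learning rates, and the toy usage values are all illustrative, and the L2 regularization terms and the pretraining ("pre_opt") pass are omitted for brevity. Note that Keras Dropout takes a drop rate, i.e. 1 - keep_prob.

import tensorflow as tf

class ConvNCFKeras(tf.keras.Model):
    def __init__(self, num_users, num_items, embedding_size=64,
                 net_channel=(32, 32, 32, 32, 32, 32), dropout_rate=0.0):
        super().__init__()
        init = tf.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.01)
        self.embedding_P = tf.keras.layers.Embedding(
            num_users, embedding_size, embeddings_initializer=init)
        self.embedding_Q = tf.keras.layers.Embedding(
            num_items, embedding_size, embeddings_initializer=init)
        # six 2x2 stride-2 convs shrink the 64x64 interaction map to 1x1
        self.convs = [tf.keras.layers.Conv2D(c, kernel_size=2, strides=2,
                                             padding='same', activation='relu')
                      for c in net_channel]
        self.dropout = tf.keras.layers.Dropout(dropout_rate)  # rate = 1 - keep_prob
        self.out = tf.keras.layers.Dense(1)  # plays the role of W and b

    def call(self, inputs, training=False):
        user_ids, item_ids = inputs              # each of shape (batch,)
        p = self.embedding_P(user_ids)           # (batch, embedding_size)
        q = self.embedding_Q(item_ids)           # (batch, embedding_size)
        x = tf.einsum('be,bf->bef', p, q)        # outer product: (batch, 64, 64)
        x = x[..., tf.newaxis]                   # add channel axis for the CNN
        for conv in self.convs:
            x = conv(x)                          # spatial size halves each layer
        x = self.dropout(x, training=training)
        x = tf.reshape(x, (-1, x.shape[-1]))     # (batch, last_channel_count)
        return self.out(x)                       # predicted preference score

The separated-learning-rate BPR training step could then look like this (again a sketch; the Adagrad learning rates stand in for args.lr_embed and args.lr_net):

opt_embed = tf.keras.optimizers.Adagrad(0.05)  # for the embeddings
opt_net = tf.keras.optimizers.Adagrad(0.05)    # for the conv net and output layer

@tf.function
def train_step(model, users, pos_items, neg_items):
    with tf.GradientTape() as tape:
        pos = model((users, pos_items), training=True)
        neg = model((users, neg_items), training=True)
        # BPR loss: sum log(1 + exp(-(pos - neg))) == sum softplus(neg - pos)
        loss = tf.reduce_sum(tf.math.softplus(neg - pos))
        # (the paper's L2 regularization terms would be added here)
    embed_vars = (model.embedding_P.trainable_variables
                  + model.embedding_Q.trainable_variables)
    embed_refs = {v.ref() for v in embed_vars}
    net_vars = [v for v in model.trainable_variables if v.ref() not in embed_refs]
    grads = tape.gradient(loss, embed_vars + net_vars)
    opt_embed.apply_gradients(zip(grads[:len(embed_vars)], embed_vars))
    opt_net.apply_gradients(zip(grads[len(embed_vars):], net_vars))
    return loss

# toy usage with made-up sizes and ids:
model = ConvNCFKeras(num_users=1000, num_items=2000)
model((tf.constant([0]), tf.constant([0])))  # build variables before tracing
loss = train_step(model, tf.constant([0, 1]), tf.constant([5, 7]), tf.constant([9, 3]))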
