
TensorFlow automatically adds multiple layers

How do I stop TensorFlow from automatically adding multiple layers?

I'm using Keras Tuner with RandomSearch, plus TensorFlow, but I don't think RandomSearch is the problem: for some reason, multiple layers are being added automatically.


import kerastuner
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense, BatchNormalization, Lambda

latenteVariable = 24  ##### IMPORTANT: global latent dimension
epsilon_std = 1.0     # assumed value; not shown in the original snippet


class MyTuner(kerastuner.tuners.RandomSearch):

  def run_trial(self, trial, *args, **kwargs):
    ...  # body omitted in the question


# Reparameterization trick: z = z_mean + exp(z_log_var / 2) * epsilon
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latenteVariable),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon


def build_model(hp):
  ...
  h = Dense(units=hp.Int('units4', min_value=48, max_value=64, step=8), activation=activation)(h)
  h = BatchNormalization(name="encoder_norm_4")(h)
  schicht4 = hp.get('units4')

  z_mean = Dense(latenteVariable)(h)
  z_log_var = Dense(latenteVariable)(h)
  z = Lambda(sampling, output_shape=(latenteVariable,))([z_mean, z_log_var])  ###### the global variable is used here

  b = Dense(units=schicht4, activation=activation)(z)
  b = BatchNormalization(name="decoder_norm_1")(b)

Output:
__________________________________________________________________________________________________
encoder_norm_4 (Batchnormalizat (None,48)           192         dense_3[0][0]
__________________________________________________________________________________________________
dense_4 (Dense)                 (None,24)           1176        encoder_norm_4[0][0]
__________________________________________________________________________________________________
dense_5 (Dense)                 (None,24)           1176        encoder_norm_4[0][0]
__________________________________________________________________________________________________
lambda (Lambda)                 (None,24)           0           dense_4[0][0]
                                                                 dense_5[0][0]
__________________________________________________________________________________________________
dense_6 (Dense)                 (None,48)           1200        lambda[0][0]
__________________________________________________________________________________________________

So above, latenteVariable is a global variable.

Below, latenteVariable is a local variable.

def sampling(args):
    z_mean, z_log_var, latenteVariable = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latenteVariable),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon


def build_model(hp):

  h = Dense(units=hp.Int('units4', min_value=48, max_value=64, step=8), activation=activation)(h)
  h = BatchNormalization(name="encoder_norm_4")(h)
  schicht4 = hp.get('units4')

  latenteVariable = 24  ########## local variable
  z_mean = Dense(latenteVariable)(h)
  z_log_var = Dense(latenteVariable)(h)
  z = Lambda(sampling, output_shape=(latenteVariable,))([z_mean, z_log_var, latenteVariable])

  b = Dense(units=schicht4, activation=activation)(z)
  b = BatchNormalization(name="decoder_norm_1")(b)


I get the result:


encoder_norm_4 (Batchnormalizat (None,64)           256         dense_3[0][0]
__________________________________________________________________________________________________
dense_4 (Dense)                 (None,24)           1560        encoder_norm_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_Shape (TensorFlowOp [(2,)]               0           dense_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_strided_slice (Tens [()]                 0           tf_op_layer_Shape[0][0]
__________________________________________________________________________________________________
tf_op_layer_shape_1 (TensorFlow [(2,)]               0           tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
dense_5 (Dense)                 (None,24)           1560        encoder_norm_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_RandomStandardnorma [(None,24)]         0           tf_op_layer_shape_1[0][0]
__________________________________________________________________________________________________
tf_op_layer_RealDiv (TensorFlow [(None,24)]         0           dense_5[0][0]
__________________________________________________________________________________________________
tf_op_layer_Mul (TensorFlowOpLa [(None,24)]         0           tf_op_layer_RandomStandardnormal[
__________________________________________________________________________________________________
tf_op_layer_Exp (TensorFlowOpLa [(None,24)]         0           tf_op_layer_RealDiv[0][0]
__________________________________________________________________________________________________
tf_op_layer_Add (TensorFlowOpLa [(None,24)]         0           tf_op_layer_Mul[0][0]
__________________________________________________________________________________________________
tf_op_layer_Mul_1 (TensorFlowOp [(None,24)]         0           tf_op_layer_Exp[0][0]
                                                                 tf_op_layer_Add[0][0]
__________________________________________________________________________________________________
tf_op_layer_AddV2 (TensorFlowOp [(None,24)]         0           dense_4[0][0]
                                                                 tf_op_layer_Mul_1[0][0]
__________________________________________________________________________________________________
dense_6 (Dense)                 (None,64)           1600        tf_op_layer_AddV2[0][0]

So in the first example I get three 24-wide layers. In the second example I get eight 24-wide layers. How can I use a local variable without automatically getting eight layers instead of three?

Thanks
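
For what it's worth, here is a minimal sketch of one way to keep the latent size local, assuming the only goal is to get that integer into sampling: restrict the Lambda's input list to actual tensors and pass the integer through the layer's arguments keyword (a standard parameter of keras.layers.Lambda), so the whole computation stays wrapped in a single lambda layer instead of being decomposed into individual TensorFlowOpLayer ops. The modified sampling signature below is an illustration, not the asker's code, and epsilon_std is assumed to be defined as above.

# Sketch: pass the latent size as a keyword argument, not as an input tensor.
def sampling(args, latenteVariable):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latenteVariable),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon

def build_model(hp):
  ...
  latenteVariable = 24  # still a local variable
  z_mean = Dense(latenteVariable)(h)
  z_log_var = Dense(latenteVariable)(h)
  # `arguments` forwards extra keyword arguments to the wrapped function,
  # so only real tensors flow through the layer's inputs.
  z = Lambda(sampling,
             output_shape=(latenteVariable,),
             arguments={'latenteVariable': latenteVariable})([z_mean, z_log_var])
  ...

Alternatively, defining sampling as a nested function inside build_model would let it capture the local latenteVariable as a closure, with the same single-lambda result.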
