Machine Learning – TensorFlow – Introducing L2 regularization and dropout into the network. Does it make any sense?

I am playing with an ANN which is part of the Udacity DeepLearning course.

I have successfully built and trained a network, and introduced L2 regularization on all weights and biases. Now I am trying out dropout on the hidden layer to improve generalization. I wonder whether it makes sense to introduce both L2 regularization and dropout on the same hidden layer, and if so, how to do it properly.

During dropout we literally switch off half of the activations of the hidden layer and double the amount output by the rest of the neurons. While using L2 we compute the L2 norm over all the hidden weights. But I am not sure how to compute L2 when we also use dropout. We switch off some of the activations, so shouldn't we remove the weights which are now "not used" from the L2 calculation? Any references on this matter would be useful; I have not found any information.
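To make the mechanics concrete, here is a minimal numpy sketch of inverted dropout, which is the scheme tf.nn.dropout implements: the surviving activations are rescaled by 1/keep_prob (with keep_prob = 0.5 that is exactly the doubling described above), while the weight matrices themselves are never modified.

import numpy as np

rng = np.random.default_rng(0)
activations = rng.random(4)   # hidden-layer outputs for one example
keep_prob = 0.5

# each activation survives independently with probability keep_prob
mask = rng.random(4) < keep_prob
# inverted dropout: zero the dropped units and rescale the survivors by
# 1/keep_prob so the expected value of each unit stays the same
dropped_out = np.where(mask, activations / keep_prob, 0.0)

# note that dropout acts on activations, not on the weight matrix, so an
# L2 penalty is still computed over all the weights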

Just in case you are interested, my code for the ANN with L2 regularization is below:

#for NeuralNetwork model code is below
#We will use SGD for training to save our time. Code is from Assignment 2
#beta is the new parameter - controls level of regularization. Default is 0.01
#but feel free to play with it
#notice, we introduce L2 for both biases and weights of all layers

beta = 0.01

#building tensorflow graph
graph = tf.Graph()
with graph.as_default():
  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  #now let's build our new hidden layer
  #that's how many hidden neurons we want
  num_hidden_neurons = 1024
  #its weights
  hidden_weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_hidden_neurons]))
  hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons]))

  #now the layer itself. It multiplies data by weights, adds biases
  #and takes ReLU over result
  hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases)

  #time to go for output linear layer
  #out weights connect hidden neurons to output labels
  #biases are added to output labels  
  out_weights = tf.Variable(
    tf.truncated_normal([num_hidden_neurons, num_labels]))

  out_biases = tf.Variable(tf.zeros([num_labels]))

  #compute output
  out_layer = tf.matmul(hidden_layer, out_weights) + out_biases
  #our real output is a softmax of prior result
  #and we also compute its cross-entropy to get our loss
  #Notice - we introduce our L2 here
  loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    out_layer, tf_train_labels) +
    beta*tf.nn.l2_loss(hidden_weights) +
    beta*tf.nn.l2_loss(hidden_biases) +
    beta*tf.nn.l2_loss(out_weights) +
    beta*tf.nn.l2_loss(out_biases)))
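  #note: tf.nn.l2_loss(t) returns sum(t ** 2) / 2, so each term above adds
  #beta times half of the squared L2 norm of the corresponding tensor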

  #now we just minimize this loss to actually train the network
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  #nice, now let's calculate the predictions on each dataset for evaluating
  #the performance so far
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(out_layer)
  valid_relu = tf.nn.relu(tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases)
  valid_prediction = tf.nn.softmax(tf.matmul(valid_relu, out_weights) + out_biases)

  test_relu = tf.nn.relu(tf.matmul(tf_test_dataset, hidden_weights) + hidden_biases)
  test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases)



#now is the actual training on the ANN we built
#we will run it for some number of steps and evaluate the progress after 
#every 500 steps

#number of steps we will train our ANN
num_steps = 3001

#actual training
with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step,l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions,batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(),valid_labels))
      print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(),test_labels))
OK, after some additional effort I managed to solve it and introduce both L2 and dropout into my network; the code is below. I got a slight improvement over the same network without dropout (with L2 in place). I am still not sure whether it is really worth the effort to introduce both of them, L2 and dropout, but at least it works and slightly improves the results.
#ANN with introduced dropout
#This time we still use the L2 but restrict training dataset
#to be extremely small

#get just the first 500 examples, so that our ANN can memorize the whole dataset
train_dataset_2 = train_dataset[:500, :]
train_labels_2 = train_labels[:500]

#batch size for SGD and beta parameter for L2 loss
batch_size = 128
beta = 0.001

#that's how many hidden neurons we want
num_hidden_neurons = 1024

#building tensorflow graph
graph = tf.Graph()
with graph.as_default():
  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  #now let's build our new hidden layer
  #its weights
  hidden_weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_hidden_neurons]))
  hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons]))

  #now the layer itself. It multiplies data by weights, adds biases
  #and takes ReLU over result
  hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases)

  #add dropout on hidden layer
  #we pick the probability of keeping each activation
  #and perform the switch-off of the remaining activations
  keep_prob = tf.placeholder("float")
  hidden_layer_drop = tf.nn.dropout(hidden_layer, keep_prob)
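  #note: tf.nn.dropout also rescales the kept activations by 1/keep_prob,
  #so the expected magnitude of the layer output is preserved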

  #time to go for output linear layer
  #out weights connect hidden neurons to output labels
  #biases are added to output labels  
  out_weights = tf.Variable(
    tf.truncated_normal([num_hidden_neurons, num_labels]))

  out_biases = tf.Variable(tf.zeros([num_labels]))

  #compute output
  #notice that upon training we use the switched-off activations,
  #i.e. the variant of hidden_layer with the dropout active
  out_layer = tf.matmul(hidden_layer_drop, out_weights) + out_biases
  #our real output is a softmax of the prior result, and we compute its
  #cross-entropy to get our loss. Notice - we keep our L2 terms in place
  loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    out_layer, tf_train_labels) +
    beta*tf.nn.l2_loss(hidden_weights) +
    beta*tf.nn.l2_loss(hidden_biases) +
    beta*tf.nn.l2_loss(out_weights) +
    beta*tf.nn.l2_loss(out_biases)))

  #now we just minimize this loss to actually train the network
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  # Predictions for the training, validation, and test data.
  #note that validation and test reuse hidden_layer, i.e. they bypass dropout
  train_prediction = tf.nn.softmax(out_layer)
  valid_relu = tf.nn.relu(tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases)
  valid_prediction = tf.nn.softmax(tf.matmul(valid_relu, out_weights) + out_biases)

  test_relu = tf.nn.relu(tf.matmul(tf_test_dataset, hidden_weights) + hidden_biases)
  test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases)

#number of steps we will train our ANN
num_steps = 3001

#actual training
with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels_2.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset_2[offset:(offset + batch_size), :]
    batch_labels = train_labels_2[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch,
    # including keep_prob for the dropout layer
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob : 0.5}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
      print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
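One caveat worth spelling out: the validation and test predictions above are wired to hidden_layer, i.e. they bypass the dropout node entirely, which is why keep_prob only appears in the training feed_dict. If evaluation were instead routed through hidden_layer_drop, keep_prob would have to be fed as 1.0 at evaluation time so that nothing is dropped. A hypothetical feed for that setup (assuming a graph whose shared path includes the dropout node) would look like:

#hypothetical: evaluation feed that disables dropout on a shared graph path
eval_feed = {tf_train_dataset : batch_data, keep_prob : 1.0}

With keep_prob = 1.0, tf.nn.dropout keeps every activation and applies no rescaling, so this reduces to the plain forward pass.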
