I am playing with an ANN which is part of the Udacity DeepLearning course.
I have successfully built and trained a network and introduced L2 regularization on all weights and biases. Right now I am trying out dropout on the hidden layer in order to improve generalization. I wonder, does it make sense to both introduce L2 regularization for the hidden layer and apply dropout on that same layer? If so, how should it be done?
During dropout we literally switch off half of the activations of the hidden layer and double the output of the remaining neurons. While using L2 we compute the L2 norm over all the hidden weights. But I am not sure how to compute L2 in case we use dropout. We switch off some activations, so shouldn't we now remove the weights which are "not used" from the L2 computation? Any references on this matter would be useful; I haven't found any information so far.
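To make the mechanics concrete, here is a tiny standalone sketch (made-up toy input, not part of my network) of how I understand tf.nn.dropout: it zeroes each activation with probability 1 - keep_prob and scales the survivors by 1/keep_prob, while the weight tensors themselves stay untouched, so tf.nn.l2_loss would still see the full weight matrix:
#toy illustration only - dropout masks activations, the weights are untouched
import numpy as np
import tensorflow as tf

demo_activations = tf.constant(np.ones((1, 4), dtype=np.float32))
#with keep_prob = 0.5 roughly half the entries become 0
#and the surviving entries are scaled by 1/0.5 = 2
demo_dropped = tf.nn.dropout(demo_activations, 0.5)
with tf.Session() as demo_session:
    print(demo_session.run(demo_dropped))  #e.g. [[ 2.  0.  2.  0.]]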
Just in case you are interested, my code for the ANN with L2 regularization is below:
#for NeuralNetwork model code is below
#We will use SGD for training to save our time. Code is from Assignment 2
#beta is the new parameter - controls level of regularization. Default is 0.01
#but feel free to play with it
#notice,we introduce L2 for both biases and weights of all layers
beta = 0.01
#building tensorflow graph
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data,we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
#Now let's build our new hidden layer
#that's how many hidden neurons we want
num_hidden_neurons = 1024
#its weights
hidden_weights = tf.Variable(
tf.truncated_normal([image_size * image_size,num_hidden_neurons]))
hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons]))
#Now the layer itself. It multiplies data by weights,adds biases
#and takes ReLU over result
hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset,hidden_weights) + hidden_biases)
#time to go for output linear layer
#out weights connect hidden neurons to output labels
#biases are added to output labels
out_weights = tf.Variable(
tf.truncated_normal([num_hidden_neurons,num_labels]))
out_biases = tf.Variable(tf.zeros([num_labels]))
#compute output
out_layer = tf.matmul(hidden_layer,out_weights) + out_biases
#our real output is a softmax of prior result
#and we also compute its cross-entropy to get our loss
#Notice - we introduce our L2 here
loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
out_layer,tf_train_labels) +
beta*tf.nn.l2_loss(hidden_weights) +
beta*tf.nn.l2_loss(hidden_biases) +
beta*tf.nn.l2_loss(out_weights) +
beta*tf.nn.l2_loss(out_biases)))
#Now we just minimize this loss to actually train the network
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
#nice, now let's calculate the predictions on each dataset for evaluating the
#performance so far
# Predictions for the training,validation,and test data.
train_prediction = tf.nn.softmax(out_layer)
valid_relu = tf.nn.relu( tf.matmul(tf_valid_dataset,hidden_weights) + hidden_biases)
valid_prediction = tf.nn.softmax( tf.matmul(valid_relu,out_weights) + out_biases)
test_relu = tf.nn.relu( tf.matmul( tf_test_dataset,hidden_weights) + hidden_biases)
test_prediction = tf.nn.softmax(tf.matmul(test_relu,out_weights) + out_biases)
#Now is the actual training on the ANN we built
#we will run it for some number of steps and evaluate the progress after
#every 500 steps
#number of steps we will train our ANN
num_steps = 3001
#actual training
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size),:]
batch_labels = train_labels[offset:(offset + batch_size),:]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step,l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions,batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(),valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(),test_labels))
OK, after some additional effort I managed to solve it and introduce both L2 and dropout into my network; the code is below. I got a slight improvement over the same network without dropout (with L2 already in place). I am still not sure whether it is really worth the effort to introduce both of them, L2 and dropout, but at least it works and slightly improves the results.
#ANN with introduced dropout
#This time we still use the L2 but restrict training dataset
#to be extremely small
#get just first 500 of examples,so that our ANN can memorize whole dataset
train_dataset_2 = train_dataset[:500,:]
train_labels_2 = train_labels[:500]
#batch size for SGD and beta parameter for L2 loss
batch_size = 128
beta = 0.001
#that's how many hidden neurons we want
num_hidden_neurons = 1024
#building tensorflow graph
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
#Now let's build our new hidden layer
#its weights
hidden_weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_neurons]))
hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons]))
#Now the layer itself. It multiplies data by weights, adds biases
#and takes ReLU over result
hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases)
#add dropout on hidden layer
#we pick the probability of switching off the activations
#and perform the switch off of the activations
keep_prob = tf.placeholder("float")
hidden_layer_drop = tf.nn.dropout(hidden_layer,keep_prob)
#time to go for output linear layer
#out weights connect hidden neurons to output labels
#biases are added to output labels
out_weights = tf.Variable(
tf.truncated_normal([num_hidden_neurons,num_labels]))
out_biases = tf.Variable(tf.zeros([num_labels]))
#compute output
#notice that upon training we use the switched off activations
#i.e. the variation of hidden_layer with the dropout active
out_layer = tf.matmul(hidden_layer_drop, out_weights) + out_biases
#our real output is a softmax of the prior result
#and we also compute its cross-entropy to get our loss
#Notice - we introduce our L2 here, still over the full weight and bias tensors
loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
out_layer, tf_train_labels) +
beta*tf.nn.l2_loss(hidden_weights) +
beta*tf.nn.l2_loss(hidden_biases) +
beta*tf.nn.l2_loss(out_weights) +
beta*tf.nn.l2_loss(out_biases)))
#Now we just minimize this loss to actually train the network
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
#note that validation and test go through the hidden layer without dropout
train_prediction = tf.nn.softmax(out_layer)
valid_relu = tf.nn.relu(tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases)
valid_prediction = tf.nn.softmax(tf.matmul(valid_relu, out_weights) + out_biases)
test_relu = tf.nn.relu(tf.matmul(tf_test_dataset, hidden_weights) + hidden_biases)
test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases)
#Now is the actual training on the ANN we built
#we will run it for some number of steps and evaluate the progress after
#every 500 steps
#number of steps we will train our ANN
num_steps = 3001
#actual training
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels_2.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset_2[offset:(offset + batch_size),:]
batch_labels = train_labels_2[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch;
# this time we also feed keep_prob = 0.5 to switch off half of the hidden activations
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob : 0.5}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
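As a side note, the four L2 terms could also be collected in one place. This is just an equivalent sketch (using the variables defined in the graph above; regularized_params and l2_penalty are names I made up here), not a change in behaviour:
#sketch: the same L2 penalty, accumulated over a list of parameters
regularized_params = [hidden_weights, hidden_biases, out_weights, out_biases]
l2_penalty = beta * tf.add_n([tf.nn.l2_loss(p) for p in regularized_params])
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(out_layer, tf_train_labels)) + l2_penalty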