A Guide to Optimizing Neural Network Models with TensorFlow

1. Before We Begin

MNIST, the handwritten-digit set, is a classic classification dataset. In my previous MNIST article I used the Keras framework to build a fully connected neural network (a DNN); it was a very simple model, and it reached 97% accuracy. That article focused on dissecting the MNIST dataset itself, e.g. rendering the 28×28-pixel images and pairing digits with their cards, rather than on the process of building the network. If you are not yet familiar with MNIST, I recommend starting there.


MNIST samples are two-dimensional images, and the best-suited classifier for this kind of data is a CNN (convolutional neural network). A standard NN looks somewhat underpowered here: with such a simple structure, it only reaches about 92% accuracy. Still, before writing up convolutional networks, I want to walk through building a standard NN once more, and more importantly, show how to optimize it, pushing accuracy from 92% toward 99% and bringing the plain NN as close as possible to a CNN's performance.

In the spring semester of my freshman year, stuck at home during the pandemic, I took an introduction to neural networks. It focused on the stability of differential equations and on variants of the various neural network models, so I ended up writing my final paper as a mathematical analysis of neural network optimization design. I had barely touched neural network code back then, but in hindsight: principles come first, and code is built on principles!

2. A Standard, Simple Neural Network

# Imports
import tensorflow as tf
# TF 1.x-era MNIST loader (deprecated in later TensorFlow versions)
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
import numpy as np
# Load the dataset
mnist = read_data_sets("MNIST_data",one_hot=True)
# Size of each mini-batch
batch_size = 100
# Number of batches per epoch
n_batch = mnist.train.num_examples // batch_size
# Define two placeholders
x = tf.placeholder(tf.float32,[None,784]) # each 28x28 image flattened into 784 columns
y = tf.placeholder(tf.float32,[None,10]) # 10 one-hot label columns
# Build a simple network (no hidden layer)
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10])) # tf.zeros fills the tensor with zeros
# Raw scores (logits); matmul is the matrix product
logits = tf.matmul(x,W)+b
# softmax squashes the logits into probabilities in [0,1]
prediction = tf.nn.softmax(logits)
# Cross-entropy loss. Note it must be fed the raw logits, not the softmax
# output: the op applies softmax internally, so passing prediction here
# would apply softmax twice and weaken the gradients.
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y,logits=logits))
# Plain gradient-descent optimizer, learning rate 0.2
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
# Variable initializer
init = tf.global_variables_initializer()
# Store the results in a boolean list; tf.equal tests element-wise equality
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1))
# argmax returns the index of the largest value in a 1-D tensor
# Accuracy: cast the booleans to floats and average them
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
with tf.Session() as sess:
    sess.run(init) # initialize the model
    # Train for 51 epochs
    for epoch in range(51):
        # One optimization step per batch
        for batch in range(n_batch):
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys})
        # Model accuracy on the test set
        acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels})
        print("Epoch "+str(epoch)+",Testing Accuracy "+str(acc))
Epoch 0,Testing Accuracy 0.824
Epoch 1,Testing Accuracy 0.8929
Epoch 2,Testing Accuracy 0.9006
Epoch 3,Testing Accuracy 0.9061
Epoch 4,Testing Accuracy 0.9086
Epoch 5,Testing Accuracy 0.9103
Epoch 6,Testing Accuracy 0.9115
Epoch 7,Testing Accuracy 0.9141
Epoch 8,Testing Accuracy 0.9156
Epoch 9,Testing Accuracy 0.916
Epoch 10,Testing Accuracy 0.9179
Epoch 11,Testing Accuracy 0.918
Epoch 12,Testing Accuracy 0.9186
Epoch 13,Testing Accuracy 0.9197
Epoch 14,Testing Accuracy 0.9203
Epoch 15,Testing Accuracy 0.9201
Epoch 16,Testing Accuracy 0.9204
Epoch 17,Testing Accuracy 0.9205
Epoch 18,Testing Accuracy 0.9215
Epoch 19,Testing Accuracy 0.9212
Epoch 20,Testing Accuracy 0.922
Epoch 21,Testing Accuracy 0.9214
Epoch 22,Testing Accuracy 0.9213
Epoch 23,Testing Accuracy 0.9225
Epoch 24,Testing Accuracy 0.9222
Epoch 25,Testing Accuracy 0.9234
Epoch 26,Testing Accuracy 0.9227
Epoch 27,Testing Accuracy 0.9235
Epoch 28,Testing Accuracy 0.9246
Epoch 29,Testing Accuracy 0.9229
Epoch 30,Testing Accuracy 0.9249
Epoch 31,Testing Accuracy 0.9233
Epoch 32,Testing Accuracy 0.9243
Epoch 33,Testing Accuracy 0.9238
Epoch 34,Testing Accuracy 0.9248
Epoch 35,Testing Accuracy 0.9247
Epoch 36,Testing Accuracy 0.9247
Epoch 37,Testing Accuracy 0.9261
Epoch 38,Testing Accuracy 0.925
Epoch 39,Testing Accuracy 0.9255
Epoch 40,Testing Accuracy 0.926
Epoch 41,Testing Accuracy 0.9258
Epoch 42,Testing Accuracy 0.9264
Epoch 43,Testing Accuracy 0.9266
Epoch 44,Testing Accuracy 0.9266
Epoch 45,Testing Accuracy 0.9263
Epoch 46,Testing Accuracy 0.9265
Epoch 47,Testing Accuracy 0.927
Epoch 48,Testing Accuracy 0.9271
Epoch 49,Testing Accuracy 0.9265
Epoch 50,Testing Accuracy 0.928

Characteristics of this baseline:

  • Uses the most basic optimizer, plain gradient descent
  • No hidden layers
  • Fixed learning rate
  • No protection against overfitting

3. Optimizing the Neural Network Model

The optimization targets the following aspects:

  • An adjustable learning rate

A learning rate that is too large makes the result oscillate without settling; one that is too small makes convergence slow. The better choice is a schedule: start with a larger learning rate to reach a good solution quickly, then lower it as the iterations continue so training converges smoothly in the later stages (a sketch follows).
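Besides reassigning a variable by hand (as the full example later in this post does with tf.assign), TF 1.x provides tf.train.exponential_decay for the same geometric schedule. A minimal sketch, assuming an illustrative batch count of 550 (55000 training images / batch size 100); the 0.95 decay rate mirrors the one used later:

import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
n_batch = 550  # illustrative: 55000 training images / batch size 100
# Start at 0.001 and multiply by 0.95 once every n_batch steps (i.e. once
# per epoch); staircase=True decays in discrete jumps rather than smoothly.
lr = tf.train.exponential_decay(learning_rate=0.001,
                                global_step=global_step,
                                decay_steps=n_batch,
                                decay_rate=0.95,
                                staircase=True)
# Passing global_step to minimize() makes the optimizer increment it:
# train_step = tf.train.AdamOptimizer(lr).minimize(loss, global_step=global_step)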

  • A better optimizer

The Adam optimizer keeps an estimate of the gradient's first moment (its mean) and second moment (its uncentered variance) and combines the two to compute the update step, as the sketch below illustrates.
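A minimal NumPy sketch of a single Adam update in its standard formulation (illustrative only, not TensorFlow's internal implementation; beta1=0.9 and beta2=0.999 are Adam's usual defaults):

import numpy as np

def adam_step(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g      # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * g * g  # second moment: running uncentered variance
    m_hat = m / (1 - beta1 ** t)         # bias-correct the zero-initialized moments
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # adaptive per-parameter step
    return w, m, v

# Usage: t starts at 1 and increments every step
w, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
g = np.array([0.1, -0.2, 0.3])  # a gradient from somewhere
w, m, v = adam_step(w, g, m, v, t=1)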
  • Dropout to prevent overfitting

Define a keep probability keep_prob and randomly drop neurons during training; this keeps the network from relying too heavily on any particular unit and so curbs overfitting (a sketch of the mechanism follows).
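As a rough illustration of what tf.nn.dropout does (inverted dropout: zero out units at random and rescale the survivors so the expected activation is unchanged; this NumPy sketch is illustrative, not TensorFlow's actual code):

import numpy as np

def dropout(activations, keep_prob):
    # Bernoulli mask: each unit survives with probability keep_prob
    mask = np.random.rand(*activations.shape) < keep_prob
    # Dividing by keep_prob preserves the expected activation, which is
    # why no rescaling is needed at test time (just feed keep_prob=1.0)
    return activations * mask / keep_prob

h = np.random.randn(4, 5)
print(dropout(h, keep_prob=0.7))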

  • More network layers

Choose a suitable depth: too few layers gives poor results, too many invites overfitting.

# Imports
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
import numpy as np
# Dataset
mnist = read_data_sets("MNIST_data",one_hot=True)
batch_size = 100
n_batch = mnist.train.num_examples // batch_size
# Placeholders
x = tf.placeholder(tf.float32,[None,784])
y = tf.placeholder(tf.float32,[None,10])
# Keep probability for dropout, and a learning-rate variable we can reassign
keep_prob = tf.placeholder(tf.float32)
lr = tf.Variable(0.001,dtype=tf.float32)
# First hidden layer: 1000 neurons
w1 = tf.Variable(tf.truncated_normal([784,1000],stddev=0.1))
# truncated_normal draws normally distributed values and redraws any sample
# more than two standard deviations from the mean; stddev here is 0.1
b1 = tf.Variable(tf.zeros([1000])+0.1)
l1 = tf.nn.tanh(tf.matmul(x,w1)+b1)
l1_drop = tf.nn.dropout(l1,keep_prob)
# Second hidden layer: 500 neurons
w2 = tf.Variable(tf.truncated_normal([1000,500],stddev=0.1))
b2 = tf.Variable(tf.zeros([500])+0.1)
l2 = tf.nn.tanh(tf.matmul(l1_drop,w2)+b2)
l2_drop = tf.nn.dropout(l2,keep_prob)
# Output layer: 10 neurons
w3 = tf.Variable(tf.truncated_normal([500,10],stddev=0.1))
b3 = tf.Variable(tf.zeros([10])+0.1)
prediction = tf.matmul(l2_drop,w3)+b3 # raw logits; softmax happens inside the loss
# Cross-entropy loss
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y,logits=prediction))
# Adam optimizer instead of plain gradient descent
train_step = tf.train.AdamOptimizer(lr).minimize(loss)
# Store the results in a boolean list
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1)) # argmax returns the index of the largest value
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
# Train the network
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(51):
        sess.run(tf.assign(lr, 0.001*(0.95**epoch))) # lower the learning rate as training proceeds; ** is exponentiation
        for batch in range(n_batch):
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys,keep_prob:0.7}) # dropout active during training
        # Accuracy on the test and training sets
        test_acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels,keep_prob:1.0}) # keep all neurons
        train_acc = sess.run(accuracy,feed_dict={x:mnist.train.images,y:mnist.train.labels,keep_prob:1.0}) # keep all neurons
        print("Iter:"+str(epoch)+",Testing acc="+str(test_acc)+",Training acc="+str(train_acc))
Iter:0,Testing acc=0.9509,Training acc=0.9549091
Iter:1,Testing acc=0.9647,Training acc=0.97074544
Iter:2,Testing acc=0.9695,Training acc=0.9790909
Iter:3,Testing acc=0.9732,Training acc=0.9818182
Iter:4,Testing acc=0.975,Training acc=0.98514545
Iter:5,Testing acc=0.9761,Training acc=0.98876363
Iter:6,Testing acc=0.976,Training acc=0.9886909
Iter:7,Testing acc=0.9773,Training acc=0.9902727
Iter:8,Testing acc=0.9784,Training acc=0.9932727
Iter:9,Testing acc=0.979,Training acc=0.9944
Iter:10,Testing acc=0.9786,Training acc=0.99512726
Iter:11,Testing acc=0.9789,Training acc=0.9952
Iter:12,Testing acc=0.9817,Training acc=0.9966909
Iter:13,Testing acc=0.9813,Training acc=0.99750906
Iter:14,Testing acc=0.9807,Training acc=0.9977818
Iter:15,Testing acc=0.9808,Training acc=0.99807274
Iter:16,Testing acc=0.9827,Training acc=0.99881816
Iter:17,Testing acc=0.9816,Training acc=0.9990182
Iter:18,Testing acc=0.9824,Training acc=0.99914545
Iter:19,Testing acc=0.9818,Training acc=0.99923635
Iter:20,Testing acc=0.983,Training acc=0.99938184
Iter:21,Testing acc=0.9834,Training acc=0.99945456
Iter:22,Testing acc=0.9837,Training acc=0.9995273
Iter:23,Testing acc=0.9852,Training acc=0.9996182
Iter:24,Testing acc=0.9832,Training acc=0.9997454
Iter:25,Testing acc=0.9844,Training acc=0.9998182
Iter:26,Testing acc=0.9819,Training acc=0.9996
Iter:27,Testing acc=0.9843,Training acc=0.9997454
Iter:28,Testing acc=0.9828,Training acc=0.9998909
Iter:29,Testing acc=0.9832,Training acc=0.9999273
Iter:30,Testing acc=0.9837,Training acc=0.99994546
Iter:31,Testing acc=0.9841,Training acc=0.9999273
Iter:32,Testing acc=0.9852,Training acc=0.9999818
Iter:33,Testing acc=0.9833,Training acc=0.9999818
Iter:34,Testing acc=0.9839,Training acc=0.99996364
Iter:35,Testing acc=0.9835,Training acc=1.0
Iter:36,Testing acc=0.9838,Training acc=0.99996364
Iter:37,Testing acc=0.9846,Training acc=1.0
Iter:38,Testing acc=0.9849,Training acc=0.9999818
Iter:39,Testing acc=0.9833,Training acc=1.0
Iter:40,Testing acc=0.9834,Training acc=1.0
Iter:41,Testing acc=0.9837,Training acc=1.0
Iter:42,Testing acc=0.9843,Training acc=1.0
Iter:43,Testing acc=0.9844,Training acc=0.9999818
Iter:44,Testing acc=0.9853,Training acc=1.0
Iter:45,Testing acc=0.984,Training acc=1.0
Iter:46,Testing acc=0.9844,Training acc=0.9999818
Iter:47,Testing acc=0.9848,Training acc=1.0
Iter:48,Testing acc=0.9846,Training acc=1.0
Iter:49,Testing acc=0.9848,Training acc=1.0
Iter:50,Testing acc=0.9851,Training acc=1.0

Accuracy reaches 100% on the training set and 98.5% on the test set. For such a structurally simple network, that is already a solid result!

As a certain well-known figure once said: "Well trained, and well prepared."

4. Network Optimization Summary

The above optimizations were applied to a standard network on the MNIST dataset.
Below is a summary of the aspects to consider when you run into a network optimization problem.

  • A suitable loss function
  • A suitable activation function
  • A suitable optimizer
  • Deeper network layers
  • An adjustable learning rate
  • Handling overfitting (see the sketch after this list)
  • A larger training set
  • More training epochs
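Besides dropout, a common way to handle overfitting is L2 weight regularization, which this post did not use; a minimal TF 1.x sketch, with an illustrative coefficient of 0.0005:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1))
b = tf.Variable(tf.zeros([10]) + 0.1)
logits = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
# Penalize large weights: tf.nn.l2_loss(W) computes sum(W**2)/2, so the
# extra term discourages extreme weights and complements dropout.
l2_coeff = 0.0005  # illustrative value, tune per task
loss = cross_entropy + l2_coeff * tf.nn.l2_loss(W)
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)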