
Implementing a Convolutional Neural Network in TensorFlow

2019-11-06 07:02:56

TensorFlow bundles many libraries and helper functions that make implementing a convolutional neural network quite simple. This section shows how to use TensorFlow to build a network with two convolutional layers for handwritten digit (MNIST) recognition.
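Before the code, a quick shape check (an illustrative aside, not part of the original post): with padding='SAME', a stride-2 convolution produces an output of spatial size ceil(input_size / stride), so the 28x28 input becomes 14x14 after the first convolution and 7x7 after the second. That is where the 7*7*n_filter_2 flatten size in the code below comes from. A minimal sketch:

import math

# Spatial size after each stride-2 'SAME' convolution: ceil(input / stride)
size = 28
for layer in (1, 2):
    size = int(math.ceil(size / 2.0))
    print('after conv%d: %dx%d' % (layer, size, size))
# after conv1: 14x14
# after conv2: 7x7  -> flattened length is 7*7*16 = 784 per image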

The full code is as follows:

# -*- coding:utf-8 -*-
# Purpose: handwritten digit recognition with a convolutional neural network
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data

# Helper functions for creating weight and bias variables
def weight_variable(shape):
    initial = tf.random_normal(shape, mean=0.0, stddev=0.01)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.random_normal(shape, mean=0.0, stddev=0.01)
    return tf.Variable(initial)

# Load MNIST and create placeholders for the inputs and labels
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Build a network with two convolutional layers.
# Reshape the data to 4-D, i.e. (batch_size, width, height, channel);
# batch_size=-1 means this dimension is inferred as the total size
# divided by width*height*channel.
x_tensor = tf.reshape(x, [-1, 28, 28, 1])

# Receptive field (kernel) size of the first convolutional layer
filter_size = 3
n_filter_1 = 16
w_conv1 = weight_variable([filter_size, filter_size, 1, n_filter_1])
b_conv1 = bias_variable([n_filter_1])

# Output of the first convolution; padding='SAME' keeps the spatial size
# consistent with the stride (here stride 2 halves width and height).
# elu is the activation function.
h_conv1 = tf.nn.elu(tf.nn.conv2d(input=x_tensor, filter=w_conv1,
                                 strides=[1, 2, 2, 1], padding='SAME') + b_conv1)

# Number of kernels in the second convolutional layer
n_filter_2 = 16
w_conv2 = weight_variable([filter_size, filter_size, n_filter_1, n_filter_2])
b_conv2 = bias_variable([n_filter_2])

# Output of the second convolution
h_conv2 = tf.nn.elu(tf.nn.conv2d(input=h_conv1, filter=w_conv2,
                                 strides=[1, 2, 2, 1], padding='SAME') + b_conv2)

# Add a fully connected hidden layer.
# Going from the convolutional layers to the hidden layer requires
# flattening the convolution output.
h_conv2_flat = tf.reshape(h_conv2, [-1, 7 * 7 * n_filter_2])

# Fully connected layer with 1024 hidden units
n_fc = 1024
w_fc1 = weight_variable([7 * 7 * n_filter_2, n_fc])
b_fc1 = bias_variable([n_fc])

# Output of the fully connected layer
h_fc1 = tf.nn.elu(tf.matmul(h_conv2_flat, w_fc1) + b_fc1)

# Add dropout to prevent overfitting
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Add the softmax output layer
w_fc2 = weight_variable([n_fc, 10])
b_fc2 = bias_variable([10])
y_pred = tf.nn.softmax(tf.matmul(h_fc1_drop, w_fc2) + b_fc2)

# Objective function: cross-entropy
cross_entropy = -tf.reduce_sum(y * tf.log(y_pred))
optimizer = tf.train.AdamOptimizer().minimize(cross_entropy)

# Classification accuracy
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))

# Create a session and train with mini-batches
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Train with mini-batches of 100 samples
batch_size = 100
# Train for 5 epochs
n_epoch = 5
for epoch_i in range(n_epoch):
    # Each epoch iterates over the whole training set one mini-batch at a time
    for batch_i in range(mnist.train.num_examples // batch_size):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 0.5})
    # After finishing all mini-batches of an epoch, print the validation accuracy
    # (keep_prob is 1.0 here so that dropout is disabled during evaluation)
    print(sess.run(accuracy, feed_dict={x: mnist.validation.images,
                                        y: mnist.validation.labels,
                                        keep_prob: 1.0}))
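Once training finishes, the model would normally be evaluated on the held-out test set rather than the validation set, again with dropout disabled. Below is a minimal sketch reusing the graph and session defined above; the checkpoint path './cnn_mnist.ckpt' is just an illustrative choice, not from the original post.

# Accuracy on the MNIST test set; keep_prob=1.0 turns dropout off
test_acc = sess.run(accuracy, feed_dict={x: mnist.test.images,
                                         y: mnist.test.labels,
                                         keep_prob: 1.0})
print('test accuracy: %.4f' % test_acc)

# Optionally persist the trained weights for later reuse
saver = tf.train.Saver()
saver.save(sess, './cnn_mnist.ckpt')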

