python – TensorFlow accuracy remains constant between runs
I have been experimenting with TensorFlow to gauge whether it is suitable for classifying the data I am working with on a Huntington's disease project (irrelevant to the question, just for context). Previously I used support vector machines to classify my data, all of which behaved quite "normally". I was hoping neural networks would do better.
Loading the data works fine, no problems there. After reading TensorFlow's documentation and working through a few tutorials and examples online, I wrote the following as a very simple network example on CSV data. The data I use in this example is the standard MNIST image database, but in CSV format.

```python
datafile = os.path.join('/pathtofile/', 'mnist_train.csv')
descfile = os.path.join('/pathtofile/', 'mnist_train.rst')
mnist = DataLoader(datafile, descfile).load_model()

x_train, x_test, y_train, y_test = train_test_split(
    mnist.DATA, mnist.TARGET, test_size=0.33, random_state=42)

## Width and length of arrays
train_width = len(x_train[0]) + 1; train_length = len(x_train)
test_width = len(x_test[0]) + 1; test_length = len(x_test)

data = self.build_rawdata(x_train, y_train, train_length, train_width)
test_data = self.build_rawdata(x_test, y_test, test_length, test_width)

y_train, y_train_onehot = self.onehot_converter(data)
y_test, y_test_onehot = self.onehot_converter(test_data)

## A = Features, B = Classes
A = data.shape[1] - 1
B = len(y_train_onehot[0])
```

This all works. The train, test, and one-hot arrays are the correct sizes and are filled with the correct values.

The actual TensorFlow code is where I have most likely gone wrong (?):

```python
sess = tf.InteractiveSession()

## Weights and bias
x = tf.placeholder("float", shape=[None, A])
y_ = tf.placeholder("float", shape=[None, B])
W = tf.Variable(tf.random_normal([A, B], stddev=0.01))
b = tf.Variable(tf.random_normal([B], stddev=0.01))
sess.run(tf.initialize_all_variables())

y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

## 100 iterations of the above GradientDescentOptimizer
for i in range(100):
    train_step.run(feed_dict={x: x_train, y_: y_train_onehot})
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    result = sess.run(accuracy, feed_dict={x: x_test, y_: y_test_onehot})
    print 'Run {},{}'.format(i + 1, result)
```

Every run of this code produces exactly the same accuracy, and I cannot figure out why:

```
I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 12
I tensorflow/core/common_runtime/direct_session.cc:58] Direct session inter op parallelism threads: 12
Run 1,0.0974242389202
Run 2,0.0974242389202
Run 3,0.0974242389202
Run 4,0.0974242389202
Run 5,0.0974242389202
Run 6,0.0974242389202
Run 7,0.0974242389202
Run 8,0.0974242389202
Run 9,0.0974242389202
Run 10,0.0974242389202
....
Run 100,0.0974242389202
```

I went back and looked at the tutorials and examples I learned from. The Iris dataset (loaded in the same way) produced proper output with accurate predictions, yet this code with the MNIST CSV data does not.

Any insight would be appreciated.

Edit 1:

I had a few minutes to try some of the suggestions, to no avail. For comparison I also decided to go back and test with the Iris CSV dataset. After switching to sess.run(train_step, feed_dict={...}), the output is slightly different:

```
Run 1,0.300000011921
Run 2,0.319999992847
Run 3,0.699999988079
Run 4,0.699999988079
Run 5,0.699999988079
Run 6,0.699999988079
Run 7,0.360000014305
Run 8,0.699999988079
Run 9,0.699999988079
Run 10,0.699999988079
Run 11,0.699999988079
Run 12,0.699999988079
Run 13,0.699999988079
Run 14,0.699999988079
Run 15,0.699999988079
Run 16,0.300000011921
Run 17,0.759999990463
Run 18,0.680000007153
Run 19,0.819999992847
Run 20,0.680000007153
Run 21,0.680000007153
Run 22,0.839999973774
Run 23,0.319999992847
Run 24,0.699999988079
Run 25,0.699999988079
```

The values generally hover in this range until Run 64, where they lock in at:

```
Run 64,0.379999995232
...
Run 100,0.379999995232
```
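An accuracy pinned at 0.0974 is essentially chance level for a ten-class problem (1/10), which usually means the weights never effectively change. A common cause with the hand-rolled loss above is that -tf.reduce_sum(y_*tf.log(y)) hits log(0) once the softmax saturates on raw 0-255 pixel inputs, so the loss and every subsequent gradient become NaN. A minimal diagnostic sketch (an addition, not from the original post), assuming the cross_entropy and accuracy nodes above are built once before the loop:

```python
## Diagnostic sketch: fetch the loss together with the update step so a
## NaN loss shows up immediately. Assumes x_train, y_train_onehot,
## x_test, y_test_onehot and the graph nodes above already exist.
for i in range(100):
    _, loss = sess.run([train_step, cross_entropy],
                       feed_dict={x: x_train, y_: y_train_onehot})
    result = sess.run(accuracy, feed_dict={x: x_test, y_: y_test_onehot})
    print 'Run {}, loss {}, accuracy {}'.format(i + 1, loss, result)
```

If the loss prints nan from the very first run, the remedy is a numerically stable loss and/or scaled inputs, as sketched after the answer below.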
Solution

I think the problem may be that your train_step is not inside a sess.run. Try that. Also consider using mini-batches for training:
```python
for i in range(100):
    ## Train on successive slices of 20 examples rather than the full set.
    for start, end in zip(range(0, len(x_train), 20),
                          range(20, len(x_train) + 1, 20)):
        sess.run(train_step, feed_dict={x: x_train[start:end],
                                        y_: y_train_onehot[start:end]})
```
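Putting the answer's suggestions together with the NaN issue noted above, here is a hedged end-to-end sketch of the same softmax regression. It is a reconstruction, not the original poster's code: it assumes x_train/x_test are NumPy arrays of raw 0-255 MNIST pixels (hence the /255.0 scaling) and swaps the hand-written loss for tf.nn.softmax_cross_entropy_with_logits, which works from the logits and avoids log(0). API names follow the TensorFlow 0.x vintage used in the question.

```python
import tensorflow as tf

## Sketch only: A (features) and B (classes), plus x_train, x_test,
## y_train_onehot and y_test_onehot, are assumed from the question above.
sess = tf.InteractiveSession()

x = tf.placeholder("float", shape=[None, A])
y_ = tf.placeholder("float", shape=[None, B])
W = tf.Variable(tf.random_normal([A, B], stddev=0.01))
b = tf.Variable(tf.random_normal([B], stddev=0.01))

## Stable loss: computed from the raw logits, never log(softmax(...)).
logits = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, y_))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

## argmax of the logits equals argmax of the softmax, so the accuracy
## can be computed without applying softmax at all.
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

sess.run(tf.initialize_all_variables())

## Assumption: raw pixels in [0, 255]; scale to [0, 1] so the softmax
## cannot saturate on the very first step.
x_train_s = x_train / 255.0
x_test_s = x_test / 255.0

for i in range(100):
    ## Mini-batches of 20, per the answer above.
    for start in range(0, len(x_train_s), 20):
        sess.run(train_step,
                 feed_dict={x: x_train_s[start:start + 20],
                            y_: y_train_onehot[start:start + 20]})
    result = sess.run(accuracy, feed_dict={x: x_test_s, y_: y_test_onehot})
    print 'Run {},{}'.format(i + 1, result)
```

With the loss kept finite and the inputs scaled, the accuracy should move between runs instead of sitting at chance level.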