We need to evaluate a model's predictions in order to judge how well training went.
Model evaluation is essential, and every model that follows will come with its own evaluation method. In TensorFlow, evaluation has to be added to the computational graph and then called once the model has been trained.
During training, evaluation gives us insight into the algorithm and provides hints for debugging, improving, or completely changing the model. Evaluation is not always required during training, but we will show how to use it with both a regression algorithm and a classification algorithm.
After training a model, we need a quantitative measure of how it performs. Ideally, evaluating a model requires a training set and a test set, and sometimes even a validation set.
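As a rough sketch of such a split (the 70/15/15 ratios and the variable names here are illustrative assumptions; the recipe below uses a plain 80/20 train/test split), the data can be partitioned by shuffling indices with NumPy:

import numpy as np

x_vals = np.random.normal(1, 0.1, 100)            # toy data set
indices = np.random.permutation(len(x_vals))      # shuffle the indices once
n_train = int(0.70 * len(x_vals))                 # 70% for training
n_val = int(0.15 * len(x_vals))                   # 15% for validation
train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]              # remaining 15% for testing
x_train, x_val, x_test = x_vals[train_idx], x_vals[val_idx], x_vals[test_idx]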
To evaluate a model we have to feed it a large batch of data points. If we trained in batches, we can reuse the same model to predict on a batch of points; if we trained stochastically, we may have to create a separate evaluator that can handle batches of data points.
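One simple way around this, and the way the regression example below is set up, is to declare the placeholders with a None batch dimension, so the same graph accepts a single point during stochastic training and the whole test set at evaluation time. A minimal sketch, assuming the TensorFlow 1.x placeholder API used in the recipe below:

import tensorflow as tf

# A None first dimension accepts any batch size, so a graph trained on
# single points can also score a full evaluation batch in one run() call.
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# Training:   feed_dict={x_data: one_point, y_target: one_target}
# Evaluation: feed_dict={x_data: whole_test_set, y_target: whole_test_targets}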
Classification models predict a category from numerical inputs; the actual targets are sequences of 1s and 0s, and we need a measure of how far the predictions are from those true values. The loss value of a classification model is usually hard to interpret directly, so the common practice is to look at the percentage of examples classified correctly (accuracy).
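As a minimal sketch independent of the TensorFlow graph (the arrays here are made-up illustrative values), accuracy is simply the mean of element-wise agreement between the rounded predictions and the targets:

import numpy as np

preds = np.array([0.1, 0.8, 0.6, 0.3])          # hypothetical sigmoid outputs
targets = np.array([0., 1., 1., 1.])            # true class labels
accuracy = np.mean(np.round(preds) == targets)  # fraction classified correctly
print(accuracy)                                 # 0.75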
No matter how well the model predicts, it is important to test it. Evaluate the model on both the training data and the test data to find out whether it is overfitting.
# TensorFlow Model Evaluation
#
# This code will implement two models. The first
# is a simple regression model, we will show how to
# call the loss function, MSE during training, and
# output it after for test and training sets.
#
# The second model will be a simple classification
# model. We will also show how to print percent
# classified correctly during training and after
# for both the test and training sets.

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Create the graph
sess = tf.Session()

# Regression example:
# We will create sample data as follows:
# x-data: 100 random samples from a normal ~ N(1, 0.1)
# target: 100 values of the value 10.
# We will fit the model:
# x-data * A = target
# Theoretically, A = 10.

# Declare batch size
batch_size = 25

# Create data
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)

# Split data into train/test = 80%/20%
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]

# Create variable (one model parameter = A)
A = tf.Variable(tf.random_normal(shape=[1,1]))

# Add operation to graph
my_output = tf.matmul(x_data, A)

# Add L2 loss to graph
loss = tf.reduce_mean(tf.square(my_output - y_target))

# Create optimizer
my_opt = tf.train.GradientDescentOptimizer(0.02)
train_step = my_opt.minimize(loss)

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Run the training loop
# Note: if the model output is transformed inside the loss function, e.g. by
# sigmoid_cross_entropy_with_logits(), remember to apply the same transformation
# when computing predictions during model evaluation.
for i in range(100):
    rand_index = np.random.choice(len(x_vals_train), size=batch_size)
    rand_x = np.transpose([x_vals_train[rand_index]])
    rand_y = np.transpose([y_vals_train[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i+1)%25==0:
        print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})))

# Evaluate accuracy (loss) on train and test sets
mse_test = sess.run(loss, feed_dict={x_data: np.transpose([x_vals_test]), y_target: np.transpose([y_vals_test])})
mse_train = sess.run(loss, feed_dict={x_data: np.transpose([x_vals_train]), y_target: np.transpose([y_vals_train])})
print('MSE on test:' + str(np.round(mse_test, 2)))
print('MSE on train:' + str(np.round(mse_train, 2)))

# Classification example
# We will create sample data as follows:
# x-data: sample 50 random values from a normal = N(-1, 1)
#         + sample 50 random values from a normal = N(2, 1)
# target: 50 values of 0 + 50 values of 1.
# These are essentially 100 values of the corresponding output index
# We will fit the binary classification model:
# If sigmoid(x+A) < 0.5 -> 0 else 1
# Theoretically, A should be -(mean1 + mean2)/2

# Reset the computational graph
ops.reset_default_graph()

# Create the graph
sess = tf.Session()

# Declare batch size
batch_size = 25

# Create data
x_vals = np.concatenate((np.random.normal(-1, 1, 50), np.random.normal(2, 1, 50)))
y_vals = np.concatenate((np.repeat(0., 50), np.repeat(1., 50)))
x_data = tf.placeholder(shape=[1, None], dtype=tf.float32)
y_target = tf.placeholder(shape=[1, None], dtype=tf.float32)

# Split data into train/test = 80%/20%
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]

# Create variable (one model parameter = A)
A = tf.Variable(tf.random_normal(mean=10, shape=[1]))

# Add operation to graph
# Want to create the operation sigmoid(x + A)
# Note, the sigmoid() part is in the loss function
my_output = tf.add(x_data, A)

# Add classification loss (cross entropy)
xentropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output, labels=y_target))

# Create optimizer
my_opt = tf.train.GradientDescentOptimizer(0.05)
train_step = my_opt.minimize(xentropy)

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Run the training loop
for i in range(1800):
    rand_index = np.random.choice(len(x_vals_train), size=batch_size)
    rand_x = [x_vals_train[rand_index]]
    rand_y = [y_vals_train[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i+1)%200==0:
        print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(xentropy, feed_dict={x_data: rand_x, y_target: rand_y})))

# Evaluate predictions
# Wrap the prediction operation in squeeze() so predictions and targets have the same shape.
y_prediction = tf.squeeze(tf.round(tf.nn.sigmoid(tf.add(x_data, A))))
# Use equal() to compare predictions against targets, cast the resulting
# boolean tensor to float32, and take its mean to get an accuracy value.
correct_prediction = tf.equal(y_prediction, y_target)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
acc_value_test = sess.run(accuracy, feed_dict={x_data: [x_vals_test], y_target: [y_vals_test]})
acc_value_train = sess.run(accuracy, feed_dict={x_data: [x_vals_train], y_target: [y_vals_train]})
print('Accuracy on train set: ' + str(acc_value_train))
print('Accuracy on test set: ' + str(acc_value_test))

# Plot the classification result
A_result = -sess.run(A)
bins = np.linspace(-5, 5, 50)
plt.hist(x_vals[0:50], bins, alpha=0.5, label='N(-1,1)', color='white')
plt.hist(x_vals[50:100], bins[0:50], alpha=0.5, label='N(2,1)', color='red')
plt.plot((A_result, A_result), (0, 8), 'k--', linewidth=3, label='A = '+ str(np.round(A_result, 2)))
plt.legend(loc='upper right')
plt.title('Binary Classifier, Accuracy=' + str(np.round(acc_value_test, 2)))
plt.show()

Output:
Step #25 A = [[ 5.79096079]]
Loss = 16.8725
Step #50 A = [[ 8.36085415]]
Loss = 3.60671
Step #75 A = [[ 9.26366138]]
Loss = 1.05438
Step #100 A = [[ 9.58914948]]
Loss = 1.39841
MSE on test:1.04
MSE on train:1.13
Step #200 A = [ 5.83126402]
Loss = 1.9799
Step #400 A = [ 1.64923656]
Loss = 0.678205
Step #600 A = [ 0.12520729]
Loss = 0.218827
Step #800 A = [-0.21780498]
Loss = 0.223919
Step #1000 A = [-0.31613481]
Loss = 0.234474
Step #1200 A = [-0.33259964]
Loss = 0.237227
Step #1400 A = [-0.28847221]
Loss = 0.345202
Step #1600 A = [-0.30949864]
Loss = 0.312794
Step #1800 A = [-0.33211425]
Loss = 0.277342
Accuracy on train set: 0.9625
Accuracy on test set: 1.0
