

Implementing the AlexNet Convolutional Neural Network in TensorFlow and Measuring Its Run Time

2020-01-04 14:57:55
供稿:網(wǎng)友

This article shares the complete code for implementing the AlexNet convolutional neural network in TensorFlow, for your reference. The details follow.

The construction of the AlexNet network was already covered in an earlier article. This time the goal is not to train the network on data, but to measure the average time per batch of the forward pass and the backward pass. When designing a network, classification accuracy matters, but so does computational speed: in tracking tasks especially, a network that is too deep will hurt real-time performance.
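Before the full network code, the timing pattern the benchmark relies on is worth seeing in isolation. The sketch below is a minimal standalone version, with a hypothetical run_op callable standing in for session.run on the target operation: a few warm-up (burn-in) iterations are discarded so that one-off costs such as graph compilation and memory allocation do not skew the statistics, then the mean and standard deviation of the per-batch time are derived from running sums.

import math
import time

def benchmark(run_op, num_batches=100, num_steps_burn_in=10):
    # Accumulate the sum and the sum of squares of per-batch durations.
    total, total_sq = 0.0, 0.0
    for i in range(num_batches + num_steps_burn_in):
        start = time.time()
        run_op()                    # e.g. lambda: session.run(target)
        duration = time.time() - start
        if i >= num_steps_burn_in:  # ignore warm-up iterations
            total += duration
            total_sq += duration * duration
    mean = total / num_batches
    # Var(X) = E[X^2] - E[X]^2; clamp at 0 to guard against rounding error.
    std = math.sqrt(max(total_sq / num_batches - mean * mean, 0.0))
    return mean, std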

from datetime import datetime
import math
import time
import tensorflow as tf

batch_size = 32
num_batches = 100


def print_activations(t):
    # Print the name and output shape of a layer's tensor.
    print(t.op.name, '', t.get_shape().as_list())


def inference(images):
    parameters = []

    # conv1: 11x11 kernel, stride 4, 64 output channels, followed by LRN and max pooling.
    with tf.name_scope('conv1') as scope:
        kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64],
                             dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(bias, name=scope)
        print_activations(conv1)
        parameters += [kernel, biases]
        lrn1 = tf.nn.lrn(conv1, 4, bias=1.0, alpha=0.001 / 9, beta=0.75, name='lrn1')
        pool1 = tf.nn.max_pool(lrn1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                               padding='VALID', name='pool1')
        print_activations(pool1)

    # conv2: 5x5 kernel, 192 output channels, followed by LRN and max pooling.
    with tf.name_scope('conv2') as scope:
        kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192],
                             dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv2)
        lrn2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9, beta=0.75, name='lrn2')
        pool2 = tf.nn.max_pool(lrn2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                               padding='VALID', name='pool2')
        print_activations(pool2)

    # conv3-conv5: three 3x3 convolutions without pooling in between.
    with tf.name_scope('conv3') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384],
                             dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv3 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv3)

    with tf.name_scope('conv4') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256],
                             dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv4 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv4)

    with tf.name_scope('conv5') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256],
                             dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv5 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv5)
        pool5 = tf.nn.max_pool(conv5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                               padding='VALID', name='pool5')
        print_activations(pool5)
    return pool5, parameters


def time_tensorflow_run(session, target, info_string):
    # Run `target` for num_batches iterations after a warm-up phase and
    # report the mean and standard deviation of the per-batch time.
    num_steps_burn_in = 10
    total_duration = 0.0
    total_duration_squared = 0.0
    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target)
        duration = time.time() - start_time
        if i >= num_steps_burn_in:
            if not i % 10:
                print('%s: step %d, duration = %.3f' %
                      (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration
    mn = total_duration / num_batches
    vr = total_duration_squared / num_batches - mn * mn
    sd = math.sqrt(vr)
    print('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
          (datetime.now(), info_string, num_batches, mn, sd))


def run_benchmark():
    with tf.Graph().as_default():
        # Random images stand in for real data; only the timing matters here.
        image_size = 224
        images = tf.Variable(tf.random_normal([batch_size, image_size, image_size, 3],
                                              dtype=tf.float32, stddev=1e-1))
        pool5, parameters = inference(images)
        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)
        # Time the forward pass, then the forward+backward pass via the
        # gradients of an L2 loss on the final pooling layer.
        time_tensorflow_run(sess, pool5, "Forward")
        objective = tf.nn.l2_loss(pool5)
        grad = tf.gradients(objective, parameters)
        time_tensorflow_run(sess, grad, "Forward-backward")


run_benchmark()
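If you want to compare the same graph on different hardware, one option (a sketch, not part of the original script; it assumes a TensorFlow 1.x install with at least one GPU visible) is to pin graph construction to a device with tf.device, reusing the inference and time_tensorflow_run functions defined above:

def run_benchmark_on(device):
    # Build and time the same forward graph, pinned to the given device string.
    with tf.Graph().as_default(), tf.device(device):
        images = tf.Variable(tf.random_normal([batch_size, 224, 224, 3],
                                              dtype=tf.float32, stddev=1e-1))
        pool5, parameters = inference(images)
        sess = tf.Session()
        sess.run(tf.global_variables_initializer())
        time_tensorflow_run(sess, pool5, "Forward on %s" % device)

run_benchmark_on('/cpu:0')  # CPU baseline
run_benchmark_on('/gpu:0')  # assumes a GPU is available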

All of the code here has been covered before; the only additions are the timing function and the function that prints each layer's output shape, so it should be easy to follow and needs no further walkthrough. On a GTX TITAN X, the forward pass takes about 0.024 s per batch and the forward-backward pass about 0.079 s. Go ahead and try it yourself.
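For reference, with the 224x224x3 random input and batch size 32 used above, the shapes reported by print_activations work out to the following (op names will carry their enclosing name_scope prefix, e.g. conv1/pool1):

conv1 [32, 56, 56, 64]
pool1 [32, 27, 27, 64]
conv2 [32, 27, 27, 192]
pool2 [32, 13, 13, 192]
conv3 [32, 13, 13, 384]
conv4 [32, 13, 13, 256]
conv5 [32, 13, 13, 256]
pool5 [32, 6, 6, 256]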

That is all for this article. I hope it helps with your studies, and I hope you will continue to support VEVB武林網.

