

A Classical Chinese Poetry Generator Based on a Recurrent Neural Network (RNN)


This post walks through a classical Chinese poetry generator built on a recurrent neural network (RNN). The details are as follows.

I once came across a "write you a poem" feature in the Baidu mobile app that could generate classical poems at random, and it struck me as pretty cool.

After studying some deep learning I looked into how such a feature works and decided to build my own implementation for practice, which became this project. If you spot any flaws or omissions in this post, please don't hesitate to point them out. Many thanks!

The generator is implemented with a recurrent neural network and produces classical-style poems automatically. I only trained it briefly: the form comes out right; as for the artistic mood... well, let's not dwell on that.

Some sample outputs from the model:

1. Generating classical-style poems

Example 1:

樹陰飛盡水三依,謾自為能厚景奇。
莫怪仙舟欲西望,楚人今此惜春風(fēng)。

Example 2:

巖外前苗點(diǎn)有泉,紫崖煙靄碧芊芊。
似僧月明秋更好,一蹤顏事欲猶傷?

2. Generating acrostic poems (taking "神策" as the hidden head)

Example 1:

神照隆祭測(cè)馨塵,策紫瓏氳羽團(tuán)娟。

Example 2:

神輦鶯滿花臺(tái)潭,策窮漸見仙君地。

Below I record the implementation process. Since this is all text processing, much of it overlaps with my previous project; I will only mention the overlapping parts briefly and go into detail where something is new.

1. Data preprocessing

The dataset is a training corpus of 40,000 Tang poems, which can be downloaded here.
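For orientation, each line of the raw file is assumed to hold one poem in the form 标题:内容, which is what the preprocessing below relies on when it splits each line on the colon. A minimal sketch of that parse (the poem is a well-known stand-in, not an actual corpus line):

# -*- coding: utf-8 -*-
# Hypothetical corpus line; the real file's exact format may differ.
line = u'静夜思:床前明月光,疑是地上霜。举头望明月,低头思故乡。'
title, content = line.strip().split(':')
print title    # 静夜思
print content  # 床前明月光,疑是地上霜。举头望明月,低头思故乡。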

The preprocessing is much the same as in the previous project, TensorFlow practice project 1: movie-review sentiment classification with a recurrent neural network (RNN), so refer to that write-up for details. Here is the code.

# -*- coding: utf-8 -*-
# @Time : 18-3-13 11:04 am
# @Author : AaronJny
# @Email : Aaron__7@163.com
import sys
reload(sys)
sys.setdefaultencoding('utf8')  # Python 2: make utf-8 the default encoding

import collections

ORIGIN_DATA = 'origin_data/poetry.txt'  # path to the source data
OUTPUT_DATA = 'processed_data/poetry.txt'  # path for the output id vectors
VOCAB_DATA = 'vocab/poetry.vocab'


def word_to_id(word, id_dict):
    if word in id_dict:
        return id_dict[word]
    else:
        return id_dict['<unknow>']


poetry_list = []  # list holding the poems

# read the poems from file
with open(ORIGIN_DATA, 'r') as f:
    f_lines = f.readlines()
    print 'Total Tang poems: {}'.format(len(f_lines))
    # process line by line
    for line in f_lines:
        # strip surrounding whitespace and decode
        strip_line = line.strip().decode('utf8')
        try:
            # split the poem into title and content
            title, content = strip_line.split(':')
        except:
            # lines containing more than one ':' are discarded
            continue
        # remove spaces inside the content
        content = content.strip().replace(' ', '')
        # discard poems containing illegal characters
        if '(' in content or '（' in content or '<' in content or '《' in content or '_' in content or '[' in content:
            continue
        # discard poems that are too short or too long
        length = len(content)
        if length < 20 or length > 100:
            continue
        # append, wrapped in start/end markers
        poetry_list.append('s' + content + 'e')

print 'Poems used for training: {}'.format(len(poetry_list))
poetry_list = sorted(poetry_list, key=lambda x: len(x))

words_list = []
# collect every character that appears in the poems
for poetry in poetry_list:
    words_list.extend([word for word in poetry])
# count occurrences
counter = collections.Counter(words_list)
# sort by count, descending
sorted_words = sorted(counter.items(), key=lambda x: x[1], reverse=True)
# character list in descending order of frequency
words_list = ['<unknow>'] + [x[0] for x in sorted_words]
# choose how many high-frequency words to keep; there are fewer
# than 7000 in total, so I keep them all
words_list = words_list[:len(words_list)]
print 'Vocabulary size: {}'.format(len(words_list))

with open(VOCAB_DATA, 'w') as f:
    for word in words_list:
        f.write(word + '\n')

# build the word->id mapping
word_id_dict = dict(zip(words_list, range(len(words_list))))

# convert poetry_list into id vectors
id_list = []
for poetry in poetry_list:
    id_list.append([str(word_to_id(word, word_id_dict)) for word in poetry])

# write the vectors to file
with open(OUTPUT_DATA, 'w') as f:
    for id_l in id_list:
        f.write(' '.join(id_l) + '\n')

2. Building the models

Two models are needed: one for training and one for evaluation (poem generation). They are largely identical, but a few details differ because of their different uses. At evaluation time, the evaluation model restores the trained model's parameters, overwriting its own; this works because both models create their variables under the same names, so tf.train.Saver can map checkpoint entries onto the evaluation graph.

The comments are fairly detailed, so I'll let the code speak for itself. The key details that differ between the two models are also called out in comments.

# -*- coding: utf-8 -*-
# @Time : 18-3-13 2:06 pm
# @Author : AaronJny
# @Email : Aaron__7@163.com
import tensorflow as tf
import functools
import setting

HIDDEN_SIZE = 128  # number of LSTM hidden units
NUM_LAYERS = 2  # RNN depth


def doublewrap(function):
    @functools.wraps(function)
    def decorator(*args, **kwargs):
        if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):
            return function(args[0])
        else:
            return lambda wrapee: function(wrapee, *args, **kwargs)
    return decorator


@doublewrap
def define_scope(function, scope=None, *args, **kwargs):
    attribute = '_cache_' + function.__name__
    name = scope or function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        if not hasattr(self, attribute):
            with tf.variable_scope(name, *args, **kwargs):
                setattr(self, attribute, function(self))
        return getattr(self, attribute)
    return decorator


class TrainModel(object):
    """
    Training model
    """

    def __init__(self, data, labels, emb_keep, rnn_keep):
        self.data = data  # input data
        self.labels = labels  # labels
        self.emb_keep = emb_keep  # embedding-layer dropout keep rate
        self.rnn_keep = rnn_keep  # LSTM-layer dropout keep rate
        self.global_step
        self.cell
        self.predict
        self.loss
        self.optimize

    @define_scope
    def cell(self):
        """
        RNN network structure
        :return:
        """
        lstm_cell = [
            tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.BasicLSTMCell(HIDDEN_SIZE), output_keep_prob=self.rnn_keep)
            for _ in range(NUM_LAYERS)]
        cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cell)
        return cell

    @define_scope
    def predict(self):
        """
        Forward pass
        :return:
        """
        # embedding matrix weights
        embedding = tf.get_variable('embedding', shape=[setting.VOCAB_SIZE, HIDDEN_SIZE])
        # softmax-layer parameters
        if setting.SHARE_EMD_WITH_SOFTMAX:
            softmax_weights = tf.transpose(embedding)
        else:
            softmax_weights = tf.get_variable('softmaweights', shape=[HIDDEN_SIZE, setting.VOCAB_SIZE])
        softmax_bais = tf.get_variable('softmax_bais', shape=[setting.VOCAB_SIZE])
        # embedding lookup
        emb = tf.nn.embedding_lookup(embedding, self.data)
        # dropout
        emb_dropout = tf.nn.dropout(emb, self.emb_keep)
        # run the recurrent network
        self.init_state = self.cell.zero_state(setting.BATCH_SIZE, dtype=tf.float32)
        outputs, last_state = tf.nn.dynamic_rnn(self.cell, emb_dropout, scope='d_rnn', dtype=tf.float32,
                                                initial_state=self.init_state)
        outputs = tf.reshape(outputs, [-1, HIDDEN_SIZE])
        # compute the logits
        logits = tf.matmul(outputs, softmax_weights) + softmax_bais
        return logits

    @define_scope
    def loss(self):
        """
        Loss function
        :return:
        """
        # cross entropy
        outputs_target = tf.reshape(self.labels, [-1])
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.predict, labels=outputs_target)
        # mean over all positions
        cost = tf.reduce_mean(loss)
        return cost

    @define_scope
    def global_step(self):
        """
        global_step
        :return:
        """
        global_step = tf.Variable(0, trainable=False)
        return global_step

    @define_scope
    def optimize(self):
        """
        Backward pass
        :return:
        """
        # learning-rate decay
        learn_rate = tf.train.exponential_decay(setting.LEARN_RATE, self.global_step, setting.LR_DECAY_STEP,
                                                setting.LR_DECAY)
        # compute gradients and clip them to prevent explosion
        trainable_variables = tf.trainable_variables()
        grads, _ = tf.clip_by_global_norm(tf.gradients(self.loss, trainable_variables), setting.MAX_GRAD)
        # create the optimizer and apply the gradients
        optimizer = tf.train.AdamOptimizer(learn_rate)
        train_op = optimizer.apply_gradients(zip(grads, trainable_variables), self.global_step)
        return train_op


class EvalModel(object):
    """
    Evaluation (generation) model
    """

    def __init__(self, data, emb_keep, rnn_keep):
        self.data = data  # input
        self.emb_keep = emb_keep  # embedding-layer dropout keep rate
        self.rnn_keep = rnn_keep  # LSTM-layer dropout keep rate
        self.cell
        self.predict
        self.prob

    @define_scope
    def cell(self):
        """
        RNN network structure
        :return:
        """
        lstm_cell = [
            tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.BasicLSTMCell(HIDDEN_SIZE), output_keep_prob=self.rnn_keep)
            for _ in range(NUM_LAYERS)]
        cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cell)
        return cell

    @define_scope
    def predict(self):
        """
        Forward pass
        :return:
        """
        embedding = tf.get_variable('embedding', shape=[setting.VOCAB_SIZE, HIDDEN_SIZE])
        if setting.SHARE_EMD_WITH_SOFTMAX:
            softmax_weights = tf.transpose(embedding)
        else:
            softmax_weights = tf.get_variable('softmaweights', shape=[HIDDEN_SIZE, setting.VOCAB_SIZE])
        softmax_bais = tf.get_variable('softmax_bais', shape=[setting.VOCAB_SIZE])
        emb = tf.nn.embedding_lookup(embedding, self.data)
        emb_dropout = tf.nn.dropout(emb, self.emb_keep)
        # unlike the training model, we only generate one poem at a time, so batch_size=1
        self.init_state = self.cell.zero_state(1, dtype=tf.float32)
        outputs, last_state = tf.nn.dynamic_rnn(self.cell, emb_dropout, scope='d_rnn', dtype=tf.float32,
                                                initial_state=self.init_state)
        outputs = tf.reshape(outputs, [-1, HIDDEN_SIZE])
        logits = tf.matmul(outputs, softmax_weights) + softmax_bais
        # unlike the training model, we keep the final state so we can generate
        # character by character until the poem is complete
        self.last_state = last_state
        return logits

    @define_scope
    def prob(self):
        """
        softmax to turn logits into probabilities
        :return:
        """
        probs = tf.nn.softmax(self.predict)
        return probs
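A side note on SHARE_EMD_WITH_SOFTMAX: sharing works here only because the embedding dimension equals HIDDEN_SIZE (both are 128), so the transposed embedding matrix has exactly the [HIDDEN_SIZE, VOCAB_SIZE] shape the softmax projection needs. A quick NumPy shape check, purely illustrative and not part of the model:

import numpy as np

VOCAB_SIZE, HIDDEN_SIZE = 6272, 128
embedding = np.zeros((VOCAB_SIZE, HIDDEN_SIZE))  # word id -> 128-d vector
softmax_weights = embedding.T                    # (128, 6272): hidden state -> vocab logits
outputs = np.zeros((5, HIDDEN_SIZE))             # five RNN output vectors
logits = np.dot(outputs, softmax_weights)
print logits.shape  # (5, 6272)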

3. Organizing the dataset

A small class organizes the data for convenient use during training. The code is straightforward and should need no explanation.

# -*- coding: utf-8 -*-
# @Time : 18-3-13 11:59 am
# @Author : AaronJny
# @Email : Aaron__7@163.com
import numpy as np

BATCH_SIZE = 64
DATA_PATH = 'processed_data/poetry.txt'


class Dataset(object):
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.data, self.target = self.read_data()
        self.start = 0
        self.length = len(self.data)

    def read_data(self):
        """
        Read the data from file and build the dataset
        :return: training data, training labels
        """
        # read the poem id vectors from file
        id_list = []
        with open(DATA_PATH, 'r') as f:
            f_lines = f.readlines()
            for line in f_lines:
                id_list.append([int(num) for num in line.strip().split()])
        # how many batches can be built
        num_batchs = len(id_list) // self.batch_size
        # data and target
        x_data = []
        y_data = []
        # build the batches
        for i in range(num_batchs):
            # slice out one batch of poems
            start = i * self.batch_size
            end = start + self.batch_size
            batch = id_list[start:end]
            # longest poem in the batch
            max_length = max(map(len, batch))
            # pad with zeros
            tmp_x = np.full((self.batch_size, max_length), 0, dtype=np.int32)
            # copy the data in
            for row in range(self.batch_size):
                tmp_x[row, :len(batch[row])] = batch[row]
            tmp_y = np.copy(tmp_x)
            tmp_y[:, :-1] = tmp_y[:, 1:]
            x_data.append(tmp_x)
            y_data.append(tmp_y)
        return x_data, y_data

    def next_batch(self):
        """
        Fetch the next batch
        :return:
        """
        start = self.start
        self.start += 1
        if self.start >= self.length:
            self.start = 0
        return self.data[start], self.target[start]


if __name__ == '__main__':
    dataset = Dataset(BATCH_SIZE)
    dataset.read_data()
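The one subtle line is the target construction: tmp_y is tmp_x shifted left by one position, so the label at every time step is the next character. A tiny standalone demonstration with made-up ids:

import numpy as np

batch = np.array([[4, 8, 15, 16, 0, 0]], dtype=np.int32)  # one padded poem; ids are made up
target = np.copy(batch)
target[:, :-1] = target[:, 1:]  # shift left: position t is labeled with the character at t+1
print batch   # [[ 4  8 15 16  0  0]]
print target  # [[ 8 15 16  0  0  0]]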

4. Training the model

With everything in place, training can begin.

Training does not proceed by epochs; it simply loops over a fixed number of mini-batches.

During training, the current step and loss value are printed periodically, and the model and its checkpoint are saved at regular intervals.

Training code:

# -*- coding: utf-8 -*-
# @Time : 18-3-13 2:50 pm
# @Author : AaronJny
# @Email : Aaron__7@163.com
import tensorflow as tf
from rnn_models import TrainModel
import dataset
import setting

TRAIN_TIMES = 30000  # total number of iterations (not counted in epochs)
SHOW_STEP = 1  # how often to print the loss
SAVE_STEP = 100  # how often to save the model parameters

x_data = tf.placeholder(tf.int32, [setting.BATCH_SIZE, None])  # input data
y_data = tf.placeholder(tf.int32, [setting.BATCH_SIZE, None])  # labels
emb_keep = tf.placeholder(tf.float32)  # embedding-layer dropout keep rate
rnn_keep = tf.placeholder(tf.float32)  # LSTM-layer dropout keep rate

data = dataset.Dataset(setting.BATCH_SIZE)  # create the dataset

model = TrainModel(x_data, y_data, emb_keep, rnn_keep)  # create the training model

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initialize
    for step in range(TRAIN_TIMES):
        # fetch a training batch
        x, y = data.next_batch()
        # compute the loss and run one optimization step
        loss, _ = sess.run([model.loss, model.optimize],
                           {model.data: x, model.labels: y, model.emb_keep: setting.EMB_KEEP,
                            model.rnn_keep: setting.RNN_KEEP})
        if step % SHOW_STEP == 0:
            print 'step {}, loss is {}'.format(step, loss)
        # save the model
        if step % SAVE_STEP == 0:
            saver.save(sess, setting.CKPT_PATH, global_step=model.global_step)

5. Evaluating the model

Two ways to evaluate the model are provided:

generating a random classical-style poem

generating an acrostic poem

The randomly generated results are passable: at least the form is right, and they look the part.

The acrostics are far more hit-and-miss; the results are often poor, and it usually takes many attempts to get a barely acceptable one. That's understandable, though: there is no guarantee of how the characters we pin to the line heads are distributed in the training set.

Here is a brief outline of how a poem is generated:

1. First, restore the parameters saved by the training model, overwriting the evaluation model's parameters.

2. Feed the start marker 's' to the model. It outputs the probability of each word in the vocabulary being the next character, along with the RNN state. Note that at evaluation time the dropout keep rates should be set to 1.0.

3. Using the probabilities from step 2, pick the next character by roulette-wheel selection (see the sketch after this list).

4. Feed the sampled character back in as the next input, together with the state from the previous step. The model again outputs next-character probabilities for the vocabulary and a new state.

5. Repeat steps 3 and 4 until the end marker 'e' is sampled. All the characters generated along the way make up the poem ('s' and 'e' excluded).

Generating an acrostic works much the same way. The main difference is that at the start, and every time a punctuation mark is predicted, the model is fed the next character of the hidden head instead of a sampled one. See the code for details.

# -*- coding: utf-8 -*-
# @Time : 18-3-13 2:50 pm
# @Author : AaronJny
# @Email : Aaron__7@163.com
import sys
reload(sys)
sys.setdefaultencoding('utf8')

import tensorflow as tf
import numpy as np
from rnn_models import EvalModel
import utils
import os

# hide CUDA devices during evaluation, so generation can run on the CPU
# while the GPU keeps training
os.environ['CUDA_VISIBLE_DEVICES'] = ''

x_data = tf.placeholder(tf.int32, [1, None])
emb_keep = tf.placeholder(tf.float32)
rnn_keep = tf.placeholder(tf.float32)

# evaluation model
model = EvalModel(x_data, emb_keep, rnn_keep)

saver = tf.train.Saver()
# word->id mapping
word2id_dict = utils.read_word_to_id_dict()
# id->word mapping
id2word_dict = utils.read_id_to_word_dict()


def generate_word(prob):
    """
    Keep the 100 most probable words and pick one of them by roulette-wheel selection
    :param prob: probability vector
    :return: the generated word
    """
    prob = np.squeeze(prob)
    # vocabulary ids of the 100 most probable words, most probable first
    top_ids = np.argsort(prob)[::-1][:100]
    top_probs = prob[top_ids]
    # roulette wheel: cumulative sum, then find where a uniform draw lands
    index = int(np.searchsorted(np.cumsum(top_probs), np.random.rand(1) * np.sum(top_probs)))
    return id2word_dict[int(top_ids[index])]


# def generate_word(prob):
#     """
#     Roulette-wheel selection over the whole vocabulary
#     :param prob: probability vector
#     :return: the generated word
#     """
#     prob = np.squeeze(prob)
#     index = int(np.searchsorted(np.cumsum(prob), np.random.rand(1) * np.sum(prob)))
#     return id2word_dict[index]


def generate_poem():
    """
    Generate a random poem
    :return:
    """
    with tf.Session() as sess:
        # load the latest model
        ckpt = tf.train.get_checkpoint_state('ckpt')
        saver.restore(sess, ckpt.model_checkpoint_path)
        # predict the first character
        rnn_state = sess.run(model.cell.zero_state(1, tf.float32))
        x = np.array([[word2id_dict['s']]], np.int32)
        prob, rnn_state = sess.run([model.prob, model.last_state],
                                   {model.data: x, model.init_state: rnn_state, model.emb_keep: 1.0,
                                    model.rnn_keep: 1.0})
        word = generate_word(prob)
        poem = ''
        # loop until the end marker 'e' is predicted
        while word != 'e':
            poem += word
            x = np.array([[word2id_dict[word]]])
            prob, rnn_state = sess.run([model.prob, model.last_state],
                                       {model.data: x, model.init_state: rnn_state, model.emb_keep: 1.0,
                                        model.rnn_keep: 1.0})
            word = generate_word(prob)
        # print the generated poem
        print poem


def generate_acrostic(head):
    """
    Generate an acrostic poem
    :param head: string made of the first character of each line
    :return:
    """
    with tf.Session() as sess:
        # load the latest model
        ckpt = tf.train.get_checkpoint_state('ckpt')
        saver.restore(sess, ckpt.model_checkpoint_path)
        # predict
        rnn_state = sess.run(model.cell.zero_state(1, tf.float32))
        poem = ''
        cnt = 1
        # generate the poem line by line
        for x in head:
            word = x
            while word != ',' and word != '。':
                poem += word
                x = np.array([[word2id_dict[word]]])
                prob, rnn_state = sess.run([model.prob, model.last_state],
                                           {model.data: x, model.init_state: rnn_state, model.emb_keep: 1.0,
                                            model.rnn_keep: 1.0})
                word = generate_word(prob)
                # give up on overlong lines (breaks the inner loop only)
                if len(poem) > 25:
                    print 'bad.'
                    break
            # add punctuation depending on whether the line is odd or even
            if cnt & 1:
                poem += ','
            else:
                poem += '。'
            cnt += 1
        # print the generated poem
        print poem
        return poem


if __name__ == '__main__':
    # generate_acrostic(u'神策')
    generate_poem()

6. Extracted helpers and configuration

These are simple, so no commentary.

utils.py

# -*- coding: utf-8 -*-
# @Time : 18-3-13 4:16 pm
# @Author : AaronJny
# @Email : Aaron__7@163.com
import setting


def read_word_list():
    """
    Read the vocabulary from file
    :return: word list
    """
    with open(setting.VOCAB_PATH, 'r') as f:
        word_list = [word for word in f.read().decode('utf8').strip().split('\n')]
    return word_list


def read_word_to_id_dict():
    """
    Build the word->id mapping
    :return:
    """
    word_list = read_word_list()
    word2id = dict(zip(word_list, range(len(word_list))))
    return word2id


def read_id_to_word_dict():
    """
    Build the id->word mapping
    :return:
    """
    word_list = read_word_list()
    id2word = dict(zip(range(len(word_list)), word_list))
    return id2word


if __name__ == '__main__':
    read_id_to_word_dict()

setting.py

# -*- coding: utf-8 -*-
# @Time : 18-3-13 3:08 pm
# @Author : AaronJny
# @Email : Aaron__7@163.com
VOCAB_SIZE = 6272  # vocabulary size
SHARE_EMD_WITH_SOFTMAX = True  # share parameters between the embedding and softmax layers
MAX_GRAD = 5.0  # gradient-clipping threshold, to prevent exploding gradients
LEARN_RATE = 0.0005  # initial learning rate
LR_DECAY = 0.92  # learning-rate decay factor
LR_DECAY_STEP = 600  # decay step
BATCH_SIZE = 64  # batch size
CKPT_PATH = 'ckpt/model_ckpt'  # model save path
VOCAB_PATH = 'vocab/poetry.vocab'  # vocabulary path
EMB_KEEP = 0.5  # embedding-layer dropout keep rate
RNN_KEEP = 0.5  # LSTM-layer dropout keep rate
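To get a feel for the schedule these constants define: with staircase left at its default of False, tf.train.exponential_decay computes LEARN_RATE * LR_DECAY ** (step / LR_DECAY_STEP) with a real-valued exponent. A quick sanity check at a few steps:

LEARN_RATE, LR_DECAY, LR_DECAY_STEP = 0.0005, 0.92, 600

for step in (0, 600, 6000, 30000):
    lr = LEARN_RATE * LR_DECAY ** (float(step) / LR_DECAY_STEP)
    print 'step %5d -> learning rate %.3g' % (step, lr)
# step 0 -> 0.0005, step 600 -> 0.00046,
# step 6000 -> ~0.000217, step 30000 -> ~7.7e-06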

7. Wrapping up

That's it for the code. If you're interested, clone it, run it, and play around; I won't do more testing here.

The project is on GitHub: https://github.com/AaronJny/peotry_generate

I'm still learning and my skills are limited; if you notice any mistakes or omissions in this post, please point them out. Many thanks!

That's all for this article. I hope it's helpful for your studies.

