First, a note: the "large file" here does not mean the archive itself is large. The zip is only a few dozen MB, but it expands to several hundred MB once decompressed. Along the way I hit cases where decompression failed, and where reading a small file worked but reading the large one did not.
import zipfile

def unzip_to_txt_plus(zipfilename):
  zfile = zipfile.ZipFile(zipfilename, 'r')
  for filename in zfile.namelist():
    data = zfile.read(filename)
    # data = data.decode('gbk').encode('utf-8')
    data = data.decode('gbk', 'ignore').encode('utf-8')  # re-encode the GBK content as UTF-8
    file = open(filename, 'w+b')
    file.write(data)
    file.close()

if __name__ == '__main__':
  zipfilename = "E://share//python_excel//zip_to_database//20171025.zip"
  unzip_to_txt_plus(zipfilename)

Note the 'ignore' argument: decoding is strict by default, so without it the call raises an error as soon as it meets bytes that are not valid GBK.
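One thing worth pointing out is that zfile.read() loads each archive member entirely into memory before decoding it. Since the whole point here is large files, a streaming variant can keep memory usage flat. The sketch below is my own addition rather than the original author's code: the function name unzip_to_txt_streaming and the chunk size are assumptions, and it wraps the member in an incremental GBK decoder so multi-byte characters are not split across chunks.

import io
import zipfile

def unzip_to_txt_streaming(zipfilename):
  # Minimal sketch, not from the original post: stream each member through
  # an incremental GBK decoder instead of reading it all at once.
  with zipfile.ZipFile(zipfilename, 'r') as zfile:
    for filename in zfile.namelist():
      src = io.TextIOWrapper(zfile.open(filename), encoding='gbk', errors='ignore')
      with src, open(filename, 'w', encoding='utf-8') as dst:
        for chunk in iter(lambda: src.read(64 * 1024), ''):
          dst.write(chunk)

The output is the same UTF-8 text files, but peak memory stays around one chunk per member instead of the full decompressed size.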
Because that function has already converted the files to UTF-8, the later reads succeed. Below is the code for reading the large file (the database-related parts are omitted).
# -*- coding: utf-8 -*-
import csv
import linecache
import xlrd      # used by the omitted Excel code
import MySQLdb   # used by the omitted database code

def txt_todatabase(filename, linenum):
  # with open(filename, "r", encoding="gbk") as csvfile:
  #   Read = csv.reader(csvfile)
  #   count = 0
  #   for i in Read:
  #     # print(i)
  #     count += 1
  #     # print('hello')
  #   print(count)
  count = linecache.getline(filename, linenum)  # fetch a single line by its line number
  print(count)
  # with open("new20171028.csv", "w", newline="") as datacsv:
  #   # dialect selects the CSV flavour (default is excel); delimiter="\t" would set the separator
  #   csvwriter = csv.writer(datacsv, dialect="excel")
  #   # write one row, one list item per cell (loop to insert multiple rows)
  #   csvwriter.writerow(["A", "B", "C", "D"])

def bigtxt_read(filename):
  with open(filename, 'r', encoding='utf-8') as data:
    count = 0
    while 1:
      count += 1
      line = data.readline()
      if 1000000 == count:
        print(line)
      if not line:
        break
    print(count)

if __name__ == '__main__':
  filename = '20171025.txt'
  txt_todatabase(filename, 1000000)
  bigtxt_read(filename)

Comparing the two, they turn out to be roughly equally fast; two million lines of data are no problem for either approach.
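One caveat, added by me rather than taken from the original post: linecache.getline() caches the entire file in memory, so for files of several hundred MB the plain line-by-line read is the lighter option. Below is a minimal sketch of fetching one specific line without caching the whole file; the helper name get_line_by_number is hypothetical, and it assumes the same UTF-8 file produced above.

def get_line_by_number(filename, linenum):
  # Hypothetical helper (not in the original article): scan the file lazily
  # and stop as soon as the requested line is reached; memory stays constant.
  with open(filename, 'r', encoding='utf-8') as f:
    for i, line in enumerate(f, start=1):
      if i == linenum:
        return line
  return ''  # the file has fewer lines than requested

if __name__ == '__main__':
  print(get_line_by_number('20171025.txt', 1000000))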
That is all for this article; hopefully it helps with your learning.