Preface
Recently at work I ran into a requirement to extract some statistics from CDN logs: traffic, status-code counts, TOP IP, URL, UA, Referer, and so on. I used to do this with bash shell, but once the logs grow large, with files of several GB and tens of millions of lines, shell becomes inadequate and takes too long. So I looked into the Python data-processing library pandas. Ten million lines of log can be processed in about 40 seconds.
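To see why pandas helps here: the counting that shell does with a sort | uniq -c | sort -rn pipeline is a single value_counts() call in pandas, computed in memory over the whole column. A minimal sketch, where the sample status codes are made up for illustration:

import pandas as pd

# Hypothetical mini-example: counting occurrences of each status code,
# the pandas equivalent of `... | sort | uniq -c | sort -rn | head`
codes = pd.Series([200, 200, 404, 200, 502])
print(codes.value_counts().head(10))  # 200 -> 3, 404 -> 1, 502 -> 1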
Code
#!/usr/bin/python
# -*- coding: utf-8 -*-
# sudo pip install pandas
__author__ = 'Loya Chen'

import sys
import pandas as pd
from collections import OrderedDict

"""
Description: This script is used to analyse qiniu cdn log.
================================================================================
Log format
IP - ResponseTime [time +0800] "Method URL HTTP/1.1" code size "referer" "UA"
================================================================================
Log sample
 [0]          [1][2]          [3]            [4]               [5]
101.226.66.179 - 68 [16/Nov/2016:04:36:40 +0800] "GET http://www.qn.com/1.jpg -"
 [6] [7] [8]                          [9]
200 502 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"
================================================================================
"""

if len(sys.argv) != 2:
    print('Usage:', sys.argv[0], 'file_of_log')
    sys.exit()
else:
    log_file = sys.argv[1]

# Column positions of the fields to be aggregated
ip = 0
url = 5
status_code = 6
size = 7
referer = 8
ua = 9

# Read the log into a DataFrame, in chunks to bound memory usage
reader = pd.read_table(log_file, sep=' ', names=[i for i in range(10)], iterator=True)
loop = True
chunkSize = 10000000
chunks = []
while loop:
    try:
        chunk = reader.get_chunk(chunkSize)
        chunks.append(chunk)
    except StopIteration:
        # Iteration is stopped
        loop = False
df = pd.concat(chunks, ignore_index=True)

byte_sum = df[size].sum()                                       # total traffic
top_status_code = pd.DataFrame(df[status_code].value_counts())  # status code counts
top_ip = df[ip].value_counts().head(10)                         # TOP IP
top_referer = df[referer].value_counts().head(10)               # TOP Referer
top_ua = df[ua].value_counts().head(10)                         # TOP User-Agent
top_status_code['percent'] = pd.DataFrame(top_status_code/top_status_code.sum()*100)
top_url = df[url].value_counts().head(10)                       # TOP URL
top_url_byte = (df[[url, size]].groupby(url).sum()
                .apply(lambda x: x.astype(float)/1024/1024)
                .round(decimals=3)
                .sort_values(by=[size], ascending=False)[size]
                .head(10))                                      # URLs with the most traffic
top_ip_byte = (df[[ip, size]].groupby(ip).sum()
               .apply(lambda x: x.astype(float)/1024/1024)
               .round(decimals=3)
               .sort_values(by=[size], ascending=False)[size]
               .head(10))                                       # IPs with the most traffic

# Store the results in an ordered dict
result = OrderedDict([
    ("Total traffic [GB]:",                     byte_sum/1024/1024/1024),
    ("Status codes [count|percent]:",           top_status_code),
    ("IP TOP 10:",                              top_ip),
    ("Referer TOP 10:",                         top_referer),
    ("UA TOP 10:",                              top_ua),
    ("URL TOP 10:",                             top_url),
    ("URLs with the most traffic TOP 10 [MB]:", top_url_byte),
    ("IPs with the most traffic TOP 10 [MB]:",  top_ip_byte),
])

# Print the results
for k, v in result.items():
    print(k)
    print(v)
    print('='*80)
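Invocation follows the Usage string, e.g. python cdn_log_analysis.py cdn.log, where both file names are placeholders. As a side note, read_table has since been deprecated in pandas in favor of read_csv, which can also do the chunked read directly via its chunksize parameter; a minimal sketch of the equivalent read, assuming the same 10-column space-separated layout as above:

import pandas as pd

# Sketch: equivalent chunked read on newer pandas, where read_table is
# deprecated. chunkSize and the 10-column layout match the script above;
# 'cdn.log' is a placeholder file name.
chunkSize = 10000000
reader = pd.read_csv('cdn.log', sep=' ', names=list(range(10)), chunksize=chunkSize)
df = pd.concat(reader, ignore_index=True)  # reader is an iterator of DataFrames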