This article walks through the file download feature implemented with the Python crawler framework Scrapy, shared here for your reference. The details are as follows:
When writing an ordinary script, we take a file's download URL from a website, fetch it, and write the data to disk ourselves. That code has to be spelled out by hand every time and is rarely reusable. To avoid reinventing the wheel, Scrapy ships with a smooth file-download mechanism (the FilesPipeline) that takes only a few lines of code to wire up.
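For contrast, the hand-rolled approach mentioned above might look like the following minimal sketch. It assumes the third-party requests library, and the URL and filename are placeholders, not part of the project below:

import requests

def download(url, filename):
    # fetch the file and stream it to disk chunk by chunk
    resp = requests.get(url, stream=True)
    resp.raise_for_status()
    with open(filename, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)

download('https://example.com/some_script.py', 'some_script.py')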
mat.py
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from weidashang.items import matplotlib

class MatSpider(scrapy.Spider):
    name = "mat"
    allowed_domains = ["matplotlib.org"]
    start_urls = ['https://matplotlib.org/examples']

    def parse(self, response):
        # collect the page of every example script; each one is then
        # followed and its source file downloaded
        extractor = LinkExtractor(restrict_css='div.toctree-wrapper.compound li.toctree-l2')
        for link in extractor.extract_links(response):
            yield scrapy.Request(url=link.url, callback=self.example)

    def example(self, response):
        # on each script's page, grab the "source code" link and join it
        # with the base URL to form a complete download URL
        href = response.css('a.reference.external::attr(href)').extract_first()
        url = response.urljoin(href)
        example = matplotlib()
        example['file_urls'] = [url]
        return example
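The file_urls field is the hook here: the FilesPipeline enabled in settings.py collects every URL listed in it, downloads the file, and records the result in the item's files field. By default the pipeline would store each download under a full/ directory with a SHA1 hash of the URL as the filename, which is why the pipeline below overrides file_path() to keep human-readable names instead.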
pipelines.py

from os.path import basename, dirname, join
from urllib.parse import urlparse

from scrapy.pipelines.files import FilesPipeline

class MyFilePlipeline(FilesPipeline):
    def file_path(self, request, response=None, info=None):
        # keep the parent directory plus the filename,
        # e.g. "animation/animate_decay.py"
        path = urlparse(request.url).path
        return join(basename(dirname(path)), basename(path))
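A quick, standalone way to sanity-check that path logic (the URL below is an assumed example, not taken from the crawl output):

from os.path import basename, dirname, join
from urllib.parse import urlparse

path = urlparse('https://matplotlib.org/mpl_examples/animation/animate_decay.py').path
print(join(basename(dirname(path)), basename(path)))  # -> animation/animate_decay.py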
settings.py
ITEM_PIPELINES = {
    'weidashang.pipelines.MyFilePlipeline': 1,
}
FILES_STORE = 'examples_src'
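The value 1 is the pipeline's order (lower numbers run earlier, in the 0-1000 range), and FILES_STORE is the directory, resolved relative to where the crawl is launched, that downloaded files are saved under. Two related settings Scrapy also supports, shown commented out here as optional extras rather than part of the original project:

# FILES_EXPIRES = 90            # days before an already-downloaded file is fetched again
# MEDIA_ALLOW_REDIRECTS = True  # let the media pipelines follow HTTP redirects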
items.py

from scrapy import Item, Field

class matplotlib(Item):
    file_urls = Field()
    files = Field()
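Once a download succeeds, the pipeline fills in the files field with metadata about each file; the shape is roughly as follows (the values are illustrative):

# {'file_urls': ['https://matplotlib.org/mpl_examples/animation/animate_decay.py'],
#  'files': [{'url': 'https://matplotlib.org/mpl_examples/animation/animate_decay.py',
#             'path': 'animation/animate_decay.py',
#             'checksum': '<md5 of the downloaded file>'}]}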
run.py
from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'mat', '-o', 'example.json'])
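Run it with python run.py from the project root (the directory containing scrapy.cfg); it is equivalent to running scrapy crawl mat -o example.json on the command line. The downloaded scripts land under examples_src/ as set by FILES_STORE, and example.json records the exported items.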
Hopefully this article is of some help to readers working on Python programming.