

Example code for several Scrapy spider crawling approaches

2020-01-04 16:06:44

This article introduces the Scrapy crawler framework, focusing on the Scrapy spider component.

A spider can crawl in several ways:

  1. Crawl the content of a single page
  2. Build links from a given list and crawl multiple pages
  3. Find the 'next page' link and crawl page after page
  4. Enter the links and crawl the pages they point to

Examples of each are given below.

1. Crawl the content of a single page

# by 寒小陽 (hanxiaoyang.ml@gmail.com)
import scrapy


class JulyeduSpider(scrapy.Spider):
    name = "julyedu"
    start_urls = [
        'https://www.julyedu.com/category/index',
    ]

    def parse(self, response):
        # Each course card on the listing page sits in a course_info_box div.
        for julyedu_class in response.xpath('//div[@class="course_info_box"]'):
            print(julyedu_class.xpath('a/h4/text()').extract_first())
            print(julyedu_class.xpath('a/p[@class="course-info-tip"][1]/text()').extract_first())
            print(julyedu_class.xpath('a/p[@class="course-info-tip"][2]/text()').extract_first())
            print(response.urljoin(julyedu_class.xpath('a/img[1]/@src').extract_first()))
            print("\n")

            yield {
                'title': julyedu_class.xpath('a/h4/text()').extract_first(),
                'desc': julyedu_class.xpath('a/p[@class="course-info-tip"][1]/text()').extract_first(),
                'time': julyedu_class.xpath('a/p[@class="course-info-tip"][2]/text()').extract_first(),
                'img_url': response.urljoin(julyedu_class.xpath('a/img[1]/@src').extract_first()),
            }
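To try the spider without creating a full Scrapy project, one option is to drive it from a plain Python script with scrapy.crawler.CrawlerProcess. This is a minimal sketch, not part of the original example: it assumes it runs in the same file as the JulyeduSpider class above, the julyedu.json output path is a hypothetical choice, and the FEEDS setting assumes Scrapy 2.0 or later.

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings={
    # Write everything the spider yields to a JSON feed file
    # (julyedu.json is a hypothetical name).
    'FEEDS': {'julyedu.json': {'format': 'json'}},
})
process.crawl(JulyeduSpider)
process.start()  # blocks until the crawl finishes

The same effect is available from the command line with scrapy runspider, if the spider is saved as its own file.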

2. Build links from a given list and crawl multiple pages

# by 寒小陽 (hanxiaoyang.ml@gmail.com)
import scrapy


class CnBlogSpider(scrapy.Spider):
    name = "cnblogs"
    allowed_domains = ["cnblogs.com"]
    # Build the ten page URLs from a list comprehension.
    start_urls = [
        'http://www.cnblogs.com/pick/#p%s' % p for p in range(1, 11)
    ]

    def parse(self, response):
        for article in response.xpath('//div[@class="post_item"]'):
            print(article.xpath('div[@class="post_item_body"]/h3/a/text()').extract_first().strip())
            print(response.urljoin(article.xpath('div[@class="post_item_body"]/h3/a/@href').extract_first()).strip())
            print(article.xpath('div[@class="post_item_body"]/p/text()').extract_first().strip())
            print(article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/a/text()').extract_first().strip())
            print(response.urljoin(article.xpath('div[@class="post_item_body"]/div/a/@href').extract_first()).strip())
            print(article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_comment"]/a/text()').extract_first().strip())
            print(article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_view"]/a/text()').extract_first().strip())
            print("")

            yield {
                'title': article.xpath('div[@class="post_item_body"]/h3/a/text()').extract_first().strip(),
                'link': response.urljoin(article.xpath('div[@class="post_item_body"]/h3/a/@href').extract_first()).strip(),
                'summary': article.xpath('div[@class="post_item_body"]/p/text()').extract_first().strip(),
                'author': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/a/text()').extract_first().strip(),
                'author_link': response.urljoin(article.xpath('div[@class="post_item_body"]/div/a/@href').extract_first()).strip(),
                'comment': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_comment"]/a/text()').extract_first().strip(),
                'view': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_view"]/a/text()').extract_first().strip(),
            }
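Precomputing start_urls with a list comprehension works well for a fixed page count; the same pagination can also be expressed through the spider's start_requests method, which Scrapy calls to generate the initial requests. A minimal sketch of that equivalent approach, not from the original article (the class name is hypothetical, and the parse body is elided because it matches the example above):

import scrapy

class CnBlogPagesSpider(scrapy.Spider):
    name = "cnblogs_pages"
    allowed_domains = ["cnblogs.com"]

    def start_requests(self):
        # Yield one request per page instead of building start_urls up front;
        # useful when the page range is only known at runtime.
        for p in range(1, 11):
            yield scrapy.Request('http://www.cnblogs.com/pick/#p%s' % p,
                                 callback=self.parse)

    def parse(self, response):
        # Same extraction logic as the CnBlogSpider example above.
        pass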

3. Find the 'next page' link and crawl page after page

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/tag/humor/',
    ]

    def parse(self, response):
        for quote in response.xpath('//div[@class="quote"]'):
            yield {
                'text': quote.xpath('span[@class="text"]/text()').extract_first(),
                'author': quote.xpath('span/small[@class="author"]/text()').extract_first(),
            }

        # Follow the "next page" link until there is none left.
        next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
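On Scrapy 1.4 and later, response.follow accepts a relative URL directly, so the explicit response.urljoin step can be dropped. A sketch of the same spider under that assumption (the class and spider name are hypothetical variants, chosen only to avoid clashing with the example above):

import scrapy

class QuotesFollowSpider(scrapy.Spider):
    name = "quotes_follow"
    start_urls = ['http://quotes.toscrape.com/tag/humor/']

    def parse(self, response):
        for quote in response.xpath('//div[@class="quote"]'):
            yield {
                'text': quote.xpath('span[@class="text"]/text()').extract_first(),
                'author': quote.xpath('span/small[@class="author"]/text()').extract_first(),
            }
        next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page is not None:
            # response.follow resolves the relative URL against the current
            # page, so no response.urljoin call is needed (Scrapy >= 1.4).
            yield response.follow(next_page, callback=self.parse)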

4. Enter the links and crawl the pages they point to

# by 寒小陽 (hanxiaoyang.ml@gmail.com)
import scrapy


class QQNewsSpider(scrapy.Spider):
    name = 'qqnews'
    start_urls = ['http://news.qq.com/society_index.shtml']

    def parse(self, response):
        # Collect the article links on the index page and follow each one.
        for href in response.xpath('//*[@id="news"]/div/div/div/div/em/a/@href'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        print(response.xpath('//div[@class="qq_article"]/div/h1/text()').extract_first())
        print(response.xpath('//span[@class="a_time"]/text()').extract_first())
        print(response.xpath('//span[@class="a_catalog"]/a/text()').extract_first())
        print("\n".join(response.xpath('//div[@id="Cnt-Main-Article-QQ"]/p[@class="text"]/text()').extract()))
        print("")

        yield {
            'title': response.xpath('//div[@class="qq_article"]/div/h1/text()').extract_first(),
            'content': "\n".join(response.xpath('//div[@id="Cnt-Main-Article-QQ"]/p[@class="text"]/text()').extract()),
            'time': response.xpath('//span[@class="a_time"]/text()').extract_first(),
            'cate': response.xpath('//span[@class="a_catalog"]/a/text()').extract_first(),
        }
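The plain dicts yielded above work fine; for larger projects, declaring an Item class makes the field names explicit and lets Scrapy catch typos when a value is assigned. A minimal sketch, not part of the original code (QQNewsItem is a hypothetical name):

import scrapy

class QQNewsItem(scrapy.Item):
    # One Field per key yielded by parse_question above.
    title = scrapy.Field()
    content = scrapy.Field()
    time = scrapy.Field()
    cate = scrapy.Field()

parse_question would then yield QQNewsItem(title=..., content=..., time=..., cate=...) instead of a dict, with everything else unchanged.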

Summary

That covers the example code for the several Scrapy spider crawling approaches; I hope it is helpful. Interested readers can browse the other related topics on this site; if anything here falls short, feel free to leave a comment. Thanks for supporting the site!

