

Example code for several Scrapy spider crawling patterns


This lesson introduces the Scrapy crawling framework, with a focus on the spider component.

A spider can crawl in several ways:

    1. Crawl a single page
    2. Build URLs from a given list and crawl multiple pages
    3. Find the "next page" link and follow it
    4. Enter each link and crawl the linked page

An example of each is given below.

1. Crawling a single page

# by 寒小陽 (hanxiaoyang.ml@gmail.com)
import scrapy


class JulyeduSpider(scrapy.Spider):
    name = "julyedu"
    start_urls = [
        'https://www.julyedu.com/category/index',
    ]

    def parse(self, response):
        # Each course card on the listing page is a div with class "course_info_box"
        for julyedu_class in response.xpath('//div[@class="course_info_box"]'):
            item = {
                'title': julyedu_class.xpath('a/h4/text()').extract_first(),
                'desc': julyedu_class.xpath('a/p[@class="course-info-tip"][1]/text()').extract_first(),
                'time': julyedu_class.xpath('a/p[@class="course-info-tip"][2]/text()').extract_first(),
                'img_url': response.urljoin(julyedu_class.xpath('a/img[1]/@src').extract_first()),
            }
            # Print each field for inspection, then yield the item to the pipeline
            for value in item.values():
                print(value)
            print("\n")
            yield item

2. Building URLs from a given list to crawl multiple pages

# by 寒小陽 (hanxiaoyang.ml@gmail.com)
import scrapy


class CnBlogSpider(scrapy.Spider):
    name = "cnblogs"
    allowed_domains = ["cnblogs.com"]
    # Build the ten listing URLs up front from page numbers 1-10
    start_urls = [
        'http://www.cnblogs.com/pick/#p%s' % p for p in range(1, 11)
    ]

    def parse(self, response):
        for article in response.xpath('//div[@class="post_item"]'):
            item = {
                'title': article.xpath('div[@class="post_item_body"]/h3/a/text()').extract_first().strip(),
                'link': response.urljoin(article.xpath('div[@class="post_item_body"]/h3/a/@href').extract_first()).strip(),
                'summary': article.xpath('div[@class="post_item_body"]/p/text()').extract_first().strip(),
                'author': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/a/text()').extract_first().strip(),
                'author_link': response.urljoin(article.xpath('div[@class="post_item_body"]/div/a/@href').extract_first()).strip(),
                'comment': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_comment"]/a/text()').extract_first().strip(),
                'view': article.xpath('div[@class="post_item_body"]/div[@class="post_item_foot"]/span[@class="article_view"]/a/text()').extract_first().strip(),
            }
            # Print each field for inspection, then yield the item to the pipeline
            for value in item.values():
                print(value)
            print("")
            yield item
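3. Finding the "next page" link and crawling it

Instead of building every listing URL up front, the spider can extract the URL of the "next page" link from the current page and follow it with the same callback, stopping when no such link is found. The snippet below is only a minimal sketch of this pattern, reusing the cnblogs picked-posts listing from above; the pager selector, spider name, and field names are assumptions rather than selectors taken from the original.

# Minimal sketch of following a "next page" link; the pager selector is an
# assumed one and may need adjusting to the real page markup.
import scrapy


class CnBlogNextPageSpider(scrapy.Spider):
    name = "cnblogs_nextpage"
    allowed_domains = ["cnblogs.com"]
    start_urls = ['http://www.cnblogs.com/pick/']

    def parse(self, response):
        for article in response.xpath('//div[@class="post_item"]'):
            yield {
                'title': article.xpath('div[@class="post_item_body"]/h3/a/text()').extract_first().strip(),
                'link': response.urljoin(article.xpath('div[@class="post_item_body"]/h3/a/@href').extract_first()),
            }
        # Assumed pager markup: an <a> whose text contains "Next" inside the pager div
        next_page = response.xpath('//div[@class="pager"]/a[contains(text(), "Next")]/@href').extract_first()
        if next_page:
            # Follow the link and parse the next listing page with the same callback
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)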
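4. Entering each link and crawling the linked page

Here the listing page is only used to collect article URLs; each URL is then requested in turn and a second callback parses the detail page. This too is only a sketch of the pattern: the detail-page selectors (post title and body) are illustrative assumptions and may need adjusting.

# Minimal sketch of a two-level crawl: collect links on the listing page,
# then request each link and parse the detail page in a second callback.
import scrapy


class CnBlogDetailSpider(scrapy.Spider):
    name = "cnblogs_detail"
    allowed_domains = ["cnblogs.com"]
    start_urls = ['http://www.cnblogs.com/pick/']

    def parse(self, response):
        # Collect the link of every article on the listing page
        for href in response.xpath('//div[@class="post_item_body"]/h3/a/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_article)

    def parse_article(self, response):
        # The detail-page selectors below are assumptions, not taken from the original post
        yield {
            'url': response.url,
            'title': response.xpath('//a[@id="cb_post_title_url"]/text()').extract_first(),
            'body': ' '.join(response.xpath('//div[@id="cnblogs_post_body"]//text()').extract()),
        }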