

Example: a web crawler in Python using RabbitMQ

2019-11-25 18:30:42
Source: reprint, contributed by a reader

Write tasks.py

The code is as follows:

from celery import Celery
from tornado.httpclient import HTTPClient, HTTPError

app = Celery('tasks')
app.config_from_object('celeryconfig')

@app.task
def get_html(url):
    # Fetch the page synchronously; return the body, or None on an HTTP error.
    http_client = HTTPClient()
    try:
        response = http_client.fetch(url, follow_redirects=True)
        return response.body
    except HTTPError:
        return None
    finally:
        # The original called close() after the return statement, so it never
        # ran; a finally block guarantees the client is closed.
        http_client.close()
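Before the spider can dispatch any tasks, a Celery worker must be running to consume them from RabbitMQ. Assuming RabbitMQ is installed and listening on its default port, the worker can be started with something like (exact flags may vary by Celery version):

```shell
celery -A tasks worker --loglevel=info
```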

Write celeryconfig.py

The code is as follows:

CELERY_IMPORTS = ('tasks',)
BROKER_URL = 'amqp://guest@localhost:5672//'
CELERY_RESULT_BACKEND = 'amqp://'
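The BROKER_URL follows the standard amqp://user:password@host:port/vhost form: guest is RabbitMQ's default account, and the trailing // denotes the default virtual host /. A quick standard-library check shows how the URL decomposes:

```python
from urllib.parse import urlparse

# Decompose the broker URL used in celeryconfig.py above.
broker = urlparse('amqp://guest@localhost:5672//')
print(broker.scheme)    # amqp
print(broker.username)  # guest
print(broker.hostname)  # localhost
print(broker.port)      # 5672
```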

Write spider.py

The code is as follows:

from tasks import get_html
from queue import Queue
from bs4 import BeautifulSoup
from urllib.parse import urlparse, urljoin
import threading

class Spider(object):
    def __init__(self):
        self.visited = {}
        self.queue = Queue()

    def process_html(self, html):
        pass
        # print(html)

    def _add_links_to_queue(self, url_base, html):
        soup = BeautifulSoup(html, 'html.parser')
        for link in soup.find_all('a'):
            try:
                url = link['href']
            except KeyError:
                continue
            url_com = urlparse(url)
            if not url_com.netloc:
                # Relative link: resolve it against the page it was found on.
                self.queue.put(urljoin(url_base, url))
            else:
                self.queue.put(url_com.geturl())

    def start(self, url):
        self.queue.put(url)
        for i in range(20):
            t = threading.Thread(target=self._worker)
            t.daemon = True
            t.start()
        self.queue.join()

    def _worker(self):
        while True:
            url = self.queue.get()
            # task_done() must be called for every get(), or queue.join()
            # in start() will block forever.
            if url in self.visited:
                self.queue.task_done()
                continue
            result = get_html.delay(url)
            try:
                html = result.get(timeout=5)
            except Exception as e:
                print(url)
                print(e)
                self.queue.task_done()
                continue
            if html:
                self.process_html(html)
                self._add_links_to_queue(url, html)
            self.visited[url] = True
            self.queue.task_done()

s = Spider()
s.start("http://m.survivalescaperooms.com/")
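The link-handling rule in _add_links_to_queue can be checked in isolation: a URL without a host component is relative and must be resolved against the page it appeared on, while an absolute URL is enqueued as-is. A small sketch with made-up example URLs:

```python
from urllib.parse import urlparse, urljoin

# Hypothetical base page and hrefs, mirroring _add_links_to_queue's logic.
base = 'http://example.com/a/page.html'
hrefs = ['/about', 'next.html', 'http://other.com/x']

normalized = []
for href in hrefs:
    parts = urlparse(href)
    if not parts.netloc:
        # No host component: the link is relative, resolve it against base.
        normalized.append(urljoin(base, href))
    else:
        normalized.append(parts.geturl())

print(normalized)
# ['http://example.com/about', 'http://example.com/a/next.html', 'http://other.com/x']
```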

Because of various edge cases in real-world HTML, the program still needs further refinement.
