A web crawler (also known as a web spider or web robot, and in the FOAF community more often called a web chaser) is a program or script that automatically fetches information from the World Wide Web according to a set of rules.
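The core of any crawler is the "extract links, then follow them" loop described above. As a minimal sketch using only the standard library (the HTML snippet is an invented example, not a real page):

```python
# A tiny link extractor, the first half of a crawler's fetch/follow loop.
# Uses only the standard library; the sample HTML below is made up.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

# A full crawler would fetch each extracted link in turn and repeat,
# subject to its own rules (allowed domains, crawl delay, and so on).
page = '<p><a href="/a.html">A</a> <a href="/b.html">B</a></p>'
print(extract_links(page))  # ['/a.html', '/b.html']
```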
Below is an example script to share with everyone:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Filename: spider_58center_sth.py
# Scrapes e-commerce listing data: here, computer listings on 58.com (Nanjing).
import time

import requests
from bs4 import BeautifulSoup

url_58 = 'http://nj.58.com/?PGTID=0d000000-0000-0c5c-ffba-71f8f3f7039e&ClickID=1'

def get_url_list(url):
    """Fetch the listing index page and return detail-page URLs, one per line."""
    web_data = requests.get(url)
    soup = BeautifulSoup(web_data.text, 'lxml')
    links = soup.select('td.t > a[class="t"]')
    url_list = ''
    for link in links:
        link_n = link.get('href')
        # Skip zhuanzhuan (second-hand marketplace) and redirect links.
        if 'zhuanzhuan' in link_n or 'jump' in link_n:
            continue
        url_list = url_list + '\n' + link_n
    print('url_list: %s' % url_list)
    return url_list

def get_url_info():
    """Visit each detail page and extract the target fields by category."""
    url_list = get_url_list(url_58)
    for url in url_list.split():
        time.sleep(1)  # throttle requests to be polite to the server
        web_data = requests.get(url)
        soup = BeautifulSoup(web_data.text, 'lxml')
        item_type = soup.select('#head > div.breadCrumb.f12 > span:nth-of-type(3) > a')
        title = soup.select('div.col_sub.mainTitle > h1')
        date = soup.select('li.time')
        price = soup.select('div.person_add_top.no_ident_top > div.per_ad_left > '
                            'div.col_sub.summary > ul > li:nth-of-type(1) > '
                            'div.su_con > span.price.c_f50')
        fineness = soup.select('div.col_sub.summary > ul > li:nth-of-type(2) > div.su_con > span')
        area = soup.select('div.col_sub.summary > ul > li:nth-of-type(3) > div.su_con > span')
        for typei, titlei, datei, pricei, finenessi, areai in zip(
                item_type, title, date, price, fineness, area):
            # Assemble one dict per listing.
            data = {
                'type': typei.get_text(),
                'title': titlei.get_text(),
                'date': datei.get_text(),
                'price': pricei.get_text(),
                'fineness': finenessi.get_text().strip(),
                'area': list(areai.stripped_strings),
            }
            print(data)

get_url_info()
```
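The script's final step pairs up the parallel result lists with `zip()` to build one dict per listing. The same pattern can be shown offline with plain lists (the sample values below are invented placeholders, not real 58.com data):

```python
# The zip() pattern the scraper uses to assemble each listing's fields
# into a dict. The values here are invented placeholders for illustration.
titles = ['ThinkPad X230', 'Dell OptiPlex']
prices = ['1200', '800']
areas = ['Gulou', 'Xuanwu']

records = [
    {'title': t, 'price': p, 'area': a}
    for t, p, a in zip(titles, prices, areas)
]
print(records[0])  # {'title': 'ThinkPad X230', 'price': '1200', 'area': 'Gulou'}
```

Note that `zip()` stops at the shortest input, so if one selector matched nothing on a page, that page silently yields no records; checking the list lengths before zipping makes such misses visible.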
Summary
That is all of this article's code example for scraping e-commerce listing data with Python. We hope it is helpful. If anything is lacking, feel free to point it out in the comments. Thanks to our readers for supporting this site!