

Checking Baidu keyword rankings with Python


This is a simple Python function for checking a keyword's ranking on Baidu. A few highlights:
1. Randomized User-Agent for each request.
2. Simple to use: just call getRank(keyword, domain).
3. Encoding conversion is handled, so character-encoding issues should not be a problem.
4. Rich results: besides the rank, it returns the title, URL, and cache date of the matching result, which suits SEO needs.
5. Handy as the basis for a small tool or for personal use.

The function is single-threaded and therefore slow; adapt it to your own needs as required (a thread-pool sketch follows the listing).

The code is as follows:

#coding=utf-8

import requests
import BeautifulSoup   # BeautifulSoup 3 (this script targets Python 2)
import re
import random

def decodeAnyWord(w):   # decode a byte string to unicode, whether it is UTF-8 or GB2312
    try:
        w.decode('utf-8')
    except UnicodeDecodeError:
        w = w.decode('gb2312')
    else:
        w = w.decode('utf-8')
    return w

def createURL(checkWord):   #create baidu URL with search words
    checkWord = checkWord.strip()
    checkWord = checkWord.replace(' ', '+').replace('\n', '')
    baiduURL = 'http://www.baidu.com/s?wd=%s&rn=100' % checkWord
    return baiduURL

def getContent(baiduURL):   #get the content of the serp
    uaList = ['Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+1.1.4322;+TencentTraveler)',
    'Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729)',
    'Mozilla/5.0+(Windows+NT+5.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.89+Safari/537.1',
    'Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)',
    'Mozilla/5.0+(Windows+NT+6.1;+rv:11.0)+Gecko/20100101+Firefox/11.0',
    'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+SV1)',
    'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+GTB7.1;+.NET+CLR+2.0.50727)',
    'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+KB974489)']
    headers = {'User-Agent': random.choice(uaList)}

    r = requests.get(baiduURL, headers = headers)
    return r.content

def getLastURL(rawurl): #get the final URL when there are redirects
    r = requests.get(rawurl)
    return r.url

def getAtext(atext):    #get the text between <a> and </a>
    pat = re.compile(r'<a .*?>(.*?)</a>')
    match = pat.findall(atext.replace('\n', ''))
    pureText = match[0].replace('<em>', '').replace('</em>', '')
    return pureText.replace('\n', '')

def getCacheDate(t):    #get the date of cache
    pat = re.compile(r'<span class="g">.*?(\d{4}-\d{1,2}-\d{1,2}) </span>')
    match = pat.findall(t)
    cacheDate = match[0]
    return cacheDate

def getRank(checkWord, domain): #main routine
    checkWord = checkWord.replace('\n', '')
    checkWord = decodeAnyWord(checkWord)
    baiduURL = createURL(checkWord)
    cont = getContent(baiduURL)
    soup = BeautifulSoup.BeautifulSoup(cont)
    results = soup.findAll('table', {'class': 'result'})    #find all results in this page

    for result in results:
        checkData = unicode(result.find('span', {'class': 'g'}))
        if re.compile(r'^[^/]*%s.*?' % domain).match(checkData.replace('<b>', '').replace('</b>', '')):    # TODO: this regex could be tightened
            nowRank = result['id']  #get the rank if match the domain info

            resLink = result.find('h3').a
            resURL = resLink['href']
            domainURL = getLastURL(resURL)  #get the target URL
            resTitle = getAtext(unicode(resLink))   #get the title of the target page

            rescache = result.find('span', {'class': 'g'})
            cacheDate = getCacheDate(unicode(rescache)) #get the cache date of the target page

            res = u'%s, 第%s名, %s, %s, %s' % (checkWord, nowRank, resTitle, cacheDate, domainURL)
            return res.encode('gb2312')
    return '>100'   # domain not found within the top 100 results


domain = 'www.baidu.com' #set the domain which you want to search.
print getRank('百度', domain)
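
As noted above, the listing is single-threaded. Below is a minimal sketch, not part of the original script, of one way to batch-check several keywords with a thread pool; the keyword list and pool size are illustrative assumptions, and getRank() is the function defined above.

# Hypothetical batch runner: query several keywords concurrently with a thread pool.
# Assumes getRank() and its helpers above are defined in the same file.
from multiprocessing.dummy import Pool   # thread-based Pool, available on Python 2

keywords = ['百度', 'python', 'seo']      # illustrative keyword list
domain = 'www.baidu.com'

pool = Pool(4)                            # 4 worker threads; tune to taste
results = pool.map(lambda kw: getRank(kw, domain), keywords)
pool.close()
pool.join()

for line in results:
    print line

Because the work is network-bound, threads (rather than processes) are usually enough here; keep the pool small so the requests do not look like a flood to the search engine.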
