

Python 3: urllib usage examples

2020-01-04 16:14:12
Source: reprint, contributed by a reader

urllib is Python's module for fetching URLs (Uniform Resource Locators). You can use it to retrieve remote data and save it. This article collects notes on using urllib: headers, proxies, timeouts, authentication, and exception handling.

1. The basic method

urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)

  1. url: the address to open
  2. data: data submitted via POST
  3. timeout: timeout for accessing the site, in seconds
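As a sketch of how these parameters fit together (the endpoint URL here is only a placeholder), passing data= turns the request into a POST, and timeout= bounds the wait:

```python
from urllib import parse, request

# data must be bytes: urlencode the form fields, then encode.
payload = parse.urlencode({'q': 'python'}).encode()

try:
    # data= makes this a POST; timeout= is in seconds.
    resp = request.urlopen('http://httpbin.org/post', data=payload, timeout=5)
    print(resp.getcode())
except OSError as exc:  # URLError is a subclass of OSError
    print('request failed:', exc)
```

Note that urlopen only accepts bytes for data, which is why the urlencode result is explicitly encoded.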

Fetching a page directly with urlopen() from the urllib.request module returns data of type bytes; call decode() to convert it to str.

from urllib import request

response = request.urlopen(r'http://python.org/')  # an http.client.HTTPResponse object
page = response.read()          # bytes
page = page.decode('utf-8')     # str

The object returned by urlopen provides these methods:

  1. read(), readline(), readlines(), fileno(), close(): operate on the HTTPResponse data
  2. info(): returns an HTTPMessage object holding the headers sent by the remote server
  3. getcode(): returns the HTTP status code, e.g. 200 for a successful request, 404 for a page not found
  4. geturl(): returns the URL of the request
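The methods above can be demonstrated against a tiny local server, so the sketch below runs without internet access (the handler and its "hello" body are made up for illustration):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal local server standing in for a remote site.
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'hello'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(('127.0.0.1', 0), Hello)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/' % server.server_port
response = urllib.request.urlopen(url)
print(response.getcode())               # 200
print(response.geturl())                # the URL actually fetched
print(response.info()['Content-Type'])  # text/plain
print(response.read())                  # b'hello'
server.shutdown()
```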

1) Simply reading a page

import urllib.request

response = urllib.request.urlopen('http://python.org/')
html = response.read()

2) Using Request

urllib.request.Request(url, data=None, headers={}, method=None)

Wrap the request in a Request() object, then fetch the page with urlopen().

import urllib.request

req = urllib.request.Request('http://python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()
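The Request signature above also takes headers and method, which the example does not use. A small sketch (the User-Agent value is arbitrary):

```python
import urllib.request

req = urllib.request.Request(
    'http://python.org/',
    headers={'User-Agent': 'Mozilla/5.0'},  # sent with the request
    method='GET',                           # method= was added in Python 3.3
)
print(req.get_method())              # 'GET'
print(req.get_header('User-agent'))  # 'Mozilla/5.0'
```

Note that Request capitalizes header names internally, so lookups use 'User-agent' rather than 'User-Agent'.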

3) Sending data, using a Zhihu login as an example

'''
Created on 2016-05-31
@author: gionee
'''
import gzip
import re
import urllib.request
import urllib.parse
import http.cookiejar

def ungzip(data):
    try:
        print("Trying to decompress...")
        data = gzip.decompress(data)
        print("Decompressed")
    except (OSError, EOFError):
        print("Not compressed, nothing to do")
    return data

def getXSRF(data):
    cer = re.compile(r'name="_xsrf" value="(.*)"', flags=0)
    strlist = cer.findall(data)
    return strlist[0]

def getOpener(head):
    # cookie handling
    cj = http.cookiejar.CookieJar()
    pro = urllib.request.HTTPCookieProcessor(cj)
    opener = urllib.request.build_opener(pro)
    header = []
    for key, value in head.items():
        elem = (key, value)
        header.append(elem)
    opener.addheaders = header
    return opener

# header values can be copied from the browser's developer tools (e.g. Firebug)
header = {
    'Connection': 'Keep-Alive',
    'Accept': 'text/html, application/xhtml+xml, */*',
    'Accept-Language': 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0',
    'Accept-Encoding': 'gzip, deflate',
    'Host': 'www.zhihu.com',
    'DNT': '1'
}

url = 'http://www.zhihu.com/'
opener = getOpener(header)
op = opener.open(url)
data = op.read()
data = ungzip(data)
_xsrf = getXSRF(data.decode())

url += "login/email"
email = "your account"
password = "your password"
postDict = {
    '_xsrf': _xsrf,
    'email': email,
    'password': password,
    'rememberme': 'y'
}
postData = urllib.parse.urlencode(postDict).encode()
op = opener.open(url, postData)
data = op.read()
data = ungzip(data)

print(data.decode())

4) HTTP errors

import urllib.error
import urllib.request

req = urllib.request.Request('http://www.lz881228.blog.163.com')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.read().decode("utf8"))

5) Exception handling

from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

req = Request("http://www.abc.com/")
try:
    response = urlopen(req)
except HTTPError as e:  # HTTPError subclasses URLError, so catch it first
    print("The server couldn't fulfill the request.")
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)
else:
    print("good!")
    print(response.read().decode("utf8"))

6) HTTP authentication

import urllib.request

# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()

# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "http://m.survivalescaperooms.com/"
password_mgr.add_password(None, top_level_url, 'rekfan', 'xxxxxx')

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)

# use the opener to fetch a URL
a_url = "http://m.survivalescaperooms.com/"
x = opener.open(a_url)
print(x.read())

# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
a = urllib.request.urlopen(a_url).read().decode('utf8')

print(a)

7) Using a proxy

import urllib.request

# ProxyHandler maps URL schemes to proxy addresses. urllib only supports
# http/https proxies; the original 'sock5' key is not a scheme urllib
# recognizes and would simply be ignored.
proxy_support = urllib.request.ProxyHandler({'http': 'http://localhost:1080'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)

a = urllib.request.urlopen("http://www.baidu.com").read().decode("utf8")
print(a)

8) Timeouts

import socket
import urllib.request

# timeout in seconds
timeout = 2
socket.setdefaulttimeout(timeout)

# this call to urllib.request.urlopen now uses the default timeout
# we have set in the socket module
req = urllib.request.Request('http://m.survivalescaperooms.com/')
a = urllib.request.urlopen(req).read()
print(a)
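Besides the global socket default shown above, urlopen also accepts a per-call timeout. A small sketch; 10.255.255.1 is a non-routable address used here only to force the timeout reliably, so substitute a real URL in practice:

```python
import urllib.request

try:
    # The per-call timeout= overrides the global socket default for this request.
    urllib.request.urlopen('http://10.255.255.1/', timeout=0.5)
except OSError as exc:
    # Timeouts surface as URLError (reason=timeout) or socket.timeout,
    # both of which are OSError subclasses.
    print('request failed or timed out:', exc)
```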

That's all for this article. We hope it helps with your studies, and we hope you will keep supporting VEVB武林網.

