A browser displays data; a web crawler collects it.
A crawler is a program that simulates a client sending requests and receiving responses, automatically gathering information from the World Wide Web according to certain rules.
In short: it extracts the information we need from the web.
requests is an elegant, simple Python HTTP library; its job is to send requests and receive response data.
Install it by running the following command in a terminal:
pip install requests
import requests

response = requests.get('')
print(response.content.decode())
Example: fetch the homepage of DXY (丁香园). The homepage URL is:
# 1. Import the module
import requests

# 2. Send the request and receive the response
response = requests.get('')

# 3. Extract the data from the response
print(response.content.decode())
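The example above ends with `.content.decode()`; that step is an ordinary bytes-to-str conversion. A minimal, network-free sketch, with a byte string standing in for `response.content`:

```python
# Stand-in for response.content: the raw bytes a server might return.
raw_bytes = '<html><title>丁香园</title></html>'.encode('utf-8')

# bytes.decode() with no argument assumes UTF-8 and yields a str.
html_text = raw_bytes.decode()

print(type(raw_bytes))   # <class 'bytes'>
print(type(html_text))   # <class 'str'>
```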
Beautiful Soup is a Python library for extracting data from HTML or XML files.
pip install bs4
pip install lxml
The BeautifulSoup object represents the whole parsed document tree; it supports most of the methods described under traversing the document tree and searching the document tree.
# 1. Import the module
from bs4 import BeautifulSoup

# 2. Create a BeautifulSoup object, specifying lxml as the parser
soup = BeautifulSoup('<html>data</html>', 'lxml')
print(soup)
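The BeautifulSoup object also supports traversing the document tree by plain attribute access; a minimal sketch (the sample document and tag contents here are invented for illustration):

```python
from bs4 import BeautifulSoup

html = '<html><head><title>Demo</title></head><body><p class="intro">hello</p></body></html>'
soup = BeautifulSoup(html, 'lxml')

# Attribute access walks the tree one tag at a time.
print(soup.head.title)        # <title>Demo</title>
print(soup.body.p['class'])   # ['intro']
print(soup.title.string)      # Demo
```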
Searching the document tree
find(self, name=None, attrs={}, recursive=True, text=None, **kwargs)
Returns the first matching element object.
Requirement: extract the title tag and the a tags from the document below.
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="" class="sister" id="link1">Elsie</a>,
<a href="" class="sister" id="link2">Lacie</a> and
<a href="" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body></html>
# 1. Import the module
from bs4 import BeautifulSoup

# 2. Prepare the document string
html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="" class="sister" id="link1">Elsie</a>,
<a href="" class="sister" id="link2">Lacie</a> and
<a href="" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body></html>
'''
# 3. Create the BeautifulSoup object
soup = BeautifulSoup(html, 'lxml')
# 4. Find the title tag
title = soup.find('title')
print(title)
# 5. Find the first a tag
a = soup.find('a')
print(a)
# Find all the a tags
a_s = soup.find_all('a')
print(a_s)
# Find the tag whose id is link1
# Method 1: pass the attribute as a keyword argument
a = soup.find(id='link1')
print(a)
# Method 2: pass an attrs dict of attributes to match
a = soup.find(attrs={'id': 'link1'})
print(a)
# Find by text content
text = soup.find(text='Elsie')
print(text)
A Tag object corresponds to an XML or HTML tag in the original document. Tags have many methods and attributes, usable for traversing the document tree, searching it, and extracting tag content.
print(type(a))  # <class 'bs4.element.Tag'>
print('tag name:', a.name)
print('all attributes:', a.attrs)
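Besides `name` and `attrs`, a Tag also exposes its text content and individual attribute values; a small sketch with an invented a tag (the href value here is made up for illustration):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="/profile" class="sister" id="link1">Elsie</a>', 'lxml')
a = soup.find('a')

print(a.name)          # the tag name: a
print(a.attrs)         # dict of all attributes
print(a.text)          # the enclosed text: Elsie
print(a.get('href'))   # one attribute value: /profile
```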
# 1. Import the modules
import requests
from bs4 import BeautifulSoup

# 2. Send the request and get the homepage content
response = requests.get('')
home_page = response.content.decode()
# print(home_page)

# 3. Extract the data with BeautifulSoup
soup = BeautifulSoup(home_page, 'lxml')
script = soup.find(id='fetchIndexMallList')
text = script.string
print(text)
A regular expression is a pattern used to match strings.
re.findall(pattern, string, flags=0)  (important)
Scans the whole of string and returns a list of all matches of pattern.
pattern: the regular expression; string: the string to search in; flags: matching flags.
re.findall(r"\d", "chuan1zhi2")  >>  ["1", "2"]
If the regular expression contains no parentheses, the list holds the matches of the whole pattern.
If it contains parentheses, the list holds only the content captured inside the parentheses; the parts on either side of the parentheses just pin down where the data to extract is located.
For example:
import re

rs = re.findall("a.+bc", "a\nbc", re.DOTALL)
print(rs)   # ['a\nbc']

rs = re.findall("a(.+)bc", "a\nbc", re.DOTALL)
print(rs)   # ['\n']
Using an r raw string in a regular expression removes the impact of escape characters:
however many backslashes the string to be matched contains, simply write the same number of backslashes in the r raw-string pattern.
import re

rs = re.findall("a\nb", "a\nb")
print(rs)   # ['a\nb']

rs = re.findall("a\\nb", "a\\nb")
print(rs)   # [] -- the pattern means a<newline>b, but the string holds a literal backslash

rs = re.findall("a\\\\nb", "a\\nb")
print(rs)   # ['a\\nb']

rs = re.findall(r"a\\nb", "a\\nb")
print(rs)   # ['a\\nb']
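The raw-string rule above can be checked directly: an r-prefixed literal keeps its backslashes, so a literal with one backslash per character-pair equals a plain literal with doubled backslashes.

```python
import re

# r"..." leaves backslashes untouched, so these two literals are equal strings.
print(r"a\nb" == "a\\nb")   # True

# The target string holds one literal backslash, so the raw pattern
# writes \\ (a regex-escaped backslash) to match it.
rs = re.findall(r"a\\nb", "a\\nb")
print(rs)   # ['a\\nb']
```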
# 1. Import the modules
import requests
from bs4 import BeautifulSoup
import re

# 2. Send the request and get the homepage content
response = requests.get('')
home_page = response.content.decode()
# print(home_page)

# 3. Extract the data with BeautifulSoup
soup = BeautifulSoup(home_page, 'lxml')
script = soup.find(id='fetchIndexMallList')
text = script.string
# print(text)

# 4. Use a regular expression to extract the JSON string
json_str = re.findall(r'\[.+\]', text)[0]
print(json_str)
The json module ships with Python; it converts between JSON and Python data.
| JSON          | Python    |
|---------------|-----------|
| object        | dict      |
| array         | list      |
| string        | str       |
| number (int)  | int, long |
| number (real) | float     |
| true          | True      |
| false         | False     |
| null          | None      |
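The mapping table can be exercised directly with json.loads; each key in the string below holds one literal of the corresponding JSON type:

```python
import json

# One literal of each JSON type from the table above.
json_str = '{"obj": {}, "arr": [], "s": "x", "i": 1, "f": 1.5, "t": true, "f2": false, "n": null}'
data = json.loads(json_str)

print(type(data['obj']))                 # <class 'dict'>
print(type(data['arr']))                 # <class 'list'>
print(type(data['i']), type(data['f'])) # <class 'int'> <class 'float'>
print(data['t'], data['f2'], data['n'])  # True False None
```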
import json

# 1. Convert a JSON string into Python data
# 1.1 Prepare the JSON string
json_str = '''[{"commodityId":"3591210463042011540","commodityLogo":".png","commodityName":"现货抗原检测试剂盒25人份","cornerMark":"限量抢购中","miniProgramLink":"/pages/common/cms/activity/index?name=jk_zxcgvhbjk&moduleId=3588838986124169233&from=other&chdShareFromId=3589899615392486250&chdShareEntityId=3370441352948220030&chdShareType=2","miniProgramShortLink":"/YTGSHf","skuId":"3591210463042011547","price":25800,"discountPrice":2900,"sortId":110,"sellStatus":0},{"commodityId":"3549260783022588925","commodityLogo":".png","commodityName":"维生素C凝胶糖果60粒","cornerMark":"","miniProgramLink":"/pages/common/cms/activity/index?name=jk_zxcgvhbjk&moduleId=3587310742271172996&from=other&chdShareFromId=3589406510197798564&chdShareEntityId=3370441352948220031&chdShareType=2","miniProgramShortLink":"/13pZ2L","skuId":"3551181722177885802","price":5900,"discountPrice":4900,"sortId":90,"sellStatus":0}]'''
# 1.2 Convert the JSON string into Python data
rs = json.loads(json_str)
print(rs)
print(type(rs))      # <class 'list'>
print(type(rs[0]))   # <class 'dict'>

# 2. Convert a JSON-format file into Python data
# 2.1 Open a file object pointing at the file
with open('data/test.json') as fp:
    # 2.2 Load the file object and convert it into Python data
    python_list = json.load(fp)
    print(python_list)
    print(type(python_list))      # <class 'list'>
    print(type(python_list[0]))   # <class 'dict'>
import json

# 1. Convert Python data into a JSON string
# 1.1 Prepare the Python data (parsed here from a JSON string)
json_str = '''[{"commodityId":"3591210463042011540","commodityLogo":".png","commodityName":"现货抗原检测试剂盒25人份","cornerMark":"限量抢购中","miniProgramLink":"/pages/common/cms/activity/index?name=jk_zxcgvhbjk&moduleId=3588838986124169233&from=other&chdShareFromId=3589899615392486250&chdShareEntityId=3370441352948220030&chdShareType=2","miniProgramShortLink":"/YTGSHf","skuId":"3591210463042011547","price":25800,"discountPrice":2900,"sortId":110,"sellStatus":0},{"commodityId":"3549260783022588925","commodityLogo":".png","commodityName":"维生素C凝胶糖果60粒","cornerMark":"","miniProgramLink":"/pages/common/cms/activity/index?name=jk_zxcgvhbjk&moduleId=3587310742271172996&from=other&chdShareFromId=3589406510197798564&chdShareEntityId=3370441352948220031&chdShareType=2","miniProgramShortLink":"/13pZ2L","skuId":"3551181722177885802","price":5900,"discountPrice":4900,"sortId":90,"sellStatus":0}]'''
# 1.2 Convert the JSON string into Python data
rs = json.loads(json_str)

# 1.3 Convert the Python data back into a JSON string
json_str = json.dumps(rs, ensure_ascii=False)
print(json_str)

# 2. Store Python data in a file in JSON format
# 2.1 Open the file object to write to
with open('data/test1.json', 'w') as fp:
    # 2.2 Dump the Python data into the file as JSON
    json.dump(rs, fp, ensure_ascii=False)
# 1. Import the modules
import requests
from bs4 import BeautifulSoup
import re
import json

# 2. Send the request and get the homepage content
response = requests.get('')
home_page = response.content.decode()
# print(home_page)

# 3. Extract the data with BeautifulSoup
soup = BeautifulSoup(home_page, 'lxml')
script = soup.find(id='fetchIndexMallList')
text = script.string
# print(text)

# 4. Use a regular expression to extract the JSON string
json_str = re.findall(r'\[.+\]', text)[0]
# print(json_str)

# 5. Convert the JSON string into Python data
data = json.loads(json_str)
print(data)
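Once `data` is parsed, it can be persisted with json.dump exactly as in the file example earlier; a sketch with a made-up filename and a single hard-coded record standing in for the scraped list:

```python
import json

# Stand-in for the `data` list parsed from the page; 'goods.json' is a
# made-up filename for illustration.
data = [{"commodityName": "现货抗原检测试剂盒25人份", "price": 25800}]

# Write it out as JSON, keeping the Chinese text readable.
with open('goods.json', 'w', encoding='utf-8') as fp:
    json.dump(data, fp, ensure_ascii=False)

# Reading it back yields the same Python data.
with open('goods.json', encoding='utf-8') as fp:
    print(json.load(fp) == data)   # True
```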
Published 2024-02-03 06:35:20.
Permalink: https://www.4u4v.net/it/170691332049277.html