Case study: scraping the full text of Shiji (Records of the Grand Historian) from gushiwen.cn (古诗文网)
import time
import requests
# from multiprocessing.dummy import Pool
from lxml import etree

def search(url):
    # Fetch a chapter page and parse it (the missing requests.get call is restored here)
    index_page = requests.get(url=url, headers=headers).text
    tree = etree.HTML(index_page)
    # Extract the chapter body text and join the paragraph fragments
    detail_page = tree.xpath('//*[@class="contson"]/p/text()')
    detail_page = "".join(detail_page)
    # Extract the chapter title
    detail_name = tree.xpath('//*[@class="cont"]/h1/span/b/text()')
    detail_name = "".join(detail_name)
    detail_all = detail_name + '\n' + detail_page
    fileName = './' + detail_name
    print(url, detail_name)
    with open(fileName, 'w', encoding='utf-8') as fp:
        fp.write(detail_all)

if __name__ == '__main__':
    start_time = time.time()
    url = '.aspx'  # URL is truncated in the original post
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'}
    # Fetch the index page and collect the per-chapter links
    page_1 = requests.get(url=url, headers=headers).text
    tree = etree.HTML(page_1)
    urls = tree.xpath('//*[@id="html"]/body/div[2]/div[1]/div[3]/div/div[2]/span/a/@href')
    # print(urls)
    # pool = Pool(20)            # open 20 worker threads
    # pool.map(search, urls)     # multithreaded: roughly 1 s
    for url in urls:             # single-threaded: roughly 12-15 s or longer
        search(url)
    end_time = time.time()
    print("Elapsed time:", end_time - start_time)
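The commented-out `Pool` lines are where the roughly 12x speedup mentioned above comes from: `multiprocessing.dummy.Pool` is a thread pool with the same `map()` interface as a process pool, which suits I/O-bound work like these page fetches. A minimal sketch of that pattern, using a placeholder `fetch` task in place of `search(url)` (the task body and inputs here are illustrative, not from the original post):

```python
import time
from multiprocessing.dummy import Pool  # thread pool, despite living under multiprocessing

def fetch(n):
    # Stand-in for an I/O-bound task such as search(url): sleep simulates network wait
    time.sleep(0.1)
    return n * 2

if __name__ == '__main__':
    start = time.time()
    with Pool(20) as pool:                    # 20 worker threads
        results = pool.map(fetch, range(40))  # blocks until all tasks finish, keeps order
    print(len(results), round(time.time() - start, 1))
```

With 40 tasks of 0.1 s each spread over 20 threads, the wall-clock time is about 0.2 s instead of 4 s, and `pool.map` still returns results in input order.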
This article was published on 2024-02-01 10:25:58.