Scraping NetEase News with Scrapy

This post uses Scrapy to crawl NetEase News (news.163.com). Details are explained in the code comments.

Define the fields to scrape
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class NewsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    news_thread = scrapy.Field()   # thread id taken from the article URL
    news_title = scrapy.Field()    # article title
    news_url = scrapy.Field()      # article URL
    news_time = scrapy.Field()     # publication time
    news_source = scrapy.Field()   # name of the originating source
    source_url = scrapy.Field()    # link to the originating source
    news_body = scrapy.Field()     # article body text
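
Scrapy items behave like dictionaries keyed by the declared fields, which is how the spider below fills them. A minimal sketch (the values are made-up placeholders, not scraped data):

from news.items import NewsItem

# Fill a couple of the declared fields by key; an undeclared key would raise KeyError.
item = NewsItem()
item['news_title'] = 'Example headline'
item['news_url'] = 'http://example.com/article.html'
print(dict(item))  # items convert cleanly to plain dicts
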
Define the crawl rules
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from news.items import NewsItem


class News163Spider(CrawlSpider):
    name = 'news163'
    allowed_domains = ['news.163.com']
    start_urls = ['http://news.163.com/']
    rules = (
        # Rule arguments: a regex for the link extractor, a callback, and
        # whether to keep following links from the matched pages.
        # Article detail pages end in ".html" under a numeric path segment,
        # so the regex /\d+/.*?html matches them; every matched link is
        # handed to parse_item.
        Rule(LinkExtractor(allow=r'/\d+/.*?html'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = NewsItem()
        # the thread id is the last URL segment with the trailing ".html" stripped
        item['news_thread'] = response.url.strip().split('/')[-1][:-5]
        self.get_title(response, item)
        self.get_time(response, item)
        self.get_source(response, item)
        self.get_source_url(response, item)
        self.get_text(response, item)
        self.get_url(response, item)
        return item

    def get_url(self, response, item):
        url = response.url
        if url:
            item['news_url'] = url

    def get_time(self, response, item):
        # class selector
        time = response.css('div.post_time_source::text').extract()
        if time:
            print("time:{}".format(time[0].strip().replace("来源", "")))
            item['news_time'] = time[0].strip().replace("来源", "").replace("\u3000", "")

    def get_title(self, response, item):
        title = response.css('title::text').extract()
        if title is not None:
            print("title:{}".format(title[0]))
            item['news_title'] = title[0]

    def get_source(self, response, item):
        # id selector
        source = response.css('#ne_article_source::text').extract()
        if source:
            print("source:{}".format(source[0]))
            item['news_source'] = source[0]

    def get_source_url(self, response, item):
        # id selector, reading the href attribute
        source_url = response.css('#ne_article_source::attr(href)').extract()
        if source_url:
            print("source_url:{}".format(source_url[0]))
            item['source_url'] = source_url[0]

    def get_text(self, response, item):
        text = response.css('.post_text p::text').extract()
        if text:
            item['news_body'] = text
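
The CSS selectors used in the helper methods can be sanity-checked outside of a full crawl. A small sketch using requests plus Scrapy's Selector (assumes the requests library is installed; the URL is a placeholder that should be replaced with a real article detail page):

import requests
from scrapy import Selector

# Fetch one page and try the same selectors the spider uses.
# The URL below is only a placeholder.
html = requests.get('http://news.163.com/').text
sel = Selector(text=html)
print(sel.css('title::text').extract())
print(sel.css('div.post_time_source::text').extract())
print(sel.css('.post_text p::text').extract())
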
Process the scraped results
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.exporters import CsvItemExporter


class NewsPipeline(object):
    def __init__(self):
        self.file = open('news_data.csv', 'wb')
        # set the encoding to match the encoding of the news being scraped
        self.exporter = CsvItemExporter(self.file, encoding='utf-8')
        # start exporting
        self.exporter.start_exporting()

    def close_spider(self, spider):
        # finish exporting and close the file
        self.exporter.finish_exporting()
        self.file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item
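
As a quick standalone check of the pipeline (not something the original post does), it can be driven by hand with a single fabricated item, assuming the script is run from the project root so the news package is importable; the spider argument is unused here, so None is passed:

from news.items import NewsItem
from news.pipelines import NewsPipeline

# Feed one fabricated item through the pipeline and close it;
# news_data.csv should then contain a single exported row.
pipeline = NewsPipeline()
item = NewsItem(news_title='Example headline', news_url='http://example.com/a.html')
pipeline.process_item(item, spider=None)
pipeline.close_spider(spider=None)
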
Enable the pipeline in settings.py
# enable the pipeline
ITEM_PIPELINES = {
    'news.pipelines.NewsPipeline': 300,
}
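
With the pipeline enabled, the crawl is normally started from the project root with scrapy crawl news163. The same thing can be done from a plain Python script; a sketch, assuming it is run from the project root so get_project_settings() picks up settings.py (and therefore ITEM_PIPELINES):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Equivalent to running "scrapy crawl news163" on the command line.
process = CrawlerProcess(get_project_settings())
process.crawl('news163')   # the spider can be referenced by its name
process.start()            # blocks until the crawl finishes
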

This spider does not fetch every article on the site; it only grabs the stories reachable from the homepage. With some further analysis of the URL patterns it could be extended into a full-site crawl, but the details are not covered here.
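
As a rough, hypothetical illustration of that idea (the rules below are an assumption, not part of the original post), the rules tuple in News163Spider could keep the article rule first and add a catch-all rule that only follows links for discovery; only the first matching rule is applied to a link, so order matters:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

# Hypothetical replacement for the rules tuple in News163Spider.
rules = (
    # article detail pages go to parse_item (more specific rule listed first)
    Rule(LinkExtractor(allow=r'/\d+/.*?html'), callback='parse_item', follow=True),
    # everything else inside allowed_domains is just followed for more links
    Rule(LinkExtractor(), follow=True),
)
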
