Writing Crawled Data to a CSV File

Published: 2020-02-27

Scrape novel information (titles and authors) from a novel website.

from lxml import etree
import requests
import csv
# Create the CSV file
f = open('E:/python/myPython/test2.csv','wt',newline='')
writer = csv.writer(f)
# Write the header row
writer.writerow(('names', 'authors'))
# Build the list of page URLs
urls = ['https://www.qidian.com/all?page={}'.format(str(i)) for i in range(2,5)]

headers = {
    'User-Agent':
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36'
}
for url in urls:
    html = requests.get(url, headers=headers)  # headers must be passed as a keyword argument
    selector = etree.HTML(html.text)
    titles = selector.xpath("//h4/a/text()")
    authors = selector.xpath("//p[@class='author']/a[1]/text()")

# This inner loop is required because titles and authors are lists

    for title, author in zip(titles, authors):
        # Note: writerow() takes a single sequence argument; I initially forgot
        # the inner parentheses and passed two arguments, which raised an error.
        writer.writerow((title, author))
f.close()
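
For reference, the same scraper can be restructured with a with block so the file is closed automatically even if a request fails, and with writerows() to write each page's results in one call. This is a minimal sketch under the same assumptions as the code above (same XPath expressions and page range); the output path test2_with.csv and the timeout value are illustrative choices, not part of the original script.

from lxml import etree
import requests
import csv

HEADERS = {
    'User-Agent':
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
        '(KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36'
}

def scrape_page(url):
    # Fetch one listing page and return (title, author) pairs.
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()  # fail fast on HTTP errors
    selector = etree.HTML(resp.text)
    titles = selector.xpath("//h4/a/text()")
    authors = selector.xpath("//p[@class='author']/a[1]/text()")
    return list(zip(titles, authors))

urls = ['https://www.qidian.com/all?page={}'.format(i) for i in range(2, 5)]

# newline='' prevents blank rows on Windows; an explicit encoding keeps the
# Chinese titles readable regardless of the system locale.
with open('test2_with.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(('names', 'authors'))      # header row
    for url in urls:
        writer.writerows(scrape_page(url))     # write all pairs for this page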
