How do I write a numpy c_ array to a txt file?
I am writing a parser for a website. I scrape information from the site (manga title, author, genres). This is everything I add to the c_ array:
try:
    api = requests.get(f'https://henchan.pro/manga/new?offset={page_count}')
    soup = BeautifulSoup(api.content, 'lxml')
    title = [e.get_text() for e in soup.find_all('a', attrs={"class": 'title_link'})]
    href = [e['href'] for e in soup.find_all('a', attrs={"class": 'title_link'})]
    author = [e.get_text() for e in soup.find_all('h3', attrs={"class": 'item2'})]
    original_work = [e.get_text() for e in soup.find_all('h3', attrs={"class": 'original work'})]
    genres = [e.get_text() for e in soup.find_all('div', attrs={"class": 'genre'})]
except Exception as e:
    print(e)
page_count += 20
_all = c_[title, href, author, original_work, genres]
In the output I get: https://prnt.sc/u2buv6
How can I write all of this to a TXT file in the same format?
P.S. Here is all my code:
import requests
from bs4 import BeautifulSoup
from numpy import c_

def get_manga():
    page_count = 0
    pages = int(input('Enter the number of pages: '))
    for e in range(pages):
        try:
            api = requests.get(f'https://henchan.pro/manga/new?offset={page_count}')
            soup = BeautifulSoup(api.content, 'lxml')
            title = [e.get_text() for e in soup.find_all('a', attrs={"class": 'title_link'})]
            href = [e['href'] for e in soup.find_all('a', attrs={"class": 'title_link'})]
            author = [e.get_text() for e in soup.find_all('h3', attrs={"class": 'item2'})]
            original_work = [e.get_text() for e in soup.find_all('h3', attrs={"class": 'original work'})]
            genres = [e.get_text() for e in soup.find_all('div', attrs={"class": 'genre'})]
        except Exception as e:
            print(e)
        page_count += 20
        # All five lists must be stacked here, since the loop below reads x[0]..x[4]
        _all = c_[title, href, author, original_work, genres]
        for x in _all:
            print(f"Title: {x[0]}\nLink: https://henchan.pro/manga{x[1]}\nAuthor: {x[2]}\nOriginal Work: {x[3]}\nGenres: {x[4]}\n")

get_manga()
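For context, `numpy.c_` stacks equal-length sequences as the columns of a 2-D array, which is why each row `x` in the loop above holds one complete record. A minimal sketch with made-up sample data:

```python
import numpy as np

# Hypothetical stand-ins for two of the scraped lists
titles = ["Manga A", "Manga B"]
genres = ["Action", "Comedy"]

# np.c_ places each list in its own column, one row per item
table = np.c_[titles, genres]
print(table.shape)   # (2, 2)
print(table[0][0])   # Manga A
```

Note that `np.c_` requires all the lists to have the same length; if one selector matches fewer elements on the page, it raises a shape-mismatch error.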
Solution
Below is the code from the question, with file output instead of console output.
import requests
from bs4 import BeautifulSoup
from numpy import c_

def get_manga():
    with open("filename_goes_here.txt", 'w') as out_file:
        page_count = 0
        pages = int(input('Enter the number of pages: '))
        for e in range(pages):
            try:
                api = requests.get(f'https://henchan.pro/manga/new?offset={page_count}')
                soup = BeautifulSoup(api.content, 'lxml')
                title = [e.get_text() for e in soup.find_all('a', attrs={"class": 'title_link'})]
                href = [e['href'] for e in soup.find_all('a', attrs={"class": 'title_link'})]
                author = [e.get_text() for e in soup.find_all('h3', attrs={"class": 'item2'})]
                original_work = [e.get_text() for e in soup.find_all('h3', attrs={"class": 'original work'})]
                genres = [e.get_text() for e in soup.find_all('div', attrs={"class": 'genre'})]
            except Exception as e:
                print(e)
            page_count += 20
            _all = c_[title, href, author, original_work, genres]
            for x in _all:
                out_file.write(f"Title: {x[0]}\nLink: https://henchan.pro/manga{x[1]}\nAuthor: {x[2]}\nOriginal Work: {x[3]}\nGenres: {x[4]}\n")

get_manga()
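As an aside, the same file output can be produced without numpy by zipping the five lists directly; `zip` also degrades more gracefully than `c_` when the lists differ in length, since it simply stops at the shortest one. A sketch with hypothetical sample data and file name:

```python
# Hypothetical sample data standing in for the scraped lists
title = ["Manga A", "Manga B"]
href = ["/a", "/b"]
author = ["Author 1", "Author 2"]
original_work = ["Work 1", "Work 2"]
genres = ["Action", "Comedy"]

with open("manga.txt", "w", encoding="utf-8") as out_file:
    # zip pairs up the i-th element of each list, one record per iteration
    for t, h, a, o, g in zip(title, href, author, original_work, genres):
        out_file.write(
            f"Title: {t}\n"
            f"Link: https://henchan.pro/manga{h}\n"
            f"Author: {a}\n"
            f"Original Work: {o}\n"
            f"Genres: {g}\n\n"
        )
```

Passing `encoding="utf-8"` to `open` is worth doing either way, since manga titles routinely contain non-ASCII characters.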