How to fix a BeautifulSoup web-scraping problem
I'm new to Python and to coding in general. I've been following a YouTube video tutorial on web scraping, and I'm running into a problem when I try to run the code.
What I'm trying to do:
- I'm trying to write a query that pulls every front-page deal from a website (CigarMonster.com) and extracts 5 attributes (cigar description, size/quantity description, original price, sale price, and percent discount) for each listing on the page. I want to capture the output in a CSV file.
What I've tried:
- I've tested the code in Anaconda, and each attribute returns the correct text/html.
- I can't run the whole script. I get an error message and I'm not sure what's wrong.
Here is the script:
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup as soup

# opens up the connection and grabs the webpage
url = 'https://www.cigarmonster.com/'
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
web_byte = urlopen(req).read()
webpage = web_byte.decode('utf-8')

# parses the html
page_soup = soup(webpage, "html.parser")

# grabs each of the products
containers = page_soup.find_all("div", {"class": "quickview-pop launchModal"})

filename = "cigar_list.csv"
f = open(filename, "w")
headers = "cigar_brand,product_size,famous_price,monster_price,percent_off\n"
f.write(headers)

for container in containers:
    try:
        cigar_brand = container.find("div", {"class": "item-grid-product-title"}).text
    except Exception:
        cigar_brand = "NA"

    size_container = container.find_all("span", {"class": "product-subtitle"})
    product_size = size_container[0].text

    famous_price_container = container.find_all("div", {"class": "col-xs-12 item-grid-product-fss-price"})
    famous_price = famous_price_container[0].text

    monster_price_container = container.find_all("div", {"class": "col-xs-12 item-grid-product-monster-price"})
    monster_price = monster_price_container[0].text

    percent_off_container = container.find_all("div", {"class": "col-xs-12 item-grid-product-fss-pct"})
    percent_off = percent_off_container[0].text

    #print("cigar_brand: " + cigar_brand)
    #print("product_size: " + product_size)
    #print("famous_price: " + famous_price)
    #print("monster_price: " + monster_price)
    #print("percent_off: " + percent_off)

    f.write(cigar_brand + "," + product_size + "," + famous_price + "," + monster_price + "," + percent_off + "\n")

f.close()
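A side note on the CSV writing in the script above: joining fields with "," by hand produces a malformed row whenever a field itself contains a comma (a price such as "$1,234.56", for example). Python's standard csv module quotes such fields automatically. A minimal sketch, using made-up sample values:

```python
import csv
import io

# csv.writer quotes fields that contain commas, which manual
# string concatenation does not handle.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["cigar_brand", "product_size", "famous_price",
                 "monster_price", "percent_off"])
# Hypothetical row: note the embedded comma in the price.
writer.writerow(["Example Cigar", "5 x 50", "$1,234.56", "$999.00", "19%"])

print(buf.getvalue())
```

The price field comes out as "$1,234.56" with surrounding quotes, so downstream CSV readers see five columns instead of six.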
When I run the script, I get the following error:
Traceback (most recent call last):
  File "cigar_monster_scrape.py", line 8, in <module>
    uClient = urlopen(uReq).read()
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 525, in open
    response = self._open(req, data)
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 547, in _open
    return self._call_chain(self.handle_open, 'unknown',
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 502, in _call_chain
    result = func(*args)
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 1421, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: https>
I also ran into a lot of indentation errors at first.
Solution
Change the 2 lines below and you're good to go. In the code below I used requests instead of urllib; everything else stays the same.
import requests

url = 'https://www.cigarmonster.com/'
req = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})

# parses the html
page_soup = soup(req.text, "html.parser")
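Once the page text is in hand, the parsing loop works the same way regardless of whether urllib or requests fetched it. The sketch below runs the same extraction logic against a mock HTML fragment; the class names are taken from the question's script, but the fragment itself is a stand-in and the real CigarMonster markup may differ:

```python
from bs4 import BeautifulSoup

# Mock HTML mirroring the class names used in the question's script.
# This is an assumed structure, not the live site's actual markup.
html = """
<div class="quickview-pop launchModal">
  <div class="item-grid-product-title">Example Cigar</div>
  <span class="product-subtitle">5 x 50 (Box of 20)</span>
  <div class="col-xs-12 item-grid-product-fss-price">$100.00</div>
  <div class="col-xs-12 item-grid-product-monster-price">$75.00</div>
  <div class="col-xs-12 item-grid-product-fss-pct">25%</div>
</div>
"""

page_soup = BeautifulSoup(html, "html.parser")

def grab(container, tag, css_class):
    """Return the element's stripped text, or 'NA' if it is missing."""
    el = container.find(tag, {"class": css_class})
    return el.text.strip() if el else "NA"

rows = []
for c in page_soup.find_all("div", {"class": "quickview-pop launchModal"}):
    rows.append([
        grab(c, "div", "item-grid-product-title"),
        grab(c, "span", "product-subtitle"),
        grab(c, "div", "item-grid-product-fss-price"),
        grab(c, "div", "item-grid-product-monster-price"),
        grab(c, "div", "item-grid-product-fss-pct"),
    ])

print(rows)
```

The grab helper (a name introduced here for illustration) replaces the question's bare indexing into find_all results, so a listing with a missing element yields "NA" instead of an IndexError.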