How to fix find_all not finding elements when using BeautifulSoup
Hi community :)
I'm a bit stuck on my project. I'm trying to scrape news from https://mercomindia.com/category/solar/?_page=1 and https://www.pv-magazine.com/news/page/2/ with BeautifulSoup.
BS4 works fine in general, and I have used this code on several other sites, but on these two (pv-magazine in particular is very valuable to me) I cannot find any HTML tags. I use find_all to look for the date tags and the href tags, but my lists always come back empty. I tried switching from html5lib to html.parser and tried changing the tags, but nothing seems to help. Does anyone know what's going wrong?
Here is my code:
import time
import requests
from bs4 import BeautifulSoup as soup

dates = []
news_bodies = []
# Capture the news from this url
scrape_url = 'https://mercomindia.com/category/solar/?_page=1'
# Parsing the HTML
r1 = requests.get(scrape_url)
page = r1.content
# Using BeautifulSoup to get the content of the correct HTML attributes
page_soup = soup(page, 'html.parser')
dates_soup = page_soup.find_all('span', {'class': 'entry-date'})
titles_soup = page_soup.find_all('a', {'class': '_self cvplbd'})
for i in range(0, len(dates_soup)):
    corpus = ""
    time.sleep(.1)  # Prevents site spam
    dates.append(dates_soup[i].time['text'])
    news_url = titles_soup[i]['href']
    r2 = requests.get(news_url)
    news_page = r2.content
    news_page_soup = soup(news_page, 'html.parser')
    news_text = news_page_soup.find_all('p')
    for news in news_text:
        if ("<!--" in news.text) == False:
            text_p_tag = news.text.replace("\n", " ").replace("\t", " ")
            if ("staff reporter" in text_p_tag) == True:
                break
            corpus += text_p_tag
    news_bodies.append(corpus)
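For what it's worth, a quick sanity check on the raw response (status code plus the start of the body) might show whether the site is even sending back the real page; this is just a minimal diagnostic, not part of the project code:

import requests

# Minimal check of what the server actually returns, before any parsing
r = requests.get('https://mercomindia.com/category/solar/?_page=1')
print(r.status_code)   # e.g. 200 vs. 403
print(r.text[:500])    # first part of the returned HTML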
Solution
You need to pass the appropriate headers to get the correct HTML response. Pass a user-agent in the headers.
import requests
from bs4 import BeautifulSoup as soup

dates = []
hrefs = []
# Capture the news from this url
scrape_url = 'https://mercomindia.com/category/solar/?_page=1'
headers = {'user-agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36"}
# Parsing the HTML
r1 = requests.get(scrape_url, headers=headers)
page = r1.content
# Using BeautifulSoup to get the content of the correct HTML attributes
page_soup = soup(page, 'html.parser')
divs = page_soup.find("div", {"data-id": "pt-cv-page-1"}).find_all("div", class_="pt-cv-content-item")
for div in divs:
    hrefs.append(div.find("a")["href"])
    dates.append(div.find("span", class_="entry-date").get_text(strip=True))
print(dates)
print("-" * 10)
print(hrefs)
Output:
['August 13, 2020', 'August 13, 2020', 'August 12, 2020', 'August 11, 2020', 'August 10, 2020', 'August 8, 2020', 'August 7, 2020', 'August 6, 2020']
----------
['https://mercomindia.com/winners-gujarat-solar-auction/','https://mercomindia.com/bescom-issues-amendment/','https://mercomindia.com/ireland-lists-solar-wind-projects/','https://mercomindia.com/tata-power-registers-profits/','https://mercomindia.com/european-union-extends-countervailing-duty/','https://mercomindia.com/south-africa-sasol-invites-bids-solar/','https://mercomindia.com/power-finance-loans-solar-developer/','https://mercomindia.com/eesl-tender-solar-projects-maharashtra/','https://mercomindia.com/tender-reissued-25-mw/','https://mercomindia.com/gujarat-new-industrial-policy-solar/','https://mercomindia.com/interested-parties-solar-glass-imports/','https://mercomindia.com/eib-engie-off-grid-solar-uganda/','https://mercomindia.com/eesl-empanel-consultants-solar-projects/','https://mercomindia.com/reil-tenders-multicrystalline-solar-cells/','https://mercomindia.com/french-technique-solaire-expand-portfolio/','https://mercomindia.com/actis-acquires-solar-projects-acme/','https://mercomindia.com/renesola-power-raises-12-million/','https://mercomindia.com/andhra-waives-stamp-duty-solar/','https://mercomindia.com/cerc-approves-tariffs-solar-projects/','https://mercomindia.com/ayana-renewable-acquires-two-solar-projects/','https://mercomindia.com/amp-energy-tata-azure-o2-power-ntpc-solar-auction/','https://mercomindia.com/no-ists-charges-solar-wind-projects/','https://mercomindia.com/another-deadline-extension-renewable-power/','https://mercomindia.com/long-term-bcd-india-solar-roundtable/','https://mercomindia.com/central-electronics-bids-solar-ribbons/']
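The question also mentions https://www.pv-magazine.com/news/page/2/; the same user-agent trick applies there. Here is a rough sketch only — the use of the article tag and the way links are pulled out are assumptions about that site's markup, so inspect the page and adjust the selectors:

import requests
from bs4 import BeautifulSoup as soup

pv_url = 'https://www.pv-magazine.com/news/page/2/'
headers = {'user-agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36"}

r = requests.get(pv_url, headers=headers)
print(r.status_code)  # should be 200 once the user-agent is set

pv_soup = soup(r.content, 'html.parser')
# Assumption: entries on the listing page sit inside <article> elements;
# confirm the exact tags/classes in the browser before relying on this.
for article in pv_soup.find_all('article'):
    link = article.find('a')
    if link and link.get('href'):
        print(link['href'])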
To fix the problem of dates_soup being an empty list, this will solve it:
dates = []
date = []
for tag in soup.find_all('span', {'class': 'entry-date'}):
    for anchor in tag.find_all('time'):
        date.append(anchor.get_text())
for i in range(0, len(date)):
    print(date[i])
    corpus = ""
    time.sleep(.1)  # Prevents site spam
    dates.append(date[i])
    news_url = titles_soup[i]['href']
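This also explains why the original dates.append(dates_soup[i].time['text']) line would not work even once dates_soup is populated: indexing a tag with ['text'] looks up an HTML attribute named text, which the time element does not have, whereas .get_text() (or .text) returns the text inside the tag.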
Also, it is best to send headers to avoid error responses (such as 403 Forbidden):
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

site = "https://mercomindia.com/category/solar/?_page=1"
hdr = {'User-Agent': 'Mozilla/5.0'}
req = Request(site, headers=hdr)
page = urlopen(req)
soup = BeautifulSoup(page, 'html.parser')
print(soup)
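If the header is missing, urlopen raises an HTTPError instead of returning the page, so as a small optional addition the 403 case can be caught explicitly:

from urllib.request import Request, urlopen
from urllib.error import HTTPError

site = "https://mercomindia.com/category/solar/?_page=1"
try:
    page = urlopen(Request(site, headers={'User-Agent': 'Mozilla/5.0'}))
except HTTPError as err:
    # Without a User-Agent header this is where a 403 Forbidden would surface
    print("Request failed:", err.code)
else:
    print("OK:", page.status)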