Python Sina Weibo web scraping problem

How to solve a Python Sina Weibo web scraping problem

I am trying to scrape Weibo and I am running into a problem with account login. My goal is to perform searches on s.weibo.com using a list of queries stored in a .txt file. The Python script is below. Each time I run the code, it opens Weibo, enters the query successfully and displays the matching posts. However, after a few seconds a window pops up asking me to log in (which I have already done). Does anyone know how to fix this? Thanks in advance, any help is appreciated!

from bs4 import BeautifulSoup
from selenium import webdriver
import urllib.parse
import datetime
import time as systime
import unicodecsv as csv

base_url = 'http://s.weibo.com/weibo/'
# Base name for the output CSV files; the original code passed an open file
# object here, which raises a TypeError when concatenated with a string.
output_prefix = 'weibo'
file_index = 6
def scrap():
    global file_index
    with open(r'C:\Users\some.name\query.txt',encoding="utf8") as f:
        each_query = f.readlines()
    each_query = [x.strip() for x in each_query]
    # Each line of query.txt is expected to look like: keyword;date;start;end;pages
    for each in each_query:
        query = each
        s = each.split(';')
        keyword = s[0]
        date = s[1]
        start = s[2]
        end = s[3]
        page = s[4]
        scrap_each_query(keyword, date, start, end, page, query)
        file_index = file_index + 1

def scrap_each_query(keyword, date, start, end, page, query):
    real_keyword = keyword
    # s.weibo.com expects the search keyword to be URL-encoded twice
    keyword = urllib.parse.quote(urllib.parse.quote(keyword))
    # login_url = 'http://m.weibo.com/'
    # driver = webdriver.Chrome()
    # driver.get(login_url)
    # driver.implicitly_wait(2)
    # string = '登录'
    # driver.find_element_by_link_text ( string.decode('utf-8') ).click()
    # driver.implicitly_wait(2)
    # driver.find_element_by_link_text(string.decode('utf-8') ).click()
    # savedCookies = driver.get_cookies()
    # # login code
    # pickle.dump(driver.get_cookies(),open("chrome.pkl","wb"))
    # driver.close()
    all_content = []
    all_time = []
#   profile = FirefoxProfile(r"C:\Users\keith.yuen\AppData\Roaming\Mozilla\Firefox\Profiles\ciyiapug.default-release")
#   driver = webdriver.Firefox(profile)
#   co = webdriver.ChromeOptions()
#   co.add_argument('user-data-dir=/Users/xuzhouyin/Library/Application Support/Google/Chrome/')
    driver = webdriver.Chrome(executable_path=r"C:\Users\some.name\chromedriver_win32\chromedriver.exe")
    url = base_url + keyword + "&typeall=1&suball=1&timescope=custom:" + start + ":" + end + "&page=" + "1"
    driver.get(url)
    systime.sleep(5)
    for i in range(int(page)):
        url = base_url + keyword + "&typeall=1&suball=1&timescope=custom:" + start + ":" + end + "&page=" + str(i + 1)
        # url = "http://s.weibo.com/weibo/%25E5%2585%2583%25E6%2597%25A6&typeall=1&suball=1&timescope=custom:2016-12-31:2016-12-31&Refer=g"
        # chrome_options = Options()
        # chrome_options.add_argument("~/Library/Application Support/Google/Chrome/Default");
        # co = webdriver.ChromeOptions()
        # co.add_argument('/Users/xuzhouyin/Library/Application\ Support/Google/Chrome/Default')
        
        # for cookie in pickle.load(open("chrome.pkl","rb")):
        #   driver.add_cookie(cookie)
        driver.get(url)
        
        # driver.magage().add_cookie(savedCookies)
        page_source = driver.page_source
        soup = BeautifulSoup(page_source,"html.parser")
        content = soup.findAll("p",{ "class" : "comment_txt" })
        time = soup.findAll("a",{ "class" : "W_textb" })
        
        for each in content:
            all_content.append(each.get_text())
        for each in time:
            text = each.get_text()
            # Posts from the current year show only "X月Y日"; older posts start with "YYYY-MM-DD"
            if "月" in text:
                post_time = (str(datetime.datetime.now().year) + "-"
                             + text[0:text.index("月")] + "-"
                             + text[text.index("月") + 1:text.index("日")])
            else:
                post_time = text[0:text.index(" ")]
            all_time.append(post_time)
    driver.close()
    save_to_csv(output_prefix + str(file_index), real_keyword, all_content, all_time, query)

def save_to_csv(filename, keyword, content, time, query):
    # unicodecsv writes encoded bytes, so the file must be opened in binary mode
    with open('./output/' + filename + '.csv', 'wb') as csvfile:
        spamwriter = csv.writer(csvfile, dialect='excel', encoding='utf-16')
        spamwriter.writerow(["query", "Post ID", "keyword", "Post Content", "Post Time"])
        for i in range(len(content)):
            spamwriter.writerow([query, i + 1, keyword, content[i], time[i]])

scrap()

Solution

I think the real problem you are running into is how to log in successfully, or how to avoid having to log in from the script at all.

It is hard to tell from the code whether you logged in successfully, or why the login failed.

So here is another way around the problem: use a browser with its default user data / profile, so that the existing login is reused and the script never has to log in.
Note: this assumes you have already logged in to Weibo manually in the browser you use day to day.

For more information, see the demo below.
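
The original demo link is not available here, so what follows is a minimal sketch of the approach, written in the same Selenium 3 style as the code in the question. The user-data-dir path, profile name and chromedriver path are placeholders to replace with your own, and Chrome must not already be running with that profile when the script starts.

from selenium import webdriver

# Reuse the Chrome profile you normally browse with, so the Weibo login
# cookies stored there are picked up and no login prompt appears.
options = webdriver.ChromeOptions()
# Example Windows path; point this at your own Chrome "User Data" directory.
options.add_argument(r"--user-data-dir=C:\Users\some.name\AppData\Local\Google\Chrome\User Data")
# Select the profile inside that directory ("Default" is the usual one).
options.add_argument("--profile-directory=Default")

driver = webdriver.Chrome(
    executable_path=r"C:\Users\some.name\chromedriver_win32\chromedriver.exe",
    options=options,
)
# If the profile is already logged in, the page loads without a login pop-up.
driver.get("https://weibo.com")
print(driver.title)
driver.quit()

The same options object can be passed to webdriver.Chrome() in the scrap_each_query function above, so every search page is opened with the logged-in profile.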
