Problem saving pretrained fastText vectors in "word2vec" format with _save_word2vec_format

How do I fix a problem saving pretrained fastText vectors in "word2vec" format with _save_word2vec_format?

For a list of words, I want to fetch their fastText vectors and save them in the same "word2vec" .txt format (word + space + vector, in plain text).
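(For reference, a minimal sketch of that text format, using made-up words and vectors: a header line "count dimension", then one line per word with the word followed by its space-separated vector components.)

```python
words = ["hello", "world"]
vectors = [[0.1, 0.2], [0.3, 0.4]]

# header: "<num_words> <dimension>", then one "word v1 v2 ..." line per word
lines = ["%d %d" % (len(words), len(vectors[0]))]
for w, row in zip(words, vectors):
    lines.append("%s %s" % (w, " ".join(str(v) for v in row)))

print("\n".join(lines))
```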

Here is what I did:

import numpy as np
from gensim.models.fasttext import load_facebook_model

dict = open("word_list.txt", "r")  # the list of words I have, one per line

path = "cc.en.300.bin"

model = load_facebook_model(path)

vectors = []

words = []

for word in dict:
    vectors.append(model[word])
    words.append(word)

vectors_array = np.array(vectors)
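(An aside, not part of the original question: iterating over a file object yields lines with their trailing newline still attached, so the words should be stripped before lookup, or the saved "word vector" lines will carry embedded newlines. A small sketch, using `io.StringIO` as a stand-in for the real `word_list.txt`:)

```python
import io

# stand-in for open("word_list.txt"): iterating a file yields lines
# WITH their trailing "\n" characters, so strip each line
fake_file = io.StringIO("hello\nworld\n")
words = [line.strip() for line in fake_file if line.strip()]
print(words)
```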


I want to take the list `words` and the ndarray `vectors_array` and save them in the original .txt format.

I tried using the function `_save_word2vec_format` from gensim:

def _save_word2vec_format(fname, vocab, vectors, fvocab=None, binary=False, total_vec=None):
    """Store the input-hidden weight matrix in the same format used by the original
    C word2vec-tool, for compatibility.
    Parameters
    ----------
    fname : str
        The file path used to save the vectors in.
    vocab : dict
        The vocabulary of words.
    vectors : numpy.array
        The vectors to be stored.
    fvocab : str, optional
        File path used to save the vocabulary.
    binary : bool, optional
        If True, the data will be saved in binary word2vec format, else it will be saved in plain text.
    total_vec : int, optional
        Explicitly specify total number of vectors
        (in case word vectors are appended with document vectors afterwards).
    """
    if not (vocab or vectors):
        raise RuntimeError("no input")
    if total_vec is None:
        total_vec = len(vocab)
    vector_size = vectors.shape[1]
    if fvocab is not None:
        logger.info("storing vocabulary in %s", fvocab)
        with utils.open(fvocab, 'wb') as vout:
            for word, vocab_ in sorted(iteritems(vocab), key=lambda item: -item[1].count):
                vout.write(utils.to_utf8("%s %s\n" % (word, vocab_.count)))
    logger.info("storing %sx%s projection weights into %s", total_vec, vector_size, fname)
    assert (len(vocab), vector_size) == vectors.shape
    with utils.open(fname, 'wb') as fout:
        fout.write(utils.to_utf8("%s %s\n" % (total_vec, vector_size)))
        # store in sorted order: most frequent words at the top
        for word, vocab_ in sorted(iteritems(vocab), key=lambda item: -item[1].count):
            row = vectors[vocab_.index]
            if binary:
                row = row.astype(REAL)
                fout.write(utils.to_utf8(word) + b" " + row.tostring())
            else:
                fout.write(utils.to_utf8("%s %s\n" % (word, ' '.join(repr(val) for val in row))))

But I got this error:

INFO:gensim.models._fasttext_bin:loading 2000000 words for fastText model from cc.en.300.bin
INFO:gensim.models.word2vec:resetting layer weights
INFO:gensim.models.word2vec:Updating model with new vocabulary
INFO:gensim.models.word2vec:New added 2000000 unique words (50% of original 4000000) and increased the count of 2000000 pre-existing words (50% of original 4000000)
INFO:gensim.models.word2vec:deleting the raw counts dictionary of 2000000 items
INFO:gensim.models.word2vec:sample=1e-05 downsamples 6996 most-common words
INFO:gensim.models.word2vec:downsampling leaves estimated 390315457935 word corpus (70.7% of prior 552001338161)
INFO:gensim.models.fasttext:loaded (4000000, 300) weight matrix for fastText model from cc.en.300.bin
trials.py:42: DeprecationWarning: Call to deprecated `__getitem__` (Method will be removed in 4.0.0, use self.wv.__getitem__() instead).
  vectors.append(model[word])
INFO:__main__:storing 8664x300 projection weights into arrays_to_txt_oct3.txt
loading the model for: en
finish loading the model for: en
len(vectors): 8664
len(words):  8664
shape of vectors_array (8664, 300)
mission launched!
Traceback (most recent call last):
  File "trials.py", line 102, in <module>
    _save_word2vec_format(YOUR_VEC_FILE_PATH, words, vectors_array, total_vec=None)
  File "trials.py", line 89, in _save_word2vec_format
    for word, vocab_ in sorted(iteritems(vocab), key=lambda item: -item[1].count):
  File "/cs/snapless/oabend/tailin/transdiv/lib/python3.7/site-packages/six.py", line 589, in iteritems
    return iter(d.items(**kw))
AttributeError: 'list' object has no attribute 'items'

I understand it is related to the second argument of the function, but I don't see how to get my word list into the dict object it expects.

I tried this:

#convert list of words into a dictionary
words_dict = {i:x for i,x in enumerate(words)}

But I still got an error:

Traceback (most recent call last):
  File "trials.py", line 99, in <module>
    _save_word2vec_format(YOUR_VEC_FILE_PATH, dict, vectors_array, total_vec=None)
  File "trials.py", line 77, in _save_word2vec_format
    total_vec = len(vocab)
TypeError: object of type '_io.TextIOWrapper' has no len()

I don't understand how to pass in my word list in the right format...
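(For context: the `vocab` argument that `_save_word2vec_format` iterates over must map each word to an object exposing `.count` and `.index` attributes, like gensim's internal vocab entries; a plain list, a file handle, or an index-to-word dict won't satisfy it. A minimal sketch of the expected shape, with `SimpleNamespace` standing in for gensim's vocab objects:)

```python
from types import SimpleNamespace

words = ["hello", "world"]
# word -> object carrying .count (frequency) and .index (row in the vectors array)
vocab = {w: SimpleNamespace(count=1, index=i) for i, w in enumerate(words)}

# the iteration pattern the function applies to its `vocab` argument
ordered = sorted(vocab.items(), key=lambda item: -item[1].count)
print([w for w, _ in ordered])
```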

Solution

You can directly import and reuse gensim's KeyedVectors class to assemble your own set of (sub)word vectors as one KeyedVectors instance, then use its .save_word2vec_format() method.

For example, roughly this should work:

from gensim.models import KeyedVectors
from gensim.models.fasttext import load_facebook_model

words_file = open("word_list.txt", "r")  # your word-list as a text file
words_list = [line.strip() for line in words_file]  # one word per line, newlines stripped

fasttext_path = "cc.en.300.bin"
model = load_facebook_model(fasttext_path)

kv = KeyedVectors(vector_size=model.wv.vector_size)  # new empty KV object

vectors = []
for word in words_list:
    vectors.append(model.wv[word])  # vectors for words_list, in same order

kv.add(words_list, vectors)  # adds those keys (words) & vectors as batch

kv.save_word2vec_format('my_kv.vec', binary=False)
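(As a rough sanity check, a sketch not from the original answer: the saved text file can be validated by parsing its header line. The `check_word2vec_header` helper below is hypothetical, shown on an in-memory sample rather than the real `my_kv.vec`:)

```python
def check_word2vec_header(lines):
    # first line of a text-format word2vec file is "<num_vectors> <dim>"
    n, dim = map(int, lines[0].split())
    assert len(lines) - 1 == n, "row count does not match header"
    for line in lines[1:]:
        assert len(line.split(" ")) == dim + 1, "malformed vector row"
    return n, dim

# in-memory sample standing in for the contents of the saved file
sample = ["2 3", "hello 0.1 0.2 0.3", "world 0.4 0.5 0.6"]
print(check_word2vec_header(sample))
```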

