How to use the Wiki FastText (.vec) and Google News word2vec (.bin) pre-trained files as Keras Embedding layer weights


I have a function that extracts the pre-trained embeddings from GloVe.txt and loads them as Keras Embedding layer weights, but how do I do the same for the two files given above?

This accepted stackoverflow answer gives me the feeling that a .vec file can be treated as a .txt file, and that we can probably use the same technique to extract the embeddings from fasttext.vec that we use for glove.txt. Is my understanding correct?
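One way to sanity-check this is to look at the first line of each file: a word2vec-style .vec file starts with a small header (vocabulary size and dimension), while a GloVe-style .txt file starts directly with the first word vector. A minimal check, where the file names are just placeholders for whatever copies were downloaded:

    # word2vec text style (.vec): the first line is a header such as "<vocab_size> <dim>"
    with open('wiki-news-300d-1M.vec', encoding='utf8', errors='ignore') as f:
        print(f.readline())

    # GloVe style (.txt): no header, the first line is already "word v1 v2 v3 ..."
    with open('glove.840B.300d.txt', encoding='utf8', errors='ignore') as f:
        print(f.readline())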

I went through a lot of blogs and Stack answers to find out how to deal with the binary file. I found in this stack answer that a binary .bin file is the MODEL itself rather than the embeddings, and that you can convert the bin file to a text file using Gensim. I think that text file holds the embeddings, and we can then load the pre-trained embeddings the same way we load GloVe. Is my understanding correct?
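As far as I can tell, gensim can also serve vectors straight from the binary file, so the text conversion is mainly for convenience. A minimal sketch, assuming gensim is installed and the file name matches the downloaded archive:

    from gensim.models import KeyedVectors

    # load the binary word2vec vectors directly; no .txt conversion is needed for lookups
    kv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

    print('hello' in kv)       # True if the word is in the pre-trained vocabulary
    print(kv['hello'].shape)   # (300,) -- the 300-dimensional vector for "hello"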

Here is the code that does this. I want to know whether I am on the right path, because I could not find a satisfactory answer anywhere.

     from gensim.models import KeyedVectors
     from tensorflow.keras.preprocessing.text import Tokenizer
     from tensorflow.keras.preprocessing.sequence import pad_sequences

     tokenizer = Tokenizer()                        # Keras Tokenizer()
     tokenizer.fit_on_texts(data)                   # data is a list of texts / sentences
     vocab_size = len(tokenizer.word_index) + 1     # extra 1 for unknown words
     encoded_docs = tokenizer.texts_to_sequences(data)
     padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')   # max_length is, say, 30


     model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)  # this loads the binary Word2Vec model
     model.save_word2vec_format('GoogleNews-vectors-negative300.txt', binary=False)  # this saves the VECTORS to a text file. Can it be loaded with the function below?


    from numpy import asarray, zeros

    def load_embeddings(vocab_size, fitted_tokenizer, emb_file_path, emb_dim=300):
        '''
        It can load GloVe.txt for sure. But is it the right way to load paragram.txt,
        fasttext.vec and word2vec.bin once converted to .txt?
        '''
        embeddings_index = dict()
        with open(emb_file_path, encoding='utf8', errors='ignore') as f:
            for line in f:
                values = line.split()
                word = values[0]
                coefs = asarray(values[1:], dtype='float32')
                embeddings_index[word] = coefs

        embedding_matrix = zeros((vocab_size, emb_dim))
        for word, i in fitted_tokenizer.word_index.items():  # iterate over the fitted tokenizer's vocabulary
            embedding_vector = embeddings_index.get(word)
            if embedding_vector is not None:
                embedding_matrix[i] = embedding_vector

        return embedding_matrix
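For completeness, this is how I plug the returned matrix into an Embedding layer. The file name and the tiny classifier head below are just placeholders for illustration; depending on the Keras version, embeddings_initializer=Constant(embedding_matrix) can be used instead of the weights argument:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Flatten, Dense

    embedding_matrix = load_embeddings(vocab_size, tokenizer, 'glove.840B.300d.txt', emb_dim=300)

    model = Sequential([
        # trainable=False keeps the pre-trained vectors frozen during training
        Embedding(vocab_size, 300, weights=[embedding_matrix],
                  input_length=max_length, trainable=False),
        Flatten(),
        Dense(1, activation='sigmoid'),   # toy binary-classification head, only for illustration
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])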

My question is: can we load the .vec file directly, and can we load the .bin file with the load_embeddings() function I described above?

Solution

I have found the answer myself; please update this if anything is wrong.

import numpy as np
from gensim.models import KeyedVectors
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences


class PreProcess():
    # check: https://stackabuse.com/pythons-classmethod-and-staticmethod-explained/ for @staticmethod use
    @staticmethod # You don't have to create an object of this class in order to access this method: PreProcess.preprocess_data()
    def preprocess_data(data:list,max_length:int):
        '''
        Method to parse, tokenize, build the vocabulary and pad the text data
        args:
            data: List of all the texts as: ['this is text 1','this is text 2 of different length']
            max_length: maximum length to consider for an individual text entry in data
        out:
            vocab size,fitted tokenizer object,encoded input text and padded input text
        '''
        tokenizer = Tokenizer() # set num_words,oov_token arguments depending on your usecase
        tokenizer.fit_on_texts(data)
        vocab_size = len(tokenizer.word_index) + 1 # extra 1 for unknown words which will be all 0s when loading pre trained embeddings
        encoded_docs = tokenizer.texts_to_sequences(data)
        padded_docs = pad_sequences(encoded_docs,maxlen=max_length,padding='post')  
        return vocab_size,tokenizer,encoded_docs,padded_docs
    
    
    @staticmethod
    def load_pretrained_embeddings(fitted_tokenizer, vocab_size:int, emb_file:str, emb_dim:int=300):
        '''
        All 300D Embeddings: https://www.kaggle.com/reppy4620/embeddings
        '''
        if '.bin' in emb_file: # a binary file is not the embeddings but the MODEL itself; it could be a fasttext or word2vec model
            model = KeyedVectors.load_word2vec_format(emb_file, binary=True)
            # emb_file = emb_file.replace('.bin','.txt') # general purpose path
            emb_file = './new_emb_file.txt' # on Kaggle you can only write to the output directory
            model.save_word2vec_format(emb_file, binary=False)

        # open and read the contents of the .txt / .vec file (.vec is the same as a .txt file)
        embeddings_index = dict()
        with open(emb_file, encoding="utf8", errors='ignore') as f:
            for line in f: # each line is like: hello 0.9 0.3 0.5 0.01 0.001 ...
                values = line.rstrip().split(' ')
                if len(values) <= 2:
                    # word2vec-style files (.vec / converted .bin) start with a "vocab_size dim" header
                    # line, which GloVe-style files don't have; in most Kaggle kernels you'll see the
                    # equivalent check "if len(line) > 100" instead.
                    # check this link: https://radimrehurek.com/gensim/scripts/glove2word2vec.html
                    continue
                word = values[0] # first value is "hello"
                coefs = np.asarray(values[1:], dtype='float32') # everything else is the vector of "hello"
                embeddings_index[word] = coefs

        # create the embedding matrix, i.e. the Embedding weights, based on your data
        embedding_matrix = np.zeros((vocab_size, emb_dim)) # build embeddings based on our vocab size
        for word, i in fitted_tokenizer.word_index.items(): # go through each vocab token one by one
            embedding_vector = embeddings_index.get(word) # look it up in the loaded embeddings
            if embedding_vector is not None:
                embedding_matrix[i] = embedding_vector # if present, fill in the corresponding row

        return embedding_matrix

             
    @staticmethod
    def load_ELMO(data):
        pass
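Usage sketch for the class above; the texts and the embedding file path are placeholders, and the Embedding call follows the same pattern as in the question:

    from tensorflow.keras.layers import Embedding

    data = ['this is text 1', 'this is text 2 of different length']
    max_length = 30

    vocab_size, tokenizer, encoded_docs, padded_docs = PreProcess.preprocess_data(data, max_length)
    embedding_matrix = PreProcess.load_pretrained_embeddings(
        fitted_tokenizer=tokenizer,
        vocab_size=vocab_size,
        emb_file='GoogleNews-vectors-negative300.bin',   # or a .vec / .txt embedding file
        emb_dim=300)

    # frozen pre-trained weights; set trainable=True to fine-tune them
    embedding_layer = Embedding(vocab_size, 300, weights=[embedding_matrix],
                                input_length=max_length, trainable=False)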
    
    

