How to fix Gensim TF-IDF: how do I perform TF-IDF correctly with Gensim?
I'm trying to do some NLP (more precisely, a TF-IDF project) as part of my bachelor's thesis.
I exported a small portion of it into a single document called "thesis.txt", and I seem to be running into a problem when fitting the cleaned text data into a gensim dictionary.
All the words are tokenized and stored in a bag of words, and I can't figure out what I'm doing wrong.
Here is the error I get:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-317-73828cccaebe> in <module>
17
18 #Create dictionary
---> 19 dictionary = Dictionary(tokens_no_stop)
20
21 #Create bag of words
~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in __init__(self, documents, prune_at)
     89
     90         if documents is not None:
---> 91             self.add_documents(documents, prune_at=prune_at)
     92
     93     def __getitem__(self, tokenid):
~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in add_documents(self, documents, prune_at)
    210
    211             # update Dictionary with the document
--> 212             self.doc2bow(document, allow_update=True)  # ignore the result, here we only care about updating token ids
    213
    214         logger.info(
~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in doc2bow(self, document, allow_update, return_missing)
    250         """
    251         if isinstance(document, string_types):
--> 252             raise TypeError("doc2bow expects an array of unicode tokens on input, not a single string")
    253
    254         # Construct (word, frequency) mapping.
TypeError: doc2bow expects an array of unicode tokens on input, not a single string
Thanks in advance for your help :) (my code is below)
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from collections import Counter
from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel
f = open('/Users/romeoleon/Desktop/Python & R/NLP/TRIAL_THESIS/thesis.txt','r')
text = f.read()
#Tokenize text
Tokens = word_tokenize(text)
#Lower case everything
Tokens = [t.lower() for t in Tokens]
#Keep only letters
tokens_alpha = [t for t in Tokens if t.isalpha()]
#Remove stopwords
tokens_no_stop = [t for t in tokens_alpha if t not in stopwords.words('french')]
#Create Lemmatizer
lem = WordNetLemmatizer()
lemmatized = [lem.lemmatize(t) for t in tokens_no_stop]
#Create dictionary
dictionary = Dictionary(tokens_no_stop)
#Create bag of words
bow = [dictionary.doc2bow(line) for line in tokens_no_stop]
#Model TFID
tfidf = TfidfModel(bow)
bow_tfidf = tfidf[bow]
Solution
Your tokens_no_stop is a flat list of strings, but Dictionary expects a list of lists of strings (more precisely, an iterable of documents, where each document is a list of string tokens). The same applies to doc2bow: it takes one document's token list per call, not one token per call.