
python – sklearn decomposition top terms

Is there a way to determine the top features/terms for each cluster once the data has been decomposed?

In the example from the sklearn documentation, the top terms are extracted by sorting the features and comparing them with the vectorizer's feature_names, both having the same number of features.

http://scikit-learn.org/stable/auto_examples/document_classification_20newsgroups.html

I am wondering how to implement get_top_terms_per_cluster():

X = vectorizer.fit_transform(dataset)  # with m features
X = lsa.fit_transform(X)  # reduce number of features to m'
k_means.fit(X)
get_top_terms_per_cluster()  # out of m features

Solution

Assuming lsa = TruncatedSVD(n_components=k) for some k, the obvious way to get term weights is to exploit the fact that LSA/SVD is a linear transformation: each row of lsa.components_ is a weighted sum of the input terms, so you can multiply the k-means cluster centroids with it. In matrix terms, if the centroids form an (n_clusters × k) matrix and lsa.components_ is (k × m), their product is an (n_clusters × m) matrix whose entry (i, j) is the weight of term j in cluster i.

Let's set some stuff up and train some models:

>>> import numpy as np
>>> from sklearn.datasets import fetch_20newsgroups
>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from sklearn.cluster import KMeans
>>> from sklearn.decomposition import TruncatedSVD
>>> data = fetch_20newsgroups()
>>> vectorizer = TfidfVectorizer(min_df=3, max_df=.95, stop_words='english')
>>> lsa = TruncatedSVD(n_components=10)
>>> km = KMeans(n_clusters=3)
>>> X = vectorizer.fit_transform(data.data)
>>> X_lsa = lsa.fit_transform(X)
>>> km.fit(X_lsa)

Now multiply the LSA components with the k-means centroids:

>>> X.shape
(11314, 38865)
>>> lsa.components_.shape
(10, 38865)
>>> km.cluster_centers_.shape
(3, 10)
>>> weights = np.dot(km.cluster_centers_, lsa.components_)
>>> weights.shape
(3, 38865)

Then print them; because of the sign indeterminacy in LSA, we need to take the absolute values of the weights:

>>> features = vectorizer.get_feature_names()
>>> weights = np.abs(weights)
>>> for i in range(km.n_clusters):
...     top5 = np.argsort(weights[i])[-5:]
...     print(list(zip([features[j] for j in top5], weights[i, top5])))
...     
[(u'escrow', 0.042965734662740895), (u'chip', 0.07227072329320372), (u'encryption', 0.074855609122467345), (u'clipper', 0.075661844826553887), (u'key', 0.095064798549230306)]
[(u'posting', 0.012893125486957332), (u'article', 0.013105911161236845), (u'university', 0.0131617377000081), (u'com', 0.023016036009601809), (u'edu', 0.034532489348082958)]
[(u'don', 0.02087448155525683), (…, 0.024327099321009758), (u'people', 0.033365757270264217), (…, 0.036318114826463417), (u'god', 0.042203130080860719)]

Note that you really do need a stop word filter for this to work. Stop words tend to end up in every component, so they get a high weight in every cluster centroid.
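
Putting the pieces together, here is a minimal sketch of the get_top_terms_per_cluster() helper the question asks for, assuming the vectorizer, lsa and km objects trained as above; the function name and the top_n parameter are illustrative, not part of sklearn's API:

import numpy as np

def get_top_terms_per_cluster(vectorizer, lsa, km, top_n=5):
    # Map the k-means centroids back into term space: each row of
    # lsa.components_ is a weighted sum of the original terms, so
    # centroids (n_clusters x k) dot components_ (k x m) -> (n_clusters x m).
    # Take absolute values because of the sign indeterminacy in LSA.
    weights = np.abs(np.dot(km.cluster_centers_, lsa.components_))
    # Note: get_feature_names() was removed in sklearn 1.2; on newer
    # versions use get_feature_names_out() instead.
    terms = vectorizer.get_feature_names()
    # For each cluster, the indices of the top_n heaviest terms, ascending.
    return [[(terms[j], weights[i, j]) for j in np.argsort(weights[i])[-top_n:]]
            for i in range(km.n_clusters)]

Calling get_top_terms_per_cluster(vectorizer, lsa, km) reproduces the loop output above: one list of (term, weight) pairs per cluster, heaviest last.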
