Text Classification in Practice (2): the textCNN Model
1 Series Overview

This text classification series will run to roughly ten posts, covering classification built on word2vec pre-trained embeddings as well as classification built on the newer pre-trained models (ELMo, BERT, etc.). The planned posts are:

word2vec pre-trained word vectors
textCNN model
charCNN model
Bi-LSTM model
Bi-LSTM + Attention model
RCNN model
Adversarial LSTM model
Transformer model
ELMo pre-trained model
BERT pre-trained model

All of the code lives in the textClassifier repository; if you find it helpful, please give it a star.

2 Dataset

The dataset is the IMDB movie review corpus. There are three data files under the /data/rawData directory: unlabeledTrainData.tsv, labeledTrainData.tsv and testData.tsv. Text classification needs labeled data (labeledTrainData). The data is preprocessed in the same way as in Text Classification in Practice (1): word2vec pre-trained word vectors; the preprocessed file is /data/preprocess/labeledTrain.csv.

3 textCNN Model Structure

textCNN can be viewed as a model over n-grams (see the earlier introductory post on textCNN). The convolution kernels with three feature sizes proposed in the paper Convolutional Neural Networks for Sentence Classification can be thought of as corresponding to 3-grams, 4-grams and 5-grams. The overall structure is: kernels of different sizes (3, 4, 5) first extract features, max pooling is then applied to each feature map, and finally the features extracted by the different kernel sizes are concatenated into the feature vector that is fed into the softmax layer.
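To make the kernel-size / n-gram correspondence and the pooling step concrete, here is a small, self-contained shape walk-through for a single filter branch. The sizes are illustrative placeholders, not necessarily the settings used later; the actual TensorFlow implementation appears in section 7.

# Shape walk-through for one textCNN branch (illustrative sizes only)
sequenceLength = 200    # tokens per padded review
embeddingSize = 200     # word vector dimension
filterSize = 3          # a "3-gram" kernel
numFilters = 128        # kernels learned for this size

# The embedded input has shape [batch, sequenceLength, embeddingSize, 1].
# A VALID convolution with a filterSize x embeddingSize kernel slides only along the word axis:
convHeight = sequenceLength - filterSize + 1     # 198 positions
convShape = (1, convHeight, 1, numFilters)       # [batch, height, width, channels]

# Max pooling over all convHeight positions keeps a single value per kernel:
pooledShape = (1, 1, 1, numFilters)

# Concatenating the pooled outputs of the 3-, 4- and 5-gram branches gives 3 * numFilters features per review.
print(convShape, pooledShape, 3 * numFilters)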
4 Configure the Training Parameters

We put the model parameters, training parameters and so on into a Config class, which makes later tuning easier.

import os
import csv
import time
import datetime
import random
import json
from collections import Counter
from math import sqrt

import gensim
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score, accuracy_score, precision_score, recall_score
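The configuration is split into a training config, a model config and a top-level Config class. The sketch below lists the fields that the rest of the code reads; the concrete values (and the stop-word path) are placeholder assumptions rather than the exact settings of the original run, so adjust them to your own setup.

# Configuration parameters (values below are illustrative defaults; tune them for your own runs)
class TrainingConfig(object):
    epoches = 10              # assumed number of training epochs
    evaluateEvery = 100       # evaluate on the eval set every N steps
    checkpointEvery = 100     # save a checkpoint every N steps
    learningRate = 0.001      # assumed Adam learning rate


class ModelConfig(object):
    embeddingSize = 200       # must match the word2vec vector size
    numFilters = 128          # number of filters per filter size
    filterSizes = [3, 4, 5]   # kernel heights, i.e. 3-gram, 4-gram and 5-gram
    dropoutKeepProb = 0.5
    l2RegLambda = 0.0


class Config(object):
    sequenceLength = 200      # every review is padded/truncated to this length
    batchSize = 128
    dataSource = "../data/preprocess/labeledTrain.csv"
    stopWordSource = "../data/english"   # assumed path of the stop-word file
    rate = 0.8                # proportion of the data used for training
    training = TrainingConfig()
    model = ModelConfig()


config = Config()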
5 Generate the Training Data

1) Load the data and split each sentence into word tokens, removing low-frequency words and stop words.
2) Map the words to indices and build the word-to-index vocabulary, saving it in JSON format so it can be reused at inference time. (Note that some words may not appear in the pre-trained word2vec vectors; such words are simply represented as UNK.)
3) Read the word vectors for the vocabulary out of the pre-trained word2vec model and pass them to the model as the initial embedding values.
4) Split the dataset into a training set and a test set.

# Data preprocessing class that generates the training and eval sets
class Dataset(object):
    def __init__(self, config):
        self._dataSource = config.dataSource
        self._stopWordSource = config.stopWordSource

        self._sequenceLength = config.sequenceLength  # every input sequence is padded/truncated to a fixed length
        self._embeddingSize = config.model.embeddingSize
        self._batchSize = config.batchSize
        self._rate = config.rate

        self._stopWordDict = {}

        self.trainReviews = []
        self.trainLabels = []

        self.evalReviews = []
        self.evalLabels = []

        self.wordEmbedding = None

        self._wordToIndex = {}
        self._indexToWord = {}

    def _readData(self, filePath):
        """
        Read the dataset from a csv file
        """
        df = pd.read_csv(filePath)
        labels = df["sentiment"].tolist()
        review = df["review"].tolist()
        reviews = [line.strip().split() for line in review]
        return reviews, labels

    def _reviewProcess(self, review, sequenceLength, wordToIndex):
        """
        Represent each review in the dataset as a sequence of indices;
        in wordToIndex the index of "pad" is 0
        """
        reviewVec = np.zeros((sequenceLength))
        sequenceLen = sequenceLength

        # if the current review is shorter than the fixed sequence length, only fill its actual length
        if len(review) < sequenceLength:
            sequenceLen = len(review)

        for i in range(sequenceLen):
            if review[i] in wordToIndex:
                reviewVec[i] = wordToIndex[review[i]]
            else:
                reviewVec[i] = wordToIndex["UNK"]

        return reviewVec

    def _genTrainEvalData(self, x, y, rate):
        """
        Generate the training and eval sets
        """
        reviews = []
        labels = []

        # walk over all texts and convert the words into index representations
        for i in range(len(x)):
            reviewVec = self._reviewProcess(x[i], self._sequenceLength, self._wordToIndex)
            reviews.append(reviewVec)
            labels.append([y[i]])

        trainIndex = int(len(x) * rate)

        trainReviews = np.asarray(reviews[:trainIndex], dtype="int64")
        trainLabels = np.array(labels[:trainIndex], dtype="float32")

        evalReviews = np.asarray(reviews[trainIndex:], dtype="int64")
        evalLabels = np.array(labels[trainIndex:], dtype="float32")

        return trainReviews, trainLabels, evalReviews, evalLabels

    def _genVocabulary(self, reviews):
        """
        Generate the word vectors and the word-index mapping dictionaries (the full dataset can be used)
        """
        allWords = [word for review in reviews for word in review]

        # remove stop words
        subWords = [word for word in allWords if word not in self.stopWordDict]

        wordCount = Counter(subWords)  # count word frequencies
        sortWordCount = sorted(wordCount.items(), key=lambda x: x[1], reverse=True)

        # remove low-frequency words
        words = [item[0] for item in sortWordCount if item[1] >= 5]

        vocab, wordEmbedding = self._getWordEmbedding(words)
        self.wordEmbedding = wordEmbedding

        self._wordToIndex = dict(zip(vocab, list(range(len(vocab)))))
        self._indexToWord = dict(zip(list(range(len(vocab))), vocab))

        # save the word-index mappings as json so they can be loaded directly at inference time
        with open("../data/wordJson/wordToIndex.json", "w", encoding="utf-8") as f:
            json.dump(self._wordToIndex, f)

        with open("../data/wordJson/indexToWord.json", "w", encoding="utf-8") as f:
            json.dump(self._indexToWord, f)

    def _getWordEmbedding(self, words):
        """
        Look up the pre-trained word2vec vectors for the words in our dataset
        """
        wordVec = gensim.models.KeyedVectors.load_word2vec_format("../word2vec/word2Vec.bin", binary=True)
        vocab = []
        wordEmbedding = []

        # add "pad" and "UNK"
        vocab.append("pad")
        vocab.append("UNK")
        wordEmbedding.append(np.random.randn(self._embeddingSize))
        wordEmbedding.append(np.random.randn(self._embeddingSize))

        for word in words:
            try:
                vector = wordVec.wv[word]
                vocab.append(word)
                wordEmbedding.append(vector)
            except:
                print(word + " is not in the word2vec vocabulary")

        return vocab, np.array(wordEmbedding)

    def _readStopWord(self, stopWordPath):
        """
        Read the stop words
        """
        with open(stopWordPath, "r") as f:
            stopWords = f.read()
            stopWordList = stopWords.splitlines()
            # store the stop words as a dict so that later lookups are fast
            self.stopWordDict = dict(zip(stopWordList, list(range(len(stopWordList)))))

    def dataGen(self):
        """
        Initialize the training and eval sets
        """
        # initialize the stop words
        self._readStopWord(self._stopWordSource)

        # read the dataset
        reviews, labels = self._readData(self._dataSource)

        # initialize the word-index mapping and the word embedding matrix
        self._genVocabulary(reviews)

        # initialize the training and eval sets
        trainReviews, trainLabels, evalReviews, evalLabels = self._genTrainEvalData(reviews, labels, self._rate)
        self.trainReviews = trainReviews
        self.trainLabels = trainLabels

        self.evalReviews = evalReviews
        self.evalLabels = evalLabels


data = Dataset(config)
data.dataGen()

6 Generate Batch Data

Batches are fed to the model through a generator (a generator avoids loading all of the data into memory at once).

# yield batches of data
def nextBatch(x, y, batchSize):
    """
    Generate batches and output them with a generator
    """
    perm = np.arange(len(x))
    np.random.shuffle(perm)
    x = x[perm]
    y = y[perm]

    numBatches = len(x) // batchSize

    for i in range(numBatches):
        start = i * batchSize
        end = start + batchSize
        batchX = np.array(x[start: end], dtype="int64")
        batchY = np.array(y[start: end], dtype="float32")

        yield batchX, batchY
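The generator can be driven directly as a quick sanity check; this is only an illustration of its output shapes, and the real training loop in section 9 consumes it in exactly the same way.

# Illustrative use of the batch generator
for batchX, batchY in nextBatch(data.trainReviews, data.trainLabels, config.batchSize):
    print(batchX.shape, batchY.shape)   # (batchSize, sequenceLength) and (batchSize, 1)
    break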
7 The textCNN Model

# build the model
class TextCNN(object):
    """
    Text CNN for text classification
    """
    def __init__(self, config, wordEmbedding):

        # define the model inputs
        self.inputX = tf.placeholder(tf.int32, [None, config.sequenceLength], name="inputX")
        self.inputY = tf.placeholder(tf.float32, [None, 1], name="inputY")

        self.dropoutKeepProb = tf.placeholder(tf.float32, name="dropoutKeepProb")

        # define the l2 loss
        l2Loss = tf.constant(0.0)

        # word embedding layer
        with tf.name_scope("embedding"):

            # initialize the embedding matrix with the pre-trained word vectors
            self.W = tf.Variable(tf.cast(wordEmbedding, dtype=tf.float32, name="word2vec"), name="W")
            # map the input words to word vectors, shape [batch_size, sequence_length, embedding_size]
            self.embeddedWords = tf.nn.embedding_lookup(self.W, self.inputX)
            # the convolution input is four-dimensional [batch_size, width, height, channel], so add a dimension with tf.expand_dims
            self.embeddedWordsExpanded = tf.expand_dims(self.embeddedWords, -1)

        # create the convolution and pooling layers
        pooledOutputs = []
        # there are three filter sizes, 3, 4 and 5; textCNN is a multi-channel single-layer convolution model,
        # which can be seen as a fusion of three single-layer convolution models
        for i, filterSize in enumerate(config.model.filterSizes):
            with tf.name_scope("conv-maxpool-%s" % filterSize):
                # convolution layer: the kernel size is filterSize * embeddingSize and there are numFilters kernels
                # initialize the weight matrix and bias
                filterShape = [filterSize, config.model.embeddingSize, 1, config.model.numFilters]
                W = tf.Variable(tf.truncated_normal(filterShape, stddev=0.1), name="W")
                b = tf.Variable(tf.constant(0.1, shape=[config.model.numFilters]), name="b")
                conv = tf.nn.conv2d(
                    self.embeddedWordsExpanded,
                    W,
                    strides=[1, 1, 1, 1],
                    padding="VALID",
                    name="conv")

                # non-linear mapping with relu
                h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")

                # pooling layer: max pooling takes the maximum over the convolved sequence
                pooled = tf.nn.max_pool(
                    h,
                    ksize=[1, config.sequenceLength - filterSize + 1, 1, 1],  # ksize shape: [batch, height, width, channels]
                    strides=[1, 1, 1, 1],
                    padding="VALID",
                    name="pool")
                pooledOutputs.append(pooled)  # collect the outputs of the three filter sizes in one list

        # total output length of the CNN
        numFiltersTotal = config.model.numFilters * len(config.model.filterSizes)

        # the pooled outputs keep their dimensionality; concatenate them along the last (channel) dimension
        self.hPool = tf.concat(pooledOutputs, 3)

        # flatten to two dimensions before the fully connected layer
        self.hPoolFlat = tf.reshape(self.hPool, [-1, numFiltersTotal])

        # dropout
        with tf.name_scope("dropout"):
            self.hDrop = tf.nn.dropout(self.hPoolFlat, self.dropoutKeepProb)

        # output of the fully connected layer
        with tf.name_scope("output"):
            outputW = tf.get_variable(
                "outputW",
                shape=[numFiltersTotal, 1],
                initializer=tf.contrib.layers.xavier_initializer())
            outputB = tf.Variable(tf.constant(0.1, shape=[1]), name="outputB")
            l2Loss += tf.nn.l2_loss(outputW)
            l2Loss += tf.nn.l2_loss(outputB)
            self.predictions = tf.nn.xw_plus_b(self.hDrop, outputW, outputB, name="predictions")
            # threshold the raw score at 0.5 to get the binary prediction
            self.binaryPreds = tf.cast(tf.greater_equal(self.predictions, 0.5), tf.float32, name="binaryPreds")

        # compute the binary cross-entropy loss
        with tf.name_scope("loss"):
            losses = tf.nn.sigmoid_cross_entropy_with_logits(logits=self.predictions, labels=self.inputY)
            self.loss = tf.reduce_mean(losses) + config.model.l2RegLambda * l2Loss

8 Define the Metric Functions

# performance metric helpers
def mean(item):
    return sum(item) / len(item)


def genMetrics(trueY, predY, binaryPredY):
    """
    Compute accuracy and auc (plus precision and recall)
    """
    auc = roc_auc_score(trueY, predY)
    accuracy = accuracy_score(trueY, binaryPredY)
    precision = precision_score(trueY, binaryPredY)
    recall = recall_score(trueY, binaryPredY)

    return round(accuracy, 4), round(auc, 4), round(precision, 4), round(recall, 4)
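A toy call shows the expected inputs and the rounding. The numbers here are made up purely for illustration; in the training loop below, predY receives the model's raw scores (cnn.predictions) and binaryPredY the thresholded predictions (cnn.binaryPreds).

# toy example of genMetrics with made-up values
trueY = np.array([1, 0, 1, 1])
predY = np.array([2.3, -1.2, -0.4, 0.9])          # raw scores, fed to roc_auc_score
binaryPredY = (predY >= 0.5).astype("float32")    # the same 0.5 threshold the model applies
print(genMetrics(trueY, predY, binaryPredY))      # (accuracy, auc, precision, recall), each rounded to 4 digits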
9 Train the Model

During training we write TensorBoard summaries and define two ways of saving the model.

# train the model

# get the training and eval sets
trainReviews = data.trainReviews
trainLabels = data.trainLabels
evalReviews = data.evalReviews
evalLabels = data.evalLabels

wordEmbedding = data.wordEmbedding

# define the computation graph
with tf.Graph().as_default():

    session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
    session_conf.gpu_options.allow_growth = True
    session_conf.gpu_options.per_process_gpu_memory_fraction = 0.9  # cap the GPU memory usage

    sess = tf.Session(config=session_conf)  # define the session

    with sess.as_default():
        cnn = TextCNN(config, wordEmbedding)

        globalStep = tf.Variable(0, name="globalStep", trainable=False)
        # define the optimizer with the learning rate
        optimizer = tf.train.AdamOptimizer(config.training.learningRate)
        # compute the gradients, getting (gradient, variable) pairs
        gradsAndVars = optimizer.compute_gradients(cnn.loss)
        # apply the gradients to the variables to get the training op
        trainOp = optimizer.apply_gradients(gradsAndVars, global_step=globalStep)

        # summaries for TensorBoard
        gradSummaries = []
        for g, v in gradsAndVars:
            if g is not None:
                tf.summary.histogram("{}/grad/hist".format(v.name), g)
                tf.summary.scalar("{}/grad/sparsity".format(v.name), tf.nn.zero_fraction(g))

        outDir = os.path.abspath(os.path.join(os.path.curdir, "summarys"))
        print("Writing to {}\n".format(outDir))

        lossSummary = tf.summary.scalar("loss", cnn.loss)
        summaryOp = tf.summary.merge_all()

        trainSummaryDir = os.path.join(outDir, "train")
        trainSummaryWriter = tf.summary.FileWriter(trainSummaryDir, sess.graph)

        evalSummaryDir = os.path.join(outDir, "eval")
        evalSummaryWriter = tf.summary.FileWriter(evalSummaryDir, sess.graph)

        # one way to save the model: checkpoint files
        saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)

        # another way to save the model: export a pb file with SavedModelBuilder
        builder = tf.saved_model.builder.SavedModelBuilder("../model/textCNN/savedModel")

        # initialize all variables
        sess.run(tf.global_variables_initializer())

        def trainStep(batchX, batchY):
            """
            Training step
            """
            feed_dict = {
                cnn.inputX: batchX,
                cnn.inputY: batchY,
                cnn.dropoutKeepProb: config.model.dropoutKeepProb
            }
            _, summary, step, loss, predictions, binaryPreds = sess.run(
                [trainOp, summaryOp, globalStep, cnn.loss, cnn.predictions, cnn.binaryPreds],
                feed_dict)
            timeStr = datetime.datetime.now().isoformat()
            acc, auc, precision, recall = genMetrics(batchY, predictions, binaryPreds)
            print("{}, step: {}, loss: {}, acc: {}, auc: {}, precision: {}, recall: {}".format(
                timeStr, step, loss, acc, auc, precision, recall))
            trainSummaryWriter.add_summary(summary, step)

        def devStep(batchX, batchY):
            """
            Evaluation step
            """
            feed_dict = {
                cnn.inputX: batchX,
                cnn.inputY: batchY,
                cnn.dropoutKeepProb: 1.0
            }
            summary, step, loss, predictions, binaryPreds = sess.run(
                [summaryOp, globalStep, cnn.loss, cnn.predictions, cnn.binaryPreds],
                feed_dict)

            acc, auc, precision, recall = genMetrics(batchY, predictions, binaryPreds)

            evalSummaryWriter.add_summary(summary, step)

            return loss, acc, auc, precision, recall

        for i in range(config.training.epoches):
            # train the model
            print("start training model")
            for batchTrain in nextBatch(trainReviews, trainLabels, config.batchSize):
                trainStep(batchTrain[0], batchTrain[1])

                currentStep = tf.train.global_step(sess, globalStep)
                if currentStep % config.training.evaluateEvery == 0:
                    print("\nEvaluation:")

                    losses = []
                    accs = []
                    aucs = []
                    precisions = []
                    recalls = []

                    for batchEval in nextBatch(evalReviews, evalLabels, config.batchSize):
                        loss, acc, auc, precision, recall = devStep(batchEval[0], batchEval[1])
                        losses.append(loss)
                        accs.append(acc)
                        aucs.append(auc)
                        precisions.append(precision)
                        recalls.append(recall)

                    time_str = datetime.datetime.now().isoformat()
                    print("{}, step: {}, loss: {}, acc: {}, auc: {}, precision: {}, recall: {}".format(
                        time_str, currentStep, mean(losses), mean(accs), mean(aucs),
                        mean(precisions), mean(recalls)))

                if currentStep % config.training.checkpointEvery == 0:
                    # the other way to save the model: write a checkpoint file
                    path = saver.save(sess, "../model/textCNN/model/my-model", global_step=currentStep)
                    print("Saved model checkpoint to {}\n".format(path))
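The SavedModelBuilder created above is the second saving mechanism, but the export call itself is not part of the training loop shown here. Below is a minimal sketch, meant to run once after training and still inside the with sess.as_default(): block, of writing the pb file with a prediction signature; the tensor names come from the TextCNN class, while the "predict" signature key and the serving tag are ordinary TF 1.x conventions assumed for illustration.

# minimal sketch: export the trained graph with the SavedModelBuilder created above
inputs = {
    "inputX": tf.saved_model.utils.build_tensor_info(cnn.inputX),
    "keepProb": tf.saved_model.utils.build_tensor_info(cnn.dropoutKeepProb),
}
outputs = {"binaryPreds": tf.saved_model.utils.build_tensor_info(cnn.binaryPreds)}

predictionSignature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={"predict": predictionSignature})
builder.save()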
10 Summary

When building deep learning models in TensorFlow, it helps to have a code framework of your own. The code above is the structure I like to use; it has four main parts: parameter configuration, training data generation, model structure, and model training. When building other models later, following this structure makes them quick to implement. I suggest finding the code structure that suits you best; much of the code can then be reused.