【TensorFlow 2.0】Structured data: Titanic survival prediction
1. Preparing the data

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import models, layers

dftrain_raw = pd.read_csv('./data/titanic/train.csv')
dftest_raw = pd.read_csv('./data/titanic/test.csv')
dftrain_raw.head(10)
```

Field descriptions:

- Survived: 0 = did not survive, 1 = survived (the label)
- Pclass: ticket class (1, 2, 3)
- Name: passenger name
- Sex: passenger sex
- Age: passenger age
- SibSp: number of siblings/spouses aboard
- Parch: number of parents/children aboard
- Ticket: ticket number
- Fare: ticket fare
- Cabin: cabin number
- Embarked: port of embarkation (S, C, Q)
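Before encoding anything, it can help to see which columns actually contain missing values, since those drive the fillna and null-indicator steps below. A minimal sketch, using a hypothetical three-row stand-in for the training frame (the real data comes from ./data/titanic/train.csv, which is not reproduced here):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in rows with the standard Titanic columns.
dftrain_raw = pd.DataFrame({
    'Survived': [0, 1, 1],
    'Pclass':   [3, 1, 3],
    'Sex':      ['male', 'female', 'female'],
    'Age':      [22.0, 38.0, np.nan],
    'Cabin':    [np.nan, 'C85', np.nan],
    'Embarked': ['S', 'C', np.nan],
})

# Count missing values per column to decide which fields need
# fillna / null-indicator handling during preprocessing.
missing = dftrain_raw.isna().sum()
print(missing)
```

On the full dataset this immediately shows that Age, Cabin, and Embarked are the columns that need missing-value treatment.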
2. Exploring the data

(1) Label distribution

```python
%matplotlib inline
%config InlineBackend.figure_format = 'png'

ax = dftrain_raw['Survived'].value_counts().plot(
    kind='bar', figsize=(12, 8), fontsize=15, rot=0)
ax.set_ylabel('Counts', fontsize=15)
ax.set_xlabel('Survived', fontsize=15)
plt.show()
```

(2) Age distribution

```python
ax = dftrain_raw['Age'].plot(
    kind='hist', bins=20, color='purple', figsize=(12, 8), fontsize=15)
ax.set_ylabel('Frequency', fontsize=15)
ax.set_xlabel('Age', fontsize=15)
plt.show()
```

(3) Correlation between age and the label

```python
ax = dftrain_raw.query('Survived == 0')['Age'].plot(
    kind='density', figsize=(12, 8), fontsize=15)
dftrain_raw.query('Survived == 1')['Age'].plot(
    kind='density', figsize=(12, 8), fontsize=15)
ax.legend(['Survived==0', 'Survived==1'], fontsize=12)
ax.set_ylabel('Density', fontsize=15)
ax.set_xlabel('Age', fontsize=15)
plt.show()
```

3. Preprocessing the data

(1) One-hot encode Pclass

```python
dfresult = pd.DataFrame()

# One-hot encode the ticket class
dfPclass = pd.get_dummies(dftrain_raw['Pclass'])
# Set the column names
dfPclass.columns = ['Pclass_' + str(x) for x in dfPclass.columns]
dfresult = pd.concat([dfresult, dfPclass], axis=1)
dfresult
```

(2) One-hot encode Sex

```python
dfSex = pd.get_dummies(dftrain_raw['Sex'])
dfresult = pd.concat([dfresult, dfSex], axis=1)
dfresult
```

(3) Fill missing Age values with 0, and add an Age_null column marking where the values were missing

```python
# Fill missing values with 0
dfresult['Age'] = dftrain_raw['Age'].fillna(0)
# Age_null is 1 where Age was missing and 0 elsewhere
dfresult['Age_null'] = pd.isna(dftrain_raw['Age']).astype('int32')
dfresult
```

(4) Take SibSp, Parch, and Fare as-is

```python
dfresult['SibSp'] = dftrain_raw['SibSp']
dfresult['Parch'] = dftrain_raw['Parch']
dfresult['Fare'] = dftrain_raw['Fare']
dfresult
```

(5) Mark where Cabin is missing

```python
dfresult['Cabin_null'] = pd.isna(dftrain_raw['Cabin']).astype('int32')
dfresult
```

(6) One-hot encode Embarked

```python
# Note dummy_na=True, which gives missing values their own indicator column
dfEmbarked = pd.get_dummies(dftrain_raw['Embarked'], dummy_na=True)
dfEmbarked.columns = ['Embarked_' + str(x) for x in dfEmbarked.columns]
dfresult = pd.concat([dfresult, dfEmbarked], axis=1)
dfresult
```

Finally, we wrap the steps above into a single function:

```python
def preprocessing(dfdata):
    dfresult = pd.DataFrame()

    # Pclass
    dfPclass = pd.get_dummies(dfdata['Pclass'])
    dfPclass.columns = ['Pclass_' + str(x) for x in dfPclass.columns]
    dfresult = pd.concat([dfresult, dfPclass], axis=1)

    # Sex
    dfSex = pd.get_dummies(dfdata['Sex'])
    dfresult = pd.concat([dfresult, dfSex], axis=1)

    # Age
    dfresult['Age'] = dfdata['Age'].fillna(0)
    dfresult['Age_null'] = pd.isna(dfdata['Age']).astype('int32')

    # SibSp, Parch, Fare
    dfresult['SibSp'] = dfdata['SibSp']
    dfresult['Parch'] = dfdata['Parch']
    dfresult['Fare'] = dfdata['Fare']

    # Cabin
    dfresult['Cabin_null'] = pd.isna(dfdata['Cabin']).astype('int32')

    # Embarked
    dfEmbarked = pd.get_dummies(dfdata['Embarked'], dummy_na=True)
    dfEmbarked.columns = ['Embarked_' + str(x) for x in dfEmbarked.columns]
    dfresult = pd.concat([dfresult, dfEmbarked], axis=1)

    return dfresult
```

Then apply the preprocessing:

```python
x_train = preprocessing(dftrain_raw)
y_train = dftrain_raw['Survived'].values
x_test = preprocessing(dftest_raw)
y_test = dftest_raw['Survived'].values
print("x_train.shape =", x_train.shape)
print("x_test.shape =", x_test.shape)
```

Output:

```
x_train.shape = (712, 15)
x_test.shape = (179, 15)
```

4. Defining the model with TensorFlow

The Keras API offers three ways to build a model: use Sequential to stack layers in order, use the functional API to build models of arbitrary structure, or subclass the Model base class for a fully custom model. Here we choose the simplest: the Sequential, layer-by-layer model.

```python
tf.keras.backend.clear_session()

model = models.Sequential()
model.add(layers.Dense(20, activation='relu', input_shape=(15,)))
model.add(layers.Dense(10, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
```

5. Training the model

There are usually three ways to train a model: the built-in fit method, the built-in train_on_batch method, or a custom training loop. Here we choose the most common and simplest: the built-in fit method.

```python
# A binary classification problem calls for the binary cross-entropy loss
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['AUC'])

history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=30,
                    validation_split=0.2)  # hold out part of the training data for validation
```

Output:

```
Epoch 1/30
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
```
```
9/9 [==============================] - 0s 30ms/step - loss: 4.3524 - auc: 0.4888 - val_loss: 3.0274 - val_auc: 0.5492
Epoch 2/30
9/9 [==============================] - 0s 6ms/step - loss: 2.7962 - auc: 0.4710 - val_loss: 1.8653 - val_auc: 0.4599
Epoch 3/30
9/9 [==============================] - 0s 6ms/step - loss: 1.6765 - auc: 0.4040 - val_loss: 1.2673 - val_auc: 0.4067
Epoch 4/30
9/9 [==============================] - 0s 7ms/step - loss: 1.1195 - auc: 0.3799 - val_loss: 0.9501 - val_auc: 0.4006
Epoch 5/30
9/9 [==============================] - 0s 6ms/step - loss: 0.8156 - auc: 0.4874 - val_loss: 0.7090 - val_auc: 0.5514
Epoch 6/30
9/9 [==============================] - 0s 5ms/step - loss: 0.6355 - auc: 0.6611 - val_loss: 0.6550 - val_auc: 0.6502
Epoch 7/30
9/9 [==============================] - 0s 6ms/step - loss: 0.6308 - auc: 0.7169 - val_loss: 0.6502 - val_auc: 0.6546
Epoch 8/30
9/9 [==============================] - 0s 6ms/step - loss: 0.6088 - auc: 0.7156 - val_loss: 0.6463 - val_auc: 0.6610
Epoch 9/30
9/9 [==============================] - 0s 6ms/step - loss: 0.6066 - auc: 0.7163 - val_loss: 0.6372 - val_auc: 0.6644
Epoch 10/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5964 - auc: 0.7253 - val_loss: 0.6283 - val_auc: 0.6646
Epoch 11/30
9/9 [==============================] - 0s 7ms/step - loss: 0.5876 - auc: 0.7326 - val_loss: 0.6253 - val_auc: 0.6717
Epoch 12/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5827 - auc: 0.7409 - val_loss: 0.6195 - val_auc: 0.6708
Epoch 13/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5769 - auc: 0.7489 - val_loss: 0.6170 - val_auc: 0.6762
Epoch 14/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5719 - auc: 0.7555 - val_loss: 0.6156 - val_auc: 0.6803
Epoch 15/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5662 - auc: 0.7629 - val_loss: 0.6119 - val_auc: 0.6826
Epoch 16/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5627 - auc: 0.7694 - val_loss: 0.6107 - val_auc: 0.6892
Epoch 17/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5586 - auc: 0.7753 - val_loss: 0.6084 - val_auc: 0.6927
Epoch 18/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5539 - auc: 0.7837 - val_loss: 0.6051 - val_auc: 0.6983
Epoch 19/30
9/9 [==============================] - 0s 7ms/step - loss: 0.5479 - auc: 0.7930 - val_loss: 0.6011 - val_auc: 0.7056
Epoch 20/30
9/9 [==============================] - 0s 9ms/step - loss: 0.5451 - auc: 0.7986 - val_loss: 0.5996 - val_auc: 0.7128
Epoch 21/30
9/9 [==============================] - 0s 7ms/step - loss: 0.5406 - auc: 0.8047 - val_loss: 0.5962 - val_auc: 0.7192
Epoch 22/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5357 - auc: 0.8123 - val_loss: 0.5948 - val_auc: 0.7212
Epoch 23/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5295 - auc: 0.8181 - val_loss: 0.5928 - val_auc: 0.7267
Epoch 24/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5275 - auc: 0.8223 - val_loss: 0.5910 - val_auc: 0.7296
Epoch 25/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5263 - auc: 0.8227 - val_loss: 0.5884 - val_auc: 0.7325
Epoch 26/30
9/9 [==============================] - 0s 7ms/step - loss: 0.5199 - auc: 0.8313 - val_loss: 0.5860 - val_auc: 0.7356
Epoch 27/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5145 - auc: 0.8356 - val_loss: 0.5835 - val_auc: 0.7386
Epoch 28/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5138 - auc: 0.8383 - val_loss: 0.5829 - val_auc: 0.7402
Epoch 29/30
9/9 [==============================] - 0s 7ms/step - loss: 0.5092 - auc: 0.8405 - val_loss: 0.5806 - val_auc: 0.7416
Epoch 30/30
9/9 [==============================] - 0s 6ms/step - loss: 0.5082 - auc: 0.8394 - val_loss: 0.5792 - val_auc: 0.7424
```

6. Evaluating the model

We first look at how the model performs on the training and validation sets.

```python
%matplotlib inline
%config InlineBackend.figure_format = 'svg'

import matplotlib.pyplot as plt

def plot_metric(history, metric):
    train_metrics = history.history[metric]
    val_metrics = history.history['val_' + metric]
    epochs = range(1, len(train_metrics) + 1)
    plt.plot(epochs, train_metrics, 'bo--')
    plt.plot(epochs, val_metrics, 'ro-')
    plt.title('Training and validation ' + metric)
    plt.xlabel('Epochs')
    plt.ylabel(metric)
    plt.legend(['train_' + metric, 'val_' + metric])
    plt.show()

plot_metric(history, 'loss')
plot_metric(history, 'auc')
```

Then look at the performance on the test set:

```python
model.evaluate(x=x_test, y=y_test)
```

Output:

```
6/6 [==============================] - 0s 2ms/step - loss: 0.5286 - auc: 0.7869
[0.5286471247673035, 0.786877453327179]
```
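The model-definition step above mentions the functional API as an alternative to Sequential. For reference, here is a sketch of the same network expressed in that style (not part of the original post; the input width 15 matches x_train.shape):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# The same 15-feature binary classifier, built with the functional API.
inputs = tf.keras.Input(shape=(15,))
x = layers.Dense(20, activation='relu')(inputs)
x = layers.Dense(10, activation='relu')(x)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = models.Model(inputs=inputs, outputs=outputs)

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.AUC()])
print(model.output_shape)
```

For a plain stack of Dense layers the functional API buys nothing over Sequential, but it becomes necessary once a model has multiple inputs, multiple outputs, or skip connections.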
7. Using the model

(1) Predicted probabilities

```python
model.predict(x_test[0:10])
```

Output (truncated in the original):

```
array([[0.34822357],
       [0.4793241 ],
       [0.43986577],
       [0.50268507],
       [0.29079646],
       [0.34384924],
       ...
```

(2) Predicted classes

```python
model.predict_classes(x_test[0:10])
```

Output:

```
WARNING:tensorflow:From <ipython-input-36-a161a0a6b51e>:1: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01.
Instructions for updating:
Please use instead:
* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).
* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation).
array([[0],
       [0],
       [1],
       ...
```

8. Saving the model
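The original post breaks off at the save-model section. As a minimal sketch of what it would cover: a Keras model can be saved with `model.save` and restored with `models.load_model` (HDF5 format here; the small stand-in model mirrors the tutorial's architecture so the snippet is self-contained):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import models, layers

# Stand-in model with the same 15-feature input as the tutorial's.
model = models.Sequential([
    tf.keras.Input(shape=(15,)),
    layers.Dense(20, activation='relu'),
    layers.Dense(10, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

# Save to an HDF5 file, then reload and check that predictions match.
model.save('tf_model.h5')
reloaded = models.load_model('tf_model.h5')

x = np.random.rand(4, 15).astype('float32')
same = np.allclose(model.predict(x), reloaded.predict(x))
print(same)
```

TensorFlow 2.x also supports the SavedModel format, which additionally stores the computation graph for serving outside Python.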