
GraphSAGE Code Analysis (3) - aggregators.py


1. class MeanAggregator(Layer):

This class implements aggregation via the mean of neighbor vectors followed by a matmul and a non-linearity (see the class docstring below). Its main methods:

1. __init__()

__init__() retrieves and initializes the member variables dropout, bias (default False), act (default ReLU), concat (default False), input_dim, output_dim, and name (used as part of the variable scope).

The weight matrix vars['self_weights'] for node v and the weight matrix vars['neigh_weights'] for the neighbor mean u are initialized with the glorot() method.

vars['bias'] is initialized as a zero vector (see zeros(shape) in inits.py, quoted after this list).

If logging is True, the member function _log_vars() of class Layer() in layers.py is called to generate histograms of each variable in vars.
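For reference, zeros() in inits.py is essentially a one-liner (a sketch of the repo code):

def zeros(shape, name=None):
    """All zeros."""
    initial = tf.zeros(shape, dtype=tf.float32)
    return tf.Variable(initial, name=name)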

glorot()

glorot() is defined in inits.py and is used for weight initialization (from .inits import glorot).

It implements uniform-distribution initialization, also known as Xavier uniform initialization: parameters are drawn from a uniform distribution over [-limit, limit], where limit = sqrt(6 / (fan_in + fan_out)); fan_in is the number of input units of the weight tensor and fan_out the number of output units. The function returns a Variable of shape [fan_in, fan_out].

def glorot(shape, name=None):
    """Glorot & Bengio (AISTATS 2010) init."""
    init_range = np.sqrt(6.0 / (shape[0] + shape[1]))
    initial = tf.random_uniform(shape, minval=-init_range, maxval=init_range, dtype=tf.float32)
    return tf.Variable(initial, name=name)
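As a quick illustration (hypothetical shape, not from the repo): glorot([64, 32]) returns a 64x32 tf.Variable with entries drawn uniformly from [-0.25, 0.25], since sqrt(6 / (64 + 32)) = 0.25:

W = glorot([64, 32], name='demo_weights')  # fan_in=64, fan_out=32, limit = 0.25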

2. _call(inputs)

The _call(inputs) function in class MeanAggregator(Layer) overrides the _call(inputs) method of the parent class Layer(object).

It implements the iterative update formula at the top of this series. In terms of the code below, with concat=False:

h_v = act(W_self * h_v + W_neigh * mean({h_u, u in N(v)}))

and with concat=True the two terms are concatenated instead of summed.

In class Layer(object) defined in layers.py, the special method __call__(inputs) executes outputs = self._call(inputs), i.e. this is where the _call(inputs) method of the subclass MeanAggregator(Layer) is invoked.
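A minimal sketch of that dispatch (logging and setup details omitted; not the full class):

class Layer(object):
    def _call(self, inputs):
        # Identity by default; subclasses such as MeanAggregator override this.
        return inputs

    def __call__(self, inputs):
        with tf.name_scope(self.name):
            outputs = self._call(inputs)  # dynamic dispatch to the subclass's _call
            return outputs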

tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

Note: the non-zero output elements are scaled to 1/keep_prob times their original value, so that the expected sum is unchanged.
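For example (a minimal sketch; which elements are zeroed is random):

x = tf.ones([4])
y = tf.nn.dropout(x, keep_prob=0.5)  # surviving entries are scaled to 1 / 0.5 = 2.0
# One possible result of sess.run(y): [2., 0., 2., 2.]
# Each entry survives with probability 0.5 and is scaled by 2,
# so the expected sum stays 4 = sum(x).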

tf.add_n(inputs, name=None)

Adds all input tensors element-wise.

Args:
  inputs: A list of Tensor or IndexedSlices objects, each with the same shape and type.
  name: A name for the operation (optional).

Returns:
  A Tensor of the same shape and type as the elements of inputs.

Raises:
  ValueError: If inputs don't all have the same shape and dtype or the shape cannot be inferred.
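A one-line sanity check:

a = tf.constant([1., 2.])
b = tf.constant([3., 4.])
print(tf.Session().run(tf.add_n([a, b])))  # [4. 6.]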

output = tf.concat([from_self, from_neighs], axis=1)

Note that after this concat the output dimension is twice what it was before (2 * output_dim).
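A quick shape check (hypothetical sizes):

from_self = tf.zeros([5, 8])    # [nodes, output_dim]
from_neighs = tf.zeros([5, 8])  # [nodes, output_dim]
output = tf.concat([from_self, from_neighs], axis=1)
print(output.get_shape())       # (5, 16): the feature dimension doubles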

3. class MeanAggregator(Layer) code

class MeanAggregator(Layer):
    """
    Aggregates via mean followed by matmul and non-linearity.
    """

    def __init__(self, input_dim, output_dim, neigh_input_dim=None,
            dropout=0., bias=False, act=tf.nn.relu,
            name=None, concat=False, **kwargs):
        super(MeanAggregator, self).__init__(**kwargs)

        self.dropout = dropout
        self.bias = bias
        self.act = act
        self.concat = concat

        if neigh_input_dim is None:
            neigh_input_dim = input_dim

        if name is not None:
            name = '/' + name
        else:
            name = ''

        with tf.variable_scope(self.name + name + '_vars'):
            self.vars['neigh_weights'] = glorot([neigh_input_dim, output_dim],
                                                        name='neigh_weights')
            self.vars['self_weights'] = glorot([input_dim, output_dim],
                                                        name='self_weights')
            if self.bias:
                self.vars['bias'] = zeros([self.output_dim], name='bias')

        if self.logging:
            self._log_vars()

        self.input_dim = input_dim
        self.output_dim = output_dim

    def _call(self, inputs):
        self_vecs, neigh_vecs = inputs

        neigh_vecs = tf.nn.dropout(neigh_vecs, 1-self.dropout)
        self_vecs = tf.nn.dropout(self_vecs, 1-self.dropout)
        neigh_means = tf.reduce_mean(neigh_vecs, axis=1)

        # [nodes] x [out_dim]
        from_neighs = tf.matmul(neigh_means, self.vars['neigh_weights'])

        from_self = tf.matmul(self_vecs, self.vars['self_weights'])

        if not self.concat:
            output = tf.add_n([from_self, from_neighs])
        else:
            output = tf.concat([from_self, from_neighs], axis=1)

        # bias
        if self.bias:
            output += self.vars['bias']

        return self.act(output)
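A hypothetical usage sketch (the sizes are assumptions for illustration, not from the repo; the tuple argument matches _call's unpacking):

agg = MeanAggregator(input_dim=64, output_dim=32)
self_vecs = tf.zeros([128, 64])       # [batch, input_dim]
neigh_vecs = tf.zeros([128, 10, 64])  # [batch, num_samples, neigh_input_dim]
h = agg((self_vecs, neigh_vecs))      # __call__ -> _call; h has shape [128, 32]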

2. class GCNAggregator(Layer)

Here __init__() is basically the same as in MeanAggregator, except that a single weight matrix vars['weights'] is created instead of separate self and neighbor weights; the implementation of _call() differs slightly, as shown below.

def _call(self, inputs):
    self_vecs, neigh_vecs = inputs

    neigh_vecs = tf.nn.dropout(neigh_vecs, 1-self.dropout)
    self_vecs = tf.nn.dropout(self_vecs, 1-self.dropout)
    means = tf.reduce_mean(tf.concat([neigh_vecs,
        tf.expand_dims(self_vecs, axis=1)], axis=1), axis=1)

    # [nodes] x [out_dim]
    output = tf.matmul(means, self.vars['weights'])

    # bias
    if self.bias:
        output += self.vars['bias']

    return self.act(output)
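A shape walk-through of this _call (hypothetical sizes, assuming neigh_input_dim == input_dim):

# self_vecs:  [batch, input_dim]              e.g. [128, 64]
# neigh_vecs: [batch, num_samples, input_dim] e.g. [128, 10, 64]
# tf.expand_dims(self_vecs, axis=1)        -> [128, 1, 64]
# tf.concat([...], axis=1)                 -> [128, 11, 64]  (self joins its own neighborhood)
# tf.reduce_mean(..., axis=1)              -> [128, 64]
# tf.matmul(means, self.vars['weights'])   -> [128, output_dim]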

When computing means:

1. self_vecs is first turned into a column (tf.expand_dims(self_vecs, axis=1));

2. now that its number of rows matches that of neigh_vecs, the two are concatenated, which amounts to appending self_vecs, transposed into a column, at the end of the original neigh_vecs matrix;

3. finally, the mean of each row of the resulting matrix is taken, which gives means.

means is then matrix-multiplied with the weight matrix vars['weights'], vars['bias'] is added, and the result is passed through the activation function (ReLU).

A simple example follows (the matrix multiplication by W is omitted):

import tensorflow as tf

neigh_vecs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
self_vecs = [2, 3, 4]

means = tf.reduce_mean(tf.concat([neigh_vecs,
                                  tf.expand_dims(self_vecs, axis=1)], axis=1), axis=1)

print(tf.shape(self_vecs))

print(tf.expand_dims(self_vecs, axis=0))
# Tensor("ExpandDims_1:0", shape=(1, 3), dtype=int32)

print(tf.expand_dims(self_vecs, axis=1))
# Tensor("ExpandDims_2:0", shape=(3, 1), dtype=int32)

sess = tf.Session()
print(sess.run(tf.expand_dims(self_vecs, axis=1)))
# [[2]
#  [3]
#  [4]]

print(sess.run(tf.concat([neigh_vecs,
                          tf.expand_dims(self_vecs, axis=1)], axis=1)))
# [[1 2 3 2]
#  [4 5 6 3]
#  [7 8 9 4]]

print(means)
# Tensor("Mean:0", shape=(3,), dtype=int32)

print(sess.run(tf.reduce_mean(tf.concat([neigh_vecs,
                                         tf.expand_dims(self_vecs, axis=1)], axis=1), axis=1)))
# [2 4 7]

# [[1 2 3 2]   -> 8 // 4  = 2
#  [4 5 6 3]   -> 18 // 4 = 4
#  [7 8 9 4]]  -> 28 // 4 = 7   (integer dtype, so the mean truncates)

bias = [1]
output = means + bias
print(sess.run(output))
# [3 5 8]
# [2 + 1, 4 + 1, 7 + 1] = [3, 5, 8]
