
python – Multidimensional RNN on Tensorflow

Published: 2020-12-20 13:17:52 | Category: Python | Source: web
I'm trying to implement a 2D RNN in the context of human action classification (joints on one axis of the RNN and time on the other) and have been looking for something in Tensorflow that could do the job.

I've heard of GridLSTMCell (contributed internally and externally), but couldn't get it working with dynamic_rnn (it accepts a 3-D tensor, but I would have to supply a 4-D tensor [batch_size, max_time, num_joints, n_features]).

In addition, ndlstm is a (somewhat little-known) part of the TF library that basically uses a normal 1-D LSTM and transposes the output so it can be fed into a second 1-D LSTM. This approach is also advocated here, but I'm not entirely sure whether it is correct / whether the idea is the same as what I need.
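The "transpose between two 1-D passes" idea can be sketched without any TF contrib code. The following is a toy numpy illustration of a separable 2-D RNN, not the actual ndlstm implementation; the function names (run_1d_rnn, separable_2d_rnn) and the tanh cell are made up for illustration:

```python
import numpy as np

def run_1d_rnn(seq, h0, Wx, Wh):
    """Minimal 1-D RNN scan (plain tanh cell) over the first axis of seq."""
    h = h0
    outs = []
    for x in seq:                      # seq: (steps, features)
        h = np.tanh(x @ Wx + h @ Wh)
        outs.append(h)
    return np.stack(outs)              # (steps, hidden)

def separable_2d_rnn(image, hidden, rng):
    """Run a 1-D RNN along each row, transpose, then run along each column."""
    height, width, depth = image.shape
    Wx1 = rng.standard_normal((depth, hidden)) * 0.1
    Wh1 = rng.standard_normal((hidden, hidden)) * 0.1
    rows = np.stack([run_1d_rnn(image[r], np.zeros(hidden), Wx1, Wh1)
                     for r in range(height)])      # (height, width, hidden)
    rows = rows.transpose(1, 0, 2)                 # swap axes: now scan columns
    Wx2 = rng.standard_normal((hidden, hidden)) * 0.1
    Wh2 = rng.standard_normal((hidden, hidden)) * 0.1
    cols = np.stack([run_1d_rnn(rows[c], np.zeros(hidden), Wx2, Wh2)
                     for c in range(width)])       # (width, height, hidden)
    return cols.transpose(1, 0, 2)                 # back to (height, width, hidden)

rng = np.random.default_rng(0)
out = separable_2d_rnn(rng.standard_normal((4, 5, 3)), hidden=8, rng=rng)
print(out.shape)  # (4, 5, 8)
```

The key point is the axis transpose between the two scans: the second 1-D LSTM sees the first one's hidden states with the spatial axes swapped, which is how two cheap 1-D recurrences approximate a full 2-D recurrence.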

Any help would be appreciated.

Solution

I have successfully tried using GridLSTM and ndlstm in tensorflow.

I'm not sure how to convert the 4D tensor into the 3D one accepted by dynamic_rnn, but I think this might give you an idea of how to use GridLSTM:
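One common way to get from the 4-D activity tensor to the 3-D layout dynamic_rnn expects is simply to fold the joint axis into the feature axis. A minimal numpy sketch (the sizes here are made up for illustration, not from the question):

```python
import numpy as np

# Hypothetical sizes for the 4-D tensor [batch, time, joints, features].
batch_size, max_time, num_joints, n_features = 2, 10, 18, 3

x = np.zeros((batch_size, max_time, num_joints, n_features), dtype=np.float32)

# Merge the joint and feature axes so the [batch, time, features]
# layout expected by dynamic_rnn is satisfied.
x3d = x.reshape(batch_size, max_time, num_joints * n_features)
print(x3d.shape)  # (2, 10, 54)
```

The same reshape works on a TF tensor via tf.reshape; the trade-off is that the RNN then no longer treats the joint axis as a separate recurrence dimension, which is exactly what a grid/2-D RNN is meant to preserve.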

import tensorflow as tf
from tensorflow.contrib import grid_rnn


def reshape_to_rnn_dims(tensor, num_time_steps):
    return tf.unstack(tensor, num_time_steps, 1)


class GridLSTMCellTest(tf.test.TestCase):
    def setUp(self):
        self.num_features = 1
        self.time_steps = 1
        self.batch_size = 1
        tf.reset_default_graph()
        self.input_layer = tf.placeholder(tf.float32, [self.batch_size, self.time_steps, self.num_features])
        self.cell = grid_rnn.Grid1LSTMCell(num_units=8)

    def test_simple_grid_rnn(self):
        self.input_layer = reshape_to_rnn_dims(self.input_layer, self.time_steps)
        tf.nn.static_rnn(self.cell, self.input_layer, dtype=tf.float32)

    def test_dynamic_grid_rnn(self):
        tf.nn.dynamic_rnn(self.cell, self.input_layer, dtype=tf.float32)


class BidirectionalGridRNNCellTest(tf.test.TestCase):
    def setUp(self):
        self.num_features = 1
        self.time_steps = 1
        self.batch_size = 1
        tf.reset_default_graph()
        self.input_layer = tf.placeholder(tf.float32, [self.batch_size, self.time_steps, self.num_features])
        self.cell_fw = grid_rnn.Grid1LSTMCell(num_units=8)
        self.cell_bw = grid_rnn.Grid1LSTMCell(num_units=8)

    def test_simple_bidirectional_grid_rnn(self):
        self.input_layer = reshape_to_rnn_dims(self.input_layer, self.time_steps)
        tf.nn.static_bidirectional_rnn(self.cell_fw, self.cell_bw, self.input_layer, dtype=tf.float32)

    def test_bidirectional_dynamic_grid_rnn(self):
        tf.nn.bidirectional_dynamic_rnn(self.cell_fw, self.cell_bw, self.input_layer, dtype=tf.float32)

if __name__ == '__main__':
    tf.test.main()

Apparently, ndlstms accept 4D tensors with shape (batch_size, height, width, depth). I have these tests (one involving tensorflow's ctc_loss; I also found an example of it being used together with conv2d):

import tensorflow as tf
from tensorflow.contrib.ndlstm.python import lstm2d


class MultidimensionalRNNTest(tf.test.TestCase):
    def setUp(self):
        self.num_classes = 26
        self.num_features = 32
        self.time_steps = 64
        self.batch_size = 1  # Can't be dynamic, apparently.
        self.num_channels = 1
        self.num_filters = 16
        self.input_layer = tf.placeholder(tf.float32, [self.batch_size, self.time_steps, self.num_features, self.num_channels])
        self.labels = tf.sparse_placeholder(tf.int32)

    def test_simple_mdrnn(self):
        net = lstm2d.separable_lstm(self.input_layer,self.num_filters)

    def test_image_to_sequence(self):
        net = lstm2d.separable_lstm(self.input_layer,self.num_filters)
        net = lstm2d.images_to_sequence(net)

    def test_convert_to_ctc_dims(self):
        net = lstm2d.separable_lstm(self.input_layer,self.num_filters)
        net = lstm2d.images_to_sequence(net)

        net = tf.reshape(net, [-1, self.num_filters])

        W = tf.Variable(tf.truncated_normal([self.num_filters, self.num_classes], stddev=0.1, dtype=tf.float32), name='W')
        b = tf.Variable(tf.constant(0., dtype=tf.float32, shape=[self.num_classes]), name='b')

        net = tf.matmul(net, W) + b
        net = tf.reshape(net, [self.batch_size, -1, self.num_classes])

        net = tf.transpose(net, (1, 0, 2))

        loss = tf.nn.ctc_loss(inputs=net, labels=self.labels, sequence_length=[2])

        print(net)


if __name__ == '__main__':
    tf.test.main()
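The shape gymnastics in test_convert_to_ctc_dims can be followed in plain numpy. The sizes below mirror the test's setUp, and the final transpose matches ctc_loss's default time-major layout [max_time, batch_size, num_classes]; this is a sketch of the shape flow only, not the contrib code itself:

```python
import numpy as np

batch_size, time_steps, num_filters, num_classes = 1, 64, 16, 26
rng = np.random.default_rng(0)

# Per-step features after the 2-D LSTM stage, flattened to one row per step.
net = rng.standard_normal((batch_size * time_steps, num_filters))

# Per-step linear projection to class logits.
W = rng.standard_normal((num_filters, num_classes)) * 0.1
b = np.zeros(num_classes)
net = net @ W + b                              # (batch*time, classes)

# Restore the batch axis, then go time-major for ctc_loss.
net = net.reshape(batch_size, -1, num_classes)  # (batch, time, classes)
net = net.transpose(1, 0, 2)                    # (time, batch, classes)
print(net.shape)  # (64, 1, 26)
```

If you prefer batch-major logits, ctc_loss also takes time_major=False, in which case the final transpose can be dropped.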

(Editor: 李大同)
