WordEmbedderEncoder

I’m working on a text inputter that encodes words to generate new features, like CharRNNEmbedder but for words.

Right now I’m getting a crash from TensorFlow, and I’m stuck trying to fix it since I don’t understand why it’s crashing. It would be really helpful if someone could give me some insight.

code:

    def transform(self, inputs, mode):
        timesteps = tf.shape(inputs)[1]
        batch_size = tf.shape(inputs)[0]
        sequence_length = tf.fill([batch_size], timesteps)
        embds = super(WordEmbedderEncoder, self).transform(inputs, mode)

        outputs, encoder_state, _ = self.encoder.encode(
            inputs=embds, sequence_length=sequence_length, mode=mode)

        encoding = last_encoding_from_state(encoder_state)

        outputs = tf.reshape(
            encoding, [-1, timesteps, tf.cast(self.encoder.num_units * 2, tf.int32)])

        return outputs

the crash:

    InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 8192 values, but the requested shape requires a multiple of 576
    [[Node: seqclassifier/parallel_0/seqclassifier/encoder/inputter_3/Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _class=["loc:@optim…ad/Reshape"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](seqclassifier/parallel_0/seqclassifier/encoder/inputter_3/bi_layer_encoder/concat_3, seqclassifier/parallel_0/seqclassifier/encoder/inputter_3/Reshape/shape)]]
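
As a sanity check on the numbers (hypothetical shapes, but consistent with the message): a state of shape [128, 64] holds 128 * 64 = 8192 values, while a reshape to [-1, 9, 64] requires the total to be a multiple of 9 * 64 = 576, which 8192 is not. A minimal reproduction:

    # Hypothetical shapes matching the error: 128 * 64 = 8192 values cannot
    # be reshaped into [-1, 9, 64], which needs a multiple of 9 * 64 = 576.
    import tensorflow as tf

    state = tf.zeros([128, 64])
    outputs = tf.reshape(state, [-1, 9, 64])  # fails: 8192 % 576 != 0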

last_encoding_from_state returns the hidden state of the last timestep (of size [batch x depth]). Maybe you wanted to return outputs directly (of size [batch x time x depth])?
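
A minimal sketch of what that suggestion could look like, assuming the same WordEmbedderEncoder class as above:

    def transform(self, inputs, mode):
        timesteps = tf.shape(inputs)[1]
        batch_size = tf.shape(inputs)[0]
        sequence_length = tf.fill([batch_size], timesteps)
        embds = super(WordEmbedderEncoder, self).transform(inputs, mode)

        # outputs already has the time dimension:
        # [batch_size, max_sentence_length, depth], one vector per word.
        outputs, _, _ = self.encoder.encode(
            inputs=embds, sequence_length=sequence_length, mode=mode)

        return outputs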

I was trying to do something like CharRNNEmbedder, but for words, with a custom encoder.

The transform method inside CharRNNEmbedder looks like this:

  def transform(self, inputs, mode):
    flat_inputs = tf.reshape(inputs, [-1, tf.shape(inputs)[-1]])
    embeddings = self._embed(flat_inputs, mode)
    sequence_length = tf.count_nonzero(flat_inputs, axis=1)

    cell = build_cell(
        1,
        self.num_units,
        mode,
        dropout=self.dropout,
        cell_class=self.cell_class)
    rnn_outputs, rnn_state = tf.nn.dynamic_rnn(
        cell,
        embeddings,
        sequence_length=sequence_length,
        dtype=embeddings.dtype)

    if self.encoding == "average":
      encoding = tf.reduce_mean(rnn_outputs, axis=1)
    elif self.encoding == "last":
      encoding = last_encoding_from_state(rnn_state)

    outputs = tf.reshape(encoding, [-1, tf.shape(inputs)[1], self.num_units])
    return outputs

I’m doing the same thing but adapted for words, and it crashes when I do the reshape; I’m not sure why.
It only happens when I use encoder_state. If I return the outputs from the encoder, it works fine.
The encoder state is a (?, 64) tensor. I thought that with the reshape I could transform it to (?, ?, 64).

The only thing that may be happening is that the state from dynamic_rnn has different timesteps and dimensions than the output? But if that were the case, why does it work inside CharRNNEmbedder?

The difference is that CharRNNEmbedder works with an additional dimension: the word length.

In the code snippet above, embeddings is of shape:

[batch_size * max_sentence_length, max_word_length, embedding_size]

but your embds is of shape:

[batch_size, max_sentence_length, embedding_size]

Yes, but in the transform method from CharRNNEmbedder, the variable inputs is [batch_size, max_sentence_length], right?
That is why it’s reshaped to flat_inputs, to be able to get the embeddings.

I thought the WordEmbedder was expecting [batch_size, max_sentence_length, embedding_size],
and I thought the encoder state was [max_sentence_length, embedding_size]. Maybe I’m wrong?
So I guess all the sequences in the batch would have the same encoder state at the same position in the sequence.

The idea was to add more information to the words, like CharRNNEmbedder does, using the encoders.

I’m not sure if it’s possible or how to fix it.

inputs is [batch_size, max_sentence_length, max_word_length] and flat_inputs is [batch_size * max_sentence_length, max_word_length]. We only reshape it because we want to apply the RNN over the character sequences.

[batch_size, max_sentence_length, embedding_size] is the dimension returned by the WordEmbedder.

For a 3D input of shape [dim0, dim1, dim2], the last state of the RNN is of shape [dim0, num_units]: the time dimension is gone.
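
Tracing the shapes side by side makes the difference concrete (illustrative, with B = batch_size, S = max_sentence_length, W = max_word_length, U = num_units):

    # CharRNNEmbedder: the RNN runs over characters, once per word.
    #   inputs       [B, S, W]
    #   flat_inputs  [B * S, W]
    #   embeddings   [B * S, W, embedding_size]
    #   rnn_state    [B * S, U]       # one encoding per word
    #   reshape  ->  [B, S, U]        # exact: B * S * U values in and out
    #
    # WordEmbedderEncoder above: the RNN runs over words, once per sentence.
    #   embds          [B, S, embedding_size]
    #   encoder_state  [B, 2 * U]     # one encoding per sentence (bidirectional)
    #   reshape  ->    [-1, S, 2 * U] # fails: B * 2 * U values is generally
    #                                 # not a multiple of S * 2 * U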

I see, thanks. So the encoder state is [batch, rnn_size]; it doesn’t have the max_sentence_length dimension.
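
If the goal is still to attach the sentence-level encoding to every word, one option (a sketch, not from this thread) is to tile the state across the time dimension instead of reshaping it:

    # Illustrative alternative: broadcast the [batch, rnn_size] encoding to
    # every timestep with tf.tile, which copies values rather than
    # reinterpreting them the way tf.reshape does.
    encoding = last_encoding_from_state(encoder_state)  # [batch, rnn_size]
    encoding = tf.expand_dims(encoding, 1)              # [batch, 1, rnn_size]
    outputs = tf.tile(encoding, [1, timesteps, 1])      # [batch, timesteps, rnn_size]

The tiled tensor could then be concatenated with embds along the last axis to add the sentence context as extra word features.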