InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 8192 values, but the requested shape requires a multiple of 576
[[Node: seqclassifier/parallel_0/seqclassifier/encoder/inputter_3/Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _class=["loc:@optim…ad/Reshape"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](seqclassifier/parallel_0/seqclassifier/encoder/inputter_3/bi_layer_encoder/concat_3, seqclassifier/parallel_0/seqclassifier/encoder/inputter_3/Reshape/shape)]]
I'm doing the same thing but adapted for words, and it crashes when I do the reshape; I'm not sure why.
It only happens when I do it on encoder_state. If I return the outputs from the encoder instead, it works fine.
The encoder state returns a (?, 64) tensor. I thought that with the reshape I could transform it to (?, ?, 64).
The only thing that may be happening is that the state from dynamic_rnn has different time steps and dimensions than the output. But if that were the case, why does it work inside CharRNNEmbedder?
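The element-count mismatch in the error can be reproduced with a plain numpy sketch. The sizes below are assumptions chosen to match the message: a state with 8192 values (e.g. batch_size 128 and 64 units) cannot be reshaped into any shape whose trailing dimension is 576, because 8192 is not a multiple of 576.

```python
import numpy as np

# Hypothetical sizes matching the error message: 128 * 64 = 8192 values.
state = np.zeros((128, 64))

# 8192 % 576 != 0, so this reshape is impossible, whatever the -1 resolves to.
try:
    state.reshape((-1, 576))
except ValueError as e:
    print("reshape failed:", e)
```

This is why reshaping the 2-D encoder state to (?, ?, 64) cannot work in general: the state simply does not contain a time dimension to recover.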
Yes, but in the transform method of CharRNNEmbedder, the variable inputs is, I guess, [batch_size, max_sentence_length], right?
That is why it's reshaped to flat_inputs, to be able to get the embeddings.
I thought WordEmbedder expects [batch_size, max_sentence_length, embedding_size],
and I thought the encoder state was [max_sentence_length, embedding_size]. Maybe I'm wrong?
So I guess all the sequences in the batch would have the same encoder state at the same position of the sequence.
The idea was to add more information to the words, like CharRNNEmbedder does, using the encoders.
inputs is [batch_size, max_sentence_length, max_word_length] and flat_inputs is [batch_size * max_sentence_length, max_word_length]. We only reshape it because we want to apply the RNN over the character sequences.
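The reshape described above can be sketched in numpy (the sizes here are arbitrary example values, not anything from the actual model):

```python
import numpy as np

# Example sizes, assumed for illustration.
batch_size, max_sentence_length, max_word_length = 2, 3, 4
inputs = np.zeros((batch_size, max_sentence_length, max_word_length))

# Flatten so the character RNN sees one character sequence per row:
# [batch_size * max_sentence_length, max_word_length].
flat_inputs = inputs.reshape((-1, max_word_length))
print(flat_inputs.shape)  # (6, 4)
```

Each row of flat_inputs is one word's character sequence, so a single dynamic RNN pass can process every word of every sentence in the batch at once.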
[batch_size, max_sentence_length, embedding_size] is the dimension returned by the WordEmbedder.
For a 3D input of dimension [dim0, dim1, dim2], the last state of the RNN is of shape [dim0, dim2].
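A minimal vanilla-RNN loop in numpy makes this shape concrete. Note this is a sketch, not the dynamic_rnn implementation; the state size is set equal to dim2 here for simplicity (in general the last state is [dim0, num_units]):

```python
import numpy as np

dim0, dim1, dim2 = 2, 5, 4                # batch, time, features (example values)
x = np.random.randn(dim0, dim1, dim2)
W = np.random.randn(dim2, dim2)           # input-to-state weights
U = np.random.randn(dim2, dim2)           # state-to-state weights

# The state is carried per batch entry; the time dimension is consumed
# by the loop, so it never appears in the state's shape.
state = np.zeros((dim0, dim2))
for t in range(dim1):
    state = np.tanh(x[:, t, :] @ W + state @ U)

print(state.shape)  # (2, 4), i.e. [dim0, dim2]
```

This is why the encoder state is (?, 64) while the encoder outputs are (?, ?, 64): the outputs keep one vector per time step, but the final state is a single vector per sequence.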