Training with the same parameters and data gives different results!

tensorflow

(Lockder) #1

I’m trying to build a sequence classifier.
Different sentences are classified into the numbers 1 - 5.
But with the same data, model, config, and number of steps, I get different results every time I run an eval.
The accuracy varies between roughly 59% and 74%.

I tried changing the learning rate, adding exponential decay, adding L1/L2 regularization, and increasing the dropout on the encoder. Nothing seems to help.
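For reference, exponential decay multiplies the learning rate by a fixed factor every `decay_steps` steps. A minimal sketch of the formula (the same semantics as TensorFlow's `tf.keras.optimizers.schedules.ExponentialDecay`, written here in plain Python; the parameter values are just illustrative, not the ones used in this thread):

```python
def exponential_decay(initial_lr, step, decay_steps, decay_rate):
    # lr(step) = initial_lr * decay_rate ** (step / decay_steps)
    return initial_lr * decay_rate ** (step / decay_steps)

# At step 0 the learning rate is untouched; after decay_steps steps
# it has been multiplied by decay_rate once.
lr_start = exponential_decay(0.001, 0, 1000, 0.9)     # 0.001
lr_later = exponential_decay(0.001, 1000, 1000, 0.9)  # 0.0009
```

Note that none of these knobs address run-to-run variance by themselves; they only change how a single run converges.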

Does anyone have an idea why there is so much difference when everything is the same?


(Guillaume Klein) #2

The experiments are not the same: the order of the data and the dropout masks are random. You can control that by passing a fixed --seed value on the command line.
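To illustrate why the seed matters: fixing it pins down every source of randomness, so two runs draw the same shuffling order and dropout masks. A minimal sketch (the `tf.random.set_seed` line is what matters for TensorFlow ops; the exact CLI flag syntax depends on your OpenNMT-tf version):

```python
import random

import numpy as np
# import tensorflow as tf  # uncomment inside a real training script

def set_seed(seed):
    # Fix every source of randomness the training run touches.
    random.seed(seed)
    np.random.seed(seed)
    # tf.random.set_seed(seed)  # dropout masks, shuffling, init

set_seed(42)
order_a = np.random.permutation(10)  # simulated data-shuffling order
set_seed(42)
order_b = np.random.permutation(10)
assert (order_a == order_b).all()  # same seed -> identical order
```

Without the seed, `order_a` and `order_b` would almost certainly differ, which is exactly the run-to-run variation described above.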

That sounds like a lot. What optimizer and learning rate are you using? Is the training loss also comparatively worse?


(Lockder) #3

I tried RNMTPlus and SelfAttention encoders, with Adam as the optimizer using default values.
The loss usually ends up between 0.1 and 0.01. I didn’t know the seed could make so much difference; I will try it. The loss usually starts around 3, drops to 0.5 after 500 steps, and after 1,000 to 1,500 steps starts to jump between 0.1 and 0.01.

My dropout is usually around 0.3. I will try setting a seed. Thanks!


(Lockder) #4

I found the issue. The linguist changed the classification classes, merging 3 classes into 1 without telling me, and the test set had the updated labels but the training set did not. That is why the results varied so much: when the model predicted "wrong" on the train set, it did better on the test set :slight_smile: I fixed the training set and now it’s fine.
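A cheap sanity check that would have caught this: compare the label sets of the two splits before training. A minimal sketch (the function name and the example label lists are hypothetical, not from the actual dataset):

```python
def check_label_consistency(train_labels, test_labels):
    """Return the labels that appear in one split but not the other."""
    train_set, test_set = set(train_labels), set(test_labels)
    return {
        "only_in_train": sorted(train_set - test_set),
        "only_in_test": sorted(test_set - train_set),
    }

# Simulating the bug above: classes 4 and 5 were merged away in the
# test set, but the training set still uses all five.
diff = check_label_consistency([1, 2, 3, 4, 5], [1, 2, 3])
# diff == {"only_in_train": [4, 5], "only_in_test": []}
```

Any non-empty entry in the result means the splits disagree on the label space and eval numbers cannot be trusted.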