Improving performance by replacing Softmax

As a matter of fact, it would be good to implement a kind of “sampled softmax” or NCE-like loss. If we don’t lose too much in accuracy, it would definitely make things much quicker to train.

I don’t think it will have much effect on GPU. Depends on the demand for CPU training.

oh really? On TF, it makes a huge difference.

That blog post doesn’t have any numeric speed comparisons that I could see. My guess is that for medium vocab/large models on GPU we’re bound by LSTM computation not softmax.

However, I agree that we should implement something, potentially the technique described in Jean et al., 2014. It is a nice feature to have.
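To make the idea concrete, here is a minimal, dependency-free sketch of a sampled softmax: the normalizer is computed over the target class plus a small random set of negative classes instead of the full vocabulary. This is illustrative only, not OpenNMT’s implementation; the function names and the uniform negative sampling are my own simplifications (Jean et al. use importance sampling over vocabulary shards).

```python
import math
import random

def full_softmax_loss(logits, target):
    """Standard -log softmax(logits)[target] over the full vocabulary."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return lse - logits[target]

def sampled_softmax_loss(logits, target, num_samples, rng=random):
    """Approximate the loss by normalizing over the target plus
    `num_samples` uniformly sampled negative classes (sketch only;
    real implementations correct for the sampling distribution)."""
    vocab = len(logits)
    negatives = rng.sample([i for i in range(vocab) if i != target],
                           num_samples)
    support = [target] + negatives
    m = max(logits[i] for i in support)
    lse = m + math.log(sum(math.exp(logits[i] - m) for i in support))
    return lse - logits[target]
```

With `num_samples` equal to the full set of negatives the two losses coincide; with fewer samples the restricted normalizer can only shrink, so the sampled loss lower-bounds the full one while costing O(num_samples) instead of O(vocab) per token.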

additional exchanges from gitter:


btw @vince62s there is now an NCE module in DPNN
we could in theory fork that code…


@srush wrt NCE perf, we did not test the seq2seq model from the link in the forum post, however we tested the same thing for an LSTM RNNLM. With a vocab size of 200k there is a factor of x5 to x8 in speed for a medium-size model (I think 2x850) when using a sampled softmax vs a full softmax. At 50k vocab it might be a little less, but I think there is still significant gain to get.


For the speed of softmax on a very large vocab, cuDNN will bring us a huge improvement - see


[…] it would be great if we could match these numbers
actually that paper even has simpler code (

A first implementation (not totally clean) gives the following performance results for the 4x1000, 100K-vocabulary model from this page.


which means a speedup of x4.5 on the generator, and an end-to-end speedup of x1.6.
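Those two numbers are consistent with each other if the generator accounts for roughly half of total runtime; a quick Amdahl’s-law check (the ~48% fraction below is derived from the reported figures, not measured):

```python
def end_to_end_speedup(fraction, local_speedup):
    """Amdahl's law: overall speedup when `fraction` of runtime is
    accelerated by `local_speedup` and the rest is unchanged."""
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

# If the generator (output softmax) is ~48% of total training time,
# a x4.5 local speedup gives roughly the reported x1.6 end to end.
```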

I think maybe we could try something like Max-Margin and just remove the SoftMax and ClassNLL criterion, if it does not hurt performance.
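For reference, the max-margin alternative mentioned above could look like a Crammer-Singer style multiclass hinge loss: it needs no normalization over the vocabulary at all, so the expensive log-sum-exp disappears (though a naive version still touches every class score). A minimal sketch, with names of my own choosing:

```python
def multiclass_hinge_loss(scores, target, margin=1.0):
    """Max-margin objective: penalize every non-target class whose
    score comes within `margin` of the target's score. No softmax
    normalization is required (illustrative sketch)."""
    return sum(max(0.0, margin - scores[target] + s)
               for i, s in enumerate(scores) if i != target)
```

Note this trades the probabilistic interpretation of softmax + NLL for raw score margins, which is why the accuracy caveat in the comment above matters.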


for reference here while looking at the pytorch forum …

What do you think about “Breaking the Softmax Bottleneck: A High-Rank RNN Language Model”?
The researchers added discrete latent variables and replaced the softmax with a Mixture of Softmaxes (MoS).
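The MoS idea is simply a convex combination of K ordinary softmax distributions, which lets the implied log-probability matrix be high-rank where a single softmax is rank-limited by the hidden size. A dependency-free sketch (the component logits are taken as given here; in the paper each comes from its own projection of the hidden state):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def mixture_of_softmaxes(component_logits, mixture_logits):
    """MoS output distribution: softmax mixture weights over K
    components, each component itself a softmax over the vocab
    (illustrative sketch of the technique, not the paper's code)."""
    weights = softmax(mixture_logits)            # prior over K components
    components = [softmax(l) for l in component_logits]
    vocab = len(components[0])
    return [sum(w * comp[v] for w, comp in zip(weights, components))
            for v in range(vocab)]
```

With K=1 this reduces exactly to a plain softmax; the extra expressiveness only appears for K >= 2.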