Negative sampling

Negative sampling started out as a hack to speed up training and is now a well-accepted practice. The key idea here is that in addition to training your model on what the correct answer might be, why not also show it a few examples of wrong answers?

In particular, negative sampling speeds up training by reducing the number of model updates required. Instead of updating the output weights for every single wrong word in the vocabulary, we sample a small number of them, usually between 5 and 25, and train the model against only those. So, we have reduced the number of updates per training example from potentially a few million, the vocabulary size you get from a large corpus, to a much smaller number. This is a classic software engineering hack that works in academia too.
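To make this concrete, here is a minimal NumPy sketch of one skip-gram update with negative sampling. The array names, sizes, and learning rate are illustrative assumptions, not taken from any particular library; the 3/4-power unigram distribution is the common choice for drawing negative words.

```python
import numpy as np

rng = np.random.default_rng(42)

vocab_size, embed_dim = 10_000, 100
num_negatives = 5  # the small number of "wrong words" per update

# Input (center) and output (context) embedding matrices.
W_in = rng.normal(scale=0.01, size=(vocab_size, embed_dim))
W_out = rng.normal(scale=0.01, size=(vocab_size, embed_dim))

# Negatives are usually drawn from the unigram distribution raised
# to the 3/4 power (illustrative random counts stand in for real ones).
counts = rng.integers(1, 1000, size=vocab_size).astype(float)
unigram_probs = counts ** 0.75
unigram_probs /= unigram_probs.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pair(center, context, lr=0.025):
    """One update for a single (center word, context word) pair."""
    # Sample a handful of negative (wrong) words instead of
    # updating against the entire vocabulary.
    negatives = rng.choice(vocab_size, size=num_negatives, p=unigram_probs)

    v = W_in[center]                                  # center word vector
    targets = np.concatenate(([context], negatives))  # 1 real + k sampled
    labels = np.zeros(len(targets))
    labels[0] = 1.0                                   # 1 = real, 0 = sampled

    u = W_out[targets]                                # (k + 1, embed_dim)
    grad = sigmoid(u @ v) - labels                    # logistic-loss gradient

    # Only num_negatives + 1 rows of W_out are touched, not all vocab_size.
    W_in[center] -= lr * grad @ u
    W_out[targets] -= lr * np.outer(grad, v)

train_pair(center=42, context=1337)
```

The key saving is in the last two lines: each training pair touches only `num_negatives + 1` output rows instead of all `vocab_size` of them, which is where the reduction in updates comes from.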
