Data exploration and model evaluation

One simple technique for assessing any vectorization method is to use the training corpus as the test corpus. Of course, the model will overfit the training set, but that is fine here: this is a sanity check rather than a measure of generalization, and a model that cannot even recover its own training documents is unlikely to be useful.

We can use the training corpus as a test corpus by doing the following:

  • Inferring a new vector for each document in the corpus
  • Comparing the inferred vector to the vectors of all documents in the corpus
  • Ranking the document, sentence, or paragraph vectors by their similarity scores

Let's do this in code, as follows:

ranks = []
for idx in range(len(ted_talk_docs)):
    # Infer a vector for this document, as if it were unseen text
    inferred_vector = model.infer_vector(ted_talk_docs[idx].words)
    # Compare it against every document vector stored in the model
    sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
    # Record where this document ranks in its own similarity list
    rank = [docid for docid, sim in sims].index(idx)
    ranks.append(rank)
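A quick way to summarize the result is to count how often each document is ranked as most similar to itself (rank 0). The following is a minimal sketch that assumes the ranks list produced by the loop above; the collections.Counter summary and the self-similarity percentage are illustrative additions, not part of the book's code.

import collections

# Count how often each rank occurs; rank 0 means the document was
# retrieved as most similar to its own inferred vector
rank_counts = collections.Counter(ranks)
print(rank_counts)

# Fraction of documents that retrieve themselves as the top hit --
# a rough self-similarity check for the trained model
self_similarity = rank_counts[0] / len(ranks)
print("Self-similarity: {:.1%}".format(self_similarity))

If most documents sit at rank 0, the model is at least internally consistent; a flat distribution of ranks would suggest the document vectors are not capturing much.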
