Chapter 9. Classics, Frontiers, and Next Steps

In this chapter, we review the previous chapters from the perspective of the entire book, see how the seemingly independent topics we covered are in fact interdependent, and consider how researchers can mix and match these ideas to solve the problem at hand. We also summarize some classical topics in natural language processing that we could not discuss in depth between these covers. Finally, we point to the frontiers in the field, as of 2018. In fast-moving fields like empirical NLP and deep learning, it is important to keep learning new ideas and stay up to date, so we dedicate some space to learning how to learn about new topics in NLP.

What Have We Learned So Far?

We began with the supervised learning paradigm and how the computational graph abstraction lets us encode complex ideas as models that can be trained via backpropagation. PyTorch was introduced as our computational framework of choice. There is a risk, when writing an NLP book that uses deep learning, of treating the text input as "data" to be fed to black boxes. In Chapter 2, we introduced some basic concepts from NLP and linguistics to set the stage for the rest of the book. The foundational concepts discussed in Chapter 3, such as activation functions, loss functions, gradient-based optimization for supervised learning, and the training-eval loop, came in handy throughout the remaining chapters. We studied two examples of feed-forward networks—the multilayer perceptron and the convolutional neural network.
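As a refresher on how those foundational pieces fit together, the following minimal sketch shows a PyTorch training-eval loop in the supervised learning paradigm. It is not taken from the earlier chapters; the toy regression data, the single linear layer, and the hyperparameters are placeholders chosen only to keep the example self-contained.

    import torch
    import torch.nn as nn

    # Hypothetical toy data: 100 examples, 10 features, one regression target.
    inputs = torch.randn(100, 10)
    targets = torch.randn(100, 1)

    # A one-layer model stands in for the MLPs and CNNs of Chapter 4.
    model = nn.Linear(10, 1)
    loss_fn = nn.MSELoss()                                    # loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # gradient-based optimizer

    for epoch in range(5):
        model.train()
        optimizer.zero_grad()        # clear gradients from the previous step
        predictions = model(inputs)  # forward pass builds the computational graph
        loss = loss_fn(predictions, targets)
        loss.backward()              # backpropagation computes gradients
        optimizer.step()             # gradient-based parameter update

        model.eval()
        with torch.no_grad():        # evaluation: no graph, no gradient tracking
            eval_loss = loss_fn(model(inputs), targets)
        print(f"epoch {epoch}: train loss {loss.item():.4f}, "
              f"eval loss {eval_loss.item():.4f}")

The same skeleton—forward pass, loss, backward pass, optimizer step, then evaluation with gradients disabled—underlies every model trained in this book; only the model, the loss, and the data loading change.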
