Adding until you overfit, then regularizing

Hopefully, by seeking out architectures for similar problems, you are at least close to an architecture that works for you. What can you do to further optimize your network architecture?

  • Across several experimental runs, add layers and/or neurons until your network begins to overfit the problem. In deep learning terms, add units until your model is no longer high bias.
  • Once you're beginning to overfit, you've found a network architecture that is able to fit the training data very well, and perhaps even too well. At this point, you should focus on reducing variance through dropout, L1/L2 regularization, early stopping, or similar techniques.
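The two steps can be illustrated with a minimal linear-model analogue in plain NumPy (not a neural network, and not code from this book): first fit a model with deliberately high capacity, then shrink its variance with an L2 penalty. The data, degree, and penalty strength below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-D regression problem: few noisy samples, many polynomial features,
# so the unregularized fit has more than enough capacity to overfit.
x = rng.uniform(-1, 1, size=12)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(12)

degree = 9  # step 1: capacity high enough to begin overfitting
X = np.vander(x, degree + 1)

def fit(X, y, lam):
    """Ridge regression: w = (X^T X + lam*I)^(-1) X^T y; lam=0 is plain least squares."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

def train_mse(w):
    return float(np.mean((X @ w - y) ** 2))

w_overfit = fit(X, y, lam=0.0)   # fits the training data very well, perhaps too well
w_reg = fit(X, y, lam=1e-2)      # step 2: rein in variance with an L2 penalty

# The penalty shrinks the weights (lower variance) at a small cost in training fit.
print("weight norms:", np.linalg.norm(w_overfit), np.linalg.norm(w_reg))
print("train MSE:   ", train_mse(w_overfit), train_mse(w_reg))
```

In a Keras network the same second step would typically be a `Dropout` layer, a kernel regularizer, or an `EarlyStopping` callback rather than a closed-form penalty, but the bias/variance trade is the same.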

This approach is most often attributed to famed neural ...
