Fundamentals of Deep Learning

Book Description

With the reinvigoration of neural networks in the 2000s, deep learning has become an extremely active area of research that is paving the way for modern machine learning. This book uses exposition and examples to help you understand major concepts in this complicated field.

Large companies such as Google, Microsoft, and Facebook have taken notice and are actively growing in-house deep learning teams. For the rest of us, however, deep learning remains a complex and difficult subject to grasp. If you have a basic understanding of what machine learning is, are familiar with the Python programming language, and have some background in calculus, this book will help you get started.

Table of Contents

  1. Preface
  2. 1. The Neural Network
    1. Building Intelligent Machines
    2. The Limits of Traditional Computer Programs
    3. The Mechanics of Machine Learning
    4. The Neuron
    5. Expressing Linear Perceptrons as Neurons
    6. Feed-Forward Neural Networks
    7. Linear Neurons and Their Limitations
    8. Sigmoid, Tanh, and ReLU Neurons
    9. Softmax Output Layers
    10. Looking Forward
  3. 2. Training Feed-Forward Neural Networks
    1. The Cafeteria Problem 
    2. Gradient Descent
    3. The Delta Rule and Learning Rates
    4. Gradient Descent with Sigmoidal Neurons
    5. The Backpropagation Algorithm
    6. Stochastic and Mini-Batch Gradient Descent
    7. Test Sets, Validation Sets, and Overfitting
    8. Preventing Overfitting in Deep Neural Networks
    9. Summary
  4. 3. Implementing Neural Networks in TensorFlow  
    1. What Is TensorFlow?
    2. How Does TensorFlow Compare to Alternatives?
    3. Installing TensorFlow
    4. Creating and Manipulating TensorFlow Variables
    5. TensorFlow Operations
    6. Placeholder Tensors
    7. Sessions in TensorFlow
    8. Navigating Variable Scopes and Sharing Variables
    9. Managing Models over the CPU and GPU
    10. Specifying the Logistic Regression Model in TensorFlow
    11. Logging and Training the Logistic Regression Model
    12. Leveraging TensorBoard to Visualize Computation Graphs and Learning
    13. Building a Multilayer Model for MNIST in TensorFlow
    14. Summary
  5. 4. Beyond Gradient Descent
    1. The Challenges with Gradient Descent
    2. Local Minima in the Error Surfaces of Deep Networks
    3. Model Identifiability
    4. How Pesky Are Spurious Local Minima in Deep Networks?
    5. Flat Regions in the Error Surface
    6. When the Gradient Points in the Wrong Direction
    7. Momentum-Based Optimization
    8. A Brief View of Second Order Methods
    9. Learning Rate Adaptation
    10. AdaGrad - Accumulating Historical Gradients
    11. RMSProp - Exponentially Weighted Moving Average of Gradients
    12. Adam - Combining Momentum and RMSProp
    13. The Philosophy Behind Optimizer Selection
    14. Summary
  6. 5. Convolutional Neural Networks
    1. Neurons in Human Vision
    2. The Shortcomings of Feature Selection
    3. Vanilla Deep Neural Networks Don’t Scale
    4. Filters and Feature Maps
    5. Full Description of the Convolutional Layer
    6. Max Pooling
    7. Full Architectural Description of Convolutional Networks
    8. Closing the Loop on MNIST with Convolutional Networks
    9. Image Preprocessing Pipelines Enable More Robust Models
    10. Accelerating Training with Batch Normalization
    11. Building a Convolutional Network for CIFAR-10
    12. Visualizing Learning in Convolutional Networks
    13. Leveraging Convolutional Filters to Replicate Artistic Styles
    14. Learning Convolutional Filters for Other Problem Domains
    15. Summary