Deep Learning with Python

About the Technology
Machine learning has made remarkable progress in recent years. We went from near-unusable speech and image recognition to near-human accuracy, and from machines that couldn't beat a serious Go player to one that defeated a world champion. Behind this progress is deep learning: a combination of engineering advances, best practices, and theory that enables a wealth of previously impossible smart applications.

About the Book

Deep Learning with Python introduces the field of deep learning using the Python language and the powerful Keras library. Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples. You'll explore challenging concepts and practice with applications in computer vision, natural-language processing, and generative models. By the time you finish, you'll have the knowledge and hands-on skills to apply deep learning in your own projects.



What's Inside
  • Deep learning from first principles
  • Setting up your own deep-learning environment
  • Image-classification models
  • Deep learning for text and sequences
  • Neural style transfer, text generation, and image generation
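To give a taste of what these topics look like in practice, here is a minimal sketch in the Keras Sequential style the book teaches: a small network compiled for binary classification. The layer sizes, input shape, and training settings below are illustrative placeholders, not examples taken from the book.

    # A minimal Keras model in the Sequential style (illustrative only).
    from keras import models, layers

    # Two-layer network for binary classification of 784-dimensional inputs
    # (for example, flattened 28x28 grayscale images).
    model = models.Sequential()
    model.add(layers.Dense(32, activation='relu', input_shape=(784,)))
    model.add(layers.Dense(1, activation='sigmoid'))

    # Configure the learning process: optimizer, loss function, and metrics.
    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

    # Training is then a single call on NumPy arrays, for example:
    # model.fit(x_train, y_train, epochs=10, batch_size=128)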


About the Reader
Readers need intermediate Python skills. No previous experience with Keras, TensorFlow, or machine learning is required.

About the Author
François Chollet works on deep learning at Google in Mountain View, CA. He is the creator of the Keras deep-learning library, as well as a contributor to the TensorFlow machine-learning framework. He also does deep-learning research, with a focus on computer vision and the application of machine learning to formal reasoning. His papers have been published at major conferences in the field, including the Conference on Computer Vision and Pattern Recognition (CVPR), the Conference and Workshop on Neural Information Processing Systems (NIPS), the International Conference on Learning Representations (ICLR), and others.

Quotes
The clearest explanation of deep learning I have come across... it was a joy to read.
- Richard Tobias, Cephasonics

An excellent hands-on introductory title, with great depth and breadth.
- David Blumenthal-Barby, Babbel

Bridges the gap between the hype and a functioning deep-learning system.
- Peter Rabinovitch, Akamai

The best resource for becoming a master of Keras and deep learning.
- Claudio Rodriguez, Cox Media Group

Table of contents

  1. Deep Learning with Python
    1. François Chollet
  2. Copyright
  3. Brief Table of Contents
  4. Table of Contents
  5. Preface
  6. Acknowledgments
  7. About this Book
    1. Who should read this book
    2. Roadmap
    3. Software/hardware requirements
    4. Source code
    5. Book forum
  8. About the Author
  9. About the Cover
  10. Part 1. Fundamentals of deep learning
  11. Chapter 1. What is deep learning?
    1. 1.1. Artificial intelligence, machine learning, and deep learning
      1. 1.1.1. Artificial intelligence
      2. 1.1.2. Machine learning
      3. 1.1.3. Learning representations from data
      4. 1.1.4. The “deep” in deep learning
      5. 1.1.5. Understanding how deep learning works, in three figures
      6. 1.1.6. What deep learning has achieved so far
      7. 1.1.7. Don’t believe the short-term hype
      8. 1.1.8. The promise of AI
    2. 1.2. Before deep learning: a brief history of machine learning
      1. 1.2.1. Probabilistic modeling
      2. 1.2.2. Early neural networks
      3. 1.2.3. Kernel methods
      4. 1.2.4. Decision trees, random forests, and gradient boosting machines
      5. 1.2.5. Back to neural networks
      6. 1.2.6. What makes deep learning different
      7. 1.2.7. The modern machine-learning landscape
    3. 1.3. Why deep learning? Why now?
      1. 1.3.1. Hardware
      2. 1.3.2. Data
      3. 1.3.3. Algorithms
      4. 1.3.4. A new wave of investment
      5. 1.3.5. The democratization of deep learning
      6. 1.3.6. Will it last?
  12. Chapter 2. Before we begin: the mathematical building blocks of neural networks
    1. 2.1. A first look at a neural network
    2. 2.2. Data representations for neural networks
      1. 2.2.1. Scalars (0D tensors)
      2. 2.2.2. Vectors (1D tensors)
      3. 2.2.3. Matrices (2D tensors)
      4. 2.2.4. 3D tensors and higher-dimensional tensors
      5. 2.2.5. Key attributes
      6. 2.2.6. Manipulating tensors in Numpy
      7. 2.2.7. The notion of data batches
      8. 2.2.8. Real-world examples of data tensors
      9. 2.2.9. Vector data
      10. 2.2.10. Timeseries data or sequence data
      11. 2.2.11. Image data
      12. 2.2.12. Video data
    3. 2.3. The gears of neural networks: tensor operations
      1. 2.3.1. Element-wise operations
      2. 2.3.2. Broadcasting
      3. 2.3.3. Tensor dot
      4. 2.3.4. Tensor reshaping
      5. 2.3.5. Geometric interpretation of tensor operations
      6. 2.3.6. A geometric interpretation of deep learning
    4. 2.4. The engine of neural networks: gradient-based optimization
      1. 2.4.1. What’s a derivative?
      2. 2.4.2. Derivative of a tensor operation: the gradient
      3. 2.4.3. Stochastic gradient descent
      4. 2.4.4. Chaining derivatives: the Backpropagation algorithm
    5. 2.5. Looking back at our first example
  13. Chapter 3. Getting started with neural networks
    1. 3.1. Anatomy of a neural network
      1. 3.1.1. Layers: the building blocks of deep learning
      2. 3.1.2. Models: networks of layers
      3. 3.1.3. Loss functions and optimizers: keys to configuring the learning process
    2. 3.2. Introduction to Keras
      1. 3.2.1. Keras, TensorFlow, Theano, and CNTK
      2. 3.2.2. Developing with Keras: a quick overview
    3. 3.3. Setting up a deep-learning workstation
      1. 3.3.1. Jupyter notebooks: the preferred way to run deep-learning experiments
      2. 3.3.2. Getting Keras running: two options
      3. 3.3.3. Running deep-learning jobs in the cloud: pros and cons
      4. 3.3.4. What is the best GPU for deep learning?
    4. 3.4. Classifying movie reviews: a binary classification example
      1. 3.4.1. The IMDB dataset
      2. 3.4.2. Preparing the data
      3. 3.4.3. Building your network
      4. 3.4.4. Validating your approach
      5. 3.4.5. Using a trained network to generate predictions on new data
      6. 3.4.6. Further experiments
      7. 3.4.7. Wrapping up
    5. 3.5. Classifying newswires: a multiclass classification example
      1. 3.5.1. The Reuters dataset
      2. 3.5.2. Preparing the data
      3. 3.5.3. Building your network
      4. 3.5.4. Validating your approach
      5. 3.5.5. Generating predictions on new data
      6. 3.5.6. A different way to handle the labels and the loss
      7. 3.5.7. The importance of having sufficiently large intermediate layers
      8. 3.5.8. Further experiments
      9. 3.5.9. Wrapping up
    6. 3.6. Predicting house prices: a regression example
      1. 3.6.1. The Boston Housing Price dataset
      2. 3.6.2. Preparing the data
      3. 3.6.3. Building your network
      4. 3.6.4. Validating your approach using K-fold validation
      5. 3.6.5. Wrapping up
  14. Chapter 4. Fundamentals of machine learning
    1. 4.1. Four branches of machine learning
      1. 4.1.1. Supervised learning
      2. 4.1.2. Unsupervised learning
      3. 4.1.3. Self-supervised learning
      4. 4.1.4. Reinforcement learning
    2. 4.2. Evaluating machine-learning models
      1. 4.2.1. Training, validation, and test sets
      2. 4.2.2. Things to keep in mind
    3. 4.3. Data preprocessing, feature engineering, and feature learning
      1. 4.3.1. Data preprocessing for neural networks
      2. 4.3.2. Feature engineering
    4. 4.4. Overfitting and underfitting
      1. 4.4.1. Reducing the network’s size
      2. 4.4.2. Adding weight regularization
      3. 4.4.3. Adding dropout
    5. 4.5. The universal workflow of machine learning
      1. 4.5.1. Defining the problem and assembling a dataset
      2. 4.5.2. Choosing a measure of success
      3. 4.5.3. Deciding on an evaluation protocol
      4. 4.5.4. Preparing your data
      5. 4.5.5. Developing a model that does better than a baseline
      6. 4.5.6. Scaling up: developing a model that overfits
      7. 4.5.7. Regularizing your model and tuning your hyperparameters
  15. Part 2. Deep learning in practice
  16. Chapter 5. Deep learning for computer vision
    1. 5.1. Introduction to convnets
      1. 5.1.1. The convolution operation
      2. 5.1.2. The max-pooling operation
    2. 5.2. Training a convnet from scratch on a small dataset
      1. 5.2.1. The relevance of deep learning for small-data problems
      2. 5.2.2. Downloading the data
      3. 5.2.3. Building your network
      4. 5.2.4. Data preprocessing
      5. 5.2.5. Using data augmentation
    3. 5.3. Using a pretrained convnet
      1. 5.3.1. Feature extraction
      2. 5.3.2. Fine-tuning
      3. 5.3.3. Wrapping up
    4. 5.4. Visualizing what convnets learn
      1. 5.4.1. Visualizing intermediate activations
      2. 5.4.2. Visualizing convnet filters
      3. 5.4.3. Visualizing heatmaps of class activation
  17. Chapter 6. Deep learning for text and sequences
    1. 6.1. Working with text data
      1. 6.1.1. One-hot encoding of words and characters
      2. 6.1.2. Using word embeddings
      3. 6.1.3. Putting it all together: from raw text to word embeddings
      4. 6.1.4. Wrapping up
    2. 6.2. Understanding recurrent neural networks
      1. 6.2.1. A recurrent layer in Keras
      2. 6.2.2. Understanding the LSTM and GRU layers
      3. 6.2.3. A concrete LSTM example in Keras
      4. 6.2.4. Wrapping up
    3. 6.3. Advanced use of recurrent neural networks
      1. 6.3.1. A temperature-forecasting problem
      2. 6.3.2. Preparing the data
      3. 6.3.3. A common-sense, non-machine-learning baseline
      4. 6.3.4. A basic machine-learning approach
      5. 6.3.5. A first recurrent baseline
      6. 6.3.6. Using recurrent dropout to fight overfitting
      7. 6.3.7. Stacking recurrent layers
      8. 6.3.8. Using bidirectional RNNs
      9. 6.3.9. Going even further
      10. 6.3.10. Wrapping up
    4. 6.4. Sequence processing with convnets
      1. 6.4.1. Understanding 1D convolution for sequence data
      2. 6.4.2. 1D pooling for sequence data
      3. 6.4.3. Implementing a 1D convnet
      4. 6.4.4. Combining CNNs and RNNs to process long sequences
      5. 6.4.5. Wrapping up
  18. Chapter 7. Advanced deep-learning best practices
    1. 7.1. Going beyond the Sequential model: the Keras functional API
      1. 7.1.1. Introduction to the functional API
      2. 7.1.2. Multi-input models
      3. 7.1.3. Multi-output models
      4. 7.1.4. Directed acyclic graphs of layers
      5. 7.1.5. Layer weight sharing
      6. 7.1.6. Models as layers
      7. 7.1.7. Wrapping up
    2. 7.2. Inspecting and monitoring deep-learning models using Keras callbacks and TensorBoard
      1. 7.2.1. Using callbacks to act on a model during training
      2. 7.2.2. Introduction to TensorBoard: the TensorFlow visualization framework
      3. 7.2.3. Wrapping up
    3. 7.3. Getting the most out of your models
      1. 7.3.1. Advanced architecture patterns
      2. 7.3.2. Hyperparameter optimization
      3. 7.3.3. Model ensembling
      4. 7.3.4. Wrapping up
  19. Chapter 8. Generative deep learning
    1. 8.1. Text generation with LSTM
      1. 8.1.1. A brief history of generative recurrent networks
      2. 8.1.2. How do you generate sequence data?
      3. 8.1.3. The importance of the sampling strategy
      4. 8.1.4. Implementing character-level LSTM text generation
      5. 8.1.5. Wrapping up
    2. 8.2. DeepDream
      1. 8.2.1. Implementing DeepDream in Keras
      2. 8.2.2. Wrapping up
    3. 8.3. Neural style transfer
      1. 8.3.1. The content loss
      2. 8.3.2. The style loss
      3. 8.3.3. Neural style transfer in Keras
      4. 8.3.4. Wrapping up
    4. 8.4. Generating images with variational autoencoders
      1. 8.4.1. Sampling from latent spaces of images
      2. 8.4.2. Concept vectors for image editing
      3. 8.4.3. Variational autoencoders
      4. 8.4.4. Wrapping up
    5. 8.5. Introduction to generative adversarial networks
      1. 8.5.1. A schematic GAN implementation
      2. 8.5.2. A bag of tricks
      3. 8.5.3. The generator
      4. 8.5.4. The discriminator
      5. 8.5.5. The adversarial network
      6. 8.5.6. How to train your DCGAN
      7. 8.5.7. Wrapping up
  20. Chapter 9. Conclusions
    1. 9.1. Key concepts in review
      1. 9.1.1. Various approaches to AI
      2. 9.1.2. What makes deep learning special within the field of machine learning
      3. 9.1.3. How to think about deep learning
      4. 9.1.4. Key enabling technologies
      5. 9.1.5. The universal machine-learning workflow
      6. 9.1.6. Key network architectures
      7. 9.1.7. The space of possibilities
    2. 9.2. The limitations of deep learning
      1. 9.2.1. The risk of anthropomorphizing machine-learning models
      2. 9.2.2. Local generalization vs. extreme generalization
      3. 9.2.3. Wrapping up
    3. 9.3. The future of deep learning
      1. 9.3.1. Models as programs
      2. 9.3.2. Beyond backpropagation and differentiable layers
      3. 9.3.3. Automated machine learning
      4. 9.3.4. Lifelong learning and modular subroutine reuse
      5. 9.3.5. The long-term vision
    4. 9.4. Staying up to date in a fast-moving field
      1. 9.4.1. Practice on real-world problems using Kaggle
      2. 9.4.2. Read about the latest developments on arXiv
      3. 9.4.3. Explore the Keras ecosystem
    5. 9.5. Final words
  21. Appendix A. Installing Keras and its dependencies on Ubuntu
    1. A.1. Installing the Python scientific suite
    2. A.2. Setting up GPU support
    3. A.3. Installing Theano (optional)
    4. A.4. Installing Keras
  22. Appendix B. Running Jupyter notebooks on an EC2 GPU instance
    1. B.1. What are Jupyter notebooks? Why run Jupyter notebooks on AWS GPUs?
    2. B.2. Why would you not want to use Jupyter on AWS for deep learning?
    3. B.3. Setting up an AWS GPU instance
      1. B.3.1. Configuring Jupyter
    4. B.4. Installing Keras
    5. B.5. Setting up local port forwarding
    6. B.6. Using Jupyter from your local browser
  23. Index
  24. List of Figures
  25. List of Tables
  26. List of Listings

Product information

  • Title: Deep Learning with Python
  • Author(s): François Chollet
  • Release date: December 2017
  • Publisher(s): Manning Publications
  • ISBN: 9781617294433