Live Online Training

Intermediate Machine Learning with scikit-learn

Using scikit-learn effectively and performantly

David Mertz, Ph.D.

‘Machine learning’ is simply what we call the algorithmic extraction of knowledge from data. The ability to perform complex analysis of data, moving beyond the basic tools of statistics, has been steadily refined and developed over the last two decades. Over the same period, Python has grown to be the premier language for data science, and scikit-learn has grown to be the main toolkit used within Python for general-purpose machine learning.

This course moves beyond the topics covered in Beginning Machine Learning with scikit-learn. A recap of a few essential concepts is given for students starting here. We then discuss unsupervised machine learning techniques, and next look at the data preparation and “massaging” that robust models always require. Finally, we address best practices for robust and generalizable modeling needed for real-world data science.

What you'll learn and how you can apply it

  • Recap: Classification vs. Regression vs. Clustering
  • Unsupervised machine learning
  • Feature engineering and feature selection
  • Pipelines
  • Better train/test splits

This training course is for you because...

  • You are an aspiring or beginning data scientist.
  • You have a comfortable intermediate-level knowledge of Python and a very basic familiarity with statistics and linear algebra.
  • You are a working programmer or student who is motivated to expand your skills to include machine learning with Python.
  • You have some familiarity with the fundamentals of machine learning or have taken the Beginning Machine Learning with scikit-learn live training class.


  • A first course in Python and/or working experience as a programmer
  • College-level basic mathematics
  • Recommended: Attend or view Beginning Machine Learning with scikit-learn

Course Set-up

Students should have a system with Jupyter installed, along with a recent version of scikit-learn, Pandas, NumPy, Matplotlib, and the general scientific Python tool stack. The training materials will be made available as notebooks in a GitHub repository.

Recommended Preparation

These resources are optional, but helpful if you need a refresher on Python, Jupyter Notebooks, or Pandas:

Recommended Follow-up

About your instructor

  • David Mertz was most recently a Senior Trainer and Senior Software Developer for Anaconda, Inc., in which role he created and structured its training program. He was a Director of the Python Software Foundation (PSF) for six years and remains co-chair of its Trademarks Committee and of the PSF Scientific Python Working Group. David worked for nine years with D. E. Shaw Research, some folks who built the world's fastest, highly-specialized (down to the ASICs and network layer) supercomputer for performing molecular dynamics.

    David wrote the widely read columns Charming Python and XML Matters for IBM developerWorks, short books for O'Reilly, and the Addison-Wesley book Text Processing in Python. He has spoken at multiple OSCons, PyCons, and AnacondaCon, and was invited to be a keynote speaker at PyCon-India, PyCon-UK, PyCon-ZA, PyCon Belarus, PyCon Cuba, and PyData SF.

    David is pleased to find Python becoming the default high-level language for most scientific computing projects.


Schedule

The timeframes are only estimates and may vary according to how the class is progressing.

Lesson 1: Recap: What is Machine Learning? (30 minutes)

1.1 Overview of techniques used in Machine Learning
  - 1.1.1 Classification, Regression, Clustering
  - 1.1.2 Dimensionality Reduction, Feature Engineering, Feature Selection
  - 1.1.3 Categorical vs. Ordinal vs. Continuous variables
  - 1.1.4 Results of Classification and Regression in earlier session
  - 1.1.5 Metrics [BREAK]
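As a minimal illustration of the recap topics above (not taken from the course notebooks), a classification model can be fit and evaluated with a metric in a few lines; the synthetic dataset and logistic-regression choice here are assumptions for the sketch:

```python
# Illustrative sketch: fit a classifier on synthetic data and score it.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))  # fraction correct, in [0, 1]
print(f"accuracy: {acc:.2f}")
```

Regression models follow the same fit/predict pattern, but are scored with metrics such as R² or mean squared error rather than accuracy.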

Lesson 2: Clustering (45 minutes)

2.1 Overview of (some) clustering algorithms
  - 2.1.1 KMeans
  - 2.1.2 Agglomerative
  - 2.1.3 Density-based clustering: DBSCAN and HDBSCAN

2.2 n_clusters, labels, and predictions

2.3 Visualizing results [BREAK]
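The relationship between n_clusters, labels, and predictions in this lesson can be sketched as follows; the blob dataset and parameter values are illustrative assumptions, not the course materials:

```python
# Illustrative sketch: fit KMeans, inspect per-sample labels, and predict
# cluster membership for new points using the learned centroids.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(km.labels_[:10])    # cluster label assigned to each training point
print(km.predict(X[:5]))  # labels for "new" data via nearest centroid
```

Density-based algorithms such as DBSCAN differ in that the number of clusters is not fixed in advance, and points can be labeled -1 as noise.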

Lesson 3: Feature engineering and feature selection (45 minutes)

3.1 Dimensionality reduction
  - 3.1.1 Principal Component Analysis (PCA)
  - 3.1.2 Non-Negative Matrix Factorization (NMF)
  - 3.1.3 Latent Dirichlet Allocation (LDA)
  - 3.1.4 Independent Component Analysis (ICA)
  - 3.1.5 SelectKBest

3.2 Dimensionality expansion
  - 3.2.1 Polynomial Features
  - 3.2.2 One-Hot Encoding

3.3 Scaling
  - 3.3.1 StandardScaler, RobustScaler, MinMaxScaler, Normalizer
  - 3.3.2 Quantiles, binarize [BREAK]
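A brief sketch of how reduction, expansion, and scaling fit together, assuming the iris dataset purely for illustration (the course may use different data):

```python
# Illustrative sketch: scale features, reduce dimensionality with PCA,
# and expand dimensionality with polynomial features.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = load_iris(return_X_y=True)             # 150 samples, 4 features
X_scaled = StandardScaler().fit_transform(X)  # zero mean, unit variance
X_2d = PCA(n_components=2).fit_transform(X_scaled)      # reduce 4 -> 2 dims
X_poly = PolynomialFeatures(degree=2).fit_transform(X)  # expand 4 -> 15 dims
print(X_2d.shape, X_poly.shape)
```

Scaling before PCA matters because PCA finds directions of maximum variance; unscaled features with large units would otherwise dominate the components.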

Lesson 4: Pipelines (30 minutes)

4.1 Feature selection and engineering

4.2 Grid search

4.3 Model [BREAK]
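The three pipeline topics above can be sketched in one snippet; the dataset, estimator, and parameter grid here are assumed for illustration only:

```python
# Illustrative sketch: a Pipeline chaining scaling, feature selection, and a
# model, with hyperparameters tuned jointly by GridSearchCV.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest()),
    ("model", SVC()),
])
# Parameters of pipeline steps are addressed as <step_name>__<param>.
param_grid = {"select__k": [2, 3, 4], "model__C": [0.1, 1, 10]}
grid = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)
print(grid.best_params_)
```

Because the scaler and selector are fit inside each cross-validation fold, the pipeline also prevents test data from leaking into preprocessing.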

Lesson 5: Robust Train/Test Splits (30 minutes)

5.1 cross_val_score

5.2 ShuffleSplit

5.3 KFold, RepeatedKFold, LeaveOneOut, LeavePOut, StratifiedKFold
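The splitters listed above all plug into cross_val_score through its cv parameter; a minimal comparison, assuming the iris dataset and a logistic-regression model for illustration:

```python
# Illustrative sketch: compare cross-validation splitter strategies by
# passing each one as the cv argument to cross_val_score.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (KFold, ShuffleSplit, StratifiedKFold,
                                     cross_val_score)

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)
for cv in (KFold(n_splits=5, shuffle=True, random_state=0),
           StratifiedKFold(n_splits=5),
           ShuffleSplit(n_splits=5, test_size=0.25, random_state=0)):
    scores = cross_val_score(model, X, y, cv=cv)  # one score per split
    print(type(cv).__name__, scores.mean().round(3))
```

StratifiedKFold preserves class proportions in each fold, which matters for imbalanced classification; LeaveOneOut and LeavePOut are exhaustive variants suited to small datasets.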