Live Online Training

Practical AI on iOS

Building with Core ML and Vision

Jon Manning

Expert Jon Manning offers a hands-on overview of the new machine learning features built into iOS. In this 90-minute course, you’ll learn how to apply the Vision and Core ML frameworks to solve practical problems in object detection, face detection, and more.

Because these frameworks run on-device rather than relying on a cloud processing service, they work quickly—with no network access required. Additionally, user input never has to leave the phone, making these frameworks ideal for situations where the user wants to maintain their privacy. All of these factors and more make the AI features of iOS 11 incredibly appealing. Join in to learn how to build tools that take full advantage of them.

What you’ll learn, and how you can apply it

By the end of this live online course, you’ll understand:

  • What the Core ML and Vision frameworks are
  • How to expand upon your new skills

And you’ll be able to:

  • Detect faces and facial features
  • Load trained models for use in machine learning
  • Detect and classify objects in photos

This training course is for you because...

  • You’re a programmer who’s excited about the possibilities of machine learning, and you want to explore iOS 11’s machine learning features.
  • You’re interested in machine learning in general and want to see it in action.
  • You want to learn how to apply your existing machine learning knowledge to iOS.

Prerequisites

  • A working knowledge of the Swift programming language

Materials or downloads needed in advance:

Recommended preparation:

Apple’s Machine Learning overview (website) https://developer.apple.com/machine-learning/

Recommended follow-up:

Hands-On Machine Learning with Scikit-Learn and TensorFlow (book)

About your instructor

  • Jon Manning is the cofounder of independent game development studio Secret Lab. He's currently working on top-down puzzler Button Squid and the critically acclaimed adventure game Night in the Woods, which includes his interactive dialogue system Yarn Spinner. Jon has written a whole bunch of books for O'Reilly Media about iOS development and game development. He holds a doctorate about jerks on the internet. Jon can be found as @desplesda on Twitter.

Schedule

The timeframes are only estimates and may vary according to how the class is progressing.

Overview of Core ML and Vision (20 minutes)

  • Lecture and hands-on exercises: The feature sets available in the new Core ML and Vision frameworks that ship with iOS 11
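
As a first taste of the Core ML side, here is a minimal sketch of loading a trained model by hand. The file name "FlowerClassifier" is hypothetical; in practice, Xcode also generates a typed Swift wrapper for any model you add to a project.

    import CoreML

    // A minimal sketch, assuming a compiled model named
    // "FlowerClassifier.mlmodelc" (a hypothetical name) is bundled with
    // the app. Core ML loads compiled models from a URL.
    guard let url = Bundle.main.url(forResource: "FlowerClassifier",
                                    withExtension: "mlmodelc"),
          let model = try? MLModel(contentsOf: url) else {
        fatalError("Couldn't load the model")
    }

    // A loaded model can describe the inputs and outputs it expects.
    print(model.modelDescription.inputDescriptionsByName)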

Vision with face detection (25 minutes)

  • Lecture and hands-on exercises: Creating an app that detects faces in provided photos
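
Face detection in Vision follows a request-and-handler pattern. The sketch below shows one plausible shape for this exercise; the function name and the use of UIImage are illustrative, not the course's exact code.

    import UIKit
    import Vision

    func detectFaces(in image: UIImage) {
        guard let cgImage = image.cgImage else { return }

        // The request finds the bounding box of every face in the image.
        let request = VNDetectFaceRectanglesRequest { request, error in
            guard let faces = request.results as? [VNFaceObservation] else { return }
            for face in faces {
                // boundingBox is normalized (0 to 1), with the origin at
                // the bottom left of the image.
                print("Found a face at \(face.boundingBox)")
            }
        }

        // The handler runs one or more requests against a single image.
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }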

Break (10 minutes)

Working with AVKit (25 minutes)

  • Lecture and hands-on exercises: Capturing real-time camera input with a lower-level framework that works directly with the device’s built-in audio-visual systems
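
Real-time capture of this kind usually goes through AVFoundation's AVCaptureSession; the sketch below assumes that approach. The class and queue names are illustrative, and the app's Info.plist needs a camera usage description (NSCameraUsageDescription) before capture will start.

    import AVFoundation

    class CameraController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()

        func start() throws {
            // Use the default video camera as the session's input.
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            session.addInput(try AVCaptureDeviceInput(device: camera))

            // Deliver each video frame to this object on a background queue.
            let output = AVCaptureVideoDataOutput()
            output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera-frames"))
            session.addOutput(output)

            session.startRunning()
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Each frame arrives as a sample buffer; its pixel buffer is
            // what you'd hand to Vision or Core ML.
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            _ = pixelBuffer
        }
    }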

Detecting facial features (20 minutes)

  • Lecture and hands-on exercises: Using Vision to describe where certain parts of faces are
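
Landmark detection reuses the same request pattern, swapping in VNDetectFaceLandmarksRequest. A minimal sketch:

    import Vision

    func detectLandmarks(in cgImage: CGImage) {
        let request = VNDetectFaceLandmarksRequest { request, error in
            guard let faces = request.results as? [VNFaceObservation] else { return }
            for face in faces {
                // Each landmark region is a list of points, normalized to
                // the face's bounding box.
                if let leftEye = face.landmarks?.leftEye {
                    print("Left eye has \(leftEye.pointCount) points")
                }
                if let lips = face.landmarks?.outerLips {
                    print("Outer lips: \(lips.normalizedPoints)")
                }
            }
        }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }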

Classifying objects in photos (20 minutes)

  • Lecture and hands-on exercises: Using Core ML and a pretrained neural network to detect and classify the objects present in the camera’s view
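
Wrapping a Core ML model in a VNCoreMLRequest lets Vision handle the image scaling and conversion for you. In the sketch below, MobileNet stands in for whichever pretrained classification model you add to the project; Xcode generates a Swift class (with a .model property) for it.

    import CoreML
    import Vision

    func classify(_ pixelBuffer: CVPixelBuffer) {
        // "MobileNet" is a stand-in for your chosen pretrained model.
        guard let visionModel = try? VNCoreMLModel(for: MobileNet().model) else { return }

        let request = VNCoreMLRequest(model: visionModel) { request, error in
            // Classification results arrive sorted by confidence.
            guard let observations = request.results as? [VNClassificationObservation],
                  let best = observations.first else { return }
            print("\(best.identifier) (\(best.confidence))")
        }

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }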