Machine learning on iOS

From Apple's teaser page on the new Core ML feature announced as part of iOS 11:

Core ML lets you integrate a broad variety of machine learning model types into your app. [...] Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency. You can run machine learning models on the device so data doesn't need to leave the device to be analyzed.

A big part of the keynote was privacy, and this is another way to help protect it: data doesn't have to leave the device to be analyzed. But it also seems to be a terrific technical accomplishment - like where Google is heading with TensorFlow on mobile, but here and now, with models at the ready.
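
To get a sense of what that looks like in code, here's a minimal sketch of calling a Core ML model from Swift. The FlowerClassifier class and its input/output names are hypothetical - Xcode generates a typed wrapper class along these lines when you add a .mlmodel file to a project - but the shape of the call is the point: the prediction runs entirely on device.

```swift
import CoreML
import CoreVideo

// Minimal sketch. "FlowerClassifier" stands in for the class Xcode would
// generate from a hypothetical FlowerClassifier.mlmodel; the image input
// and classLabel output are assumptions about that model's interface.
func classify(pixelBuffer: CVPixelBuffer) {
    do {
        let model = FlowerClassifier()
        // Core ML decides whether to run the prediction on the CPU or GPU.
        let output = try model.prediction(image: pixelBuffer)
        print("Predicted label: \(output.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```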

With Core ML, Apple is adding a ton of capability in image recognition and language processing.

Vision - You can easily build computer vision machine learning features into your app. Supported features include face tracking, face detection, landmarks, text detection, rectangle detection, barcode detection, object tracking, and image registration. (There's a short face-detection sketch after this list.)
Natural Language Processing - The natural language processing APIs in Foundation use machine learning to deeply understand text, with features such as language identification, tokenization, lemmatization, part-of-speech tagging, and named entity recognition. (A tagging sketch follows the Vision one below.)
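
Here's roughly what the Vision side looks like - a sketch of asking for face rectangles in a UIImage, assuming the image has a CGImage backing it:

```swift
import UIKit
import Vision

// Sketch of Vision-based face detection. Results come back as
// VNFaceObservation values with bounding boxes in normalized coordinates.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        let faces = request.results as? [VNFaceObservation] ?? []
        print("Found \(faces.count) face(s)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```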

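And on the language side, a quick sketch of NSLinguisticTagger from Foundation, tagging each word in a sample sentence with its lexical class:

```swift
import Foundation

// Sketch of part-of-speech tagging with NSLinguisticTagger.
let text = "Core ML was announced by Apple at WWDC."
let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass, .nameType], options: 0)
tagger.string = text

let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation]

// Walk the text word by word, printing each token's lexical class
// (noun, verb, and so on).
tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
    let token = (text as NSString).substring(with: tokenRange)
    print("\(token): \(tag?.rawValue ?? "?")")
}
```
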
These, of course, join the speech recognition APIs introduced at last year's WWDC.