During the latest edition of its annual WWDC event, Apple made important strides to show developers and creators that it is finally getting serious about artificial intelligence.
The company announced Core ML, an all-new framework designed to let developers build smarter apps by embedding on-device machine learning capabilities in them. But it seems the new system still has some learning to do.
Toying around with the Core ML beta, developer Paul Haddad took to Twitter to showcase how well the framework handles computer vision tasks.
Using the new built-in screen recording tool, Haddad tested Core ML’s ability to identify and caption objects in real time.
While the app accurately recognized certain objects – like a screwdriver, a keyboard and some boxes – it curiously struggled to caption the first-generation Mac Pro, misidentifying the desktop system as either a “speaker unit” or a “space heater.”
Despite these small inconsistencies, Haddad expressed enthusiasm about the framework’s potential, noting that users need to point their cameras “at the right angle” for optimal results. He also remarked that running the app seemed to heat up the device considerably.
Given that apps relying on machine learning to caption real-world objects misidentify their targets all the time, it is hardly surprising that Apple’s framework gets things wrong here and there – it is, after all, still in beta.
Chances are, however, that the software will continue to improve once Apple rolls out the official release this autumn and more people start experimenting with Core ML.
[H/T Paul Haddad]