In iOS 10, Siri will be able to do a lot more than just answer your inane questions. Developers can now utilize something called SiriKit for their apps and extensions, and it’s going to be awesome.
How it works
During a session at WWDC, Apple detailed just how Siri for third-party apps would work via extensions.
If you’re not sure what extensions are, they’re the pieces of code that power what you see on-screen once Siri is activated. There are two types: one for data (the Intents extension) and one for the interface (the Intents UI extension). They can run even when the app itself isn’t active in the background.
The data extension is mandatory; Siri can’t do anything if you’re not giving her context. The UI extension is optional, and it’s there for developers who want their Siri experience to better reflect the app proper.
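To make that concrete, here’s a minimal sketch of the data side’s entry point. The `IntentHandler` name matches Apple’s template, but `MessageHandler` is a hypothetical class standing in for an app’s own code:

```swift
import Intents

// Entry point for the Intents (data) extension. When Siri recognizes an
// intent aimed at this app, she asks this object for something that can
// handle it.
class IntentHandler: INExtension {

    override func handler(for intent: INIntent) -> Any {
        // Hand back an object that implements the handling protocol for
        // this intent type. MessageHandler is a hypothetical class that
        // would conform to INSendMessageIntentHandling.
        if intent is INSendMessageIntent {
            return MessageHandler()
        }
        // Fall back to self for anything else we declared support for.
        return self
    }
}
```

The important design point is that this object can’t reach into the main app’s code at all, which is exactly why Apple wants the logic pulled out into a shared framework.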
Take ride booking as an example: it’s possible Lyft would simply default to the last type of ride you took (Line, carpool, etc.) to better serve your request. From there, you’d need to approve the location and make a payment, then a car would be on its way.
Lyft could also use the UI extension to make what you see when it’s called via Siri very pink. Rather than some generic black box of data, you may see a map view with a lot of pink surrounding it, just as you would in the app.
Apple isn’t making it too easy for developers
Getting Siri to work with apps isn’t going to be as simple as plugging in an API call. It may actually be pretty difficult for developers.
What Apple is asking developers to do is offload their logic into a framework the app and extension can share. For apps that keep most of their code logic in the main app target, it’ll require a bit (maybe a lot) of retooling.
For larger apps, this shouldn’t be a problem. The architecture for robust apps typically involves keeping a lot of the logic separate so it can be updated more easily (think Facebook and its ‘we update the app every two weeks for no apparent reason’ as a probable example).
SiriKit is also restricted to six types of app: audio or video calling, messaging, payments, photo search, workouts and ride booking.
Once a developer has the architecture right, there’s a lot of granular work to make sure Siri works properly. Siri can’t just be called on without cause, after all.
Another example: if you said ‘send a WhatsApp message to Jeff that I’m on my way,’ Siri would need access to the app (duh), access to your WhatsApp contacts, approval to send a message on your behalf, and the ability to tell you a message was sent.
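In code, that flow maps onto a handler protocol with confirm and handle steps. Here’s a rough sketch using the iOS 10-era Swift signatures; `MessageHandler` and `sendViaApp(_:to:)` are hypothetical stand-ins for an app’s own code, not anything WhatsApp actually ships:

```swift
import Intents

// Sketch of the hooks Siri walks through for "send a message to Jeff".
class MessageHandler: NSObject, INSendMessageIntentHandling {

    // Confirm: a last check that we can actually send right now
    // (user is logged in, network is reachable, and so on).
    func confirm(sendMessage intent: INSendMessageIntent,
                 completion: @escaping (INSendMessageIntentResponse) -> Void) {
        completion(INSendMessageIntentResponse(code: .ready, userActivity: nil))
    }

    // Handle: actually send the message, then tell Siri how it went so
    // she can report back to the user ("OK, I sent it").
    func handle(sendMessage intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // sendViaApp(_:to:) would be the app's own networking code.
        // sendViaApp(intent.content, to: intent.recipients)
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

Note that the response codes are how Siri learns whether to read out a success or a failure; the app never talks to the user directly here.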
She’d also need a way to resolve ambiguities. If you knew two Jeffs but didn’t clarify which one you wanted to chat with, she’d want to make sure you were messaging the right person.
Similarly, if the app has dedicated usernames, you may want to message someone by that username instead. A good example there would be tweeting at someone on Twitter; that person’s username may be some weird, clever thing you remember easily, while their real name may be one you can’t recall (or even pronounce).
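This is the resolution step, and it’s where the ‘two Jeffs’ problem gets handled. A sketch, again with iOS 10-era signatures; `lookupContacts(matching:)` is a hypothetical function that would search the app’s own contact list, usernames included:

```swift
import Intents

// Resolution: before Siri confirms anything, she asks the extension to
// pin down exactly who each spoken recipient refers to.
func resolveRecipients(forSendMessage intent: INSendMessageIntent,
                       with completion: @escaping ([INPersonResolutionResult]) -> Void) {
    guard let recipients = intent.recipients, !recipients.isEmpty else {
        // No recipient at all: Siri will ask the user to name one.
        completion([INPersonResolutionResult.needsValue()])
        return
    }

    var results: [INPersonResolutionResult] = []
    for recipient in recipients {
        // lookupContacts(matching:) is hypothetical app code that matches
        // the spoken name against the app's contacts and usernames.
        let matches = lookupContacts(matching: recipient.displayName)
        switch matches.count {
        case 0:
            results.append(.unsupported())          // nobody by that name
        case 1:
            results.append(.success(with: matches[0]))
        default:
            // Two Jeffs: Siri presents the list and asks "which one?"
            results.append(.disambiguation(with: matches))
        }
    }
    completion(results)
}
```

Returning `.disambiguation(with:)` is what triggers Siri’s follow-up question, so the app never has to guess.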
It won’t happen quickly, but it’ll be awesome
This won’t happen overnight, obviously. Even if a developer has much of the app’s architecture correct, there’s a lot of implementation that needs to happen.
And when it happens, it may not be a great start. To my mind, it’ll be a lot like the Apple Watch gold rush. When Apple Watch launched, developers rushed to get their apps on your wrist, a rush that only intensified once complications were introduced.
But as with the Apple Watch, developers quickly learned what worked and what didn’t, and adjusted accordingly. We should just be good users and tweet at them rather than leave one-star reviews for something that, at least for the first six months or so, should be considered beta.