iOS 13 makes Siri and voice shortcuts something apps can create, suggest, and react to. Although basic reader interactions are not a current priority, these features may help accessibility, and in general they fall under the "new platform features" we try to stay current with. Net: this is interesting, but not an immediate priority...
There are three abstract stories I can see so far that we may want to support or consider further:
- The ability of an app to define shortcuts, and parameters to those shortcuts, which Siri can learn and the Shortcuts app can use. In our case the primary use would be "look up [parameter]", which would open the article for the first result and/or open the search results view. Other shortcuts might include content requests based on feed content ("read me today's featured article") or location ("what historic landmarks are near [my current location]?").
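As a rough sketch of what defining the "look up [parameter]" shortcut could look like: an `NSUserActivity` marked eligible for prediction, with a suggested invocation phrase. The activity type string and `searchTerm` key are hypothetical names for illustration; the real type would need to be listed under `NSUserActivityTypes` in Info.plist.

```swift
import Intents
import Foundation

// Hypothetical activity type; the real identifier would be registered in
// Info.plist under NSUserActivityTypes.
let lookUpActivityType = "org.wikimedia.wikipedia.lookup"

func makeLookupActivity(for searchTerm: String) -> NSUserActivity {
    let activity = NSUserActivity(activityType: lookUpActivityType)
    activity.title = "Look up \(searchTerm)"
    // Eligibility flags let the system index the activity and offer it as a
    // Siri shortcut / prediction.
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true
    activity.suggestedInvocationPhrase = "Look up \(searchTerm) on Wikipedia"
    // Payload we would read back when the shortcut is invoked.
    activity.userInfo = ["searchTerm": searchTerm]
    return activity
}
```

The same activity object would be handed back to the app in `application(_:continue:restorationHandler:)` when the user invokes the shortcut, so the payload keys need to round-trip.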
- The ability to "donate" or suggest Siri actions or shortcuts based on anonymized user activity. This is based on the NSUserActivity framework, which we already participate in, and would build on registering the actions defined in the previous bullet.
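Donation via NSUserActivity is largely a matter of making an eligible activity "current" while the user is doing the thing, typically by attaching it to the visible view controller. A minimal sketch, with a hypothetical activity type and class name:

```swift
import UIKit

final class ArticleViewController: UIViewController {
    var articleTitle: String = ""

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Hypothetical activity type; would be listed in Info.plist under
        // NSUserActivityTypes.
        let activity = NSUserActivity(activityType: "org.wikimedia.wikipedia.view-article")
        activity.title = "Read \(articleTitle)"
        activity.isEligibleForPrediction = true
        activity.userInfo = ["title": articleTitle]
        // Assigning the activity to the view controller and making it current
        // is what "donates" it to the system, so Siri can learn the pattern
        // and suggest it later.
        userActivity = activity
        userActivity?.becomeCurrent()
    }
}
```

Since the donation reflects on-device user behavior, this is the piece where we would want to confirm what (if anything) leaves the device, given our privacy commitments.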
- The ability for Siri to respond to multiple subsequent requests in a context-aware manner. This might be something like "look up Flagstaff on Wikipedia" followed by "open this article for editing", where Siri would know "this article" means Flagstaff and open it for editing. This case is the most complex, and I think we should aim to support basic actions first.
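For what it's worth, the platform hook for this kind of follow-up is SiriKit parameter resolution on a custom intent. A sketch, assuming a hypothetical "open article" intent with one string parameter (the real intent class would be generated by Xcode from an intent definition file; the stub below just stands in for it):

```swift
import Intents

// Stand-in for the class Xcode would generate from a custom intent
// definition file (assumption: an intent with a single articleTitle
// string parameter).
final class OpenArticleIntent: INIntent {
    var articleTitle: String?
}

final class OpenArticleIntentHandler: NSObject {
    // Resolution is what lets Siri carry context across turns: if a prior
    // request established which article "this article" is, the parameter
    // arrives filled in; otherwise we ask Siri to prompt for it.
    func resolveArticleTitle(for intent: OpenArticleIntent,
                             with completion: @escaping (INStringResolutionResult) -> Void) {
        if let title = intent.articleTitle, !title.isEmpty {
            completion(.success(with: title))
        } else {
            completion(.needsValue())
        }
    }
}
```

Even with this machinery, true cross-request context is the hard part, which supports starting with the simpler shortcut stories first.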
Again, for all of these there is a mass-market convenience factor and an accessibility factor. On balance I believe we should focus on any identifiable accessibility wins first, and then move on to nice-to-have voice control conveniences.