Hypothesis
If we make article topic inference data available via a service that meets agreed-upon scalability and availability requirements, plus any necessary data backfills, then we will have established the technical foundation necessary to support upcoming personalized reader experiences that depend on this data.
Scoping details
- Problem: We want to access article topic scores at scale, for each page view on the mobile apps, so that this information can be tracked; however, there is currently no scalable way to retrieve it from LiftWing.
- [Optional] Possible solutions:
- Add caching to LiftWing to reduce server load.
- Establish a way forward for using CirrusDoc (or another search-based alternative) as a general-purpose key-value store.
- Make the data lake queryable from external API calls and retrieve the output from there.
- Enabled projects: Which specific user-facing features or experiments would be unblocked or meaningfully enabled (in terms of development ease, velocity, etc.) by solving this problem? Which teams are launching these features or experiments?
- Year-in-review on iOS and Android
- Urgency and importance: When are these features or experiments expected to launch? How essential is this infrastructure for unblocking development?
- Year-in-review is expected to go out to users in Q2; the later the data becomes available, the more data will need to be backfilled.
- [Optional] Notes: Is there anything else you'd like to share?
- Being able to access model outputs at scale would likely unlock additional use cases for LiftWing for the mobile apps and other teams in the long term.
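To make the caching option above concrete, here is a minimal sketch of a TTL cache placed in front of a topic-score lookup. The `fetch` callable, the revision-ID key, and the returned score shape are all assumptions for illustration; they stand in for whatever LiftWing request the service would actually make, and a production deployment would use a shared store (e.g. Memcached or Redis) rather than per-process memory.

```python
import time
from typing import Callable, Dict, Tuple

class TTLCache:
    """Minimal in-memory TTL cache for model outputs, keyed by revision ID.

    A sketch only: `fetch` stands in for a (hypothetical) LiftWing call,
    and real deployments would share the cache across processes.
    """

    def __init__(self, fetch: Callable[[int], dict], ttl_seconds: float = 3600.0):
        self._fetch = fetch          # hypothetical upstream call (e.g. LiftWing)
        self._ttl = ttl_seconds
        self._store: Dict[int, Tuple[float, dict]] = {}
        self.misses = 0              # counts upstream requests actually made

    def topic_scores(self, rev_id: int) -> dict:
        now = time.monotonic()
        hit = self._store.get(rev_id)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]            # cache hit: no load on the model server
        self.misses += 1
        scores = self._fetch(rev_id)  # cache miss: one upstream request
        self._store[rev_id] = (now, scores)
        return scores

# Example usage with a stubbed fetch function:
cache = TTLCache(fetch=lambda rev: {"History": 0.93}, ttl_seconds=60.0)
cache.topic_scores(123)
cache.topic_scores(123)  # served from cache; cache.misses stays at 1
```

Because topic scores change only when an article is edited, even a short TTL would absorb the bulk of repeated page-view lookups without serving stale data for long.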
Reporting format
Summary of progress:
Next steps: