User Details
- User Since: Jun 25 2020, 6:43 PM (104 w, 3 d)
- Availability: Available
- IRC Nick: chrisalbon
- LDAP User: Calbon
- MediaWiki User: Unknown
Wed, Jun 8
Approved!
Yeah, it's a good point. I'm going to assign myself to this task.
May 18 2022
I approve
May 11 2022
We've created a ticket on our team's board to explore how we might migrate the modeling functionality of the application to Lift Wing (T308165).
May 4 2022
Thanks for highlighting this, Strainu!
Mar 23 2022
These are good points, but we should definitely keep in mind that if folks want to hit the inference/prediction API a lot, one option is to take advantage of Lift Wing being on Kubernetes and buy more boxes.
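For context, hitting the prediction API from a client is just an HTTP POST per revision. Below is a minimal sketch, assuming the public Lift Wing endpoint shape and an illustrative model name; both are assumptions for illustration, not details taken from this thread.

```
# Minimal sketch: requesting a score from a Lift Wing model server over HTTP.
# The endpoint shape and the example model name are assumptions for illustration.
import requests

LIFT_WING_URL = "https://api.wikimedia.org/service/lw/inference/v1/models/{model}:predict"

def get_prediction(model: str, rev_id: int) -> dict:
    """POST a revision ID to a Lift Wing model server and return its JSON response."""
    response = requests.post(
        LIFT_WING_URL.format(model=model),
        json={"rev_id": rev_id},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example: score a single revision with a damage-detection model.
# print(get_prediction("enwiki-damaging", 1083173665))
```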
Mar 22 2022
Awesome, thanks for this. I'll loop in the growth team.
Mar 21 2022
The Product Dept reached out to me about enabling ORES in Hindi Wikipedia's Recent Changes. They have two issues they need resolved:
Might need a quick review to see if we deployed it.
Feb 14 2022
This is for a new hire on my team.
Jan 19 2022
@Htriedman let's talk about this at our next meeting
Jan 18 2022
Some thoughts about this:
Jan 12 2022
Thanks for this, Luca. I thought about it yesterday and came to a similar conclusion (score cache >> online feature store), but for slightly different reasons. Mainly, what I think you have exposed well is that a feature store (online or offline) is a big project with lots of unknowns that would require lots of experimentation. My worry is that if Revscoring models are currently too slow without either a feature store or a score cache, then taking on a big, complicated, never-tried-before solution (a feature store) rather than a well-understood one (a score cache) seems like a bad decision.
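To make the trade-off concrete, here is a minimal sketch of the score-cache idea: store finished model outputs keyed by model name, model version, and revision ID, so the expensive feature extraction and scoring run at most once per revision. The ScoreCache class, the score_fn callable, and the in-memory dict backend are illustrative assumptions, not the actual ORES/Lift Wing implementation.

```
# Minimal sketch of a score cache: cache final model outputs keyed by
# (model name, model version, revision ID), so repeated requests for the
# same revision never re-extract features or re-run the model.
# score_fn and the in-memory dict backend are illustrative stand-ins.
from typing import Callable, Dict, Tuple

CacheKey = Tuple[str, str, int]  # (model_name, model_version, rev_id)

class ScoreCache:
    def __init__(self, score_fn: Callable[[str, int], dict], model_version: str):
        self._score_fn = score_fn          # expensive: feature extraction + model inference
        self._model_version = model_version
        self._store: Dict[CacheKey, dict] = {}

    def get_score(self, model_name: str, rev_id: int) -> dict:
        key = (model_name, self._model_version, rev_id)
        if key not in self._store:
            # Cache miss: pay the full feature-extraction + scoring cost once.
            self._store[key] = self._score_fn(model_name, rev_id)
        return self._store[key]
```

Because revisions are immutable, a cached score only needs invalidating when the model version changes (which is why the version is part of the key); unlike a feature store, there is no new storage schema or feature-freshness problem to solve, which is what makes the score cache the well-understood option.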