This is an idea that came up at the 2017 Developer Summit:
- Define how we can use implicit user feedback to assess whether the user got the best result
- Then, how can we use that feedback to actually make the results more relevant?
- Or re-rank those results in some manner
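As a starting point for the "implicit feedback" idea, here is a minimal sketch (not an existing implementation; function and variable names are made up for illustration) of the "Click > Skip Above" heuristic from the first Joachims paper linked below: a click on a result suggests it is more relevant than any non-clicked result ranked above it, which yields pairwise training data for a ranker.

```python
def preferences_from_clicks(ranking, clicked):
    """Extract pairwise relevance preferences from one search session.

    ranking: list of result IDs in the order they were shown.
    clicked: set of result IDs the user clicked.

    A click at rank i suggests that result is preferred over every
    non-clicked result shown above it (Joachims' "Click > Skip Above").
    Returns a list of (preferred, less_preferred) pairs.
    """
    prefs = []
    for i, doc in enumerate(ranking):
        if doc in clicked:
            for above in ranking[:i]:
                if above not in clicked:
                    prefs.append((doc, above))  # doc preferred over `above`
    return prefs

# Example: the user clicked the 1st and 3rd of four results.
print(preferences_from_clicks(["a", "b", "c", "d"], {"a", "c"}))
# → [('c', 'b')]  (c was clicked while b, ranked above it, was skipped)
```

Pairs like these could then feed a pairwise learning-to-rank method (e.g. the ranking-SVM approach in the same paper) to re-rank future results.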
Things that might help us build this:
- http://www.cs.cornell.edu/people/tj/publications/joachims_02c.pdf
- https://www.cs.cornell.edu/people/tj/publications/joachims_etal_05a.pdf
- https://www.cs.cornell.edu/people/tj/publications/radlinski_etal_08b.pdf
- https://commons.wikimedia.org/wiki/File:20161119_Key_note_Maarten_de_Rijke_WCN_2016.pdf
- The slides are in Dutch, but I (@Basvb) could translate the relevant points on request; they include an invitation to TREC Open Search
- This was a talk by prof. Maarten de Rijke; the last parts are the relevant ones. He argues that search could be improved substantially, proposes some competition-style formats, and points out TREC Open Search.
Related tasks: