Create or revive a dashboard for the Search Platform team (formerly Discovery) that includes the metrics below, along with the question each one is meant to answer (a rough sketch of how a few of these could be computed follows the list):
1. User engagement (= number of queries with a dwell time > 10 seconds).
** Are users finding full text search results relevant/useful?
2. Number and percentage of searches on Desktop, Mobile Web, Android, and iOS
** Where are searches coming from?
3. Number and percentage of full-text, "Go" box, morelike, and autocomplete searches
** What types of searches are being used?
4. Number and percentage of abandoned sessions.
** How many users are so unsatisfied with the search results that they leave?
5. Number and percentage of queries with "did you mean" suggestions.
** How well are we accommodating typos and other unintentional orthographic issues?
6. Number and percentage of "did you mean" suggestions clicked on.
** Are we doing a good job correcting for typos and other unintentional orthographic issues and suggesting relevant results?
7. Number of WDQS timeouts
** How often is Wikidata Query Service failing to return results for a user's query?
8. Number of requests to WDQS, Linked Data Fragments, dumps, and MediaWiki APIs
** What services are people using to get data from Wikidata?
9. Top queries and top keywords
** Are there any common query patterns worth doing anything about?
10. Number and percentage of zero results
** What happened (recently) that may have drastically affected how many queries are returning zero results?
11. Top returned documents (articles) and top clicked-through documents
** Are there patterns in specific search results that are worth doing anything about?
We have excluded any metrics that would require building something new, e.g. a "smiley face" search satisfaction survey.
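For illustration only, here is a minimal sketch of how metrics 1, 2, and 10 might be aggregated, assuming a per-query event table with hypothetical `platform`, `dwell_time`, and `n_results` columns. The column names and sample data are placeholders, not a description of any actual event schema.

```python
# Hypothetical sketch: counts and percentages for metrics 1, 2, and 10
# from a per-query event table. Column names are illustrative only.
import pandas as pd

events = pd.DataFrame({
    "query_id":   [1, 2, 3, 4, 5, 6],
    "platform":   ["desktop", "mobile web", "android", "desktop", "ios", "desktop"],
    "dwell_time": [14.2, 3.1, 0.0, 22.8, 7.5, 0.0],  # seconds spent on a clicked result
    "n_results":  [20, 5, 0, 13, 8, 0],
})

# Metric 1: engaged queries = queries whose clicked result had dwell time > 10 s.
engaged = (events["dwell_time"] > 10).sum()

# Metric 2: searches by platform, as counts and percentages.
by_platform = events["platform"].value_counts()
by_platform_pct = by_platform / len(events) * 100

# Metric 10: zero-result queries, count and percentage.
zero_results = (events["n_results"] == 0).sum()
zero_results_pct = zero_results / len(events) * 100

print(f"engaged queries: {engaged}")
print(by_platform_pct.round(1))
print(f"zero-result rate: {zero_results_pct:.1f}%")
```

The other count-and-percentage metrics (abandoned sessions, "did you mean" impressions and clicks, WDQS and API request counts) would presumably follow the same group-count-and-divide pattern over their respective event streams.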
**Meta-requests**
Search Platform currently lacks analyst support in two major ways:
1. Technical. Legacy bespoke dashboards were built in Shiny and R, and the team currently lacks the resources and technical expertise to maintain this code itself.
2. Data expertise. The team can do rudimentary data analysis, but lacks the expertise to validate statistical significance, confirm that we are capturing the right signals to test our hypotheses, and so on.
Even if a dashboard were built for us, it risks growing stale and obsolete without someone to actively maintain and tune it, which we cannot do on our own. To ensure long-term value and avoid the wasted effort of constantly (re)building dashboards, it would be ideal to have access to resources that help us maintain our metrics over the long term.