
[Analytics] Extract a representative sample of SPARQL queries from the query logs
Closed, Duplicate · Public

Description

Purpose

In the context of the WDQS Graph Splitting initiative, we need to understand the consequences of selected splits. As a basis for these evaluations, we need a set of representative queries.

Scope

The goal of this task is to extract a representative sample of SPARQL queries from the Blazegraph query logs.

Option A: The query set should be representative of the following characteristics:

OR

Option B: We create subsets of queries, each representative of a different type of query (see the sampling sketch after this list):

  • Query size
  • Query time
  • Status code (HTTP return status) [not in the table!]
  • User agent (mentioned in the call)
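
A minimal PySpark sketch of what option B's stratified sampling could look like. The source table name event.wdqs_external_sparql_query and all column names are assumptions about the log schema, not confirmed values:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("wdqs-query-sample").getOrCreate()

# Assumed source table and column names; adjust to the actual log schema.
logs = spark.read.table("event.wdqs_external_sparql_query")

# Derive the stratification characteristics listed above.
strata = logs.select(
    "query",
    F.length("query").alias("query_size"),  # query size in characters
    "query_time",                           # server-side execution time
    "http_status",                          # HTTP return status
    "user_agent",
)

# Stratified sampling by HTTP status as one example; the same pattern
# extends to the other characteristics.
fractions = {200: 0.001, 500: 0.01}
sample = strata.sampleBy("http_status", fractions=fractions, seed=42)
```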

Notes

Open questions

  • Confirmation of goal and scope
    • Are you interested in a representative set of queries, in different subsets of queries that follow different characteristics, or in what AKhatun did?
    • What subsets of queries would you be most interested in?
  • What is the data source?
  • What time frame should we look into?
  • What sample size were you looking for?
  • What output format would you prefer?
  • What is the urgency of this task?

Desired output

The output is expected to be a Hive table with two columns:

query: the SPARQL query in plain text
provenance: a code identifying the provenance (source) of the query
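
Continuing the sampling sketch above, materializing that output could look as follows. The provenance code "wdqs_blazegraph_logs" and the target table wmde_analytics.sparql_query_sample are hypothetical placeholders:

```python
from pyspark.sql import functions as F

# 'sample' is the DataFrame of selected queries from the sketch above;
# "wdqs_blazegraph_logs" is a hypothetical provenance code for queries
# taken from the Blazegraph logs.
output = sample.select(
    F.col("query"),
    F.lit("wdqs_blazegraph_logs").alias("provenance"),
)

# The target database and table name are assumptions; adjust as needed.
output.write.mode("overwrite").saveAsTable("wmde_analytics.sparql_query_sample")
```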

Urgency

The date by which this task should be completed. If this task is time sensitive, please make this clear. If the output will be used for a specific meeting or event, please also provide that date.

DD.MM.YYYY


Information below this point is filled out by the Wikidata Analytics team.

General Planning

Information is filled out by the analytics product manager.

Assignee Planning

Information is filled out by the assignee of this task.

Estimation

Estimate:
Actual:

Sub Tasks

Full breakdown of the steps to complete this task:

  • subtask

Data to be used

See Analytics/Data_Lake for the breakdown of the data lake databases and tables.

The following tables will be referenced in this task:

  • link_to_table

Notes and Questions

Things that came up during the completion of this task, questions to be answered and follow up tasks:

  • What is the metadata that defines the sample that we want?
    • How big of a sample? Is this supposed to be determined by accuracy metrics?
      • We don't need the sample to be exactly representative, but the various kinds of queries should at least be represented.
    • Are we using the same breakdowns as AKhatun did? (See the binning sketch after this list.)
      • Query size: this was not binned, but could be
      • Query time: < 10 ms, 10-100 ms, 100 ms-1 s, 1-10 s, > 10 s
      • Status code (HTTP return status): 200 or 500
      • User agent: modern or old version
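
Reusing the strata DataFrame from the sketch above, one way to derive AKhatun's query-time buckets. The query_time column name and the millisecond unit are assumptions:

```python
from pyspark.sql import functions as F

# Bin query_time (assumed to be in milliseconds) into the buckets above.
time_bucket = (
    F.when(F.col("query_time") < 10, "<10ms")
    .when(F.col("query_time") < 100, "10-100ms")
    .when(F.col("query_time") < 1000, "100ms-1s")
    .when(F.col("query_time") < 10000, "1-10s")
    .otherwise(">10s")
)

binned = strata.withColumn("time_bucket", time_bucket)

# Inspect the distribution before deciding per-bucket sample sizes.
binned.groupBy("time_bucket").count().show()
```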

Event Timeline

Hi @AndrewTavis_WMDE, could you please take a first look at AKhatun's notebooks before our meeting with Guillaume?

I'm really not sure why I can't assign both Wikidata Analytics and Wikidata Analytics Kanban, @Manuel. Did you change the settings?