=== Purpose ===
In the context of the WDQS Graph Splitting initiative, we need to understand the consequences of candidate splits. As a basis for these evaluations we need a set of representative queries.
=== Scope ===
The goal of this task is to extract a representative sample of SPARQL queries from the Blazegraph query logs.
The query set should be representative of the following characteristics:
* Query size
* Query time
* Status code (http return status)
=== Notes ===
* https://wikitech.wikimedia.org/wiki/User:AKhatun/Wikidata_Subgraph_Query_Analysis
** Would be nice to get the notebooks that produced these results, if possible
=== Open questions ===
* Confirmation of goal and scope
* What is the data source?
* What time frame should we look into?
* What sample size are you looking for?
* What output format would you prefer?
* What is the urgency of this task?
=== Desired output ===
Description of the desired output for this task.
> replace with the desired output
=== Urgency ===
When this task should be completed by. If this task is time sensitive then please make this clear. Please also provide the date when the output will be used if there is a specific meeting or event, for example.
DD.MM.YYYY
---
**Information below this point is filled out by the Wikidata Analytics team.**
== General Planning ==
Information is filled out by the analytics product manager.
== Assignee Planning ==
Information is filled out by the assignee of this task.
=== Estimation ===
Estimate:
Actual:
=== Sub Tasks ===
Full breakdown of the steps to complete this task:
[ ] subtask
=== Data to be used ===
See [Analytics/Data_Lake](https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake) for the breakdown of the data lake databases and tables.
The following tables will be referenced in this task:
- link_to_table
=== Notes and Questions ===
Things that came up during the completion of this task, questions to be answered, and follow-up tasks:
- What is the metadata that defines this sample that we want?
- How big of a sample? Is this supposed to be determined by accuracy metrics?
- Are we using the same breakdowns as AKhatun did?
- Query size: this was not binned, but could be
- Query time: < 10ms, 10-100ms, 100ms - 1s, 1-10s, > 10s
- Status code (http return status): 200 or 500
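The breakdown above amounts to a stratified sample: bin each logged query by time bucket and status code, then draw a fixed number of queries per stratum. A minimal sketch of that approach, assuming illustrative field names (`query`, `time_ms`, `status`) for the log records, since the actual schema of the Blazegraph query logs is still an open question above:

```python
import random
from collections import defaultdict

# Illustrative log records; the real data would come from the query logs
# in the Data Lake, and these field names are assumptions.
logs = [
    {"query": "SELECT * WHERE { ?s ?p ?o }", "time_ms": 5, "status": 200},
    {"query": "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10", "time_ms": 250, "status": 200},
    {"query": "ASK { ?s ?p ?o }", "time_ms": 15000, "status": 500},
]

def time_bucket(ms):
    """Bin query time using the breakdown listed above."""
    if ms < 10:
        return "<10ms"
    if ms < 100:
        return "10-100ms"
    if ms < 1000:
        return "100ms-1s"
    if ms < 10000:
        return "1-10s"
    return ">10s"

def stratum(rec):
    # One stratum per (time bucket, status code) pair; query size is
    # left unbinned here, matching the note above.
    return (time_bucket(rec["time_ms"]), rec["status"])

def stratified_sample(records, per_stratum, seed=42):
    """Draw up to per_stratum queries from each stratum."""
    random.seed(seed)
    groups = defaultdict(list)
    for rec in records:
        groups[stratum(rec)].append(rec)
    sample = []
    for recs in groups.values():
        sample.extend(random.sample(recs, min(per_stratum, len(recs))))
    return sample
```

Whether strata should be sampled uniformly or proportionally to their volume is part of the open sample-size question above.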