
Examine wikistats reports, make a summary of the most granular data needed that would serve all reports
Closed, Resolved · Public · 8 Story Points

Description

Tons of documentation on this task

Event Timeline

Nuria created this task. Apr 4 2016, 5:31 PM
Nuria added a comment. Edited Apr 18 2016, 5:09 PM

Notes on this so far: https://etherpad.wikimedia.org/p/wikistats-edits

We are leaning towards loading data from MySQL first, which might mean we need to load data using MediaWiki.

Open questions (that will shape our tasks going forward)

0. What data are we loading in the first instance?
Are we loading only the data needed for the verticals? https://phabricator.wikimedia.org/T131779

Are we loading more data so we can optimize our data extraction?

  1. Do we need a MediaWiki client to load data from the DB? If so, estimate the work.
  2. Estimate the work of loading the data we need from dumps/HDFS, using Joseph's work on Altiscale as a basis.
JAllemandou moved this task from Next Up to In Progress on the Analytics-Kanban board.
JAllemandou changed the point value for this task from 0 to 8. Apr 21 2016, 11:28 AM

After reviewing charts and talking with @ezachte, here is a schema-like view of the data that would be needed to replicate most of the useful charts (a rough code sketch of these entities follows the lists below).
All of the information discussed here is easily accessible from dumps, except for archived and historical changes (page title changes or contributor rights changes, for instance) and categories, for which template expansion would be needed to be correct.

  • wiki
    • name
    • creation_date [date of earliest edit]
    • language
    • Regions associated (see Other facts; this data is currently hard-coded in Perl, with some HTML scraping to retrieve numbers)
    • project-class [wikipedia, wikitravel, wiktionary ..., otherwikis]
  • namespace
    • name
    • is_countable (mostly namespace 0, but for some wikis the community decided that some other namespaces should be counted in the articles definition. This list can be retrieved using an API call)
  • page
    • wiki
    • namespace
    • creation_date [date of 1st revision]
    • creator [contrib of 1st revision]
    • title
    • redirect
    • restrictions [Present in dumps, NOT USED in wikistats]
  • contrib
    • engagement_date [date 1st edit]
    • is_bot
    • name
    • rights [Access: A=Admin, B=Bureaucrat, C=Checkuser, D=Developer, O=Oversight, X=bot]
  • edit
    • page
    • contrib
    • edit_date
    • revert_info [can either be the reverted_revision in case of sha1 equality, or a flag set if 'revert' is present in the comment (not sure this last one should be continued because of cross-language issues)]
    • count of links (internal, other wikis (really needed now that Wikidata handles cross-wiki links?), binaries, external) [this one is tricky to get using dumps: it implies parsing]
    • minor? [present in dumps, NOT USED in wikistats]
    • model?? [present in dumps, NOT USED in wikistats]
    • format?? [present in dumps, NOT USED in wikistats]
    • parent_id [present in dumps, NOT USED in wikistats]
    • ------------To Be Discussed----------------
      • bytes -- Easy to get, good proxy for article size without having the issues of counting words or chars
      • chars without wikitext -- Better measure of article size, but tricky in the case of languages where characters and words overlap (Japanese for instance)
      • words without wikitext -- Same, with the even more difficult job of splitting sentences.
      • --------------------------------

About categories: Erik and I agreed that categories, while being very interesting, could be a project in itself.

- categories in edits (count and possibly list) [categories inserted by templates are an issue]

  • category
    • name
    • creation_date
    • parent
  • Other facts
    • Region
    • name
    • estimated population
    • Languages associated + estimated number of speakers
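To make the fields and relations above concrete, here is a minimal sketch of the same entities as Perl data structures. Table and field names are illustrative only, not the actual MediaWiki or Wikistats schema.

  # A rough, hypothetical sketch of the entities above as Perl data structures,
  # just to make fields and relations explicit. Names are illustrative only,
  # not the actual MediaWiki or Wikistats schema.
  use strict;
  use warnings;

  my %schema = (
      wiki      => [qw(name creation_date language regions project_class)],
      namespace => [qw(wiki name is_countable)],
      page      => [qw(wiki namespace creation_date creator title redirect restrictions)],
      contrib   => [qw(name engagement_date is_bot rights)],
      edit      => [qw(page contrib edit_date revert_info link_counts minor model format parent_id bytes)],
  );

  # Every report would then be an aggregation over 'edit', joined to the
  # dimension-like tables above (e.g. monthly edit counts per wiki and user type).
  for my $table (sort keys %schema) {
      printf "%-10s %s\n", $table, join(', ', @{ $schema{$table} });
  }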
ezachte added a comment. Edited Apr 24 2016, 9:42 AM

Countable namespaces

Content (=countable) namespaces are collected daily via an API call [1], using a Perl file [2] and a bash file [3], and written to CSV files [4] [5].

Namespaces carrying the 'content' attribute qualify.

The perl script adds a few that are not in the API but historically deemed countable, in particular ns 6 for Commons, which signals binary uploads, the most important events on Commons.

[1] https://en.wikisource.org//w/api.php?action=query&meta=siteinfo&siprop=namespaces
[2] https://github.com/wikimedia/analytics-wikistats/blob/master/dumps/perl/WikiCountsScanNamespacesWithContent.pl
[3] https://github.com/wikimedia/analytics-wikistats/blob/master/dumps/bash/collect_countable_namespaces.sh
[4]


[5]

In the past an issue was that new namespaces were invented and articles were moved from ns0 to the new namespace; only much later, after a community vote, was that new namespace officially deemed a content namespace, and even later some config file was updated and the API informed us. Until that happened, the article count could be seen to fall in Wikistats.

A good example (one of few) where rebuilding all data every month proved helpful.
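For illustration, a minimal sketch of that siteinfo API call (not the actual WikiCountsScanNamespacesWithContent.pl script; the wiki is just an example):

  #!/usr/bin/perl
  # Minimal sketch of the siteinfo call described above -- not the actual
  # WikiCountsScanNamespacesWithContent.pl. Fetching https URLs needs
  # LWP::Protocol::https installed.
  use strict;
  use warnings;
  use LWP::Simple qw(get);
  use JSON qw(from_json);

  my $wiki = 'en.wikisource.org';
  my $url  = "https://$wiki/w/api.php?action=query&meta=siteinfo&siprop=namespaces&format=json";

  my $json = get($url) or die "API call failed for $wiki\n";
  my $namespaces = from_json($json)->{query}{namespaces};

  for my $ns (sort { $a <=> $b } keys %$namespaces) {
      # In this API response, content (countable) namespaces carry a 'content' attribute.
      print "$ns\t$namespaces->{$ns}{'*'}\n" if exists $namespaces->{$ns}{content};
  }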

ezachte added a comment. Edited Apr 24 2016, 10:16 AM

Bytes, chars, words

As for counting bytes vs chars vs words, here are some considerations.

Bytes
Surely counting bytes is easy and cheap (which explains its popularity), and Wikistats reports these in e.g. [1].

Characters
Character count is more costly to collect (in fact much more costly with Wikistats' very strict regexp; a good-enough lighter version of that regexp might be advisable).

English is an exception in that most texts only contain 1-byte characters, so for English speakers byte count is a good proxy. Less so for French, Swedish and German, and much less so for ideographic languages [2].

English 'telephone' and French 'téléphone' are words of equal size in characters, but not in bytes. Arguably, comparisons of text volume between languages are fairer in chars than in bytes.

Words
Word count is really tricky, and may be too ambitious. But this seems to be the default unit for comparing text volumes, in particular for encyclopedias. [3]

Wikistats went as far as guesstimating conversion ratios for text size 'normalization' by comparing official translations of the US Constitution in English, Japanese, Chinese and some other ideographic languages, and using this (language-dependent) ratio to calculate a 'normalized' word count. Yet only a few ideographic languages underwent this treatment, and its validity can be questioned.

[1] https://stats.wikimedia.org/EN/TablesWikipediaEN.htm#distribution
(the kind of HTML table that few people will use, but it can be helpful to detect anomalies, e.g. when the average size of an article drops dramatically)
[2] https://en.wikipedia.org/wiki/List_of_writing_systems
[3] https://en.wikipedia.org/wiki/Wikipedia:Size_comparisons
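A quick sketch of the byte vs. character gap using the telephone/téléphone example above (illustration only; real counting would first strip wikitext):

  # Quick illustration of the byte vs. character gap from the example above.
  # A sketch only: real counting would first strip wikitext.
  use strict;
  use warnings;
  use utf8;
  use Encode qw(encode_utf8);

  binmode STDOUT, ':utf8';
  for my $word ('telephone', 'téléphone') {
      my $chars = length($word);                # characters (source literal is utf8-decoded)
      my $bytes = length(encode_utf8($word));   # bytes of the UTF-8 encoding
      print "$word: $chars chars, $bytes bytes\n";
  }
  # telephone: 9 chars, 9 bytes
  # téléphone: 9 chars, 11 bytes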

Regional codes and number of speakers per language.

I said indeed this was HTML-scraped from Wikipedia, but I confused this with the traffic reports, where some demographic data are indeed HTML-scraped from Wikipedia (population count, number of internet connections). Those are stored as CSV and used e.g. in [1].

For dump-based reports, region codes and number of speakers are taken manually from language articles in English Wikipedia (#speakers includes secondary speakers where an estimation is available, hence bypassing the page where all languages are listed with native speakers only). And these are indeed stored in a Perl file. [2] I will export this to CSV when the need arises.

[1] https://stats.wikimedia.org/wikimedia/squids/SquidReportPageViewsPerCountryOverview.htm
[2] https://github.com/wikimedia/analytics-wikistats/blob/master/dumps/perl/WikiReportsLiterals.pm

Anons

Wikistats groups editors, edits and page creations by user type: registered (and logged-in) user, anonymous user, or bot.

Often the XML dump tells which edits have been done by an unregistered or logged-out user, by adding an <ip> tag.

But in early years this wasn't always the case, and IP addresses weren't always a series of numeric triplets (but instead e.g. [username]@comcast.net). Hence Wikistats vets the user name and, if it contains two or more dots, treats it as anon, with a few false positives taken for granted (and a handful of names excluded explicitly when users wrote me about these being registered names).
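A hypothetical sketch of that vetting rule (the real code also applies the explicit exception list mentioned above; the sample names are made up):

  # Hypothetical sketch of the 'two or more dots means anon' vetting described
  # above; the real code also applies an explicit exception list.
  use strict;
  use warnings;

  sub looks_like_anon {
      my ($user) = @_;
      my $dots = () = $user =~ /\./g;   # count dots in the user name
      return $dots >= 2;
  }

  for my $user ('192.168.0.1', 'john.doe@comcast.net', 'J.R.R. Tolkien fan', 'RegularUser') {
      printf "%-22s %s\n", $user, looks_like_anon($user) ? 'anon' : 'registered';
  }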

ezachte added a comment. Edited Apr 24 2016, 11:08 AM

Bots

Note that many users like puns or letter games and make up a nick with 'bot' in it just because they can.

Recap on how Wikistats detects bots:

  1. Is the name registered as a bot, in other words is there a bot flag in the user group table?
  2. Does it sound like a bot? (nowadays, on many wikis, such names are only allowed for actual bots)
  3. Is it known to be an unregistered bot? (Wikipedia has a list of false negatives at http://en.wikipedia.org/wiki/Wikipedia:List_of_Wikipedians_by_number_of_edits/Unflagged_bots ) I copied that list long ago but do not keep it auto-updated.
  4. Is the name flagged as a bot on at least 10 wikis? Then treat it as a bot on any wiki within the project (in the past, when user names could easily collide, this was more relevant). The basic rationale is that on smaller wikis bot registrations are often forgotten. With SUL it is unlikely that people use the same name as a bot on one wiki and as a regular user on another wiki.
  5. Three names that sound like a bot are hard-coded exceptions (people who wrote me to tell me they are human): Paucabot|Niabot|Marbot

Wikistats is certainly more restrictive in 'does it sound like a bot' than what I saw elsewhere.

Perl: if (($user =~ /bot\b/i) || ($user =~ /_bot_/i))
Meaning a name sounds like a bot for Wikistats only where

  • 'bot' is at the end of the string or is followed by a non-alphanumeric char
  • or 'bot' is preceded and followed by underscores (in MediaWiki often a placeholder for spaces)
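Here is that check as a small runnable sketch, including the three hard-coded human exceptions from the list above (the sample names are made up for illustration):

  # Runnable sketch of the check above, including the three hard-coded human
  # exceptions; the sample names are made up for illustration.
  use strict;
  use warnings;

  my %human_exceptions = map { $_ => 1 } qw(Paucabot Niabot Marbot);

  sub sounds_like_bot {
      my ($user) = @_;
      return 0 if $human_exceptions{$user};
      return (($user =~ /bot\b/i) || ($user =~ /_bot_/i)) ? 1 : 0;
  }

  for my $user ('SpellBot', 'Robotnik', 'My_bot_account', 'Niabot', 'Abbot of Unreason') {
      printf "%-20s %s\n", $user, sounds_like_bot($user) ? 'bot-like' : 'not bot-like';
  }
  # SpellBot           bot-like      ('bot' at end of string)
  # Robotnik           not bot-like  ('bot' followed by alphanumerics)
  # My_bot_account     bot-like      ('_bot_')
  # Niabot             not bot-like  (hard-coded human exception)
  # Abbot of Unreason  bot-like      (false positive: 'bot' followed by a space)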

(it would be interesting (but too much work right now) to break this down by language. I guess some languages are more prone to have 'bot' in real names than others.)

From a 2014 mail: 7453 / 21589 names with 'bot' in it (35%) are *perhaps* not a bot.

BTW this is an example where complete rebuild of stats is tricky, as bot flags can disappear.

Reverts

Quote: "revert_info [can either be reverted_revision in case of sha1 equality, or if revert is present in comment (Not sure this - last one should be continued because of cross-language issues) ]"

Does 'cross-language issues' mean that the 'REV' acronym in edit comments could be spelled differently in other languages? If so, yes, that's an issue, but leaving these out would under-report reverts on English Wp [1] by 13%, on German Wp [2] by 22%, and on Dutch Wp [3] by 20%. I'd rather see the list of likely acronyms per language extended (community-curated input file?).

[1] https://stats.wikimedia.org/EN/EditsRevertsEN.htm
[2] https://stats.wikimedia.org/EN/EditsRevertsDE.htm
[3] https://stats.wikimedia.org/EN/EditsRevertsNL.htm
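For the sha1-equality part of revert_info, here is a minimal sketch of how reverts could be detected while scanning a page's history in order. Field names and sample values are illustrative; in the dumps the hash comes from the <sha1> tag. Comment-based detection would complement this, ideally with the per-language acronym list suggested above.

  # Sketch of the sha1-equality part of revert_info: an edit is a revert if it
  # restores a content hash already seen earlier in the same page's history.
  use strict;
  use warnings;

  # revisions of one page, in chronological order: [revision_id, sha1]
  my @revisions = (
      [101, 'aaa'],
      [102, 'bbb'],   # someone changes the page
      [103, 'aaa'],   # identical to revision 101 => revert
      [104, 'ccc'],
  );

  my %first_rev_with_sha1;
  for my $rev (@revisions) {
      my ($rev_id, $sha1) = @$rev;
      if (exists $first_rev_with_sha1{$sha1}) {
          print "revision $rev_id reverts to revision $first_rev_with_sha1{$sha1}\n";
      } else {
          $first_rev_with_sha1{$sha1} = $rev_id;
      }
  }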

ezachte added a comment. Edited Apr 24 2016, 11:29 AM

Count of links

Counts of links are seldom mentioned anywhere. This is also susceptible to skewing as many internal links occur in templates (which Wikistats doesn't parse).

If anything I would favor external links only, but there might be a better way to collect these than via the full archive dump (namely the dump [somewiki]-[yyyymmdd]-langlinks.sql.gz).

Quote: "Counts of links are seldom mentioned anywhere."

I usually see mentions in mailing lists and Meta-Wiki pages comparing various wikis.

Nuria closed this task as Resolved. Jul 4 2016, 4:52 PM
Nuria updated the task description.
Akeron added a subscriber: Akeron. Jul 4 2016, 11:53 PM