
improve logging
Closed, Resolved · Public

Description

As was raised several times during Wikimania 2016, the current logs are huge* and essentially useless, since we add all kinds of info to them, making it hard to find the actual errors.

The suggestions were to either:

  • Rebuild the logging structure using something more advanced (Sentry and X were mentioned)
  • Use the levels implemented in pywikibot.logging
  • Skip outputting various messages which we are often (but not always) not interested in or which are already logged on-wiki.

*Last I checked there were 1,932,469 lines (before the commonscat bit had finished), of which 1,823,328 start with "Found unknown field on page".

Related Objects

Status      Assigned
Resolved    Lokal_Profil
Declined    None

Event Timeline

@Multichill, please replace X in the description with the system you mentioned.

If either of you (@Multichill, @JeanFred) are willing to do a proper rebuild then I'm happy to go along with that.

Otherwise, using the error/warning/debug/info levels of pywikibot.logging should be an OK start, although it might require modifying what is piped where in the crontab.
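For illustration, the switch could look something like the sketch below. The helper names are hypothetical (not the actual erfgoedbot code); only the message texts are taken from the logs quoted in this task:

```
import pywikibot

def report_unknown_field(field, page):
    """Hypothetical helper: routine noise goes to the logfile only."""
    # pywikibot.log() writes to the logfile without cluttering stdout;
    # warning()/error() still reach both the console and the log.
    pywikibot.log('Found unknown field %s on %s' % (field, page.title()))

def report_missing_primkey(page):
    """Hypothetical helper: a genuine data problem rates a warning."""
    pywikibot.warning('No primkey available on %s' % page.title())
```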

Change 296885 had a related patch set uploaded (by Lokal Profil):
Restructure logging

https://gerrit.wikimedia.org/r/296885

Change 296885 merged by jenkins-bot:
Restructure logging

https://gerrit.wikimedia.org/r/296885

Mentioned in SAL [2016-07-03T13:01:03Z] <JeanFred> Deployed latest from Git: 0154a31, 9a4d05b, 4b6c343, 39c5409 (T138633)

Looking at the logs they are now down to ~40k lines per run. The main culprit now seems to be output such as:

Page [[commons:Commons:Monuments database/Unknown fields/monuments ch (fr)]] saved
Retrieving 1 pages from wikipedia:en.

which comes from pywikibot itself. Is there maybe a flag we can pass along to mute that output (or force it into the log rather than stdout)?
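One option, assuming pywikibot funnels its messages through Python's standard logging module under a root logger named 'pywiki' (an assumption about pywikibot internals, not verified here), would be to raise the threshold from our side:

```
import logging

# Assumption: pywikibot's messages flow through the 'pywiki' root logger.
# If that holds, raising the level silences INFO-level chatter such as
# "Page [[...]] saved" while keeping warnings and errors visible.
logging.getLogger('pywiki').setLevel(logging.WARNING)
```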


Correction. The main culprit, at 39k entries, is:
WARNING: No primkey available on [...]


So these are due to pages like de:Liste_der_Kulturgüter_in_Stadel_bei_Niederglatt where the ID numbers are missing.

The total is too large for these to just be strays, and a quick look puts fr:Portail:Lieux_patrimoniaux_du_Canada/Nouveau-Brunswick in the lead with 1,590 errors!

Change 297766 had a related patch set uploaded (by Lokal Profil):
Add more data to primkey warning

https://gerrit.wikimedia.org/r/297766

Change 297766 merged by jenkins-bot:
Add more data to primkey warning

https://gerrit.wikimedia.org/r/297766


To see whether any particular config setting is responsible for most of these, I put together P3454, which tallies which tables (and by extension which datasets) produce the primkey warnings most often; a sketch of that kind of tally follows the list below. The main villains are (count, table):

18505	monuments_de-he_(de)
11099	monuments_ca_(en)
2824	monuments_ca_(fr)
1068	monuments_ph_(en)
1025	monuments_ru_(ru)
995	monuments_fr_(fr)
979	monuments-old_ch_(en)
967	monuments_es_(es)
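P3454 itself isn't reproduced here, but the tally amounts to a few lines of Python. Here is a hypothetical re-sketch, assuming the warning lines carry the table name in a greppable form (the regex and the log filename are assumptions, not the actual formats):

```
import re
from collections import Counter

# Hypothetical pattern; assumes the enriched warning mentions the table
# name, e.g. "WARNING: No primkey available on ... in monuments_ca_(en)".
PRIMKEY_RE = re.compile(r'No primkey available on .* in (?P<table>\S+)')

counts = Counter()
with open('erfgoedbot.log') as log:  # log filename is an assumption
    for line in log:
        match = PRIMKEY_RE.search(line)
        if match:
            counts[match.group('table')] += 1

for table, count in counts.most_common(10):
    print('%d\t%s' % (count, table))
```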

Who in Catalonia and Germany could we ping about their issues?

> Who could we ping about their issues?
> in Catalonia?

@Kippelboy, would you know who would be involved in WLM that we could talk to?

> in Germany?

Harder :-/ My understanding is that WLM varies wildly from one federal state to the other, including folks involved. Tentatively pinging @Cirdan?

> Who could we ping about their issues?
> in Catalonia?

For monuments_ca_ you should search in Canada, not Catalonia

>> Who could we ping about their issues?
>> in Catalonia?
> For monuments_ca_ you should search in Canada, not Catalonia

Doh!... My bad.

(I'm not very active right now, but I'm the one responsible for de-he and I supervise/maintain all German monuments templates.)

With de-he the problem is that only some regions of the state (Hesse) are available in an online database. We use the database keys as identifiers, as there is no such thing as a monument ID in Germany. For the regions which are not yet in the database (more and more are added every year), we create the lists from the official monument books; these entries therefore have no ID, but are otherwise identical.

Since we have no control over it, and a missing ID does not constitute a mistake, this doesn't need to be logged at all. (On the contrary, I'd appreciate it if the monuments without an ID could also be added to the database, since it seems these are currently just dismissed and not counted in the statistics. But at least a couple of years ago, when I was still more involved, the database relied heavily on this ID, so it might not be possible.)

To give you some context/an outlook: Germany is currently building up a nationwide database of cultural objects, which also includes monuments (Deutsche Digitale Bibliothek, DDB). Hopefully all states will add their monuments; then we will switch to this database ID as the monument ID. (Hesse added their monuments last year, and on Wikipedia we already record the DDB ID alongside the cultural heritage database ID.)

Change 312975 had a related patch set uploaded (by Lokal Profil):
Only output one primkey warning per page

https://gerrit.wikimedia.org/r/312975

Change 312975 merged by jenkins-bot:
Only output one primkey warning per page

https://gerrit.wikimedia.org/r/312975
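The per-page deduplication in that patch could look roughly like the sketch below; the names and structure are illustrative assumptions rather than the actual erfgoedbot code:

```
import pywikibot

# Module-level cache of pages we have already warned about.
_pages_with_missing_primkey = set()

def warn_missing_primkey_once(page):
    """Emit the 'No primkey available' warning at most once per page."""
    title = page.title()
    if title in _pages_with_missing_primkey:
        return
    _pages_with_missing_primkey.add(title)
    pywikibot.warning('No primkey available on %s' % title)
```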

@Lokal_Profil: Hi, all related patches in Gerrit have been merged. Can this task be resolved (via Add Action...Change Status in the dropdown menu), or is there more to do in this task? Asking as you are set as task assignee. Thanks in advance!

I'll resolve this as the restructuring of the logging drastically reduced the size of the log files.

Yes, more work is needed, but that would need a better-tailored task describing how we will deal with it.