I do not see any downside. If you would like to take the task, no issues on my side.
Aug 9 2018
Aug 5 2018
See for instance https://archive.org/details/b28710964:
@Samwilson:
IMO the root cause is that sometimes the XML file does not necessarily contain all the jp2 images.
In such cases, when you update the XML file with the new names incrementally, you can introduce an offset.
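A quick sanity check before renaming (a minimal sketch, assuming the usual djvused-style XML with one OBJECT element per page and the jp2 images unpacked in a directory; paths are illustrative):

import glob
import xml.etree.ElementTree as ET

def counts_match(xml_path, jp2_dir):
    # One OBJECT element per page in djvused-style XML.
    pages = ET.parse(xml_path).findall('.//OBJECT')
    images = glob.glob(jp2_dir + '/*.jp2')
    return len(pages) == len(images)

If this returns False, updating the names page by page will shift the mapping by the difference.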
Jul 21 2018
IMO, the API should do what it can; the rest should be done outside.
I agree with @zhuyifei1999 in T196619 that an exception should be raised, instead of a warning.
Might be the same as T87014.
Jul 7 2018
Root cause is T114318.
In case I was unclear, T114318 is the root cause for the failure you showed.
In T198470#4405422, @Ankry wrote: @Billinghurst Just FYI: for zh.ws it would be more effective (IMO) to go through a pre-generated page list than through all pages / selected categories, because a relatively small ratio of pages is affected for this wiki.
A pre-generated list of ALL affected pages can be found here: https://quarry.wmflabs.org/query/28053
In T198999#4405593, @Dvorapa wrote: Is it the same issue? Or are they at least caused by the same code part/API point? (It seemed so to me.)
In T198999#4405235, @Dvorapa wrote: Also, it fails on AppVeyor and Travis.
Jul 6 2018
Jul 5 2018
Seems OK now.
Thanks and sorry for the inconvenience.
OK, I was using the old UI; with the new UI, I now have a preferred email.
Let's see if this works now.
In T198573#4396913, @hashar wrote: In the All-Users database for refs/users/65/1065, the preferredEmail got removed on Jun 30th:
Author: Gerrit Code Review <gerrit@wikimedia.org>
Date: Sat Jun 30 21:31:52 2018 +0000

    Update account

diff --git a/account.config b/account.config
index 2e91b6f..d5dcc18 100644
--- a/account.config
+++ b/account.config
@@ -1,3 +1,2 @@
 [account]
 	fullName = Mpaa
-	preferredEmail = mpaa.wiki@gmail.com

@Mpaa your email should come from LDAP and be listed in your Gerrit settings: https://gerrit.wikimedia.org/r/#/settings/
And it is apparently no longer listed as your "Preferred Email" on https://gerrit.wikimedia.org/r/#/settings/contact . So maybe you have to Register New Email there ... and then confirm it via a link that would have been mailed to the address?
Jul 2 2018
Jul 1 2018
Jun 30 2018
For pywikibot/compat/userlib.py
Jun 20 2018
I suspect it might be missing in the initialization of file_handler = RotatingFileHandler in bot.py (?)
I have no Windows PC, so I can't test.
Jun 19 2018
Or make a script with your path selection logic that generates the needed command string, "python pwb.py projects/wikidatafix/fix_everything_in_wikidata.py", and executes it, as sketched below.
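A minimal sketch of such a wrapper (the selection logic is only a placeholder):

import subprocess

def select_scripts():
    # your path selection logic goes here
    return ['projects/wikidatafix/fix_everything_in_wikidata.py']

for script in select_scripts():
    subprocess.run(['python', 'pwb.py', script], check=True)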
This script cannot solve the requested use case:
In T157535#4300310, @Billinghurst wrote: @Xqt: I see this as a desirable feature. As mentioned above, I would NOT want a bot to overwrite proofread pages. I would prefer that this is closed with no action.
Jun 16 2018
I meant a local 'core' in mpaa@tools-bastion-03:~$
Jun 15 2018
OK, then maybe it worked because I had a local 'core' copy?
Jun 14 2018
Worked for me.
Can you list what you did and what error you get?
Jun 10 2018
Done.
I manipulate the file produced by djvused (http://djvu.sourceforge.net/doc/man/djvused.html), see the section "Dumping/restoring annotations and text", realigning the page numbers in myfile.dsed.
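For reference, the dump/edit/restore cycle from that man page looks roughly like this (file names illustrative):

djvused -e output-all myfile.djvu > myfile.dsed
(realign the page numbers in myfile.dsed)
djvused -f myfile.dsed -s myfile.djvu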
Jun 5 2018
Honestly, the desired behavior is not clear to me.
May 31 2018
@Peteforsyth, there seems to be little action around here; if you need, I can fix the file.
The original upload will be there for inspection, if needed.
Exactly; then you need to use the -subcats: option to get the Category pages inside the main category.
In T195992#4244246, @Mpaa wrote: Try a small one: https://commons.wikimedia.org/wiki/Category:Arthrocereus
This category has the following 5 subcategories, out of 5 total.
This category contains only the following file.

user@pc:~/python/core {master}$ python scripts/listpages.py -lang:commons -family:commons -cat:"Arthrocereus"
1 Arthrocereus HU 330.jpg
1 page(s) found
user@pc:~/python/core {master}$ python scripts/listpages.py -lang:commons -family:commons -cat:"Arthrocereus" -ns:14
0 page(s) found
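To get the subcategory pages themselves, the -subcats: option mentioned above should do it, e.g.:

python scripts/listpages.py -lang:commons -family:commons -subcats:"Arthrocereus"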
May 30 2018
Try a small one: https://commons.wikimedia.org/wiki/Category:Arthrocereus
This category has the following 5 subcategories, out of 5 total.
This category contains only the following file.
I cannot explain the error, but I think no Category pages will be yielded (which would be a separate bug).
May 27 2018
May 25 2018
DeprecationWarning is issued by Python.
We should change the framework regexes so that we do not get it from our own code.
Script owners are on their own, I think: they are getting the same warning, and they should fix their code.
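Assuming these are the usual Python 3.6 invalid-escape-sequence warnings, the fix on our side is just raw strings for the patterns, e.g.:

import re

pat = re.compile('\s+')   # DeprecationWarning: invalid escape sequence '\s'
pat = re.compile(r'\s+')  # raw string, no warning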
May 24 2018
May 23 2018
May 19 2018
In Pywikibot: python scripts/listpages.py -unusedfiles
May 18 2018
It happened again: https://commons.wikimedia.org/wiki/File:Atlantis_Arisen.djvu
The explanation above covers this case as well.
May 17 2018
I have no debug capabilities.
A possibility is that, under some circumstances, there is a misalignment during the generation of the djvu numbering:
May 16 2018
And also in the XML file at https://ia801202.us.archive.org/34/items/dollshousetwooth00ibse/dollshousetwooth00ibse_djvu.xml it is at page 2.
Something wrong with the XML parsing/mapping onto the djvu?
The issue is present from the first page, so I think the SegFault is a different issue (it happens later on in the file).
Mar 17 2018
IMO, add a new function for this and mark filterredir=True as deprecated.
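Something along these lines (names are hypothetical, not the actual Pywikibot API):

import warnings

class Site:
    def redirects(self):
        """Hypothetical new function yielding only redirect pages."""
        return iter(())  # placeholder

    def allpages(self, filterredir=False):
        if filterredir is True:
            warnings.warn('filterredir=True is deprecated; use redirects()',
                          DeprecationWarning, stacklevel=2)
            return self.redirects()
        return iter(())  # placeholder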
Mar 13 2018
I tried again with this: https://tools.wmflabs.org/ia-upload/log/CapuanaGiacinta
It failed as the file is already available at Commons but the djvu in the tool dir has no text layer.
So it is a good test case (though I do not understand why the produced djvu has one page less, 248 pages vs 249 images; weird, as that should have shown up as an error in the logs ...).
Anyhow, once I removed the missing page from the XML file, djvuxmlparser produced a djvu with a text layer in my local directory.
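The invocation was along these lines (file names illustrative):

djvuxmlparser -o myfile.djvu myfile_djvu.xml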
Mar 12 2018
I sort of replicated the process of your program and I get a text layer.
Some corrupted pages were filtered but at least a few survived.
It looks like the original djvu file is untouched when djvuxmlparser runs, instead of being modified.
Is it possible to access the log files after a job is completed?
Mar 10 2018
This should fix it:
The +2 offset comes from here.
The following steps are done:
I inserted some debug printout in XMLParser.cpp and tested it on https://tools.wmflabs.org/ia-upload/log/toda1
Mar 7 2018
You can start from here: https://www.mediawiki.org/wiki/Manual:Pywikibot/Development
Mar 6 2018
@bd808, zhuyifei1999 answered for me.
I do not know about Tabular Data. However, it would be nice to expand the library with new features.
Not being able to get different contentmodel/format values is a Pywikibot limitation which would be nice to remove.
If you feel like 'standardizing' your new class within Pywikibot, you're welcome :-)
You are not doing anything wrong.
The Page() class does not support all values of:
Mar 5 2018
I am trying to address PyMySql support.
Mar 4 2018
Mar 3 2018
OK, I didn't see that zhuyifei1999 had answered as well while I was writing :-)
I think partial is better than lambda, as it allows clearer inspection than:
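A hypothetical example of what that inspection difference looks like:

from functools import partial

def process(page, summary):
    pass

f = partial(process, summary='fix')
g = lambda page: process(page, summary='fix')

print(f)                   # functools.partial(<function process at 0x...>, summary='fix')
print(f.func, f.keywords)  # the wrapped callable and bound arguments are inspectable
print(g)                   # <function <lambda> at 0x...>, opaque by comparison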
I think it is because in
Feb 23 2018
In T74681#748032, @XZise wrote: Unless you see more tests we can convert from this, I think this is now resolved.
Feb 21 2018
Feb 12 2018
What is the error exactly?
I get this:
Feb 2 2018
Only a workaround has been implemented.
It is good enough IMO to close the bug, even if the philosophical question of what 'purge' means still remains.
Jan 22 2018
Dec 30 2017
Dec 3 2017
Nov 30 2017
It does not fail for me on Python 2.7 or Python 3.6.
Nov 27 2017
It is [[Category:Musée du quai Branly]] giving problems in page.py(5362)__init__()
Nov 23 2017
I agree.
Oct 28 2017
@Zoranzoki21, you have been instructed several times on how to open/document a ticket.
Do not expect someone to look into your tickets if they are not described properly (and, if possible, not just with a picture).
Oct 15 2017
@Tpt, have you had time to look into this? Thanks!