It seems OK to me how it is checked today: it is checked via template checking, and I did not see a reference to the current text.
Dec 16 2023
@Xqt, since pywikibot now supports only Python >= 3.7, this can be closed, right?
After adding lazy loading of metadata in T253591,
I get the same number in PetScan, in the Commons category, and with the script.
I'll close it for now; it can be reopened if needed.
Dec 15 2023
There is an error, but it is almost invisible due to the long traceback:
ERROR: OAuth authentication not supported: No module named 'requests_oauthlib'
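For illustration, a sketch of how the import failure could be surfaced without the traceback (not the current pywikibot code):

try:
    import requests_oauthlib  # noqa: F401
except ImportError as e:
    # fail fast with a visible message instead of a long traceback
    raise SystemExit('ERROR: OAuth authentication not supported: {}; '
                     'install it with: pip install requests-oauthlib'.format(e))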
Dec 3 2023
@JJMC89 do you have updates about this? e.g. a new .pre-commit-config.yaml?
Dec 2 2023
I think we could use apipath here, since the goal should only be to check that the site is online.
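Something along these lines, as a sketch (site_is_online is a made-up helper; the point is to hit only the API entry point):

import requests

def site_is_online(hostname, apipath='/w/api.php'):
    # any response from the API entry point below 500 counts as online
    try:
        response = requests.head('https://' + hostname + apipath, timeout=10)
    except requests.RequestException:
        return False
    return response.status_code < 500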
One reason this now fails is that, following my request in T352560 to delete "Category:Pywikibot Protect Test", the category has been removed from:
User:Sn1per/ProtectTest1
User:Sn1per/ProtectTest2
Dec 1 2023
This is my fault.
There was an issue with a PetScan test, and I thought it was due to this category not being present:
May 4 2023
In T335720#8825687, @Xqt wrote: The problem is that the pop default is evaluated even if it is not needed:

>>> x = {'foo': 'bar'}
>>> def baz():
...     print('quez')
...
>>> x.pop('foo', baz())
quez
'bar'

Here are some other samples:
https://codesearch.wmcloud.org/pywikibot/?q=%28get%7Cpop%29%5C%28%5B%5E+%2C%5C%29%5D%2B%2C+%5B%5E%5C%28%5C%29%5D%2B%5C%28&files=&excludeFiles=&repos=

dict should have been implemented like that instead:

class lazy_dict(dict):
    sentinel = object()

    def pop(self, key, default=sentinel):
        try:
            r = super().pop(key)
        except KeyError:
            if default is self.sentinel:
                raise
            r = default
        return r

but it is a C function.
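Since dict.pop cannot be patched, the callers found by the codesearch query above have to avoid the eager evaluation themselves, e.g. (a sketch):

x = {'foo': 'bar'}

def baz():
    print('quez')

# instead of x.pop('foo', baz()), which always calls baz():
try:
    value = x.pop('foo')
except KeyError:
    value = baz()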
Nov 27 2022
Weird.
Could you give an example where MW outputs a 12-digit timestamp?
Oct 1 2022
A lot of files have been changed by https://gerrit.wikimedia.org/r/c/pywikibot/core/+/836138.
This might occur somewhere else.
Sep 3 2022
In T316976#8209768, @Soda wrote:
I cleared the pywikibot cache and now they are aligned.
API query:
URI: '/w/api.php'
Body: 'prppiititle=Index%3AConfederate+Military+History+-+1899+-+Volume+4.djvu&list=proofreadpagesinindex&action=query&indexpageids=&continue=&prppiilimit=500&meta=userinfo&uiprop=blockinfo%7Chasmsg&maxlag=5&format=json'
Pinging @Inductiveload, who might be able to help on the API side.
Aug 18 2022
In T205155#4845859, @Mpaa wrote: I do not know what sets the limit of 10000 in the API.
Aug 9 2022
style="vertical-align:middle; padding-left:20px;" | <div style="font-size:x-large; padding-bottom:5px;">'''新年快樂!'''</div>感謝您過去一年來對中文維基百科的貢獻!祝閣下[[新年]]快樂,萬事如意!—— '''[[使用者:Ericliu1912|Eric Liu]]'''<sub> 創造は生命('''[[使用者討論:Ericliu1912|留言]].[[使用者:Ericliu1912#訪客芳名錄|留名]].[[維基百科:維基學生會|學生會]]''')</sub> 2022年1月31日 (一) 18:48 (UTC)<div style="font-size:x-small; text-align:right; padding-top:5px;">{{color | grey | (模板使用方法參見[[使用者:Ericliu1912/維基友愛模板|此處]])} |
Jul 28 2022
See T67163
May 14 2022
In T308016#7919265, @ShakespeareFan00 wrote: However, it does make me wonder if there is a need for the filter to be more robust about how it processes titles that aren't in the standard format for the Proofread-page content model.
May 12 2022
random.choices() picks with replacement, so you might have duplicates.
random.sample() maybe?
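For example:

import random

population = ['a', 'b', 'c', 'd']
print(random.choices(population, k=3))  # with replacement, duplicates possible
print(random.sample(population, 3))     # without replacement, all distinct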
May 10 2022
This is not a standard way of working with Proofread pages, and handling it would create several inconveniences (e.g. how many pages will have the related index? etc.).
Before acting on pywikibot, there should be an agreement in the Wikisource world on such subpages.
May 7 2022
In T307830#7911218, @Xqt wrote: @Mpaa: I am unsure how to proceed. We could
- keep the current implementation and add a warning to our documentation that the Thread object has to be subclassed and provide a stop() method if stop_all() is to be used, like ThreadedGenerator does (see the sketch below this list)
- remove the stop_all() method, so that stopping must be implemented outside the ThreadList class
- ignore AttributeError in the stop_all() method and just write a debug message (and add a warning to our documentation)
What do you suggest here?
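For the first option, a minimal sketch of the kind of subclass the documentation would have to describe (not pywikibot code):

import threading

class StoppableThread(threading.Thread):
    """A Thread providing the stop() method that stop_all() expects."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def run(self):
        while not self._stop_event.is_set():
            ...  # do one unit of work, then re-check the flag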
Apr 30 2022
In T307280#7893596, @gerritbot wrote: Change 787876 had a related patch set uploaded (by Mpaa; author: Mpaa):
[pywikibot/core@master] [IMPR]: make IndexPage more robust when getting links in Page ns
Apr 24 2022
The return type is specified clearly.
Scripts that use this function know what to expect and should act accordingly.
Apr 14 2022
@Xqt, I think this might be the cause.
https://gerrit.wikimedia.org/r/c/pywikibot/core/+/769728
Apr 10 2022
A well documented task.
I tried it and it looks nice to me.
BTW, I have not yet found a case where the code actually gets changed. I will continue using it and see if I encounter one of those cases.
Jan 21 2022
page.text is:
Jan 14 2022
@Inductiveload, yes, I agree.
Jan 9 2022
In T167200#7366146, @Inductiveload wrote: @Mpaa what's the use case here? There are two ways (non-exclusive) to go about this:
- Embed the useful data in the pages JS
- Provide an API to get the info from the server asynchronously
Maybe we should do both?
Oct 5 2021
@Billinghurst, see T292367
Oct 3 2021
Probably a regression from https://gerrit.wikimedia.org/r/c/pywikibot/core/+/680754
Dec 12 2020
In T269503#6671662, @Xqt wrote: Really strange: the current wikipedia:it release is '1.36.0-wmf.18' and the response should have a 'messagecode' key.
Nov 24 2020
According to this test, there are several actions that might emit 'wikibase-api-failed-save' as the message:
https://github.com/wikimedia/Wikibase/blob/e388e5b69ee0487566d7a8a77196ac0056f49747/repo/tests/phpunit/includes/Api/ApiUserBlockedTest.php
Nov 13 2020
@Xover, I think it is a misunderstanding:
data.text.substring(0,5) != "<?xml" -> XML is accepted; if it is not XML, then it is considered an error.
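In Python terms, the check amounts to this (a restatement of the JS condition, not the actual code):

def looks_like_xml(text):
    # mirrors data.text.substring(0,5) != "<?xml": anything that does
    # not start with '<?xml' is treated as an error response
    return text[:5] == '<?xml'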
Nov 11 2020
Definitely. I tried to install from scratch:
New versions of requests and urllib3 were released today.
@Xqt, shouldn't self.available_options['delay_td'] also be accessible via self.opt.delay_td after setting a new value?
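For illustration, this is the behaviour I would expect, as a generic sketch (not pywikibot's actual OptionHandler):

class Options:
    """Attribute-style view over an options dict; both stay in sync."""

    def __init__(self, options):
        object.__setattr__(self, '_options', options)

    def __getattr__(self, name):
        try:
            return self._options[name]
        except KeyError:
            raise AttributeError(name) from None

    def __setattr__(self, name, value):
        self._options[name] = value

available_options = {'delay_td': 1}
opt = Options(available_options)
opt.delay_td = 5
assert available_options['delay_td'] == 5  # the new value is visible in both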
Nov 3 2020
@Xqt, I think this should be implemented as a filter.
Oct 26 2020
In T132676#6577051, @Anomie wrote: In T132676#6575067, @Mpaa wrote: @Anomie, is base64 encoding supported by api action=upload?
Not in itself, but you can use MIME's Content-Transfer-Encoding to encode the data as base64 rather than binary.
Since I no longer work for the WMF, I'm not inclined to do much to do their work for them at this time.
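For illustration, the standard library can already build such a part; this sketch just shows the headers involved (the file and field names are made up, and this is not the exact multipart that MediaWiki expects):

from email.mime.application import MIMEApplication

# MIMEApplication's default encoder base64-encodes the payload and
# sets Content-Transfer-Encoding: base64
with open('chunk.bin', 'rb') as f:
    part = MIMEApplication(f.read())
part.add_header('Content-Disposition', 'form-data',
                name='chunk', filename='chunk.bin')
print(part['Content-Transfer-Encoding'])  # base64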
Oct 25 2020
In T266117#6576230, @Fae wrote: As my 'personal' work-around, just for UK legislation PDFs that the API flags with chunk-too-small and fails on a second upload, the pdf is trimmed of the final byte and re-attempted. In my view this is a terrible hack rather than a fix.
However, this initially appears to be working with the files both uploading and displaying successfully, though it may later cause unpredictable errors as it's hardly an intelligent fix. Ref to this category for examples.
Code snippet:
rec = uptry(local, fn, dd, comment, False)
if rec in ['chunk-too-small']:
    print("Chunk-too-small, so trying trimming off 1 byte")
    with open(local, 'rb+') as filehandle:
        filehandle.seek(-1, os.SEEK_END)
        filehandle.truncate()
    rec = uptry(local, fn, dd + "\n[[Category:Work around of byte trimmed for chunk-too-small API error]]", comment, False)
@Fae, strange, it worked for me; I tried this file and the byte was added.
https://commons.wikimedia.org/w/index.php?title=File%3ABritish_Transport_Commission_Act_1949_%28UKLA_1949-29%29.pdf&type=revision&diff=500900036&oldid=500838588
Oct 23 2020
@Anomie, is base64 encoding supported by api action=upload?
Oct 22 2020
I think it is related to T132676; I checked the first file and it ends in '\r'.
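The check itself is trivial (a sketch; the filename is made up):

import os

with open('file.pdf', 'rb') as f:
    f.seek(-1, os.SEEK_END)
    print(f.read(1) == b'\r')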
Oct 21 2020
btw, I am using the Anaconda distribution.
The problem is that, with expiry=True in self.siteinfo.get('time', expiry=True),
a CachedRequest is made, and there True is converted to 1 day.
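i.e. something along these lines (a paraphrase of the conversion, not the exact CachedRequest code):

import datetime

def as_timedelta(expiry):
    if isinstance(expiry, datetime.timedelta):
        return expiry
    # int(True) == 1, so a bare True becomes one day
    return datetime.timedelta(days=int(expiry))

print(as_timedelta(True))  # 1 day, 0:00:00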
Oct 20 2020
It seems to be working now, strange.
Anyhow, I have added Server414Error to the managed exceptions, so the bot won't crash.
OK, I will mark this as Resolved; it shall be reopened if needed.
Or better, add another exception handler after line 620 and print the URL, if possible. Add:
pywikibot.output(ref.url)
f = comms.http.fetch(ref.url, use_fake_user_agent=self._use_fake_user_agent)
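Or, as a sketch, the same idea with the extra exception handling wrapped around the fetch (fetch_logged is a made-up helper name):

import pywikibot
from pywikibot.comms import http

def fetch_logged(url, use_fake_user_agent=False):
    # print the url before re-raising, so the crashing link is visible
    try:
        return http.fetch(url, use_fake_user_agent=use_fake_user_agent)
    except Exception:
        pywikibot.output(url)
        raise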
I hope you can reproduce the error without going through 100000 links :-)
@Vicarage, is this solved after the patch?
Not sure this is a pywikibot error.
Could you find out which ref.url it is trying to fetch in:
f = comms.http.fetch(ref.url, use_fake_user_agent=self._use_fake_user_agent)
and post it here?