User Details
- User Since
- Dec 13 2017, 2:13 AM
- Availability
- Available
- LDAP User
- Unknown
- MediaWiki User
- Ryan10145 [ Global Accounts ]
Jan 16 2018
Hi, I wrote my blog post in English here. I have also posted it on the wiki page, and I hope you enjoy reading it :) Thank you all for the incredible experience this year!
Jan 11 2018
Changed some of the description, since the merged changes made the line numbers incorrect
Jan 7 2018
Thank you for the information!
I noticed that the other tests in tests/phpunit/includes/logging have the comment
/**
 * Provide different rows from the logging table to test
 * for backward compatibility.
 * Do not change the existing data, just add a new database row
 */
Do I have to add multiple rows to the tests I am writing for ContentModelLogFormatter?
Oops, claimed the wrong task.
Jan 2 2018
Good point; it is probably best to avoid recursion and to instead use 'or'. However, I feel that the while loop implementation is still somewhat difficult to understand, especially for a newer programmer. @Framawiki, what are your thoughts on which one to use?
Jan 1 2018
@zhuyifei1999 I was thinking of something along the lines of
@Framawiki I can work on it, but I'm not sure if it would be right to do it at this moment, since the patches for T183664 and T183667 would be affected. I'll try to get this done when there aren't any open patches affecting download_dump.py
I discovered the error: I was running the script incorrectly. I was using python scripts/maintenance/download_dump.py -filename instead of python pwb.py maintenance/download_dump.py
pywikibot.comms.http.fetch('https://dumps.wikimedia.org/idwiki/latest/idwiki-latest-abstract.xml', stream=True) works as intended when run correctly.
I was experimenting with the location of a test script, and I found something interesting. I used the following test script.
import sys
import pywikibot
Dec 31 2017
I downloaded the patch that was supposed to fix the streaming issue, but I am still running into issues with the streaming. First of all, when I test the functionality of pywikibot.comms.http.fetch(url, stream=True) from the pywikibot shell, it works as intended. I tested it using
def test_fetch():
    resp = pywikibot.comms.http.fetch('https://dumps.wikimedia.org/idwiki/latest/idwiki-latest-abstract.xml', stream=True)
    for data in resp.data.iter_content(100 * 1024):
        sys.stdout.write('asd')
        sys.stdout.flush()

This produced the string 'asd' gradually as it began downloading the file.
However, when I go into download_dump.py, and I have the below code, it does not work.
response = fetch(url, stream=True)
for data in response.data.iter_content(100 * 1024):
    sys.stdout.write('asd')
    sys.stdout.flush()

What happens instead is that there is a ~10 second pause, and then the string 'asd' is printed many times, nearly instantaneously.
This leads me to believe that for some strange reason, fetch(url, stream=True) is still not streaming the data.
I don't know why this is occurring, and help would be greatly appreciated.
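One possibility, assuming requests-style semantics (pywikibot's http layer wraps requests): if anything in the fetch() wrapper reads response.content before the caller iterates, the whole body is downloaded eagerly and a later iter_content() merely replays cached bytes, which would look exactly like a long pause followed by a burst. A minimal stand-in sketch of that behavior (FakeResponse is hypothetical, not the pywikibot or requests API):

```python
import io

class FakeResponse:
    """Loosely mimics how a requests-like response caches its body."""

    def __init__(self, raw_bytes):
        self.raw = io.BytesIO(raw_bytes)
        self._content = None

    @property
    def content(self):
        # Touching .content pulls the ENTIRE body into memory at once.
        if self._content is None:
            self._content = self.raw.read()
        return self._content

    def iter_content(self, chunk_size):
        if self._content is not None:
            # Stream already consumed: replay cached bytes instantly (burst).
            for i in range(0, len(self._content), chunk_size):
                yield self._content[i:i + chunk_size]
        else:
            # True streaming: each chunk is read lazily from the source.
            while True:
                chunk = self.raw.read(chunk_size)
                if not chunk:
                    break
                yield chunk

burst = FakeResponse(b'x' * 10)
_ = burst.content        # a wrapper touching .content defeats streaming
lazy = FakeResponse(b'x' * 10)  # never touched, so chunks stay lazy
```

Both responses yield the same chunks, but only the second one reads them lazily; the first has already downloaded everything by the time iteration starts.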
@eflyjason I tried it, and none of the messages printed until the download completed.
I'm running into a problem with this task. From the line response = fetch(url, stream=True), I assumed that the file is not downloaded on that line, but rather that the download begins only once the data is accessed. Therefore, I thought that I could put any download bar code inside
for data in response.data.iter_content(100 * 1024):
    result_file.write(data)

To test this, I put pywikibot.output('test') right after result_file.write(data)
However, when I did this and ran the script, the console stopped for ~10 seconds, and then printed my test message many times, extremely rapidly.
Dec 29 2017
I just submitted a patch, but I'll try to make the change.
Dec 28 2017
Thank you for the help, I didn't see this before because my web browser would automatically correct the slashes when I clicked on it.
I'm trying to run this script on my computer so I can do this task, but whenever I try to download the abstract.xml file, I get an HTTP response status 404 error. I am using a computer running Windows 7.
This is my console output without -v:
python scripts/maintenance/download_dump.py -filename abstract.xml
Oops, made a typo with the bug number, sorry.
Dec 26 2017
I'm having difficulties with being able to create 2 different newsletters with the same name. Whenever I try to input a new newsletter, it gives me an error saying that the mainpage is either in use or nonexistent. What can I put in Title::newFromText( ' ' )->getBaseText() in order to successfully submit 2 different newsletters?
Dec 20 2017
Is there any way for me to get any more information about which dates in particular are causing an error with moment.js? I have tried setting up the extension on my own computer in order to see the problems for myself and try to fix them, but I have been having technical difficulties.
Dec 13 2017
I don't understand how to format the date on the X-axis. If I have a variable that holds the date in the format "2001-1-1", how would I go about converting this into the wanted format?
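The question is about moment.js, but the underlying step is the same in any language: parse the string into a date object, then re-emit it in the target format. A Python sketch, where the output format '%b %d, %Y' is only an assumed example since the wanted format isn't specified here:

```python
from datetime import datetime

def reformat_date(s, out_fmt='%b %d, %Y'):
    # strptime accepts non-zero-padded fields, so "2001-1-1" parses
    # fine with %Y-%m-%d; strftime then renders the chosen format.
    return datetime.strptime(s, '%Y-%m-%d').strftime(out_fmt)
```

For example, reformat_date('2001-1-1') yields a "Jan 01, 2001"-style string; swapping out_fmt changes the rendering without touching the parse step.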