@BBlack is the switchover complete? Can the previous content host stop hosting?
Feb 29 2020
Feb 26 2020
In T244587#5916280, @Isaac wrote: Thanks for the updates @bmansurov -- code makes sense. I don't see the language overrides on my end, so I assume you're waiting to push to the server until we get verification of the other lingering questions.
In case I'm doing something wrong, example: https://recommend-large.wmflabs.org/?campaign=WikiGapFinder&t=de still gives me English -> Swedish even with a full refresh (cmd+shift+R on mac) but https://recommend-large.wmflabs.org/?t=de appropriately makes German the target.
Feb 25 2020
- Per Eric's comments above, I think we will be extending the list of acceptable Wikidata values for sex-or-gender here to include at least transgender female (Q1052281).
Done
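A minimal sketch of how such a filter could check the sex-or-gender claims, assuming the Wikidata entity JSON shape. P21 ("sex or gender"), Q6581072 (female), and Q1052281 (transgender female) come from this discussion; the function name and claim structure here are illustrative:

```python
# Accepted P21 ("sex or gender") values per this task:
# Q6581072 (female) and Q1052281 (transgender female).
ACCEPTED_GENDERS = {"Q6581072", "Q1052281"}

def is_accepted_gender(claims):
    """Return True if any P21 claim points at an accepted value.

    `claims` follows the Wikidata entity JSON shape, e.g.:
    {"P21": [{"mainsnak": {"datavalue": {"value": {"id": "Q6581072"}}}}]}
    """
    for claim in claims.get("P21", []):
        value = claim.get("mainsnak", {}).get("datavalue", {}).get("value", {})
        if value.get("id") in ACCEPTED_GENDERS:
            return True
    return False
```

Extending the accepted set then only means adding another QID to `ACCEPTED_GENDERS`.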
Feb 24 2020
@Isaac I've made some changes and created a large labs instance. Here it is: http://recommend-large.wmflabs.org/?campaign=WikiGapFinder
Feb 20 2020
In T244587#5891759, @Isaac wrote: And a few other questions / comments:
- Thinking about the increased delay for search results, do we know what is causing it? I suspect it's the addition of the claims parameter in the Wikidata query (as opposed to any additional processing that is going on).
Yes, that's true. Also, the site was already slow. I did a quick profiling. Here's the data.
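For reference, one standard way to collect this kind of timing data (not necessarily how the data above was gathered) is Python's built-in cProfile; `slow_search` below is just a stand-in for the real handler:

```python
import cProfile
import io
import pstats

def profile(func, *args, **kwargs):
    """Profile a single call and return (result, report), with the report
    sorted by cumulative time to surface the slowest code paths."""
    profiler = cProfile.Profile()
    result = profiler.runcall(func, *args, **kwargs)
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
    return result, buf.getvalue()

# Example: profile a stand-in for the slow search handler.
def slow_search():
    return sum(i * i for i in range(100000))

value, report = profile(slow_search)
```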
Feb 18 2020
@Astinson if I'm not mistaken, the campaigns are already being tracked with eventlogger, and also by content translation. I can look into it if you need the details.
@Isaac The labs instance has been updated with the latest code. The WikiGapFinder campaign users will see articles about women on the landing page. Subsequent searches will also filter out articles that are not about women. Below are some of the before and after screenshots.
Feb 12 2020
It seems that if the post-hoc filtering you're suggesting doesn't return enough results, we'd have to consider querying WDQS instead?
@BBlack the site has been updated. Please turn on the DNS.
@BBlack I'll let you know when we get the latest code into Gerrit.
@leila what do you think about T242374#5872212? Can we get the latest code changes?
Feb 11 2020
@Isaac thanks. I made the change (which will be deployed once we're done with the remaining parts).
Feb 10 2020
My bad. Apparently the code runs on Python 3 too. Thanks for the input, @Legoktm. I'll look into flake.
@Isaac this is how the new interface will look on page load. Let me know if you want me to change anything in it.
Feb 9 2020
Feb 8 2020
@Mholloway thanks for the heads up. I'll look into it.
Feb 4 2020
@Vgutierrez, thanks for working on this task. Please let me know if I can help move the task forward.
Jan 22 2020
Volker can probably speak to the status of the task, but I'd advise against using JavaScript to load the header/footer: not all browsers support JavaScript, and some users disable it on purpose.
Jan 20 2020
@akosiaris thanks! The first two points were already done. I've created a chart and uploaded a patch. Would you please review it?
Jan 19 2020
Jan 13 2020
Jan 12 2020
@akosiaris thanks for the patch. What are the next steps once your patch is merged?
Status update: I've requested a Gerrit repo.
Dec 26 2019
Thanks, @jcrespo. I got my access back.
Dec 21 2019
@bd808 Thanks for increasing the quota. I'm done porting.
Install scripts in two repositories have been updated (see above comments with Gerrit patch links). I'll try to merge them when I get my merge rights back. For now all Jessie instances have been ported to Buster.
Notes about https://recommend-alpha.wmflabs.org/missing_sections. Before upgrading I see the following error (which needs to be fixed separately):
For posterity, some notes about tool.recommendation-api.eqiad.wmflabs (as it's being used by ContentTranslation). Here are the results of pinging https://recommend.wmflabs.org/types/translation/v1/articles?source=es&target=ca&seed=Cascada%7CLuxemburgo&search=related_articles&application=CX before and after the upgrade:
Dec 20 2019
@Pchelolo Hey, any documentation on how to move the service to node-js 10?
Dec 18 2019
Aug 28 2019
Aug 2 2019
@Usmanmuhd what OS and version are you running? What are your MySQL and MySQL connector versions? README.org says you need to install python-mysql.connector on Debian. Try figuring out that package's version in Debian and installing the same version on your system.
Jul 30 2019
No, you don't have to stop. I think the system will schedule other processes between each step in your loop.
Jul 29 2019
Also maybe add a UNIQUE constraint?
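To illustrate why a UNIQUE constraint helps here: re-running an import (for example after resuming a failed load) can't create duplicate rows. This sketch uses sqlite3 for portability; the table and column names are made up, and in MySQL the same idea applies with `INSERT IGNORE` or `INSERT ... ON DUPLICATE KEY UPDATE`:

```python
import sqlite3

# UNIQUE (source, lang) means a re-imported row is silently skipped
# instead of being inserted twice.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE langlinks (
        source TEXT NOT NULL,
        lang   TEXT NOT NULL,
        target TEXT NOT NULL,
        UNIQUE (source, lang)
    )
""")
rows = [("Cascada", "de", "Kaskade"), ("Cascada", "de", "Kaskade")]
conn.executemany("INSERT OR IGNORE INTO langlinks VALUES (?, ?, ?)", rows)
count = conn.execute("SELECT COUNT(*) FROM langlinks").fetchone()[0]
print(count)  # → 1: the duplicate row was ignored
```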
Jul 28 2019
How about we pass the combined filename to the script and have the chunking and importing done automatically behind the scenes? That way we don't have to worry about calculating the number of chunks ourselves. We should, of course, add logging and the ability to continue from where we left off in case of an error.
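The chunk-and-resume idea could be sketched like this; `import_chunk`, the progress-file name, and the chunk size are all illustrative stand-ins for the real import step:

```python
import logging
import os

logging.basicConfig(level=logging.INFO)

def import_file(path, import_chunk, chunk_size=10000, progress_file="import.progress"):
    """Stream the combined file in chunks, recording progress after each
    successful chunk so a failed run can resume where it left off.

    `import_chunk` stands in for the real import step (e.g. batched
    INSERTs into MySQL).
    """
    done = 0
    if os.path.exists(progress_file):
        with open(progress_file) as f:
            done = int(f.read().strip() or 0)

    def commit(index, chunk):
        if index >= done:  # skip chunks already imported on a previous run
            import_chunk(chunk)
            logging.info("imported chunk %d (%d lines)", index, len(chunk))
            with open(progress_file, "w") as p:
                p.write(str(index + 1))

    chunk, index = [], 0
    with open(path) as f:
        for line in f:
            chunk.append(line)
            if len(chunk) == chunk_size:
                commit(index, chunk)
                chunk, index = [], index + 1
    if chunk:
        commit(index, chunk)
```

Because the progress file records the last completed chunk, re-running the script after an error only imports the chunks that haven't been committed yet.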
Jul 27 2019
@Usmanmuhd I was thinking that we should split up the file into multiple files and import them one by one. But I'm curious to know if you can find a better way. Can you do some research on this? Thanks.
@leila my Gerrit +2 rights have been revoked, so I couldn't merge the student's patch. Could you help us find someone who can merge it? Thanks. Here's the patch: https://gerrit.wikimedia.org/r/#/c/mediawiki/services/recommendation-api/+/523779/
Jul 26 2019
@Usmanmuhd since we're only requesting 500 items, can you set lllimit to 500 instead? This should return all langlinks in the first request.
I think you should handle it similar to this code, i.e. llcontinue should go into the getArticlesBySeed and getArticlesByPageviews functions directly.
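The continuation handling could look roughly like this; `fetch` is a placeholder for whatever wrapper makes the actual API request, and the response shape follows the MediaWiki `action=query` / `prop=langlinks` format:

```python
def get_all_langlinks(fetch, params):
    """Follow MediaWiki `continue` blocks until all langlinks are returned.

    `fetch` is any callable that takes a params dict and returns the
    parsed JSON response. Per the suggestion above, lllimit could also
    be set to 500 to get everything in the first request.
    """
    params = dict(params, lllimit="max")
    pages = {}
    while True:
        data = fetch(params)
        for page in data.get("query", {}).get("pages", {}).values():
            entry = pages.setdefault(page["title"], [])
            entry.extend(page.get("langlinks", []))
        if "continue" not in data:
            break
        # Feed the continuation parameters (llcontinue etc.) back in.
        params = dict(params, **data["continue"])
    return pages
```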
Jul 25 2019
It looks like it. Compare the following two URLs:
Jul 24 2019
@Usmanmuhd Looks like we're not passing all the required parameters to the API. Take a look at https://www.mediawiki.org/wiki/API:Langlinks, specifically llcontinue and lllimit. Once you pass those, you should get langlinks for results in subsequent requests after the first one.
Jul 16 2019
@Usmanmuhd that URL looks correct. As for sitelinks_count, you'd have to check out the MediaWiki API documentation. I can't remember off the top of my head.
Jul 13 2019
@Usmanmuhd Good call. I forgot about that. So we have to make two kinds of requests to the API: by seed and by pageviews. In each case we need to make a request that fetches Wikidata IDs, then we need to make a separate request to sort these IDs by sitelink_count. We cannot phase it out because the purpose of this API is to recommend the most appropriate articles for creation, and filtering by sitelink_count was deemed appropriate when the API was created.
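The two-phase flow above can be sketched as follows; both fetchers are stand-ins for the real API calls, and the function name is illustrative:

```python
def recommend(fetch_ids, fetch_sitelink_counts, limit=10):
    """Two-phase flow: first fetch candidate Wikidata IDs (by seed or by
    pageviews), then rank them by sitelink count so the most widely
    covered items are recommended first.

    fetch_ids() returns a list of QIDs; fetch_sitelink_counts(ids)
    returns a {qid: count} mapping (e.g. derived from sitelinks data).
    """
    ids = fetch_ids()
    counts = fetch_sitelink_counts(ids)
    return sorted(ids, key=lambda q: counts.get(q, 0), reverse=True)[:limit]
```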
Jul 12 2019
@Usmanmuhd is index meaningful? I don't think it matters whether you sort it or not. It's up to you.
Actually, @Usmanmuhd I also deleted node_modules and did npm install, then the command worked. Can you try again?
Looks like some dependencies may be missing. @Mholloway I see some recent patches by you. Any ideas on how to fix the above error?
Jul 5 2019
Jun 28 2019
You don't need action=wbgetentities if you use the URL I shared. That URL gives you all the information you need to display the results.
I think you can get all the info you need in one query. Here's an example when you have a seed article:
Jun 27 2019
Your MediaWiki API URL doesn't seem to filter out the disambiguation pages (unlike WDQS) and that item (Q4077077) points to a disambiguation page.
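If the WDQS-style disambiguation filter needed to be replicated on the MediaWiki API side, a post-hoc check of the item's claims could look like this. P31 ("instance of") and Q4167410 ("Wikimedia disambiguation page") are the conventional Wikidata identifiers for this check; the function name is illustrative:

```python
DISAMBIGUATION = "Q4167410"  # Wikidata's "Wikimedia disambiguation page" item

def is_disambiguation(claims):
    """Return True if any P31 ("instance of") claim marks the item as a
    disambiguation page. `claims` follows the Wikidata entity JSON shape."""
    for claim in claims.get("P31", []):
        value = claim.get("mainsnak", {}).get("datavalue", {}).get("value", {})
        if value.get("id") == DISAMBIGUATION:
            return True
    return False
```

Items like Q4077077 would then be dropped from the results before display.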
Jun 26 2019
It was determined that two surveys cannot have the same name (as was the case for the English and French surveys), so as a fix we appended "-af" to the surveys targeted at African countries. Though this is nice for statistical purposes, we had avoided it because it means that readers in African countries might see the survey again after answering. Dismissing or answering it the second time will dismiss it completely, though, and this is no different from a reader who switches browsers or clears their cache (who could also see the survey multiple times even after answering).
Wikidata has items and properties (not sure about other things). Items start with Q, and properties with P. So ?article schema:about ?item couldn't exclude Q4077077 because it's an item.
OK, let me take a look at it later today/tomorrow.
@Isaac do you want to divide up the wikis for testing? We'll have a little time to test before deploying everywhere.
Jun 25 2019
During code review we discussed disabling the button, but because that would require more coding time and complicate the codebase, we decided to use an alert box instead. So the user will be alerted when sending an empty response.
Jun 24 2019
Looks like the results are similar. I wonder if the MediaWiki API supports counting results rather than outputting them? Can you see if it does?
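For what it's worth, the MediaWiki search module (`list=search`) reports a total count in a `searchinfo.totalhits` field alongside the result list, so counting shouldn't require paging through all the results. A minimal parse of that response shape:

```python
def total_hits(response):
    """Extract the total result count from a list=search API response.

    The search module includes `searchinfo.totalhits` next to the
    results, so srlimit can be kept small when only the count matters.
    """
    return response["query"]["searchinfo"]["totalhits"]
```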
QS should be live on hewiki (labs) in about 30 mins.
Jun 22 2019
Marking tests as skipped doesn't take a lot of time and keeps us from getting false alerts. How long before you can get T216750 done?
Sorry about that, @Mholloway. We'll take care of the failing tests.
Jun 20 2019
QuickSurveys may not be enabled on hewiki. I should have checked that. Will check tomorrow.
Nice!
The survey will be live in about 30 mins.
Thanks, I forgot about the free-form text label. Can you create one more (hopefully the last) message: "Editor-gender-1-free-form-text-label"?
Jun 19 2019
@Isaac can you create two more messages, "Editor-gender-1-answer-man" and "Editor-gender-1-answer-woman", with the values "man" and "woman", respectively?
Deploying this change on 06/20, at 11:00–12:00 UTC.
Jun 18 2019
The production server has a different configuration file in this repository. Maybe that's overriding the query parameter.
@Usmanmuhd good job! The endpoint is working after deployment:
Thanks, I hadn't rebuilt the dependencies.
Yes, please move on to the next task.
@Usmanmuhd you're offline on IRC, so I'm replying here:
Jun 17 2019
- We'll need to deploy the changes and see if that fixes the problem in production.