Sat, May 21
Thanks to @matthiasmullie we now know that this API response cannot return the pageid: the pageid is not yet known when the API response is sent back, because it is allocated asynchronously afterwards.
Sun, May 8
However, wouldn't it be even simpler for the user if they did not need the page id at all, and could add claims using the Commons file name instead?
Thu, Apr 28
One helpful step in that direction would be to return the page id in the JSON response of a successful file upload. That should be fairly straightforward and would bring the number of requests required from three to two.
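For illustration, here is a sketch of what the upload response could then look like; the "pageid" field is the proposed addition (not current API behaviour), and all values are made up:

```
{
  "upload": {
    "result": "Success",
    "filename": "Example.jpg",
    "pageid": 12345
  }
}
```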
Apr 2 2022
I have not checked but I trust you!
Mar 13 2022
Closing this data import task because it was created for a training workshop and therefore does not need to be done.
Closing this data import task because it was created for a training workshop and therefore does not need to be done.
Closing this data import task because it was created for a training workshop and therefore does not need to be done.
Feb 23 2022
@Eugene233 this would be worth having a look at, I think :)
Feb 21 2022
Ok, thanks a lot! So, looking at those logs:
@Antonin would running the service with OpenRefine address your comment?
Feb 18 2022
Nice! Is that really all? I am pleasantly surprised! Can you also try the following (see the curl sketches after this list):
- A batch of reconciliation queries, with multiple files (so, converting multiple file names to mids)
- A data extension query with multiple mids, fetching multiple properties on them, one of which returns Wikidata items as values (for instance "depicts")
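Sketches of what those two queries could look like as curl commands against the deployed endpoint (the mids and file names are made up; P180 is "depicts" and P571 is "inception"):

```
# 1. Batch of reconciliation queries, converting multiple file names to mids:
curl 'https://commonsreconcile.toolforge.org/en/api' \
  --data-urlencode 'queries={"q0":{"query":"File:Example_A.jpg"},"q1":{"query":"File:Example_B.jpg"}}'

# 2. Data extension query with multiple mids and multiple properties,
#    one of which (P180, "depicts") returns Wikidata items as values:
curl 'https://commonsreconcile.toolforge.org/en/api' \
  --data-urlencode 'extend={"ids":["M123","M456"],"properties":[{"id":"P180"},{"id":"P571"}]}'
```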
Jan 28 2022
As pointed out by @Spinster, we need to update the property suggest endpoint so that captions are suggested there too.
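For instance, assuming the suggest service is exposed at /en/suggest/property (the exact path is declared in the manifest) and that "Cen" is the English-caption property id used elsewhere in this service, a suggest call could then behave like this (response shape per the suggest protocol, values illustrative):

```
curl 'https://commonsreconcile.toolforge.org/en/suggest/property?prefix=cap'

# Expected kind of response once captions are suggested:
{"result": [{"id": "Cen", "name": "Caption [en]"}]}
```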
Jan 26 2022
The free plan should be enough.
Jan 25 2022
Yes, we can suggest anything that is a property in the sense of the reconciliation service (which is broader than what Wikibase calls properties), including wikitext, captions and other things like that.
Jan 6 2022
Example request: https://commonsreconcile.toolforge.org/en/api?extend={"ids":["M12345"],"properties":[{"id":"Cen"}]}
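Following the data extension part of the reconciliation protocol, a plausible response for that request would be (the caption string is made up):

```
{
  "meta": [{"id": "Cen", "name": "Caption [en]"}],
  "rows": {
    "M12345": {"Cen": [{"str": "An example caption"}]}
  }
}
```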
Jan 5 2022
This should only be merged once the service is actually implemented (so I would say we could have had a single task instead of three).
One simple solution to this is to include the language code in the name of the reconciliation service, like I do in the Wikibase recon service: https://github.com/wetneb/openrefine-wikibase/blob/master/app.py#L130
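Concretely, each language-specific endpoint would serve a manifest whose name embeds the language code, along these lines (all values illustrative):

```
{
  "versions": ["0.1", "0.2"],
  "name": "Wikimedia Commons reconciliation service (en)",
  "identifierSpace": "https://commons.wikimedia.org/entity/",
  "schemaSpace": "http://www.wikidata.org/prop/direct/"
}
```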
Let's make progress on this one!
We could set up a downtime notifier to periodically send data extension queries to the service and check that the response is right.
This could be done via https://www.downnotifier.com/ for instance.
@Eugene233 could you set that up? We can then review the results in some weeks.
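Whatever monitoring tool is used, the check itself amounts to something like this sketch (the URL, the query and the expected marker are assumptions):

```
#!/bin/sh
# Hypothetical uptime check: send a fixed data extension query and
# complain if the response does not mention the requested mid.
URL='https://commonsreconcile.toolforge.org/en/api'
curl -s "$URL" \
  --data-urlencode 'extend={"ids":["M12345"],"properties":[{"id":"Cen"}]}' \
  | grep -q '"M12345"' || echo 'commons recon service check failed' >&2
```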
Dec 2 2021
One important thing to document is the properties supported by the reconciliation service (so that people know what they can fetch).
For instance, it would be difficult for people to discover that they can fetch the wikitext. Depending on the outcome of T296062 and T295278, there could be other things to document there.
I think this is solved!
Nov 19 2021
@Ladsgroup thank you so much, that is massively helpful!
I think we can close this task, since the goal was to do an import in the scope of a wiki event, and we should not expect people to import more data beyond the event itself.
In the Wikidata reconciliation service, users can fetch labels, descriptions and aliases for any language (regardless of which language version of the reconciliation endpoint they are using).
Nov 9 2021
Curious! We will investigate more.
Oct 26 2021
@Eugene233 I have deployed your implementation on Toolforge. If you try it out on the reconciliation testbench, you will see that it does not work yet: we are missing the CORS headers :)
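For reference, the testbench runs in the browser, so the service must send CORS headers such as Access-Control-Allow-Origin. A quick way to check (assuming the deployed URL):

```
curl -s -D - -o /dev/null 'https://commonsreconcile.toolforge.org/en/api' \
  | grep -i 'access-control-allow-origin'
# No output means the header is missing and browser-based clients are blocked.
```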
Oct 25 2021
To be honest, I do not suspect a lot of uptake of this reconciliation service outside of OpenRefine itself, because the service isn't actually doing any "reconciliation" (no fuzzy-matching): we just use it as a mechanism to develop the Commons integration in OpenRefine because of the current architecture of the Wikibase integration there. So I would just document it at https://commonsreconcile.toolforge.org/ if it were just me, but of course there is no harm in doing it on wiki too.
Oct 6 2021
@Eugene233 I think this has been solved, no?
@Eugene233 once you have deployed the latest changes to Toolforge, I think this ticket can probably be closed! It would be worth checking with https://reconciliation-api.github.io/testbench/ that the service works as expected before closing this, I guess.
Sep 17 2021
In the past I manually went through the list of people who had the userbox {{User loves OpenRefine}} on their Wikidata/Meta user page, to notify them of a new release or something like that. Maybe it's worth doing that again to point them to that page?
@Eugene233 it's not clear to me where Continuous Integration has been set up. For instance, looking at a patch like this one: https://gerrit.wikimedia.org/r/c/labs/tools/commons-recon-service/+/720971, where do I see the results of running the tests in CI?
Sep 15 2021
This is different from T257405 but broadly related: unexplained 502 errors that seem to be beyond the control of the tool authors.
Sep 13 2021
If someone takes the initiative to run a bot to propagate Wikidata redirects in Commons, they might be interested in integrating that bot with the EditGroups instance for Commons (https://editgroups-commons.toolforge.org/, which I just deployed), so that the effect of a bad merge can be reverted on Commons too, just like KrBot is integrated with the Wikidata instance of the tool: https://editgroups.toolforge.org/?tool=KrBotResolvingRedirect
And an instance can be tested here: https://editgroups-commons.toolforge.org/. You might not be able to log in there yet: my OAuth consumer credentials need to be approved first (this generally takes at most a few days).
Sep 10 2021
Here are some sample queries (formulated as curl commands) with the associated expected responses, following the reconciliation protocol specifications. In those examples I am assuming that the service runs at http://localhost:8000/ and that the reconciliation endpoint is http://localhost:8000/en/api.
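For instance, a minimal reconciliation query and a plausible response (ids and scores made up):

```
curl 'http://localhost:8000/en/api' \
  --data-urlencode 'queries={"q0":{"query":"File:Example.jpg"}}'

# Plausible response:
{
  "q0": {
    "result": [
      {"id": "M12345", "name": "File:Example.jpg", "score": 100.0, "match": true}
    ]
  }
}
```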
Other things the service should do:
- when the root path is requested (/), serve a description of the service as an HTML page
- when the root API endpoint is requested (such as /api, or /en/api if we want to make it language-dependent), serve the JSON manifest
Sep 3 2021
The 'feed OpenRefine some category names' scenario is by far the easiest for our end users IMO 😎
Another possibility would be to let people use the "Create project by fetching URL" functionality, and teach users to use it with URLs such as:
https://common-recon-service.toolforge.org/fetch_category?name=Category:My_Category
If we manage to document it in the right place, then I guess it can be workable (but it's not a super nice integration).
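For what it's worth, the simplest workable output for such an endpoint would be something tabular that "Create project by fetching URL" can ingest directly, e.g. one file name per line (hypothetical output):

```
File:My_Category_example_1.jpg
File:My_Category_example_2.png
File:My_Category_example_3.tif
```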
I understand the need but intuitively I don't see how that fits in the reconciliation service. Am I understanding correctly that this would be a new importer in OpenRefine? You would create a new OpenRefine project by supplying a category, and then you would get a new project with each row corresponding to a file in that category?