Sat, Feb 10
This bug seems to be fairly new; it was probably introduced by a recent code change. I have just run into it, and it is definitely new behavior.
Sun, Jan 28
Fri, Jan 26
I think that if we had a good pipeline to parse citations, a LOT of people would be interested in it, and yes, it would be sad to just throw the results away…
It would make sense, but I fear this might go against some enwp guidelines. Editors are free to choose between citation formats, but citations should be uniform within a given page, so migrating just a few citations on a page is discouraged: http://enwp.org/WP:CITEVAR.
Jan 15 2018
Note: as far as the templates are concerned, it has already been discussed a few times on wiki:
Dec 13 2017
Indeed, WikiProjects seem to encode the sort of information we are after. And there are categories for them, so it might be possible to use that with the API…
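To make that concrete, here is a rough sketch of how the category route could work with the MediaWiki API; the category name is just an illustrative example, and this only fetches the first batch (no continuation handling):

```python
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def category_members(category, limit=50):
    """Yield titles of pages in the given category (first batch only)."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params).json()
    for member in response["query"]["categorymembers"]:
        yield member["title"]

# Illustrative category name; WikiProjects typically tag pages via
# categories of this shape.
for title in category_members("Category:WikiProject Medicine articles"):
    print(title)
```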
@Samwalton9 wow, awesome! that's a big one.
Dec 12 2017
@Samwalton9 at some point OAbot proposed ResearchGate & Academia.edu links, so I guess that's when we introduced this regex, but no, I don't think we filter anything currently.
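For reference, a filter along these lines is what I have in mind; the exact regex in OAbot may differ, so treat this as a sketch:

```python
import re

# Sketch of a domain blacklist for candidate links; the domain list
# and the regex shape are assumptions, not OAbot's actual code.
BLACKLISTED_DOMAINS = re.compile(
    r"https?://(www\.)?(researchgate\.net|academia\.edu)/", re.IGNORECASE
)

def filter_urls(urls):
    """Keep only URLs that do not point to a blacklisted domain."""
    return [url for url in urls if not BLACKLISTED_DOMAINS.search(url)]
```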
Dec 1 2017
Hi @Samwalton9 - your refactoring is very sensible, but it switched from http://old.dissem.in to https://dissem.in. There was a bug in that API that I just fixed (I think), so you can try again… but there will be other issues down the line: for instance, this new API also returns ResearchGate URLs, which should probably be filtered out.
Thanks a lot, I had not realized scratch codes could be used in the same way as normal codes.
Nov 20 2017
At Wikicite we have also had a demo of Bilbo, which attempts to parse plain text citations to extract metadata:
I am in touch with the authors of this tool and they were interested in adapting it to wikitext (but no concrete plans yet).
Nov 15 2017
Hi, I have a similar issue. I have enabled 2FA for Wikitech but cannot figure out how to use the scratch codes to regain access to my account. Any idea?
Nov 13 2017
Yep, the issue was introduced when I deployed the new project structure - it's fixed now, I think.
This is not relevant for this project (it applies to the hashtag tool, not oabot).
Oct 30 2017
Oct 28 2017
@Nemo_bis thanks! Yeah, I get the idea, but there is quite a lot of work to get from such a prototype to something solid (for instance, in this state we would query the enwiki page every time we get a candidate edit)… Also, I think the simplest approach would be to store candidate edits in SQL directly, so that we could filter them out easily when we add a new link to the blacklist.
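Roughly what I have in mind, using sqlite3 purely for illustration; the table layout and column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect("oabot.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS candidate_edits (
           page_title   TEXT,
           citation     TEXT,
           proposed_url TEXT
       )"""
)

def purge_blacklisted(domain):
    """Drop cached candidate edits whose proposed link matches a newly
    blacklisted domain, so they are never shown to reviewers."""
    conn.execute(
        "DELETE FROM candidate_edits WHERE proposed_url LIKE ?",
        ("%" + domain + "%",),
    )
    conn.commit()

purge_blacklisted("researchgate.net")
```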
Oct 27 2017
Thanks a lot! It looks great to me. I will merge in the hope that it does not go against WMF's policy for third-party content: T166604.
Oct 24 2017
I have regenerated them with ./manage.py makemessages -l qot which is cleaner.
Oct 23 2017
Weird. Here is what I see:
Do you spot anything wrong? I just deleted and resent the invite.
Here is a link to the invite: https://github.com/dissemin/dissemin/invitation
Note that I haven't invited translatewiki to the organization but to the repository itself.
@Nikerabbit I invited translatewiki's GitHub user to the GitHub project yesterday; should I invite any other username?
Oct 17 2017
@Ocaasi_WMF pong, done. For the favicon you can just use the approach from https://stackoverflow.com/questions/9943771/adding-a-favicon-to-a-static-html-page#9943801 in the <head></head> section.
Sep 27 2017
@Lydia_Pintscher that makes sense. Okay, thank you both, we are on the same page! Given all these tickets on the topic, I was worried that I had missed something obvious about this issue…
@Smalyshev thanks for your quick reply! Just for clarity, I am not personally working on the PST; I was just trying to find out if there is any established way to use RDF to represent a data import. If there is, then other tools could use that format too (for instance, OpenRefine could export datasets to it). I'd be happy to work on that, but I can only do it if an RDF model is agreed on.
@Lydia_Pintscher , @Smalyshev and @Tpt : is there any info about how RDF is expected to behave as an import format for Wikidata? As far as I can tell, the RDF that gets fed into the Query Service is not designed for import at all:
Sep 25 2017
Sep 23 2017
The hashtags tool misses some edits (see https://github.com/hatnote/hashtag-search/issues/20#issue-259883134), so I went for internal tracking: https://tools.wmflabs.org/oabot/stats
Sep 19 2017
Sep 10 2017
Sep 7 2017
Aug 28 2017
Aug 15 2017
For 1., that should normally already be the case: if there is an arXiv or PMC link, no edit should be proposed on that citation (note: PMC != PubMed). See the sketch after this list.
For 2., it's not clear: you are relying on the assumption that at some point oadoi.org would become the default DOI resolver on Wikipedia, I guess?
For 3.: definitely!
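To illustrate point 1., here is the kind of check I mean, assuming citations are plain wikitext strings; the parameter names (|arxiv=, |pmc=) are the usual {{cite journal}} ones, but this is illustrative only:

```python
import re

# Match a non-empty |arxiv= or |pmc= template parameter in the citation.
HAS_FREE_LINK = re.compile(r"\|\s*(arxiv|pmc)\s*=\s*\S+", re.IGNORECASE)

def should_skip(citation_wikitext):
    """Return True if the citation already carries an arXiv or PMC
    identifier, in which case no edit should be proposed."""
    return bool(HAS_FREE_LINK.search(citation_wikitext))
```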
Jul 24 2017
Jul 23 2017
@akosiaris It works like a charm, thanks a lot!
Jul 22 2017
Jul 18 2017
Jul 17 2017
@akosiaris It will all be much clearer when the app is hosted somewhere. I am using Leaflet.js to display a large image (not a geographical map) via tiles generated by my application, so no PostGIS is required. These tiles are generated from the following table:
Jul 16 2017
Jul 15 2017
Jul 14 2017
@bd808 okay, that totally makes sense. Then I will kindly ask @akosiaris whether I could have a database on his Postgres instance.
I am aware of the OSM powered tile server, but mine creates tiles based on data extracted exclusively from Wikidata.
@bd808 Oops, sorry about that!
Sure, but isn't it cleaner with the native field types Postgres provides?
I am also a bit concerned about the efficiency of Tools Labs for this application, because the app provides a tile server (which represents a map of IP space). Tiles are rendered on the fly from the IP ranges stored in the database.
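For context, the tile endpoint looks roughly like this (a minimal sketch, assuming Flask; the rendering helper is a placeholder for the actual application logic):

```python
from flask import Flask, Response

app = Flask(__name__)

def render_tile_from_ranges(z, x, y):
    # Hypothetical helper: the real application queries the IP-range
    # table and draws the tile; here it returns an empty payload.
    return b""

@app.route("/tiles/<int:z>/<int:x>/<int:y>.png")
def tile(z, x, y):
    """Render the tile at (z, x, y) of the IP-space map on the fly."""
    return Response(render_tile_from_ranges(z, x, y), mimetype="image/png")
```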
May 31 2017
Out of curiosity, is it possible to run tools from WMF Tools Labs with a custom domain name?
Fixed, thanks for reporting.
May 25 2017
This is a bit hard with the current infrastructure because we are caching suggestions, so we would need two caches.
It would be nice to do this with Bilbo: we just need to know how to convert wikicode to plain text.
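One possible starting point is the mwparserfromhell library, whose strip_code() removes templates, links, and markup; whether its output is clean enough for Bilbo would need testing:

```python
import mwparserfromhell  # pip install mwparserfromhell

def wikicode_to_text(wikitext):
    """Strip templates, links, and markup, keeping the readable text."""
    return mwparserfromhell.parse(wikitext).strip_code()

print(wikicode_to_text("She wrote ''[[Nature (journal)|Nature]]'' papers."))
# -> "She wrote Nature papers."
```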
@Tpt is starting to work on this, as a Wikidata Distributed Game!!
May 11 2017
For 4., we should not require that the paper be an author manuscript or anything like that, because some publishers allow the published version to be archived as well. We should only say that the document must not infringe copyright law, whatever that means in an international context.
Apr 29 2017
Apr 26 2017
Thanks for the PR! I have merged it and updated the tool.
Apr 25 2017
Absolutely! We can disable the GitHub issues and use Phabricator instead.
Apr 18 2017
Apr 4 2017
@valhallasw Thank you SO MUCH!!!
Apr 3 2017
Could any of the maintainers on PyPI please do this? @valhallasw, @Xqt, @jayvdb, @XZise, @Legoktm, or @hashar
Or add anybody else on PyPI who can do it for them. I am happy to do it if I am given the rights. This is really important! People keep reporting issues that are duplicates of this one.
Apr 1 2017
This issue is resolved; the feature was released in OpenRefine 2.7 rc1.
Mar 27 2017
Jan 2 2017
@Multichill: I reopened it because the issue is still not solved in the latest version of the library on PyPI. Please just do a new release!
Nov 9 2016
Lokal_Profil: thanks, yes indeed, that's what I did. I hope someone will be able to make a new release on PyPI soon.
Hi, I still have the error above with the latest version of pywikibot on PyPI.
When logging in (from wmflabs, with a bot account), I get the KeyError exception. Applying the hack above does not solve the problem, because I then get another exception: