Thu, Jul 20
@Reedy yep, that's pretty much all of it. Also, JsonConfig is mostly a refactoring of the original ZeroAccess extension, which was originally implemented ~7? years ago and has seen many changes since. It grew so big that at one point I simply split it into three extensions - JsonConfig, ZeroPortal, and ZeroBanner.
Wed, Jun 28
Same as Kartotherian - no one. I volunteer on occasion when I have some time at my new job, and so does Max. A few more amazing individuals have volunteered their dev time to keep it afloat. @Pnorman is working on the new map style and fixing some data-related problems. @Gehel watches over the servers. This is what I have observed by watching the tickets and the public channels.
Jun 20 2017
This task is still valid, but it needs to wait for the jQuery update. Reopening and making it the parent of the other one.
Jun 15 2017
I might have some time to poke at it today or tomorrow. I know it's hard for the WMF to do any map work after it got rid of all the map devs, so volunteers would have to step in.
Jun 10 2017
May 31 2017
May 26 2017
May 24 2017
May 23 2017
If I am adding non-Wikidata data into the same Blazegraph DB, should prefixes.conf be modified?
May 19 2017
May 9 2017
@Aklapper I meant that a Phab ticket is a much more convenient way to manage reminders and organize/plan/track work. The review of the code should happen in the code itself, which in this case is actually GitHub (it has an excellent review system).
May 8 2017
Some notes from @Smalyshev in IRC:
manual watching wouldn't work, but auto-creating Phab tickets based on pull requests should solve it. We must allow the community to contribute in the way it feels most comfortable with. We shouldn't require community members to learn an obscure tool (Gerrit) just to submit a two-line patch.
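The auto-creation idea could be sketched as a tiny webhook handler: take a GitHub `pull_request` event and file a Phabricator task through the Conduit API. The field names follow Conduit's `maniphest.createtask` method, but the host, token, and repository values below are placeholders:

```python
import urllib.parse
import urllib.request

PHAB_URL = "https://phabricator.example.org/api/maniphest.createtask"  # placeholder host
API_TOKEN = "api-xxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder Conduit token

def task_payload(pr: dict) -> dict:
    """Map a GitHub pull_request webhook payload to Conduit task fields."""
    return {
        "api.token": API_TOKEN,
        "title": f"Review PR #{pr['number']}: {pr['title']}",
        "description": f"Pull request: {pr['html_url']}",
    }

def create_task(pr: dict) -> bytes:
    """POST the task to Phabricator's Conduit API (makes a network call)."""
    data = urllib.parse.urlencode(task_payload(pr)).encode()
    with urllib.request.urlopen(PHAB_URL, data=data) as resp:
        return resp.read()
```

`create_task` would be called from whatever endpoint receives the GitHub webhook; the interesting part is just the one-to-one mapping from PR metadata to a task.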
Kartographer is enabled, but not the mapframe support
May 3 2017
I am still maintaining it (e.g. redirect support that @MaxSem and I are working on), but obviously not as much as before. I agree with @Tgr - the main problem is that it is very hard to test. It works across wikis, including secure wikis, via shared cache and HTTP with authentication. With Zero, Varnish also plays a big role -- changing request headers on the fly, and performing IP lookups.
Apr 28 2017
Apr 26 2017
Apr 24 2017
@MaxSem, the code seems to be OK, but I haven't fully tested it.
Apr 20 2017
Seems like the new property hasn't been created yet, or has it?
Apr 19 2017
JCMapDataContent relies on both JsonConfig and Kartographer, but while it can still function properly without Kartographer, it cannot function at all without JsonConfig. This is similar to how the MediaWiki API uses syntax highlighting - it makes the output nicer, but is not required for the API to work.
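The hard vs. soft dependency split can be illustrated in a language-neutral way (this is an illustrative Python sketch, not the actual extension code): a missing optional component only degrades the output, while a missing required one is a fatal error.

```python
import json

def render(data, highlighter=None):
    """Render `data` as JSON text.

    `data` is required (hard dependency, like JsonConfig);
    `highlighter` is optional (soft dependency, like Kartographer)
    and only improves the presentation.
    """
    if data is None:
        raise ValueError("cannot render without data")  # hard dependency missing
    text = json.dumps(data, indent=2)
    if highlighter is not None:
        text = highlighter(text)  # nicer output, but entirely optional
    return text
```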
Apr 18 2017
was done a while ago
@JGirault, I think it got fixed. Thx for checking, closing.
Apr 7 2017
This post might help: https://medium.com/@adamhooper/fonts-in-node-canvas-bbf0b6b0cabf
I don't think there should ever be a need for merging any service-template changes with the actual services. That's what I would like to avoid with this restructuring.
Of the listed candidates, routes seem to be the least obvious fit for this description. Users are expected to write similar routes or modify existing ones, so keeping them in the template might make more sense.
Agree for v1 -- it is clearly an example, but I think the _info and robot file generating ones should be part of the lib, as they are mostly boilerplate and do not require any changes.
@mobrovac, I am not proposing to make 100 tiny npm libs. I am suggesting that we make a few "template libraries" and move all the code there. This way, services based on the template can be easily upgraded to newer versions. Right now there is no way to merge without very tedious line-by-line conflict resolution.
Apr 6 2017
@Pchelolo I agree that many of these things should not be their own modules. On the other hand, I don't think we should keep them as part of a copy/paste template. Instead, let's move them all into a "template utility" lib, which would allow existing template instances to simply reference it, and would let you improve them and migrate to new versions in a more controlled way.
I see two ways of deploying: either as a submodule, or as part of the Docker build script.
- The submodule is fairly straightforward, because kartotherian itself is already deployed as one: https://phabricator.wikimedia.org/diffusion/GMKD/browse/master/ -- the src/ dir is a submodule. You can create a few more top-level submodules - tm2, tm2source, fonts.
- The Dockerfile is a bit trickier - it is autogenerated from the package.json, so adding things to it is not very straightforward, especially when we want to keep the plugins (e.g. meddo) separate from the core Kartotherian.
Mar 30 2017
Done upstream -- added special handling for this in @kartotherian/server. A basic test on the California coast showed a drop from 1.8 MB to 855 KB.
Mar 28 2017
Now that Kartotherian has an editor module, it should be very easy to set it up on a wmflabs machine, pre-set with maps.wikimedia.org and the "shorter" servers as the source of tiles. The editor has an "Inspect mode" that allows in-depth examination of each feature of a vector tile.
Mar 24 2017
Mar 22 2017
Mar 18 2017
Mar 15 2017
@Pnorman are you adding an index on the OSM ID for updates? Also, why an index on geometry? I don't see a use case for that yet (it can always be added later if needed).
I suggest you use a Python script to get the data from overpass-turbo, like this one (I will upload the new version today that includes nodes and ways). I use that script to validate that OSM's wikidata IDs match Wikidata instance-of and possibly other properties.
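A minimal version of such a script might look like this. The Overpass endpoint and the Overpass QL syntax are real, but the exact filters in the script referenced above may differ:

```python
import json
import urllib.parse
import urllib.request

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def build_query(bbox: str) -> str:
    """Overpass QL: all nodes and ways carrying a `wikidata` tag in a bbox.

    `bbox` is "south,west,north,east" in decimal degrees.
    """
    return f"""
[out:json][timeout:60];
(
  node["wikidata"]({bbox});
  way["wikidata"]({bbox});
);
out tags;
"""

def fetch_wikidata_ids(bbox: str) -> list:
    """Run the query and return the wikidata tag values (network call)."""
    data = urllib.parse.urlencode({"data": build_query(bbox)}).encode()
    with urllib.request.urlopen(OVERPASS_URL, data=data) as resp:
        result = json.load(resp)
    return [el["tags"]["wikidata"] for el in result.get("elements", [])]
```

The returned Q-IDs can then be checked against Wikidata (e.g. instance-of via SPARQL) in a second step.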
Mar 14 2017
@Nemo_bis, what alternative to template translation are you suggesting?
Mar 11 2017
Sounds great. The source loader can consume multiple source files, so Tilerator could use the production file plus some more sources.
Mar 10 2017
In case it is needed, Kartotherian has a module to combine multiple sources based on zoom - e.g. data for zooms 0..4 can come from one source, and 5+ from another.
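The behavior is simple to express (this is only an illustration of the idea, not Kartotherian's actual configuration syntax or module):

```python
def pick_source(zoom: int, ranges: list) -> str:
    """Return the source whose [min, max] zoom range covers `zoom`.

    `ranges` is a list of (name, min_zoom, max_zoom) tuples, e.g. one
    pre-generated source for low zooms and a live one for the rest.
    """
    for name, lo, hi in ranges:
        if lo <= zoom <= hi:
            return name
    raise ValueError(f"no source covers zoom {zoom}")

# Hypothetical source names, mirroring the 0..4 / 5+ split above.
ranges = [("gen-low", 0, 4), ("osm-live", 5, 18)]
```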
Mar 9 2017
Mar 8 2017
@Smalyshev you are right that it shouldn't be random. Instead, we could establish a well-known list of fallback languages. I would argue that Latin-based languages should come first in that list, followed by others ordered by "closeness" to the Latin alphabet - e.g. if there is no label in a Latin-script language, use the script that has the highest number of speakers or Wikipedia readers but is closest to Latin. E.g. Russian probably before Greek, but Greek before Chinese. Or something along those lines. It really doesn't matter what order we choose, as long as there is a way to get something. Having nothing is always the worst.
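Applying such a list is straightforward; a sketch (the ordering below is purely illustrative, not a vetted fallback chain):

```python
# Example fallback order: Latin-script languages first, then scripts
# "closer" to Latin before those further away - illustrative only.
FALLBACK_ORDER = ["en", "fr", "de", "es", "ru", "el", "zh"]

def pick_label(labels: dict, preferred: str) -> str:
    """Pick a label in the preferred language, else walk the fallback list.

    `labels` maps language codes to label strings. Returning *something*
    always beats returning nothing.
    """
    for lang in [preferred] + FALLBACK_ORDER:
        if lang in labels:
            return labels[lang]
    # Last resort: any available label rather than nothing at all.
    return next(iter(labels.values()), "")
```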
Mar 7 2017
For those who work with the data extensively, could we have an easy way to copy wikidata IDs without navigating to them? Goal: when viewing an item, to be able to quickly copy Pnnn and Qnnn values of any statement. This means that when showing that Q or P number, it should not be a link (links are much harder to select). Thanks!
@MaxSem that extra blob of json can be added to the sources.prod.yaml file - it supports metadata injection.
Quite a few users have been requesting this. The Vega graphs already support this boxing mode; it just requires an extra param in the spec. @JGirault, what would happen if the actual image is bigger than the size you auto-detect? Will it auto-grow? I think the best way to give this option to users (literally two people asked me about it last night) is to make it optional instead of on by default. That way graph template authors can easily use this functionality when they design graphs that will work with it.
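For reference, in later versions of Vega this kind of fit-to-box behavior is controlled by the spec-level `autosize` property; whether that is the exact param meant here depends on the Vega version in use, so treat this fragment as a hypothetical example:

```json
{
  "width": 400,
  "height": 200,
  "autosize": {"type": "fit", "contains": "padding"}
}
```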
@Lydia_Pintscher let's not close it, but reassign it to hovercard as one of the requirements. Is there a tag for it?
Mar 6 2017
Mar 5 2017
Mar 1 2017
Sure, all existing tech can be used for this. I would suggest first creating a table using a .tab page on Commons. That table should probably have a countryId (string) column (values like "US", "FR", ...), and you can add all sorts of other fun columns there - like the number of images uploaded? Organizer1,2,3? Basically, think of a spreadsheet: whatever fits into a table structure, you can add there. Once the data is figured out, you can create both a graph and a table (wiki markup) from it. The graph would use the table for the list of countries to highlight (and possibly make it proportional if you want some sort of a competitive map), and the Lua modules could use that same data to generate the list of participating countries, etc.
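A Commons tabular (.tab) page of that shape could look roughly like this; the `license`/`schema`/`data` structure follows the tabular data format, while the column names beyond countryId are made up for the example:

```json
{
  "license": "CC0-1.0",
  "description": {"en": "Per-country participation (example)"},
  "schema": {
    "fields": [
      {"name": "countryId", "type": "string", "title": {"en": "Country"}},
      {"name": "imagesUploaded", "type": "number", "title": {"en": "Images uploaded"}}
    ]
  },
  "data": [
    ["US", 1234],
    ["FR", 987]
  ]
}
```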
Feb 28 2017
@Lydia_Pintscher, showing the description assumes that one is given for each item, which is never the case. Any time I search in Wikidata, it shows me a useless Qnnn, or at most a label, because the search does not use language fallbacks. P-31/P-279 have a much higher chance of having an informative label/description than the item itself, especially in the language I'm searching in.
Feb 27 2017
Feb 24 2017
Feb 22 2017
@Smalyshev the issue here is really about the location of coordinates. Commons datasets and mapframe/maplink tags may contain all of these: tags, points, and shapes. Wikidata cannot contain shapes, but can contain the rest. OSM cannot contain anything that is outside its scope (like historical features, zip code areas, animal migration paths, etc.). So the question is: should Wikipedia allow point coordinates to be retrieved from OSM - in other words, treat a single [longitude, latitude] coordinate pair as an object that can be referenced by a Wikidata ID - or should that pair be stored in all the other places? Geoshapes cannot be stored in Wikidata, hence it is natural for them to be stored everywhere else. In a way - should we normalize or denormalize point data? Shapes clearly should be normalized.
Feb 21 2017
This is easy enough to fix by adding the data to memcached when saving, just like we do in Graphoid. Moreover, this can be done at the JsonConfig level.
I'm a bit unsure whether there should be node coordinate support in maps. Our main use case is to prevent significant data duplication by reducing complex geometries (e.g. the outline of a country, city, or river) to a single Wikidata ID, or even better - to a SPARQL query that gets that ID. So we prevent duplication by getting the geometry from OSM. In other words: OSM and .map datasets store geometries, while Wikidata and .map datasets can store data points. Note that we already have a limitation - OSM can only provide geometries, not the associated tags like names, population, etc. -- all that data can only come from Wikidata. I think we should continue this split -- simple data comes from wiki sources (Wikidata, .map datasets, or directly in <mapframe>/<maplink>), while complex geometries should not be duplicated and should come from OSM if available.
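On the wiki side, this split surfaces as Kartographer's ExternalData references: a shape is pulled from OSM by Wikidata ID rather than pasted inline in the tag. The ID below is a placeholder:

```json
{
  "type": "ExternalData",
  "service": "geoshape",
  "ids": "Q12345"
}
```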
Feb 17 2017
@JGirault, looks awesome, thanks!
Feb 14 2017
Feb 10 2017
I just posted a question to the community on how to handle language fallbacks. Also, I got it to run on my machine. :)
Feb 9 2017
@Pnorman the geoshape service accesses both the line and polygon tables. If we can generate an alternative data source for shapes, that would be good (because we could also solve the bug with non-closed relations like roads and rivers).