I think this is done now?
I just checked in Firefox 64, and it seems to work fine now.
Thu, Dec 13
Also got this one today:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:3332)
	at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
	at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:472)
	at java.lang.StringBuffer.append(StringBuffer.java:310)
	at java.lang.StringBuffer.append(StringBuffer.java:97)
	at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:486)
	at java.lang.StringBuffer.append(StringBuffer.java:338)
	at java.util.regex.Matcher.appendReplacement(Matcher.java:890)
	at java.util.regex.Matcher.replaceAll(Matcher.java:955)
	at java.lang.String.replace(String.java:2240)
	at org.wikidata.query.rdf.tool.rdf.UpdateBuilder.bindValue(UpdateBuilder.java:40)
	at org.wikidata.query.rdf.tool.rdf.RdfRepository.syncFromChanges(RdfRepository.java:308)
	at org.wikidata.query.rdf.tool.Updater.handleChanges(Updater.java:214)
	at org.wikidata.query.rdf.tool.Updater.run(Updater.java:129)
	at org.wikidata.query.rdf.tool.Update.run(Update.java:163)
	at org.wikidata.query.rdf.tool.Update.main(Update.java:88)
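The trace shows the heap being exhausted inside String.replace, which on Java 8 delegates to Matcher.replaceAll and accumulates the whole result in a StringBuffer that grows by array doubling, so binding a large value into a large template can transiently need several copies of the full string. A minimal sketch of a lower-overhead, single-pass substitution; this is hypothetical, not the actual UpdateBuilder.bindValue code:

```java
public final class TemplateBind {
    // Replace every occurrence of `placeholder` in `template` with `value`
    // without going through the regex machinery that String.replace uses.
    // A pre-sized StringBuilder avoids most intermediate buffer doublings.
    static String bind(String template, String placeholder, String value) {
        StringBuilder out =
                new StringBuilder(template.length() + value.length());
        int from = 0;
        int at;
        while ((at = template.indexOf(placeholder, from)) >= 0) {
            out.append(template, from, at); // copy text before the match
            out.append(value);              // splice in the bound value
            from = at + placeholder.length();
        }
        out.append(template, from, template.length()); // trailing remainder
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(
                bind("SELECT * { %values% }", "%values%", "wd:Q42 wd:Q64"));
    }
}
```

This doesn't shrink the final string, of course; if the bound values themselves are huge, the real fix is streaming the update body rather than building it in memory.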
It is the standard MediaWiki title encoding: however MediaWiki represents the title, WDQS represents it the same way, since MediaWiki (Wikidata) is the data source.
This was probably due to the delete channel not working; it should be OK now.
Submitted https://github.com/blazegraph/database/issues/110 for upstream.
Wed, Dec 12
This is fixed in Sesame 2.8.0; unfortunately, Blazegraph does not build cleanly with it... I'll have to look into how to upgrade.
This seems to be an issue in the Sesame library: it quotes commas when they are part of a literal value, but not when they are part of a URI.
Tue, Dec 11
Sounds good to me.
Given that this seems to be very rare, dropping the priority a bit.
Mon, Dec 10
Added Ctrl-Alt-Space and Alt-Enter as secondary shortcuts for now.
I see a JS error: this._querySamplesApi.getLanguage is not a function. There were some changes related to this recently, so it's possible something broke there.
Fri, Dec 7
Also, it seems like /srv/wdqs/blazegraph is created with the wrong permissions by default; Blazegraph cannot write its store file there.
The one that I've posted, however, is not easily replaceable with the MW API, I think.
If you know you usually have to type 6-8 characters before anything good comes up, that's what you're going to do, without looking at the suggestions.
@Pchelolo Yes, in this case MWAPI is probably better because WDQS has no idea about secondary domains like beta, test, etc. We could in theory set it up, but using MWAPI is probably much easier.
No, WDQS data can't be newer than Wikidata data because WDQS is updated from Wikidata.
Could you add some more details about how the API uses WDQS so I could see how this could be fixed/changed/improved?
Wed, Dec 5
Looks like the errors are no longer happening with the change above, and since I see no noticeable change on the dashboard tracking entity fetch times, I think the problem is solved.
This query cannot work as described: the SERVICE wikibase:label clause is in one query (inside the federated query) while the labels are in another query (outside), so the inner service does not know anything about them.
Tue, Dec 4
The query is captured in F27383365.
SPARQL dumps show that data is present in SPARQL but not in the database. Filed https://github.com/blazegraph/database/issues/109 with upstream and will dig into it further to see what we can find out there.
I've reduced the pool lifetime to 1s (which, if I understand correctly, should be essentially the same as having no pooling at all); let's see what happens. I've also looked through the code, and I don't see any way we could be leaking connections (I could be wrong, of course), so whatever is happening seems to be happening outside the Updater code.
This looks like it can be caused by T210901: Stale reads for WDQS Updater.
Why are you trying to use federation when running on the same service? Can't you just use a subquery?
Mon, Dec 3
RDF dumps confirm that the data is coming through fine.
Thanks, I'll try to play with the connection pooling, see what happens, and report back here.
@lmarlier wouldn't that be slower? But I could try that too I guess.
I don't think we need to support fbcid in query strings themselves, but I see no problem with the UI stripping it when translating the hash to the query.
We could add another param to ensure the latest is retrieved?
Sat, Dec 1
@Gehel do you know any maven magic to make this work?
Fri, Nov 30
Generally we do need this host, but in light of T206636 I haven't been using it much lately because it ran out of space and generally fell into disrepair. We'd need some host that is at least close to what we've got in T206636, if we can't do anything better, for experiments on labs. But right now wdqs-test is mostly useless, so it can be shut down and then, hopefully, re-created in a useful form. I've checked, and it doesn't look like there's any useful data there, so you can shut it down whenever necessary.
Thu, Nov 29
Prefixes listed here: https://www.mediawiki.org/wiki/Wikibase/Indexing/RDF_Dump_Format#Prefixes_used should probably be configurable, except for the wikibase one, which should probably stay the same. The configuration should allow easily generating the set of prefixes from a common URI prefix, since they all share the same one.
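Such a generator could be a small helper along these lines. This is a hypothetical sketch, not existing WDQS code; PrefixConfig, SUFFIXES, and declarations are made-up names, though the suffixes themselves follow the RDF dump format page above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public final class PrefixConfig {
    // Prefix name -> path suffix under the shared base URI. These suffixes
    // mirror the entity-related prefixes from the RDF dump format docs.
    private static final Map<String, String> SUFFIXES = new LinkedHashMap<>();
    static {
        SUFFIXES.put("wd", "entity/");
        SUFFIXES.put("wds", "entity/statement/");
        SUFFIXES.put("wdref", "reference/");
        SUFFIXES.put("wdv", "value/");
        SUFFIXES.put("wdt", "prop/direct/");
    }

    // Build SPARQL PREFIX declarations from a single common base URI,
    // e.g. "http://www.wikidata.org/" for Wikidata itself.
    static String declarations(String base) {
        if (!base.endsWith("/")) {
            base = base + "/";
        }
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : SUFFIXES.entrySet()) {
            sb.append("PREFIX ").append(e.getKey())
              .append(": <").append(base).append(e.getValue()).append(">\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(declarations("http://www.wikidata.org/"));
    }
}
```

Pointing the same generator at a different Wikibase installation's base URI would then yield a consistent prefix set for that instance, while the wikibase: ontology prefix stays fixed.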
In the meantime, can I delete t206636-3?
The dist.xml builds a distributable package, but we don't use it to deploy. We use the /wikidata/query/deploy repo, which should contain all the files that need to be deployed.
Wed, Nov 28
@Lea_Lacroix_WMDE yes, this is possible.
Hmm, looks like newer items are affected too, so it's probably an instance of T210044: Data corruption when loading RDF data into WDQS.