
What to do if the WDQS does not synchronise with Wikibase?
Closed, Invalid · Public

Description

I am running a Wikibase container at http://185.54.113.154:8181/wiki/Main_Page. I have populated this Wikibase instance with 176,113 items using a bot written with WikidataIntegrator.
However, after completion the WDQS remains empty.

During the bot import the WDQS crashed and I had to restart the Docker image. Can the WDQS be synchronized afterwards?

Event Timeline

Hmm, if WDQS just stops, you should be able to restart it and it should pick up from the last point in recent changes that it took data from.
What error did the query service / updater crash out with?

What happens when you restart the container?
Remember the data is stored in the wdqs container / service and the updater is run within the updater container, so if the updater crashes the wdqs itself should still be running.
Also, when restarting these things you need to restart both containers: if you just restart wdqs, no updating will happen; you also need to start the updater.
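
For example, a minimal restart of both services together might look like this (a sketch, assuming the default wikibase-docker service names):

# Restart the query service and its updater together;
# restarting only one of them leaves updating stalled.
docker-compose restart wdqs wdqs-updater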

In any case, as long as you don't have more than 30 days of edits (i.e. all of your edits are still accessible via recent changes), you should be able to ditch both the wdqs updater and the wdqs container (and their data), start new ones, and all of the data will be picked up from recent changes.
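
A sketch of that reset, assuming the default wikibase-docker service names and that the wdqs data lives in the query-service-data volume (the volume prefix depends on your compose project directory name):

# Stop and remove the query service and updater containers
docker-compose rm -s -f wdqs wdqs-updater
# Drop the Blazegraph data volume so the store starts empty
docker volume rm wikibasedocker_query-service-data
# Recreate both; the updater replays edits from recent changes
docker-compose up -d wdqs wdqs-updater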

See T182394, which allows the number of days that recent changes can be used for to be configured, as well as forcing an initial run time to be added to the query service, which could also help out here or in the future (although these changes are not in the images yet).

I actually don't know when things went wrong. I launched all containers using docker-compose; upon initiation everything seemed to work. I then launched a bot to populate the Wikibase instance, which ran overnight up to around 180k items. When I then tried to run some SPARQL queries, the WDQS gave an nginx error, which I solved by relaunching all containers.

I have restarted all with the following commands:

ubuntu@wikibase:~/wikibase-docker$ sudo /usr/local/bin/docker-compose down
Stopping wikibasedocker_wdqs-updater_1  ... done
Stopping wikibasedocker_wdqs-frontend_1 ... done
Stopping wikibasedocker_wikibase_1      ... done
Stopping wikibasedocker_wdqs-proxy_1    ... done
Stopping wikibasedocker_mysql_1         ... done
Stopping wikibasedocker_wdqs_1          ... done
Removing wikibasedocker_wdqs-frontend_1 ... done
Removing wikibasedocker_wikibase_1      ... done
Removing wikibasedocker_wdqs-proxy_1    ... done
Removing wikibasedocker_mysql_1         ... done
Removing wikibasedocker_wdqs_1          ... done

To be sure, I did a reboot, followed by:

ubuntu@wikibase:~/wikibase-docker$ sudo /usr/local/bin/docker-compose up --no-build -d
wikibasedocker_wdqs_1 is up-to-date
wikibasedocker_mysql_1 is up-to-date
wikibasedocker_wdqs-proxy_1 is up-to-date
wikibasedocker_wikibase_1 is up-to-date
wikibasedocker_wdqs-frontend_1 is up-to-date
wikibasedocker_wdqs-updater_1 is up-to-date

I waited an evening for the wdqs to pick up. Unfortunately, still no updates.

Am I missing a specific command?

So you can have a look at the logs for everything (check out the docker-compose logs command); a paste of those would help pin down what happened.
Although if you have already run 'down' then the containers have been removed and the logs won't exist any more.
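
For reference, something like this dumps the updater's recent log output (assuming the default service name):

# Show the last 100 lines of the updater log
docker-compose logs --tail=100 wdqs-updater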

When I then tried to run some SPARQL queries, the WDQS gave an nginx error

What error exactly? Again you can see what was happening in cases like this by looking at the logs of the containers.

I waited an evening for the wdqs to pick up. Unfortunately, still no updates.

It looks like you are running into the exact issue that T182394 was filed for (though that is a guess without seeing the logs).
Per T182394#3938346, the fix we need will be in 0.3.0 of the query service, which is currently blocked on T178712.

Also, how much memory does the instance have that is running all of the docker containers?
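
A quick way to check, from the host:

# Total and available memory on the host
free -h
# One-off snapshot of per-container memory usage
docker stats --no-stream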

Addshore triaged this task as Medium priority. Jun 26 2018, 3:52 PM

So, this should all be documented, but perhaps we need to more clearly link to the query service docs in the wdqs docker images readme and maybe make it explicit how to move the needed files between 2 containers.
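
Moving files between two containers generally goes via the host with docker cp; a rough sketch (the /wdqs/data path matches the query-service-data mount, the wikidata.jnl journal name and the target container name are assumptions):

# Copy the Blazegraph journal out of the old wdqs container...
docker cp wikibasedocker_wdqs_1:/wdqs/data/wikidata.jnl ./wikidata.jnl
# ...and into a new one
docker cp ./wikidata.jnl wikibasedocker_wdqs_1_new:/wdqs/data/wikidata.jnl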

I would like to re-open the ticket/issue.
I am running and/or have access to several Wikibase instances. After loading data via a bot, no data appear to be queryable at the WDQS endpoint.
I tried different forms of queries, even trying all possible forms of prefixes, modelling them on those available for Wikidata (a bit "out of desperation"). No query, for any prefix, returned any result.
Even taking down all services and restarting the machine did not solve the problem on the Wikibase instances I can control.

As for instances openly available, for example consider the query

https://orig-query.wmflabs.org/#PREFIX%20wb%3A%20%3Chttp%3A%2F%2Forig-query.wmflabs.org%2Fentity%2F%3E%0APREFIX%20p%3A%20%3Chttp%3A%2F%2Forig-query.wmflabs.org%2Fprop%2Fdirect%2F%3E%0A%0ADESCRIBE%20wb%3AQ2%0A

which does not return any result although the Q2 item exists in the instance.

Andrawaag raised the priority of this task from Medium to High. Jul 4 2018, 10:47 AM

@Addshore This is the issue I mentioned in Antwerp, which I could then not replicate.

Addshore lowered the priority of this task from High to Medium. Jul 10 2018, 8:36 AM
In T186161#4394184, @Considering.Different.Routes wrote: No query, for any prefix, returned any result.

Not even the below?

SELECT * where { ?a ?b ?c }

I saw you linked to your query service, but I do seem to see triples: https://orig-query.wmflabs.org/#SELECT%20%2a%20WHERE%20%7B%20%3Fa%20%3Fb%20%3Fc%20%7D

If that returns nothing at all, then the updater has not started running, and you'll have to paste the logs of the updater here so we can see what happened.

As for instances openly available, for example consider the query

https://orig-query.wmflabs.org/#PREFIX%20wb%3A%20%3Chttp%3A%2F%2Forig-query.wmflabs.org%2Fentity%2F%3E%0APREFIX%20p%3A%20%3Chttp%3A%2F%2Forig-query.wmflabs.org%2Fprop%2Fdirect%2F%3E%0A%0ADESCRIBE%20wb%3AQ2%0A

which does not return any result although the Q2 item exists in the instance.

It looks like your prefixes were wrong; the below shows Q2:

https://orig-query.wmflabs.org/#PREFIX%20wb%3A%20%3Chttp%3A%2F%2Forig.wmflabs.org%2Fentity%2F%3E%0APREFIX%20p%3A%20%3Chttp%3A%2F%2Forig.wmflabs.org%2Fprop%2Fdirect%2F%3E%0A%0ADESCRIBE%20wb%3AQ2%0A
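
Decoded, the only difference is the host in the prefixes (orig.wmflabs.org rather than orig-query.wmflabs.org); the working query is:

PREFIX wb: <http://orig.wmflabs.org/entity/>
PREFIX p: <http://orig.wmflabs.org/prop/direct/>

DESCRIBE wb:Q2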

Hi, I am having the same issue. I installed Wikibase and created new properties and items, but I am unable to query them and they do not appear in the query results. Is there any resolution to this problem?

SELECT * {?item ?prop ?val}

doesn't show the newly created properties and items.

Same here. I get this from logs:

...
wdqs-updater_1     | wait-for-it.sh: waiting 300 seconds for wikibase.svc:80
wdqs-updater_1     | wait-for-it.sh: wikibase.svc:80 is available after 0 seconds
wdqs-updater_1     | wait-for-it.sh: waiting 300 seconds for wdqs.svc:9999
wdqs-updater_1     | wait-for-it.sh: wdqs.svc:9999 is available after 0 seconds
wdqs-updater_1     | Updating via http://wdqs.svc:9999/bigdata/namespace/wdq/sparql
wdqs-updater_1     | OpenJDK 64-Bit Server VM warning: Cannot open file /var/log/wdqs/wdqs-updater_jvm_gc.pid9.log due to No such file or directory
wdqs-updater_1     |
wdqs-updater_1     | #logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
wdqs-updater_1     | 02:22:52.824 [main] INFO  org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.10-SNAPSHOT (8bba8bfb0bbb12361c0e214fe482fbe15fcaa129)
wdqs-updater_1     | 02:22:53.753 [main] INFO  o.w.q.r.t.change.ChangeSourceContext - Checking where we left off
wdqs-updater_1     | 02:22:53.753 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the updater
wdqs-updater_1     | 02:22:53.971 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Found left off time from the updater
wdqs-updater_1     | 02:22:53.973 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
wdqs-updater_1     | java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time.  You will have to reload from scratch or you might have missing data.
wdqs-updater_1     |    at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:97)
wdqs-updater_1     |    at org.wikidata.query.rdf.tool.Update.initialize(Update.java:144)
wdqs-updater_1     |    at org.wikidata.query.rdf.tool.Update.main(Update.java:97)
wdqs-updater_1     | Exception in thread "main" java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time.  You will have to reload from scratch or you might have missing data.
wdqs-updater_1     |    at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:97)
wdqs-updater_1     |    at org.wikidata.query.rdf.tool.Update.initialize(Update.java:144)
wdqs-updater_1     |    at org.wikidata.query.rdf.tool.Update.main(Update.java:97)
wikibase-docker_wdqs-updater_1 exited with code 1

I am having the same problem.

wait-for-it.sh: waiting 300 seconds for wikibase.svc:80
wait-for-it.sh: wikibase.svc:80 is available after 40 seconds
wait-for-it.sh: waiting 300 seconds for wdqs.svc:9999
wait-for-it.sh: wdqs.svc:9999 is available after 0 seconds
Updating via http://wdqs.svc:9999/bigdata/namespace/wdq/sparql
OpenJDK 64-Bit Server VM warning: Cannot open file /var/log/wdqs/wdqs-updater_jvm_gc.pid9.log due to No such file or directory

#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
04:05:54.813 [main] INFO org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.10-SNAPSHOT (8bba8bfb0bbb12361c0e214fe482fbe15fcaa129)
04:05:57.183 [main] INFO o.w.q.r.t.change.ChangeSourceContext - Checking where we left off
04:05:57.183 [main] INFO o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the updater
04:05:58.306 [main] INFO o.w.query.rdf.tool.rdf.RdfRepository - Found left off time from the updater
04:05:58.310 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.
    at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:97)
    at org.wikidata.query.rdf.tool.Update.initialize(Update.java:144)
    at org.wikidata.query.rdf.tool.Update.main(Update.java:97)
Exception in thread "main" java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.
    at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:97)
    at org.wikidata.query.rdf.tool.Update.initialize(Update.java:144)
    at org.wikidata.query.rdf.tool.Update.main(Update.java:97)
wait-for-it.sh: waiting 300 seconds for wikibase.svc:80
wait-for-it.sh: wikibase.svc:80 is available after 0 seconds
wait-for-it.sh: waiting 300 seconds for wdqs.svc:9999
wait-for-it.sh: wdqs.svc:9999 is available after 0 seconds
Updating via http://wdqs.svc:9999/bigdata/namespace/wdq/sparql
OpenJDK 64-Bit Server VM warning: Cannot open file /var/log/wdqs/wdqs-updater_jvm_gc.pid10.log due to No such file or directory

#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
04:06:02.098 [main] INFO org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.10-SNAPSHOT (8bba8bfb0bbb12361c0e214fe482fbe15fcaa129)
04:06:03.347 [main] INFO o.w.q.r.t.change.ChangeSourceContext - Checking where we left off
04:06:03.347 [main] INFO o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the updater
04:06:03.725 [main] INFO o.w.query.rdf.tool.rdf.RdfRepository - Found left off time from the updater
04:06:03.728 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.
    at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:97)
    at org.wikidata.query.rdf.tool.Update.initialize(Update.java:144)
    at org.wikidata.query.rdf.tool.Update.main(Update.java:97)
Exception in thread "main" java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.
    at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:97)
    at org.wikidata.query.rdf.tool.Update.initialize(Update.java:144)
    at org.wikidata.query.rdf.tool.Update.main(Update.java:97)

@Tinyttt I finally found a working configuration.

I noticed by running docker container ls that my Docker container named wikibase-docker_wdqs-updater_1 was always restarting and never up.

First of all I made sure that the ports 8181, 8282, 8989 and 9191 were open in the firewall.

Then I had to edit docker-compose.yml to replace wikibase.svc with myIP:8181 for the following services: wdqs and wdqs-updater. wikibase.svc remained in the following services: wdqs-frontend and wikibase. I also added (and did not replace) myIP:8181 as an alias in the wikibase service. I have to underline the poor documentation in docker-compose.yml regarding this step.

Here you have to restart the services: docker-compose stop then docker-compose up -d.

Finally, this allowed me to enter the wikibase-docker_wdqs_1 container with bash (docker exec -it CONTAINERID bash) and execute the following command:

./runUpdate.sh -h http://$WDQS_HOST:$WDQS_PORT -- --wikibaseUrl $WIKIBASE_SCHEME://$WIKIBASE_HOST --conceptUri $WIKIBASE_SCHEME://$WIKIBASE_HOST --entityNamespaces $WDQS_ENTITY_NAMESPACES -s 20200501000000

I found this command in https://github.com/wmde/wikibase-docker/blob/master/wdqs/0.3.40/runUpdate.sh and added the -s argument to it, with a date that fitted my data. Once successfully run, the data already in the Wikibase was synced with Blazegraph and queryable through the SPARQL frontend client. Moreover, the wikibase-docker_wdqs-updater_1 container finally became stable and stopped restarting again and again. That allowed runUpdate.sh to keep all future edits synced and queryable as well.
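
Put together, the steps look roughly like this (the container name and start date are the examples from above; the environment variables come pre-set inside the wdqs container):

# Enter the running wdqs container
docker exec -it wikibase-docker_wdqs_1 bash
# Inside the container: run the updater from a start date matching your data
./runUpdate.sh -h http://$WDQS_HOST:$WDQS_PORT -- --wikibaseUrl $WIKIBASE_SCHEME://$WIKIBASE_HOST --conceptUri $WIKIBASE_SCHEME://$WIKIBASE_HOST --entityNamespaces $WDQS_ENTITY_NAMESPACES -s 20200501000000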

Hope that helps.

@Louperivois I've run into the issue of docker_compose_files_wdqs-updater_1 crashing and restarting constantly as well.

I started to follow your instructions having the following in my docker-compose.yml:

services:
  wikibase:
    image: wikibase/wikibase:1.35-bundle
    links:
      - mysql
    ports:
    # CONFIG - Change the 8181 here to expose Wikibase & MediaWiki on a different port
     - "8181:80"
    volumes:
      - mediawiki-images-data:/var/www/html/images
      - quickstatements-data:/quickstatements/data
    depends_on:
    - mysql
    - elasticsearch
    restart: unless-stopped
    networks:
      default:
        aliases:
         - wikibase.svc
         # CONFIG - Add your real wikibase hostname here, only for internal names and when NOT terminating SSL outside the container.
         - localhost:8181
    environment:
      - DB_SERVER=mysql.svc:3306
      - MW_ELASTIC_HOST=elasticsearch.svc
      - MW_ELASTIC_PORT=9200
      # CONFIG - Change the default values below
      - MW_ADMIN_NAME=WikibaseAdmin
      - MW_ADMIN_PASS=WikibaseDockerAdminPass
      - MW_ADMIN_EMAIL=admin@example.com
      - MW_WG_SECRET_KEY=secretkey
      # CONFIG - Change the default values below (should match mysql values in this file)
      - DB_USER=wikiuser
      - DB_PASS=sqlpass
      - DB_NAME=my_wiki
      - QS_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:9191
  mysql:
    image: mariadb:10.3
    restart: unless-stopped
    volumes:
      - mediawiki-mysql-data:/var/lib/mysql
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
      # CONFIG - Change the default values below (should match values passed to wikibase)
      MYSQL_DATABASE: 'my_wiki'
      MYSQL_USER: 'wikiuser'
      MYSQL_PASSWORD: 'sqlpass'
    networks:
      default:
        aliases:
         - mysql.svc
  wdqs-frontend:
    image: wikibase/wdqs-frontend:latest
    restart: unless-stopped
    ports:
    # CONFIG - Change the 8282 here to expose the Query Service UI on a different port
     - "8282:80"
    depends_on:
    - wdqs-proxy
    networks:
      default:
        aliases:
         - wdqs-frontend.svc
    environment:
      - WIKIBASE_HOST=wikibase.svc
      - WDQS_HOST=wdqs-proxy.svc
  wdqs:
    image: wikibase/wdqs:0.3.40
    restart: unless-stopped
    volumes:
      - query-service-data:/wdqs/data
    command: /runBlazegraph.sh
    networks:
      default:
        aliases:
         - wdqs.svc
    environment:
      - WIKIBASE_HOST=localhost:8181
      #wikibase.svc
      - WDQS_HOST=wdqs.svc
      - WDQS_PORT=9999
    expose:
      - 9999
  wdqs-proxy:
    image: wikibase/wdqs-proxy
    restart: unless-stopped
    environment:
      - PROXY_PASS_HOST=wdqs.svc:9999
    ports:
     - "8989:80"
    depends_on:
    - wdqs
    networks:
      default:
        aliases:
         - wdqs-proxy.svc
  wdqs-updater:
    image: wikibase/wdqs:0.3.40
    restart: unless-stopped
    command: /runUpdate.sh
    depends_on:
    - wdqs
    - wikibase
    networks:
      default:
        aliases:
         - wdqs-updater.svc
    environment:
     - WIKIBASE_HOST=localhost:8181
     #wikibase.svc
     - WDQS_HOST=wdqs.svc
     - WDQS_PORT=9999
  elasticsearch:
    image: wikibase/elasticsearch:6.5.4-extra
    restart: unless-stopped
    networks:
      default:
        aliases:
         - elasticsearch.svc
    environment:
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
  # CONFIG - In order to not load quickstatements, remove this entire section
  quickstatements:
    image: wikibase/quickstatements:latest
    ports:
     - "9191:80"
    depends_on:
     - wikibase
    volumes:
     - quickstatements-data:/quickstatements/data
    networks:
      default:
        aliases:
         - quickstatements.svc
    environment:
      - QS_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:9191
      - WB_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:8181
      - WIKIBASE_SCHEME_AND_HOST=http://wikibase.svc
      - WB_PROPERTY_NAMESPACE=122
      - "WB_PROPERTY_PREFIX=Property:"
      - WB_ITEM_NAMESPACE=120
      - "WB_ITEM_PREFIX=Item:"

However, upon running docker-compose stop, then docker-compose up -d, opening a bash shell in the wikibase-docker_wdqs_1 container, and running the above command as:

./runUpdate.sh -h http://$WDQS_HOST:$WDQS_PORT -- --wikibaseUrl $WIKIBASE_SCHEME://$WIKIBASE_HOST --conceptUri $WIKIBASE_SCHEME://$WIKIBASE_HOST --entityNamespaces $WDQS_ENTITY_NAMESPACES -s 20210311000000

I get this error:

Updating via http://wdqs.svc:9999/bigdata/namespace/wdq/sparql
#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
05:40:34.854 [main] INFO  org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.40 (a115a80eec974454d140389e1f52aad0e54913f9)
05:40:35.514 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during updater run.
java.lang.RuntimeException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
        at org.wikidata.query.rdf.tool.wikibase.WikibaseRepository.fetchRecentChanges(WikibaseRepository.java:347)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.doFetchRecentChanges(RecentChangesPoller.java:325)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.fetchRecentChanges(RecentChangesPoller.java:314)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.batch(RecentChangesPoller.java:338)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.firstBatch(RecentChangesPoller.java:162)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.firstBatch(RecentChangesPoller.java:38)
        at org.wikidata.query.rdf.tool.Updater.run(Updater.java:149)
        at org.wikidata.query.rdf.tool.Update.run(Update.java:175)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:99)
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
        at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
        at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.execchain.ServiceUnavailableRetryExec.execute(ServiceUnavailableRetryExec.java:84)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
        at org.wikidata.query.rdf.tool.wikibase.WikibaseRepository.getJson(WikibaseRepository.java:508)
        at org.wikidata.query.rdf.tool.wikibase.WikibaseRepository.fetchRecentChanges(WikibaseRepository.java:344)
        ... 8 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
        ... 20 common frames omitted
Exception in thread "main" java.lang.RuntimeException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
        at org.wikidata.query.rdf.tool.wikibase.WikibaseRepository.fetchRecentChanges(WikibaseRepository.java:347)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.doFetchRecentChanges(RecentChangesPoller.java:325)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.fetchRecentChanges(RecentChangesPoller.java:314)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.batch(RecentChangesPoller.java:338)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.firstBatch(RecentChangesPoller.java:162)
        at org.wikidata.query.rdf.tool.change.RecentChangesPoller.firstBatch(RecentChangesPoller.java:38)
        at org.wikidata.query.rdf.tool.Updater.run(Updater.java:149)
        at org.wikidata.query.rdf.tool.Update.run(Update.java:175)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:99)
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
        at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
        at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.execchain.ServiceUnavailableRetryExec.execute(ServiceUnavailableRetryExec.java:84)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
        at org.wikidata.query.rdf.tool.wikibase.WikibaseRepository.getJson(WikibaseRepository.java:508)
        at org.wikidata.query.rdf.tool.wikibase.WikibaseRepository.fetchRecentChanges(WikibaseRepository.java:344)
        ... 8 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
        ... 20 more

I'm not sure why that is, or if I did something wrong here, but it just can't seem to connect to localhost:8181. Any help or advice would be much appreciated!
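
One thing worth checking: inside a container, localhost refers to that container itself, not to the host, so a quick connectivity test could look like this (a sketch, assuming curl is available in the images):

# Expected to fail: localhost inside the updater container is the container itself
docker exec -it docker_compose_files_wdqs-updater_1 curl -sS http://localhost:8181/
# The compose network alias should be reachable instead
docker exec -it docker_compose_files_wdqs-updater_1 curl -sS http://wikibase.svc/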

UPDATE #1: This problem goes a bit deeper than just the updater not working (although maybe this is a symptom of the updater as well).

Basically, I had a Wikibase instance running on a WAMP64 server, but I dumped the XML using the appropriate PHP and re-uploaded the data to the Wikibase instance on Docker using the steps outlined here (https://wikibase.consulting/transferring-wikibase-data-between-wikis/).

I had gotten it to where I could add new items and properties (and they are being added to the database, because when I try to add them again, it tells me they already exist); however, they do not appear when searching at all, unless I re-run the rebuild scripts every time I add something:

php maintenance/rebuildall.php

php maintenance/runJobs.php

php maintenance/initSiteStats.php --update

Then they appear in search. The caveat is that I have to manually re-run this every time I add anything, or it doesn't appear. I think this means that Elasticsearch(?) might also be crashing simultaneously with the updater, or maybe it's just the updater; but I'm very confused as to why this is happening.
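
As a stopgap, those rebuild steps can be scripted so they run in one go (a sketch; /var/www/html is the MediaWiki root in the wikibase image, the container name is an example from this setup):

# Hypothetical helper: re-run the maintenance scripts after adding data
docker exec wikibase-docker_wikibase_1 bash -c 'cd /var/www/html && php maintenance/rebuildall.php && php maintenance/runJobs.php && php maintenance/initSiteStats.php --update'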

UPDATE #2: Thought I'd add a pic of what the query service is returning as well:

[Screenshot of the query service result: image.png (931×1 px, 76 KB)]

UPDATE #3: I looked under the hood at the Elasticsearch and saw the issue:

[2021-03-11T16:22:01,530][WARN ][o.e.b.BootstrapChecks ] [6HoJsGL] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Fixed it using a comment here (https://stackoverflow.com/questions/51445846/elasticsearch-max-virtual-memory-areas-vm-max-map-count-65530-is-too-low-inc); namely, open PowerShell, run wsl -d docker-desktop, and then run sysctl -w vm.max_map_count=262144.
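
On a plain Linux host the same setting can be made persistent across reboots (a standard sketch; the wsl -d docker-desktop route above typically has to be repeated after Docker Desktop restarts):

# Apply immediately
sudo sysctl -w vm.max_map_count=262144
# Persist across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf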

While this seemed to fix the Elasticsearch issue, it didn't fix any of the other issues, so it was probably independent of the issues with the updater and not related to anything else I mentioned.

UPDATE #4: I saw a warning in the quickstatements area which was:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.8. Set the 'ServerName' directive globally to suppress this message

However, I highly doubt this is affecting anything, per here: https://stackoverflow.com/questions/46266527/could-not-reliably-determine-the-servers-fully-qualified-domain-name-how-to.

UPDATE #5: Looking at the errors within the updater, I have:

14:02:42.816 [main] INFO o.w.q.r.t.change.ChangeSourceContext - Checking where we left off
14:02:42.816 [main] INFO o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the updater
14:02:42.946 [main] INFO o.w.query.rdf.tool.rdf.RdfRepository - Found left off time from the updater
14:02:42.947 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.
    at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:100)
    at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
    at org.wikidata.query.rdf.tool.Update.main(Update.java:98)
Exception in thread "main" java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.
    at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:100)
    at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
    at org.wikidata.query.rdf.tool.Update.main(Update.java:98)
wait-for-it.sh: waiting 300 seconds for wikibase.svc:80
wait-for-it.sh: wikibase.svc:80 is available after 0 seconds
wait-for-it.sh: waiting 300 seconds for wdqs.svc:9999
wait-for-it.sh: wdqs.svc:9999 is available after 0 seconds
Updating via http://wdqs.svc:9999/bigdata/namespace/wdq/sparql
#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n

Which essentially just repeats ad infinitum. On the surface it looks similar to: https://phabricator.wikimedia.org/T182394. I'll see if I can get anything there working later today.

UPDATE #6: So https://phabricator.wikimedia.org/T182394 mentions a patch (https://gerrit.wikimedia.org/r/c/wikidata/query/rdf/+/405826/) (by Smalyshev; owner: Smalyshev) which includes two Java files. I'm so sorry, but being a novice here, I have absolutely no idea how and/or where to implement those, or whether they will only break my configuration if I implement them haphazardly.

I can't seem to find the Java files mentioned in the containers (I can't check the updater container though because it doesn't stay open long enough to open a CLI). Any help here would be great! Thank you!

UPDATE #7: Same issue is seemingly documented here (https://stackoverflow.com/questions/46257610/error-during-a-wikidata-update), here (https://github.com/blazegraph/database/issues/96), and here (https://www.mediawiki.org/wiki/Wikibase/FAQ/en#Why_doesn't_the_query_service_update?). However, the fix is unclear.

Running ./runUpdate.sh -DwikibaseMaxDaysBack=DAYS in the wdqs container with anything (I tried 100, then 60, then 30) results in:

17:20:42.016 [main] INFO  org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.40 (a115a80eec974454d140389e1f52aad0e54913f9)
17:20:42.592 [main] INFO  o.w.q.r.t.change.ChangeSourceContext - Checking where we left off
17:20:42.592 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the updater
17:20:42.688 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
org.wikidata.query.rdf.tool.exception.ContainedException: Non-200 response from triple store:  HttpContentResponse[HTTP/1.1 404 Not Found - 0 bytes] body=

        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.execute(RdfClient.java:226)
        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.query(RdfClient.java:97)
        at org.wikidata.query.rdf.tool.rdf.RdfRepository.dateFromQuery(RdfRepository.java:540)
        at org.wikidata.query.rdf.tool.rdf.RdfRepository.fetchLeftOffTime(RdfRepository.java:509)
        at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:92)
        at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:98)
Exception in thread "main" org.wikidata.query.rdf.tool.exception.ContainedException: Non-200 response from triple store:  HttpContentResponse[HTTP/1.1 404 Not Found - 0 bytes] body=

        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.execute(RdfClient.java:226)
        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.query(RdfClient.java:97)
        at org.wikidata.query.rdf.tool.rdf.RdfRepository.dateFromQuery(RdfRepository.java:540)
        at org.wikidata.query.rdf.tool.rdf.RdfRepository.fetchLeftOffTime(RdfRepository.java:509)
        at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:92)
        at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:98)

UPDATE #8: At a cursory glance, this is the same problem as described here: https://phabricator.wikimedia.org/T207133.

UPDATE #9: I've tried every combination of the individual comments at T207133, and I can't get it working. Checking out https://phabricator.wikimedia.org/T197658, as this also seems similar.

UPDATE #10: T197658 didn't seem helpful... The errors seem potentially worse now.

Newer YML I'm trying:

# Wikibase with Query Service
#
# This docker-compose example can be used to pull the images from docker hub.
#
# Examples:
#
# Access Wikibase via "http://localhost:8181"
#   (or "http://$(docker-machine ip):8181" if using docker-machine)
#
# Access Query Service via "http://localhost:8282"
#   (or "http://$(docker-machine ip):8282" if using docker-machine)
version: '3'

services:
  wikibase:
    image: wikibase/wikibase:1.35-bundle
    links:
      - mysql
    ports:
    # CONFIG - Change the 8181 here to expose Wikibase & MediaWiki on a different port
     - "8181:80"
    volumes:
      - mediawiki-images-data:/var/www/html/images
      #- quickstatements-data:/quickstatements/data
    depends_on:
    - mysql
    - elasticsearch
    restart: unless-stopped
    networks:
      default:
        aliases:
         - wikibase.svc
         # CONFIG - Add your real wikibase hostname here, only for internal names and when NOT terminating SSL outside the container.
         - localhost:8181
    environment:
      - DB_SERVER=mysql.svc:3306
      - MW_ELASTIC_HOST=elasticsearch.svc
      - MW_ELASTIC_PORT=9200
      # CONFIG - Change the default values below
      - MW_ADMIN_NAME=WikibaseAdmin
      - MW_ADMIN_PASS=WikibaseDockerAdminPass
      - MW_ADMIN_EMAIL=admin@example.com
      - MW_WG_SECRET_KEY=secretkey
      # CONFIG - Change the default values below (should match mysql values in this file)
      - DB_USER=wikiuser
      - DB_PASS=sqlpass
      - DB_NAME=my_wiki
      - QS_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:9191
  mysql:
    image: mariadb:10.3
    restart: unless-stopped
    volumes:
      - mediawiki-mysql-data:/var/lib/mysql
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
      # CONFIG - Change the default values below (should match values passed to wikibase)
      MYSQL_DATABASE: 'my_wiki'
      MYSQL_USER: 'wikiuser'
      MYSQL_PASSWORD: 'sqlpass'
    networks:
      default:
        aliases:
         - mysql.svc
  wdqs-frontend:
    image: wikibase/wdqs-frontend:latest
    restart: unless-stopped
    ports:
    # CONFIG - Change the 8282 here to expose the Query Service UI on a different port
     - "8282:80"
    depends_on:
    - wdqs-proxy
    networks:
      default:
        aliases:
         - wdqs-frontend.svc
    environment:
      - WIKIBASE_HOST=localhost:8181
      #wikibase.svc
      - WDQS_HOST=wdqs-proxy.svc
  wdqs:
    image: wikibase/wdqs:0.3.40
    restart: unless-stopped
    volumes:
      - query-service-data:/wdqs/data
    command: /runBlazegraph.sh
    networks:
      default:
        aliases:
         - wdqs.svc
    environment:
      - WIKIBASE_HOST=localhost:8181
      #wikibase.svc
      - WDQS_HOST=wdqs.svc
      - WDQS_PORT=9999
    expose:
      - 9999
  wdqs-proxy:
    image: wikibase/wdqs-proxy
    restart: unless-stopped
    environment:
      - PROXY_PASS_HOST=wdqs.svc:9999
    ports:
     - "8989:80"
    depends_on:
    - wdqs
    networks:
      default:
        aliases:
         - wdqs-proxy.svc
  wdqs-updater:
    image: wikibase/wdqs:0.3.40
    restart: unless-stopped
    command: /runUpdate.sh
    depends_on:
    - wdqs
    - wikibase
    networks:
      default:
        aliases:
         - wdqs-updater.svc
    environment:
     - WIKIBASE_HOST=localhost:8181
     #wikibase.svc
     - WDQS_HOST=wdqs.svc
     - WDQS_PORT=9999
  elasticsearch:
    image: wikibase/elasticsearch:6.5.4-extra
    restart: unless-stopped
    networks:
      default:
        aliases:
         - elasticsearch.svc
    environment:
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
  # CONFIG - In order to not load quickstatements, remove this entire section
#  quickstatements:
#    image: wikibase/quickstatements:latest
#    ports:
#     - "9191:80"
#    depends_on:
#     - wikibase
#    volumes:
#     - quickstatements-data:/quickstatements/data
#    networks:
#      default:
#        aliases:
#         - quickstatements.svc
#    environment:
#      - QS_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:9191
#      - WB_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:8181
#      - WIKIBASE_SCHEME_AND_HOST=http://wikibase.svc
#      - WB_PROPERTY_NAMESPACE=122
#      - "WB_PROPERTY_PREFIX=Property:"
#      - WB_ITEM_NAMESPACE=120
#      - "WB_ITEM_PREFIX=Item:"

volumes:
  mediawiki-mysql-data:
  mediawiki-images-data:
  query-service-data:
#  quickstatements-data:

From running runBlazegraph.sh:

17:47:51.649 [main] INFO  o.w.q.r.b.WikibaseContextListener IP: UA: - Wikibase services initialized.
17:47:51.660 [main] INFO  o.w.q.r.b.filters.MonitoredFilter IP: UA: - ThrottlingFilter MBean registered as org.wikidata.query.rdf.blazegraph.throttling.SystemOverloadFilter:filterName=system-overload-filter.
17:47:51.665 [main] INFO  o.w.q.r.b.filters.MonitoredFilter IP: UA: - ThrottlingFilter MBean registered as org.wikidata.query.rdf.blazegraph.throttling.ThrottlingFilter:filterName=throttling-filter.
17:47:51.666 [main] INFO  o.w.q.r.b.t.ThrottlingFilter IP: UA: - Patterns file patterns.txt not found, ignoring.
17:47:51.667 [main] INFO  o.w.q.r.b.t.ThrottlingFilter IP: UA: - Patterns file agents.txt not found, ignoring.
2021-03-15 17:47:51.731:INFO:oejsh.ContextHandler:main: Started o.e.j.w.WebAppContext@546a03af{Bigdata,/bigdata,file:///tmp/jetty-0.0.0.0-9999-blazegraph-service-0.3.40.war-_bigdata-any-4318036009279387395.dir/webapp/,AVAILABLE}{file:///wdqs/blazegraph-service-0.3.40.war}
java.net.BindException: Address in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:339)
        at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
        at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
        at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.server.Server.doStart(Server.java:395)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.runner.Runner.run(Runner.java:532)
        at org.eclipse.jetty.runner.Runner.main(Runner.java:577)

From running runUpdate.sh:

/wdqs # ./runUpdate.sh -- -s 20210315000000 --init
Updating via http://localhost:9999/bigdata/namespace/wdq/sparql
#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
17:39:27.168 [main] INFO  org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.40 (a115a80eec974454d140389e1f52aad0e54913f9)
17:39:27.884 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
org.wikidata.query.rdf.tool.exception.ContainedException: Non-200 response from triple store:  HttpContentResponse[HTTP/1.1 500 Server Error - 8147 bytes] body=
SPARQL-UPDATE: updateStr=DELETE {
  <http://www.wikidata.org> <http://schema.org/dateModified> ?o .
}
WHERE {
  <http://www.wikidata.org> <http://schema.org/dateModified> ?o .
};
INSERT DATA {
  <http://www.wikidata.org> <http://schema.org/dateModified> "2021-03-15T00:00:00Z"^^xsd:dateTime .
}

java.util.concurrent.ExecutionException: org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:206)
        at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:292)
        at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:460)
        at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:245)
        at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:269)
        at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:195)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655)
        at org.wikidata.query.rdf.blazegraph.throttling.ThrottlingFilter.doFilter(ThrottlingFilter.java:320)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.throttling.SystemOverloadFilter.doFilter(SystemOverloadFilter.java:82)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at ch.qos.logback.classic.helpers.MDCInsertingServletFilter.doFilter(MDCInsertingServletFilter.java:49)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.QueryEventSenderFilter.doFilter(QueryEventSenderFilter.java:86)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.ClientIPFilter.doFilter(ClientIPFilter.java:43)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.RealAgentFilter.doFilter(RealAgentFilter.java:33)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1340)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1242)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.Server.handle(Server.java:503)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.rdf.sail.BigdataSailRepository.getConnection(BigdataSailRepository.java:105)
        at com.bigdata.rdf.task.AbstractApiTask.getConnection(AbstractApiTask.java:299)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:542)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:472)
        at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more
Caused by: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:1175)
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:259)
        at com.bigdata.rdf.sail.SailBase.getConnection(SailBase.java:261)
        at com.bigdata.rdf.sail.BigdataSail.getConnection(BigdataSail.java:1219)
        at com.bigdata.rdf.sail.BigdataSailRepository.getConnection(BigdataSailRepository.java:101)
        ... 8 more
Caused by: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:239)
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:207)
        at com.bigdata.sparse.ValueType.decode(ValueType.java:333)
        at com.bigdata.sparse.AbstractAtomicRowReadOrWrite.atomicRead(AbstractAtomicRowReadOrWrite.java:347)
        at com.bigdata.sparse.AbstractAtomicRowReadOrWrite.atomicRead(AbstractAtomicRowReadOrWrite.java:157)
        at com.bigdata.sparse.AtomicRowRead.apply(AtomicRowRead.java:98)
        at com.bigdata.sparse.AtomicRowRead.apply(AtomicRowRead.java:36)
        at com.bigdata.btree.AbstractBTree.submit(AbstractBTree.java:3263)
        at com.bigdata.sparse.SparseRowStore.read(SparseRowStore.java:537)
        at com.bigdata.sparse.SparseRowStore.read(SparseRowStore.java:420)
        at com.bigdata.relation.locator.DefaultResourceLocator.locateResourceOn(DefaultResourceLocator.java:910)
        at com.bigdata.relation.locator.DefaultResourceLocator.locateResource(DefaultResourceLocator.java:586)
        at com.bigdata.relation.locator.DefaultResourceLocator.cacheMiss(DefaultResourceLocator.java:395)
        at com.bigdata.relation.locator.DefaultResourceLocator.locate(DefaultResourceLocator.java:347)
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:1140)
        ... 12 more
Caused by: com.bigdata.rdf.vocab.BaseVocabulary$VocabularyVersioningException
        at com.bigdata.rdf.vocab.BaseVocabulary.readVersion2(BaseVocabulary.java:680)
        at com.bigdata.rdf.vocab.BaseVocabulary.readExternal(BaseVocabulary.java:458)
        at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2118)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:231)
        ... 26 more

        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.execute(RdfClient.java:226)
        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.update(RdfClient.java:104)
        at org.wikidata.query.rdf.tool.rdf.RdfRepository.updateLeftOffTime(RdfRepository.java:532)
        at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:88)
        at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:98)
Exception in thread "main" org.wikidata.query.rdf.tool.exception.ContainedException: Non-200 response from triple store:  HttpContentResponse[HTTP/1.1 500 Server Error - 8147 bytes] body=
SPARQL-UPDATE: updateStr=DELETE {
  <http://www.wikidata.org> <http://schema.org/dateModified> ?o .
}
WHERE {
  <http://www.wikidata.org> <http://schema.org/dateModified> ?o .
};
INSERT DATA {
  <http://www.wikidata.org> <http://schema.org/dateModified> "2021-03-15T00:00:00Z"^^xsd:dateTime .
}

java.util.concurrent.ExecutionException: org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:206)
        at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:292)
        at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:460)
        at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:245)
        at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:269)
        at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:195)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655)
        at org.wikidata.query.rdf.blazegraph.throttling.ThrottlingFilter.doFilter(ThrottlingFilter.java:320)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.throttling.SystemOverloadFilter.doFilter(SystemOverloadFilter.java:82)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at ch.qos.logback.classic.helpers.MDCInsertingServletFilter.doFilter(MDCInsertingServletFilter.java:49)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.QueryEventSenderFilter.doFilter(QueryEventSenderFilter.java:86)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.ClientIPFilter.doFilter(ClientIPFilter.java:43)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.RealAgentFilter.doFilter(RealAgentFilter.java:33)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1340)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1242)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.Server.handle(Server.java:503)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.rdf.sail.BigdataSailRepository.getConnection(BigdataSailRepository.java:105)
        at com.bigdata.rdf.task.AbstractApiTask.getConnection(AbstractApiTask.java:299)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:542)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:472)
        at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more
Caused by: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:1175)
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:259)
        at com.bigdata.rdf.sail.SailBase.getConnection(SailBase.java:261)
        at com.bigdata.rdf.sail.BigdataSail.getConnection(BigdataSail.java:1219)
        at com.bigdata.rdf.sail.BigdataSailRepository.getConnection(BigdataSailRepository.java:101)
        ... 8 more
Caused by: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:239)
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:207)
        at com.bigdata.sparse.ValueType.decode(ValueType.java:333)
        at com.bigdata.sparse.AbstractAtomicRowReadOrWrite.atomicRead(AbstractAtomicRowReadOrWrite.java:347)
        at com.bigdata.sparse.AbstractAtomicRowReadOrWrite.atomicRead(AbstractAtomicRowReadOrWrite.java:157)
        at com.bigdata.sparse.AtomicRowRead.apply(AtomicRowRead.java:98)
        at com.bigdata.sparse.AtomicRowRead.apply(AtomicRowRead.java:36)
        at com.bigdata.btree.AbstractBTree.submit(AbstractBTree.java:3263)
        at com.bigdata.sparse.SparseRowStore.read(SparseRowStore.java:537)
        at com.bigdata.sparse.SparseRowStore.read(SparseRowStore.java:420)
        at com.bigdata.relation.locator.DefaultResourceLocator.locateResourceOn(DefaultResourceLocator.java:910)
        at com.bigdata.relation.locator.DefaultResourceLocator.locateResource(DefaultResourceLocator.java:586)
        at com.bigdata.relation.locator.DefaultResourceLocator.cacheMiss(DefaultResourceLocator.java:395)
        at com.bigdata.relation.locator.DefaultResourceLocator.locate(DefaultResourceLocator.java:347)
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:1140)
        ... 12 more
Caused by: com.bigdata.rdf.vocab.BaseVocabulary$VocabularyVersioningException
        at com.bigdata.rdf.vocab.BaseVocabulary.readVersion2(BaseVocabulary.java:680)
        at com.bigdata.rdf.vocab.BaseVocabulary.readExternal(BaseVocabulary.java:458)
        at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2118)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:231)
        ... 26 more

        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.execute(RdfClient.java:226)
        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.update(RdfClient.java:104)
        at org.wikidata.query.rdf.tool.rdf.RdfRepository.updateLeftOffTime(RdfRepository.java:532)
        at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:88)
        at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:98)

I worry that this task is becoming a kitchen sink of query-service-related update issues; not really sure what to do with it in its current state.

Reading through T186161#6903394 just in case I can be helpful

java.lang.RuntimeException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused)

Your query service updater is not able to connect to your wikibase.
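A quick way to confirm this from inside the updater container (a sketch; the container name placeholder and the wait-for-it.sh path are assumptions based on the updater logs): note that inside a container, localhost refers to the container itself, so WIKIBASE_HOST needs to be the compose service name (e.g. wikibase.svc), not localhost.

# should report the port as available within a few seconds if the wiki is reachable
docker exec <your-updater-container> /wait-for-it.sh wikibase.svc:80 -t 5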

UPDATE #2: Thought I'd add a pic of what the query service is returning as well:

This picture looks like it has a very old last touched timestamp.
If your updater has been stopped since this time, you will need to reload all triples into blazegraph from a fresh dump, as the updater will not be able to resume across such a big gap from the last timestamp in there.
You can find this discussed a little at T197658#4577721
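For reference, a rough sketch of such a reload, assuming the stock WDQS distribution scripts (munge.sh and loadData.sh) inside the wdqs image; exact flags may differ between versions:

# on the wikibase side: produce a fresh RDF dump
php extensions/Wikibase/repo/maintenance/dumpRdf.php | gzip > dump.ttl.gz

# on the wdqs side: munge the dump for the query service, then bulk-load it into the wdq namespace
./munge.sh -f dump.ttl.gz -d mungeOut
./loadData.sh -n wdq -d "$(pwd)/mungeOut"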

[2021-03-11T16:22:01,530][WARN ][o.e.b.BootstrapChecks ] [6HoJsGL] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This looks like your setup in general is short on resources; specifically, this Elasticsearch warning is about the kernel's vm.max_map_count limit on the Docker host being too low.
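The usual fix (per Elasticsearch's bootstrap checks documentation) is run on the Docker host, not inside a container:

# raise the limit immediately
sudo sysctl -w vm.max_map_count=262144

# persist it across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf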

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.8. Set the 'ServerName' directive globally to suppress this message

This will not be causing the issues described (as you correctly identified)

java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.

Mentioned above, but you either need a reload or to reset the time if you haven't made any changes since the last update that need loading
T197658#4577721

UPDATE #6: So https://phabricator.wikimedia.org/T182394 mentions a patch (https://gerrit.wikimedia.org/r/c/wikidata/query/rdf/+/405826/) (by Smalyshev; owner: Smalyshev) which includes two Java files. I'm so sorry, but being a novice here, I have absolutely no idea how and/or where to implement those, or if they will only break my configuration if I implement them haphazardly.

These options are part of the update script.
You can see an example using them in T197658#4577956

UPDATE #8: At a cursory glance, this is the same problem as described here: https://phabricator.wikimedia.org/T207133.

I believe this ticket is different


Then for Update #10

java.net.BindException: Address in use

Do you still have old containers, or multiple containers or services, binding to the same port?
Blazegraph cannot start because of this.
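Two quick ways to check what is holding the port (a sketch, assuming a Linux host and Blazegraph's default port 9999):

# list containers publishing the port
docker ps --filter "publish=9999"

# or ask the host kernel which process is listening on it
sudo ss -ltnp | grep 9999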

@Addshore thank you so much for responding!

For update #10, I don't think other containers are running on the same port. When I run docker ps I get:

CONTAINER ID   IMAGE                                COMMAND                   CREATED          STATUS                          PORTS                  NAMES
ec1ca3cc8116   wikibase/wdqs-frontend:latest        "/entrypoint.sh ngin…"    30 seconds ago   Restarting (1) 10 seconds ago                          docker_compose_files_wdqs-frontend_1
5f81fde0230c   wikibase/wdqs-proxy                  "/bin/sh -c \"/entryp…"   32 seconds ago   Up 30 seconds                   0.0.0.0:8989->80/tcp   docker_compose_files_wdqs-proxy_1
5ed54229ea6d   wikibase/wdqs:0.3.40                 "/entrypoint.sh /run…"    7 hours ago      Up 29 seconds                                          docker_compose_files_wdqs-updater_1
a4f2f620c95a   wikibase/wikibase:1.35-bundle        "/bin/bash /entrypoi…"    7 hours ago      Up 30 seconds                   0.0.0.0:8181->80/tcp   docker_compose_files_wikibase_1
bd1c88a2229d   mariadb:10.3                         "docker-entrypoint.s…"    7 hours ago      Up 31 seconds                   3306/tcp               docker_compose_files_mysql_1
13ba2f6d582d   wikibase/elasticsearch:6.5.4-extra   "/usr/local/bin/dock…"    7 hours ago      Up 31 seconds                   9200/tcp, 9300/tcp     docker_compose_files_elasticsearch_1
3426f2046c29   wikibase/wdqs:0.3.40                 "/entrypoint.sh /run…"    7 hours ago      Up 31 seconds                   9999/tcp               docker_compose_files_wdqs_1

EDIT #1: In terms of the response to UPDATE #2, yes the timestamp is old, but I'm not sure why. I downloaded the docker YML like last week. The fresh dump was created last week as well.

How should I reload and/or reset the time? Every way I've tried just leads to further errors.

In response to UPDATE #6, when you say that these options are part of the update script, I assume you mean runUpdate.sh? So for them to work, do I just stick them in the same directory as runUpdate.sh or do I have to do something else? I don't quite understand.

EDIT #2: Just in case, I ran docker system prune (via: https://stackoverflow.com/questions/17236796/how-to-remove-old-docker-containers) and then attempted to re-run runBlazegraph.sh. I'm still getting java.net.BindException: Address in use, but the full stack is:

2021-03-16 00:32:33.601:WARN:oejw.WebAppContext:main: Failed startup of context o.e.j.w.WebAppContext@546a03af{Bigdata,/bigdata,file:///tmp/jetty-0.0.0.0-9999-blazegraph-service-0.3.40.war-_bigdata-any-2188252620419942069.dir/webapp/,UNAVAILABLE}{file:///wdqs/blazegraph-service-0.3.40.war}
java.lang.RuntimeException: java.lang.RuntimeException: file=data/data.jnl
        at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.openIndexManager(BigdataRDFServletContextListener.java:816)
        at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.contextInitialized(BigdataRDFServletContextListener.java:277)
        at org.wikidata.query.rdf.blazegraph.WikibaseContextListener.contextInitialized(WikibaseContextListener.java:279)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:952)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:558)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:917)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:370)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:847)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:168)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.server.Server.start(Server.java:416)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.Server.doStart(Server.java:383)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.runner.Runner.run(Runner.java:532)
        at org.eclipse.jetty.runner.Runner.main(Runner.java:577)
Caused by:
java.lang.RuntimeException: file=data/data.jnl
        at com.bigdata.journal.FileMetadata.<init>(FileMetadata.java:1144)
        at com.bigdata.journal.FileMetadata.createInstance(FileMetadata.java:1470)
        at com.bigdata.journal.AbstractJournal.<init>(AbstractJournal.java:1156)
        at com.bigdata.journal.Journal.<init>(Journal.java:276)
        at com.bigdata.journal.Journal.<init>(Journal.java:269)
        at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.openIndexManager(BigdataRDFServletContextListener.java:810)
        at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.contextInitialized(BigdataRDFServletContextListener.java:277)
        at org.wikidata.query.rdf.blazegraph.WikibaseContextListener.contextInitialized(WikibaseContextListener.java:279)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:952)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:558)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:917)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:370)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:847)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:168)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.server.Server.start(Server.java:416)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.Server.doStart(Server.java:383)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.runner.Runner.run(Runner.java:532)
        at org.eclipse.jetty.runner.Runner.main(Runner.java:577)
Caused by:
java.io.IOException: Stream Closed
        at java.io.RandomAccessFile.length(Native Method)
        at com.bigdata.journal.FileMetadata.<init>(FileMetadata.java:956)
        at com.bigdata.journal.FileMetadata.createInstance(FileMetadata.java:1470)
        at com.bigdata.journal.AbstractJournal.<init>(AbstractJournal.java:1156)
        at com.bigdata.journal.Journal.<init>(Journal.java:276)
        at com.bigdata.journal.Journal.<init>(Journal.java:269)
        at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.openIndexManager(BigdataRDFServletContextListener.java:810)
        at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.contextInitialized(BigdataRDFServletContextListener.java:277)
        at org.wikidata.query.rdf.blazegraph.WikibaseContextListener.contextInitialized(WikibaseContextListener.java:279)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:952)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:558)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:917)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:370)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:847)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:168)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.server.Server.start(Server.java:416)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.Server.doStart(Server.java:383)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.runner.Runner.run(Runner.java:532)
        at org.eclipse.jetty.runner.Runner.main(Runner.java:577)
java.net.BindException: Address in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:339)
        at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
        at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
        at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.server.Server.doStart(Server.java:395)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.runner.Runner.run(Runner.java:532)
        at org.eclipse.jetty.runner.Runner.main(Runner.java:577)

EDIT #3: I've tried a number of changes to the YML, using localhost, localhost:8080, localhost:8181, localhost:8282, localhost:9000, and localhost:9999 and the result is the same.

Then I tried rebuilding the whole system and I still get the same errors.

@Addshore and @Superraptor123, I observed the following:

If I use https://github.com/UB-Mannheim/RaiseWikibase/blob/main/docker-compose.yml and https://github.com/UB-Mannheim/RaiseWikibase/blob/main/berd/LocalSettings.php.template with the time zone and time offset specified at the bottom of LocalSettings.php.template (lines 178-180), then the updater runs fine, but it does not catch all the entities.

If I move the time zone settings to the top of LocalSettings.php.template (for example, to lines 2-4 or 19-21), then the updater stops working entirely ("wikibase_wdqs-updater_1 exited with code 1").

Ideas?

P.S.:

raisewikibase_wdqs-updater_1 exited with code 1
wdqs-updater_1     | wait-for-it.sh: waiting 300 seconds for wdqs.svc:9999
wdqs-updater_1     | wait-for-it.sh: wdqs.svc:9999 is available after 0 seconds
wdqs-updater_1     | Updating via http://wdqs.svc:9999/bigdata/namespace/wdq/sparql
wdqs-updater_1     | #logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
wdqs-updater_1     | 11:09:54.777 [main] INFO  org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.40 (a115a80eec974454d140389e1f52aad0e54913f9)
wdqs-updater_1     | 11:09:55.439 [main] INFO  o.w.q.r.t.change.ChangeSourceContext - Checking where we left off
wdqs-updater_1     | 11:09:55.439 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the updater
wdqs-updater_1     | 11:09:55.594 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Found left off time from the updater
wdqs-updater_1     | 11:09:55.597 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
wdqs-updater_1     | java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time.  You will have to reload from scratch or you might have missing data.
wdqs-updater_1     | 	at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:100)
wdqs-updater_1     | 	at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
wdqs-updater_1     | 	at org.wikidata.query.rdf.tool.Update.main(Update.java:98)
wdqs-updater_1     | Exception in thread "main" java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time.  You will have to reload from scratch or you might have missing data.
wdqs-updater_1     | 	at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:100)
wdqs-updater_1     | 	at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
wdqs-updater_1     | 	at org.wikidata.query.rdf.tool.Update.main(Update.java:98)

java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.

This again suggests that your updater was not running for a period of time.

java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time. You will have to reload from scratch or you might have missing data.

Mentioned above, but you either need a reload or to reset the time if you haven't made any changes since the last update that need loading
T197658#4577721

I'm going to try and write a minimal example of this and the fix right now.

Addshore closed this task as Invalid. Edited Mar 17 2021, 6:18 PM

Run an empty blazegraph container.

docker run -d -p 9999:9999 --env WIKIBASE_SCHEME=https --env WIKIBASE_HOST=intentionally-empty.wiki.opencura.com --env WDQS_HOST=localhost --env WDQS_PORT=9999 --name demo-wdqs wikibase/wdqs:0.3.40 /runBlazegraph.sh

Wait for the service to come up, and make sure it is empty

curl "localhost:9999/bigdata/sparql?query=SELECT%20%2A%20WHERE%20%7B%3Fa%20%3Fb%20%3Fc%7D"

You should see something like this

<?xml version='1.0' encoding='UTF-8'?>
<sparql xmlns='http://www.w3.org/2005/sparql-results#'>
        <head>
                <variable name='a'/>
                <variable name='b'/>
                <variable name='c'/>
        </head>
        <results>
        </results>
</sparql>

Run the updater once, pointing it at some wikibase and the query service we just made

docker exec demo-wdqs /runUpdate.sh

You should see something like this, and you can kill / stop it after a few loops (Ctrl+C)

wait-for-it.sh: waiting 300 seconds for intentionally-empty.wiki.opencura.com:80
wait-for-it.sh: intentionally-empty.wiki.opencura.com:80 is available after 0 seconds
wait-for-it.sh: waiting 300 seconds for localhost:9999
wait-for-it.sh: localhost:9999 is available after 0 seconds
Updating via http://localhost:9999/bigdata/namespace/wdq/sparql
#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
18:00:17.284 [main] INFO  org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.40 (a115a80eec974454d140389e1f52aad0e54913f9)
18:00:18.959 [main] INFO  o.w.q.r.t.change.ChangeSourceContext - Checking where we left off
18:00:18.960 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the updater
18:00:19.267 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the dump
18:00:19.333 [main] INFO  o.w.q.r.t.change.ChangeSourceContext - Defaulting start time to 30 days ago: 2021-02-15T18:00:19.333Z
18:00:20.452 [main] INFO  o.w.q.r.t.change.RecentChangesPoller - Got no real changes
18:00:20.780 [main] INFO  org.wikidata.query.rdf.tool.Updater - Polled up to 2021-02-15T18:00:19.333Z at (0.0, 0.0, 0.0) updates per second and (0.0, 0.0, 0.0) milliseconds per second
18:00:21.066 [main] INFO  o.w.q.r.t.change.RecentChangesPoller - Got no real changes
18:00:21.067 [main] INFO  org.wikidata.query.rdf.tool.Updater - Sleeping for 10 secs
18:00:31.661 [main] INFO  o.w.q.r.t.change.RecentChangesPoller - Got no real changes
18:00:31.662 [main] INFO  org.wikidata.query.rdf.tool.Updater - Sleeping for 10 secs

Run the updater again.

docker exec demo-wdqs /runUpdate.sh

This time you should see the error

wait-for-it.sh: waiting 300 seconds for intentionally-empty.wiki.opencura.com:80
wait-for-it.sh: intentionally-empty.wiki.opencura.com:80 is available after 0 seconds
wait-for-it.sh: waiting 300 seconds for localhost:9999
wait-for-it.sh: localhost:9999 is available after 0 seconds
Updating via http://localhost:9999/bigdata/namespace/wdq/sparql
#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
18:00:55.545 [main] INFO  org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.40 (a115a80eec974454d140389e1f52aad0e54913f9)
18:00:57.495 [main] INFO  o.w.q.r.t.change.ChangeSourceContext - Checking where we left off
18:00:57.496 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Checking for left off time from the updater
18:00:57.996 [main] INFO  o.w.query.rdf.tool.rdf.RdfRepository - Found left off time from the updater
18:00:58.000 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time.  You will have to reload from scratch or you might have missing data.
        at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:100)
        at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:98)
Exception in thread "main" java.lang.IllegalStateException: RDF store reports the last update time is before the minimum safe poll time.  You will have to reload from scratch or you might have missing data.
        at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:100)
        at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:98)

This is because the timestamp recording where updates left off has been set, and is no longer "safe".

The timestamp is stored as a triple, and was defaulted to 30 days ago.

curl "localhost:9999/bigdata/sparql?query=SELECT%20%2A%20WHERE%20%7B%3Fa%20%3Fb%20%3Fc%7D"
<?xml version='1.0' encoding='UTF-8'?>
<sparql xmlns='http://www.w3.org/2005/sparql-results#'>
        <head>
                <variable name='a'/>
                <variable name='b'/>
                <variable name='c'/>
        </head>
        <results>
                <result>
                        <binding name='a'>
                                <uri>https://intentionally-empty.wiki.opencura.com</uri>
                        </binding>
                        <binding name='b'>
                                <uri>http://schema.org/dateModified</uri>
                        </binding>
                        <binding name='c'>
                                <literal datatype='http://www.w3.org/2001/XMLSchema#dateTime'>2021-02-15T18:00:18Z</literal>
                        </binding>
                </result>
        </results>
</sparql>

If everything is safe to update, and you're not going to end up missing data, you can reset this time to a date within the last 30 days.
(Overriding what is normally done https://github.com/wmde/wikibase-docker/blob/0c561dd6c17a918323b44c7282b5e5acccfd4e45/wdqs/0.3.40/runUpdate.sh#L9)

docker exec demo-wdqs bash -c '/wdqs/runUpdate.sh -h http://${WDQS_HOST}:${WDQS_PORT} -- --wikibaseUrl ${WIKIBASE_SCHEME}://${WIKIBASE_HOST} --conceptUri ${WIKIBASE_SCHEME}://${WIKIBASE_HOST} --entityNamespaces ${WDQS_ENTITY_NAMESPACES} --init --start 20210301010101'

The date is now updated

curl "localhost:9999/bigdata/sparql?query=SELECT%20%2A%20WHERE%20%7B%3Fa%20%3Fb%20%3Fc%7D"

Should show something like

<?xml version='1.0' encoding='UTF-8'?>
<sparql xmlns='http://www.w3.org/2005/sparql-results#'>
        <head>
                <variable name='a'/>
                <variable name='b'/>
                <variable name='c'/>
        </head>
        <results>
                <result>
                        <binding name='a'>
                                <uri>https://intentionally-empty.wiki.opencura.com</uri>
                        </binding>
                        <binding name='b'>
                                <uri>http://schema.org/dateModified</uri>
                        </binding>
                        <binding name='c'>
                                <literal datatype='http://www.w3.org/2001/XMLSchema#dateTime'>2021-03-01T01:01:00Z</literal>
                        </binding>
                </result>
        </results>
</sparql>

I'm going to close this ticket now as the scope of it is rather unclear.
The case mentioned above should not really be happening during regular operation of a wikibase, but perhaps we need to make the last step here (resetting the timestamp) more resilient, and perhaps the default behaviour when using an empty wikibase a bit better (the ability to reset was added in T197658).
This would need some collaboration between wmde and the wikidata query service team.
If people have individual bugs or feature requests then new tickets are welcome!

@Addshore thank you so much for the response. I ended up deleting everything and attempted to follow a different set of Wikibase install instructions, except now, post-load, runUpdate.sh just runs forever (even though there is very little that I added to the database); localhost:8282 queries stall forever.

I'm going to attempt what you've added, but I'm not sure it's applicable anymore.

NOTE #1: Running docker run -d -p 9999:9999 --env WIKIBASE_SCHEME=https --env WIKIBASE_HOST=intentionally-empty.wiki.opencura.com --env WDQS_HOST=localhost --env WDQS_PORT=9999 --name demo-wdqs wikibase/wdqs:0.3.40 /runBlazegraph.sh fails saying su-exec: C:/Program Files/Git/runBlazegraph.sh: No such file or directory. I tried adding the absolute path location of runBlazegraph.sh, no dice.

The guide posted in my above comment now exists at https://www.mediawiki.org/wiki/Wikibase/FAQ#Why_doesn't_the_query_service_update?

@Addshore thank you so much for the response. I ended up deleting everything and attempted to follow a different set of Wikibase install instructions, except now, post-load, runUpdate.sh just runs forever (even though there is very little that I added to the database); localhost:8282 queries stall forever.

localhost:8282?
Looking at your above snippet, nothing is running on 8282.

CONTAINER ID   IMAGE                                COMMAND                   CREATED          STATUS                          PORTS                  NAMES
ec1ca3cc8116   wikibase/wdqs-frontend:latest        "/entrypoint.sh ngin…"    30 seconds ago   Restarting (1) 10 seconds ago                          docker_compose_files_wdqs-frontend_1
5f81fde0230c   wikibase/wdqs-proxy                  "/bin/sh -c \"/entryp…"   32 seconds ago   Up 30 seconds                   0.0.0.0:8989->80/tcp   docker_compose_files_wdqs-proxy_1
5ed54229ea6d   wikibase/wdqs:0.3.40                 "/entrypoint.sh /run…"    7 hours ago      Up 29 seconds                                          docker_compose_files_wdqs-updater_1
a4f2f620c95a   wikibase/wikibase:1.35-bundle        "/bin/bash /entrypoi…"    7 hours ago      Up 30 seconds                   0.0.0.0:8181->80/tcp   docker_compose_files_wikibase_1
bd1c88a2229d   mariadb:10.3                         "docker-entrypoint.s…"    7 hours ago      Up 31 seconds                   3306/tcp               docker_compose_files_mysql_1
13ba2f6d582d   wikibase/elasticsearch:6.5.4-extra   "/usr/local/bin/dock…"    7 hours ago      Up 31 seconds                   9200/tcp, 9300/tcp     docker_compose_files_elasticsearch_1
3426f2046c29   wikibase/wdqs:0.3.40                 "/entrypoint.sh /run…"    7 hours ago      Up 31 seconds                   9999/tcp               docker_compose_files_wdqs_1

I'm going to attempt what you've added, but I'm not sure it's applicable anymore.

NOTE #1: Running docker run -d -p 9999:9999 --env WIKIBASE_SCHEME=https --env WIKIBASE_HOST=intentionally-empty.wiki.opencura.com --env WDQS_HOST=localhost --env WDQS_PORT=9999 --name demo-wdqs wikibase/wdqs:0.3.40 /runBlazegraph.sh fails saying su-exec: C:/Program Files/Git/runBlazegraph.sh: No such file or directory. I tried adding the absolute path location of runBlazegraph.sh, no dice.

/runBlazegraph.sh is an absolute location.
It looks like you're running this in some flavour of Windows shell though, so try adding another / to the start.
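For reference, two ways around Git Bash's path mangling (a sketch; MSYS_NO_PATHCONV is specific to Git for Windows, and ... stands for the same arguments as in the command above):

# double the leading slash so the MSYS layer leaves the path alone
docker run ... wikibase/wdqs:0.3.40 //runBlazegraph.sh

# or disable path conversion for this one command
MSYS_NO_PATHCONV=1 docker run ... wikibase/wdqs:0.3.40 /runBlazegraph.sh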

@Addshore yeah, so I was knee-deep in errors; I deleted everything from my first try and attempted to use this Git directory instead: https://github.com/wmde/wikibase-docker/blob/master/README-compose.md

Running with two slashes though seemed to work! (Using: docker run -d -p 9999:9999 --env WIKIBASE_SCHEME=https --env WIKIBASE_HOST=intentionally-empty.wiki.opencura.com --env WDQS_HOST=localhost --env WDQS_PORT=9999 --name demo-wdqs wikibase/wdqs:0.3.40 //runBlazegraph.sh). I'll check back after lunch and run everything else in the queue! Thank you!

@Addshore so sorry to keep bothering, I'm just trying to make sure I'm following everything correctly.

Step 1: Begin the instructions here (https://semlab.io/howto/wikibase_basic.html), but for localhost, by cloning the repo:

git clone https://github.com/wmde/wikibase-docker.git
cd wikibase-docker
git clone https://github.com/SemanticLab/wikibase-basic-local.git

Step 2: Run the following (I have a Windows installation, so these are modified slightly):

chmod +x wikibase-basic-local/install_compose.sh
./wikibase-basic-local/install_compose.sh

Step 3: Run docker-compose up and test if the front-end looks right.

Step 4: Following steps here (https://wikibase.consulting/transferring-wikibase-data-between-wikis/), export previous wikibase XML dump using:

php maintenance/dumpBackup.php --full --quiet --filter=namespace:120,122 > wikibase.xml
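(If the old wiki also runs in Docker, the export can be driven from the host; the container name here is a placeholder, and this assumes the image's working directory is the MediaWiki root. The > redirection happens on the host, so wikibase.xml lands where Step 5's docker cp expects it.)

docker exec <old-wikibase-container> php maintenance/dumpBackup.php --full --quiet --filter=namespace:120,122 > wikibase.xml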

Step 5: Copy this file and rebuildWikibaseIdCounters.sql into the wikibase_1 docker container:

docker cp rebuildWikibaseIdCounters.sql wikibase-docker_wikibase_1:/var/www/html/rebuildWikibaseIdCounters.sql
docker cp wikibase.xml wikibase-docker_wikibase_1:/var/www/html/wikibase.xml

Step 6: Install Vim in the docker container:

apt-get update && apt-get install -y vim

Step 7: Add $wgWBRepoSettings['allowEntityImport'] = true; to the end of LocalSettings.php.
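(Alternatively, the line can be appended from the host without installing an editor; a sketch, assuming the MediaWiki root of /var/www/html as in Step 5:)

# the backslash stops the host shell from expanding $wgWBRepoSettings
docker exec wikibase-docker_wikibase_1 bash -c "cat >> /var/www/html/LocalSettings.php <<'EOF'
\$wgWBRepoSettings['allowEntityImport'] = true;
EOF"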

Step 8: Run:

php ./maintenance/importDump.php < wikibase.xml

php ./maintenance/rebuildall.php

php ./maintenance/runJobs.php

php ./maintenance/initSiteStats.php --update

php maintenance/sql.php rebuildWikibaseIdCounters.sql
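(For reference, the whole sequence can also be run from the host in one shot; a sketch, assuming the files were copied to /var/www/html as in Step 5:)

docker exec wikibase-docker_wikibase_1 bash -c 'cd /var/www/html &&
  php maintenance/importDump.php < wikibase.xml &&
  php maintenance/rebuildall.php &&
  php maintenance/runJobs.php &&
  php maintenance/initSiteStats.php --update &&
  php maintenance/sql.php rebuildWikibaseIdCounters.sql'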

Step 9: Check and confirm that the new pages have been loaded at localhost:8282.

Step 10: (Starting your steps) Run the following (all slashes changed for Windows):

docker run -d -p 9999:9999 --env WIKIBASE_SCHEME=https --env WIKIBASE_HOST=intentionally-empty.wiki.opencura.com --env WDQS_HOST=localhost --env WDQS_PORT=9999 --name demo-wdqs wikibase/wdqs:0.3.40 //runBlazegraph.sh

Step 11: Check if empty:

curl "localhost:9999/bigdata/sparql?query=SELECT%20%2A%20WHERE%20%7B%3Fa%20%3Fb%20%3Fc%7D"

Step 12: Run the updater:

docker exec demo-wdqs //runUpdate.sh

Step 13: Run

docker exec demo-wdqs bash -c '//wdqs//runUpdate.sh -h http://${WDQS_HOST}:${WDQS_PORT} -- --wikibaseUrl ${WIKIBASE_SCHEME}://${WIKIBASE_HOST} --conceptUri ${WIKIBASE_SCHEME}://${WIKIBASE_HOST} --entityNamespaces ${WDQS_ENTITY_NAMESPACES} --init --start 20210301010101'

Here is where things get funky. So the above command seems to hang for a very long time repeating:

18:22:24.098 [main] INFO  o.w.q.r.t.change.RecentChangesPoller - Got no real changes
18:22:24.098 [main] INFO  org.wikidata.query.rdf.tool.Updater - Sleeping for 10 secs

I Ctrl-C to quit, and I do get an updated date when I run:

curl "localhost:9999/bigdata/sparql?query=SELECT%20%2A%20WHERE%20%7B%3Fa%20%3Fb%20%3Fc%7D"

(Output includes: <literal datatype='http://www.w3.org/2001/XMLSchema#dateTime'>2021-03-01T01:01:00Z</literal>, just as you noted)

However, when I go to http://localhost:8282/ and run

SELECT * where { ?a ?b ?c }

It seems to hang indefinitely. So then I thought, okay, maybe there's actually a lot of stuff in there, so I run a query I know has fewer than 20-30 results (there are 880 pages total that I've imported, 3588 edits):

SELECT ?item ?itemLabel 
WHERE 
{
  ?item wdt:P9 wd:Q8.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}

(This is where things get weird: if I hover over P9 or Q8, it is clear that the WDQS knows what they are and they link to the correct pages. However, when I run the query I get "No matching records found".)

Did I mess something up here?
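(Capping the result size keeps exploratory queries like SELECT * WHERE { ?a ?b ?c } from hanging regardless of how much is in the store; a sketch against the Blazegraph endpoint seen in the logs above:)

curl -G "localhost:9999/bigdata/namespace/wdq/sparql" --data-urlencode 'query=SELECT * WHERE { ?a ?b ?c } LIMIT 10'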

Okay, wow I'm dumb. Running this inside the wikibase WDQS docker container made queries work:

./runUpdate.sh -h http://${WDQS_HOST}:${WDQS_PORT} -- --wikibaseUrl ${WIKIBASE_SCHEME}://${WIKIBASE_HOST} --conceptUri ${WIKIBASE_SCHEME}://${WIKIBASE_HOST} --entityNamespaces ${WDQS_ENTITY_NAMESPACES} --init --start 20210301010101

However, now the items have dead links like <http://wikibase.svc/entity/Q39>, so I'm working on replacing "http://wikibase.svc" with "localhost:8181", as "localhost:8181/entity/Q39" resolves correctly.

EDIT 1: Now I'm experiencing the same errors described here: https://phabricator.wikimedia.org/T207133. Namely:

/wdqs # ./runUpdate.sh -h http://${WDQS_HOST}:${WDQS_PORT} -- --wikibaseUrl ${WIKIBASE_SCHEME}://${WIKIBASE_HOST} --conceptUri ${WIKIBASE_SCHEME}://${WIKIBASE_HOST} --entityNamespaces ${WDQS_ENTITY_NAMESPACES} --init --start 20210301010101
Updating via http://wdqs.svc:9999/bigdata/namespace/wdq/sparql
#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
19:59:03.564 [main] INFO  org.wikidata.query.rdf.tool.Update - Starting Updater 0.3.40 (a115a80eec974454d140389e1f52aad0e54913f9)
19:59:04.394 [main] ERROR org.wikidata.query.rdf.tool.Update - Error during initialization.
org.wikidata.query.rdf.tool.exception.ContainedException: Non-200 response from triple store:  HttpContentResponse[HTTP/1.1 500 Server Error - 7772 bytes] body=
SPARQL-UPDATE: updateStr=DELETE {
  <http://localhost:8181> <http://schema.org/dateModified> ?o .
}
WHERE {
  <http://localhost:8181> <http://schema.org/dateModified> ?o .
};
INSERT DATA {
  <http://localhost:8181> <http://schema.org/dateModified> "2021-03-01T01:01:01Z"^^xsd:dateTime .
}

java.util.concurrent.ExecutionException: org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:206)
        at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:292)
        at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:460)
        at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:245)
        at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:269)
        at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:195)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655)
        at org.wikidata.query.rdf.blazegraph.throttling.ThrottlingFilter.doFilter(ThrottlingFilter.java:320)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.throttling.SystemOverloadFilter.doFilter(SystemOverloadFilter.java:82)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at ch.qos.logback.classic.helpers.MDCInsertingServletFilter.doFilter(MDCInsertingServletFilter.java:49)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.QueryEventSenderFilter.doFilter(QueryEventSenderFilter.java:86)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.ClientIPFilter.doFilter(ClientIPFilter.java:43)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.RealAgentFilter.doFilter(RealAgentFilter.java:33)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1340)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1242)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.Server.handle(Server.java:503)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.rdf.sail.BigdataSailRepository.getConnection(BigdataSailRepository.java:105)
        at com.bigdata.rdf.task.AbstractApiTask.getConnection(AbstractApiTask.java:299)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:542)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:472)
        at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more
Caused by: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:1175)
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:259)
        at com.bigdata.rdf.sail.SailBase.getConnection(SailBase.java:261)
        at com.bigdata.rdf.sail.BigdataSail.getConnection(BigdataSail.java:1219)
        at com.bigdata.rdf.sail.BigdataSailRepository.getConnection(BigdataSailRepository.java:101)
        ... 8 more
Caused by: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:239)
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:207)
        at com.bigdata.sparse.ValueType.decode(ValueType.java:333)
        at com.bigdata.sparse.AbstractAtomicRowReadOrWrite.atomicRead(AbstractAtomicRowReadOrWrite.java:347)
        at com.bigdata.sparse.AbstractAtomicRowReadOrWrite.atomicRead(AbstractAtomicRowReadOrWrite.java:157)
        at com.bigdata.sparse.AtomicRowRead.apply(AtomicRowRead.java:98)
        at com.bigdata.sparse.AtomicRowRead.apply(AtomicRowRead.java:36)
        at com.bigdata.btree.AbstractBTree.submit(AbstractBTree.java:3263)
        at com.bigdata.sparse.SparseRowStore.read(SparseRowStore.java:537)
        at com.bigdata.sparse.SparseRowStore.read(SparseRowStore.java:420)
        at com.bigdata.relation.locator.DefaultResourceLocator.locateResourceOn(DefaultResourceLocator.java:910)
        at com.bigdata.relation.locator.DefaultResourceLocator.locateResource(DefaultResourceLocator.java:586)
        at com.bigdata.relation.locator.DefaultResourceLocator.cacheMiss(DefaultResourceLocator.java:395)
        at com.bigdata.relation.locator.DefaultResourceLocator.locate(DefaultResourceLocator.java:347)
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:1140)
        ... 12 more
Caused by: com.bigdata.rdf.vocab.BaseVocabulary$VocabularyVersioningException
        at com.bigdata.rdf.vocab.BaseVocabulary.readVersion2(BaseVocabulary.java:680)
        at com.bigdata.rdf.vocab.BaseVocabulary.readExternal(BaseVocabulary.java:458)
        at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2118)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:231)
        ... 26 more

        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.execute(RdfClient.java:226)
        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.update(RdfClient.java:104)
        at org.wikidata.query.rdf.tool.rdf.RdfRepository.updateLeftOffTime(RdfRepository.java:532)
        at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:88)
        at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:98)
Exception in thread "main" org.wikidata.query.rdf.tool.exception.ContainedException: Non-200 response from triple store:  HttpContentResponse[HTTP/1.1 500 Server Error - 7772 bytes] body=
SPARQL-UPDATE: updateStr=DELETE {
  <http://localhost:8181> <http://schema.org/dateModified> ?o .
}
WHERE {
  <http://localhost:8181> <http://schema.org/dateModified> ?o .
};
INSERT DATA {
  <http://localhost:8181> <http://schema.org/dateModified> "2021-03-01T01:01:01Z"^^xsd:dateTime .
}

java.util.concurrent.ExecutionException: org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:206)
        at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:292)
        at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:460)
        at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:245)
        at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:269)
        at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:195)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655)
        at org.wikidata.query.rdf.blazegraph.throttling.ThrottlingFilter.doFilter(ThrottlingFilter.java:320)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.throttling.SystemOverloadFilter.doFilter(SystemOverloadFilter.java:82)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at ch.qos.logback.classic.helpers.MDCInsertingServletFilter.doFilter(MDCInsertingServletFilter.java:49)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.QueryEventSenderFilter.doFilter(QueryEventSenderFilter.java:86)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.ClientIPFilter.doFilter(ClientIPFilter.java:43)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.wikidata.query.rdf.blazegraph.filters.RealAgentFilter.doFilter(RealAgentFilter.java:33)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1340)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1242)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.Server.handle(Server.java:503)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.rdf.sail.BigdataSailRepository.getConnection(BigdataSailRepository.java:105)
        at com.bigdata.rdf.task.AbstractApiTask.getConnection(AbstractApiTask.java:299)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:542)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:472)
        at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more
Caused by: org.openrdf.sail.SailException: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:1175)
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:259)
        at com.bigdata.rdf.sail.SailBase.getConnection(SailBase.java:261)
        at com.bigdata.rdf.sail.BigdataSail.getConnection(BigdataSail.java:1219)
        at com.bigdata.rdf.sail.BigdataSailRepository.getConnection(BigdataSailRepository.java:101)
        ... 8 more
Caused by: java.lang.RuntimeException: off=0, len=702
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:239)
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:207)
        at com.bigdata.sparse.ValueType.decode(ValueType.java:333)
        at com.bigdata.sparse.AbstractAtomicRowReadOrWrite.atomicRead(AbstractAtomicRowReadOrWrite.java:347)
        at com.bigdata.sparse.AbstractAtomicRowReadOrWrite.atomicRead(AbstractAtomicRowReadOrWrite.java:157)
        at com.bigdata.sparse.AtomicRowRead.apply(AtomicRowRead.java:98)
        at com.bigdata.sparse.AtomicRowRead.apply(AtomicRowRead.java:36)
        at com.bigdata.btree.AbstractBTree.submit(AbstractBTree.java:3263)
        at com.bigdata.sparse.SparseRowStore.read(SparseRowStore.java:537)
        at com.bigdata.sparse.SparseRowStore.read(SparseRowStore.java:420)
        at com.bigdata.relation.locator.DefaultResourceLocator.locateResourceOn(DefaultResourceLocator.java:910)
        at com.bigdata.relation.locator.DefaultResourceLocator.locateResource(DefaultResourceLocator.java:586)
        at com.bigdata.relation.locator.DefaultResourceLocator.cacheMiss(DefaultResourceLocator.java:395)
        at com.bigdata.relation.locator.DefaultResourceLocator.locate(DefaultResourceLocator.java:347)
        at com.bigdata.rdf.sail.BigdataSail.getConnectionInternal(BigdataSail.java:1140)
        ... 12 more
Caused by: com.bigdata.rdf.vocab.BaseVocabulary$VocabularyVersioningException
        at com.bigdata.rdf.vocab.BaseVocabulary.readVersion2(BaseVocabulary.java:680)
        at com.bigdata.rdf.vocab.BaseVocabulary.readExternal(BaseVocabulary.java:458)
        at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2118)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
        at com.bigdata.io.SerializerUtil.deserialize(SerializerUtil.java:231)
        ... 26 more

        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.execute(RdfClient.java:226)
        at org.wikidata.query.rdf.tool.rdf.client.RdfClient.update(RdfClient.java:104)
        at org.wikidata.query.rdf.tool.rdf.RdfRepository.updateLeftOffTime(RdfRepository.java:532)
        at org.wikidata.query.rdf.tool.change.ChangeSourceContext.getStartTime(ChangeSourceContext.java:88)
        at org.wikidata.query.rdf.tool.Update.initialize(Update.java:145)
        at org.wikidata.query.rdf.tool.Update.main(Update.java:98)

This is my current YAML (it still isn't working):

# Wikibase with Query Service
#
# This docker-compose example can be used to pull the images from docker hub.
#
# Examples:
#
# Access Wikibase via "http://localhost:8181"
#   (or "http://$(docker-machine ip):8181" if using docker-machine)
#
# Access Query Service via "http://localhost:8282"
#   (or "http://$(docker-machine ip):8282" if using docker-machine)
version: '3'

services:
  wikibase:
    image: wikibase/wikibase:1.35-bundle
    links:
      - mysql
    ports:
    # CONFIG - Change the 8181 here to expose Wikibase & MediaWiki on a different port
     - "8181:80"
    volumes:
      - mediawiki-images-data:/var/www/html/images
      - quickstatements-data:/quickstatements/data
      #- ./wikibase-basic-local/LocalSettings.php:/var/www/html/LocalSettings.php
      #- ./wikibase-basic-local/custom.png:/var/www/html/resources/assets/wiki.png
    depends_on:
    - mysql
    - elasticsearch
    restart: unless-stopped
    networks:
      default:
        aliases:
         - wikibase.svc
         # CONFIG - Add your real wikibase hostname here, only for internal names and when NOT terminating SSL outside the container.
         - localhost:8181
    environment:
      - DB_SERVER=mysql.svc:3306
      - MW_ELASTIC_HOST=elasticsearch.svc
      - MW_ELASTIC_PORT=9200
      # CONFIG - Change the default values below
      - MW_ADMIN_NAME=WikibaseAdmin
      - MW_ADMIN_PASS=WikibaseDockerAdminPass
      - MW_ADMIN_EMAIL=admin@example.com
      - MW_WG_SECRET_KEY=secretkey
      # CONFIG - Change the default values below (should match mysql values in this file)
      - DB_USER=wikiuser
      - DB_PASS=sqlpass
      - DB_NAME=my_wiki
      - QS_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:9191
  mysql:
    image: mariadb:10.3
    restart: unless-stopped
    volumes:
      - mediawiki-mysql-data:/var/lib/mysql
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
      # CONFIG - Change the default values below (should match values passed to wikibase)
      MYSQL_DATABASE: 'my_wiki'
      MYSQL_USER: 'wikiuser'
      MYSQL_PASSWORD: 'sqlpass'
    networks:
      default:
        aliases:
         - mysql.svc
  wdqs-frontend:
    image: wikibase/wdqs-frontend:latest
    restart: unless-stopped
    ports:
    # CONFIG - Change the 8282 here to expose the Query Service UI on a different port
     - "8282:80"
    depends_on:
    - wdqs-proxy
    networks:
      default:
        aliases:
         - wdqs-frontend.svc
    environment:
      - WIKIBASE_HOST=wikibase.svc
      - WDQS_HOST=wdqs-proxy.svc
  wdqs:
    image: wikibase/wdqs:0.3.40
    restart: unless-stopped
    volumes:
      - query-service-data:/wdqs/data
    command: /runBlazegraph.sh
    networks:
      default:
        aliases:
         - wdqs.svc
    environment:
      - WIKIBASE_HOST=localhost:8181
      #wikibase.svc
      - WDQS_HOST=wdqs.svc
      - WDQS_PORT=9999
    expose:
      - 9999
  wdqs-proxy:
    image: wikibase/wdqs-proxy
    restart: unless-stopped
    environment:
      - PROXY_PASS_HOST=wdqs.svc:9999
    ports:
     - "8989:80"
    depends_on:
    - wdqs
    networks:
      default:
        aliases:
         - wdqs-proxy.svc
  wdqs-updater:
    image: wikibase/wdqs:0.3.40
    restart: unless-stopped
    command: /runUpdate.sh
    depends_on:
    - wdqs
    - wikibase
    networks:
      default:
        aliases:
         - wdqs-updater.svc
    environment:
     - WIKIBASE_HOST=localhost:8181
     #wikibase.svc
     - WDQS_HOST=wdqs.svc
     - WDQS_PORT=9999
  elasticsearch:
    image: wikibase/elasticsearch:6.5.4-extra
    restart: unless-stopped
    networks:
      default:
        aliases:
         - elasticsearch.svc
    environment:
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
  # CONFIG - in order not to load quickstatements, remove this entire section
  quickstatements:
    image: wikibase/quickstatements:latest
    ports:
     - "9191:80"
    depends_on:
     - wikibase
    volumes:
     - quickstatements-data:/quickstatements/data
    networks:
      default:
        aliases:
         - quickstatements.svc
    environment:
      - QS_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:9191
      - WB_PUBLIC_SCHEME_HOST_AND_PORT=http://localhost:8181
      - WIKIBASE_SCHEME_AND_HOST=http://wikibase.svc
      - WB_PROPERTY_NAMESPACE=122
      - "WB_PROPERTY_PREFIX=Property:"
      - WB_ITEM_NAMESPACE=120
      - "WB_ITEM_PREFIX=Item:"

volumes:
  mediawiki-mysql-data:
  mediawiki-images-data:
  query-service-data:
  quickstatements-data:
aliases:
 - wikibase.svc
 # CONFIG - Add your real wikibase hostname here, only for internal names and when NOT terminating SSL outside the container.
 - localhost:8181

I have not seen a host with a port used for a docker network alias before, and I am not sure it will work as expected; this could be what is breaking things.

environment:
 - WIKIBASE_HOST=localhost:8181

This is asking for localhost within the wdqs-updater service container.
This will not exist.
You are exposing port 8181 on the docker host, not within any container.
The example uses wikibase.svc, which is internal to the docker network and allows direct communication between the services.
If you want to connect to the port on your host machine you will either need to use your host machine's IP as seen from docker, or host.docker.internal.
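
In compose terms, that means pointing the query service containers back at the internal alias; a minimal sketch based on the file above (service names as in that file):

  wdqs-updater:
    image: wikibase/wdqs:0.3.40
    restart: unless-stopped
    command: /runUpdate.sh
    environment:
      # use the in-network alias rather than the host-exposed port
      - WIKIBASE_HOST=wikibase.svc
      - WDQS_HOST=wdqs.svc
      - WDQS_PORT=9999

The same WIKIBASE_HOST change applies to the wdqs service, and the localhost:8181 network aliases can then be dropped.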

However, now the items have dead links like http://wikibase.svc/entity/Q39, so I'm working on replacing "http://wikibase.svc" with "localhost:8181", as "localhost:8181/entity/Q39" resolves correctly.

See https://doc.wikimedia.org/Wikibase/master/php/md_docs_topics_options.html#conceptBaseUri
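
To make that concrete, a sketch of what this could look like in LocalSettings.php (untested against this exact setup; note that $wgServer normally includes a scheme, and that changing the concept base URI after items exist means the query service and the wiki will disagree about the URIs already loaded):

$wgServer = "http://localhost:8181";
// conceptBaseUri is the Wikibase repo option described at the link above
$wgWBRepoSettings['conceptBaseUri'] = 'http://localhost:8181/entity/';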

Thank you, @Addshore!

The case mentioned above should not really be happening during regular operation of a wikibase, but perhaps we need to make the last step here (resetting the timestamp) more resilient, and perhaps make the default behaviour when using an empty wikibase a bit better (the ability to reset was added in T197658).
This would need some collaboration between wmde and the wikidata query service team.

That would be great!

It seems the main fix in this issue is using a proper option, --start DATE --init (as used by @Addshore in T186161#6922518) or -s DATE (as used by @Louperivois in T186161#6684236), with runUpdate.sh. Though I am still curious why the place where I set a time zone in LocalSettings.php.template has an effect on the updater, as I mentioned above.
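
For anyone else landing here, the two invocations referenced above look roughly like this, run inside the wdqs container (a sketch only; the date format and exact wrapper flags can differ between image versions, so check the script on your image first):

# per T186161#6684236 (@Louperivois): restart updating from a given date
./runUpdate.sh -s 20220101000000
# per T186161#6922518 (@Addshore): reset the stored left-off time and start from a date
./runUpdate.sh --init --start 20220101000000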

Just a minor comment: may be docker exec demo-wdqs bash could be changed to a more common docker exec wikibase_wdqs_1 bash at https://www.mediawiki.org/wiki/Wikibase/FAQ#Why_doesn't_the_query_service_update?.

Just a minor comment: may be docker exec demo-wdqs bash could be changed to a more common docker exec wikibase_wdqs_1 bash at https://www.mediawiki.org/wiki/Wikibase/FAQ#Why_doesn't_the_query_service_update?.

Thanks for pointing this out! Changed. I also encourage folks to edit the FAQ as they would any other wiki page.

This example is meant to exist outside of any other setup; any name can be used.
If the example included wikibase_wdqs_1, people might end up doing unexpected things to unexpected query services, hence the use of some other name.
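
For anyone unsure what their local container is actually called, docker itself will tell you; a quick sketch (the name shown is just an example, use whatever appears on your setup):

# list the running container names on your setup
docker ps --format '{{.Names}}'
# then open a shell in whichever one is the query service locally
docker exec -it wikibasedocker_wdqs_1 bash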

Though I am still curious why the place where I set a time zone in LocalSettings.php.template has an effect on the updater, as I mentioned above.

This will likely change the actual time that ends up being reported by mediawiki in recent changes, thereby also affecting the time that is set in the query service.
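
If the aim is just to avoid that class of surprise, keeping MediaWiki on UTC is the usual approach; a one-line sketch for LocalSettings.php (these are standard MediaWiki settings, nothing specific to this setup):

$wgLocaltimezone = 'UTC';
date_default_timezone_set($wgLocaltimezone);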

@Addshore I'm confused by your link (https://doc.wikimedia.org/Wikibase/master/php/md_docs_topics_options.html#conceptBaseUri); so if I want to change the URIs from the default http://wikibase.svc/entity/Q39 to localhost:8181/entity/Q39, I do this by adding something to LocalSettings.php?

The conceptBaseUri documentation says it is constructed from $wgServer by default, and the value this has upon initialization is $wgServer = WebRequest::detectServer();. Based on the provided documentation, should I just set $wgServer = "localhost:8181"?

Thank you!

UPDATE #1: Deleted everything to start from scratch for the fourth (or fifth?) time. I followed all of the steps and changed $wgServer = "localhost:8181" for loading in data. This actually made it so localhost:8181 stopped working in the browser, so this is (probably) not the solution.

Updater still isn't working, same stack as I posted above.

UPDATE #2: I've totally given up on this; this is an absolute nightmare just to get data from one instance to another and it seems nearly impossible to navigate. If you're in this situation, I'd just recommend giving up and either adding in everything manually or picking a different database system.

Any update on this? @Superraptor123

@Argahsuknesib Sadly, I got frustrated and abandoned the platform for about a year.

I did get to a point where I was a little less jaded and wanted to pick it up again, so for the next attempt I left behind both Docker and the Windows-based platform and went with an Ubuntu server instead. That took me a little over a month to set up, and it appears to be (mostly) working now. It was a lot of trial and error, but ultimately it could have been worse. The one issue I will mention, which I ran into with my security team, is the use of log4j within the system.

For more info regarding issues I ran into with the new Ubuntu, non-Docker setup (which I ended up solving on my own), see: https://phabricator.wikimedia.org/T308180, https://phabricator.wikimedia.org/T308625, and https://phabricator.wikimedia.org/T307951.

Thought this might be the best thread to ask on.

Getting this error while running the Wikidata runUpdate.sh

java.lang.IllegalArgumentException: Buffering capacity 2097152 exceeded

I don't see where I can set this config to a higher number.

full debug log:

12:46:53.179 [update 4] DEBUG o.w.q.r.t.w.WikibaseRepository - Fetching rdf from https://www.wikidata.org/wiki/Special:EntityData/Q103933897.ttl?flavor=dump&nocache=1656679613179
12:46:53.186 [update 7] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 95 ms
12:46:53.187 [update 7] DEBUG org.wikidata.query.rdf.tool.Updater - Processing data for Q111804028@1651920458@20220601124651|1703581390
12:46:53.187 [update 7] DEBUG o.w.q.r.t.w.WikibaseRepository - Fetching rdf from https://www.wikidata.org/wiki/Special:EntityData/Q111804028.ttl?flavor=dump&nocache=1656679613187
12:46:53.214 [update 5] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 101 ms
12:46:53.214 [update 5] DEBUG org.wikidata.query.rdf.tool.Updater - Processing data for Q94562706@1651920485@20220601124654|1703581419
12:46:53.214 [update 5] DEBUG o.w.q.r.t.w.WikibaseRepository - Fetching rdf from https://www.wikidata.org/wiki/Special:EntityData/Q94562706.ttl?flavor=dump&nocache=1656679613214
12:46:53.228 [update 1] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 327 ms
12:46:53.228 [update 0] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 106 ms
12:46:53.228 [update 1] DEBUG org.wikidata.query.rdf.tool.Updater - Processing data for Q5896227@1651920427@20220601124649|1703581359
12:46:53.228 [update 1] DEBUG o.w.q.r.t.w.WikibaseRepository - Fetching rdf from https://www.wikidata.org/wiki/Special:EntityData/Q5896227.ttl?flavor=dump&nocache=1656679613228
12:46:53.228 [update 0] DEBUG org.wikidata.query.rdf.tool.Updater - Processing data for Q50093816@1651920497@20220601124655|1703581428
12:46:53.228 [update 0] DEBUG o.w.q.r.t.w.WikibaseRepository - Fetching rdf from https://www.wikidata.org/wiki/Special:EntityData/Q50093816.ttl?flavor=dump&nocache=1656679613228
12:46:53.230 [update 6] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 110 ms
12:46:53.231 [update 6] DEBUG org.wikidata.query.rdf.tool.Updater - Processing data for Q103933898@1651920437@20220601124650|1703581371
12:46:53.231 [update 6] DEBUG o.w.q.r.t.w.WikibaseRepository - Fetching rdf from https://www.wikidata.org/wiki/Special:EntityData/Q103933898.ttl?flavor=dump&nocache=1656679613231
12:46:53.269 [update 3] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 216 ms
12:46:53.271 [update 9] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 142 ms
12:46:53.279 [update 2] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 202 ms
12:46:53.281 [update 8] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 108 ms
12:46:53.298 [update 7] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 111 ms
12:46:53.300 [update 4] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 121 ms
12:46:53.322 [update 1] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 94 ms
12:46:53.332 [update 6] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 101 ms
12:46:53.355 [update 0] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 127 ms
12:46:53.395 [update 5] DEBUG o.w.q.r.t.w.WikibaseRepository - Done in 180 ms
12:46:53.424 [main] DEBUG o.w.query.rdf.tool.rdf.RdfRepository - Processing 80 IDs and 15199 statements
12:46:53.467 [main] DEBUG o.w.q.rdf.tool.rdf.client.RdfClient - Completed in 42 ms
12:46:53.606 [main] DEBUG o.w.query.rdf.tool.rdf.RdfRepository - Sending query 4331986 bytes
12:47:03.497 [main] INFO o.w.query.rdf.tool.HttpClientUtils - HTTP request failed: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Buffering capacity 2097152 exceeded, attempt 1, will retry
12:47:08.990 [main] INFO o.w.query.rdf.tool.HttpClientUtils - HTTP request failed: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Buffering capacity 2097152 exceeded, attempt 2, will retry
12:47:16.425 [main] INFO o.w.query.rdf.tool.HttpClientUtils - HTTP request failed: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Buffering capacity 2097152 exceeded, attempt 3, will retry
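
I am not aware of a documented setting for that 2 MB buffer (it appears to come from the updater's HTTP client rather than from Blazegraph itself). A workaround that may help is making each update request smaller so responses stay under the limit, for example by lowering the updater's batch size, assuming your version exposes that option (verify against the tool's help output before relying on this sketch):

# sketch: fetch fewer changes per request so each update stays small
# (the -- separator and --batchSize flag are assumptions here; confirm
#  both against the runUpdate.sh on your image before using)
./runUpdate.sh -- --batchSize 10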