This may be a cookie issue. When logging in from a private window I cannot recreate it on the first login; each time I've tried, I've been sent to the Wikimedia login page. After logging in, I can recreate it by logging back out of Quarry. In that instance I am never sent to the Wikimedia login page: either I get the error, or I am logged in without logging back in on Wikimedia.
Wed, Mar 29
Tue, Mar 28
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]: [pid: 17737|app: 0|req: 44/125] 172.16.5.238 () {52 vars in 851 bytes} [Tue Mar 28 15:33:46 2023] GET /login?next=/ => generated 567 bytes in 148 msecs (HTTP/1.1 302) 4 headers in 343 bytes (1 switches on core 0)
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]: [2023-03-28 15:33:47,622] ERROR in app: Exception on /oauth-callback [GET]
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]: Traceback (most recent call last):
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "/srv/quarry/venv/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     response = self.full_dispatch_request()
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "/srv/quarry/venv/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     rv = self.handle_user_exception(e)
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "/srv/quarry/venv/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     reraise(exc_type, exc_value, tb)
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "/srv/quarry/venv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     raise value
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "/srv/quarry/venv/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     rv = self.dispatch_request()
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "/srv/quarry/venv/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     return self.view_functions[rule.endpoint](**req.view_args)
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "./quarry/web/login.py", line 51, in oauth_callback
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     session["request_token"], request.query_string
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "/srv/quarry/venv/lib/python3.7/site-packages/mwoauth/handshaker.py", line 109, in complete
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     user_agent=self.user_agent)
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:   File "/srv/quarry/venv/lib/python3.7/site-packages/mwoauth/functions.py", line 193, in complete
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]:     credentials.get('oauth_token')[0],
Mar 28 15:33:47 quarry-web-02 uwsgi-quarry-web[17729]: TypeError: 'NoneType' object is not subscriptable
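The traceback suggests the callback was hit without the expected OAuth query parameters (or without a request token stored in the session), so mwoauth's complete() gets nothing to index and raises the TypeError. A minimal defensive sketch of a Flask oauth-callback view, using placeholder consumer credentials, routes, and session layout (illustrative only, not Quarry's actual login.py):

# Hedged sketch: guard the OAuth callback against a missing session request
# token or a callback request that lacks the oauth_token/oauth_verifier query
# parameters (e.g. stale cookies or a revisited callback URL). The consumer
# credentials, routes, and session layout are placeholders.
from flask import Flask, redirect, request, session
from mwoauth import ConsumerToken, Handshaker, RequestToken

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

consumer_token = ConsumerToken("consumer-key", "consumer-secret")  # placeholders
handshaker = Handshaker("https://meta.wikimedia.org/w/index.php", consumer_token)

@app.route("/login")
def login():
    redirect_url, request_token = handshaker.initiate()
    session["request_token"] = tuple(request_token)  # (key, secret)
    return redirect(redirect_url)

@app.route("/oauth-callback")
def oauth_callback():
    stored = session.get("request_token")
    # Restart the login flow instead of crashing when the handshake state is
    # gone or the wiki did not send the expected OAuth parameters.
    if stored is None or b"oauth_token" not in request.query_string:
        return redirect("/login")
    access_token = handshaker.complete(RequestToken(*stored), request.query_string)
    identity = handshaker.identify(access_token)
    session["user"] = identity["username"]
    return redirect("/")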
@Legoktm That seems like an excellent thing to do! How do we do that?
Mon, Mar 27
Further work from today suggests this is still happening some of the time. It was working earlier today, but now it is mostly back to where it was: I cannot access one existing cluster or create a new cluster. One of the existing clusters I can still access from the bastion, though with some errors:
$ kubectl get nodes
E0327 15:14:32.941043  398223 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0327 15:14:32.968114  398223 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0327 15:14:32.971772  398223 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0327 15:14:32.975277  398223 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME                                    STATUS   ROLES    AGE   VERSION
paws-dev-123-34-yt34gl2fr7xa-master-0   Ready    master   54d   v1.23.15
paws-dev-123-34-yt34gl2fr7xa-node-0     Ready    <none>   46d   v1.23.15
paws-dev-123-34-yt34gl2fr7xa-node-1     Ready    <none>   46d   v1.23.15
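Those E0327 lines are kubectl failing to reach the metrics.k8s.io/v1beta1 aggregated API (metrics-server), not the cluster API itself. A quick way to see which aggregated APIs are unhealthy, sketched with the Python kubernetes client and assuming the same kubeconfig kubectl uses on the bastion:

# Hedged sketch: list aggregated APIServices and print any that are not
# reporting Available, which is what triggers the "couldn't get resource list
# for metrics.k8s.io/v1beta1" warnings. Requires the kubernetes Python client
# and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # same credentials kubectl uses

for svc in client.ApiregistrationV1Api().list_api_service().items:
    conditions = svc.status.conditions or []
    available = any(c.type == "Available" and c.status == "True" for c in conditions)
    if not available:
        message = conditions[0].message if conditions else "no status reported"
        print(f"{svc.metadata.name}: NOT available ({message})")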
All seems to be working again. I can get to the existing clusters, and build new ones. Thanks!
Thu, Mar 23
Wed, Mar 22
In T135908#8718121, @Nikki wrote:
In T135908#6827926, @Krinkle wrote: As I understand it, past versions of draft queries (not published queries) are not kept. So the dozen drafts I have at https://quarry.wmflabs.org/Krinkle have (by my choosing) already been broken for all intents and purposes.
They are kept; you can click the history button to see the previous versions of the query and their results.
Tue, Mar 21
Mon, Mar 20
This is resolved by using a credential with the "Unrestricted (dangerous)" checkbox selected. Thank you, @taavi, for the solution.
Sat, Mar 18
I'm seeing the same problem in codfw1dev when I try to create a cluster with terraform. I probably haven't set up the application credential correctly, though I would need help identifying how to do so. I can create a VM, so the credential has some access.
Fri, Mar 17
Wed, Mar 15
Could this be the cause of T332194?
Perhaps related to T330759
Looking a little closer, it would appear that libicu-dev is already installed:
@PAWS:~$ dpkg -l | grep icu
ii  icu-devtools      70.1-2  amd64  Development utilities for International Components for Unicode
ii  libicu-dev:amd64  70.1-2  amd64  Development files for International Components for Unicode
ii  libicu70:amd64    70.1-2  amd64  International Components for Unicode
I'm wondering if this has to do with libicu70 being installed when perhaps libicu66 is what is expected?
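One quick sanity check from inside the PAWS image is to ask the Python ICU bindings which ICU they are actually linked against. This is a sketch assuming PyICU is the consumer involved, which is an assumption since the failing package isn't named above:

# Hedged sketch: print the ICU version the PyICU bindings are using at runtime,
# to compare against the dpkg-installed libicu70. Assumes PyICU ("icu") is the
# package that depends on libicu here.
import icu

print("ICU version in use:", icu.ICU_VERSION)       # e.g. "70.1"
print("Unicode data version:", icu.UNICODE_VERSION)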
Cluster failing to build:
| status_reason | Failed to create trustee or trust for Cluster: e850f43f-59eb-48de-a65d-022397d96baf |
Settled on:
openstack coe cluster template create paws-k8s23 --image magnum-fedora-coreos-34 --external-network wan-transport-eqiad --fixed-network lan-flat-cloudinstances2b --fixed-subnet cloud-instances2-b-eqiad --dns-nameserver 8.8.8.8 --network-driver flannel --docker-storage-driver overlay2 --docker-volume-size 80 --master-flavor g3.cores2.ram4.disk20 --flavor g3.cores8.ram32.disk20 --coe kubernetes --labels kube_tag=v1.23.15-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true --floating-ip-disabled
for production
Tue, Mar 14
In T188684#8693913, @Dominicbm wrote: Based on this comment, I am confused about whether the timeout was ever increased. I recently experienced a session getting killed after only a few hours, but I don't have exact timestamps.
Mon, Mar 13
created temporary-redirect-host.paws.eqiad1.wikimedia.cloud
Fri, Mar 10
It's done!
Thu, Mar 9
Tue, Mar 7
This is an effect of OS_PROJECT_ID not being set to the project containing the cluster.
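A small pre-flight check can catch that before running terraform or openstack commands; the expected project ID below is a placeholder, not the actual project:

# Hedged sketch: fail fast when OS_PROJECT_ID is not scoped to the project
# that owns the cluster. EXPECTED_PROJECT_ID is a placeholder value.
import os
import sys

EXPECTED_PROJECT_ID = "project-that-owns-the-cluster"  # placeholder

actual = os.environ.get("OS_PROJECT_ID")
if actual != EXPECTED_PROJECT_ID:
    sys.exit(f"OS_PROJECT_ID is {actual!r}, expected {EXPECTED_PROJECT_ID!r}; "
             "openstack/terraform will not see the cluster from this project scope.")
print("OS_PROJECT_ID matches the cluster's project")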
Mon, Mar 6
We can test out the PR locally when it has finished building, though we probably won't deploy until next week when PAWS is back to one cluster.
The PR looks great, thanks for putting it in. I'm going to merge it after PAWS settles on one cluster next week.
https://lists.wikimedia.org/hyperkitty/list/cloud-announce@lists.wikimedia.org/thread/5RSH7DAGGIYHCSFBCV62COONUYN73E7K/
Fri, Mar 3
The NFS server is allowing only nodes that have the "default" security group. I'm asking on the mailing list whether there is an option to get Magnum to assign additional security groups to nodes.
Thu, Mar 2
PR 263 appears to be working, though there is one change to OpenRefine in the new cluster that is not in the code in 263. Is this blocking anything? If not, I can test it fully and deploy it during the week of the 13th, when the old cluster is removed. If it is blocking something, I think we can still get it in; it would just be a little more exciting :p
Feb 28 2023
Some notes in https://wikitech.wikimedia.org/wiki/PAWS/Admin on how to add a key
PR 16 works great! Thanks, @taavi!
@samuelguebo This works great, thanks!
Feb 27 2023
Superset seems to have caching: https://superset.apache.org/docs/installation/cache/
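For reference, Superset wires its caching through Flask-Caching settings in superset_config.py; a minimal sketch with a Redis backend (the Redis URLs, prefixes, and timeouts are placeholder values, not anything configured here):

# Hedged sketch of a superset_config.py fragment: metadata and chart-data
# caches backed by Redis via Flask-Caching. All values below are placeholders.
CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 300,              # seconds
    "CACHE_KEY_PREFIX": "superset_metadata_",
    "CACHE_REDIS_URL": "redis://localhost:6379/0",
}

DATA_CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 3600,
    "CACHE_KEY_PREFIX": "superset_data_",
    "CACHE_REDIS_URL": "redis://localhost:6379/1",
}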