
"413 Request Entity Too Large" / Kubernetes 2020?
Closed, ResolvedPublic


The QuickStatements tool used to be able to process large texts. Since I switched to the new "kubernetes 2020", many people keep getting a 413 error even on medium-size texts, which I can confirm.

I have tried to change upload_max_filesize and post_max_size in PHP, to no avail.
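For reference, the relevant PHP directives are upload_max_filesize (the directive is not named upload_max_size) and post_max_size. A minimal override might look like this, assuming the tool can ship a .user.ini next to its PHP entry point:

```ini
; hypothetical .user.ini alongside the tool's PHP files
; post_max_size must be at least as large as upload_max_filesize
upload_max_filesize = 128M
post_max_size = 128M
```

Note that these only matter once the request reaches PHP; a 413 emitted by a proxy in front of the webservice will never be affected by them.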

Can this be configured somewhere by myself, or could the (default?) configuration be changed?

Event Timeline

Update: Setting server.max-request-size = 1000000 in $HOME/.lighttpd.conf and restarting the webservice did not help.
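As a side note, lighttpd's server.max-request-size is measured in kilobytes, so the value tried above should already allow roughly 1 GB request bodies. A sketch of that config:

```
# $HOME/.lighttpd.conf
# server.max-request-size is in kilobytes, so 1000000 permits ~1 GB bodies.
# If the 413 persists with this set, the limit is being hit upstream of lighttpd.
server.max-request-size = 1000000
```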

bd808 triaged this task as High priority.Jan 25 2020, 12:06 AM
bd808 added a subscriber: bd808.

This sounds like it could be something in the new ingress layer that sits in-between the front proxy which we have had for years and the lighttpd process running inside a Kubernetes pod. The legacy Kubernetes cluster did not have this second layer of proxy so this would be a reasonable place for us to start looking for new issues.

For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size.

To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define the annotation nginx.ingress.kubernetes.io/proxy-body-size (for example, 8m).
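Per the nginx-ingress docs, there are two places to raise the limit; a sketch of both (values illustrative):

```yaml
# cluster-wide default, set in the nginx-configuration ConfigMap's data section
data:
  proxy-body-size: 8m
---
# per-Ingress override via annotation
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
```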

We have client_max_body_size 128m; in dynamicproxy/urlproxy config, so I'm going to set that same default across the nginx-ingress vhosts using the ConfigMap option mentioned in the quoted docs.

That's a bag of all kinds of possibilities.

First: @Magnus, is moving to the new cluster the only thing that changed before the error appeared?

Obviously there are code changes one could make on the PHP side to work around this. I have some ideas about the new ingress layer to check, but I want to rule out other changes (new PHP version, etc.) first. lighttpd and all that shouldn't have changed unless the PHP version did.

On the ingress end, I'm looking at a few settings.

Hot patched ConfigMap:

$ sudo -i kubectl get configmap nginx-configuration -n ingress-nginx -o yaml
apiVersion: v1
data:
  proxy-body-size: 128M
  use-forwarded-headers: "true"
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-07T13:11:19Z"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx
  resourceVersion: "17204804"
  selfLink: /api/v1/namespaces/ingress-nginx/configmaps/nginx-configuration
  uid: 2879271f-4129-47f5-8b5d-d37ab92aa0ec
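A hot patch like the one above could be applied with kubectl patch; this is a sketch (it needs cluster admin access, so the kubectl line is shown commented out and only the JSON payload is validated locally):

```shell
# Merge-patch payload matching the ConfigMap data shown above
PATCH='{"data":{"proxy-body-size":"128M","use-forwarded-headers":"true"}}'

# Sanity-check the payload is valid JSON before sending it anywhere
echo "$PATCH" | python3 -m json.tool >/dev/null && echo "patch JSON ok"

# Requires cluster access; uncomment to apply:
# sudo -i kubectl patch configmap nginx-configuration -n ingress-nginx \
#   --type merge -p "$PATCH"
```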

Change 567167 had a related patch set uploaded (by BryanDavis; owner: Bryan Davis):
[operations/puppet@production] Toolforge: increase nginx-ingress client_max_body_size to match dynamicproxy

@Magnus, can you try a large upload to check that my change from T243580#5830975 fixed this for you? I don't have a proper dataset to feed to QuickStatements for testing.
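For anyone else wanting to reproduce, a sketch of a test: generate a QuickStatements-style payload of a few megabytes and POST it, checking whether the response code is still 413. The endpoint URL here is an assumption; adjust it to the tool's real API path.

```shell
# Build a ~6 MB batch of repeated QuickStatements commands
# (Q4115189 is the Wikidata sandbox item)
yes 'Q4115189|P31|Q5' | head -n 400000 > /tmp/qs_batch.txt
wc -c < /tmp/qs_batch.txt

# Hypothetical endpoint; uncomment and adjust to actually send it:
# curl -s -o /dev/null -w '%{http_code}\n' \
#   --data-urlencode data@/tmp/qs_batch.txt \
#   https://tools.wmflabs.org/quickstatements/api.php
```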

Change 567167 merged by Bstorm:
[operations/puppet@production] Toolforge: increase nginx-ingress client_max_body_size to match dynamicproxy

I made a large upload with QuickStatements and it worked.
