Hi, I'm experimenting with a new tool named similarity, which attempts to load a big dataset into memory at startup. It never finishes starting up, and after a few minutes I see the following in uwsgi.log:
detected binary path: /usr/bin/uwsgi-core
your processes number limit is 63707
your process address space limit is 4294967296 bytes (4096 MB)
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
SIGINT/SIGQUIT received...killing workers...
worker 1 buried after 1 seconds
worker 2 buried after 1 seconds
worker 3 buried after 1 seconds
worker 4 buried after 1 seconds
goodbye to uWSGI.
When running it locally on my laptop, I see the tool using 3.6 GiB of virtual memory, of which 1.2 GiB is resident (RSS). It seems plausible that on the server it's hitting the 4 GiB address space limit reported in the log, and that the SIGINT/SIGQUIT is a consequence of that, but please correct me if this assumption is incorrect.
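To make that hypothesis testable, here's a rough sketch of the check I have in mind (assuming Linux; RLIMIT_AS isn't reliably enforced on macOS). It lowers the process's own address-space limit to the same 4294967296 bytes from the log, then allocates in chunks as a stand-in for the real dataset load:

```python
import resource

FOUR_GIB = 4 * 1024 ** 3  # the 4294967296-byte cap reported in the log

# Lower this process's own address-space limit (soft limit only).
_, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (FOUR_GIB, hard))

def vm_peak():
    """Peak virtual/resident usage from /proc/self/status (Linux-only)."""
    with open("/proc/self/status") as f:
        return [l.strip() for l in f if l.startswith(("VmPeak:", "VmHWM:"))]

# Stand-in for the real dataset load: grab 64 MiB chunks until the cap bites.
chunks = []
try:
    while True:
        chunks.append(bytearray(64 * 1024 ** 2))
except MemoryError:
    n = len(chunks)
    chunks.clear()  # free the chunks so the prints below can't also fail
    print(f"MemoryError after ~{n * 64} MiB of chunk allocations")
    print(*vm_peak(), sep="\n")
```

Replacing the allocation loop with the actual dataset load under the same cap would be the real test: if it dies the same way, the limit is the culprit.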
I do plan to make the tool more memory-efficient eventually, but if memory is indeed the issue, would it be possible to raise the limit so I can keep prototyping? Judging by the local memory consumption, a small bump would probably be enough, but ideally I'd like, say, 10 GiB so I won't have to revisit this any time soon.
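For what it's worth, uWSGI reports that line from RLIMIT_AS. If the cap is being set through uWSGI's limit-as option, rather than, say, systemd's LimitAS= or a ulimit in an init script (an assumption on my part, since I can't see the server config), the bump might be as small as:

```ini
[uwsgi]
; limit-as takes megabytes; 10240 MB = 10 GiB
; (a sketch only; to be merged into the real config)
limit-as = 10240
```

If it's set elsewhere, e.g. via systemd, I believe the equivalent would be LimitAS=10G in the unit file.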