There is no connection limit in Parsoid: it keeps asynchronously accepting requests until it either runs out of memory (stalling under emergency GC, or crashing outright) or the event loop becomes so slow that the automatic accept() handler stalls. I don't think this is a good failure mode.
Ideally, requests would be queued while keeping memory usage low. With a patch to node, we could lean on the kernel's listen backlog by skipping the accept() call once the connection limit is reached. Without patching node, we could accept requests and put them into a job queue that a worker drains with limited concurrency, so a new parse operation would not start until an earlier one completes.
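A minimal sketch of what such a job queue could look like (the class name and API here are hypothetical, not anything Parsoid currently has): requests are enqueued as they arrive, and at most `concurrency` jobs run at once, so queued requests cost little memory while they wait.

```javascript
// Hypothetical concurrency-limited job queue. At most `concurrency` jobs
// run at once; the rest wait in a cheap in-memory list.
class JobQueue {
  constructor(concurrency) {
    this.concurrency = concurrency;
    this.running = 0;
    this.pending = [];
  }

  // `job` is a function returning a promise (e.g. a parse operation).
  // Returns a promise that settles when the job eventually runs.
  push(job) {
    return new Promise((resolve, reject) => {
      this.pending.push({ job, resolve, reject });
      this._next();
    });
  }

  _next() {
    while (this.running < this.concurrency && this.pending.length) {
      const { job, resolve, reject } = this.pending.shift();
      this.running++;
      Promise.resolve()
        .then(job)
        .then(resolve, reject)
        .finally(() => {
          this.running--;
          this._next(); // a finished job frees a slot for the next one
        });
    }
  }
}
```

An HTTP handler would then do something like `queue.push(() => parse(article)).then(sendResponse)` instead of starting the parse directly.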
Node has net.Server.maxConnections, but its behaviour is not ideal -- it accepts and then immediately resets the connection, causing a hard failure upstream, similar to MySQL. Maybe we could use it in conjunction with a job-queue type solution: we can accept thousands of requests, we just can't parse thousands of large articles at the same time.