Since the deployment of MediaWiki 1.36/wmf.13, text extracted from the text layer of DjVu files is subjected to HTML entity encoding for even some basic ASCII characters. This shows up to users as `&#32;` (space), `&#39;` (single quote), and `&#10;` (newline) in the ProofreadPage page editing interface in the Page: namespace (see example page).
There should generally be no HTML entity encoding visible here: the displayed text in the WikiEditor should be very nearly the raw content of the file's text layer.
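For reference, a single round of HTML entity decoding recovers the expected text, which is what makes this look like an extra encode (or a missing decode) somewhere in the pipeline. A minimal Python sketch, with a made-up sample string:

```
import html

# Text as it currently appears in the Page: editor (hypothetical sample):
seen = "Some&#32;word&#39;s&#10;next line"

# One round of numeric-character-reference decoding restores the raw text
# layer content that the editor is supposed to show.
print(html.unescape(seen))   # -> "Some word's\nnext line"
```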
So far as I know, the provenance here is that when a file is uploaded its text layer is extracted by MediaWiki-DjVu and stuffed into some field of the image metadata in the database, lightly wrapped in an XML structure (see T192866). When a user opens a (redlink) wikipage in the Page: namespace, ProofreadPage extracts that field, removes the XML, and nukes a few select character codes (CR, LF, VS, PB; control characters in this context, and not obviously relevant to this issue; see T230415) before preloading the text into the WikiEditor text field.
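To make that pipeline concrete, here is a rough Python sketch of the preload path as I understand it. Everything here is illustrative: the real code is PHP in MediaWiki-DjVu and ProofreadPage, and the XML shape, function name, and stripped-character set below are assumptions, not the actual implementation:

```
import re

def preload_text(stored_metadata: str, page_number: int) -> str:
    # 1. Upload time: MediaWiki-DjVu put the extracted text layer into the
    #    image metadata, wrapped in per-page XML elements (cf. T192866).
    #    The attribute-based shape here is a simplification.
    pages = re.findall(r"<PAGE\b[^>]*value=\"(.*?)\"", stored_metadata, re.S)
    text = pages[page_number - 1] if 0 < page_number <= len(pages) else ""

    # 2. Preload time: ProofreadPage unwraps the XML and drops a few control
    #    characters (cf. T230415); this set is a stand-in, not the real one.
    for ch in ("\r", "\x0b", "\x1e"):
        text = text.replace(ch, "")

    # If entity encoding sneaks in at step 1, or a decode step is dropped
    # here, references like &#39; reach the WikiEditor verbatim.
    return text
```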
Somewhere in one of these steps, a change seems to have introduced HTML entity encoding that was not there before.
The obvious suspects here are MediaWiki-DjVu, ProofreadPage, and WikiEditor, with MediaWiki-DjVu being the most probable. I'm not aware of changes to any of these in this release, and I've found nothing in the release notes. I know some work has been done on using Remex for character entity reference support and validation in MW over the last two-ish years, but I'm not aware of any recent such changes (and, again, nothing in the release notes). No clue who owns Remex and its uses. Parser? VE?
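One way to narrow this down: if the numeric references are already present in the stored image metadata, the encoding happens at extraction/storage time in MediaWiki-DjVu; if the stored text is clean, the blame shifts to ProofreadPage or WikiEditor at preload time. A hedged sketch of that check via the imageinfo API (the wiki URL and file title are placeholders):

```
import requests

resp = requests.get(
    "https://en.wikisource.org/w/api.php",
    params={
        "action": "query",
        "titles": "File:Example.djvu",   # placeholder title
        "prop": "imageinfo",
        "iiprop": "metadata",
        "format": "json",
    },
).json()

# formatversion=1 keys pages by pageid; grab the single result.
page = next(iter(resp["query"]["pages"].values()))
metadata = str(page["imageinfo"][0]["metadata"])
print("entities present in stored metadata:", "&#" in metadata)
```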
(CC @Soda, who has been working on PRP recently and might know off-hand whether there have been any relevant changes to the code there.)