Sun, Oct 4
Some more information on WASM here - https://webassembly.org
I am not aware of any yet, so if this ticket is kept open it should be marked as low priority.
Sat, Oct 3
WASM = WebAssembly.
Sep 20 2020
Regarding the recent template fixes I've been providing: the protected page is listed first, followed by the sandbox. If you want fuller diffs for the pages before they were sandboxed, please LMK.
Sep 19 2020
Sep 15 2020
Sep 13 2020
Sep 5 2020
Aug 21 2020
Confirming that this issue is still present as of August 2020.
Aug 19 2020
Kahholz: Would you be interested in porting this Gadget to English Wikisource?
Jul 5 2020
If you want to set up a mechanism to pull HUGE quantities of JPEG/JPEG2000/TIFF scans from IA, I'm more than open to the suggestions you mention here. :)
Jul 4 2020
Jul 3 2020
The ticket at Ghostscript suggested this: https://bugs.ghostscript.com/show_bug.cgi?id=702531#c1, which is a tweak to the invocation used to render the PDF. It also suggested using a tool called convert. (On Wikimedia, would this be handled by the Thumbor library?)
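For reference, a rendering invocation along these lines can be built as below. This is a hedged sketch using standard Ghostscript options, not necessarily the exact tweak from the upstream ticket; the file names are illustrative.

```python
# Sketch: build a Ghostscript argv for rendering one page of a PDF to PNG at
# a chosen DPI. Run it with subprocess.run(cmd, check=True) if gs is installed.
def gs_render_command(pdf_path, out_path, page=1, dpi=300):
    """Return the argv list for rendering `page` of `pdf_path` at `dpi`."""
    return [
        "gs",
        "-dNOPAUSE", "-dBATCH", "-dSAFER",  # non-interactive, restricted mode
        "-sDEVICE=png16m",                  # 24-bit colour PNG output
        f"-r{dpi}",                         # rendering resolution in DPI
        f"-dFirstPage={page}",
        f"-dLastPage={page}",
        f"-sOutputFile={out_path}",
        pdf_path,
    ]
```

Raising the `-r` value here is the "higher DPI" knob discussed in this ticket.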
Jul 2 2020
Question: When it resizes, is Thumbor doing a rescale or a resample?
From some investigation of DPI levels using IrfanView, I think this 'bug' can be solved by upping the DPI for images generated from PDFs.
Is this also related to T224355?
Jul 1 2020
@AntiCompositeNumber: Could you try some test renderings at 150, 300, 600, and 1200 DPI respectively?
Option 5: Provide an option in the image handling at MediaWiki to render at a higher DPI. (I have a strong hunch these are likely to have been scanned at 300/600 DPI at least, if not higher. I wonder what is typical in an archival situation?)
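One way to produce the comparison renderings asked for above is to generate one Ghostscript command per DPI setting. A sketch, assuming Ghostscript is available; output file names are illustrative:

```python
# Sketch: build one Ghostscript argv per DPI so the same page can be rendered
# at each setting and compared. Execute each command with
# subprocess.run(cmd, check=True) on a machine with gs installed.
def dpi_sweep_commands(pdf_path, page=1, dpis=(150, 300, 600, 1200)):
    commands = []
    for dpi in dpis:
        commands.append([
            "gs", "-dNOPAUSE", "-dBATCH", "-sDEVICE=png16m",
            f"-r{dpi}",
            f"-dFirstPage={page}", f"-dLastPage={page}",
            f"-sOutputFile=render-{dpi}dpi.png",  # one output image per DPI
            pdf_path,
        ])
    return commands
```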
Logged upstream as https://bugs.ghostscript.com/show_bug.cgi?id=702531
- Batch uploading the DjVu equivalents for the PDF files is feasible, if automated (anyone want to write a script?). However, in some instances I wonder if the DjVus at IA are generated from the PDFs, and thus might inherit the related issues.
(I'm also checking the output in gsview, which is taking for... absolutely... e..v..e..r to render even single pages.)
Aklapper: Internally, PDF.js decodes JPEG images so it can render them in the viewer.
[strike]The viewer code Mozilla Firefox appears to be using is https://github.com/mozilla/pdf.js (Apache license).[/strike]
@Aklapper : Thanks for the update.
Fae: I've also noted a quality issue with some PDF uploads, this being one,
where the page image displayed in the interface is of lower quality than that in the actual PDF (compared to a direct display of the page in Adobe Reader, which was also unable to export a usable image to older image viewer tools).
Jun 24 2020
Jun 22 2020
Not necessarily a check suitable for all uploads, but comparing a new upload against a hash for an 'office'-actioned removal might be useful. You could compute a SHA-1 on a new upload and compare it against those of previously removed files (as is done when checking for duplicate uploads). This could prevent accidental re-upload of previously removed material, with a suitable warning to the uploader.
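The suggested check can be sketched as follows. The digest store and the point where the hook runs are assumptions for illustration; only the hashing step mirrors what MediaWiki already does for duplicate detection.

```python
import hashlib

def sha1_of(data: bytes) -> str:
    """SHA-1 hex digest of a file's bytes."""
    return hashlib.sha1(data).hexdigest()

def is_previously_removed(upload_bytes: bytes, removed_digests: set) -> bool:
    """True if this upload matches a previously removed file, in which case
    the uploader could be shown a warning before the file is accepted.
    `removed_digests` is an assumed store of SHA-1s of office-removed files."""
    return sha1_of(upload_bytes) in removed_digests
```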
Jun 20 2020
I'd like to be able to do a "prefix" filter as well. On Wikisource, groups of Lint errors typically need to be fixed across a number of Page:s for a given Index:, and at present the Special:LintErrors page doesn't let me search at that level of "granularity".
Jun 18 2020
Update: All the files in the relevant category should now be uploaded, so closing out this request, but if additional volumes or bad uploads come to light, feel free to re-open.
Jun 14 2020
I wasn't sure what tickets had been filed regarding the large file upload issues from Special:Upload.
I have not filed a specific phabricator ticket for the Special:Upload upload by URL failing (in respect of large uploads), as I was told this was a known issue already.
Approximately 120 volumes remain, mostly pre-1950.
This is so that there is some standardisation in respect of naming and metadata. (I hope the use of template-style syntax for parameters is acceptable.) Is this detailed enough?
Fae may be able to advise on which volumes are still to be uploaded, as they have access to the logs of their own scripts.
It's taken multiple attempts to get https://archive.org/details/catalogofcopyrig263libr uploaded.
The PDF just doesn't want to upload RELIABLY, even when I use a chunked upload...
It fails repeatedly...
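When single chunks fail transiently, retrying just the failed chunk rather than the whole file can help. A minimal sketch of that idea; `send_chunk` is a hypothetical callback (e.g. something wrapping the upload endpoint with an offset), not a real API here:

```python
# Sketch: split a large payload into fixed-size chunks and retry each chunk
# independently, so one transient failure doesn't restart the whole upload.
def chunked_upload(data: bytes, send_chunk, chunk_size=1 << 20, max_retries=3):
    """Call send_chunk(offset, chunk) for each chunk of `data`, retrying a
    failed chunk up to max_retries times before giving up."""
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        for attempt in range(max_retries):
            try:
                send_chunk(offset, chunk)
                break  # this chunk succeeded; move to the next offset
            except IOError:
                if attempt == max_retries - 1:
                    raise  # exhausted retries for this chunk
```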
Jun 13 2020
Jun 12 2020
Rendering or transcoding an uploaded TeX file (or markup) to PDF (on the server side) would also be an acceptable compromise, and I have amended the task description accordingly.
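The server-side step described above would presumably shell out to a TeX engine. A hedged sketch using standard pdflatex options; how MediaWiki would actually integrate this is an open question, and the paths are illustrative:

```python
# Sketch: build a pdflatex argv for rendering an uploaded .tex file to PDF.
# Run with subprocess.run(cmd, check=True) on a host with TeX installed.
def pdflatex_command(tex_path, out_dir="/tmp/render"):
    return [
        "pdflatex",
        "-interaction=nonstopmode",       # never stop to prompt on errors
        "-halt-on-error",                 # fail fast instead of partial output
        f"-output-directory={out_dir}",   # keep build artifacts out of the way
        tex_path,
    ]
```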
May 20 2020
(with apologies for the convoluted process involved in providing the expanded explanations)
With BR you get:
Someone with appropriate expertise (I don't have it) should sit down and fully document what's actually happening in the parser in various instances.
That would place a hard newline (<br>) into the output, potentially resulting in undesirable additional white-space, and
crucially would not necessarily insert the context change that the parser (in its current form) needs to see in order to correctly insert the P opening and closing tags.
@Xover: It's not just a DIV vs span ...
May 19 2020
May 18 2020
Thank you for the re-format :)
I've removed the followup comment you mention. If you think the UL->DIV->TABLE issue should be moved to a new ticket so this one doesn't become a "laundry list", I'm happy to oblige (once I have a clearer idea of what I think might be going on).
Which do you suggest (re: project tags)? The one I added for Wikisource was removed.
Here, as far as I can determine, the only additional whitespace is the line feeds between the template start, the nominal text, and the template end call.
May 15 2020
It's a concern, given the tools that might be needed to provide the support for that format.