Apr 22 2020
Jun 14 2017
In my experience (and I have somewhat failed at this in the past) the best method would be to start a simple 'discussion' about the perceived issue, suggest a solution, and then ask the community to debate the matter. Once there is at least some degree of consensus about the correct solution, then ask the community to actually vote on approving it.
@CKoerner_WMF I think the most important thing that needs to be understood here is that any decision to enable mp3 uploading on Commons that is not implemented as the result of an explicit decision by the Commons community is likely to end badly. Now that the legal issues have been cleared up, and it's established that the technical implementation is not a problem, most of this discussion really needs to move onwiki, so that the community as a whole will have an 'investment' in the process of deciding how it is implemented. If a plan such as yours (which is not bad) is presented to the community as a 'fait accompli', even if merely for discussion, there will be grumbling.
May 30 2017
@fgiunchedi Thanks for jumping onto this so quickly. I'll start working on resetting the others, and let you know if the problem still exists.
May 29 2017
@fgiunchedi Much smaller files have the same problem. See https://commons.wikimedia.org/wiki/File:Janusz_Cedro_live_in_concert-_Amazing_Grace.ogv for example, which is only 17MB.
Some new examples (with audio files)...
@Koavf TinEye reverse search (which is a gadget on Commons) handles cropped images quite well. It's not available to us as an 'automatic' method, due to cost, but they allow a number of free searches per day....using the gadget, they are counted against the specific requesting user, not the WMF or labs.
@Fastily I think the main concern here, really, is simply Commons.... for any other wiki, the volume of 'valid' audio or video uploads should be sufficiently low that abuse would be easily recognizable.
I fixed two of those by uploading a new copy from youtube (the source), and got the author (Anna Frodesiak) to upload a new copy of another one.
May 17 2017
While Commons isn't specifically intended for reading books, a nice way to do so would be a definite plus (and the IA's tool is awesome). Perhaps it could be set up to load instead of MediaViewer when looking at multipage files.
May 8 2017
@matmarex That's truly esoteric.
Apr 25 2017
Apr 21 2017
@Fae https://commons.wikimedia.org/wiki/Commons:FAQ#What_does_the_upload_error_This_file_contains_HTML_or_script_code_that_may_be_erroneously_interpreted_by_a_web_browser._mean.3F is relevant... the real problem, apparently, is that it's not detecting them all.
Apr 20 2017
I'll give some examples when I next see it.
@brion Dug this up out of the old stuff. Based on recent logs, this still happens... since we repurged all the file pages of videos with no transcodes, I have been watching the move log and resetting the recent moves.
Apr 17 2017
This is now down to less than 400 transcodes.
Apr 16 2017
To clarify my last comment, if we enable mp3 we are going to be swamped with massive amounts of copyvios.
Apr 11 2017
The enwiki article claims the last US patent on mp3 expires on 16 April 2017, though it's not referenced other than to the patents themselves.
All done, and in the correct category now.
@Ivanhercaz I wrote a quick script to use the API to 'forcelinkupdate' the 400-odd files. It's nearly done now.
@Ivanhercaz Purging the categories does not make any difference, because category membership is a property of the file pages themselves, not the categories. The solution is to either wait for the job queue to catch up, or to force it by performing a null edit to each file page. See https://www.mediawiki.org/wiki/Manual:Purge#Null_edits.
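The forced-update approach described above can be sketched as follows: a minimal, unauthenticated example using the MediaWiki `action=purge` API with `forcelinkupdate`, which rebuilds a page's links tables the way a null edit would. The batch size and the helper names are my assumptions, not the original script.

```python
import json
import urllib.parse
import urllib.request

API = "https://commons.wikimedia.org/w/api.php"

def purge_params(titles):
    """Build the action=purge request; forcelinkupdate rebuilds the links
    tables (like a null edit), which is what refreshes category membership."""
    return {
        "action": "purge",
        "titles": "|".join(titles),
        "forcelinkupdate": 1,
        "format": "json",
    }

def force_link_update(titles, batch=50):
    """POST purges in batches (50 is the usual non-bot title limit)."""
    for i in range(0, len(titles), batch):
        data = urllib.parse.urlencode(purge_params(titles[i:i + batch])).encode()
        with urllib.request.urlopen(urllib.request.Request(API, data=data)) as r:
            json.load(r)  # raises on a malformed response
```

Note that `action=purge` must be sent as a POST; a plain GET purge is rejected by the API.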
Apr 5 2017
https://commons.wikimedia.org/wiki/File:Walking_Keage_Incline.webm reappeared, and has been reset
Apr 1 2017
Of the ones I listed before (videos), these are now back...
The current list of affected files (at least, of the ones with a failed transcode that makes them apparent) is....
@Ankry Just now, for me at least, that link works. The errors seem to be intermittent.
I reset the transcodes that had been running for egregiously long periods... this included ones running for 10+ hours that should have run in a minute or two. The transcodes then showed up in https://quarry.wmflabs.org/query/17726. The list was initially 90 or so... most have since 'reappeared', and I have then reset them to get the transcodes done.
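For reference, TimedMediaHandler exposes the reset operation used here as the `transcodereset` API action. A minimal sketch of the request parameters (the example title, the token value, and the assumption of a logged-in session holding a CSRF token are mine for illustration):

```python
def transcodereset_params(title, csrf_token, transcode_key=None):
    """Parameters for TimedMediaHandler's action=transcodereset, which
    clears a transcode's state so the job is re-queued. A transcodekey
    (such as '720p.webm') limits the reset to one derivative; omitting
    it resets every transcode of the file."""
    params = {
        "action": "transcodereset",
        "title": title,
        "token": csrf_token,
        "format": "json",
    }
    if transcode_key is not None:
        params["transcodekey"] = transcode_key
    return params

# Hypothetical file and token, for illustration only:
params = transcodereset_params(
    "File:Example_video.webm", "abc123+\\", transcode_key="720p.webm")
```

Like purge, this must be sent as a POST, and it requires the deleter/reset rights that the action checks on the wiki side.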
The video 'works' because when you simply play it, you view some transcode based on your preferences. If you try to view the original video by clicking the link directly under the thumbnail, you get a 404.
Mar 28 2017
Mar 19 2017
I have actually been doing this (through the API) in reasonable-sized batches for several days now.
Feb 18 2017
Feb 17 2017
But I would think that the main evidence for such a consensus would be that such files have been marked for 'delayed speedy' for close to two years now.
@Aklapper There was an RFC in 2015... https://commons.wikimedia.org/wiki/Commons:Requests_for_comment/Flickr_and_PD_images
@matmarex If the UW is actually requiring the uploader to input a license tag other than the PDM, then it's probably fine.... this came up at a DR, and possibly I misunderstood exactly what the tool was doing. There is a consensus on Commons that the PDM 'in and of itself' is not an acceptable license, but it sounds like you are saying that the tool requires the uploader to do what the template itself would ask for. If that's the case, then we can close this as invalid I guess.
Feb 16 2017
Feb 15 2017
Feb 10 2017
Leaflet (using OSM data) would be a better solution.
Feb 9 2017
@TheDJ See the results of https://quarry.wmflabs.org/query/14916. The first 40 or so are similar issues (transcodes of redirects that won't go away). That list originally had, IIRC, about a thousand entries, but the vast majority were, like I said, 'easily fixable'. I haven't looked at the ones that showed up on it since Dispenser reset everything.
Feb 8 2017
There are 139k in the queue, and 112k uninitialized... that's surely the cause.
@TheDJ Steinsplitter had moved a 'large' number of videos from .ogg to .ogv, and for these two there were entries left in the transcode table for the redirects. Back when I started working on cleaning up the many transcode issues months ago, there were many such files... it seemed to be because the file was moved 'while' it had running transcodes, and those transcodes then errored out because the file no longer existed, or because the file was deleted while they were still running.
Jan 26 2017
The relevant table line is https://quarry.wmflabs.org/query/15801 <- error message was 'timeout'
Jan 25 2017
@TheDJ The search checks for the existence of 'not null' in transcode_time_success and transcode_time_error. The ones that were originally in the report, other than the 35 there now, were all from 'years ago', and the ones I checked at the time were rather 'obviously' deleted shortly after being uploaded. As it stands now, if the file is deleted or renamed shortly after upload, the entries in the transcode table go away, so that seems to have been fixed long ago.
I was asked to mention this here, although I'm unsure if it's actually a 'replication' issue.
There is an issue that occurs sometimes (such as when the servers are restarted) where transcodes end up with both a 'success' and a 'failure' status in the SQL... they are shown on the file pages as successfully transcoded, but it seems that, in general, they were not.
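The dual-status rows can be spotted with a query like the one below, shown here against an in-memory SQLite mock of the relevant columns of TimedMediaHandler's `transcode` table. The real table has more columns, and the row data here is invented purely for illustration.

```python
import sqlite3

# Mock of the relevant columns of the transcode table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transcode (
    transcode_image_name TEXT,
    transcode_key TEXT,
    transcode_time_success TEXT,
    transcode_time_error TEXT)""")
conn.executemany(
    "INSERT INTO transcode VALUES (?, ?, ?, ?)",
    [
        ("Example_ok.webm", "720p.webm", "20161220103000", None),          # clean success
        ("Example_failed.webm", "480p.webm", None, "20161220103000"),      # clean failure
        ("Example_broken.webm", "720p.webm", "20161219120000", "20161219115500"),  # both set
    ],
)

# Rows where BOTH timestamps are non-NULL are the broken
# 'success and failure' entries described above.
broken = conn.execute(
    """SELECT transcode_image_name, transcode_key
       FROM transcode
       WHERE transcode_time_success IS NOT NULL
         AND transcode_time_error IS NOT NULL"""
).fetchall()
print(broken)  # → [('Example_broken.webm', '720p.webm')]
```

On Quarry the same `WHERE` clause can be run directly against the Commons replica.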
Jan 15 2017
Jan 14 2017
@brion Yes, the 'pending' queue is now empty, other than brief spikes (mainly due to spurts of large uploads, or me throwing old failed stuff back through) and it re-empties itself in a reasonable time.
Jan 13 2017
Jan 12 2017
Jan 10 2017
Just to update, the transcoders are still (as of now) mainly processing old multi-GB files.
Jan 8 2017
@tomasz I think that it's important to prevent anything but 'new uploads' being thrown in the queue at least until it's actually completely caught up, and it's hard to estimate because it will depend on just how long the remaining ones take to run. The queue is now down to 1393 transcodes, but all of this has also effectively sorted it down to many of the worst 'time-intensive' transcodes.... I have seen probably close to a dozen in there that are over 3GB in size. They will probably fail, but they need time to do so to get out of the queue.
@Jasonanaggie There are other issues with the special page not correctly showing some tasks that are actually 'running'. Transcodes are being run, and the queue is going down noticeably, it's just that a significant number of the remaining ones are multi-GB files.
Jan 6 2017
@hoo FYI, due to a recent operations issue, all of the backlogged video transcodes were booted out of the queue, and have to be restarted (at a sane rate, ofc)... I'm booting your videos back through first.
Dec 27 2016
Dec 26 2016
@elukey Just to be clear, I have not (other than possibly incidentally) been putting old failed transcodes back on the queue since the backlog exploded (my comment there was back in mid-November). I have been resetting the transcodes with 'broken' (as in, both queued and errored) DB statuses that were created when the servers were reset on the 19th, but those don't 'add anything' to the backlog.... they were already queued.
@Yann I have the impression that more action will be taken after the holidays.
Dec 22 2016
Jobs seem to be processing without problems... I have seen tasks completing successfully in significant numbers (hundreds), and very few recent failures.
@Pokefan95 When I said 1920P, https://commons.wikimedia.org/wiki/File:Moscow_Ring_Railway_full_trip_-_view_from_ES2G_train.webm is actually the one I was thinking of... which yes, is not actually 1920P (which is not a real thing), but it is a 2.5GB full-HD video... I was vaguely making the point of 'insanely large files' in a very generic manner.
Dec 21 2016
This seems to have been subsumed into later tasks. I'll reopen a new task for the specific issues I'm still seeing.
This is known, and being worked on. It's simply that the backlog became extremely large due to a high number of 'huge' uploads (1920P, and an hour long) and a (resolved) bug that caused the servers to try to start an insane number of concurrent tasks. More transcoders have been added to the cluster, and the backlog should fall over time. Transcodes marked as "Added to Job queue [INVALID] ago" are indeed being run, there are just 10k tasks in the queue.
Dec 19 2016
I appreciate the thanks... really. I'd already been working on kicking failed transcodes back through at a 'sane' rate (on a mass scale, I mean... I had eliminated the 15k 'uninitialized' transcodes, plus others that were added when the file pages were purged, presumably due to new targets being added) when this started. Since I'm not a 'coder-type', I can't really grasp the logic that is used to choose what to start next; I'm just basing my actions on the experience I have gained from actually 'using' the system. I do understand SQL, though, enough to watch the entries in the actual transcode table and how they are changed as things run.
To update, the backlog is now over 8000... when around, I have been kicking 'excessively large' (the rule of thumb I have been using is HD, and over 200MB) transcodes off the processing queue, by resetting them. This causes them to fail around half an hour later, instead of the 7+ hours it would take otherwise, and after doing so consistently for 5-6 hours the queue indeed starts to fall, but then I go to sleep and it goes back up by even more. The admin UI for TimedMediaHandler really should provide a more meaningful way to manage running transcodes.
Dec 16 2016
I'm 'owning' this because I'm taking the (annoying) responsibility of sitting on the Commons transcode queue, and trying to get everything that's 'pending' actually done.
- Video scalers are insufficiently powerful to transcode the 'very large' files people are uploading now.
- TimedMediaHandler will continue starting transcodes until the server runs out of memory, instead of considering the number of CPUs.
- Resetting a running task does not immediately kill the running transcode
- ^^ will both indicate a transcode is 'error' on the file page, and list it as 'queued' on the TMH special page.
- ^^ will (apparently if the transcode completes before it dies) sometimes result in an entry in the transcode table with both a 'time of success' and an 'error time'. Such entries are shown on the file page with a 'negative number of seconds' as how long they took to transcode.
- The entries here are not examples of this, they are mainly old entries for files that were renamed. I 'fixed' the examples that were there by resetting them.
- When a transcode is reset while running, and then 'fails', it creates an entry in the transcode table that is both 'an error' and 'queued'. This is actually useful, even though obviously not a design feature, as it has allowed removing large files from the queue without simply having them restart. When those files are 'again' reset (removing the error message) they go back to the 'head' of the processing queue, apparently... at least, they are the next started, ahead of the other 6500 pending transcodes.
- Moving files does not always remove entries from the transcode table.
- Deleting files does not always remove entries from the transcode table. Undeleting, and then redeleting, the files does.
[1:31pm] Revent: Dereckson: PLEASE do not, at this time, run more server-side video uploads.
[1:31pm] Revent: PLEASE!
[1:31pm] Revent: In brief?
[1:31pm] Revent: https://ganglia.wikimedia.org/latest/?r=month&cs=&ce=&c=Video+scalers+eqiad&h=&tab=m&vn=&hide-hf=false&m=cpu_report&sh=1&z=small&hc=4&host_regex=&max_graphs=0&s=by+name
[1:32pm] Revent: There are over 6500 transcodes sitting in the queue.
[1:33pm] Revent: Most will run for 7 hours or so, and then time out… and the machine will try to run 100 at a time, and not process ANYTHING successfully.
[1:33pm] Revent: There are a couple of tasks about needing to upgrade them.
[1:34pm] Revent: I’ve been messing with it, getting stuff to actually run, and in the process discovered several ‘more’ bugs in TimedMediaHandler.
[1:35pm] Revent: The code apparently defines ‘capacity’ as ‘amount of ram’, not ‘number of CPUs’.
(expression of dismay)
[1:35pm] Revent: You can’t multitask transcodes.
[1:36pm] Revent: (at least, not helpfully…. it’s a one-per-core thing)
[1:37pm] Revent: In brief…. I have been exploiting ‘another’ bug, by ‘resetting’ transcodes that are in the process of running, to halt the transcodes of all files over 200MB to let the servers at least catch up with the ‘normal’ business.
[1:38pm] Revent: It puts them back in the queue, but (based on how I read what it does to the DB) does not immediately halt the task, the task fails when it finds its ‘working files’ went away.
[1:39pm] Revent: It then adds an error message to the file in the transcode table, etc, which apparently prevents it from getting reprocessed even though it’s ‘queued'.
[1:39pm] Revent: It shows as ‘error’ on the file page, even tho it’s listed in the count of the queue.
[1:40pm] Revent: resetting it again makes it run, but puts it at the ‘front’ of the queue for some reason.
[1:40pm] Revent: (this has all been rather annoying, but has at least been getting ‘some’ files run, and brought the queue down some, slowly.)
[1:41pm] Revent: I’m watching what I’m actually ‘doing’ with quarry, so I actually know which ones need reset (tho they show up in the queue anyhow)
[1:42pm] Revent: But, please, please, please, don’t dump 1000 more transcodes on the queue, lol. At least not right now.
[1:43pm] Revent: Dereckson: ^ hopefully all that makes some degree of sense.
Nov 16 2016
"Maury Markowitz" was compromised on enwiki within the last hour or so, the drama continues. See https://en.wikipedia.org/wiki/Special:Contributions/Maury_Markowitz
Nov 15 2016
Just as an 'update', after being educated on how to use Quarry to look at older parts of the broken transcode list, I've been working on poking ones through the queue again. The ones it's showing me are from June of 2013... many are 'short' files, that complete successfully within less than a minute once kicked back in the queue. I've pushed several hundred (I did not keep exact track) back through already.
Nov 13 2016
Ok. As it now stands, all of the broken transcodes 'exposed' by TimedMediaHandler on Commons (I mean, the list at https://commons.wikimedia.org/wiki/Special:TimedMediaHandler ) are either...
A. Subjects of a bug that prevents 'any' successful transcode.
B. Subjects of a bug that prevents transcoding from OGV to WebM.
or C. Very long (over an hour) and large (from ~750k up to about 3G) files that have simply failed to transcode after repeated attempts, even when run 'one at a time'. These often error out after 5-6 hours.
Nov 11 2016
Since a fair number of deletions are handled from DR pages, it would be particularly nice to 'hook' this into DelReqHandler, so that a deleting admin is prompted (if deleting from that page) that transfers to fair-use wikis might be appropriate.
Nov 10 2016
https://commons.wikimedia.org/wiki/File:Oct_6,_1996_-_1st_Presidential_Debate_Clinton_%26_Dole.webm at 720P Webm, and
https://commons.wikimedia.org/wiki/File:Moscow_Ring_Railway_full_trip_-_view_from_ES2G_train.webm at 1080P ogg.