We do have the duration ahead of time since it's extracted server-side at upload time -- it's on an attribute like data-durationhint="95.937". Not sure if there's a way to pass that into video.js's controls.
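Rough, untested sketch of what I mean, in case it helps -- I'm not certain this is the supported way, but video.js's player.duration() appears to accept a value, so we could feed it the hint before metadata loads:

```
// Untested sketch: feed the server-extracted duration hint into video.js
// so the control bar can show a sensible time before metadata loads.
// Assumes the <video> element carries data-durationhint as described above.
import videojs from 'video.js';

const el = document.querySelector<HTMLVideoElement>('video[data-durationhint]');
if (el) {
  const hint = parseFloat(el.dataset.durationhint || '');
  const player = videojs(el);

  player.ready(() => {
    // Only apply the hint while the real duration is still unknown;
    // once metadata arrives, the browser-reported duration wins.
    if (!isNaN(hint) && isNaN(player.duration())) {
      player.duration(hint);
    }
  });
}
```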
Note that the new video.js-based player mode shows the remaining time on the control bar instead of the elapsed time as in the old player (which is obsolete and will not be changed):
Issue is still current; the fix was presumably undone by the rollback of 1.31-wmf.22 (T183961)?
Confirmed this is now working. :)
Per https://www.mediawiki.org/wiki/Manual:Tagline_(Site_Subtitle) it's hidden by default, and must be un-hidden manually in site CSS.
@Jan_Dittrich is this for during playback, or before clicking the play button, or both?
Wed, Feb 21
Thanks @matmarex. :) Now renders correctly, just with the unexpected initial orientation.
3D has been deployed to Commons, but not yet to the other wikis, so files can be uploaded but not yet used in articles. It'd be real nice to have mobile support working when that further deployment goes out so we don't have a bifurcated experience for our users -- is there any current work on this?
Note that for WMF production I think we'd also need to change our thumbor configuration to make sure the files get passed through the ffmpeg handler:
Quick note about glTF (T187844) regarding level of detail meshes and textures:
- textures in glTF are compressed (PNG or JPEG) and can be embedded in the file (both text and binary versions, though binary is much more efficient)
- in binary format, meshes and textures appear after the JSON-level scene description, so _in theory_, if the file lays out its resources in the proper order, you can stream in multiple levels of detail by loading resources only when needed (see the chunk-layout sketch after this list)
- that's not supported by three.js's current glTF loader, though
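To illustrate the layout point, here's a rough sketch of walking the binary (.glb) container's chunk structure -- a 12-byte header, then length-prefixed chunks, with the JSON scene description first and binary data after. This is just my reading of the glTF 2.0 spec, not tested against real files:

```
// Rough sketch of walking a .glb container's chunks, per my reading of
// the glTF 2.0 binary layout: 12-byte header (magic "glTF", version,
// total length), then a sequence of chunks, each prefixed with its
// length and type. The JSON chunk comes first; binary data follows.
function listGlbChunks(buf: ArrayBuffer): { type: string; byteLength: number }[] {
  const view = new DataView(buf);
  if (view.getUint32(0, true) !== 0x46546c67) {  // "glTF"
    throw new Error('Not a binary glTF file');
  }
  const chunks: { type: string; byteLength: number }[] = [];
  let offset = 12;
  while (offset + 8 <= buf.byteLength) {
    const byteLength = view.getUint32(offset, true);
    const typeCode = view.getUint32(offset + 4, true);
    // 0x4e4f534a = "JSON", 0x004e4942 = "BIN\0"
    const type = typeCode === 0x4e4f534a ? 'JSON' : typeCode === 0x004e4942 ? 'BIN' : 'unknown';
    chunks.push({ type, byteLength });
    offset += 8 + byteLength;
  }
  return chunks;
}
```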
See also T60478
One more thing -- in the binary format, the raw binary chunk comes after the JSON chunk and has no footer/trailer at the end, so it'd be possible to create a binary glTF file that also parses as a .zip/.jar, since those formats are identified by a trailer rather than a header. Our existing upload-time checks should catch this case.
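For reference, the kind of check involved is (roughly) scanning the tail of the file for the ZIP end-of-central-directory signature; this is a simplified sketch of the idea, not the actual MediaWiki implementation:

```
// Simplified sketch (not the actual MediaWiki check): a ZIP/JAR is
// identified by its end-of-central-directory record near the end of
// the file, so scan the last 64 KiB + 22 bytes for its signature
// "PK\x05\x06" (0x06054b50). A hit on a supposed .glb is suspicious.
function looksLikeZipTrailer(bytes: Uint8Array): boolean {
  const eocdSig = [0x50, 0x4b, 0x05, 0x06];  // "PK\x05\x06"
  const start = Math.max(0, bytes.length - (0xffff + 22));
  for (let i = bytes.length - 22; i >= start; i--) {
    if (bytes[i] === eocdSig[0] && bytes[i + 1] === eocdSig[1] &&
        bytes[i + 2] === eocdSig[2] && bytes[i + 3] === eocdSig[3]) {
      return true;
    }
  }
  return false;
}
```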
Peeking at the spec, the problem is going to be with external files for textures and binary data -- JSON glTF files can either contain their own data as base-64 data: URIs, or reference external files, so a glTF file *can* be standalone but doesn't *have* to be.
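A rough sketch of how one could detect the external-reference case from the JSON (assuming the glTF 2.0 schema's buffers[].uri and images[].uri fields; base64-embedded resources use data: URIs instead):

```
// Rough sketch: a glTF JSON document is standalone only if every buffer
// and image either has no uri or uses an embedded data: URI. Any other
// uri is a reference to an external file we wouldn't have at upload time.
interface GltfDoc {
  buffers?: { uri?: string }[];
  images?: { uri?: string; bufferView?: number }[];
}

function externalReferences(gltf: GltfDoc): string[] {
  const refs = [...(gltf.buffers ?? []), ...(gltf.images ?? [])];
  return refs
    .map((r) => r.uri)
    .filter((uri): uri is string => !!uri && !uri.startsWith('data:'));
}
```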
Facebook seems to be using three.js for viewing -- https://threejs.org/docs/#examples/loaders/GLTFLoader
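For reference, minimal usage looks something like this (a sketch based on the GLTFLoader docs linked above; the import paths, file name, and lighting/camera setup are placeholder assumptions):

```
// Minimal three.js glTF viewer sketch; paths and setup are assumptions,
// see https://threejs.org/docs/#examples/loaders/GLTFLoader for details.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 1, 3);
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

new GLTFLoader().load(
  'model.glb',                                  // hypothetical file name
  (gltf) => scene.add(gltf.scene),              // add the loaded scene graph
  undefined,                                    // no progress handler
  (err) => console.error('glTF load failed', err)
);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```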
Note glTF is supported by the BabylonJS engine - https://doc.babylonjs.com/how_to/gltf - which is under Apache 2 license. I don't know how heavyweight it is for a minimal viewer (it's a fairly full-featured 3d engine which can be used for games and such as well as simple model viewing).
Note there's some folks interested in IIIF/Wikimedia discussion from the IIIF end; some notes started recently on this doc: https://docs.google.com/document/d/1lqtwd1rwUIck6nmetxtmQkzuTpgnwOSPREhH9YEjmHI/edit
I would second this. IE 10 is very rare since only Windows 8 still has it as a maximum version, and Windows 8 is aggressively EOL (Windows Update offers to update you to Windows 8.1, which has IE 11).
Mon, Feb 19
Sat, Feb 17
fix merged -> resolved
Fri, Feb 16
I didn't have any MPEG-TS files handy but I did a quick test creating one with ffmpeg -- ffmpeg will take it as input and convert it just fine, but getid3's handling is incomplete and it errors out.
Thu, Feb 15
Upstream issues reported:
- https://github.com/JamesHeinrich/getID3/issues/148 - aspect ratio on MPEG-2
- https://github.com/JamesHeinrich/getID3/issues/149 - missing audio track info on some files
So far so good with the spike patch, except that some files I've tested don't detect the audio track, possibly due to incomplete support in getid3. It also returns the wrong pixel aspect ratio for MPEG-2 video, which I was able to work around on the TMH end for now.
Now detects interlaced input and applies yadif deinterlacing filter on transcodes.
I fixed the aspect ratio problem I originally saw, but need to figure out how to detect interlacing and then apply deinterlacing on the transcodes...
So I think this is gonna be MPEG-2 transport stream or something? I'm a little rusty. Lemme do a quick spike with samples from Internet Archive and see what they look like.
Mon, Feb 12
I'm not using them for anything; should be clear to wipe them as long as the live servers serving the video scaler queues are not affected. :D
Wed, Feb 7
I'm cleaning this one up; since videojs really wants WebVTT, getting this set up more cleanly is a requirement for the switch. Also needed for cross-site InstantCommons usage.
I get a full 150 megabits download (my bandwidth cap) on that file from ulsfo, and about 100 megabits from my Linode server (tested with scp instead of wget, so might vary).
In the middle of the week it seems less congested than on the weekend, still on the same route. Seeing up to 32 megabits download, which is more reasonable but still less than I should be able to get (150 is my theoretical local download cap, and I can reach or surpass it from my linode server).
Sun, Feb 4
I'm encountering this problem again; the routes seem to have changed but symptoms are similar -- I see about 1-2 megabit/s downloads from eqiad (either media-streaming.wmflabs.org or dumps.wikimedia.org) from my Comcast IP in Portland.
Thu, Feb 1
Is this territory covered by thumbor?
@Jdlrobson good point -- I've changed the wording.
Tue, Jan 30
Mon, Jan 29
This got done a while back -- now we use the JS IMEs from the UniversalLanguageSelector extension.
(this was a while ago)
(this was a while ago)
Resolved in ogv.js 1.5.6: https://gerrit.wikimedia.org/r/#/c/406606/
de-assigning, not active on this
This instance got rebuilt a while back and uses a lot less data now.
Bump -- am doing cleanup on OGVKit this month, with an eye towards prepping it for CocoaPods deployment and patching into the iOS app. Note we no longer need Ogg support as badly since we're now shipping patent-free MP3 audio, but will still need the WebM support for videos.
Dropping this old experimental idea task (though we're now more seriously talking about a PHP version of Parsoid for the future, which will be on separate tasks).
Fri, Jan 26
Ah, good catch -- that'll use the browser language rather than the site language, I suspect. Needs some investigating.
Thu, Jan 25
Yeah, one or the other will be required -- normally allow_url_fopen is on by default as far as I know.
Wed, Jan 24
Do you mean for all playable media on Commons? That tool currently only supports up to 10 files, but I plan to add category support soon. Beyond that, I can look into adding mass querying of playcounts like we do for Massviews (except for pageviews), but if you want all playable media across the board, it may be better to do some one-off analysis, unless we start regularly precomputing this data and serving it from an API. @Harej probably knows more.
I understand the global state issue, but I'm not sure I understand the type safety issue... i.e., would this aim toward allowing static analysis to confirm that hooks have the right signature? Or something more general?
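If it's the former, I'm picturing something like this -- a toy TypeScript illustration only, since MediaWiki's hooks are PHP, and the hook names below are made up:

```
// Toy illustration of statically checked hook signatures; hook names
// and signatures here are invented for the example, not real MediaWiki hooks.
interface HookSignatures {
  ArticleSaved: (title: string, userId: number) => void;
  ThumbnailRendered: (file: string, width: number) => void;
}

function registerHook<K extends keyof HookSignatures>(name: K, handler: HookSignatures[K]): void {
  // Real registration would store the handler somewhere; elided here.
  console.log(`registered handler for ${name}`, handler);
}

// OK: matches the declared signature for 'ArticleSaved'.
registerHook('ArticleSaved', (title, userId) => console.log(title, userId));

// Would be a compile-time error: handler doesn't match the declared signature.
// registerHook('ArticleSaved', (flag: boolean) => {});
```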
Jan 24 2018
We may want to consider either temporarily breaking the audio subtitles or doing a modal popup for the special case. Todo: decide :D
Jan 23 2018
Note it might be worth including the SRT to WebVTT converter server-side so people who have .srt files from another system can still import them without manually converting. Should probably check out the options on Amara etc.
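The core of the conversion is small -- roughly, prepend the WEBVTT header and switch the comma millisecond separator to a period in cue timestamps. A naive sketch (real .srt files in the wild need more care with encodings and formatting tags):

```
// Minimal, assumption-laden sketch of SRT -> WebVTT conversion:
// swap the comma decimal separator in timestamps for a period and
// prepend the required "WEBVTT" header. Treat as a starting point only.
function srtToVtt(srt: string): string {
  const body = srt
    .replace(/\r\n/g, '\n')
    // 00:00:01,500 --> 00:00:04,000  becomes  00:00:01.500 --> 00:00:04.000
    .replace(/(\d{2}:\d{2}:\d{2}),(\d{3})/g, '$1.$2');
  return 'WEBVTT\n\n' + body.trim() + '\n';
}
```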
Can you try enabling debug logging and/or inserting a new image to see if any error messages get logged? It may be failing to connect for some reason; the most likely cause is SSL certificate issues.
Updated to v6.6.0 in the above \o/
What's the exact failure that happens without php curl extension? As noted on the code review, InstantCommons *should* work fine with the fopen-based MWHttpRequest backend as well as with curl.
Noticed the vegetarian option for lunch Tuesday was 'available on request' instead of being put out with the meaty food. This may be an issue for people.
Feedback from me: I actually liked having several keynotes from other perspectives. But some ran over time and affected our ability to get things done in sessions; we need to timebox adequately.
Feedback from a participant: it's unclear what the next steps are -- some sessions haven't identified clear action items or didn't clearly assign owners to them. Worried about the idea that we're bringing up the same issues year after year.
Participants -- we need owners for the proposed action items!
Updated version of the feedback I sent internally yesterday, to make sure we don't lose it. Nothing too private I think. :)
Note I'm increasing my current work on TMH this quarter, to include:
- finishing the replacement of the old Kaltura player frontend with a modern, maintained VideoJS-based frontend
- following the above, the removal of the MwEmbedPlayer side-extension and its confusing setup, and all the related bits
- ^ this alone will greatly simplify the code base
- further improvements to the admin interface (Special:TimedMediaHandler) and the job queue infrastructure
- cleanup of the configuration and extension setup (finish migration to extension.json)
Jan 21 2018
I think there's definitely some overlap with the outside-Wikimedia MediaWiki users, many of whom run some sort of knowledge base or internal or external documentation, or do folk research on cultural topics that aren't Wikimedia-centric in how they're treated.
Just want to make sure that issues get captured that may affect architecture decisions over in other working group topics -- knowing what's truly important, necessary, or difficult for non-Wikimedia users will help us make decisions. Is a pure LAMP stack on shared hosting actually an issue? Is there a middle ground with service deployment, or tools we can help build to deploy them? Do we need cleaner interfaces for customization, different types of page handling and UI, etc.?
I think we do need a top-down vision driven by WMF's top-level strategy, against which we can ask specific questions and make specific decisions. A few notes:
- Being a good FLOSS citizen means both sufficiently funding our own development, and not making it unnecessarily hard on other contributor-users. To me that implies we do have an interest in having a good system of layers that can expand from core to use additional services where needed. It also opens the question of to what degree we want to help drive external development work, and helping to either fund it or find orgs who will.
- RESTBase, multimedia, etc. are mostly details, and we need to find the higher-level issue. With RESTBase, it's a tool that other tools rely on, which to me feels like a core service, but it could stay layered too. We do, though, need to make sure it's known how to build on those layers.
- I think building a good API layer that UIs can be implemented on top of is a really good idea. The current form-submit CRUD behavior is awkward to work with, and special pages are impossible to generalize well to mobile etc. This is all stuff we can progressively enhance on the web; there's no need to leave old browsers completely behind. But we need to put in the work, and most importantly we need to make the decision to do it.
- I think we need a good installer for dev and tiny installs. Vagrant partially fills this role, but it's not easy to deploy and has some maintenance difficulties... and few or no resources assigned to keeping it going; it's mostly a labor of love.
- database access is like a monolithic kernel, which means any security hole can reach a lot of internal data. I think long term we should radically change how we store private data like user password hashes, suppressed pages, and IP addresses. This would potentially be far-reaching, but could be done in baby steps, starting with separating password hashes out to a service, etc.
Jan 20 2018
File has been fixed and now renders.