
Epic saga: immersive hypermedia (Myst for Wikipedia)
Open, Low, Public

Description

Text, photos, and videos are great, but today media don't have the same kind of pervasive hyperlinking that makes disappearing down a Wikipedia clicking journey both fun and educational.

Imagine, if you will, that flat photos, panoramas, and videos were as capable of linking to each other and showing related data in various formats as our text articles are. Multiple 360 panoramas and close-up photos of a historic place could be linked together into an immersive experience; anything from a simple virtual tour to a fully annotated visual "article" able to take you to related concepts, places, times, etc.

In short, imagine playing Myst, but it's a useful, educational set of Wikipedia resources. ;)

Or think of HyperCard, if you prefer...

Related subtasks are numerous:

  • create an annotation/linking system for media that can take on the features of the existing annotation gadget done in client JS on Commons
  • extend it in various ways, including direct linking and a pluggable way to specify media-type-specific coordinates, covering:
    • flat photos with 2d shapes
    • panoramas/photo spheres with 2d shapes that can extend across the seam boundaries
    • videos with 2d shapes with in/out time points, that can also change shape/position over time
    • 360-degree video plus seam boundaries like photo spheres
    • 3D versions of all the above with depth coordinates
    • 3D object spaces in 3D models
    • movable object attachment in an interactive widget
    • ... Etc ...
  • enhance the display engines to support marking and selecting annotations/links
  • pluggable interface for editing annotations too
  • possibly a way to add all these annotations in specific collection views as well as directly on a source file
    • for example, allow the same source image to appear with different annotations in different "experiences", the way links on image maps can be added on specific articles separately from the annotations on Commons
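The pluggable coordinate idea in the list above could be sketched as a small data model. This is a minimal Python sketch under stated assumptions: all class and field names here are hypothetical (there is no such MediaWiki API), and only the photo-sphere case shows the seam-wrapping behavior mentioned above.

```python
from dataclasses import dataclass


class Shape:
    """Base class for media-type-specific coordinate schemes.

    Each media type (flat photo, photo sphere, video, 3D model, ...)
    would plug in its own subclass, per the pluggable design above.
    """


@dataclass
class FlatRect(Shape):
    """2D rectangle on a flat photo, in fractional image coordinates."""
    x: float
    y: float
    width: float
    height: float


@dataclass
class SphereRect(Shape):
    """Region on a panorama/photo sphere, allowed to wrap the 360° seam."""
    yaw: float           # left edge, degrees
    pitch: float         # bottom edge, degrees
    yaw_extent: float
    pitch_extent: float

    def contains(self, yaw: float, pitch: float) -> bool:
        # Wrap the yaw difference into [0, 360) so a region that crosses
        # the seam (e.g. yaw 350° with a 20° extent) still matches points
        # on the far side of the boundary.
        d_yaw = (yaw - self.yaw) % 360.0
        return (d_yaw <= self.yaw_extent
                and 0.0 <= pitch - self.pitch <= self.pitch_extent)


@dataclass
class TimedRect(Shape):
    """2D rectangle on a video, active only between in/out time points."""
    rect: FlatRect
    t_in: float   # seconds
    t_out: float


@dataclass
class Annotation:
    """A hyperlink anchored to a region of a media file."""
    target: str   # wiki page or media file the region links to
    label: str
    shape: Shape
```

A display engine would then only need to dispatch on the `Shape` subclass to know how to hit-test and render an annotation for its media type.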


Event Timeline

That sounds very much like Google street view (you may get the same problems they had too, i.e. mandatory blurry faces).

Also, I believe Wiki Love Monuments folks would have a field day doing this kind of project.

> That sounds very much like Google street view (you may get the same problems they had too, i.e. mandatory blurry faces).

I suspect the auto-photography of places and people without consent would be out of our budget once we factor in all the cars. ;) But yes -- there's definitely some overlap in a viewing model with linked 360 panoramas.

> Also, I believe Wiki Love Monuments folks would have a field day doing this kind of project.

Yes yes yes. :) There's an aerial photography contest going on too; these sorts of things would go lovely with it.

Yes, yes and yes. One of the challenges is that Commons seems to be the default crossroads for these types of projects, and they are (understandably) very conservative, not just with copyright but about adopting anything that will break functionality for all the projects that rely on it. For example, even getting 3D object upload working has been a long slog.

This convinces me more each day we need an experimental "Tools Server" for just playing with media. Or we need a separate project/foundation just for the multimedia. But WikimediaMedia just sounds bad. :)

I just discovered an experiment showing how 3D models can be combined with Wikipedia articles: http://en.volupedia.org/wiki/Stegosaurus

Awesome to see this finally in motion

Jdforrester-WMF moved this task from Untriaged to Backlog on the Multimedia board.

Seven years later, I'm tempted to surface this for the Wikimedia Hackathon 2023, or at least discuss the best possibilities using open-source tools.

One of the best non-open tools is Kuula.co, which provides a full "Myst"-type authoring environment similar to HyperCard of the 1980s. It uses 360 photos as its basis, and creates virtual tours linking them together.

The best open-source analog to that right now seems to be Marzipano, but it is only a set of node.js modules that provide the tooling, not the authoring environment.
https://github.com/google/marzipano

I could imagine providing a way to specify virtual tour info on a wiki page, then pointing to a Toolforge implementation of Marzipano to parse it and provide the immersive experience. More thoughts and discussion welcome.

For the 2024 Hackathon in Tallinn, I prototyped an example virtual tour using the underlying Pannellum software, which takes a JSON definition file for nodes, images, and hotspots. An example can be seen here:

https://panoviewer.toolforge.org/tourbeta/

The relevant definition file, converted to TOML, can be found at:
https://commons.wikimedia.org/wiki/User:Fuzheado/Panellum_Tour