
Wikimedia Technical Conference 2019 Session: Release "strategies" for MediaWiki and other elements of Wikimedia platform, for safe and efficient deployment and hosting
Closed, ResolvedPublic

Description

Session

  • Track: Deploying and Hosting
  • Topic: Release "strategies" for MediaWiki and other elements of Wikimedia platform, for safe and efficient deployment and hosting

Description

The session is dedicated to discussing containers as a method of releasing changes for MediaWiki and other Wikimedia software that would provide simpler ways for non-Wikimedia installations to install, configure and update their sites.

WMDE has been providing Wikibase and some related software systems (e.g. the SPARQL query service) in the form of Docker containers for a while now (2 years), which non-Wikimedia (a.k.a. third-party) users install and update on their servers. WMDE surely isn't the only one who has been using containerization.

In the session we'd like to look into compiling a (preliminary) list of requirements for such a solution to be useful and powerful, from both consumers of such a release process and the developer crew.

The images will be created using the build-pipeline, and hosted on the WMF image registry. The images will not be used in WMF production in any way.

Questions to answer and discuss

Use cases: What are the most basic use cases and requirements for operating MediaWiki and Wikimedia services that should be considered?
Significance:

Configuration & Magic: What should configuration look like for mediawiki containers? And how should the installer and updater play into this?
Significance:
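One commonly discussed answer to the configuration question, sketched here for concreteness: take bootstrap values from environment variables, but let an operator-mounted LocalSettings.php take precedence. This is a hypothetical entrypoint fragment; the variable names (MW_SERVER, DB_SERVER, etc.) are illustrative assumptions, not the conventions of any existing MediaWiki image.

```shell
# Sketch: entrypoint fragment writing a minimal LocalSettings.php from
# environment variables unless the operator mounted their own file.
# All variable names here are illustrative assumptions.

generate_local_settings() {
  target="$1"
  # An operator-mounted file always wins; no magic overwriting.
  if [ -f "$target" ]; then
    echo "using mounted $target"
    return 0
  fi
  cat > "$target" <<EOF
<?php
\$wgServer = "${MW_SERVER:-http://localhost:8080}";
\$wgDBserver = "${DB_SERVER:-mysql}";
\$wgDBname = "${DB_NAME:-my_wiki}";
\$wgDBuser = "${DB_USER:-wikiuser}";
\$wgDBpassword = "${DB_PASS:-}";
EOF
}
```

With a pre-seeded LocalSettings.php like this, the web installer would not run at all, and only the updater (update.php) would need running on upgrades.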

Extensions & Skins: How should extensions (and skins) be included in MediaWiki containers? What 'flavours' should be supported?
Significance:

Other services: What services, other than just MediaWiki, need or should be deployable using containers from the Wikimedia world for other users?
Significance:

Related Issues

Pre-reading for all Participants


Notes document(s)

https://etherpad.wikimedia.org/p/WMTC19-T234644

Notes and Facilitation guidance

https://www.mediawiki.org/wiki/Wikimedia_Technical_Conference/2019/NotesandFacilitation


Session Leader(s)

Session Scribes

Session Facilitator

  • Aubrey

Session Style / Format

  • Directed unconference

Session Leaders please:

  • Add more details to this task description.
  • Coordinate any pre-event discussions (here on Phab, IRC, email, hangout, etc).
  • Outline the plan for discussing this topic at the event.
  • Optionally, include what this session will not try to solve.
  • Update this task with summaries of any pre-event discussions.
  • Include ways for people not attending to be involved in discussions before the event and afterwards.

Post-event summary:

  • ...

Post-event action items:

  • @Addshore to get the lists of raw thoughts / question answers prioritized by participants (and others?)
  • @Addshore / WMDE to work toward moving current wikibase images to deployment-pipeline
  • @Addshore to create a mw.org page covering the learnings and priorities of the session

Event Timeline

So we would potentially discuss things like red/blue, checksumming, a better maintenance mode for upgrades, database changes rollback, the web installer and a web configurator ?

Seems awfully broad to me (though I'd love to discuss all of them). I'd rather have us zoom in on 1 or 2 of these and really get going. I mean none of it seems super rocket science to me (pun intended @Darenwelsh) and I think we also concluded the same at last year's event when we looked at the solutions of several 3rd parties in this area. I think this is one of the sessions where we should really start picking what we want to do (preferably before the session), and how we will deliver it (during the session).

etc that is "same" as the one operating in production

I think that in particular is already coming up in numerous other sessions in this theme. Better to avoid that.

Thanks @TheDJ. The session in the current shape is indeed pretty broad, I'll try to focus the scope. I dare to call out to all folks who are interested in the session proposal to state here what specific topics they had in mind when they read it. That'd help me with narrowing down the scope here in a way that it is still meaningful for participants.

If the attempt to scope down the session and find the leader fails, I'd rather skip it completely.

I think lots of discussion has already happened in previous years around points 2 and 3.
IMO all that actually needs to happen there is some prioritization and work actually being done focusing in those areas.

As for point 1, I think this strongly relates to T234641

Deploying changes/releasing for WMF infrastructure is indeed pretty well covered already in T234641. Local environment setup is also a bit of a different can of worms, which I don't think should be mixed in here, to allow for good discussion and outcomes.
Here's how I would scope down the session:

The session is dedicated to looking into alternative methods of releasing changes to MediaWiki and other Wikimedia software that would provide simpler ways for non-Wikimedia installations to install, configure and update their sites.
For a while now WMDE has been providing Wikibase and some related software systems (e.g. the SPARQL query service) in the form of Docker containers, which non-Wikimedia (a.k.a. third-party) users install and update on their servers. WMDE surely isn't the only one who has been using containerization.
In the session we'd like to look into whether there is an interest and, possibly, a need for wider adoption of a similar model in the broader Wikimedia world. We'll also be interested in compiling a (preliminary) list of requirements for such a solution to be useful and powerful, from both consumers of such a release process and the developer crew.

Admittedly, this is surely skewed by the WMDE perspective. I'd be very curious whether such an angle on the release process topic is also relevant for other folks.
@Tgr @Samwilson @bd808 @Addshore @TheDJ @kostajh I'm pinging you directly as you've shown interest in this topic, either by commenting or by adding tokens.
Anyone feeling like helping with co-leading this from a non-WMDE angle would be greatly welcome!

I believe such a session would be, to a degree at least, complementary to/covering some aspects of what T234651 might be intending to deal with. Any thoughts on this @kaldari?

Two very different (although not necessarily exclusive) potential topics are a better installer / upgrader (extension store, web upgrades, admin panel etc) which is something @CCicalese_WMF has made plans about in the past, and containerization (one of the recommendations from T206059: Wikimedia Technical Conference 2018 Session - Choosing installation methods and environments for 3rd party users was that if the WMF commits to support containerized or packaged MW that is easy to install and maintain, on a virtual server, we can drop shared hosting support).

My main question would be, what's the chance that resources will actually be committed to whatever the session outcome is? WMF leadership has so far fairly consistently ignored everything that has "3rd party" in it, and without any chance for resourcing, this session is probably not a productive way of spending time.

@Tgr: good point again. For me containerization is better suited and more appealing because it fits better with my interpretation of the "Developer Productivity" theme, whereas a better installer/upgrader is crossing into Product territory, especially if that'd mean creating features that do not yet exist in the product.
Also, I'd indeed see this as a continuation of T206059. I believe it would contribute to developer productivity if certain limitations ("shared hosting support", for example) could be removed/loosened.

My main question would be, what's the chance that resources will actually be committed to whatever the session outcome is? WMF leadership has so far fairly consistently ignored everything that has "3rd party" in it, and without any chance for resourcing, this session is probably not a productive way of spending time.

I cannot speak for the WMF of course, but WMDE is committed pretty clearly to treating the non-Wikimedia (third-party) "sector" as a target group for whom we develop Wikibase and related software, see e.g. "Wikidata and Wikibase vision" and the Strategy for Wikibase Ecosystem.

That all said, I do agree that the energy and effort of participants should not be invested in topics that are known for not leading to anything happening/changing.

WMDE-leszek renamed this task from Wikimedia Technical Conference 2019 Session: Release "strategies" for MediaWiki and other elements of Wikimedia platform, for safe and efficient deployment and hosting to Using containerization as a release strategies for MediaWiki and other "Wikimedia software" for non-Wikimedia ("third-party").Oct 17 2019, 11:45 AM
WMDE-leszek updated the task description.

Having discussed this briefly with @Addshore we've decided to scope the session down. I've updated the description etc. @Addshore also volunteered to (co-)lead the session (applause)!
@Krinkle: would you be interested in co-leading it with Adam?

WMDE-leszek renamed this task from Using containerization as a release strategies for MediaWiki and other "Wikimedia software" for non-Wikimedia ("third-party") to Wikimedia Technical Conference 2019 Session: Using containerization as a release strategies for MediaWiki and other "Wikimedia software" for non-Wikimedia ("third-party").Oct 17 2019, 12:08 PM

In the session we'd like to look into whether there is an interest and, possibly, a need for wider adoption of a similar model in the broader Wikimedia world. We'll also be interested in compiling a (preliminary) list of requirements for such a solution to be useful and powerful, from both consumers of such a release process and the developer crew.

So the new scope is "Should there be MediaWiki Docker images?"? Since there is already a volunteer based project (MediaWiki-Docker) around this, and clear signaling from the Foundation that Docker images are "the future" for production deployments (T228676: Self-service Deployment Pipeline), I think this question is answered.

The second focus mentioned in the description feels more potentially useful for a Technical Conference session. Something like "Document user stories and system requirements for operating a MediaWiki wiki (or wikifarm?) using Docker containers as the basic means of deployment." sounds like an activity that a room of 10-20 people could make real progress on in 1 hour. There is still an open question of who (volunteer or Foundation/affiliate staff) would then take the output of such a session as input into an actual project for delivering technology and processes to fulfill the stories and requirements.

whereas better installer/upgrader is crossing into the Product territory

I guess that's one area where "product" as a concept and "Product" as a department do not align. Features like that are owned by the Core Platform Team, who do indeed have product managers.

Anyway I think containerization is a good thing to focus on. If it works out, the traditional installer and upgrader will mostly be obsoleted anyway. (The extension store not so much, and how to integrate that, or the extension system in general, with containers is a very relevant question.)

So the new scope is "Should there be MediaWiki Docker images?"? Since there is already a volunteer based project (MediaWiki-Docker) around this, and clear signaling from the Foundation that Docker images are "the future" for production deployments (T228676: Self-service Deployment Pipeline), I think this question is answered.

I guess the current MediaWiki Docker images on GitHub in the mediawiki-docker repo cover the MediaWiki core case rather well.
There are discussions to be had around how extensions could and/or should play a role in those images.
For wikibase we currently have a plain wikibase image, and then a bundle image including some relevant extensions.
There should also be some discussions around how these images should actually work etc.
These images also do not cover the other "Wikimedia software" that is becoming more important when running a MediaWiki. Of course once everything is put through the build pipeline we will end up with Docker images of some sort, but will they actually be useful for third parties and other users? That planning has to happen, or we may end up maintaining two sets of Docker images for each service.

The second focus mentioned in the description feels more potentially useful for a Technical Conference session. Something like "Document user stories and system requirements for operating a MediaWiki wiki (or wikifarm?) using Docker containers as the basic means of deployment." sounds like an activity that a room of 10-20 people could make real progress on in 1 hour. There is still an open question of who (volunteer or Foundation/affiliate staff) would then take the output of such a session as input into an actual project for delivering technology and processes to fulfill the stories and requirements.

From the experience of having users deploy Wikibase and surrounding services using Docker images over the past years, there is definitely a need for documentation of best practices for upgrades etc.
System requirements also tie into my previous comment about not wanting to maintain 2 docker images for each service.

I'm more than happy to turn the outputs of this and other sessions into a plan, but of course I have no resources to make that plan happen; that needs buy-in from elsewhere.

debt triaged this task as Medium priority.Oct 22 2019, 6:58 PM

(Programming note)

This session was accepted and will be scheduled.

Notes to the session leader

  • Please continue to scope this session and post the session's goals and main questions into the task description.
    • If your topic is too big for one session, work with your Program Committee contact to break it down even further.
    • Session descriptions need to be completely finalized by November 1, 2019.
  • Please build your session collaboratively!
    • You should consider breakout groups with report-backs, using posters / post-its to visualize thoughts and themes, or any other collaborative meeting method you like.
    • If you need to have any large group discussions they must be planned out, specific, and focused.
    • A brief summary of your session format will need to go in the associated Phabricator task.
    • Some ideas from the old WMF Team Practices Group.
  • If you have any pre-session suggested reading or any specific ideas that you would like your attendees to think about in advance of your session, please state that explicitly in your session’s task.
    • Please put this at the top of your Phabricator task under the label “Pre-reading for all Participants.”

Notes to those interested in attending this session

(or those wanting to engage before the event because they are not attending)

  • If the session leader is asking for feedback, please engage!
  • Please do any pre-session reading that the leader would like you to do.
debt renamed this task from Wikimedia Technical Conference 2019 Session: Using containerization as a release strategies for MediaWiki and other "Wikimedia software" for non-Wikimedia ("third-party") to Wikimedia Technical Conference 2019 Session: Release "strategies" for MediaWiki and other elements of Wikimedia platform, for safe and efficient deployment and hosting.Oct 25 2019, 9:26 PM
debt updated the task description.

If it works out, the traditional installer and upgrader will mostly be obsoleted anyway.

I guess that depends on exactly how the containerization issue is tackled. There is no reason you can't load up a container and go through the regular web flow of installing and use the standard upgrade process too.

In my experience the Wikibase Docker images have been useful to quickly set up demo sites. Unfortunately, per my understanding they are not production-ready images. Some of the demo sites have turned into production-like sites with possibly unoptimized, insecure settings. I also have no idea about the best practices for how to run Docker stuff in production. It seems it is assumed people know how to use Docker (in production), but I'd say it's still a pretty unique skill. Not to mention old distributions which may not have Docker available. And for small sites, Docker is not yet easy enough to make a case for adding another complex moving part to the deployment process.

The Wikibase Docker image is not enough to set up a production site. All the extensions and services should be there, to be run via the same system, and not mix different kinds of setups.

I also echo the need for better documentation on how to do regular tasks like upgrading, backups, migration to another server, etc.

In my experience the Wikibase docker images have been useful to quickly set up demo sites. Unfortunately, per my understanding they are not production-ready images.

When these images were initially created, they were a side project of mine.
In the years since then they have been maintained and I would say all are technically production ready, except for T237248 which needs fixing.
Of course saying they are production ready is pointless if people don't understand how to use them etc as you touch on in the rest of your comment.

We also have a quick start guide using docker-compose which itself is definitely not "production ready" and should not be used in production without modification.
https://github.com/wmde/wikibase-docker/blob/master/README-compose.md
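As context for readers who haven't opened the link, a quick-start docker-compose file of this kind typically pairs a Wikibase container with a database and puts state into named volumes. The sketch below is purely illustrative; the service names, image tags, and environment variables are assumptions, not the actual contents of the linked README:

```yaml
# Illustrative sketch only -- not the actual wmde/wikibase-docker compose file.
version: "3"
services:
  wikibase:
    image: wikibase/wikibase:latest-bundle   # hypothetical tag
    ports:
      - "8181:80"
    environment:
      DB_SERVER: "mysql:3306"                # hypothetical variable names
      MW_SITE_NAME: "My Wikibase"
    volumes:
      - mediawiki-images:/var/www/html/images  # keep uploads outside the container
    depends_on:
      - mysql
  mysql:
    image: mariadb:10.3
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
    volumes:
      - mediawiki-db:/var/lib/mysql            # keep the database outside too
volumes:
  mediawiki-images:
  mediawiki-db:
```

A setup along these lines covers the "persistent uploads" and "persistent configuration" concerns raised in the session notes, but as stressed in this thread, it is a demo starting point and not production-ready without modification.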

I also have no idea about the best practices about how to run docker stuff in production. It seems it is assumed people know how to use docker (in production), but I'd say it's still pretty unique skill.

Yes, this is one thing that some users have struggled with over the past years. Although all of the documentation is technically out there on the web for people to consume, on the whole better documentation is needed, as well as links to guides etc.

The Wikibase Docker image is not enough to set up a production site. All the extensions and services should be there, to be run via the same system, and not mix different kinds of setups.

This is one of the questions I think we should discuss during the session.
For Wikibase, we provide a base image, just with Wikibase, and a bundle image, which includes some spam protection and other useful extensions.
Saying all of the extensions should be there is probably not a great idea. What are all of the extensions? How big is this image going to be?
One of the reasons we decided to bundle extensions in the images at all is because there is no easy way to install extensions from the UI etc.

I highly doubt that any site will have the exact requirements of the images that we currently build, although many test sites do.
But, the image provides a base from which you can build, add custom extensions, code, settings, etc, as well as a self documenting system for setting up wikibase and the surrounding services.
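To make the "base from which you can build" idea concrete, a downstream Dockerfile might look roughly like the following. The base image tag, paths, and the LocalSettings.d snippet mechanism are all assumptions for illustration, not the actual wikibase-docker layout:

```dockerfile
# Illustrative sketch: building a custom image on top of a base Wikibase image.
# The base tag, paths, and LocalSettings.d mechanism are assumptions.
FROM wikibase/wikibase:latest

# Add a locally checked-out extension on top of the base image
COPY extensions/MyExtension /var/www/html/extensions/MyExtension

# Enable it via a settings snippet; this assumes the image loads extra
# snippets from a LocalSettings.d directory at startup (an assumption)
COPY settings/MyExtension.php /var/www/html/LocalSettings.d/MyExtension.php
```

The point of this pattern is that the upstream image stays generic while each site's customizations live in a small, rebuildable layer on top.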

I also echo for need for better documentation how to regular tasks like upgrading, backups, migration to another server, etc.

+1, this is a problem that we have not solved yet, and also needs to be discussed.

I have written a few blog posts on a few topics, but nothing that is centralized and supported by either WMDE or WMF yet:

I'd like to share a few thoughts from the perspective of running a Wikibase since 2015 and having switched to the Docker distribution in 2017, including data migration:

The Docker distribution more or less has removed many technical barriers towards getting a Wikibase up and running. It seems like focusing on maintenance and customization would make the most sense in order to increase the number of continuously available Wikibase instances.

To me, these issues seem to have the most potential impact:

  1. Transparent use of the Query Service: users shouldn't be required to think about the QS and the Wikibase as separate entities, and they should be able to just handle backing up and restoring a single set of data—very likely the MySQL database. Interactions between Wikibase and the QS should happen in the background.
  2. One of the main reasons why folks would run their own Wikibase is the ability to customize it according to their needs. The way LocalSettings.php is handled in the Docker distro makes this a little bit difficult: either all of your settings are managed automatically, or you have to do everything yourself. Maybe having the ability to append to the automatically managed configuration—for instance via a LocalSettings.php.d directory—could ensure that any prefab processes working on the database would always work, since the configuration would always be the same on all deployments. Any extensions could be snippets added to the defaults.
  3. Documentation is great, but even better would be configuration options and scripts that do what users need: set up namespaces, back up and restore data, install extensions, etc. A script like wikibase-manager that did things like wikibase-manager data reset, wikibase-manager data export, etc., translating input into some clever docker-compose exec ... commands would be extremely useful.
  4. In linked data, URIs are super important, but the ambiguity of how the different Docker images talk to each other makes it hard to understand from what information concept URIs are actually generated, and how to move from test to production. There are many mistakes to be made when you have even the option of giving the containers aliases, or having to set two unrelated variables to the same value in order to make everything work. The more this could be consolidated, the better.
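The wikibase-manager idea in point 3 can be sketched quite simply: a thin wrapper that maps friendly subcommands onto docker-compose exec invocations. Everything below is hypothetical; the service name ("wikibase") and the specific maintenance scripts chosen are assumptions, and DRY_RUN=1 just prints the command instead of executing it:

```shell
# Hypothetical sketch of a "wikibase-manager" wrapper. The compose service
# name and maintenance scripts are illustrative assumptions.

wikibase_manager() {
  cmd="docker-compose exec wikibase php"
  case "$1 $2" in
    "data export") action="$cmd maintenance/dumpBackup.php --full" ;;
    "data reset")  action="$cmd maintenance/rebuildall.php" ;;
    "site update") action="$cmd maintenance/update.php --quick" ;;
    *) echo "usage: wikibase-manager {data export|data reset|site update}" >&2
       return 1 ;;
  esac
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$action"    # dry run: show the translated command only
  else
    $action
  fi
}
```

For example, `wikibase_manager data export` would translate to running dumpBackup.php inside the wikibase service container.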

Another point that came up at the start of Monday:

Wikimedia has custom PHP versions, Apache versions, etc. Are these actually desirable for releases / 3rd-party users? How about the Docker library images?
How does this alter the desire to have one Docker image for everything, vs. having to maintain multiple versions?

wikimedia has custom PHP versions, Apache versions, etc. Are these actually desirable for releases / 3rd party users? How about the docker library images?

From my perspective: The attractiveness of a Docker distribution is that most technical challenges for getting a tool to work have been solved and are embodied in the container. If I would like to tweak setups different from what is recommended by the container authors I could set up my own system or build my own containers.

So if for instance the Wikibase Docker container were using a custom version of PHP and Apache I wouldn't mind, just as I wouldn't mind if it uses the Ubuntu defaults, as long as the container works as documented :)

Exported etherpad notes:

Wikimedia Technical Conference
Atlanta, GA USA
November 12 - 15, 2019

Session Name / Topic
Release "strategies" for MediaWiki and other elements of Wikimedia platform, for safe and efficient deployment and hosting
Session Leader: Adam; Facilitator: Aubrey; Scribe: Brennen
https://phabricator.wikimedia.org/T234644

Session Attendees
Adam, Lars, Piotr, Florian, Tyler (Scribe II), Cindy, Brennen (Scribe), Darren, Gergo, Sam

Notes:

  • Containerize It
  • Adam: Slowly been scoping this down - 4 questions we'll go through one at a time
  • Consolidation
    • WMDE wikibase docker images
      • mw + wikibase + other
      • blazegraph
      • static webpage ui
      • elasticsearch
      • quickstatements (tool)
    • mediawiki volunteer images (mw-docker) - on GitHub
    • images from other services and teams
  • Hopefully, the things that come out of this session will make the top list
  • Focus on
    • what these should look like
    • How config should work
    • What extensions should be in there
    • How?
    • The same for skins
    • Other services
    • Versions of all of these
  • Don't focus on
    • Who is going to maintain them
    • How they'll be built
    • Where they'll live
    • WMF Production
  • These aren't WMF production images, but they should be at the level where they can be used in prod by others.
  • Some sort of priority for all levels of configuration
  • Question 1: Use cases
    • What are the most basic use cases and requirements for operating MW and Wikimedia services that should be considered
    • [paper exercise - write down answers to these questions]
    • Raw unprioritized thoughts below:
      • mw core + most used extensions (quite broad)
      • performant job queue (not run on page load)
      • not including elastic search (not in the default image)
      • proper object + parser + any caching
      • persistent uploads (not saved in container)
      • persistent configuration
      • easy to set up databases
      • interfaces to the outside world (communicating with DBs)
      • underlying software choices (PHP versions, Apache vs nginx)
      • how do you upgrade that underlying software (PHP etc.)
      • How do you change your configuration
      • not having spam coming in with default configuration
      • how to import data for testing
      • backups / disaster recovery (including dumps, and restore process)
      • Sanitized data replication (similar to tool labs in wmf land)
      • Migration from different setup
      • Installations of mediawiki extensions (php extensions too), easy commands in docker
      • one container that rules them all? or multiple?
      • proxy for services, can proxy to production wmf restbase
      • development container (log in and work in container)
      • full access to server vs going down the other way (toward shared hosting)
      • wikifarms (multisite)
      • upgrading pieces of the environment (individual extensions)
      • statistics and pingback for usage
      • MWStake reference implementation Sunflower, a list of extensions that provide good coverage of non-Wikimedia use cases
      • rollbacks
      • debugging, default logging setup, enable debugging toolbar for certain people
      • Special:Version containing info for bug reports
      • tunneling to things for cool debugging
      • federation support out of the box (more than just commons)
      • Default configs of the things included are all sane, and the extension works with no extra config
      • container size
      • integration with single sign-on etc (identity management)
      • CI usecases
  • Not only about local development
  • Not in scope: proxying to wmf services
  • Question 2: Config & Magic
    • What should config look like for MW containers?
    • How should the installer & updater play into this?
    • Are you providing LocalSettings.php or is there some other magic mechanism?
    • P: You can't make everyone happy
    • F: mounting configuration as an environment value / or local settings set as an environment
    • P: Passing it all in an env file
    • F: wgServer database and the rest in the config file
    • S: storing things in the database would be nice, but not yet a thing
    • P: volume as a config directory
    • S: single volume with image directory, backup dumps directory, localsettings.php, is that the concept?
    • F: override the default settings
    • P: lots of separate volumes, one for each purpose
    • S: in container have a LocalSettings and optionally require a mounted local volume
    • F: Installer I can't see how it would work
    • L: What is this?
    • F: Does a lot of stuff
    • S: pre-seeding the localsettings.php will prevent running the installer; however, running updater is possible
    • P: the things that you answer in the installation process get set as environment variables
    • S: discourse restores from backup
  • Question 3: Extensions & Skins
    • How should extensions & skins be included in MW containers?
    • What flavors should we be supporting?
    • Adam: So many combinations of extensions & skins - what we do for the Wikibase images is MW + Wikibase - then we also have a baseline bundle image that includes things like spam protection, nuke, etc. 99% of people would be happy. Customization is another thing.
      • Base image with nothing in it, but what other packages make sense.
    • Cindy: Is it possible to have containers that coexist where you've got base mediawiki then assemble with it additional containers that have extensions
    • Adam: Not really. You could have the code but not have it enabled, but it takes up space.
    • Gergo: You can do config management... Having some kind of container builder would be a cool approach, but... Generally you define some bundles
      • Semantic MW
        • Cargo vs. ...
      • Wikibase people who want to use their own Wikibase
      • People who want to get a wide audience to use MW and want all the user-friendly things - Flow, VisualEditor, etc.
      • Tarball users - or a replacement for the tarball
    • Skins
      • Adam: WMF is gonna end up with a 2 gig or more docker images
      • Adam: How do we feel about skins?
      • Gergo: A lot of people just want what WMF goes with (currently Vector) - if you look at a company with custom thing ...?
      • Chameleon - European Space Agency - Bootstrap-based
      • Gergo: Making skins a thing you can plug in seems like it'd be popular
      • Cindy: There are maybe 6 commonly-used skins
      • Adam: This ends up being a matrix because you have to combine the skin choices with other variations on MW
      • Cindy: This is where it'd be nice to plug containers together
      • Gergo: This is where a container builder would be really useful
      • Adam: We could provide a well-documented, well-thought-out tool that allows you to build images - generate Dockerfile
        • That's a thing with the Wikibase side of things - have a tool for generating people's Dockerfiles - wikibase-docker-compose-generator or something like that (side note: it was called blubber first)
      • Adam: Docs need to be written to help people understand what this means
        • Been down a path of discovery with wikibase docker images
      • Cindy: If we could build a tool... Had a wiki
      • Gergo: SRE might be moving in the direction of building such a tool
      • Cindy: Darren & I were talking about NASA doing a lot of work starting from a bare-metal VM and working up - something something Ansible
        • Concentrate on MediaWiki configuration and above
  • Report: Extensions & Skins
    • There should be a base image that includes Wikipedia skin, no extensions
    • Would probably want to stick with set of extensions that are in the tarball
    • Talked about lots of other groups of extensions that might make sense - semantic flavor, editing-focused flavor with VE etc., came back to maybe that's too much to support
    • Came up with a tool for doing that last stage of customization - going from base image, providing a way to build Dockerfile, configuration, extension list
    • There's a barrier to entry for containers in general
    • Documentation of all of these things and writing out all the steps is very important
    • Question: MediaWiki store?
      • There is a security issue
      • There is a rollback issue: installing an extension that breaks MediaWiki leaves you with a broken MediaWiki
      • There are proofs of concept; it's desired in the community.
    • Adam: Didn't really touch on this, but what does WordPress do?
  • Report: Config & Magic
    • Solution for what config should look like for containers: environment variables for some stuff like wgServer and database vars, then also a mounted volume for LocalSettings - plus a very minimal LocalSettings inside the container that requires it
    • Run update and not the installer because you've got a LocalSettings
    • Talked about what magic there needs to be surrounding process
    • No one was in favor of magic
    • It's already a confusing world
    • Adam: On the magic point - we create Wikibase Docker images, varying levels of magic - we want to remove the magic
    • What is magic?
      • Running install.php
      • Oauth consumer
      • ElasticSearch things
      • Blazegraph
      • The more you do this, the worse it is.
  • Question 4: Other services
    • What services, other than just MediaWiki, need or should be deployable using containers from the Wikimedia world for other users?
      • xtools
      • restbase (page previews requires this)
      • DB for MW (other than MySQL should be an option)
      • proton
      • elasticsearch
      • kibana based logging
      • parsoid
      • quarry
      • thumbor
      • email sending
      • analytics (pageviews)
      • citoid
      • graphoid
      • *oid
      • monitoring of docker container (env monitoring tools)
      • pool counter
      • jobqueue, kafka?
      • caching (varnish? ATS?)
      • map rendering
  • Trimming and organization of external notes
    • [spam discussion]
    • backups / rollbacks / migration from different setup
    • Volumes vs. object store
    • Importing data for testing / database setup - touches other database questions
  • Adam: General thoughts about discussion?
  • Sam: Getting people using them is a good idea
  • Adam: From the wikibase side, people already do. We want to put everything a lot closer to what WMF have.
  • Gergo: Cloud VPS puppet role for setting up such a dockerized wiki on cloud VPS.
  • Unconference session may be added for this topic.
  • Tyler: Who's using Wikibase docker images?
  • Adam:
    • ???
    • Experimental use
    • wmde/wikibase-docker
    • Probably a lot more people than we knew about...
    • ElasticSearch
      • Evil magic

ACTION ITEMS

  • Adam to get the lists of raw thoughts / question answers prioritized by participants (and others?)
  • Adam / WMDE to work toward moving current wikibase images to deployment-pipeline
  • Adam to create a mw.org page covering the learnings and priorities of the session

Thanks for making this a good session at TechConf this year. Follow-up actions are recorded in a central planning spreadsheet (owned by me) and I'll begin farming them out to responsible parties in January 2020.