We are planning to switch the wikipedia.org (and other portal) code from being stored on meta to being stored in gerrit. There are no new scripts, so we don't see any security issues. However, we want Security to know what's happening, and to have a chance to speak up if they have any concerns.
|Resolved||debt||T112172 EPIC of epics: Wikipedia.org Portal UX tests to run|
|Resolved||debt||T112173 EPIC: [Portal A/B test 1]: Change the size of search text field and search button on Wikipedia.org|
|Resolved||csteipp||T117512 Give security a heads-up about plans and scripts to deploy wikipedia.org portal from gerrit|
Assigning to Chris, so he can triage and suggest any necessary next steps, such as scheduling a security review. Sorry about dropping this on you without notice, but until now nobody thought there were security concerns (and we still think there probably aren't).
We don't want to wait until the last minute for a security review, so it would be very helpful to at least get the triage done now. If a full review is needed, it will have to be scheduled (and could potentially wait until some form of consensus is reached), but if it isn't, it would be great to know right away, so we no longer have to schedule around this task as a potential blocker.
@ksmith, what is the timeline for this?
In general, my team should be involved when you're designing how this is going to work (when you have about 80% of the ideas together about what the process will look like, where you're going to store the data, and what other systems you need to interact with), and then again for a code review prior to deployment. The first part should really be 1-2 hours of meetings to talk through what needs to be designed into the system. The code review is a check to make sure that you did those things, and a second check to catch mistakes your developers have made in the implementation.
Where is the team in the process currently?
@csteipp: This is an odd case, because there isn't actually any new code (or at least that's what I have been told). Currently there are scripts that extract HTML/JS/CSS from a template page on meta, and push them to the servers. The proposed new process would use existing (puppet?) scripts to pull the same HTML/JS/CSS from gerrit instead. There is no data, and permissions are handled through the standard existing gerrit and puppet systems.
We hope to deploy this "soon" (perhaps this week or next), pending resolution of some community concerns about the approach. We weren't going to bring this to you at all, but I wanted to err on the side of caution (because of my security background), especially since it's a widely used page.
I'm quite familiar with the current process, but I don't know what you're planning for the future. @ksmith, can you add the developers who are working on this? I think at this point we need to schedule 45-60 minutes to talk through the strategy, and then we can decide what needs code review.
Had a meeting with @EBernhardson and @MaxSem, and their plan is to get rid of extract2.php and replace the redirect in apache with an alias to static HTML files (https://github.com/wikimedia/wikimedia-portals).
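For illustration only (the real config lives in puppet, and the paths and template name below are made up), the apache change amounts to something like this:

```apache
# Hypothetical "before": portal requests routed through the extraction script
# RewriteRule ^/?$ /w/extract2.php?template=Www.wikipedia.org_portal [L]

# Hypothetical "after": serve the pre-built static file from the gerrit checkout
Alias /index.html /srv/portals/wikipedia.org/index.html
DirectoryIndex index.html
```

Since nothing is executed server-side at request time, the attack surface is reduced to whatever is in the static files themselves.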
I think this is a good direction. No need for an explicit code review before deployment; I think everyone involved knows how to avoid DOM-based XSS issues.
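For the record (this is a generic illustration, not code from the portals repo, and the helper name is mine), the usual way to avoid DOM-based XSS in a static page's JS is to write user-controlled strings as text rather than HTML, or to entity-escape them first:

```javascript
// Hypothetical helper: escape a string so it is safe to embed in
// HTML text content or double-quoted attribute values.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')   // must run first, before other entities are added
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// In browser code, prefer assigning text over markup:
//   el.textContent = userInput;   // safe: treated as plain text
//   el.innerHTML   = userInput;   // unsafe: markup is parsed
```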
If the team decides to move to a templating framework, or otherwise starts generating content, I'll want to review that. But at this time, I'm happy to see this move forward as is.