Wikimedia Security Team
https://phabricator.wikimedia.org/phame/blog/feed/13/
A place for the [[ https://www.mediawiki.org/wiki/Wikimedia_Security_Team | Wikimedia Security Team ]].

Creating a pentesting process
https://phabricator.wikimedia.org/phame/post/view/293/
Mstyles (Maryum) | Published 2022-08-15 | Updated 2022-10-24

By @Mstyles and edited by @Cleo_Lemoisson

"Over the last quarters, the Application Security team has developed several services geared towards increasing the security of the code written at the Foundation. Most notably, we created an automated security pipeline and continued our security reviews as part of the readiness steps for production deployment. But, as this review process is more focused on new code that is about to be deployed, we needed a way to audit pieces of code that were already in production. This is where our pentesting program comes in!"

What is a pentest?

Penetration testing is a type of audit run on larger code bases by specialized external contractors. Combining internal reviews and external pentesting efforts allows for a thorough analysis of the code: internal reviewers have a deeper understanding of the context, while external auditors take a bigger-picture approach that uncovers problems that might otherwise be missed.

Pentests are usually run according to a black, white or gray box approach:

  • Black box penetration testing is done without any special permissions and attempts to gain access to systems the way an external attacker would.
  • White box penetration testing is done with access to account logins and sometimes source code.
  • Gray box penetration testing combines aspects of black and white box testing. The pentesters have access to privileged accounts and do source code reviews, but also try a black box approach to gaining access to the system.

The gray box approach is the one the Security Team usually selects for WMF pentesting cycles.

Why do we pentest? And who needs it?

You might have heard of the critical issue found in Log4j in late 2021 - this was a pretty big one! This is exactly the kind of thing pentesting is designed to catch. By hiring external auditors, we want to prevent such vulnerabilities from ever living in our code and becoming public. As no review method is foolproof, we feel that having both internal and external reviews strengthens our chances of producing the most secure code possible.

The Security Team is looking for software that runs in WMF production and would have a high impact on users if it were compromised. Past areas that have been tested include Mobile, Fundraising and Cloud VPS. We’ve also done assessments of third-party software used at the Foundation, such as Mailman 3, Apereo CAS and the trusted runners in GitLab. If you feel like you are working with software that meets those criteria, please reach out to us!

How is it typically run?

A typical pentesting process has several steps:
  • Scoping: this step is usually done prior to the start of the engagement. Some vendors have a scoping worksheet that has all of the documentation links and a short description of what’s being tested and any goals the testing engagement might have.
  • Kick-off meeting: a pentesting engagement starts with a kickoff meeting gathering the testers and the development team. During this meeting, the auditors will ask for clarifications about the source code, context and expected workflow of the application.
  • Audit: the pentesting team performs their tests. This step can last between two and three weeks depending on the scope of the audit.
  • Debrief meeting: the pentesting team issues a report containing a list of issues ranked by severity. This report is presented to the development team.
  • Mitigation strategy: this is where the development team assesses the uncovered vulnerabilities and decides on the best remediation strategy. At a minimum, any critical or high severity issues should be addressed as soon as possible. Lower priority vulnerabilities can either be fixed at a later date or accepted as a known risk and entered in the risk registry.

It is worth noting that the WMF's context and open-source philosophy differ from most vendors' view of risk. Some reported problems are therefore deliberate features of the way we work, for example around what information is made public and what is accessible on the public internet.

Different firms have different processes, but as a part of changing how we approach pentesting, we want to develop a standard approach regardless of what vendor is performing the assessment.

What does pentesting currently look like at the Foundation?

The program is still very much taking shape! Since 2018, we have performed 30 audits in areas ranging from Mobile to Fundraising. MediaWiki extensions already have a clear pipeline for application security reviews via the deployment checklist.

Past pentesting engagements have exposed issues ranging from critical vulnerabilities that were fixed immediately to best practices that certain projects were not following.

Some audits also confirm that our code is secure! Recently, an assessment performed on Wikimedia Cloud VPS (Virtual Private Server) ended with the testers unable to access other projects or the underlying hardware during their several weeks of testing. This means that poor choices made by individual contributors to cloud projects, such as out-of-date packages or improperly stored credentials, cannot impact other cloud projects or take down the underlying hardware.

Of course, doing pentesting at WMF is not without challenges. Communication has been one of them, since different teams use different communication formats. Some critical infrastructure, such as CentralAuth, has no official WMF team behind it and only a few community maintainers. This, combined with very little on-wiki documentation, makes it difficult for testers to understand the system. Moreover, managing remediation for projects that are not supported is challenging, because those Phabricator tickets simply add to the thousands of open or stalled ones.

Help us design the future of the pentesting program!

While successful, this pilot phase highlighted the need to develop a set of criteria to identify good “candidates” for pentesting engagements. As we want this process to be as collaborative as possible, we’d like to hold a meeting with people from various tech departments to discuss areas we might have overlooked in the past pentesting projects.

As we move forward with that, we want to create a similar pipeline and route to pentesting various areas of MediaWiki and other WMF projects.

For future pentesting assessments, we are looking for software we use that has never been reviewed, or code that has been in production for a long time but has not been reviewed recently (or ever) by the Security Team. As part of the new pentesting process, we'll start a list of previous engagements and when they were performed. There is a lot of code written by WMF employees and the technical community, and only so much pentesting budget, so we're focusing on code that is in production and whose compromise would impact many users.

Application Security Pipeline in GitLab: A Journey!
https://phabricator.wikimedia.org/phame/post/view/291/
sbassett (Scott Bassett) | Published 2022-07-20 | Updated 2023-02-08

By: @mmartorana and @sbassett

Some history

For about a decade now, the combination of Gerrit, Zuul and Jenkins has been the primary means of code review and continuous integration for most Wikimedia codebases. While these systems have been used successfully and are customized to support various workflows and developer needs, they have not helped facilitate the development of a robust application security pipeline within CI. While efforts have been made within the security space - with phan and the phan-taint-check plugin, libraryupgrader, and an occasional custom eslint rule - Wikimedia codebases have not taken full advantage of the current suite of open-source application security tooling that drives modern security automation. Given these deficits and the announcement of Wikimedia's migration to GitLab as a git front-end and CI/CD system, the Wikimedia Security-Team decided to explore what a modern application security pipeline within GitLab could look like.

Our development path and roadmap

When the GitLab migration was announced, the Wikimedia Security-Team saw great potential in developing a robust application security pipeline to further improve application security testing and to make a concerted effort to shift left (Wikipedia, Snyk, Accelerate). GitLab, with its modern CI/CD functionality, was a great candidate to help us explore the architecture and implementation of an application security pipeline for Wikimedia codebases, as it satisfied a number of desired outcomes, including user-friendliness, convenience and impact.

Over the past couple of quarters, members of the Wikimedia Security-Team have created a number of security includes which employ GitLab's intuitive CI/CD functionality, particularly its ability to include various YAML configuration files as components within different CI/CD stages. We initially focused this work on several common languages used within Wikimedia projects: PHP, JavaScript, Python and Golang. It should be noted, though, that the GitLab security includes project is open to all contributors and, given GitLab's flexibility and simplicity, will hopefully encourage both improvements to the existing include files and the creation of new include files supporting additional languages.
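
For readers unfamiliar with the mechanism, GitLab's include keyword lets a repository pull shared job definitions into its own CI configuration from a remote YAML file. Here is a minimal sketch; the project path, file name and tag below are illustrative placeholders rather than the actual locations of the Wikimedia security includes.

```yaml
# .gitlab-ci.yml -- minimal sketch of GitLab's remote include mechanism.
# The URL below is a placeholder; the real Wikimedia security includes live
# in the Security-Team's GitLab project and are pinned to a tagged release.
include:
  - remote: 'https://gitlab.example.org/security/ci-includes/-/raw/v1.0.0/php-security.yml'

# Jobs defined in the included file run alongside any jobs defined in this
# file, typically within the default "test" stage of the pipeline.
```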

A basic example

During the aforementioned development cycle, the Wikimedia Security-Team compiled some basic mediawiki.org documentation to help developers get started with configuring their GitLab repositories to run various security-related tests during CI. One specific example we explored was the function-schemata codebase, as used for the Abstract Wikipedia project. We migrated a test version of the repository over to GitLab and set up a simple, security-focused .gitlab-ci.yml. This would obviously not be a complete .gitlab-ci.yml file for most codebases, but let's focus on the security-relevant pieces for now. First, we see several environment variables defined under the variables yaml key. These configure various docker images, tool CLI options, etc., and are documented within the application security pipeline documentation. Then we see a list of included CI files, referenced via raw file URLs and pinned to a specific tagged release. These correspond to specific tools to run during the default test stage of a repository's CI pipeline: npm audit, npm outdated, semgrep (with certain JavaScript-specific rule sets) and OSV's scanner CLI will all be run. In addition to these included files, we are also including GitLab's built-in SAST functionality (currently blocked on T312961) which, while limited in certain ways, can provide additional security analysis. We can then see some sample pipeline output which displays the results of the tools that were run and indicates passing and failing tests.
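
To make that walk-through concrete, here is a rough sketch of what such a security-focused .gitlab-ci.yml could look like. The variable names, registry host, include URLs and release tag are illustrative assumptions rather than the exact values documented on mediawiki.org, so treat this as the general shape and consult the application security pipeline documentation for the real configuration.

```yaml
# Illustrative, security-focused .gitlab-ci.yml (placeholder values throughout).

# Environment variables configuring the docker images and tool CLI options
# used by the included security jobs; the names here are examples, not the
# documented ones.
variables:
  SECURITY_NODE_IMAGE: "docker-registry.example.org/node:16"
  SECURITY_SEMGREP_RULESETS: "p/javascript"
  SECURITY_OSV_SCANNER_OPTIONS: "--recursive ."

# Security include files, referenced via raw file URLs and pinned to a
# tagged release. Each file defines one or more jobs that run in the
# default "test" stage of the pipeline.
include:
  - remote: 'https://gitlab.example.org/repos/security/ci-includes/-/raw/v1.0.0/npm-audit.yml'
  - remote: 'https://gitlab.example.org/repos/security/ci-includes/-/raw/v1.0.0/npm-outdated.yml'
  - remote: 'https://gitlab.example.org/repos/security/ci-includes/-/raw/v1.0.0/semgrep.yml'
  - remote: 'https://gitlab.example.org/repos/security/ci-includes/-/raw/v1.0.0/osv-scanner.yml'
  # GitLab's built-in SAST jobs (currently blocked on T312961):
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
```

When a push or merge request triggers the default pipeline, the jobs pulled in by these includes run in the test stage, and their pass/fail status appears alongside the repository's own tests, which is the sample output described above.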

Some opinionated decisions and current caveats would include:

  1. Only being able to run the tools within the security include files under GitLab's test CI stage.
  2. Having the security include files run for every branch which triggers the default CI pipeline (we'd definitely like to support custom branch and tag configurations at some point).
  3. Only utilizing OSI- and free-culture-compliant tools and databases (likely perceived as a positive for many).
  4. Presenting all results publicly, as is the default configuration for repositories and pipelines within Wikimedia's installation of GitLab, as is currently the case within Gerrit and Jenkins, and as is a value of most FOSS projects.

It should be noted, regarding the last two items, that some discussion did occur within various Phabricator tasks (T304737, T301018), and the current state of the CI includes was determined to be the best path forward at this time.

The future we would like to embrace

The Wikimedia Security-Team is obviously very enthusiastic about our work thus far in developing an application security pipeline for Wikimedia codebases migrating to GitLab. In the coming development cycles, we plan to address bugs, evaluate and improve the current CI include offerings, and develop (and strongly encourage others to develop) new and useful CI includes. Finally, we welcome any and all constructive feedback on how best to improve upon this initial offering of security-focused CI includes.


Addressing bug from 2019: information about private, security-related Phab tickets
https://phabricator.wikimedia.org/phame/post/view/200/
Dsharpe (Dsharpe) | Published 2020-07-06 | Updated 2020-07-17

Today, we are writing to share the discovery and squashing of a bug that occurred earlier this year. This particular bug was also one of the rare instances in which we kept a Phabricator ticket private to address a security issue. To help address questions about when and why we make a security-related ticket private, we’re also sharing some insight into what happens when a private ticket about a security issue is closed.

Late last year, User:Suffusion of Yellow spotted a bug that could have allowed an association to be made between logged-in and non-logged-in edits made from the same IP address. Users with dynamic IP addresses could have been affected, even if they personally never made any non-logged-in edits.

Suffusion of Yellow created a Phabricator ticket about it, and immediately worked to get eyes on the issue. The bug was repaired with their help. We’re grateful for their sharp eyes and their diligent work to diagnose and fix the problem. As part of our normal procedure, the Security team investigated once the bug was resolved. They found no evidence of exploit. We are not able to reveal further technical details about the bug, and here is why:

When a Phabricator ticket discussing a security bug is closed, Legal and Security teams at the Wikimedia Foundation evaluate whether or not to make the ticket public. Our default is for all security tickets to become public after they are closed, so that members of the communities can see what issues have been identified and fixed. The majority of tickets end up public. But once in a while, we need to keep a ticket private.

We have a formal policy we use to determine whether a ticket can be publicly viewable, and it calls for consideration of the following factors:

  • Does the ticket contain non-public personal data? For example, in the case of an attempt to compromise an account, the ticket may include IP addresses normally associated with the account, to identify login attempts by an attacker.
  • Does the ticket contain technical information that could be exploited by an attacker? For example, in discussing a bug that was ultimately resolved, a ticket may include information about other potential bugs or vulnerabilities.
  • Does the ticket contain legally sensitive information? For example, a ticket may contain confidential legal advice from Foundation lawyers, or information that could harm the Foundation’s legal strategy.

In this case, we evaluated the ticket and decided that it could not be made public based on the criteria listed above.

Even when we can’t make a ticket public, we can sometimes announce that a bug has been identified and resolved in another venue, such as this blog. In this case, Suffusion of Yellow encouraged us to make the ticket public, and while pandemic-related staff changes have caused a delay, that request reminded us to follow through with this post. We appreciate their diligence. Keeping the projects secure is a true partnership between the communities of users and Foundation technical staff, and we are committed to keeping users informed as much as possible.

Respectfully,

David Sharpe
Senior Information Security Analyst
Wikimedia Foundation

Changes to Security Team Workflow
https://phabricator.wikimedia.org/phame/post/view/187/
JBennett (John Bennett) | Published 2020-02-03 | Updated 2020-02-20

In an effort to create a repeatable, streamlined process for consuming security services, the Security Team has been working on changes and improvements to our workflows. Much of this effort is an attempt to consolidate work intake for our team in order to more effectively communicate status, priority and scheduling. This is step 1, and we expect further changes as our tooling, capabilities and processes mature.

How to collaborate with the Security Team

The Security Team works in an iterative manner to build new security services and mature existing ones as we face new threats and identify new risks. For a list of the services currently deployed in this iteration, please review our services page [1].

The initial point of contact for the majority of our services is now a consistent Request For Services [2] (RFS) form [3].

The two workflow exceptions to RFS are the Privacy Engineering [4] service and Security Readiness Review [5] process which already had established methods that are working well.

If the RFS forms are confusing or don't lead you to the answers you need, try security-help@wikimedia.org to get assistance with finding the right service, process, or person.

security@wikimedia.org will continue to be our primary external reporting channel.

Coming changes in Phabricator

We will be disabling the workboard on the Privacy [6] project. This workboard is not actively or consistently cultivated and often confuses those who interact with it. Privacy is a legitimate tag to use in many cases, but the resourced privacy contingent within WMF will be using the Privacy Engineering [7] component.

We will be disabling the workboard for the Security [8] project. Like the Privacy project, this workboard is not actively or consistently cultivated and is confusing. Tasks which are actively resourced should have an associated group [9] tag such as Security Team [10].

The Security project will be broken up into subprojects with meaningful names that indicate how users relate to the security landscape. This is in service of Security no longer doing double duty as an ACL and a group project. It closes long-standing debt and mirrors work done in T90491 for SRE to improve transparency. An ACL*Security-Issues project will be created, and Security will still be available to link cross-cutting issues, but this also allows equal footing for membership for all Phabricator users.

Other Changes

A quick callout to the consistency [11] and Gerrit [12] sections of our team handbook. As a team we have agreed that all changesets we interact on need a linked task with the Security-Team tag.

security@ will soon be managed as a Google group collaborative inbox [13], as outlined in T243446. This will allow for an improved workflow and consistency in interactions with inquiries.

Thanks
John

[1] Security Services
https://www.mediawiki.org/wiki/Wikimedia_Security_Team/Services
[2] RFS docs
https://www.mediawiki.org/wiki/Security/SOP/Requests_For_Service
[3] RFS form
https://phabricator.wikimedia.org/maniphest/task/edit/form/72/
[4] Privacy Engineering form
https://form.asana.com/?hash=554c8a8dbf8e96b2612c15eba479287f9ecce3cbaa09e235243e691339ac8fa4&id=1143023741172306
[5] Readiness Review SOP
https://www.mediawiki.org/wiki/Security/SOP/Security_Readiness_Reviews
[6] Phab Privacy tag
https://phabricator.wikimedia.org/tag/privacy/
[7] Privacy Engineering Project
https://phabricator.wikimedia.org/project/view/4425/
[8] Security Tag
https://phabricator.wikimedia.org/tag/security/
[9] Phab Project types
https://www.mediawiki.org/wiki/Phabricator/Project_management#Types_of_Projects
[10] Security Team tag
https://phabricator.wikimedia.org/tag/security-team/
[11] Security Team Handbook
https://www.mediawiki.org/wiki/Wikimedia_Security_Team/Handbook#Consistency
[12] Secteam handbook-gerrit
https://www.mediawiki.org/wiki/Wikimedia_Security_Team/Handbook#Gerrit
[13] Google collab inbox
https://support.google.com/a/answer/167430?hl=en

14 January 2020 security incident on Phabricator
https://phabricator.wikimedia.org/phame/post/view/185/
Dsharpe (Dsharpe) | Published 2020-01-16 | Updated 2020-03-21

On 14 January 2020, staff at the Wikimedia Foundation discovered that a data file exported from the Wikimedia Phabricator installation, our engineering task and ticket tracking system, had been made publicly available. The file was leaked accidentally; there was no intrusion. We have no evidence that it was ever viewed or accessed. The Foundation's Security team immediately began investigating the incident and removing the related files. The data dump included limited non-public information such as private tickets, login access tokens, and the second factor of the two-factor authentication keys for Phabricator accounts. Passwords and full login information for Phabricator were not affected -- that information is stored in another, unaffected system.

The Security team has investigated and assesses that there is no known impact from this incident. However, out of an abundance of caution, we are resetting all Two-Factor Authentication keys for Phabricator and invalidating the exposed login access tokens. Additionally, we continue to encourage people to engage in online security best practices, such as keeping your software updated and resetting your passwords regularly.

The Foundation will continue to investigate this incident and take steps to prevent it from occurring again in the future. In the meantime, Phabricator is online and functioning normally. We regret any inconvenience this may have caused and will provide updates if we learn of any further impact.

Respectfully,

David Sharpe
Senior Information Security Analyst
Wikimedia Foundation

translatewiki.net security incident
https://phabricator.wikimedia.org/phame/post/view/121/
JBennett (John Bennett) | Published 2018-10-10 | Updated 2019-06-01

What happened?
On September 24, 2018, a series of malicious edit attempts was detected on translatewiki.net. In general, these included attempts to inject malicious JavaScript, threatening messages and porn.

Upon detection, it was determined that while the attacker's attempts were unsuccessful, there was a vulnerability that, if properly leveraged, could affect users. Because of this vulnerability, it was decided to temporarily disable translation updates until countermeasures could be applied.

What information was involved?
No sensitive information was disclosed.

What are we doing about it?
The security team and others at the Foundation have been working with translatewiki.net to add security-relevant checks to the deployment process. While we currently have appropriate countermeasures in place, we will continue to partner with translatewiki.net to add more robust security processes in the future. Translation updates will go out with the train while we continue to address architectural issues uncovered during the security incident investigation.

John Bennett
Director of Security, Wikimedia Foundation

Additional details on OurMine
https://phabricator.wikimedia.org/phame/post/view/114/
JBennett (John Bennett) | Published 2018-09-07 | Updated 2018-09-14

I'll be following the guard rails of the original blog post created by Darian Patrick in November 2016, and I'll do my best to fill in what gaps I can.

What Happened?
The attackers targeted a small group of privileged and high-profile users. It is most likely that the attackers were using passwords that had been published as part of dumps from other compromised websites such as LinkedIn. This was confirmed by compromised users, who reported that they were in fact recycling passwords across multiple sites with known password dumps. There was no evidence of system compromise.

What information was involved?
There is no evidence of any personal information being disclosed beyond usernames and passwords.

What was done about it?
  • Improved alerting and reporting to identify dictionary and brute force attacks
  • Extended the password policy to mitigate such attacks

John Bennett
Director of Security, Wikimedia Foundation

Details of dictionary attack from May 2018
https://phabricator.wikimedia.org/phame/post/view/113/
JBennett (John Bennett) | Published 2018-09-07 | Updated 2018-11-01

What happened?

On May 3rd, 2018, a large spike in the number of login attempts was detected on English Wikipedia, caused by a dictionary attack sourcing primarily from a single internet service provider.

Several hours into the attack, the security team and others at the Foundation launched countermeasures mitigating the attacker's efforts. While the countermeasures were successful, end users continued to receive "failed login" notification emails as usual.

What information was involved?

Users whose accounts were compromised were contacted or blocked. The information disclosed consisted of usernames and passwords derived as part of the dictionary attack; no other personal information was disclosed.

What are we doing about it?

Changes to password policies: The security team and others at the Foundation are evaluating our current password policy with the intention of strengthening it to better protect online identities, promote a culture of security, and align with best practices. More on this in the coming weeks, but it's definitely a step in the right direction.

Routine security assessments: Starting at the end of September, the security team will begin a series of penetration tests to assess some of our current controls and capabilities.

As the Security team grows (we're hiring), we will expand our capabilities to include additional assessments such as routine dictionary attacks to identify weakly credentialed accounts, penetration testing, policy updates, and additional security controls and countermeasures.

Other technical controls and countermeasures: While we can’t disclose our exact countermeasures, we have a series of additional technical controls and countermeasures that will be implemented in the near future.

Security Awareness: There are several changes coming, and to support these changes the security team will be launching various security awareness campaigns in the coming months.

John Bennett
Director of Security, Wikimedia Foundation