Creating a pentesting process

By @Mstyles and edited by @Cleo_Lemoisson

"Over the last quarters, the Application Security team has developed several services geared towards increasing the security of the code written at the Foundation. Most notably, we created an automated security pipeline and continued our security reviews as part of the readiness steps for production deployment. But, as this review process is more focused on new code that is about to be deployed, we needed a way to audit pieces of code that were already in production. This is where our pentesting program comes in!"

What is a pentest?

Penetration testing is a type of audit run on larger code bases by specialized external contractors. Combining internal reviews with external pentesting efforts allows for a thorough analysis of the code. While internal reviewers have a deeper understanding of the context, external auditors take a bigger-picture approach that uncovers problems that could otherwise be missed.

Pentests are usually run according to a black, white or gray box approach:

  • Black box penetration testing is done without any special permissions and attempts to gain access to systems the way external attackers would.
  • White box penetration testing is done with access to account logins and sometimes source code.
  • Gray box penetration testing combines aspects of black and white box testing. The pentesters have access to privileged accounts and do source code reviews, but also try a black box approach of gaining access to the system.

The gray box approach is the one the Security Team usually selects for WMF pentesting cycles.

Why do we pentest? And who needs it?

You might have heard of the critical issue found in log4j in December 2021 - this was a pretty big one! This is the exact kind of thing pentesting is designed to catch. By hiring external auditors, we try to prevent such vulnerabilities from ever living in our code and becoming public. As no review method is foolproof, we feel that having both internal and external reviews strengthens our chances of producing the most secure code possible.

The Security Team is looking for software that runs in WMF production and would have a high impact on users if it were to become compromised. Past areas that have been tested include Mobile, Fundraising and Cloud VPS. We’ve also done assessments of third-party software used at the Foundation, such as Mailman 3, Apereo CAS and the trusted runners in GitLab. If you feel like you are working with software that fits those criteria, please reach out to us!

How is it typically run?

A typical pentesting process has several steps:
  • Scoping: this step is usually done prior to the start of the engagement. Some vendors provide a scoping worksheet that gathers all of the documentation links, a short description of what’s being tested, and any goals the engagement might have.
  • Kick-off meeting: a pentesting engagement starts with a kickoff meeting gathering the testers and the development team. During this meeting, the auditors ask for clarifications about the source code, context and expected workflow of the application.
  • Audit: the pentesting team performs their tests. This step can last between two and three weeks depending on the scope of the audit.
  • Debrief meeting: the pentesting team issues a report containing a list of issues ranked by severity. This report is presented to the development team.
  • Mitigation strategy: this is where the development team assesses the uncovered vulnerabilities and decides on the best remediation strategy. Ideally, at a minimum, any critical or high severity issues are addressed as soon as possible. Lower-priority vulnerabilities can either be fixed at a later date or accepted as a known risk and entered in the risk registry (a triage sketch follows this list).
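To make the mitigation step concrete, here is a minimal sketch, in Python, of how a report’s findings might be triaged into “fix as soon as possible” versus “fix later or accept into the risk registry”. The severity labels, the Finding shape and the critical/high threshold are illustrative assumptions, not any vendor’s actual report format.

```
from dataclasses import dataclass

# Hypothetical severity scale, ordered from most to least urgent.
SEVERITY_ORDER = ["critical", "high", "medium", "low", "informational"]

@dataclass
class Finding:
    title: str
    severity: str  # one of SEVERITY_ORDER

def triage(findings):
    """Split findings into urgent fixes versus issues that can be
    deferred or accepted as known risks in the risk registry."""
    def rank(finding):
        return SEVERITY_ORDER.index(finding.severity)
    urgent = [f for f in findings if f.severity in ("critical", "high")]
    deferred = [f for f in findings if f.severity not in ("critical", "high")]
    # Present the most severe issues first in each bucket.
    return sorted(urgent, key=rank), sorted(deferred, key=rank)

# Example usage with made-up findings:
report = [
    Finding("Verbose server banner", "informational"),
    Finding("Stored XSS in admin panel", "high"),
    Finding("Outdated TLS configuration", "medium"),
]
fix_now, fix_later_or_accept = triage(report)
```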

It is worth noting that the WMF’s context and open-source philosophy differ from most vendors’ assessments of risk. As a result, some uncovered problems are in fact deliberate features of our way of working. Such differences include what information is made public and what is accessible on the public internet.

Different firms have different processes, but as part of changing how we approach pentesting, we want to develop a standard approach regardless of which vendor is performing the assessment.

What does pentesting currently look like at the Foundation?

The program is still very much taking shape! Since 2018, we have performed 30 audits in areas ranging from Mobile to Fundraising. MediaWiki extensions already have a clear pipeline for application security reviews via the deployment checklist.

Past pentesting engagements have exposed a range of issues, from critical vulnerabilities that were fixed immediately to best practices that certain projects were not following.

Some audits also confirm that our code is secure! Recently, an assessment performed on Wikimedia Cloud VPS ended with the testers being unable to access other projects or the underlying hardware during their several weeks of testing. This means that poor choices made by individual contributors to cloud projects, such as out-of-date packages or improperly stored credentials, cannot impact other cloud projects or take down the underlying hardware.

Of course, doing pentesting at WMF is not without challenges. Communication has been one of them, since different teams use different communication formats. Some critical infrastructure, such as CentralAuth, has no official WMF team and only a few community maintainers. This, combined with very little on-wiki documentation, makes it difficult for testers to understand the system. Moreover, managing remediation for projects that are not supported is challenging, because those Phabricator tickets simply add to the thousands of open or stalled ones.

Help us design the future of the pentesting program!

While successful, this pilot phase highlighted the need to develop a set of criteria to identify good “candidates” for pentesting engagements. As we want this process to be as collaborative as possible, we’d like to hold a meeting with people from various tech departments to discuss areas we might have overlooked in the past pentesting projects.

As we move forward with that, we want to create a similar pipeline and route to pentesting for the various areas of MediaWiki and other WMF projects.

For future pentesting assessments, we are looking for software that we’re using that has never been reviewed, or code that has been in production a long time but hasn’t been reviewed by the Security Team recently (or at all). As part of the new pentesting process, we’ll keep a list of previous engagements and when they were performed. There is a lot of code written by WMF employees and the technical community, and only so much pentesting budget, so we’re focusing on code that is in production and that would impact many users if attackers gained access to it.
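As a sketch of how such a list could feed into candidate selection, here is one possible way, in Python, to rank systems against the criteria above (running in production, user impact if compromised, and time since the last review). The fields, weights and example entries are illustrative assumptions, not an actual Security Team scoring rubric.

```
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Candidate:
    name: str
    in_production: bool
    user_impact: int               # assumed 1-5 scale of how many users a compromise would affect
    last_reviewed: Optional[date]  # None if the code has never been reviewed

def priority(candidate):
    """Higher score = stronger pentest candidate. Weights are illustrative."""
    if not candidate.in_production:
        return 0.0  # we focus on code running in WMF production
    if candidate.last_reviewed is None:
        years_unreviewed = 10.0  # never reviewed: treat as maximally stale
    else:
        years_unreviewed = (date.today() - candidate.last_reviewed).days / 365.0
    return 2.0 * candidate.user_impact + years_unreviewed

# Hypothetical entries; not real WMF systems or data.
candidates = [
    Candidate("example-extension", True, 5, None),
    Candidate("example-service", True, 3, date(2019, 6, 1)),
]
shortlist = sorted(candidates, key=priority, reverse=True)
```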

Written by Mstyles on Aug 15 2022, 7:32 PM.