
Design and designate a core annual plan metric for community resilience and sustainability (SDS 1.1.1)
Closed, Resolved · Public

Description

Drive working group efforts to establish a new contributors core annual plan metric for community resilience & sustainability (aka "health") in support of SDS 1.1.1:

If we form a working group that establishes norms for metric requirements and designates a data steward for the contributor metric area, we will be able to meet the requirements for SDS1.1 for one of the four core metric areas.

This is the portion of the hypothesis work that will be led by Movement Insights, with a group that includes subject matter experts, participants who can provide scientific or empirical evidence to support the metric, and expertise in data query design.

  • Draft the project plan for SDS 1.1 - Contributors / Community Health metric
  • Organize temporary working group focused on contributor research
  • Identify key prioritization lenses for criteria to evaluate potential metrics
  • Step 1: Operationally define metric-relevant terms and draft inclusion/exclusion criteria for potential metrics.
  • Step 2: Collaborate on Discovery and Research 
  • Step 3: Collaborative Review & Decision-Making
  • Data steward and key data consumer sign-off [In progress]
  • Develop theories of change for use cases [Added]
  • Advance new contributors metric for data pipelining to production [Up next]
  • Establish a new contributor metric for community resilience and sustainability aka Health

Event Timeline

Jaime and I have completed the close-out report.
We have also drafted the new hypotheses for SDS1.1 in our project charter (see the Relevance and Content metrics).
Both are pending review and approval.

Summary
We have decided to close out the hypothesis.

Status Summary

Contributors Metric Working Group

Following a careful process, the working group was able to meet most of the original success criteria outlined in the project charter. At the end of our deep dive, shared reviews, and assessments, there were clear front-runners from the community health concept area framed as “Communities are stable or growing.” Each had potential relevance as a metric in all three annual plan focus spaces: Trust & Safety services, support for volunteer governance, and at-scale interventions to increase the number of editors. The two leading metrics in that area were:
  • Stability: change in size of active editors as a percent of the previous year's same month, organized by categorizing the wiki as "stable or growing" vs not.
  • Stability: number of active editors by the number of months they have been active in a row.
Each of these two metrics presented different strengths and weaknesses. The first is based on existing, public data, but it has so far been used only internally; although it follows a standard approach, no validated benchmarks are available, and it has not been productionized, though there is a notebook for running the query code. The second is supported by some research-relevant findings, but its applicability is limited on small wikis, giving it somewhat less annual plan feedback loop potential in the product intervention space; it also carries a baked-in assumption that active editors wouldn’t occasionally take a month off, which gave pause in the T&S and volunteer governance support contexts.
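For illustration only, here is a minimal sketch of how the first metric could be computed, assuming a hypothetical monthly_active table of active editor counts per wiki; the column names and the "did not shrink" cut-off for the stable-or-growing category are assumptions, and the internal notebook mentioned above remains the authoritative query code:

```python
import pandas as pd

# Hypothetical input: one row per wiki per month with its active editor count.
monthly_active = pd.DataFrame({
    "wiki": ["xxwiki"] * 24,
    "month": pd.date_range("2022-01-01", periods=24, freq="MS"),
    "active_editors": [100 + i for i in range(24)],
})

df = monthly_active.sort_values(["wiki", "month"]).copy()

# Year-over-year change: compare each month to the same month of the
# previous year (12 rows back; assumes no missing months in each series).
df["yoy_pct_change"] = (
    df.groupby("wiki")["active_editors"]
      .pct_change(periods=12, fill_method=None) * 100
)

# Categorical output: a wiki-month counts as "stable or growing" if active
# editors did not shrink year over year (threshold assumed here; the first
# 12 months have no prior-year baseline and evaluate to False).
df["stable_or_growing"] = df["yoy_pct_change"] >= 0
```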
The other two concept areas also produced promising candidates. For “Admins have capacity to handle tasks”, the most promising metric was the ratio of admins to editors (active admins vs. active new editors); however, it was identified as challenging to apply to small wikis and as lacking an important content component. For the concept area “A community is resilient and cannot be captured”, the most promising metric seems to be the edits Gini coefficient, a measure of how evenly edits are spread across editors (0 is the most equal, where every editor has made the same number of edits; 1 is the most unequal, where a single editor has made every edit). Still, it would take careful review and consultation before this design could become an actionable metric in our context.
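For concreteness, a minimal sketch of the edits Gini coefficient described above, using the standard sample Gini formula over per-editor edit counts; the function name and example counts are illustrative, not the working group’s final metric design:

```python
def edits_gini(edit_counts):
    """Gini coefficient of per-editor edit counts.

    0 means perfectly equal (every editor made the same number of edits);
    values approach 1 as a single editor accounts for all edits.
    """
    xs = sorted(edit_counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard sample formula with 1-based ranks i over sorted values:
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(edits_gini([5, 5, 5, 5]))       # 0.0: edits spread perfectly evenly
print(edits_gini([1, 1, 1, 1, 996]))  # ~0.80: one editor dominates
```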
In the end, the metric that received the highest rating from the process was a stability metric aimed at monitoring that “Communities are stable or growing”:
  • Stability: the change in size of active editors as a percent of the previous year's same month, organized by categorizing the wiki as "stable or growing" vs not.
We expected some challenges to be resolved during implementation, for example, ensuring the right population breaks (e.g., advanced rights editors), setting the reporting cadence, deciding which rights/actions to include, and determining how to store the interval vs. categorical output metrics. However, in our efforts to advance the metric for implementation, we ran into blocks when it came to satisfying emerging annual plan metric criteria for Product & Tech use cases. After review of the final metric recommendation, three concerns were raised, which we respond to in the memo Product connections to the community health metric:
  • This metric is not sensitive enough to product-focused interventions.
  • This metric is sensitive to socio-political and policy-driven shifts beyond the product space.
  • This metric is too noisy.

After multiple reviews, Selena Deckelmann made the decision not to accept the recommended stability metric and directed that metric definition work continue as a separate effort, led by Marshall Miller, to define a product-focused measurement plan that feeds into the emerging ‘multi-generational’ product strategy. As a result of these prioritization changes, our data governance work will pivot to focus on the three remaining core annual plan metrics: Relevance, Effectiveness, and Content.

Data Governance Working Group

Beginning with one-to-one sessions with individuals who had a data governance vision for the Foundation, a working group of members from Research & Decision Science and Data Platform Engineering was created to establish a shared understanding of data strategy and management and to prioritize the pillars of Data Governance needed to govern the Contributors metric data. While the groundwork for Data Governance was laid by senior leadership a couple of years ago (see Data Governance Charter Document - Draft and Data Governance) and by the Data Engineering team (Data Governance Design, Data Governance Framework - Draft), with SDS1.1.1 we were able to focus on the adoption of a data governance framework, starting with the implementation of Data Stewardship. We also collaborated with other SDS OKRs focused on Data Quality and Data Lineage.

We identified 8 pillars of Data Governance (Business Use, Metric Design, Data Stewardship, Data Literacy, Data Quality, Security, Data Transparency, and Lifecycle Management) and quickly realized that not all of them could be accomplished as part of the SDS1.1.1 hypothesis. Each pillar will likely need its own hypothesis in the future.

We presented the concept of Data Stewardship to the Data Platform Engineering, Product Analytics, and Research & Decision Science teams, resulting in positive feedback and increased engagement with the topic. In addition, we shared the Data Governance learning journey with Tajh and Mark during the VP office hours and received additional positive feedback.

Data stewardship calls for 7 distinct roles, each with a different set of responsibilities: Technical Data Stewards, Business Data Stewards, Enterprise Data Steward, Domain Data Stewards, Data Governance Manager, Data Governance Program Office, and Data Governors. Given the smaller size of our organization, we decided to adopt a lightweight version of data stewardship to begin implementation, focusing on 3 major roles: Technical Data Stewards, Business Data Stewards, and a Data Governance Council (with the supporting role of a steering committee consisting of program sponsors, i.e., executives). We trained the data stewards selected for the Contributors metric, went over the basic principles of data governance and stewardship, and introduced the Data Stewardship RASCI: Roles and responsibilities to help them get acquainted with their roles and responsibilities.

Key Learnings

Data Governance Working Group
Although we did not produce a governed community health metric as specified in the hypothesis, there were strong learnings from this work:
Key Learning No. 1: Having fewer roles allowed us flexibility in shaping the Data Stewardship RASCI: Roles and responsibilities and in training the data stewards.
Key Learning No. 2: The selection criteria for the Contributors metric data stewards focused on who worked closely with the data we want to steward and who relied upon or benefited from it the most. In other words, we looked for the people who cared about what happens to that data. This selection process will be continued for the other core metrics we plan to govern.
Key Learning No. 3: We collaborated with SDS 3.3 to understand the data quality issues we’ve encountered and what we can learn from them, and we will look into adopting the outcome of SDS 2.6.1 so that we can view the output in dashboards after each run of the core metrics pipeline and receive alerts.
Key Learning No. 4: Mapping the pillars to the SDS KRs and hypotheses revealed that while we had initiatives in place for Data Quality (SDS3.3) and Data Transparency (SDS2.1, now deprecated), we didn’t have any OKRs associated with data security and related topics. We now have SDS2.5.4, owned by Hal Triedman, to create a healthy privacy-aware culture within the Product and Technology department and develop policy-compliant shared analytics/telemetry tools.

Acknowledgements
Thank you to the working group members - Olja, Andreas, Desiree, Mikhail, Joseph, Luke, Virginia, and Jaime - for your contributions and support. Thank you Kate, Omari, and Mariya for your guidance and feedback, which we will carry into the next hypothesis iteration.

Contributors Metric Working Group
Although we did not produce the deliverable specified in the hypothesis, a community health metric for contributors in place by January 2024, we learned a lot:
Key Learning No. 1: The contributors metric working group process was successful in many ways: gathering diverse experts from across the organization, engaging them in research discovery and learning, and facilitating group brainstorms, shared assessments, and group decision-making.
Key Learning No. 2: Changes to organizational priorities are ongoing, and we should develop hypotheses with shorter timelines, smaller scopes of work, and more explicit stakeholder review milestones.
These are described in more detail below.
Key Learning No. 1: The contributors metric working group process was successful in quickly gathering diverse experts from across the organization, engaging them in research discovery and learning, and in facilitated brainstorms, assessments, and decision-making steps. One alteration we had to make was including additional folks in the working group as we went, which was important for identifying connections between metric options and potential business use cases. To capture these practices, what made them effective, and what could make them more effective, a few working group members outlined the working group process, including the collaborative steps we took from project outset to conclusion.
Hypothesis/Recommendation:
  • The process ran both on time and effectively. The process documentation should help guide practices in similar future metrics research and design efforts.
  • Power mapping of stakeholders, internal and external to the organization, would likely have prevented the need for late additions to the working group and better ensured equitable inclusion in the small group.
Key Learning No. 2
Given ongoing changes in high-level strategic plans, consider the following options:
  • Present a set of recommendations rather than only the top-ranked recommendation. In the case of the community health recommendation, however, the second- and third-ranked metrics presented additional challenges with respect to the concerns raised, so this would have been less useful here.
  • Clarify decision authority for the working group’s recommendation at the outset. Again, power mapping stakeholders early on may have clarified veto authority; however, it is unclear whether this was an unclarified assumption or one that changed.
Acknowledgements
Thank you Tanja Anđić, Pablo Aragón, JR Branaa, Suman Cherukuwada, Jan Eissfeldt, Maya Kampurath, Rebecca Maung, Mikhail Popov, Omari Sefu, Sam Walton for your contributions and support. Thank you also to the SDS 1.1 steering committee for your guidance and input along the way.

SDS 1.1 Hypotheses Next Steps Recommendations:
If we document the essential metric criteria for the Relevance metric and establish a clear owner for the metric, we will be able to increase the quality and reliability of the unique devices datasets and provide a more dependable annual plan metric.
If we establish a governance council, maintain a repository of artifacts for data practices, and document data management decisions, then we can build a data governance framework for the organization.
If we work with executive-level stakeholders to identify the success criteria of the content gap metrics at an operational level, and then consult internal technical and business data experts, we can provide a recommendation to improve the design of the content gap metrics so that monthly signals are more clearly interpretable and actionable.

Thank you
