Assess engagement and edit data to determine whether the decision criteria listed here have been met.
- If 45% or less of edits are scored a 3 or higher, we will pivot to a different suggested edit. If 46%-70% of edits are scored a 3 or higher, we will improve guidance or use AI to better assist users. If 71% or more of edits are scored a 3 or higher, we will scale the feature.
- If average edits per unique user is under 3, we will pivot to a different suggested edit. If it is 3-6, we will consider interventions to reduce friction. If it is 7 or higher, we can scale.
- If users return to edit through the feature on a second day, we should proceed with improvements and scaling.
- If less than 55% of users are satisfied with the feature, we will not scale without making changes.
- If more than 30% of users find the task too difficult, we will create an intervention to reduce difficulty before scaling. If 80% or more of users find the task too difficult, we will consider abandoning the feature, depending on supplementary responses.
- If the skip rate is 20% higher than the skip rate for Image Captions on Android, we will consider pivoting to a different suggested edit, unless evidence points to an intervention that could reduce this rate.
- If we do not have at least 50 people try the feature, we will do direct outreach to gather more edits.
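The threshold-based criteria above can be sketched as simple decision functions. This is a minimal illustration, not production analysis code; the function names and the string labels for each outcome are hypothetical, but the numeric cut-offs come directly from the criteria listed above.

```python
# Hypothetical sketch of the decision criteria; thresholds match the list above.

def edit_quality_decision(pct_scored_3_plus: float) -> str:
    """Decision based on the share of edits scored 3 or higher."""
    if pct_scored_3_plus <= 45:
        return "pivot to a different suggested edit"
    if pct_scored_3_plus <= 70:
        return "improve guidance or add AI assistance"
    return "scale the feature"

def edits_per_user_decision(avg_edits: float) -> str:
    """Decision based on average edits per unique user."""
    if avg_edits < 3:
        return "pivot to a different suggested edit"
    if avg_edits <= 6:
        return "consider interventions to reduce friction"
    return "scale the feature"

def difficulty_decision(pct_too_difficult: float) -> str:
    """Decision based on the share of users finding the task too difficult."""
    if pct_too_difficult >= 80:
        return "consider abandoning (check supplementary responses)"
    if pct_too_difficult > 30:
        return "intervene to reduce difficulty before scaling"
    return "no difficulty intervention needed"
```

For example, `edit_quality_decision(60)` falls in the 46%-70% band and returns the guidance-improvement outcome.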
- How does the predicted revert rate compare to Android's Image Captions Suggested Edit?
- Is there a difference in user-reported feedback based on familiarity with editing and writing alt-text?
- Is there a difference in metrics by language?