#####Background
The Alt Text experiment will require the review and grading of user-generated alt text by a third-party accessibility organization. For them to adequately judge the quality of the alt text within its context, we will need the following information: alt text entered, caption, article name, and the image itself. We are also assessing whether this task is appropriate for less experienced editors and newcomers, and if their alt text scores differ systematically from more experienced editors, so we need to know the edit count for those completing alt text.
**Why instrument this before the alt text experiment begins?**
We anticipate that the rate at which editors add alt text in response to our prompt will be much higher than in the control group:
- Control (Group A): on Android, around 8% of image recommendation edits included alt text.
- Variant (Group B): around 73% of image recommendations were captioned; we estimate that 60% of editors prompted for alt text will publish it.
This large difference could leave us with many more alt text examples to grade from the variant group than from control. If we need more control alt text for comparison, having this instrumentation in place would allow us to pull alt text information from image recommendation edits made in the 30 days before the experiment.
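To make the imbalance concrete, a back-of-envelope calculation using the rates above (the per-group edit volume of 1,000 is a hypothetical figure for illustration only):

```python
# Rough comparison of expected alt text samples per group.
# Rates from this task: ~8% of control image-rec edits include alt text;
# an estimated ~60% of prompted (variant) editors will publish alt text.
# EDITS_PER_GROUP is a hypothetical volume, not a real measurement.

EDITS_PER_GROUP = 1_000
CONTROL_ALT_TEXT_RATE = 0.08   # observed on Android
VARIANT_ALT_TEXT_RATE = 0.60   # estimate for prompted editors

control_samples = int(EDITS_PER_GROUP * CONTROL_ALT_TEXT_RATE)
variant_samples = int(EDITS_PER_GROUP * VARIANT_ALT_TEXT_RATE)

print(control_samples, variant_samples)  # 80 600
```

At these rates the variant group would yield roughly 7.5x as many gradable examples as control, which is why a 30-day backfill of control alt text may be needed.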
#####Requirements:
- Update Image recommendations instrumentation as highlighted in this [[ https://docs.google.com/presentation/d/1VCY9Oq-3Atis6-gvkd_1p-3Qb4P3PCptmiGEdyS3JI8/edit#slide=id.g2caa8170c27_0_129 | slide ]]
- For each image recommendation edit, add this information to the edit summary success event:
  - Caption (string of user-generated text)
  - Alt text (string of user-generated text)
  - Article title
  - Image file name
  - Editor name
  - Edit count
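As a sketch, the extra fields could be assembled into the success event like this. The function and field names below are illustrative assumptions, not the actual event schema:

```python
# Hypothetical sketch of the additional fields for the edit summary
# success event; names are illustrative, not the real schema.

def build_success_event(caption, alt_text, article_title,
                        image_file_name, editor_name, edit_count):
    """Assemble the extra instrumentation fields for one image recommendation edit."""
    return {
        "caption": caption,                  # user-generated caption text
        "alt_text": alt_text,                # user-generated alt text
        "article_title": article_title,      # article the image was added to
        "image_file_name": image_file_name,  # e.g. a Commons file name
        "editor_name": editor_name,
        "edit_count": edit_count,            # editor's edit count at publish time
    }

event = build_success_event(
    caption="A red fox in snow",
    alt_text="Red fox standing in deep snow, looking left",
    article_title="Red fox",
    image_file_name="Red_fox_in_snow.jpg",
    editor_name="ExampleUser",
    edit_count=42,
)
```

Carrying all six fields in one event keeps each graded alt text example self-contained for the third-party reviewers.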
#####Timing:
Aim to release the updated instrumentation one month before the experiment starts, i.e., around the end of July.