This is a follow-on to T105201, separated out in part so I can get some experience working on this.
We can't currently track whether someone got to a particular results page because their query didn't warrant suggestions, or because they clicked on a suggestion. Some people (like me) will click on the suggested query, even though it gives the same results, in order to "clear" the error from the screen. (It's distracting and reminds me how poorly I type.)
In the extreme, this could skew our statistics. Imagine 100 queries, 50 with suggestions. That's a 50% suggestion rate. If all 50 of those users clicked on the suggested query, we'd have 150 queries (the original 100 plus 50 follow-ups), still only 50 with suggestions, for a 33% suggestion rate. The effect in real life won't be that big, but we could measure it instead of guessing.
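As a minimal sketch of the arithmetic (using the hypothetical counts above, not real data), being able to tell which result pages came from a suggestion click lets us take those follow-up queries out of the denominator:

```python
# Hypothetical counts from the example above.
total_queries = 150            # 100 original queries + 50 follow-ups from suggestion clicks
queries_with_suggestions = 50  # only the original queries triggered suggestions

# Naive rate: every results page counts as a query.
naive_rate = queries_with_suggestions / total_queries
print(f"naive suggestion rate: {naive_rate:.0%}")      # 33%

# Adjusted rate: exclude result pages reached via a suggestion click.
suggestion_click_followups = 50
adjusted_rate = queries_with_suggestions / (total_queries - suggestion_click_followups)
print(f"adjusted suggestion rate: {adjusted_rate:.0%}")  # 50%
```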
We should also be able to find queries whose suggestions in turn generate additional suggestions (if someone clicks on them); that's interesting in itself, but it might also give insight into how to improve suggestions.