In general, application cache purging in MediaWiki works like this:
Case I:
a) User changes some asset
b) Cache keys and CDN may be purged
c) The user sees the new asset (e.g. via a post-save redirect); ChronologyProtector and the sticky DC cookie make sure they see the new value, and cache misses on the asset see the changed data when writing it back to the cache (see the write-path sketch after this list)
d) CDN caches the new asset
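To make steps (a)-(b) concrete, here is a minimal sketch of the write path. The saveAssetToMaster() helper and the 'asset' key scheme are hypothetical, $title is assumed to be the affected page's Title, and the exact accessor/class names vary by MediaWiki version (e.g. SquidUpdate vs. CdnCacheUpdate):

  $cache = ObjectCache::getMainWANInstance();
  $key = $cache->makeKey( 'asset', $assetId ); // hypothetical key scheme

  // (a) Write the change to the master DB (details elided)
  saveAssetToMaster( $assetId, $newValue ); // hypothetical helper

  // (b) Purge the WAN cache key; delete() leaves a tombstone for
  // HOLDOFF_TTL, so a cache miss that reads a lagged slave cannot
  // re-cache the old value for long
  $cache->delete( $key );

  // (b) Also purge the CDN for the affected URLs
  CdnCacheUpdate::purge( [ $title->getInternalURL() ] );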
Case II:
a) User changes some asset
b) Cache keys and CDN may be purged
c) Some other user views the asset later, once the slaves have caught up. They see the new value, and cache misses on the asset see the changed data when writing it back to the cache
d) CDN caches the new asset
The slaves and WAN cache quickly converge on the newest values. However, one can imagine another case...
Case III:
a) User changes some asset
b) Cache keys and CDN may be purged
c) Some other user requests the asset before the slaves have caught up (bad luck). They see the old value, and cache misses on the asset write the old data back to the cache. The slaves and WAN cache will still converge on the right value soon (see the read-path sketch after this list). But...
d) CDN caches the old asset and is stuck with it for the full TTL (or until a purge or a new change)
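The reason the slaves and WAN cache still converge is that the WAN cache's miss path can be made lag-aware. A sketch, assuming a hypothetical loadAssetFromDB() helper; the $setOpts pattern with Database::getCacheSetOptions() comes from the WANObjectCache documentation and makes the cache treat a set based on a lagged slave as volatile instead of caching it for the full TTL. The CDN has no equivalent safeguard, which is exactly why Case III gets stuck there:

  $value = $cache->getWithSetCallback(
      $cache->makeKey( 'asset', $assetId ), // hypothetical key scheme
      86400, // normal TTL (one day)
      function ( $oldValue, &$ttl, array &$setOpts ) use ( $assetId ) {
          $dbr = wfGetDB( DB_SLAVE );
          // Record the slave's lag/position; if it is lagged, the WAN
          // cache shortens or rejects this set rather than caching
          // possibly-stale data for the full TTL
          $setOpts += Database::getCacheSetOptions( $dbr );

          return loadAssetFromDB( $dbr, $assetId ); // hypothetical helper
      }
  );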
This is typically not a big problem for most assets, given that:
a) Rapidly changing dynamic content is usually uncached or has a very low TTL (e.g. RecentChanges)
b) Other assets (e.g. random, low-traffic pages) are less likely to have this kind of coincidence happen
However, popular articles are the assets where this is more likely to occur (e.g. "Barack Obama", featured articles, etc.).
Probably the easiest solution is to do a second "rebound" CDN-only purge after ~WANObjectCache::HOLDOFF_TTL seconds, which is the effective slave-lag SLA limit; a sketch follows below. This could use the job queue, and it is fairly cheap since the actual app cache (e.g. the parser cache) is not cleared.
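A minimal sketch of the enqueue side, assuming a hypothetical 'cdnPurge' job type and a job queue backend with delayed-job support ('jobReleaseTimestamp' is the standard job parameter for deferring execution):

  // On save, queue a second CDN-only purge that runs once the slave
  // lag SLA window has passed, evicting any stale copy the CDN may
  // have picked up in the meantime (Case III)
  JobQueueGroup::singleton()->push(
      new JobSpecification(
          'cdnPurge', // hypothetical job type
          [
              'urls' => [ $title->getInternalURL() ],
              // Delay execution until the slaves must have caught up
              'jobReleaseTimestamp' => time() + WANObjectCache::HOLDOFF_TTL
          ],
          [ 'removeDuplicates' => true ],
          $title
      )
  );

The hypothetical job's run() method would only re-send the CDN purge for the stored URLs (e.g. via CdnCacheUpdate::purge()), leaving the parser cache and WAN cache untouched, which is what keeps it cheap.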