Messages should be cached in-process.
As a starting point I would suggest a TTL of 1 min. Assuming 1 request/second and only 1 instance, this gives us a hit rate of around 59/60 ≈ 98% and reduces the number of requests to the API by a factor of 60. On the other hand, 1 min is an acceptable delay for new messages to show up.
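The estimate above can be sanity-checked with a small helper. This is a back-of-envelope sketch only, assuming uniform traffic and exactly one cold miss per instance per TTL window; the function name and signature are made up for illustration:

```javascript
// Rough cache hit-rate estimate: each instance misses once per TTL
// window (the first request after expiry) and serves the rest from cache.
function hitRate(ttlSeconds, reqPerSec, instances = 1) {
  const requestsPerWindow = ttlSeconds * reqPerSec; // total requests per TTL window
  const missesPerWindow = Math.min(instances, requestsPerWindow); // one cold miss per instance
  return 1 - missesPerWindow / requestsPerWindow;
}

// 1 req/s, 60 s TTL, 1 instance -> (60 - 1) / 60 ≈ 0.983
console.log(hitRate(60, 1));
```

With 3 instances (the upper end of the expected scale) the same numbers still give 1 - 3/60 = 95%, so the approach holds up even if the service scales slightly.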
After talking on IRC with the serviceops people, it sounds like only having 1 instance is likely, but we should be prepared for it to scale up slightly with load if needed. We would expect ca. 1-3 instances (probably 1) and **not** 10-50.
**Notes**
* there is a [[ https://www.npmjs.com/package/axios-cache-adapter | axios cache adapter ]]
* there is also [[ https://github.com/kuitos/axios-extensions | axios-extensions ]]
* https://github.com/axios/axios/issues/31 has some simple library-free suggestions
**Technical notes**:
* can be enabled on a repo/action level (not only globally)
* TTL can be configured on repo/action level, config coming from environment variable
* ideally this does not touch existing repos but is added transparently
* independently of the implementation, the cache should only keep a limited number of items - hard-coding this to 1000 should be a safe bet
* do not implement your own cache solution! code wiring up an existing library (e.g. https://www.npmjs.com/package/lru-cache) should suffice
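To make the per-repo/action configuration point concrete, here is a minimal sketch of resolving the TTL from environment variables with a global fallback. The variable names (`MESSAGE_CACHE_TTL`, `MESSAGE_CACHE_TTL_<SCOPE>`) and the scope-to-key mapping are assumptions for illustration, not an existing convention; the actual cache itself would come from lru-cache as noted above:

```javascript
const DEFAULT_TTL_MS = 60 * 1000; // 1 min starting point from above
const MAX_ITEMS = 1000;           // hard-coded size cap from the note above

// Resolve the TTL (ms) for a repo/action scope, e.g. scope "my-repo"
// reads MESSAGE_CACHE_TTL_MY_REPO, then falls back to the global
// MESSAGE_CACHE_TTL, then to the default. Values are in seconds.
function ttlFor(scope, env = process.env) {
  const key = 'MESSAGE_CACHE_TTL_' + scope.toUpperCase().replace(/[^A-Z0-9]/g, '_');
  const raw = env[key] ?? env.MESSAGE_CACHE_TTL;
  const seconds = Number(raw);
  return Number.isFinite(seconds) && seconds > 0 ? seconds * 1000 : DEFAULT_TTL_MS;
}
```

The resolved TTL and `MAX_ITEMS` would then be passed straight to the cache library's options (lru-cache accepts both a maximum item count and a per-entry TTL), keeping the wiring code trivial.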