We will most likely have missed a couple of things while refactoring Puppet. Set up two clusters on the relforge servers as a testbed before setting up multiple clusters on the regular production hardware.
|Status||Assignee||Task|
|Resolved||EBernhardson||T183281 [epic] ELK upgrade to 6.x (elasticsearch, kibana, logstash)|
|Resolved||None||T183282 [epic] Search cluster upgrade to 6.x|
|Resolved||debt||T193654 [epic] Run multiple elasticsearch clusters on same hardware|
|Resolved||Gehel||T198352 Setup two elasticsearch clusters on relforge to test multi-instance|
The current Puppet code for tlsproxy::localssl does not allow multiple $default_server declarations, even when they are on different ports.
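The conflict can be sketched roughly as below. This is an illustrative fragment, not the real profile: the resource titles and any parameter other than default_server are assumptions for the sake of the example.

    # Two co-located TLS endpoints, one per elasticsearch instance.
    # tlsproxy::localssl accepts only one default_server, so the second
    # declaration conflicts even though the instances use different ports.
    tlsproxy::localssl { 'relforge-instance-1':
        default_server => true,   # first instance, e.g. port 9243
    }
    tlsproxy::localssl { 'relforge-instance-2':
        default_server => true,   # second instance, e.g. port 9443 -- rejected
    }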
This led to an interesting conversation about how to differentiate the elasticsearch instances, besides using different ports. Options:
- differentiate on TCP port only (first instance on 9243, second on 9443)
  - this seems the simplest solution; it matches the expectations of the clients and does not have any significant drawback that we could find
- differentiate on server names
  - SAN / SNI support in HTTP libraries is often broken, if supported at all. Our current clients might be OK (unchecked), but if we can avoid the pain, we should
- differentiate on IP
  - it would work fine, but since we want (at least at some point) to have elasticsearch listening only on localhost, elasticsearch will be on different ports already. Exposing that same mapping at the TLS endpoint seems simpler and less surprising. (Yes, we could use different lo: aliases, but that seems even more confusing for not much gain.)
- a combination of some of the above
  - why not?
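With port-only differentiation, a client keeps a single hostname per node and selects the cluster by TLS port alone. A minimal sketch, assuming an illustrative hostname and cluster labels (only the ports 9243 and 9443 come from the discussion above):

    # Hypothetical sketch, not WMF code: map cluster labels to the TLS
    # ports of the two co-located instances and build their base URLs.
    CLUSTER_PORTS = {
        "first": 9243,   # first instance
        "second": 9443,  # second instance
    }

    def cluster_url(host, cluster):
        """Base URL for one of the co-located clusters on a shared host."""
        return "https://{}:{}".format(host, CLUSTER_PORTS[cluster])

    print(cluster_url("relforge1001.example", "second"))  # → https://relforge1001.example:9443

From the client's point of view only a port number changes between clusters; certificates, DNS, and local interfaces stay untouched.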
Multi-instance seems to work well. The problems identified so far are:
- hotthread script (T209030)
- firewall config: mwmaint1002 is unable to talk to relforge:

    $ sudo iptables -n -L | grep 10.64.16.77
    ACCEPT     tcp  --  10.64.16.77          0.0.0.0/0            tcp dpt:9243
This rule is very probably specific to relforge, but I suggest checking whether we have other ad hoc rules that need to be adjusted for the new ports.
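An audit of such rules can be sketched as follows: scan `iptables -n -L` output for ACCEPT rules mentioning a given source host and report which of the expected elasticsearch ports lack one. The sample rule mirrors the relforge example above; nothing here touches a real firewall.

    import re

    def missing_ports(iptables_output, source_ip, ports):
        """Return the ports from `ports` with no ACCEPT rule for `source_ip`."""
        covered = set()
        for line in iptables_output.splitlines():
            if line.startswith("ACCEPT") and source_ip in line:
                m = re.search(r"dpt:(\d+)", line)
                if m:
                    covered.add(int(m.group(1)))
        return [p for p in ports if p not in covered]

    rules = "ACCEPT     tcp  --  10.64.16.77          0.0.0.0/0            tcp dpt:9243"
    print(missing_ports(rules, "10.64.16.77", [9243, 9443]))  # → [9443]

Run against each host's actual rule dump, this would flag the second instance's port (9443 here) wherever only the original 9243 rule exists.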