After discussion with the Traffic team, this task is to track the testing and, if successful/valuable, production deployment of a system to offload ICMP pings to a dedicated host.
A large volume of ICMP echo requests toward our main IPs, typically sent by people and machines testing their connectivity to the Internet, has been causing issues: for example, hitting rate-limiter thresholds (set to avoid overwhelming our servers) and causing monitoring ICMP requests to be dropped.
**1st part, to deploy a test instance in eqiad**
 Get a VM in a public vlan (ping1001.wikimedia.org ?)
 Reserve a test public IP in the LVS range in DNS
 Assign the IP to the VM's loopback interface
 Add a firewall rule on cr1/2-eqiad to redirect ICMP echo requests
```
set firewall family inet filter border-in4 term offload-ping4 from protocol icmp
set firewall family inet filter border-in4 term offload-ping4 from icmp-type echo-request
set firewall family inet filter border-in4 term offload-ping4 from destination-address <test-LVS-IP>
set firewall family inet filter border-in4 term offload-ping4 then next-ip <VM-IP>
```
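On the VM side, the loopback-assignment step amounts to adding the VIP as a /32 on `lo` so the kernel answers for packets the router redirects to it. A minimal sketch, assuming a Linux VM (the ARP sysctls follow the usual LVS real-server pattern and only matter if the VIP's subnet is directly attached):

```shell
# Add the test VIP to the loopback interface (placeholder IP from this task)
ip addr add <test-LVS-IP>/32 dev lo
# Avoid answering/advertising ARP for the VIP on other interfaces
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```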
From there, pings sent to the test IP should be answered by the VM.
Internally, pings to an LVS VIP should be answered by a host behind the LVS;
externally, they should be answered by the VM.
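What the VM has to do per packet is mechanical: flip the ICMP type from echo-request (8) to echo-reply (0), copy the identifier, sequence number and payload unchanged, and recompute the checksum. A minimal sketch of that rewrite (function names are mine, not from any deployed tool; in production this would sit behind a raw socket bound on the VIP):

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over the ICMP message."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_reply(request: bytes) -> bytes:
    """Turn an ICMP echo request (type 8, code 0) into an echo reply (type 0).
    Identifier, sequence number and payload are copied verbatim."""
    icmp_type, code = request[0], request[1]
    assert icmp_type == 8 and code == 0, "not an echo request"
    # type=0 (reply), code=0, checksum placeholder, rest copied unchanged
    reply = struct.pack("!BBH", 0, 0, 0) + request[4:]
    return reply[:2] + struct.pack("!H", icmp_checksum(reply)) + reply[4:]
```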
 Add the VM to standard monitoring (Icinga, Prometheus, etc.)
 Ensure external monitoring runs its ICMP checks against the LVS VIPs (and not the hostname)
 Ensure availability of the service hosted on the LVS VIP is externally monitored by a check other than ICMP
The previous 2 points are to prevent people (and availability stats) from concluding that the actual service (eg. wikipedia.org) is down when only the ICMP responder is.
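The distinction the two monitoring points draw can be captured directly in alerting logic: once ICMP is offloaded, ping reachability of a VIP says something about the offload host, not the service. A toy sketch of that triage rule (function name and state strings are hypothetical, purely illustrative):

```python
def triage(icmp_ok: bool, service_check_ok: bool) -> str:
    """Decide what an alert on a VIP should mean once ICMP is offloaded.

    service_check_ok is the non-ICMP availability check (e.g. HTTPS);
    it alone decides whether the real service is down.
    """
    if not service_check_ok:
        return "service-down"           # page: the actual service is unavailable
    if not icmp_ok:
        return "icmp-offload-degraded"  # only the ping responder is unhealthy
    return "ok"
```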
 Write documentation (eg. how to disable the redirect)
 Optional: Write an ICMP dashboard in Grafana
**2nd part, catch real ICMP traffic in eqiad**
 Assign 220.127.116.11 (text-lb.eqiad.wikimedia.org) to the VM's loopback interface
 Update the cr1/2-eqiad firewall rule
 Verify monitoring is happy
 Decommission the test VIP
**3rd part, if the eqiad deployment is satisfactory, duplicate in codfw**
**4th part, deploy to POPs**
 Either order dedicated hardware or wait for a VM solution to be available at the site.
 Replicate the setup in Puppet
If required, this can be implemented with two hosts per site sharing a VIP using VRRP or BGP (preferred), either on day 1 or in a later iteration.
* Results could be considered "lying", as pings to a host would be answered by a different host (which might confuse troubleshooting)
* Redundancy would mean using 3 public IPs (2 reals + 1 VIP) per site
* The list of ping targets to "catch" needs to be maintained in 2 more tools (Puppet + network automation)
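If the VRRP option is chosen, the two hosts at a site could share the VIP with a stock keepalived instance; a minimal sketch with hypothetical interface and priority values (the BGP alternative would instead announce the VIP from both hosts and let the router pick):

```
# keepalived.conf sketch -- interface/IP values are placeholders
vrrp_instance ping_vip {
    state BACKUP            # both hosts start as BACKUP; priority elects a master
    interface eth0
    virtual_router_id 51
    priority 100            # give the second host a lower priority, e.g. 90
    advert_int 1
    virtual_ipaddress {
        <site-ping-VIP>/32 dev lo
    }
}
```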