
WMCS Eqiad: Enable IPv6 in cloud vrf on switches
Closed, Resolved · Public

Description

The WMCS network infrastructure in Eqiad has a dedicated VRF for internal traffic that should be isolated from WMF production realm networks, as described on Wikitech here. This network has only been running IPv4 since it was created.

In order to support the overall IPv6 transition, and specifically to enable cloud services to be offered internally/externally on IPv6 we need to upgrade this network by adding v6 addressing to all interfaces and enabling routing protocols appropriate for v6. The steps are as follows:

  • Allocate IPv6 address ranges (public & private) for use by cloud services in eqiad
  • Update RIR records, RPKI ROAs to list the new public range
  • Announce the public range to our upstream BGP transit and peers from eqiad
  • Assign IP addressing in Netbox to all cloud-vrf interfaces and add reverse DNS snippets in dns repo
  • Configure the IPv6 addresses on all cloudsw interfaces belonging to the cloud vrf
  • Enable OSPF for all loopback and xlink interfaces on cloudsw belonging to the VRF
  • Enable IBGP between cloudsw devices in the cloud vrf over their loopback IPs
  • Enable EBGP between cloudsw spine devices and eqiad core routers for IPv6 SAFI
  • Add required static routes towards cloudgw for ranges used by openstack
  • Test IPv6 connectivity from cloudgw, cloudnet and other sources to validate routing works as expected
NOTE: Cloud hosts are not yet configured with their v6 IPs, so those tests could not be done in the last step.

With these steps complete the cloud team should be able to proceed with the IPv6 VXLAN migration, and also begin to look at announcing IPv6 service VIPs for the related services currently only available through IPv4.
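As an aside on the addressing/DNS step above, the reverse DNS zone name for a v6 range can be derived mechanically from the prefix. A minimal sketch using Python's stdlib ipaddress module (the /48 shown is the public cloud range mentioned later in this task):

```python
import ipaddress

def reverse_zone(prefix: str) -> str:
    """Return the ip6.arpa zone name for a nibble-aligned IPv6 prefix."""
    net = ipaddress.ip_network(prefix)
    assert net.prefixlen % 4 == 0, "reverse zones are nibble-aligned"
    # Full hex form of the network address, one DNS label per nibble
    nibbles = net.network_address.exploded.replace(":", "")
    keep = net.prefixlen // 4
    return ".".join(reversed(nibbles[:keep])) + ".ip6.arpa"

print(reverse_zone("2a02:ec80:a000::/48"))
# -> 0.0.0.a.0.8.c.e.2.0.a.2.ip6.arpa
```

This is the zone name a reverse DNS snippet in the dns repo would hang off for the allocated range.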

Related Objects

Event Timeline

cmooney triaged this task as Medium priority.

@aborrero as discussed we can possibly arrange a window for Thurs Mar 27th to carry out the remaining steps?

Unlike the previous attempt I will limit the OSPF configuration to just the cloud vrf interfaces, which, being more limited in scope, hopefully won't cause any issues. Nevertheless I think it best we are aware there is some risk here and have people available to run a few checks as the config is rolled out. Rollback can be quick if we do happen to notice any problems.

Config to be applied in first step - P74416


Yes. I have scheduled this, and will be sending an announcement to the community via email.


I will be available during the operation window, 2025-03-27 at 12:30 UTC.

Mentioned in SAL (#wikimedia-operations) [2025-03-27T12:36:27Z] <topranks> enabling IPv6 on cloudsw devices in eqiad T389958

Ok the OSPF step is complete, all the switches are running OSPFv3 in the 'cloud' routing instance and learning each other's loopback IPs. See P74464.

The next step is to enable IBGP between these loopback IPs, but not exchange any routes initially - config P74465

Mentioned in SAL (#wikimedia-operations) [2025-03-27T13:06:41Z] <topranks> adding IBGP peerings between loopbacks in cloud-vrf on cloudsw devices in eqiad T389958

There was a major network outage as a result of the operations that affected all WMCS systems, including Ceph and Toolforge kubernetes.

Just to confirm the timeline of events:

Mar 27 13:06:57: IBGP configuration committed on all 4 cloudsw, enabling IBGP in the cloud vrf between loopback IPs, configured to not import/export any routes
Mar 27 13:14:46: New IBGP configuration reverted on all 4 cloudsw

The configuration added can be seen here: P74465

Notes

IBGP established as expected, and no routes were exchanged.

cmooney@cloudsw1-c8-eqiad> show bgp summary instance cloud group cloudvrf_ibgp | find "^Peer"
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
2a02:ec80:a000:ffff::2       64710          6          4       0       0        1:22 Establ
  cloud.inet6.0: 0/0/0/0
2a02:ec80:a000:ffff::3       64710          6          4       0       0        1:20 Establ
  cloud.inet6.0: 0/0/0/0
2a02:ec80:a000:ffff::4       64710          6          4       0       0        1:22 Establ
  cloud.inet6.0: 0/0/0/0
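
The per-RIB line in that summary is "table: Active/Received/Accepted/Damped", so "0/0/0/0" confirms no routes moved over the new sessions. A small sketch of parsing that line format (the function name is mine, not a real tool):

```python
import re

def parse_rib_counts(line: str) -> dict:
    """Parse a per-RIB line from 'show bgp summary',
    e.g. '  cloud.inet6.0: 0/0/0/0'."""
    m = re.match(r"\s*(\S+):\s*(\d+)/(\d+)/(\d+)/(\d+)", line)
    if not m:
        raise ValueError(f"not a RIB summary line: {line!r}")
    table, *counts = m.groups()
    active, received, accepted, damped = map(int, counts)
    return {"table": table, "active": active, "received": received,
            "accepted": accepted, "damped": damped}

print(parse_rib_counts("  cloud.inet6.0: 0/0/0/0"))
```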

Despite this we had an almost total cessation of traffic, both in the default instance (prod realm) and the cloud vrf. BGP sessions controlling routing in those networks remained up, but it seems the changes somehow affected which routes were accepted by the switches in e4/f4 from the "spines" in c8/d5, across all VRFs and address families.

For instance this ping in the default table from the lo0.0 IPv6 IP to cloudvirt1060 (on cloud-hosts1-f4-eqiad) failed:

cmooney@cloudsw1-d5-eqiad> ping 2620:0:861:11d:10:64:149:12 source 2620:0:861:11b::253       
PING6(56=40+8+8 bytes) 2620:0:861:11b::253 --> 2620:0:861:11d:10:64:149:12
^C
--- 2620:0:861:11d:10:64:149:12 ping6 statistics ---
4 packets transmitted, 0 packets received, 100% packet loss

Route to the cloud-hosts1-f4-eqiad subnet looked ok and unchanged:

cmooney@cloudsw1-d5-eqiad> show route 2620:0:861:11d:10:64:149:12 

inet6.0: 940 destinations, 1761 routes (940 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2620:0:861:11d::/64*[BGP/170] 32w1d 20:35:19, localpref 100
                      AS path: 4264710004 I, validation-state: unverified
                    >  to 2620:0:861:fe0f::2 via irb.1111
                    [BGP/170] 5d 18:26:27, localpref 100
                      AS path: 4264710004 I, validation-state: unverified
                    >  to 2620:0:861:fe0b::1 via irb.1116

However after executing the rollback the pings began working again:

cmooney@cloudsw1-d5-eqiad# show | compare 
[edit routing-instances cloud protocols bgp]
-      group cloudvrf_ibgp {
-          type internal;
-          import NONE;
-          family inet6 {
-              unicast {
-                  prefix-limit {
-                      maximum 1000;
-                  }
-              }
-          }
-          export NONE;
-          cluster 185.15.56.253;
-          local-as 64710;
-          multipath;
-          bfd-liveness-detection {
-              minimum-interval 1000;
-          }
-          neighbor 2a02:ec80:a000:ffff::1 {
-              description cloudsw1-c8;
-              peer-as 64710;
-          }
-          neighbor 2a02:ec80:a000:ffff::3 {
-              description cloudsw1-e4;
-              peer-as 64710;
-          }
-          neighbor 2a02:ec80:a000:ffff::4 {
-              description cloudsw1-f4;
-              peer-as 64710;
-          }
-      }

{master:0}[edit]
cmooney@cloudsw1-d5-eqiad# commit 
configuration check succeeds
commit complete

{master:0}[edit]
cmooney@cloudsw1-d5-eqiad# exit 
Exiting configuration mode

{master:0}
cmooney@cloudsw1-d5-eqiad> ping 2620:0:861:11d:10:64:149:12 source 2620:0:861:11b::253                            
PING6(56=40+8+8 bytes) 2620:0:861:11b::253 --> 2620:0:861:11d:10:64:149:12
16 bytes from 2620:0:861:11d:10:64:149:12, icmp_seq=0 hlim=63 time=0.820 ms
16 bytes from 2620:0:861:11d:10:64:149:12, icmp_seq=1 hlim=63 time=30.562 ms

The routing table for the destination remains unchanged:

cmooney@cloudsw1-d5-eqiad> show route 2620:0:861:11d:10:64:149:12 

inet6.0: 940 destinations, 1761 routes (940 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2620:0:861:11d::/64*[BGP/170] 32w1d 21:27:54, localpref 100
                      AS path: 4264710004 I, validation-state: unverified
                    >  to 2620:0:861:fe0f::2 via irb.1111
                    [BGP/170] 5d 19:19:02, localpref 100
                      AS path: 4264710004 I, validation-state: unverified
                    >  to 2620:0:861:fe0b::1 via irb.1116
Problems in E4/F4

I believe the issue may have been on the "other side" of this however: on the switch in rack F4 (the other side of irb.1111), affecting its route back to this source IP.

Examining the logs we can see that right after the new BGP sessions come up it starts complaining of having no route in the main table:

Mar 27 13:07:12  cloudsw1-f4-eqiad bfdd[9097]: BFDD_TRAP_MHOP_STATE_UP: local discriminator: 48, new state: up, peer addr: 2a02:ec80:a000:ffff::1
Mar 27 13:07:13  cloudsw1-f4-eqiad bfdd[9097]: BFDD_TRAP_MHOP_STATE_UP: local discriminator: 49, new state: up, peer addr: 2a02:ec80:a000:ffff::2
Mar 27 13:07:17  cloudsw1-f4-eqiad bfdd[9097]: BFDD_TRAP_MHOP_STATE_UP: local discriminator: 50, new state: up, peer addr: 2a02:ec80:a000:ffff::3
Mar 27 13:07:53  cloudsw1-f4-eqiad jdhcpd: DH_SVC_SENDMSG_FAILURE: sendmsg() from 10.64.149.1 to port 67 at 208.80.154.74 via interface 579 and outgoing routing instance default failed: No route to host

The last line is the system saying it has no route to our install server in the main table. I was not able to check this while we had problems, but now, with things working, this looks as expected. On cloudsw1-f4-eqiad the route is learnt via EBGP from cloudsw1-c8-eqiad and cloudsw1-d5-eqiad, which in turn receive it from the core routers:

cmooney@cloudsw1-f4-eqiad> show route table inet.0 208.80.154.74 terse           

inet.0: 1140 destinations, 2276 routes (1140 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

A V Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
* ? 208.80.154.64/26   B 170        100                             64710 14907 I
  unverified                                        10.64.147.2
                                                   >10.64.147.6
  ?                    B 170        100                             64710 14907 I
  unverified                                       >10.64.147.6

Those BGP sessions on cloudsw1-f4-eqiad have been stable:

cmooney@cloudsw1-f4-eqiad> show bgp summary group prod_ebgp4 | match "^[0-9]|^Peer" 
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
10.64.147.2           64710     599741     601558       0     110 27w1d 22:12:42 Establ
10.64.147.6           64710     709363     711754       0      93 32w1d 21:38:14 Establ

On cloudsw1-d5-eqiad the route has been learnt from the core routers for 18 weeks:

cmooney@cloudsw1-d5-eqiad> show route 208.80.154.64/26 detail 

inet.0: 1144 destinations, 2171 routes (1144 active, 0 holddown, 0 hidden)
208.80.154.64/26 (2 entries, 1 announced)
        *BGP    Preference: 170/-101
                Next hop type: Router, Next hop index: 1985
                Address: 0x8bb8748
                Next-hop reference count: 1683
                Source: 10.64.147.14
                Next hop: 10.64.147.14 via xe-0/0/0.1100, selected
                Session Id: 0
                State: <Active Ext>
                Local AS: 64710 Peer AS: 14907
                Age: 18w6d 18:43:12 	Metric: 0 
                Validation State: unverified 
                Task: BGP_14907.10.64.147.14
                Announcement bits (3): 0-KRT 3-BGP_RT_Background 4-Resolve tree 1 
                AS path: 14907 I 
                Accepted
                Localpref: 100
                Router ID: 208.80.154.197

HOWEVER, on the leaf switches the route has only been known since the config was reverted an hour ago (shown here on cloudsw1-e4-eqiad):

cmooney@cloudsw1-e4-eqiad> show route 208.80.154.64/26 detail 

inet.0: 1140 destinations, 2276 routes (1140 active, 0 holddown, 0 hidden)
208.80.154.64/26 (2 entries, 1 announced)
        *BGP    Preference: 170/-101
                Next hop type: Router, Next hop index: 0
                Address: 0xd893b04
                Next-hop reference count: 1133
                Source: 10.64.147.0
                Next hop: 10.64.147.0 via irb.1108
                Session Id: 0x0
                Next hop: 10.64.147.4 via irb.1110, selected
                Session Id: 0x0
                State: <Active Ext>
                Local AS: 4264710003 Peer AS: 64710
                Age: 1:00:49 
                Validation State: unverified 
                Task: BGP_64710.10.64.147.0
                Announcement bits (2): 0-KRT 4-BGP_RT_Background 
                AS path: 64710 14907 I 
                Accepted Multipath
                Localpref: 100
                Router ID: 10.64.146.252
Missing routes?

So did the added config - in the cloud vrf, for IPv6, with no routes exchanged - somehow prevent cloudsw1-d5-eqiad from sending this route? Or cloudsw1-f4-eqiad from accepting it? In the main table, in IPv4, which shouldn't be affected by anything in the cloud vrf?

It seems so. We see the same pattern for the default route and all others:

cmooney@cloudsw1-f4-eqiad> show route 0.0.0.0/0 detail                                  

inet.0: 1140 destinations, 2276 routes (1140 active, 0 holddown, 0 hidden)
0.0.0.0/0 (2 entries, 1 announced)
        *BGP    Preference: 170/-101
                Next hop type: Router, Next hop index: 0
                Address: 0xd893a78
                Next-hop reference count: 1133
                Source: 10.64.147.2
                Next hop: 10.64.147.2 via irb.1109, selected
                Session Id: 0x0
                Next hop: 10.64.147.6 via irb.1111
                Session Id: 0x0
                State: <Active Ext>
                Local AS: 4264710004 Peer AS: 64710
                Age: 1:08:43 
                Validation State: unverified 
                Task: BGP_64710.10.64.147.2
                Announcement bits (2): 0-KRT 4-BGP_RT_Background 
                AS path: 64710 14907 I 
                Accepted Multipath
                Localpref: 100
                Router ID: 10.64.146.252
cmooney@cloudsw1-f4-eqiad> show route table inet.0 protocol bgp detail | match "Age" 
                Age: 1:07:49 
                Age: 1:07:49 
                Age: 1:07:49 
[...the same "Age: 1:07:49" repeated for every BGP route in the table...]

Our BGP graphs suggest that cloudsw1-d5-eqiad sent the same number of IPv4 routes to cloudsw1-f4-eqiad in the production realm throughout:

image.png (728×1 px, 162 KB)

We can also see that cloudsw1-f4-eqiad continued to receive these route announcements, however it stopped accepting them for some reason.

image.png (728×1 px, 171 KB)

The question remains: why would it stop accepting these routes? Could the addition of a new BGP group, in a different routing instance, change the policy for this? Or did something else change whereby it thought the next-hop was invalid? Seems very odd.

Conclusion

At least we know why we had problems forwarding, and can be more specific about the issue. Namely:

  • For some reason cloudsw1-e4 and cloudsw1-f4 stopped accepting EBGP routes they were learning from cloudsw1-c8 and cloudsw1-d5
  • In both vrfs, and for both IP versions, despite the changes only being to cloud IPv6
  • This prevented all comms to E4/F4 (roughly half the cloud hosts offline)
  • Removing the new IBGP group in the cloud vrf caused them to accept the routes again, restoring comms

Right now I am not sure how to progress. I think it's too risky to re-attempt this without properly understanding how or why this could occur. The only way I can think to progress would be to replicate the cloudsw infra precisely in a lab, try to reproduce the issue there, and then spend time troubleshooting in that environment to understand why these routes are rejected.

@aborrero @taavi one thing we could maybe try, if we wanted to make progress sooner (i.e. without replicating the setup elsewhere):

  • Add static default routes in all VRFs on the switches in E4/F4, with next-hop going to C8/D5
    • Basically a static route mirroring what they learn in BGP
  • Re-apply the config, expecting those switches will stop accepting BGP routes
    • The statics should take over and be used, keeping traffic moving

I won't say it's entirely without risk, given our experience today and last week. But it is maybe a way to reproduce the issue without affecting traffic, provided there are no more bugs / things I haven't properly considered.

Otherwise I'm happy to work on replicating it elsewhere and not taking any more chances.
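
The fallback in the plan above relies on Junos protocol preference: a static route (default preference 5) is always preferred over a BGP route (170), so forwarding continues whether or not the BGP routes are accepted. A simplified model of that selection (preference values are Junos defaults; the function is illustrative only):

```python
# Junos-style protocol preference: lower value wins.
PREFERENCE = {"direct": 0, "static": 5, "ospf": 10, "bgp": 170}

def best_route(candidates):
    """Pick the active route among (protocol, next_hop) candidates."""
    return min(candidates, key=lambda r: PREFERENCE[r[0]])

# Static default mirroring the BGP-learnt one, next-hop towards C8/D5:
rib = [("static", "10.64.147.2"), ("bgp", "10.64.147.2")]
print(best_route(rib))   # -> ('static', '10.64.147.2')

# If the BGP route is rejected/hidden, the static still forwards traffic:
rib = [r for r in rib if r[0] != "bgp"]
print(best_route(rib))   # -> ('static', '10.64.147.2')
```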


Yes, let's try with the static routes. Thanks!


Thanks Arturo - can we arrange a window for this? I can do it any day (tomorrow is an option).

My basic plan is here: P74565. The idea is to focus on rack F4, covering the gaps we could have if the problem occurs again by adding static routes in advance. Worst-case scenario, given we've isolated the scope of last week's problems, we should detect any issue quickly and revert in a hurry.


Yes, tomorrow 2025-04-03 @ 11:30 UTC we can have a window (24h from now). I'll send an announcement to the community.

Ok so we followed the steps as outlined above and were able to bring up IBGP between cloudsw1-c8-eqiad and cloudsw1-f4-eqiad in the cloud vrf and inet6 address family. The same situation occurred, with cloudsw1-f4-eqiad rejecting the EBGP routes it was receiving, but because we had the static routes in place traffic was unaffected.

This time we were able to see the stated reason for the route rejection, for instance in the prod realm IPv4:

cmooney@cloudsw1-f4-eqiad> show route receive-protocol bgp 10.64.147.2 hidden detail    
Apr 03 12:44:47

inet.0: 1139 destinations, 2275 routes (9 active, 0 holddown, 2266 hidden)
  0.0.0.0/0 (3 entries, 1 announced)
     Nexthop: 10.64.147.2
     AS path: 64710 14907 I  (Looped: 64710) 
     Hidden reason: AS path loop

So with these two commands added on the switch:

set routing-instances cloud protocols bgp group cloudvrf_ibgp local-as 64710
set routing-instances cloud protocols bgp group cloudvrf_ibgp neighbor 2a02:ec80:a000:ffff::1 peer-as 64710

It seems the device as a whole considers 64710 to be its own AS, and rejects any EBGP route containing it, even in other routing instances. I tried allowing loops on the EBGP session that was rejecting them:

set protocols bgp group prod_ebgp4 local-as 4264710004 loops 1

But the output was exactly the same. This seems to be a limitation, bordering on a bug, in Juniper's routing-instance implementation, where configuring a local-as in one instance leads to the system acting as if it were using that same AS in all of them.
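
To illustrate the behaviour observed (this is a toy model of standard EBGP AS-path loop detection, not Juniper's actual implementation): the loop check appears to compare the received AS_PATH against every AS the box considers local, across routing instances, rather than just the local-as of the receiving session.

```python
def hidden_by_loop(as_path: list[int], local_ases: set[int]) -> bool:
    """AS-path loop check: hide the route if any AS in the received
    path matches an AS the device considers its own."""
    return any(asn in local_ases for asn in as_path)

# Before the change: only the per-session local-as is "ours",
# so "64710 14907" from the spines is accepted.
print(hidden_by_loop([64710, 14907], {4264710004}))          # False

# After adding 'local-as 64710' in the cloud instance, the device-wide
# loop check seems to treat 64710 as local too, hiding the route.
print(hidden_by_loop([64710, 14907], {4264710004, 64710}))   # True
```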

Either way, it's clear that we cannot migrate by adding an IBGP peering in the cloud VRF in parallel with EBGP in the production VRF. I think the best way forward is to make use of the fact we are currently routing with static routes, tear down the EBGP sessions completely, and rebuild everything as IBGP. That's the ultimate plan anyway, but we'll need to do a "big bang" rather than a steady migration. The presence of the statics allows that, though, so it should be fine.

Change #1134234 had a related patch set uploaded (by Cathal Mooney; author: Cathal Mooney):

[operations/homer/public@master] Cloudsw: adjust routing-policies to reflect change to IBGP

https://gerrit.wikimedia.org/r/1134234

Mentioned in SAL (#wikimedia-operations) [2025-04-07T11:25:58Z] <topranks> enable EBGP between cr1-eqiad and cloudsw1-c8-eqiad (IPv6 / cloud vrf) T389958

Mentioned in SAL (#wikimedia-operations) [2025-04-07T11:38:35Z] <topranks> enable EBGP between cr2-eqiad and cloudsw1-d5-eqiad (IPv6 / cloud vrf) T389958

Mentioned in SAL (#wikimedia-operations) [2025-04-07T12:32:23Z] <topranks> cloudsw1-c8-eqiad: add routes for WMCS OpenStack IPv6 aggregate to cloudgw VIP T389958

Mentioned in SAL (#wikimedia-operations) [2025-04-07T12:35:51Z] <topranks> cloudsw1-d5-eqiad: add routes for WMCS OpenStack IPv6 aggregate to cloudgw VIP T389958

Thankfully everything is now in place for this, after a few little blips along the way.

The issues we had Thursday were due to the IBGP route reflection re-writing next-hops when it should not, causing routes to flap and routing loops between c8 and d5 for traffic in racks e4/f4. This has been addressed and a patch submitted (see here). The static routes that were in place to "cover" the BGP ranges while the config was adjusted have now been removed and all is stable.

Additionally the aggregate config and static routes towards the cloudgw VIPs are in place, and we are announcing 2a02:ec80:a000::/48 to the internet as a result. Routing looks fine and should enable the configuration of ranges in OpenStack when we want to proceed.
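
As a quick sanity check, the cloud-vrf addresses that appear earlier in this task (loopbacks and transfer links) should all fall inside the announced /48. A small check with Python's ipaddress module:

```python
import ipaddress

announced = ipaddress.ip_network("2a02:ec80:a000::/48")

# Addresses seen in the pastes above: loopbacks and link/irb interfaces
samples = [
    "2a02:ec80:a000:ffff::2",   # cloudsw loopback
    "2a02:ec80:a000:fe02::2",   # xe-0-0-0-1103.cloudsw1-d5-eqiad
    "2a02:ec80:a000:fe05::1",   # irb-1104.cloudsw1-c8-eqiad
]

for addr in samples:
    print(addr, ipaddress.ip_address(addr) in announced)
# All three print True
```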

cathal@officepc:~$ mtr -z -b -w -c 10 2a02:ec80:a000::1 
Start: 2025-04-07T13:47:55+0100
HOST: officepc                                                                               Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. AS5466   pool-ipv6-pd.agg1.srl.blp-srl.eir.ie (2001:bb6:8b70:9e00::1)                    0.0%    10    0.5   0.5   0.2   0.6   0.1
  2. AS5466   agg1.srl.blp-srl.eircom.net (2001:bb0:6:a11d::1)                                0.0%    10   13.0  12.3   4.8  22.5   6.8
  3. AS5466   2001:bb0:6:a197::1                                                              0.0%    10    5.1   5.0   4.6   5.3   0.2
  4. AS1299   dln-b3-link.ip.twelve99.net (2001:2035:0:9eb::1)                               30.0%    10    4.7   4.9   4.7   5.3   0.2
  5. AS1299   dln-b4-v6.ip.twelve99.net (2001:2034:0:201::1)                                  0.0%    10    5.3   5.5   5.0   6.3   0.4
  6. AS1299   man-b2-v6.ip.twelve99.net (2001:2034:0:13::1)                                   0.0%    10   91.2  91.1  89.3  92.2   0.7
  7. AS1299   ldn-bb2-v6.ip.twelve99.net (2001:2034:1:ca::1)                                 40.0%    10   16.1  15.6  15.2  16.1   0.3
  8. AS???    ???                                                                            100.0    10    0.0   0.0   0.0   0.0   0.0
  9. AS???    ???                                                                            100.0    10    0.0   0.0   0.0   0.0   0.0
 10. AS1299   ash-b2-link.ip.twelve99.net (2001:2035:0:a98::1)                                0.0%    10   89.3  89.5  89.0  90.5   0.6
 11. AS1299   wikimedia-ic-308845.ip.twelve99-cust.net (2001:2035:0:a98::2)                   0.0%    10   88.7  88.6  88.2  89.8   0.6
 12. AS???    xe-0-0-0-1103.cloudsw1-d5-eqiad.wikimedia.org (2a02:ec80:a000:fe02::2)          0.0%    10   89.0  91.3  88.6 113.4   7.7
 13. AS???    irb-1104.cloudsw1-c8-eqiad.eqiad1.wikimediacloud.org (2a02:ec80:a000:fe05::1)   0.0%    10   86.9  89.6  86.7 114.1   8.6

Change #1134234 merged by jenkins-bot:

[operations/homer/public@master] Cloudsw: adjust routing-policies to reflect change to IBGP

https://gerrit.wikimedia.org/r/1134234