
IPv6 BFD Sessions Failing from Bird (Anycast VMs) to Juniper QFX in drmrs
Closed, ResolvedPublic


Just creating this task to document an issue that's been observed between the L3 access switches in drmrs and the doh/durum VMs there (running Bird daemon for BGP and BFD).

BGP has established between the VMs and the top-of-rack switches, but the BFD sessions over IPv6 remain down. Because those sessions have never been up and then transitioned to down, the BGP sessions keep working, so this is not an operational issue.

Checking the switch shows it has decided that these sessions should be multi-hop BFD:

cmooney@asw1-b13-drmrs> show bfd session address 2a02:ec80:600:102:10:136:1:23 extensive | match "Session type" 
 Session type: Multi hop BFD

A PCAP confirms this: the switch is sending packets to destination UDP port 4784 (multihop BFD), whereas the VMs are sending to port 3784 (single-hop BFD).
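The two well-known ports correspond to the two BFD flavours: single-hop BFD (RFC 5881) uses UDP destination port 3784, and multihop BFD (RFC 5883) uses 4784. A minimal sketch of that convention (the function name is mine, not from any tool in use here):

```python
# BFD UDP destination ports: RFC 5881 (single-hop) vs RFC 5883 (multihop).
BFD_SINGLE_HOP_PORT = 3784
BFD_MULTI_HOP_PORT = 4784

def classify_bfd(dst_port: int) -> str:
    """Classify a BFD control packet by its UDP destination port."""
    if dst_port == BFD_SINGLE_HOP_PORT:
        return "single-hop"
    if dst_port == BFD_MULTI_HOP_PORT:
        return "multihop"
    return "not-bfd"

# The mismatch seen in the PCAP: the switch targets 4784, the VMs 3784.
print(classify_bfd(4784))  # multihop
print(classify_bfd(3784))  # single-hop
```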

This is not occurring with the BFD sessions to the same VMs over IPv4. The suspicion is that the switches default to multi-hop because the link-local and global unicast IPs of the two peers are on different subnets, unlike the v4 case, where both sides use IPs on the same subnet.
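For context, the Bird side typically declares single-hop BFD per interface. A hedged sketch of roughly what the VM config could look like (interface pattern, intervals, peer address, and ASNs are all illustrative, not the actual doh/durum config):

```
protocol bfd {
    interface "ens*" {              # match the VM's uplink interface(s)
        min rx interval 300 ms;
        min tx interval 300 ms;
    };
}

protocol bgp tor {
    local as 64605;                   # hypothetical local ASN
    neighbor fe80::1%ens13 as 64700;  # hypothetical link-local ToR peer
    bfd on;                           # tie the BGP session to BFD liveness
}
```

Because the neighbor here is on-link, Bird runs this session in single-hop mode and sends to port 3784.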

Forcing the sessions to single-hop mode as shown below causes them to start working:

cmooney@asw1-b12-drmrs> show configuration protocols bgp group Anycast6 bfd-liveness-detection 
minimum-interval 300;
session-mode single-hop;
cmooney@asw1-b12-drmrs> show bfd session | match ^2a02                                            
2a02:ec80:600:1:185:15:58:11 Up    irb.611        0.900     0.300        3   
2a02:ec80:600:101:10:136:0:21 Up   irb.621        0.900     0.300        3
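In set form, the workaround applied on the switch is simply (same group name as above):

```
set protocols bgp group Anycast6 bfd-liveness-detection session-mode single-hop
```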

We will need to consider this in light of T209989, however, in terms of what the generic/global config should be here.

As all our peerings are single-hop, in that the VMs running Bird and the router/switch are always L2-adjacent, it makes sense for the sessions to be single-hop. But there is probably some nuance to the TTL/Hop Limit being set on either side. In drmrs both sides appear to set it to 255, so we don't see problems similar to those in the task referenced above.
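The TTL/Hop Limit matters because single-hop BFD applies a GTSM-style check: per RFC 5881, control packets must be sent with TTL/Hop Limit 255 and discarded on receipt if the value is anything less, which proves the peer is directly connected. A sketch of that receive-side check:

```python
REQUIRED_HOP_LIMIT = 255  # RFC 5881: single-hop BFD packets must arrive with 255

def accept_single_hop_bfd(hop_limit: int) -> bool:
    """Accept a single-hop BFD control packet only if its TTL/Hop Limit
    is still 255, i.e. it cannot have crossed a router."""
    return hop_limit == REQUIRED_HOP_LIMIT

print(accept_single_hop_bfd(255))  # True: packet came from a directly connected peer
print(accept_single_hop_bfd(254))  # False: packet was forwarded at least once
```

Since both sides in drmrs send with 255, this check passes once the session mode matches.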

Event Timeline

cmooney created this task.

For comparison's sake, the sessions from cr1-codfw to doh2001 are up, and are using multi-hop mode. These are similar to drmrs in that one side uses a link-local address and the other a global unicast address. Both sides are sending with a TTL of 255.

I'm unsure what exactly the difference is here; possibly the code on MX vs. QFX, or the JunOS version, behaves slightly differently.

Thinking about this further, I think it works from the CRs because the peering is from the local public/private subnet to the loopback IP of the CRs.

The loopback IP of the CRs is not on-link, so Bird uses multi-hop BFD in that case and the session establishes. In drmrs, Bird understandably sees the link-local peer for the session as on-link, and so does single-hop.
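The on-link decision described above can be sketched roughly as follows (a simplification of the heuristic, not Bird's actual code; the addresses in the examples are illustrative):

```python
from ipaddress import ip_address, ip_network

def bfd_mode_guess(neighbor: str, connected_prefixes: list[str]) -> str:
    """Guess which BFD mode a daemon would pick for a BGP neighbor:
    link-local or directly connected peers get single-hop, anything
    routed (e.g. a CR loopback) gets multihop."""
    addr = ip_address(neighbor)
    if addr.is_link_local:
        return "single-hop"
    if any(addr in ip_network(p) for p in connected_prefixes):
        return "single-hop"
    return "multihop"

# drmrs ToR case: link-local peer -> single-hop (port 3784)
print(bfd_mode_guess("fe80::1", []))
# codfw CR case: off-link loopback -> multihop (port 4784)
print(bfd_mode_guess("2a02:ec80::1", ["2a02:ec80:600:1::/64"]))
```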

I'm not sure it's worth reworking the Anycast config for the CRs to peer with something other than the site loopback, which would allow us to force "single-hop" mode there too.

So probably the best option here is to modify our automation to configure the "single-hop" command in the Anycast6 group on L3 switches, but leave it out on CR routers.
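That would leave explicit, differing session modes per platform. A sketch of the intended end state (group name taken from above; the device split is the one proposed here, not yet the merged config):

```
# L3 access switches (QFX): Bird peers are directly connected
set protocols bgp group Anycast6 bfd-liveness-detection session-mode single-hop

# CR routers (MX): Bird peers with the site loopback, which is off-link
set protocols bgp group Anycast6 bfd-liveness-detection session-mode multihop
```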

@ayounsi interested in your thoughts/suggestions here.

Thanks for documenting it, and yes, I fully agree.

We have BGP configured to the core routers' loopbacks in many different locations, so it's not wise to start changing that.

I didn't know about session-mode single-hop, but it seems like a clean way to solve the issue (or work around the device limitation).

Change 839634 had a related patch set uploaded (by Cathal Mooney; author: Cathal Mooney):

[operations/homer/public@master] Add explicit BFD session mode (single/multi-hop) to Anycast groups

The diff if the above patch is merged (running from my laptop with the updated template):

Changes for 8 devices: ['', '', '', '', '', '', '', '']

[edit protocols bgp group Anycast4 bfd-liveness-detection]
+     session-mode multihop;
[edit protocols bgp group Anycast6 bfd-liveness-detection]
+     session-mode multihop;

Changes for 1 devices: ['']

[edit protocols bgp group Anycast6 bfd-liveness-detection]
+     session-mode multihop;

Change 839634 merged by jenkins-bot:

[operations/homer/public@master] Add explicit BFD session mode (single/multi-hop) to Anycast groups

The change has been applied across all routers now, so hopefully this is the last we'll see of this kind of issue.