
Survey available/unused ports on eqiad pfw's
Closed, Resolved · Public

Description

In T140991 Faidon did a quick check of available ports via remote console and it looks like there may be several unused/unconfigured ports. I think some of the confusion may be due to plugging in the second NIC on several hosts to the opposite PFW from the primary NIC for failover, but at this point I don't know where to find that information. Please survey what's physically attached so we can figure out whether we have free ports.

Event Timeline

I did a check on all ports and verified each one.

pfw1
0 -> indium
1 -> payment1001
2 -> payment1003
3 -> pay-lvs1001
4 -> pay-lvs1001 eth2 (doesn’t appear to be active)
5 -> silicon
6 -> tellurium
7 -> barium
8 -> lutetium
9 -> db1008
10 -> americium
11 -> (empty)
12 -> pfw2 0/12
13 -> pfw2 0/13
14 -> pfw2 0/14
15 -> pfw2 0/15

pfw2
0 -> payments1002
1 -> payments1004
2 -> pay-lvs1002
3 -> pay-lvs1002 eth2
4 -> boron
5 -> samarium
6 -> db1025
7 -> thulium
8 -> bismuth
9 -> frqueue1002
10 -> beryllium
11 -> frqueue1001
12 -> pfw1 0/12
13 -> pfw1 0/13
14 -> pfw1 0/14
15 -> pfw1 0/15

OK, did a little more investigation.

pfw-eqiad is a cluster of two SRX650s, each with 4x1Gbps built-in ports, 16x1Gbps in a GPIM and 2x10G in an XPIM.

The SRX platform in cluster mode (two nodes) reserves a certain number of ports for a) management, b) inter-chassis control, c) fabric, and d) switch fabric (yes, it's confusing). Which ports are used for (a) and (b), and how many, is not configurable, but for (c) and (d) it is.

Chris' report above is for the 16x1Gbps card. The built-in 4x1G ports are:

  • ge-0/0/0 -> node0's fxp0 (management port), connected to a mgmt switch
  • ge-0/0/1 -> node0's control port, connected to ge-9/0/1
  • ge-0/0/2 -> FREE (usually reserved for fabric port, but we use another port for that)
  • ge-0/0/3 -> FREE (usually reserved for fabric port, but we use another port for that)
  • ge-9/0/0 -> node1's fxp0 (management port), connected to a mgmt switch
  • ge-9/0/1 -> node1's control port, connected to ge-0/0/1
  • ge-9/0/2 -> FREE (usually reserved for fabric port, but we use another port for that)
  • ge-9/0/3 -> FREE (usually reserved for fabric port, but we use another port for that)

The 10G XPIM linecard ports (two per node) are:

  • xe-6/0/0 -> link to cr1-eqiad
  • xe-6/0/1 -> fab0 (fabric port), connected to xe-15/0/1
  • xe-15/0/0 -> link to cr2-eqiad
  • xe-15/0/1 -> fab1 (fabric port), connected to xe-6/0/1
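
For reference, fabric links like the ones above are normally defined with the SRX fabric-options stanza. This is only a sketch based on the port assignments listed here, not a dump of the actual pfw-eqiad config:

  set interfaces fab0 fabric-options member-interfaces xe-6/0/1
  set interfaces fab1 fabric-options member-interfaces xe-15/0/1

fab0 belongs to node0 and fab1 to node1, which is why each points at its own node's XPIM port.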

Chris noted above that ports 12, 13, 14 and 15 are connected between the two SRXes (so ge-2/0/12 <-> ge-11/0/12 and so forth). Ports 12-13 do indeed serve as the switch fabric ports (swfab0/swfab1 for each node respectively).

Ports 14-15, however, are *not* configured as switch fabric members, or in any other role for that matter. They are connected and up, but as far as I can see they are not being used. We could add them to the bundle and expand the switch fabric from 2Gbps full-duplex to 4Gbps, we could reuse them as server ports (and thus gain 4 additional server ports), or we could do half and half (expand to 3Gbps and gain 2 additional server ports).
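
If we went with the fabric-expansion option, the change would look something like the following. This is a sketch only, assuming the existing swfab0/swfab1 bundles are built from ports 12-13 as described above; it is not taken from the live config:

  # current members per the survey: ge-2/0/12-13 on node0, ge-11/0/12-13 on node1
  set interfaces swfab0 fabric-options member-interfaces ge-2/0/14
  set interfaces swfab0 fabric-options member-interfaces ge-2/0/15
  set interfaces swfab1 fabric-options member-interfaces ge-11/0/14
  set interfaces swfab1 fabric-options member-interfaces ge-11/0/15

Each additional connected 1G pair (one port per node) adds 1Gbps of inter-node switch fabric capacity, so adding both pairs takes us from 2Gbps to 4Gbps; adding only port 14 on each node (and freeing port 15 for servers) would be the half-and-half option.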

Jgreen claimed this task.