Description

In T140991 Faidon did a quick check of available ports via remote console, and it looks like there may be several unused/unconfigured ports. I think some of the confusion may be due to the second NIC on several hosts being plugged into the opposite pfw from the primary NIC for failover, but at this point I don't know where to find that information. Please survey what's physically attached so we can figure out whether we have free ports.
Status | Assigned | Task
---|---|---
Resolved | None | T182030 Scope contribution tracking
Duplicate | None | T158009 [Epic] Contribution tracking reform
Resolved | Ejegg | T86253 Make Contribution Tracking not a SPOF
Open | None | T119813 Make contribution_source into a proper thing or retire
Resolved | Ejegg | T119556 [epic] SPOF: Use Redis as backend store for contribution_tracking
Declined | Jgreen | T120464 Deploy Redis 3 to frack
Resolved | None | T117466 Q3 GOALS! (January-March) Keep at top of Q3 column
Resolved | None | T108229 [Epic] SPOF: Replace ActiveMQ donation queues with a more robust software stack
Resolved | Jgreen | T130283 Provision Redis cluster for Fundraising
Resolved | Jgreen | T133524 frack eqiad hardware refresh
Resolved | Jgreen | T137150 replace silicon & aluminium with new hardware running jessie
Resolved | Jgreen | T140991 put pfw1- ge-2/0/11 in the 'fundraising' vlan for new host frqueue1001
Resolved | Jgreen | T141363 Survey available/unused ports on eqiad pfw's
Event Timeline
I did a check on all ports and verified each one.
pfw1
0 -> indium
1 -> payments1001
2 -> payments1003
3 -> pay-lvs1001
4 -> pay-lvs1001 eth2 (doesn’t appear to be active)
5 -> silicon
6 -> tellurium
7 -> barium
8 -> lutetium
9 -> db1008
10 -> americium
11 -> (empty)
12 -> pfw2 0/12
13 -> pfw2 0/13
14 -> pfw2 0/14
15 -> pfw2 0/15
pfw2
0 -> payments1002
1 -> payments1004
2 -> pay-lvs1002
3 -> pay-lvs1002 eth2
4 -> boron
5 -> samarium
6 -> db1025
7 -> thulium
8 -> bismuth
9 -> frqueue1002
10 -> berrylium
11 -> frqueue1001
12 -> pfw1
13 -> pfw1
14 -> pfw1
15 -> pfw1
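For reference, a port-to-host survey like the one above can be cross-checked from the SRX CLI. The commands below are standard Junos operational commands, shown as a sketch (exact output format varies by release, and the MAC table is only available when ports are in L2 switching mode):

```
show interfaces terse ge-2/0/*      # admin/link status of each GPIM port on node0
show interfaces descriptions        # any configured per-port descriptions
show ethernet-switching table       # learned MAC addresses, to map live ports to hosts
```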
OK, did a little more investigation.
pfw-eqiad is a cluster of two SRX650s, each with 4x1Gbps built-in ports, 16x1Gbps in a GPIM and 2x10G in an XPIM.
The SRX platform in cluster mode (two nodes) reserves a certain number of ports for a) management, b) inter-chassis control, c) fabric, and d) switch fabric (yes, it's confusing). The assignment of which and how many ports are used for (a) and (b) is not configurable, but for (c) and (d) it is.
Chris' report above is for the 16x1Gbps card. The built-in 4x1G ports are:
- ge-0/0/0 -> node0's fxp0 (management port), connected to a mgmt switch
- ge-0/0/1 -> node0's control port, connected to ge-9/0/1
- ge-0/0/2 -> FREE (usually reserved for fabric port, but we use another port for that)
- ge-0/0/3 -> FREE (usually reserved for fabric port, but we use another port for that)
- ge-9/0/0 -> node1's fxp0 (management port), connected to a mgmt switch
- ge-9/0/1 -> node1's control port, connected to ge-0/0/1
- ge-9/0/2 -> FREE (usually reserved for fabric port, but we use another port for that)
- ge-9/0/3 -> FREE (usually reserved for fabric port, but we use another port for that)
The 10G XPIM ports (two per node) are:
- xe-6/0/0 -> link to cr1-eqiad
- xe-6/0/1 -> fab0 (fabric port), connected to xe-15/0/1
- xe-15/0/0 -> link to cr2-eqiad
- xe-15/0/1 -> fab1 (fabric port), connected to xe-6/0/1
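The fabric-port assignment above corresponds to configuration along these lines. This is a sketch of the standard Junos chassis-cluster syntax, not copied from the live config:

```
# fab0 is node0's fabric interface, fab1 is node1's
set interfaces fab0 fabric-options member-interfaces xe-6/0/1
set interfaces fab1 fabric-options member-interfaces xe-15/0/1
```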
Chris said above that ports 12-13-14-15 are connected between the two SRXes (so ge-2/0/12 <-> ge-11/0/12 and so forth). Ports 12-13 do indeed serve as the switch fabric ports (swfab0/swfab1 for each node respectively).
Ports 14-15, however, are *not* configured as switch fabric ports, or for any other role for that matter. They are connected and up, but as far as I can see they are not being used. We could add them to the bundle and expand the switch fabric from 2Gbps full-duplex to 4Gbps, or we could repurpose them as server ports (and thus gain 4 additional ports for servers), or do half and half (upgrade to 3Gbps and gain 2 additional server ports).
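If we decide to fold ports 14-15 into the switch fabric, the change would look roughly like the following. This is a sketch using the standard swfab syntax for branch-SRX clusters (swfab0 holds node0's member ports, swfab1 node1's); port numbers assume the ge-2/0/x (node0) and ge-11/0/x (node1) naming used elsewhere in this task:

```
set interfaces swfab0 fabric-options member-interfaces ge-2/0/14
set interfaces swfab0 fabric-options member-interfaces ge-2/0/15
set interfaces swfab1 fabric-options member-interfaces ge-11/0/14
set interfaces swfab1 fabric-options member-interfaces ge-11/0/15
```

Repurposing them as server ports instead would just mean removing them from any fabric bundle and configuring them like the other host-facing GPIM ports.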