Thu, Jul 2
So - how do we make progress here? Any thoughts on who/how? :) Some of these features could really make a tremendous amount of difference to our network operations and future planning, so I'm super excited about seeing these come to fruition!
Wed, Jul 1
I was bitten by this again today - ping!
Fri, Jun 26
Thu, Jun 25
To add to the above, I'm also wondering how difficult it would be to include AS *names* as well, e.g. coming from the MaxMind GeoIP ASN database. I think we've used that database before, maybe for pageview data? Could we perhaps use Druid lookups for this to avoid adding another (identical) dimension to the data set?
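If we went the lookup route, a Druid static map lookup keyed on AS number could map to names at query time. A minimal sketch of what such a lookup spec might look like (the AS numbers/names here are purely illustrative, not from our dataset):

```json
{
  "type": "map",
  "map": {
    "14907": "Wikimedia Foundation, Inc.",
    "15169": "Google LLC"
  }
}
```

A lookup like this could presumably be regenerated periodically from the MaxMind ASN database, so only the AS-number dimension would need to live in the datasource itself.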
Wed, Jun 24
I took a look at that list above. It's really not very actionable -- most of these are very large networks that have a restrictive settlement-free peering policy. For the few that remain, we have either established peerings already or have sent unanswered peering requests, which mostly means that they are not actively peering or we are too small for them to care about.
Thu, Jun 18
Thu, Jun 11
Jun 4 2020
This is now set up and live on SFMIX's end:
On your side please plumb 126.96.36.199/24 and 2001:504:30::ba01:4907:1/64. Usual sane BGP peering rules apply - no broadcast traffic (DHCP, CDP, etc), see https://sfmix.org/connect/guide.
We require at least one BGP session (to our looking glass), plus optional sessions to the route servers.
The looking glass is AS12276 at 188.8.131.52 and 2001:504:30::ba01:2276:1. You should announce all your routes to the looking glass, but expect no routes to be announced to you.
We'll push out configs to support these peers this evening.
Jun 3 2020
May 19 2020
Are there any updates to this task and any particular reasons it's been held up? While this was never super urgent, we're now at the ~one year mark since this was ordered and delivered to the data center. Plus I think because at the time the upgrade was imminent, we only bought support for the new switch and not the old, so we're operating with unsupported HW right now. It'd be great if this were to be completed soon. Thanks!
May 15 2020
If three ports are permanently failed, I'm not sure how we could ever trust that switch again. Perhaps it's better to do a painful but planned replacement rather than have it fail at some inconvenient time and have to rush a replacement then?
May 12 2020
I know that historically MaxMind has claimed they update the data roughly on a weekly basis, and maybe in this case it was a normal weekly update and we're just misaligned with their weeks? In any case, the current geoipupdate seems to be smart enough to checksum the existing databases and not re-download pointless duplicates, so we could probably run it more often on the puppetmasters.
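The skip-if-unchanged behavior boils down to comparing a hash of the local database against the one the server advertises before downloading. A minimal Python sketch of the idea (the function names and the choice of MD5 are illustrative assumptions, not geoipupdate's actual implementation):

```python
import hashlib

def file_md5(path: str) -> str:
    """MD5 of a local file, streamed in chunks to avoid loading it whole."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_download(local_path: str, remote_md5: str) -> bool:
    """Download only if the local copy is missing or its hash differs."""
    try:
        return file_md5(local_path) != remote_md5
    except FileNotFoundError:
        return True
```

With this kind of check, running the updater more frequently is cheap: unchanged weeks cost one hash comparison rather than a full database download.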
May 8 2020
LoA received and cross-connect task created.
Apr 30 2020
I just submitted their form.
Apr 27 2020
Interesting idea! Couple of notes:
- What do you mean by "virtual links" and Netbox not supporting them? Is that VLANs for our transports over the PtMP VPLS?
- What do you envision the difference to be between "primary" and "preferred"? (I know you said TBD, but curious :)
- It'd be interesting to see what this would look like before we start adding the fields. That may help us figure out what the right values for those fields may be. Would it make sense to list our links in a Phaste or spreadsheet or something and figure out if the output makes sense?
Apr 14 2020
I think the original intention of this will be addressed by periodic audits that we'll eventually do. I'll decline this for the reasons I mentioned above, but if anyone feels strongly about this, feel free to reopen :)
So breaking down the (very reasonable!) ask, I think there are a few different things at play here:
- Access to iDRAC/iLO so that John can e.g. look at HW status and get reports that vendors ask for. This in turn requires:
- Access to the password store. There is already a "dcops" group with the right access, so we can have John added there. Should be simple, as far as I can tell.
- Access to the mgmt IP network remotely. Right now that's firewalled to the cumin hosts, access to which ties to a bigger project (see below). However, that's perhaps an unnecessary dependency and maybe we can easily work around that (e.g. with a separate bastion for mgmt?). @MoritzMuehlenhoff, @jbond any thoughts here?
- Access to execute cumin cookbooks, like reimaging. That right now is tied to global root, which is a privilege that we can't easily grant. Fixing that limitation has been on our radar, including the PoC work that was part of our Q3 OKRs (T244840). It's definitely not there yet and it's going to take a few months to fully materialize, unfortunately.
Apr 13 2020
If I understand it correctly, this task is specifically about a box that was returned to the spare pool and then was reallocated for a new purpose but kept its old data. We should definitely wipe in those cases. I think that has been standard practice in the past, but perhaps not well-documented or applied uniformly? I'm not sure, something to dig in more for sure :)
Apr 11 2020
The master branch of operations/software/keyholder is not ready for a release at this time, so please don't tag, package or deploy it in this state. There have been a bunch of changes pending in Gerrit for about a year, plus more that I've queued up locally (because it's hard to manage dozens of dependent git commits with Gerrit…). If y'all are willing to review these I can clean them up and prepare a release; if not, then I can pick this up and make some progress. Let me know!
Apr 8 2020
Apr 3 2020
Apr 2 2020
Ah! That's awesome to hear. May I suggest resolving this (and the associated "upgrade firmware" task?) then, and reopening if we have another one of these?
Apr 1 2020
What's the latest here? I haven't heard about these crashes lately but it may just be that I missed it. Do we know more about this now?
Mar 27 2020
@wiki_willy is finalizing the end of our leasing agreement. Once that's done, we'd be the "owner" of all of those assets, and thus we can remove the "owner" field from Netbox. Reassigning to Willy to let us know when that's done :)
Mar 26 2020
Mar 19 2020
Mar 18 2020
Reopening this per IRC, and given this is a prod/WMCS task affecting prod in major ways.
Mar 17 2020
Mar 15 2020
Mar 12 2020
Oh, that sounds perfect, let's do that :) We should also try with a build with the right make flags etc. (something like TARGET=SKYLAKEX like the FAQ says). Thanks all!
Mar 11 2020
OK, so to recap, I read two concerns:
Mar 6 2020
We have one global account, migrated from a previous system. I wasn't able to find how to create individual accounts, so that will do I guess :)
Mar 3 2020
Feb 20 2020
WMCS hosts are in the production VLANs, managed by the production puppet etc. Practically speaking, we use tenants to exclude fr-tech/OIT/RIPE hosts from reports (that e.g. alert if an active host is not present in PuppetDB or vice-versa), and will likely also use it to exclude them from the in-progress IP assignment/bootstrapping work. If we were to assign a tenant to those hosts, we'd have to special-case it pretty much everywhere to treat it like the "production" tenant (which is now the "null" tenant).
Feb 17 2020
Feb 13 2020
Feb 12 2020
First off: I have prototype code that supports UDP Echo and SSE, but not Kafka. It's not fully ready or tested yet. This has been developed over weekends/holidays etc., as a fun project -- and I can't promise I'll find spare time to add more stuff to it right now. Someone that can commit to it (staff or volunteer) should pick it up at some point and maybe also add Kafka in the process. We still have an open item and pending conversation on where ownership for the service itself lies.
The way this works now is that the entire MW fleet sends UDP packets to a specific IP (kraz) using the so-called "echo" protocol (= #channel<tab>message). We could theoretically switch this to a multicast address in order to gain the ability to have multiple listeners (all connecting to separate IRC servers, each on each listener's localhost perhaps?), but no one has invested the time to do this and set up those multiple frontends.
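For reference, the sender side of that "echo" protocol is tiny: one UDP datagram per line, channel and message separated by a tab. A hedged Python sketch of what the MW side effectively does (the host and port here are placeholders, not the real kraz address):

```python
import socket

def send_irc_echo(channel: str, message: str,
                  host: str = "192.0.2.1", port: int = 9390) -> None:
    """Send one IRC relay line as a single UDP datagram: "#channel<TAB>message"."""
    payload = f"{channel}\t{message}".encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
```

Switching to multicast would only change the destination address (plus setting IP_MULTICAST_TTL on the socket); the payload format stays identical, which is what makes the multiple-listeners idea cheap on the sender side.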
Feb 7 2020
Please file a procurement task for Willy/Rob to execute on :)
Jan 23 2020
Correct. Also check the export templates (in the admin interface) for references to those fields.
(@Volans is not in Traffic), but regardless... judging from @BBlack's comments before the flurry of Gerrit commits, it seems like I misunderstood where this lies. This is not blocked on Traffic, but with DC Ops. Reassigning to @RobH and apologies for the added confusion!
Traffic team, ping? This task has been open since August last year and as I was just saying on IRC, cp1008 is a constant outlier in all of our reports, projections, planning etc. Its purchase date is Jan 27th, 2011, 9 years ago almost to the day :)
Jan 22 2020
Hey - this was a Q2 task but it hasn't seen an update in a while. What's the status?
Jan 20 2020
@ayounsi, what's the status here?
Jan 17 2020
Could we import into Netbox now, and then change & document the setup at our convenience? It feels like documenting the existing situation and changing it are orthogonal to each other - any reason to block one on the other?
What is the status of this?
I've seen this issue before, and if I recall correctly, it was an issue with the Python 3.4 backport. I think the latest backport for 3.4.10-1~stretch1 should fix it.
Jan 16 2020
I think increasing the availability and resilience of this service is an excellent idea! However, adding more servers per site feels like a requirement, and a standard Pybal/IPVS setup sounds much more appropriate than anycast for this use case.
Jan 14 2020
Splitting the internal apt repository from the install roles/servers sounds good -- it's more of a historical artifact than anything else. You probably know this already but do note that the install server does not provide just TFTP, but also HTTP (and that is actually favored these days), so we would need to have a webserver running on the install servers.
Jan 13 2020
@Volans, out of curiosity, why was this required? Note that the concept of "rows" doesn't apply in this site, it's just two racks next to each other :)
This task is about preparing "Phame to support heavy traffic for a Tech Department blog", which is not the plan anymore. We should probably decline this task in favor of another more-generic task ("set up a tech department blog"). @Bmueller, @srodlund, thoughts?
Jan 10 2020
I've updated the aforementioned apt repository with 3.8.1-2~buster1 packages. Someone in SRE that's more familiar with how we do things these days (maybe @MoritzMuehlenhoff?) can update our reprepro to include that.
Dec 21 2019
- The canonical location is nowadays https://people.debian.org/~paravoid/python-all/ (which I maintain on my free time). We (Wikimedia) probably should set up a reprepro import for that.
- The above repository has 3.8.0 beta4 for buster, I'll need to update that for a more recent version (currently looks like 3.8.1). I can do so soon-ish.
- That said, I don't have any intentions to backport 3.8 to stretch.
The owner field will have to stay with us for a little while longer (until the end of Q4). The other two ("Support until" and "Support contract") can be dropped at our earliest convenience. Adjustments need to be made in at least the export templates and maybe even reports. @Volans and/or @crusnov, that's now over to you. (Hopefully the backups work in case we later realize it's a mistake)
Dec 16 2019
Thanks @ayounsi! Appreciate the follow up. What exactly did you ask them to do in this last communication?
Dec 13 2019
Note that R440s comprise 23.5% of the whole fleet, 84.1% of all servers purchased in the last 12 months, and 67.5% of all servers purchased in the last 24 months (I wish I had a graph!). Given this sample size, the failures may simply track how common R440s are in the fleet rather than anything specific to the model.
Thanks @Krinkle, very much appreciate all this! I have code from a couple of weeks ago that basically implements all this: consuming from SSE and formatting into IRC logging messages, but by using log_action_comment. It needs some more polishing and repository creation etc. I'll add you as code reviewer once I find some time to work on something better than Gist; hopefully during the end of year holidays.