User Details
- User Since: Oct 7 2014, 10:21 AM (451 w, 2 h)
- Availability: Available
- IRC Nick: paravoid
- LDAP User: Faidon Liambotis
- MediaWiki User: Unknown
May 6 2022
So first of all, when I look at this Atlas probe's page (#6671), I see this:
The LIR us.wmf has shared administration of this probe.
The management of this probe is allowed for the following individuals:
- Arzhel Younsi
- Cathal Mooney
Which I believe means that we have two separate access rights delegations: an org-based one, and an "Other users" one with you two being explicitly assigned rights to it. So the hope is that you should have all the access rights that I have :)
Mar 1 2022
Seek to opt out via Yandex's webmaster tools. I have no idea how to get access to this, but presumably we could work it out.
Feb 14 2022
My (perhaps dated or incorrect) understanding is that:
- We currently have no RBAC in Logstash;
- Everyone in the "NDA" group has access to all data stored in Logstash;
- Access to access logs in general is more restricted, to a subset of NDA users: the analytics-privatedata group (membership managed by the D/E team);
- sampled-1000 is a subset of the access logs, available on the centrallog hosts, that only ops/roots have access to (so it's even more restricted).
Jan 25 2022
Hi @AndyRussG - you mentioned that "[Bing] has an option to import domain verifications from Google Search Console"; is there another option, such as doing the Bing domain verification separately from anything Google-related? That would be preferable, I think. Otherwise, it sounds like this may have the potential to share non-public data that Google holds for our properties with Microsoft, and therefore I think the most prudent course would be to ask the Legal/Privacy team to evaluate and clear this request. Hope that makes sense - thanks!
Jan 3 2022
Not sure if this has been flagged or considered by anyone else, but note that our mirror is an official mirror for Debian, Ubuntu and Tails. For at least Debian, sodium's IPs are in the ftp.us.debian.org rotation (and thus it has to be an A/AAAA record rather than a CNAME). I still see sodium's IPs there. I think Debian has some automated machinery to update these IPs, but I'm not sure what triggers it - so be careful when turning off sodium. We're also a push mirror, which means that Debian's infrastructure triggers an update through SSH; I'm not sure whether this works yet.
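A quick way to spot-check this before powering sodium off (a sketch; the answers would need to be compared against sodium's actual addresses by hand):

```
# Does the rotation still include the old host's addresses?
dig +short A ftp.us.debian.org
dig +short AAAA ftp.us.debian.org
```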
Dec 3 2021
@eliza we're looking into this - next update in 15 mins.
Nov 14 2021
There is definitely a noticeable difference in traffic patterns from Nov 4th or so:
I disabled the Equinix IXP port on cr1-eqiad, xe-3/0/6, just a few moments ago, in order to mitigate this issue. Checked with @ayounsi on IRC first, who is now aware of this task.
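(For reference, a sketch of how that is typically done on a Junos router; untested here:)

```
configure
set interfaces xe-3/0/6 disable
commit
```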
Aug 27 2021
There are some ongoing conversations with the WMCS team regarding the placement of their infrastructure in our network/infrastructure, and I think it would be good to resolve that first, before moving forward on implementing this. Setting this to Stalled - hope that makes sense!
Jun 29 2021
Thank you @jbond for picking this up and shepherding it - appreciate it!
Jun 26 2021
Thank you @jbond for raising this topic!
Jun 24 2021
Prioritization-wise, is there a reason why we're going for an IPv6 allocation while our IPv4 segmentation is still in flux or in progress? I fear that we're adding more features/problems to the mix without having set and implemented clear boundaries first, making an already complex situation more complex (e.g. more filters to maintain), so I'd like to hear more about those trade-offs and perhaps wait.
Jun 1 2021
If you're talking about my 2014 commit… if I recall correctly¹ this was in order to minimize changes between different distributions and enforce a unified policy (this was part of a larger patch series to put some structure around sudoers). I opted for env_keep because that's what folks were most used to and it was "secure enough". I don't have an opinion these days on whether it should be removed or not :)
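For context, a minimal sketch of the two approaches in sudoers(5) syntax; the variable list here is illustrative, not what that commit actually shipped:

```
# Reset the environment, then explicitly keep a small whitelist of variables.
Defaults env_reset
Defaults env_keep += "HOME MAIL LANG LC_*"
```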
May 20 2021
Given that a) this was linked during budgeting in the context of our cross-DC bandwidth and for a substantial amount of cost, and b) off{site,line} backups are one of our priorities, I'm setting the priority of this task to High and asking our netops folks to have a look, Cc @joanna_borun.
Apr 19 2021
I killed that domain in 2014 (operations/dns 3a7f472cb3e9bcd03f0492cfdd8c0a2156f448d3). No one has complained since, to my knowledge, and I'd recommend not reintroducing this redirect at this point. It was confusing to begin with: before that transition, the main mail exchangers and the mailing list service were all on the same box; these days they are (thankfully) separate, but the side effect is that "mail" as a label is much more ambiguous. HTH!
@CDanis could you look at this soon? Thanks!
Apr 16 2021
SGTM :)
Mar 17 2021
@crusnov maybe you can have a look?
Mar 16 2021
I think I've implemented this -- it's been a while :)
Mar 4 2021
(I'd suggest focusing on the nitty-gritty like SSH keys later -- I'm not the right person to ask about these either :)
Judging from the last two lines of that transcript, I've been summoned :)
Could you clarify the scope between:
- production hosts that currently have WMCS as the service team (cloudvirt, cloudcephosd, etc.)
- Cloud VPSes that the WMCS team currently semi-manages (i.e. that have other roots, possibly custom puppetmasters etc.)
- Cloud VPSes that the WMCS team is currently managing fully (operates config mgmt such as the puppetmaster), not necessarily exclusively (e.g. I think Toolforge has additional admins)
Mar 3 2021
I believe the Atlas is a PCEngines APU, so you'll need a null modem cable or adapter (RXD->TXD, TXD->RXD, etc.). If this is a Cisco rollover cable, it would do the trick, but then your DB9<->RJ45 adapter should not be a crossover adapter, as that would cross over twice end-to-end and the two swaps would cancel each other out :)
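To illustrate the double-swap (a sketch, with the signals simplified to just TXD/RXD):

```
# Each crossover swaps TXD<->RXD once; two swaps cancel out:
#   PC --- rollover cable (1 swap) --- straight DB9 adapter --- APU   => null modem overall, console works
#   PC --- rollover cable (1 swap) --- crossover adapter (2nd swap)   => straight-through again, no console
```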
Feb 13 2021
To clarify the task's scope here, and the need from a network operations angle: as a service provider offering effectively unrestricted IPv4 connectivity from our public cloud to the rest of the internet, we need, for various reasons, the ability to identify and/or block the source of traffic in response to e.g. an incoming third-party report or request, and to be able to do so retroactively, with timestamps into the past, as well. (This is not a new requirement, nor the result of recent changes in cloud networking -- just something we're overdue for.)
Dec 7 2020
It feels like there are multiple issues being discussed here, so perhaps it's worth breaking this down and talking about some of these issues separately? The last few comments seem to be about the IP numbering and assignment issue, so I'll focus on that below.
Dec 4 2020
OK, to add a little more color:
- The VLAN configuration is not important. brctl addif brq7425e328-56 eno2np1 is enough to reproduce this behavior.
- I was trying to work out why the bridge would matter (originally thinking hwmode/EVB etc.). I had tried setting promisc mode to no effect, but with a clearer mind this morning I tried promisc + down/up and managed to reproduce it without a bridge being involved: ip link set promisc on dev eno2np1; ip link set dev eno2np1 down; ip link set dev eno2np1 up reproduces it, and ip link set promisc off dev eno2np1 restores connectivity (consolidated below).
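The reproducer from above, consolidated (the interface name is obviously host-specific):

```
ip link set promisc on dev eno2np1
ip link set dev eno2np1 down
ip link set dev eno2np1 up            # connectivity is now broken
ip link set promisc off dev eno2np1   # connectivity is restored
```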
Dec 3 2020
Arzhel nerd-sniped me with this.
Nov 23 2020
Thanks - can you file a procurement request to that effect (& then resolve this task)?
Per @ayounsi above, "Last missing info is cable IDs". I don't see that having happened yet, right? The Cables report is even emitting soft warnings about it (warnings that we should convert to errors once this work completes). Reopening the task, as it was probably resolved by mistake.
Oct 19 2020
Yay, that's awesome! You can't imagine how much time this would save!
Oct 16 2020
From the Netbox changelog ("Changelog" tab on the device) it looks like some changes were made on September 28th by @Cmjohnson and later one change on Oct 6th by @wiki_willy. Specifically:
Sep 24 2020
I wonder what kind of ASN these flows would show up as (esp. with confederations!), and whether we could have a dimension to differentiate between internet traffic and backhaul traffic. We'd also need a "site" dimension to be able to filter or slice for traffic from esams to eqiad, like the parent task required, right? Also see T254332, which makes me wonder whether adding all of these different dimensions is going to start being a problem :)
Sep 21 2020
BTW, one dangerous impact of this (as with all ECMP!) is that it would be harder to notice a situation where we don't have enough capacity to carry regular amounts of traffic when one of the paths is down for whatever reason. We could perhaps mitigate this by tuning our monitoring to alert on 40-50% utilization, at least for the common cases of link redundancy (codfw/eqdfw, eqiad/codfw). That way we still get extra capacity for "abnormal" conditions (like edge in eqiad but MW & Swift in codfw, etc.) but are still alerted when we don't have enough capacity for normal levels of traffic.
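To make the threshold reasoning concrete: with n equal-cost links of equal capacity, losing one means the remaining n-1 must carry the full load, so the safe steady-state per-link utilization is (n-1)/n of capacity. For the common two-link case that is 50%, which is why alerting at 40-50% leaves a little headroom.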
Sep 17 2020
SGTM!
Sep 11 2020
Broadly speaking:
- We shouldn't have outstanding alerts open (or even acknowledged) for more than a few days. If there is an alert, it means there is an abnormal condition that requires fixing. If the issue requires a significant amount of work to address, then a task should be created and the alert acknowledged with the task in the comment while it's getting fixed. I'd expect the DC Ops teams to be primary for such alerts and act on them, but everyone in SRE is also expected to triage alerts, reach out to owners, and file tasks about them (like @ayounsi did here).
- If there are frequent false positives, then that is something we should fix. We probably need one or more separate tasks for this, describing the conditions under which an alert is triggered erroneously, so that we can fix it. I'd expect the DC Ops team to file those tasks, and I/F to change the report to meet the adjusted needs.
- The test_missing_assets_from_accounting report already (and has always) ignores discrepancies for items whose purchase date is in the last 90 days. This is configurable and we can tune it to some other value; 90 days was picked as long enough for accounting to process invoices, but not so long that the item has fallen out of memory (or the vendor engagement is over, the team has changed, etc.). If there is a persistent backlog in Finance of more than 90 days, it'd be good to know so we can adjust.
Sep 7 2020
@jcrespo & @akosiaris may I ask you to figure this out in a different task? This is a generic task about dozens of servers, so by discussing details about a couple of them we're going to lose the bigger picture :)
Aug 18 2020
Ping? Besides the issues identified by @ayounsi just above, I see that in another comment above @ayounsi mentioned "wipe the switch" but then I saw the switch was removed. @Cmjohnson, can you confirm the switch was wiped before (or after) its removal? (Any reason we didn't go the decom task route here like we normally do?)
Aug 17 2020
@wiki_willy, what's the latest here? What's blocking us from having decom tasks for all of the items above?
Aug 4 2020
Bump! What's the latest here?
Jul 22 2020
We still seem to have remnants of PIM-RP:
faidon@re0.cr2-codfw> show configuration | display set | match 208.80.153.194
set interfaces lo0 unit 0 family inet address 208.80.153.194/32
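Assuming that address only ever served the PIM RP role (worth double-checking before committing!), the cleanup would be something along the lines of:

```
configure
delete interfaces lo0 unit 0 family inet address 208.80.153.194/32
commit confirmed
```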
Jul 21 2020
It looks like both of these issues are resolved now! Boldly resolving :)
Jul 16 2020
To give a little more context: in response to us requesting an extension for the v2 anchors, the RIPE NCC team reached out to ask if they could run a test upgrade on one of our anchors (which I of course said OK to!).
Jul 4 2020
From a cursory look yesterday, the following issues apply or would need further investigation:
- We did not run lilypond in a firejail due to a mediawiki-config configuration bug.
- We did not run lilypond in safe mode, on purpose, as safe mode breaks a number of common features. With a very small sample, about 50% of our existing files break. @Platonides may have better numbers. Some of these breakages may be intentional (e.g. for resource use), but some may be unintentional (e.g. color definitions are not defined as symbols). It does not feel like the safe codepaths are well used or tested, which is a problem on its own.
- Lilypond's code does not seem to be safe-by-default. -dsafe is not the default and is only buried deep in the documentation. Variables/methods are unsafe by default: e.g. define-public is the unsafe variant and define-safe-public is the safe one, rather than vice-versa. In many places the mode is not called "safe" but "safer", which is... scary. Lilypond also has a --jail option that the documentation recommends instead of safe mode, but it is nothing more than a setgid/setuid/chroot/chdir; hardly secure.
- "The Guile interpreter is part of LilyPond, which means that Scheme can be included in LilyPond input files. There are several methods for including Scheme in LilyPond". Guile is a powerful language, with POSIX in its stdlib as well as dynamic FFI, essentially allowing arbitrary code execution by design. Guile has a sandboxed evaluation mode (h/t @CDanis), but Lilypond does not seem to employ it (see the sketch after this list). Effectively, this is a "Microsoft Excel runs macros by default" blast from the past situation :)
- Besides the use of Scheme per se, Lilypond also uses PostScript as an intermediate format, relying on Ghostscript to convert to PNG. It does not call Ghostscript with -dSAFER, and in some cases even calls it with -dNOSAFER. This is explicit: it is present in the version we run in production, and also in commits as recent as two weeks ago ("Revert adding .setsafe for Ghostscript command"). Lilypond also allows users to embed arbitrary PostScript using \postscript, effectively allowing arbitrary code execution even in safe mode. This is perhaps also indicative of upstream's attitude of considering all input as trusted.
- Similar injection code paths could be present in other backends, including e.g. its SVG output; it's unclear whether it allows arbitrary SVG elements to be included (maybe even <script>?). I don't think we use SVG in production right now, but one could imagine an otherwise innocuous change being deployed to enable it in the future, so we should at least evaluate this or add a bunch of warnings for our future selves.
- All in all, I think this needs to be discussed with upstream, to hopefully result in a mindset shift with regard to whether input is considered trusted or untrusted by default. In its current state, I don't think it's reasonable for users to even run this on their desktops with anything but scores they've personally handcrafted, or for distributors like Debian to ship this without warnings to that effect.
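As a sketch of the embedded-Scheme point above (untested, and the filename and score are made up): without -dsafe, a Guile expression embedded in a score can run arbitrary commands.

```
cat > evil.ly <<'EOF'
#(system "id > /tmp/owned-by-lilypond")
{ c'4 }
EOF
lilypond evil.ly   # executes the embedded command while rendering
```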
(see T257092 for more about this)
Jul 2 2020
So - how do we make progress here? Any thoughts on who/how? :) Some of these features could really make a tremendous amount of difference to our network operations and future planning, so I'm super excited about seeing these come to fruition!
Jul 1 2020
I was bitten by this again today - ping!
Jun 25 2020
To add to the above, I'm also wondering how difficult it would be to include AS *names* as well, e.g. coming from the MaxMind GeoIP ASN database. I think we've used that database before, maybe for pageview data? Could we perhaps use Druid lookups for this, to avoid adding another (identical) dimension to the data set?
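For reference, a sketch of pulling the AS name out of that database with mmdblookup (the database path and example IP are illustrative):

```
mmdblookup --file /usr/share/GeoIP/GeoLite2-ASN.mmdb --ip 198.35.26.96 \
    autonomous_system_organization
```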
Jun 24 2020
I took a look at that list above. It's really not very actionable -- most of these are very large networks that have a restrictive settlement-free peering policy. For the few that remain, we have either established peerings already or have sent unanswered peering requests, which mostly means that they are not actively peering or we are too small for them to care about.
Jun 11 2020
Approved.
Jun 4 2020
This is now set up on SFMIX's end and the port is up:
On your side please plumb 206.197.187.82/24 and 2001:504:30::ba01:4907:1/64. Usual sane BGP peering rules apply - no broadcast traffic (DHCP, CDP, etc), see https://sfmix.org/connect/guide.
We request at least one required BGP session (to our looking glass) and optional sessions to the route servers.
The looking glass is AS12276 at 206.197.187.1 and 2001:504:30::ba01:2276:1. You should announce all your routes to the looking glass, but expect no routes to be announced to you. We'll push out configs to support these peers this evening.
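For reference, our side of this would look roughly like the following in Junos set-style syntax (the interface and group names are assumptions):

```
set interfaces xe-0/0/0 unit 0 family inet address 206.197.187.82/24
set interfaces xe-0/0/0 unit 0 family inet6 address 2001:504:30::ba01:4907:1/64
set protocols bgp group SFMIX neighbor 206.197.187.1 peer-as 12276
set protocols bgp group SFMIX6 neighbor 2001:504:30::ba01:2276:1 peer-as 12276
```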
May 19 2020
Are there any updates on this task, and any particular reasons it's been held up? While this was never super urgent, we're now at the ~one year mark since this was ordered and delivered to the data center. Also, because at the time the upgrade was imminent, I think we only bought support for the new switch and not the old one, so we're operating with unsupported HW right now. It'd be great if this were to be completed soon. Thanks!
May 15 2020
If three ports are permanently failed, I'm not sure how we could ever trust that switch again. Perhaps it's better to do a painful but planned replacement, rather than have it fail at some inconvenient time and be forced to rush a replacement then?
May 12 2020
I know that historically MaxMind has claimed they update the data roughly on a weekly basis, and maybe in this case it was a normal weekly update and we're just misaligned with their weeks? In any case, the current geoipupdate seems to be smart enough to checksum the existing databases and not re-download pointless duplicates, so we could probably run it more often on the puppetmasters.
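A minimal sketch of what that could look like (the schedule and path are illustrative):

```
# /etc/cron.d/geoipupdate: refresh the MaxMind databases every 6 hours;
# geoipupdate skips the download when the checksums already match.
0 */6 * * * root /usr/bin/geoipupdate
```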
May 8 2020
LoA received and cross-connect task created.