Google me.
(Blurb: huwp founder, osm contributor, dmoz/hu section founder; doing all kinds of weird stuff with email, linux, perl, various networking equipment and stuff. Talking/writing too much. https://en.wikipedia.org/wiki/Peter_Gervai )
Old Mailman was able to forward spam to an email address (supposedly the admin's), and I have been using it on my old lists to forward spam to my spam-learning email address. I do not see this option in Postorius, but it would be a generic solution.
My suggestions:
In T276523#6886438, @ssastry wrote: this page should be split up on ukwiki.
I have tried to notify a local admin: https://uk.wikipedia.org/wiki/%D0%9E%D0%B1%D0%B3%D0%BE%D0%B2%D0%BE%D1%80%D0%B5%D0%BD%D0%BD%D1%8F_%D0%BA%D0%BE%D1%80%D0%B8%D1%81%D1%82%D1%83%D0%B2%D0%B0%D1%87%D0%B0:Andriy.v#Deletion_of_some_extreme_sized_pages
This does not resolve the problem on the MediaWiki side, obviously.
As a sidenote: I have checked a lot of alternatives in the past, and had a second round when OTRS AG pulled the plug, but found no real replacement. Some offerings were dead simple: way too simple for real use. Some were tailored for very specific needs (usually tied to source code management). Some had horrible source code (which, when mixed with PHP, usually leads to really bad things). RT has had a lot of facelifting in recent years but I am still not convinced by that UI.
Generally I see two problems:
We have been using FS storage since the beginning, and it usually works without major problems. So far I remember only one problem, which was present in OTRSv4: attachments with bad encoding were sometimes saved using a name that was not translated back to the original correctly.
I have briefly looked at the packages and they don't seem to have impossible dependencies (apart from Java 8, but that's still available), and I may try to move it over to Debian, but that's not done yet.
Let me make some uncalled-for comments. :-)
I run BBB on the ancient Ubuntu VM they want, as a separate system, and it runs fine (provided you firewall everything else). It performs much better than Jitsi: it rarely loses streams, rarely gets stuck on video, and can sustain 15-20 video streams on a relatively small server (with lots of threads, though: the VM is allowed 18 CPU threads). (As a sidenote, the stream loss seems to be a client-side problem in Jitsi.)
In T222458#5858780, @JJMC89 wrote:
Well, there is room search, if anyone wonders… (it's just an example.)
Hmm, the problem is possibly here:
You are wrong: it was rejected, as I have mentioned several times, by mx1001.wikimedia.org [2620:0:861:3:208:80:154:76]. What you are seeing is a rejection based on the rejection by mx1001. The important part is:
It is still not fixed, but I have a recent sample.
Sorry for being unclear, it wasn't intentional. There was a report of missing mail sent from Wikipedia (Wikipedia showed the user a notice that an email had been sent, and the email never arrived), and I started inspecting the mailserver logs for unusual traffic from anything Wikipedia-related around the same timeframe.
If anyone really wants to do anything about it I can spend time on testing [basically sending mail myself and correlating with the logs], but not before, since unfortunately my time is a scarce resource.
Steps to reproduce: I can't tell you, since this is an incoming email. I have included possibly all the information required to look it up in the mailserver logs (plus the specific timestamp: 2018-10-18 19:49:57 CET), but obviously I do not have any further information on a mail I neither originated nor received. :-) It's from Wikipedia, and judging by the sender it may have been generated by something on huwiki, so my educated guess was email-to-a-registered-user-from-the-website.
And the conclusion was …?
It's a clamd bug plus a signature bug. The signature was fixed the same day it was fucked up, and clamd will be updated to fix the problem (which resulted in dangling file handles, running out of file descriptors, undeleted tmp files and more). It should have been error-free if the sigs had been updated.
Whoever uses it should be covered by the SPF anyway, that's the point.
I know I am lazy, so I still haven't deciphered from the configs how you handle spamd, but a few notes in the dark:
spam = everybody/defer_ok
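To illustrate the note above: this is roughly how such a spam condition is usually wired into an Exim data ACL. A hedged sketch only; the actual ACL names and header choices in the WMF config may differ, and everything below other than the `spam = everybody/defer_ok` line is my assumption:

```
# Sketch of a typical acl_check_data stanza using Exim's spam condition.
# "defer_ok" means mail is still accepted when spamd is unreachable.
warn  spam       = everybody/defer_ok
      add_header = X-Spam-Score: $spam_bar
      add_header = X-Spam-Report: $spam_report
```

The `$spam_bar` expansion is what produces the run of `+` characters that the header-matching trick further down relies on.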
A bit of a latecomer here, but I would comment that by some (and this "some" seems to be more numerous than the meek supporters* ;-)) Schulze STV (https://en.wikipedia.org/wiki/Schulze_STV) is considered pretty useful in real-life scenarios (and is usually recommended over other multi-winner systems by geeks).
@Nemo_bis uh, these servers are basically idle. Any SPF checking may be okay, fork or otherwise.
Just as a sidenote: be aware that wildcards only match one level, not any number of levels; *.wikimedia.org matches robh.wikimedia.org but not server01.robh.wikimedia.org (which became obvious on the OSM tileservers on Labs).
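The one-level rule can be sketched as label-by-label comparison (a minimal illustration of the matching behaviour, not anyone's production code):

```python
# Single-label wildcard matching as used for TLS certificate hostnames:
# "*" covers exactly one DNS label, never several.
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        # "*.wikimedia.org" has 3 labels; a 4-label name can never match it.
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

assert wildcard_matches("*.wikimedia.org", "robh.wikimedia.org")
assert not wildcard_matches("*.wikimedia.org", "server01.robh.wikimedia.org")
```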
In T160529#3164199, @Nemo_bis wrote: In T160529#3162334, @KTC wrote: AFAIK, from the list members' email server point of view, any SPF check will pass since it's checking WMF's mailman server.
Indeed, see example (from a gmail recipient address):
Received-SPF: pass (google.com: domain of wikiquote-l-bounces@lists.wikimedia.org designates 208.80.154.75 as permitted sender) client-ip=208.80.154.75;
Do we need to install spf-tools-perl and set CHECK_RCPT_SPF=true in https://phabricator.wikimedia.org/diffusion/OPUP/browse/production/modules/role/templates/exim/exim4.conf.mx.erb ?
https://wiki.debian.org/Exim#SPF_filtering
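For reference, a minimal sketch of what SPF rejection can look like directly in an Exim ACL, assuming Exim is built with libspf2 support (whether this fits into the puppetized exim4.conf.mx.erb is a separate question; the message text is made up):

```
# Sketch only: requires Exim built with SUPPORT_SPF (libspf2).
# Goes into acl_check_rcpt: reject on SPF hard fail, otherwise
# just record the Received-SPF result in a header.
deny  message    = [SPF] $sender_host_address is not permitted to send \
                   mail from $sender_address_domain
      spf        = fail

warn  add_header = $spf_received
```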
In T160529#3162334, @KTC wrote: In T160529#3161835, @grin wrote: Dropping/autorejecting email with the matching header
X-Spam-Score: .+\+\+\+\+\+
(which is above spam score 5.00) probably helps a lot.
That's not something someone in my position can do, since the email never goes through the legitimate (i.e. SPF-authorised) server. It goes straight to WMF's server, which sends it out to list members. AFAIK, from the list members' email server point of view, any SPF check will pass since it's checking WMF's mailman server.
In T160529#3135437, @KTC wrote: I'll also accept suggestions for what I can do on my end.
Dropping/autorejecting email with matching header
X-Spam-Score: .+\+\+\+\+\+
(which is above spam score 5.00) probably helps a lot.
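To make the trick above concrete, assuming the header uses the usual one-plus-per-point bar format: the `.+` forces at least one extra character before the five literal pluses, so only bars of six or more pluses (score strictly above 5.00) match.

```python
import re

# The header-matching idea: a run of six or more '+' marks in
# X-Spam-Score corresponds to a SpamAssassin score above 5.00.
pattern = re.compile(r"X-Spam-Score: .+\+\+\+\+\+")

assert pattern.match("X-Spam-Score: ++++++")      # score > 5.00: matched, drop
assert not pattern.match("X-Spam-Score: +++++")   # score 5.00 exactly: kept
assert not pattern.match("X-Spam-Score: ++")      # low score: kept
```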
Am I right to guess that we don't do (strict or otherwise) SPF checking, while we definitely should? Exim alone can handle SPF just fine, and so can SpamAssassin.
It's also a bit weird that we let an email go with the flow carrying 10+ spam points, but maybe there are hist[oe]rical reasons...
In T161256#3138330, @Peachey88 wrote: In T161256#3138272, @grin wrote: I would expect some background checking from you before answering. Let me do it then. HTTP/2 support by browser version:
…
Some of these are pretty recent versions. I don't really agree with your optimism about coverage.
I believe @MaxSem was referring to MediaWiki's official level of support for various internet browsers (see https://www.mediawiki.org/wiki/Compatibility) rather than browsers' support for HTTP/2.
In T161256#3137827, @MaxSem wrote: The only valid use for labs is WMF projects,
In T161256#3128643, @MaxSem wrote: Now, in the time of HTTP/2.0 over TLS, there are modern pipelining techniques that render multiple domains not needed.
Just don't forget that we're talking about the Real World™, where Internet Exploder v5.0 is still a reality. Not that I'm saying I want to support that, but SPDY/HTTP2 isn't that ubiquitous, and older clients may well hit rate limits hard.
People with godmode flags could check how many requests are and are not using HTTP/2, and help make an informed decision.
Thanks for the reminder. I've got word back from MapQuest, and they said that in 2014 MapQuest served 380 million Open Tiles per day, 9.3 million Open geocodes per day, and 38 million Open reverse geocodes per day (these numbers were readily available).
Whenever I had to build such a service it was done by a really simple mail forwarder. Every user has a hashed mailbox, say u8ee7d5a0@private.wikipedia.org (the hash could even be generated from the account name rather than the email address, if one worries about deniability), which does not even have to be created, since it can be generated on the fly. Outbound email uses this sender, and all replies get processed and forwarded to the user's real email address. In theory I could do this for you if you have a spare CT/VM with access to the user email addresses (or a copy of them) and a net connection.
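The address-generation half of the idea above can be sketched in a few lines. Everything here is a made-up illustration: the domain, the 8-hex-digit prefix length, and the keyed-hash scheme (keyed so addresses can't be guessed from public account names) are my assumptions, not a description of any existing system.

```python
import hashlib

# Hypothetical parameters for the sketch.
DOMAIN = "private.wikipedia.org"

def pseudonymous_address(account_name: str, secret: str) -> str:
    """Derive a stable pseudonymous mailbox from the account name.

    Deterministic, so nothing needs to be pre-created: the forwarder
    can regenerate (or verify) the local part on the fly when a reply
    comes in, then look up the real address and forward.
    """
    digest = hashlib.sha256((secret + account_name).encode("utf-8")).hexdigest()
    return "u" + digest[:8] + "@" + DOMAIN

addr = pseudonymous_address("ExampleUser", secret="site-secret")
# addr looks like "u" + 8 hex digits + "@private.wikipedia.org"
```

The reverse direction (reply comes in, local part is matched against the user table, mail is re-sent to the real address) is just a lookup plus an SMTP submission, which any MTA's pipe transport can hand off to a script like this.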
Another sidenote: this decision should have a good visibility to the people planning server resources.
And I'll try to ask around at MapQuest what traffic levels they observed before throwing it in.
In T141815#2773589, @Gehel wrote:
Around the time the link went away, was there any VRRP change?
In T146968#2696232, @pajz wrote: Now, I can't say anything definite given the relevant servers are operated by the WMF, so I suppose only they'd be able to provide perfectly up-to-date information,
Geez, that was 11 years ago. :-P
(testing lurking on phabricator made me see this ;-))
my 2 cents: since the default gateway was not pingable I'd check (apart from ARP) the IRQs on the machine; I suspect you've checked that there was nothing in syslog about stuck ethernet rings or the device. if it were on v6 the gateway could play tricks, but that usually doesn't happen on static v4 configs.
as a sidenote, this also happens with cabling problems when only one wire is faulty (no link loss, but loss of one direction), usually when someone's been fiddling around. the switch can hardly say anything useful; much more helpful would be the counters on the machine's eth interface.
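The counters in question are cheap to read from /proc/net/dev; a lopsided pattern (rx errors climbing while tx looks clean, or vice versa) is often the signature of a one-direction fault. A small sketch of pulling them out, using the standard 16-column /proc/net/dev layout:

```python
# Parse per-interface error/drop counters from /proc/net/dev text.
# Columns per interface: 8 receive fields then 8 transmit fields;
# errs is the 3rd and drop the 4th field of each block.
def iface_errors(proc_net_dev_text: str) -> dict:
    counters = {}
    for line in proc_net_dev_text.splitlines()[2:]:  # skip the two header lines
        name, data = line.split(":", 1)
        fields = data.split()
        counters[name.strip()] = {
            "rx_errs": int(fields[2]),
            "rx_drop": int(fields[3]),
            "tx_errs": int(fields[10]),
            "tx_drop": int(fields[11]),
        }
    return counters

# Typical use on the box itself:
#   with open("/proc/net/dev") as f:
#       print(iface_errors(f.read()))
```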
sorry for chiming in. :-)
In T144508#2625975, @Ryan_Lane wrote:
I respectfully disagree with most of the points, but as has been said before: I have noted that the topic should be considered complex in case a decision should be reached.
@BBlack thanks for the detailed reply. I don't want to derail this task, so I'll try hard to be brief.
In T144508#2604050, @BBlack wrote:
As a sidenote: migrating all the eggs of the whole world into the one basket of GitHub seems to be a bad long-term strategy. I'd say hosting it independently should be preferred.