Sat, Oct 19
IABot is now seeing all of these as alive: since you reported the links as dead, they were automatically reset. I don't see what else to do here.
Domain has been whitelisted.
Fri, Oct 18
@Anomie I can confirm that your suggestion to normalize the UTF text does the trick. Thank you.
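For anyone else hitting this, the fix boils down to Unicode normalization. A minimal sketch in Python, assuming NFC is the target form (the actual form Anomie suggested may differ):

```python
import unicodedata

def normalize_utf(text: str) -> str:
    """Normalize text to NFC so visually identical strings compare equal."""
    return unicodedata.normalize("NFC", text)

# Two encodings of "é": precomposed U+00E9 vs. "e" + combining acute U+0301.
assert normalize_utf("\u00e9") == normalize_utf("e\u0301")
```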
Thu, Oct 10
I added dev-master to my composer.json. This should fix the reported issue.
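For reference, the change looks roughly like the following; the package name below is a placeholder, not the actual dependency:

```json
{
    "require": {
        "vendor/package": "dev-master"
    }
}
```

Pinning to dev-master pulls the latest commit from the default branch instead of a tagged release.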
No further response
This was already implemented before this ticket was opened.
Because deadurl is no longer a supported alias on enwiki, and because the current definitions are being drawn from the enwiki definitions, deadurl is not acknowledged on the template at all. Thus those parameters are ignored.
IABot cannot be expected to accurately detect these cases. Please use https://tools.wmflabs.org/iabot/index.php?page=manageurlsingle&wiki=enwiki to adjust the URL metadata of the links it rescues. You can disassociate the archive, or associate a different one, there. Changes are applied immediately, and a bot job can be started on applicable pages afterwards.
Please use https://tools.wmflabs.org/iabot/index.php?page=reportfalsepositive&wiki=alswiki to report false positives
The bot correctly matched the format being used in the template and applied the same format to the new dates it added. There is no bug here.
https://tools.wmflabs.org/iabot/index.php?page=manageurlsingle&url=https%3A%2F%2Fwww.bipm.org%2Fen%2Fmeasurement-units%2F suggests that it found the source with the access date of 2018-11-23 somewhere on some wiki. IABot picked the snapshot closest to that date and saved it for future use, as it is a guaranteed working snapshot with similar, if not the same, data as the original from that time frame. If you want the bot to use a different snapshot, you can use the provided link and change the archive URL the bot should use. Changes are applied immediately.
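To illustrate the selection logic described above (a sketch, not IABot's actual code): among the candidate snapshots, the bot effectively keeps the one whose timestamp is nearest to the citation's access date.

```python
from datetime import datetime

def closest_snapshot(snapshots: list[datetime], access_date: datetime) -> datetime:
    """Return the snapshot timestamp nearest to the cited access date."""
    return min(snapshots, key=lambda ts: abs(ts - access_date))

snaps = [datetime(2018, 10, 1), datetime(2018, 11, 25), datetime(2019, 2, 14)]
print(closest_snapshot(snaps, datetime(2018, 11, 23)))  # 2018-11-25 00:00:00
```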
Tue, Oct 8
Wed, Oct 2
Mon, Sep 30
Sun, Sep 29
Sep 18 2019
Seems to be working now on my end. :-)
Sep 16 2019
@Anomie what's the latest about this bug? I see it's awaiting review, but when will it get merged and deployed? This is sadly holding up work on my end.
Sep 15 2019
I see nothing to indicate you should be hitting a barrier. Please make sure the interface is set to the English Wikipedia, and please also report what username you are using. I found "Brown Chocolate", which is a member of "basic user", a group that has the permissions needed to run the bot.
Errors are populated here. Given the edit rate of the bot, and based on a random sample of what IABot broke versus what humans broke, we see that of the millions of pages it edited, this is all that it broke. We should certainly commission a bot to clean up the mess, however.
The error rate is still very, very low from what I'm seeing. I'm not using that as a reason to dismiss your concerns; I just need to look at what's happening. I'm not fond of patch jobs; I want to fix this at the root.
"Indeed, it might be best if IABot didn't edit articles where it can't predict the outcome of its own edits." <-- I'm referring to that. IABot can't know when an edit it makes might have an unpredictable outcome. That would require serious Machine Learning. Something that's not slated to be implemented until further down the road (IABot v3). That's still some years off at least.
That's also not feasible. There's no way for IABot to even know that an edit is unpredictable to begin with. The solution is to try to establish why the bot only touches one ref and ignores the other.
@Mikeblas I honestly don't know if this is worth fixing in IABot now. I don't even know what reflist is doing to trigger this error. And I can't program IABot to predict what references inside templates will do on other wikis either. This will need a different solution.
The deduping has been disabled.
Sep 14 2019
The domain is whitelisted because IABot cannot reliably assess the URL. As for it ignoring deadurl, the parameter is no longer supported and is no longer acknowledged. A different bot should go around and replace deadurl with url-status.
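Such a replacement bot's core logic could be as simple as the sketch below, assuming the standard enwiki mapping (deadurl=yes becomes url-status=dead, deadurl=no becomes url-status=live); this is illustrative, not an existing bot:

```python
import re

# Retired |deadurl= values mapped onto their |url-status= equivalents.
MAPPING = {"yes": "dead", "no": "live"}

def migrate_deadurl(wikitext: str) -> str:
    """Replace |deadurl=yes/no with the corresponding |url-status= value."""
    return re.sub(
        r"\|\s*deadurl\s*=\s*(yes|no)",
        lambda m: "|url-status=" + MAPPING[m.group(1)],
        wikitext,
    )

print(migrate_deadurl("{{cite web |url=http://example.com |deadurl=yes}}"))
# {{cite web |url=http://example.com |url-status=dead}}
```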
Looking more closely at the example, it looks like "ARIA News 28 Oct" was defined 3 times, where one had vastly different content and the other was an identical duplicate. I don't see an error here other than IABot exposing a ref error that wasn't being caught earlier.
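For context, this is exactly the shape of input that trips up deduplication; a sketch (not the bot's code) that surfaces conflicting definitions of the same ref name:

```python
import re
from collections import defaultdict

def ref_definitions(wikitext: str) -> dict[str, set[str]]:
    """Collect the distinct bodies defined for each named <ref>."""
    defs = defaultdict(set)
    for name, body in re.findall(r'<ref name="([^"]+)">(.*?)</ref>', wikitext, re.S):
        defs[name].add(body.strip())
    return dict(defs)

sample = ('<ref name="ARIA News 28 Oct">A</ref> text '
          '<ref name="ARIA News 28 Oct">A</ref> text '
          '<ref name="ARIA News 28 Oct">B</ref>')
print(ref_definitions(sample))  # one name, two distinct bodies ('A' and 'B')
```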
Just looking at the first example, I'm already confused. It deduped a named reference but a different reference lit up with a red error. Can you help me understand what is going on here?
Sep 11 2019
Sep 10 2019
I couldn't say. I can't even see which page it's trying to pull in that given batch.
This isn't the first time I've run my script against IABot's contributions, so I would say it is a recent issue.
Sep 9 2019
Sep 8 2019
On-wiki TemplateData could have recently been changed. The bot will read that as well.
If it's producing a blank parameter, then somewhere in the TemplateData, that parameter was marked as required.
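In TemplateData terms, the trigger would be an entry like this hypothetical excerpt; a parameter flagged required gets emitted even when the bot has no value for it:

```json
{
    "params": {
        "access-date": {
            "required": true
        }
    }
}
```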
Sep 7 2019
Sep 4 2019
Added url-access for all book references not going to specific pages, as the ones going to a specific page do not require registration to view.
Sep 3 2019
I cannot establish what is going on here. When pulling up the URL, I see one page being mentioned, and it's the same one you mentioned.
Aug 31 2019
First off, there is no such thing as a URL without a scheme (HTTP/HTTPS/FTP). The https URL does not redirect anywhere.
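A quick way to see this with Python's urlparse (nothing IABot-specific): a string without a scheme gives a tool nothing to fetch with.

```python
from urllib.parse import urlparse

print(urlparse("https://example.com").scheme)  # 'https'
print(urlparse("example.com").scheme)          # '' (no scheme, so nothing to fetch)
```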
Aug 29 2019
Just leaving this here as well. I have now changed my on-wiki hash.
I'm still in Germany, so I will be going to bed now. :-). Thanks @bd808
Welp my phone got logged out somehow. So now I only have one device left with an active Phab session going.
I have 2FA enabled, and my identity has recently been confirmed multiple times on Wikipedia, most recently on this very thread. Can I just post a 2FA reset confirmation on my talk page?
Oh wait. You disabled my Wikipedia 2FA. I wanted my Phabricator 2FA disabled.
2FA is still active on my Phabricator.
I’ll do one better: I’ll forward the email to your email. :-)
I see one page. https://zh-yue.wikipedia.org/wiki/土木工程
Aug 27 2019
I somehow managed to miss this ticket this entire time. With IABot's judgement override active, it will treat any archive URL in the url parameter as a dead URL, since the original cannot be linked. As such, it will mark the URL as dead despite the whitelisting, because it is deferring to the judgement of the human who made that edit in the first place. Whitelisting simply inhibits the bot from using its own judgement on those URLs.
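A sketch of that deference logic (illustrative only, not IABot's actual implementation): if the url parameter already points at an archive host, treat the link as dead no matter what the whitelist says.

```python
from urllib.parse import urlparse

# Hosts indicating a human already substituted an archive copy (illustrative list).
ARCHIVE_HOSTS = {"web.archive.org", "archive.today", "www.webcitation.org"}

def treat_as_dead(url: str) -> bool:
    """Defer to human judgement: an archive URL in |url= means the original is dead."""
    return urlparse(url).hostname in ARCHIVE_HOSTS

print(treat_as_dead("https://web.archive.org/web/2018/http://example.com"))  # True
```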
Aug 26 2019
Unclear what is being asked. If it's what I think it is, this is beyond the scope of IABot and not doable.
Aug 25 2019
Aug 24 2019
I intend to have it ready by the end of August.
Aug 23 2019
Boldly closing this. Internet Archive actively crawls all WMF projects looking for new links and IABot handles Wikidata.
There is insufficient information in this report to effectively diagnose the issue. Unable to reproduce.
Aug 22 2019
Yes. This appears to be fixed now.
It's also preventing logins. Now it really needs a quick fix.