As a Wikipedia editor, I disagree with the notion that the Beta tab is "really just one of the preferences pages" and that the rarity of clicks on the link suggests it should be removed. From my point of view, the Beta tab is a slowly but constantly changing overview of upcoming features. Based on my own usage of Wikipedia, the link sees little use for two reasons: it's impossible to know when something new is available without actually clicking the link, and new Beta features arrive so rarely that checking on a regular basis isn't worthwhile. Hence, I'm afraid that if you remove the link entirely, even fewer users will discover the Beta features.
Oct 31 2021
Oct 4 2021
Sep 15 2021
In T289619#7355379, @RHo wrote: I think even for long usernames the importance would be for the watchlist to be hidden first, as it is more "secondary" in importance in comparison to accessing one's user page, usertalk page (and newcomer homepage for those with GrowthExperiment features).
Sep 8 2021
In T288638#7328969, @alexhollender wrote: One thing that is maybe worth considering is your language in the task description, specifically "I'm very certain that this is quite common among Wikipedia authors". I personally try to phrase things as questions, rather than statements, especially if I don't have data to show. I wonder how the tone of the task description might have changed, and perhaps invited a more collaborative discussion, if you had said "I wonder if this is common for other editors as well?". For me the way you phrased it perhaps added a sense of urgency that, again just in my experience, isn't necessarily helpful to a collaborative and clear problem solving process.
Sep 1 2021
Aug 27 2021
In T288638#7296733, @alexhollender wrote: at the same time I feel a bit saddened, because it feels like the people giving us feedback don't trust us, or at the very least are not assuming good faith.
Aug 18 2021
In T288638#7285276, @ovasileva wrote: That said, we are exploring a few different options on presenting the watchlist in an easier manner and hope to have an update and decision made on next steps by the end of this week.
Aug 13 2021
Out of curiosity, if the problem statement is "it takes 2 clicks", couldn't this be fixed by learning and using the keyboard access keys?
In T288638#7279145, @Aklapper wrote: [...] I wonder why you have to log in so often (using different devices across places?), wondering why you would not directly start at Special:Watchlist (e.g. via a bookmark in your browser) and keep that page opened.
Aug 12 2021
In T288638#7277114, @Aklapper wrote: Hi, in my understanding, it is one additional click? How does that "break a common workflow"?
Aug 11 2021
Apr 12 2021
Apr 11 2021
Feb 20 2021
The idea also just came up at the German-language Wikipedia AdminCon, both in the context of supporting newcomers as well as for live communication between admins.
Oct 6 2020
Don't worry, you've come to the right place!
Aug 30 2020
Aug 25 2020
Thanks, I ran the bot on some of the articles mentioned above and it was fine. So far, there is only one new case where v2.0.5 messed up a reference: diff
Aug 20 2020
Why does the parser make any assumptions regarding the content of what it parses? In my understanding, it should produce some abstract representation, which the bot then modifies and which is then converted back to source code. The parser has no need to know anything about the difference between a cite template and an archive template.
From my limited understanding, I also don't think that Parsoid is the way to go. As I mentioned in that other ticket, in preparation for cleaning up the {{Literatur}} issues on dewp, I've written a small template lexer/parser that is "roundtrip safe", i.e. it maintains all whitespace etc. and can handle arbitrarily nested templates. It creates a representation where each template is an instance of an object, so it's easy to check and modify (e.g., continuing the example from above, one can just check whether cite_template.url == archive_template.url).
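To illustrate the roundtrip-safe idea, here is a minimal sketch in Python. All names and the structure are assumptions for illustration, not the actual dewp tool: it only splits wikitext into text and nested {{...}} nodes, preserving every character; a parameter-access layer (like the hypothetical cite_template.url above) would sit on top of this.

```python
class Text:
    """A run of raw characters, kept verbatim for roundtrip safety."""
    def __init__(self, s):
        self.s = s

    def to_wikitext(self):
        return self.s


class Template:
    """One {{...}} invocation; children are mixed Text/Template nodes."""
    def __init__(self, children):
        self.children = children

    def to_wikitext(self):
        return "{{" + "".join(c.to_wikitext() for c in self.children) + "}}"


def parse(src):
    """Parse wikitext into a node list; nesting is handled via a stack."""
    nodes, buf, stack, i = [], [], None, 0
    stack = [nodes]
    while i < len(src):
        if src.startswith("{{", i):
            if buf:
                stack[-1].append(Text("".join(buf)))
                buf = []
            tpl = Template([])
            stack[-1].append(tpl)
            stack.append(tpl.children)  # descend into the new template
            i += 2
        elif src.startswith("}}", i) and len(stack) > 1:
            if buf:
                stack[-1].append(Text("".join(buf)))
                buf = []
            stack.pop()  # close the innermost open template
            i += 2
        else:
            buf.append(src[i])
            i += 1
    if buf:
        stack[-1].append(Text("".join(buf)))
    return nodes


def serialize(nodes):
    """Inverse of parse(): concatenating the nodes restores the source."""
    return "".join(n.to_wikitext() for n in nodes)
```

Because no whitespace is normalized anywhere, serialize(parse(src)) == src holds for any input, which is the property that makes bot edits diff-clean.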
Perhaps that's something to consider for an initiative like T251966: Migrate IABot parsing code to annotated HTML (Parsoid)? As that ticket points out, there are quite a few issues that could benefit from a more robust and clearly separated syntax parser.
As far as I'm aware, this kind of merging is a new issue; I cannot remember it being reported before.
It is not merging templates correctly. Take for example the first diff:
Aug 17 2020
I removed the whitelisting for humo.be and set the URL to dead.
Aug 16 2020
Just to be clear: You want the bot to
- never add dataarchivio to cita web
- never add accesso to cita web (information that the bot finds by searching the article's version history)
- always use 18 ottobre 2007 as the date format for itwiki
As I understand it, the bot actually "believes" that it is responsible for adding the archives as well: because it is switching a formerly live link to "dead", it also adds the archive information, which in this case happened to have been added previously. It would only report "tagged as dead" if it did not find an archive.
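The branching described above can be sketched as follows. This is a hedged illustration of the reported behavior only; the function, field names, and edit-summary strings are assumptions, not IABot's actual code.

```python
def handle_newly_dead_link(link, find_archive):
    """Sketch: what the bot reports when it switches a link to dead.

    link: a dict with at least a "url" key (hypothetical representation).
    find_archive: callable returning an archive URL or None.
    """
    archive = find_archive(link)
    link["status"] = "dead"
    if archive is not None:
        # The bot treats adding the archive as its own action, even if an
        # identical archive link was already present in the article.
        link["archive_url"] = archive
        return "added archive"
    # Only when no archive exists does it merely report the dead tagging.
    return "tagged as dead"
```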
Probably related to T259011: Posted to an English Wikipedia talk page in a non-English language.
Judging from the Recent Changes page you linked, the problem seems to have disappeared (or be very infrequent). Do you have any other specific examples?
Can you please provide some example diffs?
I assume that @Green_Cardamom is the appropriate contact for this edit.
There are many reasons why the bot does not detect/edit URLs as dead. For example, it checks several times over the span of a couple of weeks before declaring a particular URL as dead.
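The repeated-check policy can be sketched like this. The threshold and the boolean representation are assumptions for illustration, not IABot's real configuration; in practice the checks are spread over separate scan runs, weeks apart.

```python
def is_confirmed_dead(check_results, required_failures=3):
    """Sketch: a URL counts as dead only after several consecutive
    failed liveness checks.

    check_results: chronological list of booleans, True if the URL
    responded during that check.
    """
    recent = check_results[-required_failures:]
    return len(recent) == required_failures and not any(recent)
```

A single failed check therefore never flips a URL to dead, which explains why recently broken links are not edited immediately.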
I'm not so sure what happened there. It seems that this edit was made by @Green_Cardamom's bot?
Updating URLs is out of scope for IABot. In those cases, please replace the URL in the article. (If there are many instances, perhaps use a bot/file a bot request.)
I'll be bold and consider this one resolved/outdated.
What's the general status for humo.be?
URL has been whitelisted in the database.
Set URL to live in the database.
URL has been set to live in the database.
Whitelisted *.jype.com.
Database has already been adjusted accordingly.
Domains have already been whitelisted/set to live in the database.
Set URL status to live in database.
Domain www.sdats.ch has been whitelisted.
Domain is already whitelisted.
Set URL to live in the database.
I don't think there is anything that the IABot can do here.