We are beginning to use the hashtag for #1lib1ref . From what I am seeing, we are missing something like a third to a half of the edits in the campaign, because editors forget to add the hashtag to edit summaries -- or it's not intuitive. Compare http://tools.wmflabs.org/hashtags/search/1lib1ref with the edits being shared at https://twitter.com/search?q=%231Lib1Ref
Jan 13 2016
Jan 12 2016
@Krenair could you review r/263145?
@kaldari needs to be on this as well.
Dec 11 2015
@yuvipanda, @coren, and @Andrew: Could one of you help with the configuration on this? We would like to make sure that @ThatAndromeda who is a contractor for TWL, can work on our project in a timely manner.
Dec 9 2015
@Hydriz sorry for the slow response, I lost the email in a busy month: we are primarily creating this queue to allow us to keep Internet Archive informed about what the Wikimedia tech community is doing related to their services. We don't necessarily expect the work to be tasked as part of the project. I don't see why your item shouldn't be included in this queue.
Dec 4 2015
Dec 2 2015
Very exciting! Keep up the good work!
Nov 30 2015
@DarTar Sending a scheduling email now -> please let me know if it works.
Nov 26 2015
If there is a good window of time in which we know this will be implemented, The-Wikipedia-Library has a few key partners that we could ask about specific traffic effects, to understand rates and ratios, etc. @Samwalton9 does most of our metrics discussion with our partners, so he will be the contact, alongside me, for @DarTar for comms and/or tracking with publishers.
Nov 25 2015
@Legoktm can we provide you any more information? Did your initial investigation of implementing prove fruitful?
@Rdicerb what's the status/progress on this? We have been talking to a number of our partners about this issue, and it would be nice to have a window/timeline/prospect for if/when the origin data can be communicated.
Nov 24 2015
@ThatAndromeda, do you have any questions about the instructions for getting onto Labs? If you ping the IRC channel Cloud-Services, it should help.
Nov 20 2015
@coren and @Andrew: We need to set up a Labs instance to get our contractor @ThatAndromeda onto Labs to build T102048. How can we get that process started? What information do we need to get? Who would be the best person to jumpstart this? cc: @Ocaasi
Nov 18 2015
In T115119#1814343, @Halfak wrote: I'm not sure that truncation is an effective strategy to prevent collecting PI -- especially one that works at 2000 chars.
I was thinking a truncation in the 500 character range
@Halfak: the useful bit of that URL, though, is the id= and pg= variables: so all of the analytical bits would be in those first 100 characters. Looks like the 2000-character no-limit default should be fine. If we cut URLs down, it would only affect a limited number of URLs (and only save a minimal amount of space/storage), unless we are worried about the personal information concern per @Milimetric
@Halfak can you think of a use case for anything more extensive than 400 characters? Investigating search queries/API activity/other service calls embedded in URLs as part of anti-spam work?
My gut instinct is that the main use cases need the 300-400 character range: most tracking tools/strategies for our community's needs will terminate after the first couple of path items, and for scholarly sources, URI standardization is favoring much shorter URLs. Special:LinkSearch will still register the item if someone wants to retrieve it, so truncation is mostly for usability on the analysis end. The most systematic way to figure this out would be to look at the link table, do a count of URL lengths, and shoot for something that supports ~80% of the links. I did a quick search through URLs that I know to be consistently long, and most of them fall in the 200-250 character range.
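The "count URL lengths and cover ~80% of links" approach above can be sketched roughly as below. This is a hypothetical illustration, not an existing tool: the sample list stands in for URL lengths pulled from the external links table, and `truncation_threshold` is a made-up helper name.

```python
import math

def truncation_threshold(url_lengths, coverage=0.80):
    """Return the smallest length L such that at least `coverage`
    of the given URLs have length <= L."""
    ordered = sorted(url_lengths)
    # Index of the length at which the desired coverage is reached.
    idx = max(0, math.ceil(coverage * len(ordered)) - 1)
    return ordered[idx]

# Toy data: most links are short, a few (DOIs, tracking URLs) run long.
sample = [45, 60, 72, 80, 95, 110, 130, 150, 180, 240]
print(truncation_threshold(sample))  # -> 150: 8 of 10 sample URLs fit
```

Run against real link-table data, this would give a concrete cutoff to compare with the 300-400 character gut estimate.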
Nov 17 2015
@Legoktm If it does, I can't think of a reason why we would need super-long URLs: there is a point where that information has extreme diminishing returns, so it could be culled.
@Magnus I discovered I could use PagePile to hack the tool: https://tools.wmflabs.org/glamtools/treeviews/?q=%7B%22pagepile%22%3A%221311%22%2C%22rows%22%3A%5B%5D%7D but the project namespace is not collecting pageview information: is that a problem with the new API? Is it not reading Project: as the project namespace (with https://analytics.wmflabs.org/demo/pageview-api/ it auto-corrects to Wikipedia:)?
Nov 16 2015
Nov 12 2015
@Aklapper Where is the template for doing this? Would love to add a link, but not seeing a readily available tool for signalling that (did a little digging in the infobox, but didn't find it).
The workflow @Nikkimaria mapped has been documented at (private Google doc) https://docs.google.com/document/d/1eLHwmsACjY5fXG14Q-BrZDAhy-u-aMVEDgo4Zkr4Cvg/edit#heading=h.83ipbxzh48l5
Hi @Aklapper, this is based on her external bid to develop the tool. I will share it with you privately.
Nov 6 2015
Nov 2 2015
Oct 27 2015
This is of interest to the Wikipedia Library: some of our business case for donations from publishers is the traffic case, and in the long run this will be an important element of arguments that support more Open Access sourcing. I imagine GLAM-Wiki would also be interested in accuracy for this kind of referral data.
Oct 26 2015
Oct 23 2015
@Mrjohncummings you might want to update your request, then, to reflect the need for a tool or a more transparent interface for this data.
@Nemo_bis - there is no easy way to get at this data through either the Commons interface or a tool. I think what he is requesting is a tool or visible stat that helps improve the visibility of these numbers/stats. Something like a stats.grok equivalent.
Oct 22 2015
This, in part, is related to our new request at https://phabricator.wikimedia.org/T115119 .
Oct 19 2015
@Aklapper I was trying to do it with the whole of the description: the Internet Archive is frequently a tool used in reference to other strategies for fixing materials on wiki. I want to be able to give our external partners at IA the opportunity to review everything that's related to them, to get a sense of our technical interconnectedness. Sorry for not responding sooner; it's been a busy week.
Oct 16 2015
Oct 13 2015
@Aklapper It appears that I can't create a Herald rule that automatically adds tasks to the project (only personal subscriptions) if they contain "Internet Archive", "archive.org", or "Wayback Machine". Can you set that up? There are likely not many other situations where I would need this kind of permission.
@Halfak created a schema at https://phabricator.wikimedia.org/T115119 that would improve Special:LinkSearch so that we can keep track of external link changes in EventLogging, which would make the visualization and development of this functionality easier.
Brilliant! Thanks so much!
*TWL is facilitating these liaisons; it's largely handshake agreements right now on very low-commitment projects: there will be more formal documentation as it progresses.
@Aklapper We are facilitating a number of liaisons with IA, including https://phabricator.wikimedia.org/T112881 and https://phabricator.wikimedia.org/T115224. And they are also working on the video project, which TWL has no involvement in: https://phabricator.wikimedia.org/T99216 .