Fri, Feb 15
If we share these on-wiki but then go with a solution like a browser extension, is that going to be a confusing shift?
Thu, Feb 14
Wed, Feb 13
It's maybe worth pointing out in reference to Max's comment that we took the existing code and Max just adapted it to our purposes.
Tue, Feb 12
The accuracy of the data is important, but only to the point where a non-wiki-expert human would care. That is, it might be more confusing to try to explain to a user what we mean by "suppressed or revision-deleted" than to say something like "due to the intricacies of how wiki edits work, these numbers have a 3-5 point margin of error."
Fri, Feb 8
Seems like we could maybe just use the next deploy as an opportunity to deploy to Stretch.
Thu, Feb 7
We were talking about two different things. In fact, in this case, a unit test is likely the better of the two options because I agree that QA on this with real data is probably impossible (or too much work to be reasonable).
Could you possibly fudge some fixture data that causes the problem?
I've commented on the Github PR.
That is even easier than what I was thinking. I like that we aren't touching the translations array from SvgFile at all.
That looks easier than we thought.
This shows me what I couldn't quite work out from the code. In some instances, the name collision would cause a reversal of the effective block?
Yes, I think we do want all the fonts. From the patch, it seems just as easy to include them as not.
Wed, Feb 6
@jmatazzoni Is this the question we talked about earlier this week? It seems like it. In that case, Moriel said it better than I could have. If you were asking me something different, let me know.
I see it both ways. Refreshing over a dozen times, I see the image about half of the time. Sometimes, it does appear on the first load of a given URL and sometimes it doesn't.
Can you describe what the effect of this would be (before and after your change) if the actions didn't match?
Tue, Feb 5
I was able to test this briefly locally, and it worked for a time, but as @Niharika says, the rendering seems to be non-functional now. I haven't had a chance to dig into any errors yet.
I agree with @Tchanders that it makes sense to have this in one place on save.
Mon, Feb 4
If we all agree with @Tchanders's solution, we can create a few tasks to cover that work. I like the simplicity, at least in reasoning, of this approach. I imagine the code might get a bit squirrelly. However, being able to easily describe how this works means we have a clearer path to build it, which is valuable.
Thu, Jan 24
Wed, Jan 23
There is an export CSV function for each graph, so you'll get each wiki's data separately. You may need to log in to see it.
Jan 23 2019
That's a good idea @Huji, if it's not too much work.
Jan 22 2019
I've done some investigation and I cannot find any of the code in the "revert" patch in the code path where the exception is thrown. The dates are quite compelling.
Middle age? hahahaha
Jan 21 2019
Is that a parameter of the OAuth permissions we request or is it something more?
Here's how I understand it: The page IDs will be used to find the users that have edited those pages within the timeframe of the event. Then, with this list of users, we can do what we need to find the 7-day retention.
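A rough sketch of that computation, as I understand it (the function name and the shape of the edit data are assumptions for illustration, not the actual Event Metrics schema):

```python
from datetime import datetime, timedelta

def seven_day_retention(edits, page_ids, event_start, event_end):
    """edits: iterable of (user, page_id, timestamp) tuples (hypothetical shape).

    A user is a participant if they edited one of the event's pages during
    the event window; they count as retained if they made any edit in the
    7 days after the event ends.
    """
    participants = {user for user, page, ts in edits
                    if page in page_ids and event_start <= ts <= event_end}
    window_end = event_end + timedelta(days=7)
    retained = {user for user, page, ts in edits
                if user in participants and event_end < ts <= window_end}
    return len(retained) / len(participants) if participants else 0.0
```

Whether retained edits must also be on the event's pages (here they needn't be) is one of the details we'd need to pin down.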
Jan 18 2019
I've done some cursory research on this topic.
In our engineering meeting and in my discussions with Joe, we agreed to continue working toward this goal. However, we will deprioritize the optimizations for this feature until we have more of the metrics being generated. The engineers will continue to build those metrics with this feature in mind so we don't have to do a lot of rework when we come to it.
Jan 17 2019
Yeah, I'd agree. That's why I said that it isn't "required" to do this work. However, as you say, if these statistics are needed in any form, doing them with statsd does make sense.
I'd agree that this is not required for SVG Translate. Is there a task to create the log file you describe?
Jan 16 2019
We added piwik support for Event Metrics which should give us the data this task asks for.
Jan 15 2019
You and your laptops.
These issues are certainly real but seem somewhat theoretical. In practice, I struggle to believe that these scenarios will happen on a regular basis.
Jan 14 2019
According to the wiki linked in the description, this change appears to have happened.
Jan 10 2019
It is incorrect for me as well; I see the same thing you do.
We will probably want to revalidate. Browsers change pretty fast even if our code hasn't.
Should we mark this as stalled and put it back in the queue?
Jan 9 2019
@Samwilson That does seem unrelated. It is interesting because we largely don't intentionally mess with coordinates or any kind of positioning.
Jan 7 2019
I apologize. I didn't look closely enough at the content of the patch.
@Samwilson Should we mark this as in-sprint and in-progress and assigned to you?
Jan 4 2019
We will definitely want to be able to see how this changes over time. Does that mean we'd have to save the data someplace? Or will we be able to pull whichever time period we want when we want to?
It sort of depends on what that time frame is and how you want to compare. For instance, if you need to know that on the first Tuesday of January 2018 there were 125 thingamabobs and on that same day in 2019 there were 116 thingamabobs, that necessitates us using a particular technology. If instead, you want to see something like, "It looks like there are about 10% more thingamabobs each week for the last 6 weeks," then we might use a different technology.
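To illustrate the second style, here's a toy sketch (the numbers are invented) that estimates average week-over-week growth from whatever recent totals we can pull on demand; the exact-snapshot style, by contrast, requires us to have stored the count for each specific date somewhere we can query later:

```python
def avg_weekly_growth(counts):
    """Average week-over-week growth rate for a list of weekly totals."""
    ratios = [later / earlier for earlier, later in zip(counts, counts[1:])]
    return sum(ratios) / len(ratios) - 1

# Made-up thingamabob totals for the last 6 weeks:
weekly_counts = [100, 110, 121, 133, 146, 161]
growth = avg_weekly_growth(weekly_counts)  # roughly 10% per week
```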
Jan 2 2019
Great! I actually did try it earlier and forgot to come back to update here.
I'm noting this for implementation.
When do people get stats? Specifically:
Track whether people get stats during the event (by downloading, or by visiting/updating either the Event Summary or Edit List pages, I suppose?). Vs.
After the event, how often and for how long do they get stats (i.e., are they continuing to use the impact metrics we've created)?
@jmatazzoni I have a few clarifying questions below. As this is in our agenda today, feel free to answer in that meeting but we should record the results/decisions in this task.
@Prtksxna I think we fixed the Docker stuff at the offsite unless it has broken again. I'll give it a try shortly and see if it's working with the current master.
Dec 21 2018
I hadn't considered just suppressing the errors in that way. Most (all?) of the errors are likely to not be user-fixable. It's probably smart to hide them as much as possible and warn the user only when we know we cannot possibly allow them to continue.
Dec 20 2018
I think to support Prateek's design, we'd need to avoid using the handy model saving functions and write some direct DB insert/update code in the model to handle each participant individually. Is that what this looks like @MusikAnimal?
Dec 19 2018
Invalid because it duplicates T204904.
It's not clear that this is the direction the team wants to move for this project. More discussions to be had before this can or should be estimated.
We should discuss this in the Engineering Meeting to create concrete tasks that can be estimated.
Dec 18 2018
I read through the Symfony docs and I didn't see any reasonable way to do this. Are there hooks into the routing features that I'm not looking at?
Dec 17 2018
@MusikAnimal fixed this as I was writing the task.
@MusikAnimal Cool. I didn't have the full picture. All good.
Right now, it looks like all of the work that could cause this error happens in SvgFile::analyse(). That function sets the values for some class variables. It seems like we have a few options:
- change all of analyse(), breaking it apart into multiple functions and redoing how we use those class variables (ugh!);
- or do a try/catch everywhere it's called;
- or have the analyse function also set a class variable, something like parseErrors, which we can check after calling analyse to see what happened;
- or go with your idea here.
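A minimal sketch of the parseErrors option (the class and method names here are hypothetical stand-ins, not the real SvgFile API):

```python
class SvgAnalyser:
    """Hypothetical stand-in for SvgFile: analyse() records problems
    in self.parse_errors instead of throwing."""

    def __init__(self):
        self.parse_errors = []

    def analyse(self, svg_text):
        self.parse_errors = []
        if "<svg" not in svg_text:
            self.parse_errors.append("missing <svg> root element")
            return
        # ... real analysis would set the other class variables here ...

# Callers check the error list after the fact; no try/catch needed:
analyser = SvgAnalyser()
analyser.analyse("<html></html>")
if analyser.parse_errors:
    pass  # report the errors to the user instead of continuing
```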
I thought this would be really easy. Then, I realized that Symfony routes are not specified with regex. As best I can tell, this is going to be a bit more involved than it seemed on the surface.
@MusikAnimal It might make sense to have both a job_started and a job_status in the database. Then, you could have logic that says, "If a job is 'started' and has been running for X minutes, mark it as failed." That's a sort of poor man's error catcher and cleanup logic. Ideally, the job would catch a timeout and report itself as failed, but that might be easier said than done.
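The cleanup half of that could look something like this (the 30-minute threshold and the job-record shape are assumptions, not the actual schema):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(minutes=30)  # assumed cutoff; tune as needed

def mark_stale_jobs(jobs, now):
    """jobs: list of dicts with 'status' and 'started' keys (hypothetical schema).

    Any job still marked 'started' past the cutoff is presumed dead and
    flipped to 'failed' so the UI can report it.
    """
    for job in jobs:
        if job["status"] == "started" and now - job["started"] > STALE_AFTER:
            job["status"] = "failed"
    return jobs
```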
Dec 14 2018
@nettrom_WMF Are you using IPython alone or within Jupyter?
@Reedy Thanks for posting on the talk page. I hope that helps some folks figure out their issue if they run into it.
Dec 13 2018
@chasemp I've linked the accounts now.
The upshot of all of this is the code will live in MediaWiki Core.
Dec 12 2018
Dec 11 2018
I wonder if we could have access to turn this on for the grantmetrics database: https://mariadb.com/kb/en/library/slow-query-log-overview/
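For reference, enabling it would be a small server-side config change along these lines (the file path and one-second threshold are illustrative; on shared infrastructure this would need DBA/admin access rather than anything we can do ourselves):

```
[mysqld]
slow_query_log      = 1
long_query_time     = 1   # seconds; queries slower than this get logged
slow_query_log_file = /var/log/mysql/mariadb-slow.log
```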
Dec 10 2018
Is this the RSVG we are talking about? https://www.npmjs.com/package/rsvg
Dec 5 2018
Thanks @Aklapper. I'm still learning about the layers of communication that exist around here.
Dec 4 2018
Is there community input required to +2 @Reedy's patch for MW core? If so, we should start that now. I guess that would happen on Meta?
Nov 30 2018
Noting here that we are no longer going to provide the Contributions report.
Interesting. It does make me wonder if we should message the user about partial translations existing. Had we decided previously that we wouldn't let users choose partial languages from which to translate? It makes sense because if there's no text then the translator doesn't know what to write.