Developer summit session: Pageview API from the Event Bus perspective
Closed, Resolved · Public

Description

Goal

At the end of The Mediawiki Developer Summit, we want consensus on what data the community and WMF want to be able to query for. This task's purpose for the MWDS has expanded to talking about the Event Bus and data flows in general. See this comment for detail: T112956#1903466

More Information

  • Status of Discussion: Brainstorming, gathering short and long term requirements
  • Background Information: [1] and basically, any data requests ever made on analytics-l
  • Related Tasks: All tasks related to the pageview API are tagged {slug} in the title, but this discussion goes beyond the Pageview API

[1] https://phabricator.wikimedia.org/T44259

Background for the pageview API

Now that the much-anticipated [1] pageview API is finally real and running, we are reaching out to you - the people who need this data. Whether you build tools or research this fascinating movement of ours, we want to know where you think we should go next, both in the short term and the long term.

Now that we have a clear, solid infrastructure to both compute and serve data, our limits are only privacy, security, and budget. So come, join the discussion in the comments on this task. Help us know what data you need next.

Problem with the pageview API

We (team Analytics) have too much data to safely release everything. We need to know what use cases people have and which is the most important or common use case, so we can prioritize. The more data we release, the harder it gets to keep it safe from de-anonymization attacks. So finding out exactly what's important is crucial.

Background for the Event Bus

We have been working on a prototype to standardize data flows at WMF. We're collaborating with the Services team and you can see the discussion and progress on the main tracking task, T114443.

Event Timeline

Yes, that's not possible right now; the monthly data per article is pretty sizable and we decided to hold off on adding it for the moment. Would you do us a big favor and give it a shot by requesting daily data and summing up the numbers? I'd love to hear from you if that runs slowly or sucks for other reasons. Then we could use that as an argument that we have to add monthly data. To be clear, this is not hard to do technically; it just uses precious resources and we need a good use case.
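
For illustration, a minimal Python sketch of that daily-sum workaround, assuming the per-article endpoint's usual JSON response with an "items" list of daily "views" values (the article name and dates are placeholders):

    import requests

    # Hypothetical example: compute a monthly total by summing the daily numbers,
    # since the API does not offer monthly granularity per article yet.
    URL = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
           "en.wikipedia/all-access/user/{article}/daily/{start}/{end}")

    def monthly_views(article, start="2015110100", end="2015113000"):
        resp = requests.get(URL.format(article=article, start=start, end=end))
        resp.raise_for_status()
        return sum(item["views"] for item in resp.json().get("items", []))

    print(monthly_views("Albert_Einstein"))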

Thanks @Milimetric, http://wikifamo.us is now using the API and it seems to work fine for the moment. I will try to monitor and let you know if I can spot any issue. The next step for me to improve the performance of my app will be the possibility of querying multiple articles at once (some basic explanation of what has been done so far).

I don't know where this would fit in terms of priority, but something I was just looking for is what might be called "Referrer URL" in a web server log file. I'm not looking for any external information that would be a privacy concern. What I would like to see is which internal page links are highly used to access other pages, and which ones aren't. In other words, when looking at "What links here", I'd like to be able to find out which of those links are promoting the page, and which ones aren't.

This information could be used to determine pages that need to be improved, or for Wikiversity, which page formats users find to be useful and helpful, and which ones they don't. It might also be used on projects like Wikipedia to determine which pages should be combined, as they are always referenced together, and which pages have no consistent cross-viewing.

@Dave_Braunschweig

While the referrer use case falls outside the pageview API, it is certainly one we have looked at internally. If you are interested in that, you should e-mail analytics@ and someone can probably point you to the latest research in this regard.

I don't know the best place to report this, but while I had an average time of 200ms per request last week, today I am mostly at 1s. Is this because the service is starting to be used more? Is there any target time that is acceptable for this API?

@Symac:
Thank you for providing us this information; the service being new, we do not yet monitor response time.
The system is currently under pressure while we backfill data. I expect the current backfilling batch to end tomorrow, and will then let our system digest it slowly.
Please let us know if you see improvement in the coming days.
I'll make sure we post some information before the next round of backfilling.

Hi, thank you for the API; the interface is very nice and pageview stats without spiders are very helpful.

Is there a way to download all the data per page for one hour? Something similar to the old format http://dumps.wikimedia.org/other/pagecounts-raw/ but with user/spider/bot agent info.

I was using pageviews for the wikiscan.org website, which displays the most active pages for each day and month, like http://fr.wikiscan.org/date/201511/pages. Currently this is only for French Wikipedia, but the plan is to support most of the Wikimedia wikis. I need to rewrite the pageview code to better scale across multiple wikis; it's currently disabled, but there are archives with it, for example January 2015: https://web.archive.org/web/20150416220957/http://wikiscan.org/date/201501/pages (pageviews are in the first column).

I would like to use pageview data excluding spiders and bots, but I need some kind of bulk download for all projects like before, or at least per project.

We do have dumps in the same format as the old ones available here: http://dumps.wikimedia.org/other/pageviews/ [1]

The per-project hourly pageviews are available back to May. They're not split up by spiders vs. non-spiders, but that's something we can provide if the demand justifies it.

[1] we haven't announced this because we want to have a conversation with the community about which datasets to keep going forward (right now we have too many, and adding another risks making the rest less useful).
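
For reference, a small sketch of tallying one of those hourly files, assuming they use the same space-separated "project page_title count bytes" line format as the old pagecounts dumps (the file name is a placeholder):

    import gzip
    from collections import Counter

    def project_totals(path="pageviews-20151201-000000.gz"):
        # Sum the per-page counts into per-project totals for one hourly file.
        totals = Counter()
        with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
            for line in f:
                parts = line.split(" ")
                if len(parts) < 3:
                    continue
                totals[parts[0]] += int(parts[2])
        return totals

    print(project_totals().most_common(10))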

@Symac: The poor response times are likely due to lack of caching and thus going to storage every time. We are working on fixing that and will update this ticket when done. You can follow progress here: https://phabricator.wikimedia.org/T119886

We do have dumps in the same format as the old ones available here: http://dumps.wikimedia.org/other/pageviews/ [1]

The per-project hourly pageviews are available back to May. They're not split up by spiders vs. non-spiders, but that's something we can provide if the demand justifies it.

Thank you, I really would like to see the agent type in those dumps; this would be a good step toward more reliable pageview data. I've seen too many strange numbers; people don't care about spider and bot visits, they just want user data. For example, the most "viewed" article for French WP in 2012 was obviously wrong, and this was covered by the online press: http://www.slate.fr/story/66771/wikipedia-visites-houx-crenele
The best, imho, would be to add 3 columns to the dumps, one for each agent type, or at least provide separate files with user data only.

I'm not sure where to share this code with the community, so I'll add it here for reference. I created a Python script to generate monthly stats for en.wikiversity, and also functions to pull and tally individual page stats from a list of pages. The code is at:
https://en.wikiversity.org/wiki/MediaWiki_API/Pageview_API

To see sample output, the generated statistics list for November is at:
https://en.wikiversity.org/wiki/Wikiversity:Statistics/2015-11

To the developers, kudos! Running the monthly statistics in the past has typically taken two or three days to download and extract the data. The November run is under 30 seconds.

The new data in http://dumps.wikimedia.org/other/pageviews/ already exclude spider requests, so they contain user data only.
The way I'm reading @Milimetric's comment is that the amount of spider requests could be added as a separate metric if called for.
I would be (somewhat) interested to see the overall share of spider traffic per project (not per wiki), but no big deal at all.
We could do that with an internal Hive job using sampled data.

I will publish new pageview data per wiki/project using this new data feed, hopefully this week.

Running the monthly statistics in the past has typically taken two or three days to download and extract the data. The November run is under 30 seconds.

Fantastic!

I'm not sure where to share this code with the community, so I'll add it here for reference. I created a Python script to generate monthly stats for en.wikiversity, and also functions to pull and tally individual page stats from a list of pages. The code is at:
https://en.wikiversity.org/wiki/MediaWiki_API/Pageview_API

Dave, you can put that code up on github if you like, or gerrit, and I'd be happy to help by making comments. For example, the article data returned from all the endpoints is pure JSON now, so you can parse the output as JSON and just access the fields you need without the regular expressions you're using.
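
For instance, a short sketch of reading the per-article response as JSON rather than with regular expressions (the project, page, and dates are only illustrative):

    import requests

    url = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
           "en.wikiversity/all-access/user/Wikiversity:Main_Page/daily/2015110100/2015113000")
    data = requests.get(url).json()
    # Each item is a plain dict, so the fields can be read directly.
    for item in data.get("items", []):
        print(item["timestamp"], item["views"])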

To see sample output, the generated statistics list for November is at:
https://en.wikiversity.org/wiki/Wikiversity:Statistics/2015-11

To the developers, kudos! Running the monthly statistics in the past has typically taken two or three days to download and extract the data. The November run is under 30 seconds.

:) sweet, hope to make that even better with caching.

The new data in http://dumps.wikimedia.org/other/pageviews/ already exclude spider requests, so they contain user data only.
The way I'm reading @Milimetric's comment is that the amount of spider requests could be added as a separate metric if called for.

@Akeron: yes, sorry, Erik's totally right here. That data has spiders filtered out, but doesn't have the breakdown as you requested it.

I would be (somewhat) interested to see the overall share of spider traffic per project (not per wiki), but no big deal at all.
We could do that with an internal Hive job using sampled data.

Yes, I think this is what I was getting at with the decision to add it to the dumps or not. I think we should add it to the dumps only if so many people ask for this data regularly that it's cheaper to add more storage capacity than it is for us to just give data to those interested.

@Akeron: yes, sorry, Erik's totally right here. That data has spiders filtered out, but doesn't have the breakdown as you requested it.

Please note that we filter out self-identified spiders. We know we have quite a bit more bot traffic that is currently tagged as coming from users but we are not tackling that problem quite yet. cc @Milimetric, @ezachte

@Akeron: yes, sorry, Erik's totally right here. That data has spiders filtered out, but doesn't have the breakdown as you requested it.

Please note that we filter out self-identified spiders. We know we have quite a bit more bot traffic that is currently tagged as coming from users but we are not tackling that problem quite yet. cc @Milimetric, @ezachte

We should be more accurate here. The pageview definition we use refers to "spider"s as user agents we identify to be web crawlers. We refer to "bot"s as user agents that self-identify as Wikimedia bots. We are fairly certain our spider detection is accurate, but we improve it whenever we see a problem. Our bot detection is a collaboration with the community of bot operators, and that's not yet accurate.

@Akeron: yes, sorry, Erik's totally right here. That data has spiders filtered out, but doesn't have the breakdown as you requested it.

Good, this is enough for my current needs.

Please note that we filter out self-identified spiders. We know we have quite a bit more bot traffic that is currently tagged as coming from users but we are not tackling that problem quite yet. cc @Milimetric, @ezachte

Maybe you could use JavaScript to generate the GET, like Google Analytics does with createElement('script'), then point the src to an analytics server; some spiders spoof the UA but they don't execute JS.

Also, something I would like to see in the future is unique IP hits, at least over one hour, so that a single IP could not generate more than 1 hit for a single page in one hour. Currently it's very easy to add thousands of hits to any page with a regular Internet connection.

@Slaporte just suggested another feature that would make life easier: It would be nice if the Top API included the Page ID and the Namespace ID. The ID helps look up articles more easily... and the Namespace is useful for filtering out non-articles (especially when looking at other languages).

That kind of richer data is something we absolutely want to work on. But as of now we have no easy way to bring MediaWiki data into a sane queryable store.
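
In the meantime, one possible client-side workaround is to join the Top API's titles against the action API, which already returns page IDs and namespaces. A hedged sketch, assuming the usual 50-title batch limit for anonymous action API requests:

    import requests

    TOP = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/top/"
           "en.wikipedia/all-access/2015/11/01")
    top_articles = requests.get(TOP).json()["items"][0]["articles"]

    def page_info(titles):
        # Look up page ID and namespace for each title via the action API.
        info = {}
        for i in range(0, len(titles), 50):
            r = requests.get("https://en.wikipedia.org/w/api.php", params={
                "action": "query",
                "format": "json",
                "titles": "|".join(titles[i:i + 50]),
            }).json()
            for page in r["query"]["pages"].values():
                info[page["title"]] = (page.get("pageid"), page["ns"])
        return info

    titles = [a["article"].replace("_", " ") for a in top_articles[:100]]
    for title, (pageid, ns) in page_info(titles).items():
        print(pageid, ns, title)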

I'm going to copy in a comment from T119593: Define the list of "must have" sessions for WikiDev '16 and then reply here, since I think this is a better context for a discussion:

@Milimetric DevSummit sessions are intended for discussion and decision making, not for mere presentations. To me, things that need discussion are good session topics, while plain heads-up presentations would be better candidates for lightning talks.

I love you, @daniel, but I'm gonna scream :) I am trying to say that this session *IS A DISCUSSION*. I don't understand what about it is leading people to think otherwise, and I'd really appreciate some help. Thank you and sorry for the screaming.

@Milimetric, I don't mind the "screaming", but I do mind lack of clarity on goals. What we have here in this Phab task is an unstructured conversation that you are asking to manifest itself as an unstructured conversation in real life. If the goal of a session is "gathering short and long term requirements", where are you gathering them to? This task? A wiki page? Etherpad? New Phab tasks? Your head? I think we'll all have an easier time advocating for this session with some clarity as to what list(s) we plan to build as a group, how we plan to build it/them, and what we plan to do with the list(s).

The goal is "At the end of The Mediawiki Developer Summit, we want consensus on what data the community and WMF want to be able to query for." The other sections, including "Status of Discussion" were added to keep in line with the format that Quim outlined in the call for these proposals. Brainstorming is the current *status*, not the goal.

But you'd be right if you're thinking the goal statement is also a bit vague. I'll try to expand and maybe you can help move the parts that make sense into the task description.

I basically see a connection between this API and how we as an organization will operate in the near future. We consume huge amounts of data, process it as a stream or in batches, optionally sanitize and aggregate, and serve the results. These steps are generally the same whether they're taken by content analysis tools like ORES, statistics tools, static dumps processing, job queues like the emerging Event Bus, fundraising decision tools, or more things that I'm probably not aware of. It seems like a waste for all of us to work in silos without ever talking to each other and seeing how our work is related. I'm not suggesting that we should all drop what we're doing and adopt One True Platform, *far* from it. But it would be nice to talk about the flows of data at the foundation, and what our plans are for organizing and querying these flows. The result of this discussion should be that we all are more connected to each other's work and know what parts of each other's projects might make sense to share. I want to capture this result in a Wiki article that details the organization's current approach to data flows.

I think this discussion can be quite broad, but has a fixed deliverable that seems pretty reasonable and likely to happen. And if the discussion gets too far off topic, we can use the pageview API as a way to ground it back to specifics.

I'll just add my own quick perspective on this. The previous method of accessing log files, which required downloading all data for all projects by hour, was research-prohibitive. The Pageview API is a significant improvement for quickly getting individual pages or top 1000 results.

Hourly and daily results are fine. As others have mentioned, monthly rollups would be good. On the smaller wikis, usage changes greatly from day to day and sometimes even month to month. For example, Wikiversity has a strong academic cycle, with usage peaks in November/December and April/May.

For us, year over year comparisons by month are more meaningful for long term analysis than month to month comparisons. My biggest request would be to backload the data to January 1, 2015 (or earlier), so that we can do year-over-year comparisons with Pageview data.

Thanks for your consideration.

For us, year over year comparisons by month are more meaningful for long term analysis than month to month comparisons. My biggest request would be to backload the data to January 1, 2015 (or earlier), so that we can do year-over-year comparisons with Pageview data.

Thanks very much, Dave. Sadly we don't have this same quality data (broken up by mobile / desktop and spider / user, etc.) any further back than May 2015. We do have the other data with fewer dimensions, and we could load it but we think it'll be hard to explain how data coming from the same API is actually really different from one month to the next. Another idea we had was to keep generating the "bad" data so people could use that to compare, and then at some point when we have enough good data, turn off that old stream.

Would adding the old data with a different tag work? The historical / "bad" data could be loaded with a "historical" or "deprecated" tag. And if you're willing, load through June 30, 2016. That's a 13/14 month overlap, allowing full comparison of old and new measurements. After June 2016, only new data would be available.

First, my apologies for being so grumpy in my last response. I think there is likely some very good conversation that can come from this.

I think this discussion can be quite broad, but has a fixed deliverable that seems pretty reasonable and likely to happen. And if the discussion gets too far off topic, we can use the pageview API as a way to ground it back to specifics.

What's the fixed deliverable?

I was tempted to leave you just to answer that one question, because we really need a crisp answer to that question for this to be worth anyone's time. However, I suspect I know what your answer is going to be:

At the end of The Mediawiki Developer Summit, we want consensus on what data the community and WMF want to be able to query for

That leads me to several questions: why can't you do this online? What tech talk should people have watched before attending this session? What do you expect people to have read to be useful participants in the conversation? If you are doing this online, where is the document describing what you hope/expect the consensus to be?

First, my apologies for being so grumpy in my last response. I think there is likely some very good conversation that can come from this.

Oh no apologies necessary at all, and I very much appreciate the help with my phrasing and explanation. Before you reached out, I was feeling a bit like a foreigner playing baseball: the rules weren't very clear and no matter how hard I tried, I wasn't getting anything right :)

What's the fixed deliverable?

I was referring to "I want to capture this result in a Wiki article that details the organization's current approach to data flows." So the Wiki article is the deliverable. I will be recording the session in some way and starting the document based on the meeting. But I want it to be a living document, and I think, if we get the participation I'm hoping for, a very useful one.

I was tempted to leave you just to answer that one question, because we really need a crisp answer to that question for this to be worth anyone's time. However, I suspect I know what your answer is going to be:

At the end of The Mediawiki Developer Summit, we want consensus on what data the community and WMF want to be able to query for

That leads me to several questions: why can't you do this online?

The consensus on what kind of data we want to be able to query for is sort of the problem statement. So I only see that as part of the meeting. I think it's important for everyone working on similar problems to hear each other in real time. Because, as we can see from this discussion, online conversations in the middle of hectic drama and ends of quarters don't always yield the best results.

What tech talk should people have watched before attending this session?

I'm actually mostly interested in sharing experiences with other people working on data flows. So they don't have to prepare at all, other than to know their own domain and be willing to listen and think critically about how others are approaching similar problems. Even if at first they don't think the problems are super similar.

What do you expect people to have read to be useful participants in the conversation?

Same thing as above. I think we'll benefit from the real-time exchange of ideas and listening more than any knowledge or decision that, I agree, could be achieved asynchronously online.

If you are doing this online, where is the document describing what you hope/expect the consensus to be?

I think this discussion is helping me formulate my thoughts. When we're done and if you agree we have something useful here, I'll summarize it in a document. But it seems like that would be more noise if this session ends up failing to get approval.

@Legoktm:

There are 50+ tasks I tried to read, so I skimmed a good number of them; apologies if I missed anything. The main things I saw explaining what the session was about were T112956#1725965 and T112956#1761944, which I read as "presentation and Q&A", which is what tech talks are great for.

I see, these comments are definitely outdated and represented a misunderstanding by the team early on. Hopefully I have corrected this in the task description.

Furthermore, it looks like a lot of people have already started to engage on the Phabricator task (which is great!), so I'm left wondering, what's left to discuss in person? How valuable will an in person session be compared to what's already going on right now?

This is a valid question and I don't want to push too strongly here; it's up to the organizers what gets included. But my case is as follows:

We (Analytics) explored different ways of serving huge amounts of data to the public and we think our experience here is going to help as our APIs get richer. We want to answer two main questions with this session. First, are we duplicating infrastructure and investigative work? Second, what kind of data does everyone want to expose via APIs and how do we want to integrate this data into our products? IRC / Phabricator are great, but I feel like a focused discussion with a broader audience would be more likely to address these questions efficiently. Right now the discussions are restricted to people who have been following the Pageview API specifically, and I'd love to hear from a much broader audience. I have the distinct feeling that I'm trapped in a bubble and I want this session to burst that bubble.

Honestly, if you're looking for a "broader audience", the developer summit is exactly the wrong place. I would bet that most of the people who will be in attendance are already aware of the pageview API, through reading wikitech-l/wikimedia-l/tech news/etc. If you want a broad audience, it's definitely not going to be at an in person event, it'll be by messaging and talking with local communities at village pumps, contacting specific mailing lists, talking to GLAM orgs, etc. Have those avenues been explored?

Honestly, if you're looking for a "broader audience", the developer summit is exactly the wrong place.

Good point. I *believe* that the proposal here is to use the attendees as a *means* to reach a broader audience. For instance, if one of the attendees produced a very striking tool or visualisation, the thing could hit the news and hence convey the message about the underlying infrastructure to a broader audience. Am I correct that this is the goal?

Honestly, if you're looking for a "broader audience", the developer summit is exactly the wrong place. I would bet that most of the people who will be in attendance are already aware of the pageview API, through reading wikitech-l/wikimedia-l/tech news/etc. If you want a broad audience, it's definitely not going to be at an in person event, it'll be by messaging and talking with local communities at village pumps, contacting specific mailing lists, talking to GLAM orgs, etc. Have those avenues been explored?

Breadth is not just about numbers, it's about variety. Until now, the people who know about the pageview API are people interested in counting pageviews and a few people interested in integrating that data into the front end of our projects. And almost nobody who's working on very similar data flow problems (collect, compute, serve data). So I want a more varied conversation. And I think the developer summit is exactly the place where I can find the most variety. All other suggestions, like village pumps, specific mailing lists, etc. are the opposite of what I think of as breadth. They sound like approaches one would take if one were doing a depth-first search.

Honestly, if you're looking for a "broader audience", the developer summit is exactly the wrong place.

Good point. I *believe* that the proposal here is to use the attendees as a *means* to reach a broader audience. For instance, if one of the attendees produced a very striking tool or visualisation, the thing could hit the news and hence convey the message about the underlying infrastructure to a broader audience. Am I correct that this is the goal?

Not really, @Nemo_bis, though of course that would be an awesome side effect :) The goal is not so much about the pageview API specifically as about the more general challenge of data flows and how the Wiki movement is tackling it.

Honestly, if you're looking for a "broader audience", the developer summit is exactly the wrong place. I would bet that most of the people who will be in attendance are already aware of the pageview API, through reading wikitech-l/wikimedia-l/tech news/etc. If you want a broad audience, it's definitely not going to be at an in person event, it'll be by messaging and talking with local communities at village pumps, contacting specific mailing lists, talking to GLAM orgs, etc. Have those avenues been explored?

Breadth is not just about numbers, it's about variety. Until now, the people who know about the pageview API are people interested in counting pageviews and a few people interested in integrating that data into the front end of our projects. And almost nobody who's working on very similar data flow problems (collect, compute, serve data). So I want a more varied conversation. And I think the developer summit is exactly the place where I can find the most variety. All other suggestions, like village pumps, specific mailing lists, etc. are the opposite of what I think of as breadth. They sound like approaches one would take if one were doing a depth-first search.

I think we may have different understandings of what "breadth" means. At WMDS you're most likely to find developers who speak English, are comfortable with face-to-face meetings, can afford to take a week off of work/school, etc. If that subset of people still provides enough variety, great! But it's not what I would call breadth.

The goal is not so much about the pageview API specifically as about the more general challenge of data flows and how the Wiki movement is tackling it.

I'm not really sure what "challenge of data flows" really means, but maybe this session needs to be re-titled?

It would be nice if the API was available from api.php on wikis, like the rest of the APIs are, instead of in a separate place you have to know about. Similarly, the "web interface" for it should then be in the usual [[Special:ApiSandbox]] rather than somewhere on a separate site. It makes life rather hard to remember what is where when everything is scattered across different places rather than gathered in one.

It was mentioned in IRC that the current setup exists because the data are not stored in MW at the moment, but, well, I believe it is possible to devise an extension which would attach them to MW.

It would also be nice to have XML output (api.php gives XML output, for instance) so that people who are more used to parsing or reading XML do not need to adapt.

These problems are common to all the 'new' 'REST' APIs...

It would be nice if the API was available from api.php on wikis,

I agree, that would be nice.

It was mentioned in IRC that the current setup exists because the data are not stored in MW at the moment, but, well, I believe it is possible to devise an extension which would attach them to MW.

Such an extension is generically possible; for example, the ApiFeatureUsage extension fetches data from an Elasticsearch instance that was originally inserted by Logstash.

I don't know enough about the architecture behind this particular feature to be able to guess how feasible it would be in this case, though. For example, can the backend be safely accessed from the apaches that handle api.php? Is there a way to efficiently query the views for up to 5000 pages rather than hitting https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/{project}/{access}/{agent}/{article}/{granularity}/{start}/{end} for 5000 values of {article}?

I'd be willing to assist someone from Analytics if they're interested in putting together such an extension. Looking at the available API in restbase, I'd recommend three modules in the extension: one query prop module to be equivalent to the "per-article" mode (and taking the {article} parameter from the standard query pageset mechanism), one query list module (that could be used as a generator) for "top" mode, and one query meta module for the "aggregate" mode. Except maybe for "aggregate" mode where there's the possibility to query for all wikis, {project} could just be the current wiki.
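
On the question above about querying up to 5000 articles, the practical client-side approach today is to issue the per-article requests concurrently; a rough sketch (the article list and dates are placeholders):

    import requests
    from concurrent.futures import ThreadPoolExecutor

    URL = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
           "en.wikipedia/all-access/user/{article}/daily/2015120100/2015120700")

    def views(article):
        # Summed daily views for one article; 0 if the API returns an error or no data.
        r = requests.get(URL.format(article=article))
        if r.status_code != 200:
            return 0
        return sum(item["views"] for item in r.json().get("items", []))

    articles = ["Albert_Einstein", "Marie_Curie", "Ada_Lovelace"]  # could be thousands
    with ThreadPoolExecutor(max_workers=10) as pool:
        for article, n in zip(articles, pool.map(views, articles)):
            print(article, n)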

one query prop module to be equivalent to the "per-article" mode

This should probably be the first step. Can you please file it as subtask of T43327: Add page views graph(s) to MediaWiki's info action for Wikimedia wikis?

As a general response to the pageview API on MW questions:

  • There are good reasons for the API to be available separately. Our statistics are not always tied to a specific wiki project, for example.
  • +1 for providing a bridge to the pageview API from MW, if people find that useful. I'd need some concrete evidence that enough people would actually use this to justify the effort. Once we have that evidence, @Anomie I'm happy to work with you to get it done.

I think we may have different understandings of what "breadth" means. At WMDS you're most likely to find developers who speak English, are comfortable with face-to-face meetings, can afford to take a week off of work/school, etc. If that subset of people still provides enough variety, great! But it's not what I would call breadth.

As far as I'm concerned, anyone speaking Romanian, Spanish, French, Portuguese, Tamil, Hindi, and even a bit of German can participate. We have resources to make that work, but it'll definitely be trickier. And maybe the WMDS isn't the best place to get *that* kind of breadth, but I'm not really sure *any* venue would give me that kind of breadth. I'm also not really looking for breadth of opinion as in going and asking a professional Chef what they think about data flows. That's not really relevant. I'm interested in the breadth of opinion of everyone dealing with data flows specifically. You haven't given me reason to believe the vast majority of those people will be somewhere other than WMDS.

The goal is not so much about the pageview API specifically as about the more general challenge of data flows and how the Wiki movement is tackling it.

I'm not really sure what "challenge of data flows" really means, but maybe this session needs to be re-titled?

I shall re-title this shortly and post an explanation. Basically, the pageview API is just the serving end of a flow of data through our systems. Approaches like Confluent try to standardize these flows: http://docs.confluent.io/ We're working on such an approach as part of the Event Bus project. So that's what I'll change this session to reflect.

Milimetric renamed this task from "Developer summit session: Pageview API overview" to "Developer summit session: Pageview API from the Event Bus perspective". Dec 24 2015, 2:57 PM

I've re-titled this session to more accurately describe what we'd like to discuss. We want to continue talking about what we've been calling Event Bus (T114443). I could, for example, show how we could migrate the pageview API to use the Event Bus prototype we have running. And someone else working on a different data flow, like ORES, could discuss how they could adapt *that* to use the Event Bus. We saw a lot of interest in the Event Bus when @Ottomata first talked about it, and we want to basically check in with that interest.

In the Event Bus RFC meeting (E86), @ori says: "The success criterion for this system (and this project) is its ability to unify the set of partial and divergent implementations that currently exist. The work of refactoring / consolidating specific existing implementations should not fall exclusively on any one team; it is a joint responsibility."

I think that's basically why this is a complicated subject. Because it does require us to step outside our normal silos and collaborate. Let's get together and figure out how that's worked so far and where we go from here.

p.s. I should explain this phrase "data flow" that I keep using. For the pageview API, the data flow is roughly:

  • Varnishes log web requests
  • varnishkafka produces those logs to a Kafka topic
  • Camus buckets those logs and writes hourly batches to HDFS
  • Oozie runs Hive to aggregate that data (this takes a few steps)
  • aggregated data is loaded into Cassandra
  • pageview API serves data via RESTBase

So, generally, a data flow is the collection, processing, and serving of one or more logical types of data. I personally think about this pretty generic concept the same way the confluent platform does: http://docs.confluent.io/
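
As a purely illustrative toy stand-in for the aggregation step in the middle of that flow (not the actual Oozie/Hive jobs), assuming parsed request-log events as input:

    from collections import Counter
    from datetime import datetime

    def aggregate_hourly(events):
        # events: iterable of (iso_timestamp, project, page) tuples, e.g. parsed request logs.
        counts = Counter()
        for ts, project, page in events:
            hour = datetime.fromisoformat(ts).strftime("%Y-%m-%dT%H:00")
            counts[(hour, project, page)] += 1
        return counts

    sample = [
        ("2015-12-01T14:05:09", "en.wikipedia", "Main_Page"),
        ("2015-12-01T14:59:58", "en.wikipedia", "Main_Page"),
        ("2015-12-01T15:00:01", "en.wikipedia", "Main_Page"),
    ]
    for key, n in aggregate_hourly(sample).items():
        print(key, n)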

  • +1 for providing a bridge to the pageview API from MW, if people find that useful. I'd need some concrete evidence that enough people would actually use this to justify the effort. Once we have that evidence, @Anomie I'm happy to work with you to get it done.

Just to clarify this. I really *do* mean I'm happy to help with this, and I can help gather this evidence. I just like justifying my own projects to myself since I've been burned too many times by bad product management [1]. The evidence I'm talking about is not like six sigma rigorous proof or anything. I just mean, can we find like five to ten people that will say this is actually useful.

And as a general FYI, I'm not a political beast. I try to say what I mean and mean what I say.

[1] bad product management in my past life as a consultant, not at WMF with the exception of wikimetrics which I take a lot of the blame for.

I don't quite see how EventBus is connected to pageviews. I am not aware of any plans to hold page view related events in EventBus. High-volume request logs will remain in the existing analytics Kafka cluster. Could you clarify?

It would be nice if the API was available from api.php on wikis,

I do not see this as a good thing (maybe I am missing something here?). From an API design standpoint we certainly do not want a "1 API to return all kinds of data". It makes sense to have different endpoints for different types of data, and wiki content per project is really not related to pageviews. Having a different endpoint gives us the freedom to have a different response envelope, URL structure and error codes.

I do not see this as a good thing (maybe I am missing something here?). From an API design standpoint we certainly do not want a "1 API to return all kinds of data".

{{citation needed}}

It seems to me that people tend to prefer to use one API rather than multiple, as long as the one API can handle returning everything they need in a reasonably efficient manner.

It makes sense to have different endpoints for different types of data, and wiki content per project is really not related to pageviews.

But pageview data is related to wiki content. People very much want to know which content is viewed most often.

Having a different endpoint gives us the freedom to have a different response envelope, URL structure and error codes.

Why is that freedom a desirable or necessary thing for this pageview API? End users would likely find it more useful if the pageview API were to have a standard response envelope, url structure, and error reporting mechanism rather than something idiosyncratic.

In the case of api.php's "fetch something about many pages" model versus a potential restbase "fetch everything about one page" thing, different request formats are necessary since a non-REST-style request is probably impossible to cache-purge sanely while a REST-style request has difficulty with sensibly dealing with arbitrary sets of multiple objects and with limiting the facets of data actually returned to something besides "all" and "one thing". But I can't think how the cache-purge issue might apply to this pageview API, or how the response envelope or error codes would matter at all.

Why is that freedom a desirable or necessary thing for this pageview API? End users would likely find it more useful if the pageview API were to have a standard response envelope, url structure, and error reporting mechanism rather than something idiosyncratic.

Sorry, but following a REST convention is not idiosyncratic; the mw api doesn't do that now (not saying it should), and from a technical standpoint mixing response styles in one endpoint is an obvious technical smell that we should not need a citation to justify.

Also, please keep in mind that our technical stack has nothing to do with php+mysql, which is an entirely different matter.

Why is that freedom a desirable or necessary thing for this pageview API? End users would likely find it more useful if the pageview API were to have a standard response envelope, url structure, and error reporting mechanism rather than something idiosyncratic.

Sorry, but following a REST convention is not idiosyncratic; the mw api doesn't do that now (not saying it should), and from a technical standpoint mixing response styles in one endpoint is an obvious technical smell that we should not need a citation to justify.

You seem to have ignored my question in favor of an off-topic assertion that REST is a convention rather than a general guideline for making conventions and an even-more-off-topic comment about mixing response styles.

Also, please keep in mind that our technical stack has nothing to do with php+mysql, which is an entirely different matter.

That is also an entirely irrelevant matter. See the ApiFeatureUsage extension (already mentioned above) for another example of something that has nothing to do with php+mysql but is still integrated with the action API.

@Anomie: Sorry, but we are talking past each other; we can continue this conversation perhaps more efficiently in person. You stated that "pageview API has an idiosyncratic convention" and I was pointing out that no, not really. The mediawiki api works according to some technology convention that made sense when it was created and the pageview API uses REST to communicate its results and (leaving the question of the nature of the data aside) having different query schemes is enough of a technical argument to warrant a different endpoint.

That is also an entirely irrelevant matter. See the ApiFeatureUsage extension (already mentioned above) for another example of something that has nothing to do with php+mysql but is still integrated with the action API.

I see; then if you have a strong use case to wrap the API, it seems that there is already a technical way to achieve it.

@Anomie: Sorry, but we are talking past each other; we can continue this conversation perhaps more efficiently in person.

Being remote myself, I prefer to have conversations in places such as Phabricator where other people can see them too.

You stated that "pageview API has an idiosincratic convention"

I didn't say it has one; I was replying to your statement that "flexibility" is somehow a good thing, with a counterexample as to how it could be a bad thing.

and the pageview API uses REST to communicate its results and (leaving the question of the nature of the data aside) having different query schemes is enough of a technical argument to warrant a different endpoint.

OTOH, you still haven't given any reason that the pageview API actually needs a REST interface to function, versus that that's just the way it's currently implemented. But no one is suggesting removing the REST interface, the suggestion here is to make it possible to access the same data via the action API.

I just mean, can we find like five to ten people that will say this is actually useful.

I think a prop (batch) API would be quite useful; not sure about the other two. Some time ago I wanted to write a gadget to color links in an article based on their traffic but there was no reasonable API to use for that.

@Anomie: Sorry, but we are talking past each other; we can continue this conversation perhaps more efficiently in person.

Please don't resort to this. Our predominant form of communication is online. If the two of you are talking past each other, please use online communication to resolve it rather than waiting until you are face-to-face.

@Anomie's stated preference is for public conversation, and I think I understand why. If I were in his shoes, I would fear being badgered into accepting a suboptimal choice, and then have my private acquiescence later trotted out publicly as "well, you know Anomie, and I talked to him and even he agrees this is the best solution".

That said, I don't think it's strictly necessary for the two of you to have a public conversation to come to mutual understanding. Public conversation is way more difficult than private conversation, because doing it well either requires stating everything in a way that is difficult/impossible to misinterpret, or living with the consequences of potential misinterpretations. Wikimedians generally accept that as a cost of doing business, and many of us are forgiving of thinkos and strive to assume good faith, but the assumption of good faith is very uneven in our world.

A private conversation is going to need to be coupled with building trust that both parties will leave that conversation with mutual assurance that each of you will be reliable witnesses when discussing the topic.

Regardless of whether you choose to continue this conversation publicly or privately, could I ask that you continue it online rather than letting a misunderstanding fester until next week?

@Anomie: What is your question exactly, can you please restate it? (I am kind of lost in this back and forth)

In T112956#1906190, you stated that you didn't think that making this data available via api.php would be a good thing. The reasons you gave were:

  1. Having all data available via one API is bad design.
  2. Wiki content per project is not related to pageviews.
  3. Having a different endpoint gives freedom to have a different response envelope, url structure and error codes.

If I missed or mischaracterized any, please correct me.

I questioned all three of these reasons. Leaving aside my own digressions, the specific questions are:

  1. How is it bad design? People tend to find it more convenient to use one API rather than several different APIs with different conventions.
  2. How so? People have long wanted to know which content on the wiki is viewed most, and how often certain content is viewed.
  3. Two questions here:
    1. The pageview API seems simple enough that the specific response envelope, url structure and error reporting mechanism shouldn't really make any difference. I presume the restbase-based API is currently using whatever's standard for restbase; why would you want to change that?
    2. More specifically, what requirements does the pageview API have that are incompatible with the response envelope, url structure, and error reporting mechanism used by the action API?

In T119029, @GWicke says this proposal is "basically presentation / feedback sessions about specific APIs, which could probably also be effective in a break-out session". Given the robust conversation here, do you concur? If so, do you see a time on the schedule it would make sense to schedule this which would get the necessary participants together? As it stands, we don't have time scheduled for this outside of the Tuesday, 11:30am slot for T119029: WikiDev 16 working area: Content access and APIs (see the Tuesday schedule)

I don't quite see how EventBus is connected to pageviews. I am not aware of any plans to hold page view related events in EventBus. High-volume request logs will remain in the existing analytics Kafka cluster. Could you clarify?

Webrequests are not much connected to the EventBus project, but there very well may be a stream of schema-ed pageviews. I think Dan expects this conversation to drift towards data streams as a concept.

@Anomie: I think your original point that I want to refer back to is this one:

In T112956#1901909, @Base wrote:
It would be nice if the API was available from api.php on wikis,

Do you have a use case in mind that is not satisfied by the API in its current form? I believe there are extensions already querying and graphing this data in mediawiki, but if you have a use case please list it as well. I think what you are saying is that having the data available in the mw api is more "convenient"; in that regard, I concur with what @Milimetric said above:

+1 for providing a bridge to the pageview API from MW, if people find that useful. I'd need some concrete evidence that enough people would actually use this to justify the effort. Once we have that evidence, @Anomie I'm happy to work with you to get it done.

I questioned all three of these reasons. Leaving aside my own digressions, the specific questions are:

Let's please start with use cases that speak to the value proposition of adding a bridge and appeal to a broad audience, and we can discuss the merits of API design after.

@Anomie: I think your original point that I want to refer back to is this one:

In T112956#1901909, @Base wrote:
It would be nice if the API was available from api.php on wikis,

Err, that was @Base, not me.

Do you have a use case in mind that is not satisfied by the API in its current form?

I don't have any use case at the moment, but I'm not the one who proposed it either.

I could see people wanting to:

  • Access pageviews data along with other API query data, instead of having to make a separate call.
  • Use the list of top-viewed pages as a generator.
  • Generally be able to use a familiar API instead of an unfamiliar one.

I questioned all three of these reasons. Leaving aside my own digressions, the specific questions are:

Let's please start with use cases that speak to the value proposition of adding a bridge and appeal to a broad audience, and we can discuss the merits of API design after.

Then why did you bring all that up in the first place?

I agree with Anomie here. Wikimedia infrastructure is already too fragmented; it needs to be more uniform and general.

Chiming in here because this question came up in discussions regarding the ORES API. It seems that we are discussing this as either/or when I think we can have both since APIs can consume other APIs.

As an API developer, I don't want to be constrained in that *everything* needs to operate behind either api.php? or restbase_v1/ (so, +1 @Nuria). This is because the API that we develop might not always make sense within the conventions of these spaces. I think it makes more sense that new APIs adopt an appropriately flexible endpoint first and that we work on integration/bridges afterwards.

E.g. ORES scores edits, so it fits well within api.php?query=revisions and restbase_v1/pages/revisions, but it also has endpoints that provide information about model fitness and other details that don't really make sense in these locations. api.php and restbase_v1/ can consume the service just like any user and act as a bridge. In this case, we'd have both the flexibility to do new things with APIs *and* we can include the relevant outputs in api.php? and restbase_v1/. @Anomie, at some point, I'd like to discuss with you what ORES integration in api.php would look like. I've already talked to @GWicke about what a bridge into restbase_v1 will look like.

The tradeoff of this strategy, of course, is that we'll have two ways to get at the same data. Honestly, I think that this is more desirable than the alternatives. If you want to get at the data through a familiar interface and are happy with the limitation imposed, use the extension to a standard API. If you need more or you want to work with something that's cutting edge and doesn't have a bridge/integration yet, learn how the flexible endpoint works.

E.g. ORES scores edits, so it fits well within api.php?query=revisions and restbase_v1/pages/revisions, but it also has endpoints that provide information about model fitness and other details that don't really make sense in these locations.

Model fitness and such could fit into the action API as a query meta module, if it's useful data.

@Anomie, at some point, I'd like to discuss with you what ORES integration in api.php would look like.

Yes, we should definitely do that.

The tradeoff of this strategy, of course, is that we'll have two ways to get at the same data. Honestly, I think that this is more desirable than the alternatives. If you want to get at the data through a familiar interface and are happy with the limitation imposed, use the extension to a standard API. If you need more or you want to work with something that's cutting edge and doesn't have a bridge/integration yet, learn how the flexible endpoint works.

+1

@Anomie and @Halfak, this is a great conversation! My 2c:

E.g. ORES scores edits, so it fits well within api.php?query=revisions and restbase_v1/pages/revisions, but it also has endpoints that provide information about model fitness and other details that don't really make sense in these locations.

Model fitness and such could fit into the action API as a query meta module, if it's useful data.

@Anomie: how do you anticipate figuring out if it's useful? My guess is that the best way to do that is to only expose a modest amount of the service data via the action API (a.k.a. "api.php?"), and then see what the demand is for expanding it. I don't think it's reasonable to invest in a full-featured shim unless+until there's demonstrated utility with a smaller version of it that has the data that's easier to see the fit for (e.g. what @Halfak is describing). Is that what you were anticipating as well, or did you have different thoughts?

Hey folks, I'm glad the talk of ORES sparked some useful discussion, but that's out of place here. So I created a new card for us to explore the implications and do some planning. See T122689

See also a session we're holding at the dev summit regarding integrating ORES and the editquality-modeling models into Wikipedian practice: T114246 "Quality control and newcomer socialization tools with revscoring and ORES".

I came across another idea for an addition to the API: return the top 100 user pages per day. Or more generically for any given namespace.

E.g. ORES scores edits, so it fits well within api.php?query=revisions and restbase_v1/pages/revisions, but it also has endpoints that provide information about model fitness and other details that don't really make sense in these locations.

Model fitness and such could fit into the action API as a query meta module, if it's useful data.

@Anomie: how do you anticipate figuring out if it's useful?

Since I know nothing about the data in question, I have no basis to even begin to give an opinion as to whether it would be useful or not.

IMO the process would be to look at the data and what has been requested already (if anything), and decide whether it seems like it's useful enough to be worth the development effort and increased API complexity.

I just mean, can we find like five to ten people that will say this is actually useful.

I think a prop (batch) API would be quite useful; not sure about the other two. Some time ago I wanted to write a gadget to color links in an article based on their traffic but there was no reasonable API to use for that.

Thank you, Gergo, this is a great use case. Examples like this are more useful than waxing poetic on API theology, no offense to the devout poets :)

Wikimedia Developer Summit 2016 ended two weeks ago. This task is still open. If the session in this task took place, please make sure 1) that the session Etherpad notes are linked from this task, 2) that followup tasks for any actions identified have been created and linked from this task, 3) to change the status of this task to "resolved". If this session did not take place, change the task status to "declined". If this task itself has become a well-defined action which is not finished yet, drag and drop this task into the "Work continues after Summit" column on the project workboard. Thank you for your help!

The session in this task took place, and the task now serves two purposes. First, it is a place where we gathered a *lot* of amazing input for the future of the Pageview API. This input will be processed and considered in our prioritization, but probably not this quarter, as we're still working on some infrastructure and capacity issues first.

Second, it is a meeting point for the session at the dev summit. Etherpad notes from the summit are here:

https://etherpad.wikimedia.org/p/WikiDev16-DataFlows

The follow up from the summit meeting is that we are all going to gather and centralize work on the Event Bus. @Ottomata's writing a wiki page and we'll announce it on analytics / wikitech / engineering when it's in good shape. I'll coordinate meetings with people interested, so let me know if you're interested.