
Run Pywikibot tests against Win32 using Appveyor
Closed, Resolved (Public)


It would be beneficial to set up the 'wikimedia' account on Appveyor to run builds of Pywikibot on each merge, as is done using Travis-CI for each merge.

Pywikibot now has a good array of tests running on Appveyor, using an Appveyor build control file configured with 10 build configurations, with an intermittent cross-platform error (T95139) that appears more frequently on Win32. A build looks like:

Most importantly, this includes tests against four Python versions which are not tested by Travis-CI: 2.6.6, 2.7.0, 3.3.0 and 3.4.0.

(The Travis-CI versions are the pre-installed 'latest' point release for each Python version, so they can't be selected to be older releases.)
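A build matrix pinning those exact point releases could be sketched in `appveyor.yml` roughly like this. This is an assumption-laden sketch, not the actual Pywikibot build control file: the install paths and the `install-python.ps1` helper are hypothetical, since AppVeyor images only ship the latest point release and older ones would need a manual install step.

```yaml
# Hypothetical appveyor.yml sketch -- paths and the install helper are
# assumptions, not the real Pywikibot configuration.
environment:
  matrix:
    - PYTHON: "C:\\Python266"
      PYTHON_VERSION: "2.6.6"
    - PYTHON: "C:\\Python270"
      PYTHON_VERSION: "2.7.0"
    - PYTHON: "C:\\Python330"
      PYTHON_VERSION: "3.3.0"
    - PYTHON: "C:\\Python340"
      PYTHON_VERSION: "3.4.0"

install:
  # Install the exact point release when it is not pre-installed on the image
  # (install-python.ps1 is a hypothetical helper script).
  - ps: if (-not (Test-Path $env:PYTHON)) { & .\scripts\install-python.ps1 $env:PYTHON_VERSION }

build: off

test_script:
  - "%PYTHON%\\python.exe setup.py test"
```

Each entry in the matrix becomes one of the build configurations that runs on every merge.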

How to set up the Appveyor builds is documented:

Event Timeline

jayvdb raised the priority of this task from to Needs Triage.
jayvdb updated the task description.
jayvdb added projects: Pywikibot-tests, Pywikibot.
jayvdb added subscribers: Unknown Object (MLST), Aklapper, Ricordisamoa, jayvdb.

Probably the first issue is whether there is any reason not to add a 'wikimedia' account on Appveyor? I am guessing WMF staff need to approve that.

Travis-CI more clearly supports open source, allowing concurrent builds even across multiple projects under the 'wikimedia' account.
Appveyor only permits one concurrent build per username for free.

That means if Pywikibot is doing 10 builds per merge, it will likely be working 24 hours a day on Pywikibot merges, give or take, and therefore not be suitable for adding another Wikimedia project to Appveyor. There is an option in Appveyor build settings to skip builds when a more recent merge has been done, which will reduce this problem a little, but that is only a band-aid.

@Qgil, is there any process for using the 'Wikimedia' github account for new services?

I think this should be fine; AFAIK no one else under the Wikimedia umbrella is using Appveyor, so we shouldn't run into issues with hogging all the build time.

Since it's less open-source than Travis, we should aim to use Travis as much as possible and only use Appveyor for what Travis doesn't support.

@Qgil, is there any process for using the 'Wikimedia' github account for new services?

I confess not knowing who handles Wikimedia @ GitHub. Maybe @demon knows?

Regarding the open-sourced-ness, their GitHub organisation is , where their main website is all open (but not their CI engine).

While debugging a problem with Coveralls, a little digging indicates they are not very open source either, though the Wikimedia GitHub account is used quite a bit there (their GitHub organisation is , and the main issue tracker is an empty GitHub repo: ).

Will look for ways to run more of our builds on Travis. I have a WINE Travis setup we could use to do some Win32 testing, but a native Win32 environment will always be beneficial for our large Win32 userbase of bot runners, and Appveyor is the best native CI that I can find.
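A WINE-based Travis job along those lines might look roughly like the sketch below. The installer URL, Python point release, and WINE install path are assumptions about the approach, not the actual setup mentioned above.

```yaml
# Hypothetical .travis.yml sketch for exercising Win32 Python under WINE.
language: python
sudo: required

before_install:
  - sudo apt-get update -qq
  - sudo apt-get install -y wine

install:
  # The MSI URL and version are assumptions; pick the point release to test.
  - wget https://www.python.org/ftp/python/2.7.9/python-2.7.9.msi
  - wine msiexec /i python-2.7.9.msi /qn

script:
  # Run the test suite with the Windows Python binary inside WINE.
  - wine 'C:\Python27\python.exe' setup.py test
```

This gives a Windows-flavoured run on Travis infrastructure, though, as noted, it is no substitute for a genuinely native Win32 environment.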

If we use (see T101218) we could run it using "our" GitHub account. There might just be a problem with who can actually see the environment variables (I guess the password will be in them).

On the essay, wrt Wikimedia, I've replied at

In the case of Appveyor/Travis builds, I don't believe it clearly fits this SaaSS definition. Appveyor/Travis builds are more 'Service as a Platform/Infrastructure'. Most relevant is that these builds are run on publicly submitted patches; there is no privacy aspect.

There is very little vendor lock-in. We provide the build script to Appveyor/Travis, and they convert it into a log file with a final status: ok or fail. The new data generated, the log file, can be downloaded from both services. The build script is portable to other CI platforms if required. The same build process can be run on workstations or Wikimedia servers, as the build script only depends on Open Source tools, with the exception of using Visual C instead of MinGW on Appveyor, but that is Python setuptools/MinGW's problem to solve. Automating these builds using Appveyor/Travis speeds up review and regression processes, but we'd quickly adjust if these cloud services stopped offering free compute time to open source projects.

More interesting is the case of Coveralls/Codecov, specifically because the coverage data is an additional output of the automated builds. The Appveyor/Travis builds send a large dataset to Coveralls/Codecov, and that original data can't be downloaded from either of them. Codecov provides an API for aggregated data to be extracted ( ), and Coveralls seems to have no API to extract data (push-only API here: ).
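To make the "aggregated data only" point concrete, pulling a coverage percentage out of a Codecov-style API response might look like the sketch below. The response shape and field names here are assumptions for illustration (modelled on the kind of per-commit totals such an API returns), not a verified description of Codecov's actual schema.

```python
import json

# Hypothetical shape of a Codecov-style API response for a single commit;
# the field names ("commit", "totals", "c" for coverage percent) are
# assumptions for illustration, not the verified Codecov schema.
SAMPLE_RESPONSE = json.dumps({
    "commit": {
        "totals": {
            "c": "87.5",   # aggregated coverage percentage
            "f": 120,      # files
            "h": 4200,     # hit lines
            "m": 600,      # missed lines
        }
    }
})


def extract_coverage(payload):
    """Return the aggregated coverage percentage from a Codecov-style payload."""
    data = json.loads(payload)
    return float(data["commit"]["totals"]["c"])


print(extract_coverage(SAMPLE_RESPONSE))  # 87.5
```

Note this only recovers the aggregate number; the original per-line coverage dataset uploaded by the build is not retrievable this way, which is the lock-in concern above.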

Turns out @jayvdb has enough rights on the Github 'wikimedia' account. So go ahead and add Appveyor for pywikibot :-}


I agree with GNU philosophy, but then we are not going to prevent devs from using tools that are rather useful, do not threaten our policy and are in no way a blocker to our dev workflow. We can afford Travis / AppVeyor to be down right now, so it is not an issue :-}

Having the big thumbs up to proceed, I tried setting it up using my Appveyor account, and it created the rather ugly URL
(which is triggered on any new revision to wikimedia/pywikibot-core)

A little digging shows the problem is that an org account needs to be created manually.

So, I did that, and the result is the following prettier URL:
(which is also triggered for each revision to wikimedia/pywikibot-core)

As the 'wikimedia' account was created manually, the credentials need to be given to someone in WMF for safe keeping.

As the 'wikimedia' account was created manually, the credentials need to be given to someone in WMF for safe keeping.

That is a good idea. Can you contact SRE? We have a password/credential mechanism somewhere, though I can't remember where it is documented :-(

jayvdb claimed this task.

This was done. Not sure what is happening with the subtask.