
Optimize Wikipedia PNG Logo
Closed, Duplicate · Public



I was thinking about optimizing PNGs on various high-traffic sites, so I tried Wikipedia: I right-clicked the Wikipedia logo in my browser (the one we can see on every page) and saved it to my computer. I ran a PNG optimizer that combines many optimizers (ImageOptim) and saw that it reduced the size of the image by 5.9%. It's not huge, but I thought it could be good news for Wikipedia (I think the logo is downloaded thousands of times a day).

So I sent an email to the Wikipedia support team (OTRS info-fr) and Thibaud Payet encouraged me to open a ticket. That's why I'm opening it now!

The optimized image is here: As you may know, PNG optimization is lossless, so it's exactly the same quality. I asked on Stack Overflow about the client-side CPU load for decoding optimized images, and it does not seem to have any impact (I also read that it may even have a positive impact client-side!). More info here: and there

I would be happy to optimize more images for you (also, I would be really proud to!).

Side note: it's the French logo, because I'm French, but I can try some other logos too!



Event Timeline



All logos should already be optimized with OptiPNG (the command recommended for optimization is optipng -o7 <imagename.png>). It is up for discussion whether another optimization tool should be used, but at least from my point of view, it must be usable on Linux and be a command-line tool. By the way, all logos are in You can clone the repo and browse the directory instead of downloading each logo to your computer separately (in addition, frwiki (and other wikis) has three versions of the logo, named frwiki-2x.png, frwiki-1.5x.png and frwiki.png; the first two are the HD ones, the last one is the standard one).
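For illustration, that per-logo workflow could be scripted roughly as follows. This is a hypothetical sketch: the directory path is an assumption (not the repo's documented layout), and the snippet does nothing when optipng or the directory is missing.

```shell
# Hypothetical sketch: re-run the recommended lossless optimization over
# every project logo in a checkout of the config repo. LOGO_DIR is an
# assumed path, not the repo's actual layout.
LOGO_DIR="static/images/project-logos"

if command -v optipng >/dev/null 2>&1 && [ -d "$LOGO_DIR" ]; then
    for png in "$LOGO_DIR"/*.png; do
        [ -e "$png" ] || continue          # glob matched nothing
        before=$(wc -c < "$png")
        optipng -o7 -quiet "$png"          # lossless, rewrites in place
        after=$(wc -c < "$png")
        echo "$png: $before -> $after bytes"
    done
else
    echo "optipng or $LOGO_DIR not available; nothing done"
fi
```

Printing before/after sizes makes it easy to see which files actually shrink, which is the whole question in this task.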

Best regards,
Martin Urbanec

OK, I'm not sure I understand what you're suggesting I do. Can I open a pull request on this GitHub repo after optimizing all the images? Or should I follow this tutorial: then send the optimized PNGs? Or would you prefer I give up, since the priority is lowest?

Best regards,

I think the main point here is we'd rather have a reproducible method for optimizing these images which works on our Linux and open-source based infrastructure. Having a third party optimize one of our many PNGs once manually is interesting, but this doesn't scale to the many other PNGs which may be spread around many other repos and sources, and more importantly the work will be lost the next time someone uploads new PNG content updates (e.g. visual re-designs or tweaks for new display types).

If you've found a tool that optimizes some of our logo PNGs better than they're already optimized, can you document a process we could use to apply it universally? As in: using the open-source FooPNG v1.3 with the arguments foopng -a1 -z3 seems to be an improvement over our current optimization.


Please do not open pull requests, they won't get merged. The GitHub repo is just a mirror of Gerrit; if you wish to upload a change, follow The priority is lowest because this requires a lot of discussion, and this task won't be processed soon.

Also see @BBlack's comment.

(Priority levels correspond to the urgency of the task and the speed required, particularly in Wikimedia-Site-requests.)

OK, thanks to all! (I'm still discovering the contribution rules, so thank you for helping and mentoring me!)

can you document a process we could use to apply this universally

Unfortunately not! I used a desktop app for Mac which uses PNGOUT, AdvPNG, Pngcrush, OptiPNG, JpegOptim, MozJPEG, jpegtran, and Gifsicle, but I have no idea what it actually does. It is open source, though: Maybe we could contact the author via GitHub and ask them about this? I could do that if you want!

Anyway, looking at the operations-mediawiki-config repo, the majority of files did not change in a year. I'm aware that optimizing all these images manually is a temporary fix, a waste of time, and not reusable/reproducible, BUT would it be better for the planet to do this instead of not doing it? (It's a real question, I'm not sure, and I would be happy to hear your answer!)

So, what I suggest is:

  1. I (or you, as you prefer) contact the ImageOptim author
  2. Let's commit these optimizations anyway (not sure, but... why not?)

Are you OK with this?

Well, to develop a process you aren't required to know what the tool does in the background (although it's better to know than not to know). When I decide to update (or upload a new) static PNG file, I know the steps I must follow in order to update a project logo (I can tell you the steps if you wish). One of the steps is to do the optimization with optipng -o7. If we decide to replace optipng with something else, we must know what to do in order to use that other tool. Also, that tool must satisfy some requirements (be usable from the command line and be usable on Linux). Before that, we can't do the switch :).

I don't think that doing a one-time optimization (which, being a Mac-only solution, is unreproducible for the majority) is better than not doing it.

@Rap2h You can indeed contact the author and ask them for the whole process to achieve similar results; that would be a good idea, but as @Urbanecm and @BBlack highlighted, we need a CLI process.

Hi! I'm the author of ImageOptim.

For static images I'd recommend:

  • optipng followed by zopfli-png: optipng sometimes helps by choosing a better color mode. Zopfli is the best compression for PNG you can get, but it's quite slow.
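As a sketch, that static-image recipe might look like the shell function below. The filename is a placeholder, and the snippet only runs when both tools are installed (zopflipng's -m flag trades extra time for extra compression).

```shell
# Sketch of the static-image recipe: optipng first (it may pick a better
# color mode), then zopflipng for the final, slow, lossless squeeze.
squeeze_png() {
    optipng -o7 -quiet "$1"
    # Write to a temp file to avoid clobbering the input mid-run,
    # then move the result into place.
    zopflipng -m "$1" "$1.tmp" && mv "$1.tmp" "$1"
}

# Guarded call so the sketch is a no-op when the tools aren't installed.
if command -v optipng >/dev/null 2>&1 && command -v zopflipng >/dev/null 2>&1; then
    squeeze_png logo.png    # "logo.png" is a placeholder
fi
```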

For dynamically generated images (such as SVG to PNG conversions, thumbnails of larger PNG images):

pngquant is "lossy", but you can choose the amount of loss and skip files that don't benefit from the conversion. I'd recommend:

pngquant --skip-if-larger --quality=70-100

It will produce only good-looking, well-compressed files, or pass the original file through when it can't. If you then compress the result with AdvPNG, you should get pretty good compression all of the time.
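Put together, the pipeline for generated images might be sketched as follows. The filename is a placeholder; note that pngquant exits non-zero when it skips a file, which the sketch tolerates so the lossless advpng pass still runs.

```shell
# Sketch of the suggested recipe for generated images: lossy pngquant with
# a quality floor, then lossless advpng. "thumb.png" is a placeholder.
quantize_png() {
    # --ext .png --force makes pngquant overwrite the input instead of
    # writing a -fs8.png sibling; it exits non-zero when it skips the file.
    pngquant --skip-if-larger --quality=70-100 --ext .png --force "$1" || true
    advpng -z -4 "$1"      # recompress the deflate stream losslessly
}

if command -v pngquant >/dev/null 2>&1 && command -v advpng >/dev/null 2>&1; then
    quantize_png thumb.png
fi
```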

@Rap2h So the next step would be to evaluate if the procedures make sense when used for the Wikimedia logos.

For example for the second method, we can start:

  1. start from
  2. generate a PNG:
  3. compare the optipng -o7 output with the pngquant/advpng method and find an acceptable quality parameter (check the 2x variant too: replace 135px- by 270px- in the URL above)
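The comparison in step 3 could be run as a small script along these lines. The source filename is a placeholder for the PNG fetched in step 2, and the whole thing is skipped when the file or any tool is missing.

```shell
# Hypothetical comparison: apply both methods to copies of one source PNG
# and print the resulting sizes side by side. SRC is a placeholder name.
SRC="frwiki.png"

if [ -f "$SRC" ] && command -v optipng >/dev/null 2>&1 \
   && command -v pngquant >/dev/null 2>&1 && command -v advpng >/dev/null 2>&1; then
    cp "$SRC" current.png && cp "$SRC" suggested.png
    optipng -o7 -quiet current.png                     # current method
    pngquant --skip-if-larger --quality=70-100 \
             --ext .png --force suggested.png || true  # suggested method...
    advpng -z -4 suggested.png                         # ...plus advpng
    echo "optipng: $(wc -c < current.png) bytes"
    echo "pngquant+advpng: $(wc -c < suggested.png) bytes"
else
    echo "missing $SRC or tools; comparison skipped"
fi
```

The quality half of the comparison still has to be done by eye (or with a perceptual diff tool), since pngquant is lossy.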

I'm available on Freenode (Dereckson my nick) to discuss the results or provide a server account to test.

The idea of the test is to determine whether the loss of logo quality is acceptable and really allows reducing the PNG size further, and also whether the processing time is reasonable.

@Rap2h Any update on this? Do you need assistance with something?

@Dereckson Sorry for my late answer. I'm sorry, but I don't think I can meaningfully help: I have no skill or knowledge on this subject, and unfortunately no time right now (I'm a tinkerer who tried something on his Mac and posted a naive idea). Thanks anyway!

Okay, so what we would need is to compare the CURRENT optimized logos with logos optimized by the NEWLY SUGGESTED METHOD, comparing weight and quality: i.e. produce both and check whether they look similar, and whether the weight gain justifies the processing time.
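For the weight side of that comparison, the percentage saved is just (before − after) / before. A tiny helper, with illustrative byte counts (not measured values) chosen to work out to the 5.9% figure from the original report:

```shell
# Percentage saved between two file sizes in bytes. The sample numbers are
# illustrative, not measured; they happen to equal the reported 5.9%.
saving() {
    awk -v b="$1" -v a="$2" 'BEGIN { printf "%.1f%%\n", (b - a) * 100 / b }'
}

saving 20000 18820   # prints 5.9%
```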

[ Releasing, so someone can tweak this if they want. ]