ZIM files compress HTML efficiently, enabling a large number of wiki articles to be transferred to clients with minimal bandwidth and stored with minimal disk usage. The format also supports efficient searching of the content.
As we begin to support ZIM files in our products, some questions come to mind, especially when contrasting the format with conventional databases:
When comparing ZIM's compression with, say, a SQLite database (or MongoDB, PostgreSQL), what are the typical space savings?
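Much of the answer comes down to *how* the compression is applied: ZIM groups many articles into a cluster and compresses the cluster as one stream, so redundancy across articles is exploited, whereas a database row compressed on its own pays per-row overhead. The sketch below illustrates the effect with Python's stdlib `lzma` and a throwaway SQLite table; the corpus is invented, so the numbers only show the shape of the comparison, not real-world ratios.

```python
import lzma
import sqlite3

# Hypothetical corpus: 50 small HTML articles with repetitive markup,
# standing in for wiki pages (real savings depend entirely on the corpus).
articles = [
    f"<html><head><title>Article {i}</title></head>"
    f"<body><p>Example wiki content for article {i}.</p></body></html>".encode()
    for i in range(50)
]

# Baseline: uncompressed rows in a SQLite table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, html BLOB)")
db.executemany("INSERT INTO pages (html) VALUES (?)", [(a,) for a in articles])
raw_size = sum(len(a) for a in articles)

# Per-row compression: each article compressed on its own,
# paying the container/dictionary overhead once per article.
per_row = sum(len(lzma.compress(a)) for a in articles)

# Cluster-style compression: many articles in one LZMA stream,
# roughly how ZIM groups content into compressed clusters.
clustered = len(lzma.compress(b"".join(articles)))

print(f"raw: {raw_size}  per-row: {per_row}  clustered: {clustered}")
```

On repetitive HTML like this, the clustered stream comes out far smaller than both the raw rows and the per-row-compressed rows; tiny inputs can even grow under per-row compression because of the fixed stream overhead.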
Can we efficiently iterate over all title metadata to show the articles as a list (like a DB cursor)?
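This should be cheap, because titles live in a small title-ordered directory that is separate from the compressed content clusters, so a title scan never decompresses article bodies. The toy layout below is an illustration of that idea, not the real ZIM on-disk format: a sorted directory of `(title, offset, length)` entries pointing into one compressed cluster.

```python
import lzma

# Illustrative articles; the titles and bodies are invented.
articles = {
    "Alan Turing": b"<p>Turing machine</p>",
    "Ada Lovelace": b"<p>Analytical Engine</p>",
    "Grace Hopper": b"<p>COBOL</p>",
}

# "Cluster": all bodies concatenated in title order and compressed together.
entries = sorted(articles.items())
cluster = lzma.compress(b"".join(body for _, body in entries))

# "Directory": title-sorted (title, offset, length) records pointing into
# the decompressed cluster, analogous in spirit to ZIM's title pointer list.
directory = []
offset = 0
for title, body in entries:
    directory.append((title, offset, len(body)))
    offset += len(body)

# Cursor-like title listing: reads only the small directory,
# never touches the compressed cluster.
titles = [title for title, _, _ in directory]

# Fetching one article is the expensive path: decompress, then slice.
title, off, length = directory[1]
body = lzma.decompress(cluster)[off:off + length]
```

The design consequence is that a paginated article list is an index scan, while random article reads cost a cluster decompression.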
Is it possible to update a single article without rewriting the entire file? What would it take to do this?
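The structural obstacle is the same cluster compression that makes ZIM small: once articles are solid-compressed together, changing one body invalidates the byte offsets of everything after it inside that cluster, so the minimum unit of rewrite is the whole cluster plus any pointers into it. The sketch below demonstrates this with a toy cluster (illustrative names, not a real ZIM structure, and it ignores the pointer lists and checksum a real file would also need updated).

```python
import lzma

# Three article bodies solid-compressed into one "cluster".
bodies = [b"<p>alpha</p>", b"<p>beta</p>", b"<p>gamma</p>"]
cluster = lzma.compress(b"".join(bodies))

def rewrite_cluster(bodies, index, new_body):
    """The only safe "update" for a solid cluster: splice the new body in,
    then recompress everything, since all offsets after `index` shift."""
    updated = list(bodies)
    updated[index] = new_body
    return updated, lzma.compress(b"".join(updated))

updated_bodies, new_cluster = rewrite_cluster(bodies, 1, b"<p>BETA v2</p>")
```

In practice this is why existing tooling treats ZIM files as immutable archives and regenerates them; an in-place update scheme would need, at minimum, per-cluster rewrites and patched pointer tables.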