
A Wiki Dedicated Browser
Closed, Invalid · Public

Description

Author: djacobi

Hi, I want to request the implementation of a simple web script that does not
return a rendered web page but instead returns the raw wikicode, so that browser
programs can download only the code and parse it on the client PC.

What we gain with this:

  • Better browsing experience for all users.
  • Saved connection bandwidth.
  • Saved server processing (which is currently a lot for old machines like my MMX).
  • Better OS integration (I'm sure every Linux distribution would add a
    program like MS Encarta, but with the content of all the wikis).

A feature like this is not hard to do: you just need a script that does not
parse the wikicode and that can be called by programs like this:
www.somewiki.com/wc.php?=some_wiki_content
With the same feature, a program like Babylon Translator could be implemented,
using the GPL'd content of a translation Wiktionary, with content short enough
to fit in small windows.

Following the same idea (parsing on the client machine), an option could also be
implemented to parse in the client web browser with JavaScript, using the same
script mentioned above.
This could even speed up the web considerably: the client's cached page
doesn't need to replace the entire HTML code, just the wikicode and nothing more.

Look, I was thinking of this pseudo-code:

  1. Clients call the page www.somewiki.com.
  2. The server sends a wiki main page with the JavaScript parser included.

2 bis) The JavaScript parser can be stored completely in a cookie. So, before
the server sends the page:
  a) It checks a parser version key in the cookie.
  b) If it is outdated, the script saves the new JavaScript parser in the cookie.
  c) The server sends the HTML code with a JavaScript snippet that loads and
     runs the parser from the cookie.

  3. The parser finds the content of the wiki page (the wikicode) and stores it
     in a variable.
  4. If the user clicks a link, it just re-downloads the new wikicode and
     discards the old one; the main HTML does not change.
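The flow above could be sketched roughly like this. The parser below is a deliberately tiny stand-in that handles only two wikitext constructs ('''bold''' and [[internal links]]); a real implementation would need the full MediaWiki grammar, and all the names here are illustrative assumptions:

```javascript
// Minimal client-side "parser" sketch for steps 3-4 above.
// Handles only '''bold''' and [[Target]] / [[Target|label]] links.
function parseWikicode(wikicode) {
  return wikicode
    // '''bold''' -> <b>bold</b>
    .replace(/'''(.+?)'''/g, '<b>$1</b>')
    // [[Target|label]] or [[Target]] -> <a href="/wiki/Target">...</a>
    .replace(/\[\[([^\]|]+)(?:\|([^\]]+))?\]\]/g,
      (match, target, label) => `<a href="/wiki/${target}">${label || target}</a>`);
}

// In the browser, a link click would then re-fetch only the wikicode and
// re-render it in place, leaving the surrounding HTML untouched (step 4):
//   const raw = await fetch(wikicodeUrl).then(r => r.text());
//   contentDiv.innerHTML = parseWikicode(raw);
```

The cookie-versioning of the parser itself (step 2 bis) is omitted here; it would just compare a version key before deciding whether to store a new parser string.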

This method would save a lot of bandwidth and server processing. I think the
server load would be reduced to something like 20% of what it is now, which is
a very small figure.

People or organizations willing to make a GPL'd documentation wiki of anything
wouldn't need to carry all the data processing on their servers; people who
want to read it would process the data on their own computers.


Version: unspecified
Severity: normal
Platform: PC

Details

Reference
bz2328

Event Timeline

bzimport raised the priority of this task to Medium. Nov 21 2014, 8:31 PM
bzimport set Reference to bz2328.
bzimport added a subscriber: Unknown Object (MLST).

Special:Export or action=raw can already be used to return raw wikitext of a page.
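For illustration, a raw-wikitext URL using the action=raw mechanism mentioned above might be built like this (the host and the /w/index.php script path are assumptions; the path varies by wiki configuration):

```javascript
// Builds an index.php URL that returns raw wikitext via action=raw.
// Host and script path are example values, not guaranteed for every wiki.
function rawWikitextUrl(host, title) {
  return `https://${host}/w/index.php?title=${encodeURIComponent(title)}&action=raw`;
}

// e.g. rawWikitextUrl('en.wikipedia.org', 'Main Page')
//   -> 'https://en.wikipedia.org/w/index.php?title=Main%20Page&action=raw'
```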

djacobi wrote:

Excellent, I'm working on it. :)

But a method to return the search results in XML would be fine too.