Phabricator

#switch or #ifeq: checks should first HTML-unescape the strings they compare
Closed, ResolvedPublic


When doing a #switch or #ifeq on a {{PAGENAME}} argument, and the page title contains an apostrophe (for example, [[L'Aquila]]), it doesn't match correctly. For example:

|L'Aquila = OK
|L&#39;Aquila = "Unexpected match"

doesn't return "OK", but "Unexpected match". I tested on en.wikipedia and it.wikipedia.
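As a sketch (Python here, purely to illustrate the string comparison; MediaWiki itself is PHP), the mismatch comes from {{PAGENAME}} expanding to the entity-escaped form of the title, which a raw string comparison against the literal branch label can never match:

```python
import html

pagename = "L&#39;Aquila"  # the entity-escaped form {{PAGENAME}} expands to
branch = "L'Aquila"        # the branch label written in the wikitext

print(pagename == branch)                 # False: the raw strings differ
print(html.unescape(pagename) == branch)  # True: equal once unescaped
```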

Note that the following correctly returns "OK":

|L'Aquila = OK
|L&#39;Aquila = "Unexpected match"

And {{PAGENAME}} alone correctly returns "L'Aquila" if you display it on a rendered wiki page, but together they don't work. Normally, HTML-encoding should NEVER be performed by any built-in parser function; it should only occur in the final stage of MediaWiki, after the full template expansion, transclusion, and processing of all parser functions and magic keywords, during the conversion of the wiki code to HTML (and its beautification with HTML Tidy).

Mixing the processing layers creates other ambiguities later, and can potentially create new unexpected collisions between strings that are normally distinct in only one layer or the other. Such collisions could create security risks, could allow attackers to bypass some protection or detection mechanisms, or could prevent some wiki maintenance tools from working properly.

So the fix proposed for #ifeq: and #switch: (HTML-decoding their input parameters before comparing) is only a weak workaround (one that creates new problems by adding new risks of collisions).

The real fix should be to drop the incorrect HTML-encoding of what the "(BASE/SUB/FULL)(PAGE/SUBJECT/TALK)NAME" parser functions return: this should be the unaltered wiki page name (without any HTML-escaping or URL-escaping); URL-escaping is only used in "(BASE/SUB/FULL)(PAGE/SUBJECT/TALK)NAMEE".
The HTML-escaping is clearly undesirable; these parser functions are not named "(BASE/SUB/FULL)(PAGE/SUBJECT/TALK)NAMEH".
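To illustrate the distinction the report draws between the three possible return conventions (a rough Python approximation; MediaWiki's actual PAGENAMEE escaping rules differ slightly, and it emits the decimal entity &#39; rather than Python's hexadecimal form):

```python
import html
import urllib.parse

title = "L'Aquila"

raw = title                                    # what {{PAGENAME}} should return
url_escaped = urllib.parse.quote(title)        # roughly what {{PAGENAMEE}} returns
html_escaped = html.escape(title, quote=True)  # the escaping {{PAGENAME}} currently applies

print(raw)           # L'Aquila
print(url_escaped)   # L%27Aquila
print(html_escaped)  # L&#x27;Aquila (MediaWiki itself uses the decimal form &#39;)
```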

Version: 1.19
Severity: enhancement
OS: Windows XP
Platform: PC
See Also:



Event Timeline

bzimport raised the priority of this task from to Low.Nov 22 2014, 12:16 AM
bzimport set Reference to bz35628.
bzimport added a subscriber: Unknown Object (MLST).

Note that the following shows "39" -- so that gives you a workaround.


L'Aquila = OK
L = not ok
L&rapos;Aquila = rapos
L&apos;Aquila = apos
L&#39;Aquila = 39


Thank you; the same happens with titles containing & or ", which are converted to &amp; and &quot;. Many other symbols work normally.
The problem is in PAGENAME; looking at this behavior, it does not seem to be intentional.

Thinking about this, I'm not sure if it would make sense to fix this -- it might cause problems for others.

  • Bug 35746 has been marked as a duplicate of this bug.

A switch should not distinguish between a character represented by a numeric character reference and one written natively.

So if a template is encoded like this:


or like this:


this should work equally when passing it the parameter 1=@ or 1=&#64; or 1=&#x40;.

All numeric character references (plus the well-known named character references that are guaranteed to be supported everywhere in XML and HTML, i.e. the 5 standard ones: &amp; &lt; &gt; &quot; &apos;) should be treated everywhere as counting for one Unicode character (exactly like the UTF-8 sequence of bytes representing that character). All valid syntaxes for numeric character references should be accepted (decimal and hexadecimal), as long as they designate a valid Unicode code point (in the valid numeric range from U+0000 to U+10FFFF), that code point is assigned to a valid character (excluding code points assigned to surrogates, and code points assigned to non-characters like U+FFFE), and that character can be part of a valid HTML document (so excluding most C0 and C1 controls, and converting the few acceptable controls only to SPACE U+0020 or LINE FEED U+000A, after unification of CR+LF into a single line feed).
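The acceptance rules just described can be sketched as follows (Python, illustrative only; `is_valid_char` and `decode_ncrs` are hypothetical names, not MediaWiki functions):

```python
import re

# Matches decimal (&#39;) and hexadecimal (&#x27;) numeric character references.
NCR = re.compile(r'&#(?:x([0-9A-Fa-f]+)|([0-9]+));')

def is_valid_char(cp: int) -> bool:
    """Apply the rules above: within the Unicode range, not a surrogate,
    not a non-character, not a disallowed C0/C1 control."""
    if cp > 0x10FFFF:
        return False
    if 0xD800 <= cp <= 0xDFFF:                              # surrogates
        return False
    if (cp & 0xFFFE) == 0xFFFE or 0xFDD0 <= cp <= 0xFDEF:   # non-characters
        return False
    if cp < 0x20 and cp not in (0x09, 0x0A, 0x0D):          # C0 controls
        return False
    if 0x7F <= cp <= 0x9F:                                  # DEL + C1 controls
        return False
    return True

def decode_ncrs(text: str) -> str:
    """Replace every valid numeric character reference with the character
    it names; leave invalid references untouched."""
    def repl(m):
        cp = int(m.group(1), 16) if m.group(1) else int(m.group(2))
        return chr(cp) if is_valid_char(cp) else m.group(0)
    return NCR.sub(repl, text)
```

For example, both `decode_ncrs("L&#39;Aquila")` and `decode_ncrs("L&#x27;Aquila")` yield `L'Aquila`, while a surrogate reference like `&#xD800;` is left as-is.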

This would be a simple way to escape every character, deprecating the use of "nowiki", except as an easy way to avoid using character references in the source.

But character references should be usable EVERYWHERE a valid UTF-8 sequence representing a single character is usable and not absolutely needed by the syntactic lexer/parser (so including in the names of parser functions and magic keywords, meaning that "{{#if:x|y}}" with the "i" written as a character reference would be treated equivalently to "{{#if:x|y}}"). This would make the wiki syntax more compatible with various character encodings, including via imports/exports to external files.

This also means that only a few characters should NOT be representable as character references; these are:

{ }

only where they are used as separators for the recognized wiki template call and parameters syntax, and:

| =

only within template (or parser function) parameters in the wiki syntax, and:

: ; *

only where they are recognized at the beginning of lines for lists in the wiki syntax, and:

| !

where they are recognized within wiki tables for delimiting cells/rows, and:

< " ' >

where they are used as separators for the recognized markup syntax of HTML elements or special elements like "<nowiki ... />", "<includeonly ... />" and "<gallery ... />".

In this latter case, character entities should be usable as the universal way of escaping the special handling given by the wiki syntax parser.

To make things simple, the lexer used in MediaWiki should uniformize all input characters (whether they are encoded as UTF-8 sequences or as numeric or named character entities) into a single format, even before starting to parse the content: only the special characters needed for one step should be treated specially and kept in their syntactic format; all others would be uniformized by NOT using any of these special characters (if they remain present in the source, the uniformized format should be the smallest decimal numeric character reference). This would also avoid the unnecessary complexity caused by "nowiki". All parser functions should be revisited to make sure they use this "character uniformizer"...
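Such a "character uniformizer" might look like this (a hypothetical Python sketch; the SPECIAL set below is illustrative, not MediaWiki's actual lexer table):

```python
import html
import re

# Characters the wiki lexer treats specially (illustrative set only).
SPECIAL = set("{}|=<>\"'!;:*")

# Matches hex NCRs, decimal NCRs, and named references.
REF = re.compile(r'&(#x[0-9A-Fa-f]+|#[0-9]+|[A-Za-z]+);')

def uniformize(text: str) -> str:
    """Normalize every character reference: references to special
    characters are rewritten as the smallest decimal NCR (so the lexer
    never sees them literally); all other references become plain
    characters. Unknown references are left untouched."""
    def repl(m):
        decoded = html.unescape(m.group(0))
        if decoded == m.group(0):        # not a known reference
            return decoded
        if len(decoded) == 1 and decoded in SPECIAL:
            return f"&#{ord(decoded)};"  # canonical decimal form
        return decoded
    return REF.sub(repl, text)
```

For example, `uniformize("L&#x27;Aquila")` yields the canonical `L&#39;Aquila`, while `uniformize("caf&eacute;")` yields the plain character form `café`.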

Change 113518 had a related patch set uploaded by Brian Wolff:
Decode html entities before comparing strings in #ifeq: and #switch

Change 113518 merged by jenkins-bot:
Decode html entities before comparing strings in #ifeq: and #switch
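The merged change can be paraphrased as follows (a Python model; the real implementation lives in MediaWiki's PHP parser functions, and the exact trimming rules may differ):

```python
import html

def ifeq(left: str, right: str, then: str, otherwise: str) -> str:
    """Model of {{#ifeq:}} after the fix: decode character references
    on both sides before comparing."""
    if html.unescape(left.strip()) == html.unescape(right.strip()):
        return then
    return otherwise
```

With this, an {{#ifeq:}} comparing `L&#39;Aquila` against `L'Aquila` now takes the "equal" branch, which is precisely the behavior the later comments about #titleparts-based validity checks object to.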

How convenient :) Someone just reported this issue with PAGESINCATEGORY (now filed as bug 67196)

The problem is really {{PAGENAME}}, although I suspect the escaping was done to prevent breaking HTML output when using {{PAGENAME}} inside HTML attributes (for example, title="Explanation of {{PAGENAME}}").

I'm wondering if this entity decoding should be done case by case, or could be done for all parser function parameters?

The problem affects templates trying to map a subpage name to a language code.
Currently {{#language:code1|code2}} causes a fatal server error (HTTP error 500) on all pages using such a template whose subpage name may contain an ASCII single or double quote or an ampersand: {{SUBPAGENAME}} HTML-encodes these characters with entities, and when this is used as the value of "code2" above, it will break.
To avoid this issue, we need a way to test whether a subpage name can be a valid language code before trying to use {{#language:}}.

One way to test it includes comparing the (SUB)PAGENAME with the result of #titleparts, using a "#ifeq:" parser function call.

But if #ifeq: is HTML-decoding its compared items, it will always reply that the (SUB)PAGENAME and the #titleparts result are equal, so it will no longer be able to detect invalid language codes. As a result, we'll get HTTP error 500 on many random pages using such templates, when viewing a subpage that includes one and whose subpage name contains an apostrophe, a double quote, or a few other characters.

An alternative would require using a Lua module for testing the validity of language codes. But in my opinion "#language:" MUST be urgently fixed to not crash when there are HTML entities in its second parameter (if this occurs, it should handle the case gracefully, as if we had specified an unknown/unsupported target language code).

Note that this critical bug of #language occurs on very important pages, notably many "Main pages" of wiki projects, or one of their subpages that transcludes a page trying to display a list of alternate languages, using the content language of the current page (which may be translated).

As long as this critical bug of "#language:" remains, the fix for "#ifeq:" or "#switch:" should be delayed (or be prepared to see lots of HTTP error 500 in server logs and many pages not rendered at all).

Is there a bug open about {{#language:}} causing HTTP 500 errors? Because the true error is in #language, not in what has been fixed here. Under no circumstances should a parser function throw an unhandled exception based on user input.

I've filed bug 67241 about the {{#language:}} issue, which I was unable to reproduce locally.

(Mormegil Jul 11 18:04)

Patch Set 8:
This change broke all inline coordinates on cswiki (until I fixed the
template) because of a small wikitext interpretation change. Formerly,
“{{#switch:x|y=z|#default}}” would render empty, while currently, it
renders as “#default”. The input wikitext is arguably wrong (an equal sign
is missing there, it should be “...|#default=}}”), and it is debatable
what is _better_ behavior in that case. However, forgetting an equal sign is
an easy error to make, especially when it used to work fine.
The original behavior was more or less a random byproduct, I’d say. (Keeping
$test from “$mwDefault->matchStartAndRemove( $test )” to be used in the
final “return $test;”.) The current behavior is arguably more logical, but
in the name of backwards (bug-for-bug?) compatibility, we might want to do
“$lastItem = $decodedTest” next to “$defaultFound = true;”... Dunno.
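The behavior change described can be modeled like this (Python; `switch` below is a hypothetical simplification of MediaWiki's #switch, not its actual code):

```python
import html

def switch(subject, branches):
    """Hypothetical model of {{#switch:}}. 'branches' holds the raw
    segments between pipes. A segment without '=' is a candidate value
    that falls through to the next '=' branch; if nothing matched, the
    last bare segment is returned verbatim -- the current behavior, which
    makes '|#default' written without '=' render as the literal text
    '#default' instead of empty (the old behavior)."""
    subject = html.unescape(subject.strip())
    pending = False   # a bare segment matched; take the next '=' value
    last_bare = None
    for seg in branches:
        if "=" in seg:
            key, _, value = seg.partition("=")
            if pending or html.unescape(key.strip()) == subject:
                return value.strip()
        else:
            if html.unescape(seg.strip()) == subject:
                pending = True
            last_bare = seg.strip()
    return last_bare if last_bare is not None else ""
```

So `switch("x", ["y=z", "#default"])` returns the text `#default`, whereas writing the default branch correctly as `"#default="` (with the equals sign) would make it return empty.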

Verdy_p set Security to None.
Verdy_p renamed this task from #switch or #ifeq: checks should be HTML escaped to #switch or #ifeq: checks should first HTML-unescape the strings they compare.Jun 12 2015, 7:17 PM
Verdy_p updated the task description. (Show Details)