We currently assume an uncertainty interval of +/-1 in the least significant digit of a given quantity:
10.2 => 10.2 +/- 0.1 => interval [ 10.1, 10.3 ] => uncertainty magnitude 0.2
13e3 => 13e3 +/- 1e3 => interval [ 12e3, 14e3 ] => uncertainty magnitude 2e3
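For reference, a minimal Python sketch of the current assumption (hypothetical code, not the project's actual implementation): the half-width is taken to be one full unit in the least significant digit.

```python
from decimal import Decimal

def current_interval(text: str):
    """Interval under the current assumption: half-width = 1 unit in the last digit."""
    value = Decimal(text)
    # Exponent of the least significant digit, e.g. -1 for '10.2', 3 for '13e3'.
    half_width = Decimal(1).scaleb(value.as_tuple().exponent)
    return value - half_width, value + half_width, 2 * half_width

print(current_interval("10.2"))  # interval [10.1, 10.3], width 0.2
print(current_interval("13e3"))  # interval [12e3, 14e3], width 2e3
```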
This means the width of the uncertainty interval is twice the magnitude of the least significant digit. It seems more intuitive to assume the width of the interval to be exactly one unit in the least significant digit, i.e. +/- half a unit:
10.2 => 10.2 +/- 0.05 => interval [ 10.15, 10.25 ] => uncertainty magnitude 0.1
13e3 => 13e3 +/- 0.5e3 => interval [ 12.5e3, 13.5e3 ] => uncertainty magnitude 1e3
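Under the proposed assumption, the half-width would instead be half a unit in the least significant digit. Again a hypothetical sketch, not the library's API:

```python
from decimal import Decimal

def proposed_interval(text: str):
    """Interval under the proposed assumption: half-width = 0.5 units in the last digit."""
    value = Decimal(text)
    half_width = Decimal(5).scaleb(value.as_tuple().exponent - 1)  # e.g. 0.05 for '10.2'
    return value - half_width, value + half_width, 2 * half_width

print(proposed_interval("10.2"))  # interval [10.15, 10.25], width 0.1
print(proposed_interval("13e3"))  # interval [12.5e3, 13.5e3], width 1e3
```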
This is also supported by the Wikipedia article on the quantification of accuracy and precision:
https://en.wikipedia.org/wiki/Accuracy_and_precision#Quantification
Perhaps the current implementation came about through confusion between the half-width expressed by the +/- notation and the total width of the uncertainty interval.