10.01 Important differences between the definition of error used on the Science Campus and that used in other texts.

In many other texts, error is defined as the difference between the true value and the nominal value. It is difficult to see how this approach is justified: the nominal value is simply the value at the midpoint of the uncertainty interval, and for a single measurement the midpoint has no special significance, since all values within the uncertainty interval are equally valid*.

In the following sections it will be argued that there is no justification for using the nominal value as the reference point for determining error, and that doing so results in inconsistent definitions of accuracy that conflict with common sense. It also predicts that different levels of precision have no effect at all on the accuracy of the measurement, so that, for instance, a measurement of 2.0 mm +/- 100 km would be regarded as just as accurate as 2.0 mm +/- 0.1 mm!
Measuring the error between the true value and the furthest limit of the uncertainty interval, however, reflects the largest possible error in the measurement and also explains how precision affects accuracy. This leads to a consistent definition of accuracy that agrees with common sense.
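
As a rough illustration of the two definitions, the short Python sketch below compares the conventional "error from the nominal value" with the "error to the furthest limit of the uncertainty interval" for the two measurements mentioned above. The variable names and the chosen true value (2.3 mm) are purely hypothetical and are used only to make the comparison concrete.

    import dataclasses

    @dataclasses.dataclass
    class Measurement:
        nominal: float      # midpoint of the uncertainty interval (mm)
        uncertainty: float  # half-width of the uncertainty interval (mm)

    def error_from_nominal(m: Measurement, true_value: float) -> float:
        # Conventional definition: distance from the true value to the nominal
        # value. Note that the uncertainty (precision) does not appear at all.
        return abs(true_value - m.nominal)

    def error_to_furthest_limit(m: Measurement, true_value: float) -> float:
        # Definition argued for here: distance from the true value to whichever
        # limit of the uncertainty interval lies furthest from it, i.e. the
        # largest possible error consistent with the measurement.
        lower = m.nominal - m.uncertainty
        upper = m.nominal + m.uncertainty
        return max(abs(true_value - lower), abs(true_value - upper))

    true_value = 2.3  # hypothetical true value in mm, for illustration only

    precise   = Measurement(nominal=2.0, uncertainty=0.1)    # 2.0 mm +/- 0.1 mm
    imprecise = Measurement(nominal=2.0, uncertainty=100e6)  # 2.0 mm +/- 100 km (in mm)

    for label, m in [("2.0 mm +/- 0.1 mm", precise), ("2.0 mm +/- 100 km", imprecise)]:
        print(label,
              "| error from nominal:", error_from_nominal(m, true_value), "mm",
              "| error to furthest limit:", error_to_furthest_limit(m, true_value), "mm")

With these illustrative numbers the conventional definition gives the same error (0.3 mm) for both measurements, whereas the furthest-limit definition gives 0.4 mm for the precise measurement and roughly 100 km for the imprecise one, which is the behaviour the argument above expects.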

*Note
It is true that when a measurement value has been determined from a statistical analysis of repeated measurements, there can be a higher probability of the true value lying closer to the midpoint. However, you still cannot disregard any of the values lying within the uncertainty interval when considering the error in your measurement.
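
As a brief sketch of the situation described in this note, the Python fragment below computes a nominal value and a standard uncertainty from a set of repeated readings; the readings themselves are invented purely for illustration. The statistics make values near the midpoint more probable, but every value inside the quoted interval must still be treated as a possible true value.

    import statistics

    # Hypothetical repeated readings of the same length, in mm (illustrative only).
    readings = [2.01, 1.98, 2.03, 2.00, 1.99, 2.02]

    nominal = statistics.mean(readings)  # midpoint / nominal value
    std_uncertainty = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean

    print(f"nominal value: {nominal:.3f} mm")
    print(f"standard uncertainty: {std_uncertainty:.3f} mm")
    # The true value is more likely to lie near the nominal value, but any value
    # within the quoted uncertainty interval remains possible when assessing the
    # largest possible error.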