6.08 Variation in repeated values

When measurements are repeated they generally produce slightly different results. This can be due to two factors: random errors in the measurement technique used (see later), or variability in the quantity being measured (e.g. slight variations in the output from an electrical heater due to supply voltage fluctuations).

A basic approach is simply to take the range of the values as the uncertainty interval, quoting the mid-range value together with half the range as the uncertainty. However, it is possible to produce a more accurate and precise measurement value from the data available.
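As a minimal sketch of this basic approach (the readings below are hypothetical example values):

    # A sketch of the basic approach: quote the mid-range of the readings,
    # with half the range as the uncertainty. Readings are hypothetical.
    readings = [2.31, 2.45, 2.52, 2.48, 2.60, 2.39]

    low, high = min(readings), max(readings)
    mid_range = (low + high) / 2
    half_range = (high - low) / 2

    print(f"{mid_range:.2f} +/- {half_range:.2f}")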

Initially you can look at the data to spot erroneous values. For example, if 98% of your instrument readings fell within a relatively narrow interval and a few values fell well outside this range, you would be justified in regarding those few readings as errors and removing them from your data. This reduces the uncertainty and improves the accuracy of your measurement value.

A more sophisticated approach is to first plot the values on a graph and analyse them statistically. Graphs can reveal standard patterns in the data; one example is called a normal distribution. For data that forms a normal distribution we calculate two values: the mean and the standard deviation. It is a standard statistical result that, for a normal distribution, roughly 68% of all values lie within one standard deviation of the mean and roughly 95% lie within two standard deviations.
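Python's standard library can carry out these calculations directly. The sketch below (with hypothetical readings) computes the mean and standard deviation, then checks what fraction of the readings fall within one and two standard deviations:

    # A sketch of the statistical approach using hypothetical readings.
    from statistics import mean, stdev

    readings = [2.44, 2.47, 2.48, 2.49, 2.50, 2.50, 2.51, 2.52, 2.53, 2.56]

    m = mean(readings)
    s = stdev(readings)  # sample standard deviation

    within_1 = sum(abs(r - m) <= s for r in readings) / len(readings)
    within_2 = sum(abs(r - m) <= 2 * s for r in readings) / len(readings)

    print(f"mean = {m:.3f}, standard deviation = {s:.3f}")
    print(f"within 1 sd: {within_1:.0%}, within 2 sd: {within_2:.0%}")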

To illustrate this, consider the following example. Repeated measurements of the timing of an event produce values ranging from 2.20 to 2.80 seconds. (To simplify this example we will ignore any uncertainty due to the equipment or technique used.) Initially we might represent this as 2.50 +/- 0.30 seconds.

On closer analysis we find that 99% of the values actually fall within the interval 2.35 to 2.65 seconds. Therefore we regard the few values that lie significantly outside this range as errors. We now use the remaining data to produce a measurement value of 2.50 +/- 0.15 seconds.
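In code, the trimming step might look like the sketch below. The interval 2.35 to 2.65 seconds is taken from the example; the raw readings are hypothetical:

    # A sketch of removing erroneous values: discard readings outside the
    # interval containing 99% of the data, then re-quote the result.
    readings = [2.48, 2.20, 2.51, 2.50, 2.49, 2.80, 2.52, 2.47, 2.53]
    lower, upper = 2.35, 2.65      # interval from the example above

    cleaned = [r for r in readings if lower <= r <= upper]

    mid = (lower + upper) / 2      # 2.50 s
    half = (upper - lower) / 2     # 0.15 s
    print(f"{mid:.2f} +/- {half:.2f} s  ({len(cleaned)} readings kept)")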

When plotted on a graph the data is found to form a normal distribution. The mean value of the data is 2.50 and the standard deviation is 0.05. Therefore the measurement value can be given as 2.50 +/- 0.05 seconds at a confidence of 68%, or as 2.50 +/- 0.10 seconds at a confidence of 95%.
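Putting the example's numbers into the same form (a minimal sketch using the mean and standard deviation quoted above):

    # Quoting the worked example at the two confidence levels.
    m, s = 2.50, 0.05   # mean and standard deviation from the example

    print(f"{m:.2f} +/- {s:.2f} s at ~68% confidence")      # one sd
    print(f"{m:.2f} +/- {2 * s:.2f} s at ~95% confidence")  # two sd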

Confidence value


These values introduce a new term, the confidence value. It states the confidence that the true value actually lies within the uncertainty interval specified.

Rapidly changing values


In the above example we are assuming that the variations are not occurring so quickly as to prevent us from taking precise individual readings. However, in some cases the variations may occur so rapidly that they hinder us from taking any individual measurements at all.

One problem that concerns us in these situations is the response time of some types of measuring instrument. For example, consider an analogue voltmeter with a measuring scale and a needle to indicate the value. If the voltage is varying very rapidly (say, fluctuating between two different values 50 times a second) then the needle will simply not be able to move quickly enough to show these changes and will instead settle on an average position. This would give a false precision to the value obtained from the meter reading.

A digital electronic voltmeter can respond to changes in the measured value much more quickly, and the fluctuations in the voltage would be apparent as fluctuations in some of the digits of the reading. In this case there are two options:

1. Take a manual reading using only the digits up to the least significant digit on the display that remains stable, and take the uncertainty to be +/- 0.5 units of that digit (see the sketch after this list).

2. Use digital logging equipment that can record many individual measurement values in a short period of time. This would give a large set of individual measurements made with a greater degree of precision.

(For example, imagine the actual value is changing about 50 times a second. If the logging equipment makes 200 measurements a second there will be very little fluctuation during each individual measurement, so each measurement can be made using a greater number of stable significant figures.)
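A minimal sketch of option 1 above (the display reading and the digit place are hypothetical values):

    # Option 1: read only the stable digits and take +/- 0.5 units of the
    # least significant stable digit as the uncertainty.
    stable_reading = 3.7        # volts; the hundredths digit was fluctuating
    last_stable_place = 0.1     # place value of the last stable digit

    uncertainty = 0.5 * last_stable_place
    print(f"{stable_reading} +/- {uncertainty} V")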

The net result is that you obtain a relatively large number of more precise measurements, to which you can apply statistical techniques similar to those described above in order to determine a final measurement value.
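A minimal sketch of option 2, assuming the rates given in the example above (the 5.00 V signal level, fluctuation size, and noise level are all hypothetical):

    # A sketch of option 2: log a rapidly fluctuating signal at 200 samples
    # per second, then apply the statistics described earlier. The signal
    # (a 5.00 V level with a rapid fluctuation plus noise) is hypothetical.
    import math
    import random
    from statistics import mean, stdev

    random.seed(1)
    sample_rate = 200                       # logger samples per second
    n = 2 * sample_rate                     # two seconds of logging

    samples = [5.00
               + 0.10 * math.sin(2 * math.pi * 50 * i / sample_rate)
               + random.gauss(0, 0.01)
               for i in range(n)]

    m, s = mean(samples), stdev(samples)
    print(f"{m:.3f} +/- {s:.3f} V at ~68% confidence")
    print(f"{m:.3f} +/- {2 * s:.3f} V at ~95% confidence")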