Once the uncertainty in a measured quantity x has been found, it is often necessary to calculate the consequential uncertainty in some other quantity which depends on x. Consider a function f which depends on x and/or y: f=f(x) or f=f(x,y). A series of measurements of x and/or y will yield the means $\bar{x}$ and/or $\bar{y}$. We will assume that the best estimate of f is $f(\bar{x})$ or $f(\bar{x},\bar{y})$. (But note that the best estimate defined this way is not always the same as the mean of f obtained from the individual values $f(x_i,y_i)$.)
Examples of the way to find $\sigma_f$, the standard deviation of f, are given below:
If f=x+y or f=x-y, then $\sigma_f = \sqrt{\sigma_x^2 + \sigma_y^2}$. The standard deviation for f is obtained by adding the standard deviations for x and y ``in quadrature."
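As a quick numerical illustration of the sum/difference rule (a minimal sketch; the function name and sample values are illustrative, not from the text):

```python
import math

def sigma_sum(sigma_x, sigma_y):
    # Uncertainty of f = x + y or f = x - y:
    # the standard deviations add in quadrature.
    return math.sqrt(sigma_x**2 + sigma_y**2)

# Example: x measured to +/- 0.3 and y to +/- 0.4, f = x + y
print(sigma_sum(0.3, 0.4))  # 0.5, since sqrt(0.09 + 0.16) = sqrt(0.25)
```

Note that the combined uncertainty (0.5) is less than the direct sum of the individual uncertainties (0.7); the derivation below explains why.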
If f=xy or f=x/y, then $\sigma_f/|f| = \sqrt{(\sigma_x/x)^2 + (\sigma_y/y)^2}$. The relative error for f is obtained by adding the relative errors for x and y ``in quadrature."
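The product/quotient rule can be sketched the same way (the function name and sample values are hypothetical):

```python
import math

def sigma_ratio(f, x, sigma_x, y, sigma_y):
    # Uncertainty of f = x*y or f = x/y:
    # the relative errors add in quadrature.
    return abs(f) * math.sqrt((sigma_x / x)**2 + (sigma_y / y)**2)

# Example: x = 3.0 +/- 0.3 (10%), y = 4.0 +/- 0.16 (4%), f = x*y = 12.0
print(sigma_ratio(12.0, 3.0, 0.3, 4.0, 0.16))
```

The combined relative error, $\sqrt{0.10^2 + 0.04^2} \approx 10.8\%$, is dominated by the larger of the two relative errors.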
If $f=Ax^b$, where A and b are precisely-known constants and b is positive, then $\sigma_f/f = b\,\sigma_x/x$. The relative error for f is b times the relative error for x.
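The power rule can be checked with a familiar case, the area of a circle, $f = \pi r^2$ (so A = $\pi$ and b = 2; the function name and sample values here are illustrative, not from the text):

```python
import math

def sigma_power(f, b, x, sigma_x):
    # Uncertainty of f = A * x**b (A and b precisely-known constants):
    # the relative error of f is b times the relative error of x.
    return abs(f) * abs(b) * sigma_x / abs(x)

r, sigma_r = 2.0, 0.05        # radius measured as 2.00 +/- 0.05 (2.5%)
area = math.pi * r**2
print(sigma_power(area, 2, r, sigma_r))  # 5% of the area, twice r's 2.5%
```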
These rules follow from differentials. For $f=Ax^b$, taking differentials gives $df = Abx^{b-1}\,dx$, so $df/f = b\,dx/x$; identifying the differentials df and dx with the standard deviations $\sigma_f$ and $\sigma_x$, we obtain $\sigma_f/f = b\,\sigma_x/x$. As another example, let f=x+y. Then df=dx+dy. We might conclude that $\sigma_f = \sigma_x + \sigma_y$. However, half of the time the deviations in x and y will be in opposite directions (as long as x and y are measured independently), so one expects $\sigma_f$ to be less than $\sigma_x + \sigma_y$. A careful statistical analysis shows that $\sigma_x$ and $\sigma_y$ should be added ``in quadrature": $\sigma_f = \sqrt{\sigma_x^2 + \sigma_y^2}$. All the rules above can easily be obtained by identifying differentials with standard deviations and by replacing addition or subtraction by addition ``in quadrature."
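The claim that independent deviations partially cancel can be verified with a short Monte Carlo sketch (the sample size, seed, and chosen standard deviations are arbitrary, not from the text):

```python
import math
import random

random.seed(0)
sigma_x, sigma_y = 3.0, 4.0
n = 200_000

# Draw independent Gaussian deviations for x and y and form f = x + y.
f = [random.gauss(0, sigma_x) + random.gauss(0, sigma_y) for _ in range(n)]
mean = sum(f) / n
sigma_f = math.sqrt(sum((v - mean)**2 for v in f) / n)

print(sigma_f)                             # close to 5.0
print(math.sqrt(sigma_x**2 + sigma_y**2))  # quadrature prediction: 5.0
print(sigma_x + sigma_y)                   # naive linear sum: 7.0, too large
```

The sampled standard deviation lands on the quadrature prediction, not on the naive sum, because the independent deviations in x and y point in opposite directions about half the time.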