What is Theory of Errors? How is it defined?


What is Theory of Errors? How is it defined? What does Theory of Errors do? What problems does Theory of Errors fail to solve?

Theory of Errors


The theory of errors is a theory used for the analysis and interpretation of scientific measurements. Because all measurements are imperfect in some way, repetitions of a measurement under controlled conditions generally do not give identical results. Hence a set of measurements must usually be combined by some method to provide an estimate of the true value of the quantity being measured. One commonly used method of estimation is the method of least squares. One of the major tasks of the theory of errors is to appraise the reliability of the method of least squares, as well as that of other methods of estimation.
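
As a minimal sketch of least squares in this simplest setting (notation assumed here, not taken from the article): if a single quantity is measured n times, giving values x_1, ..., x_n, the least-squares estimate is the value that minimizes the sum of squared deviations, and it works out to be the arithmetic mean.

```latex
% Least-squares estimate of one quantity from n repeated measurements
\hat{\mu} = \arg\min_{\mu} \sum_{i=1}^{n} (x_i - \mu)^2 ,
\qquad
\frac{d}{d\mu} \sum_{i=1}^{n} (x_i - \mu)^2 = -2 \sum_{i=1}^{n} (x_i - \mu) = 0
\quad\Longrightarrow\quad
\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i .
```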

The theory of errors deals with the variability that is usually attributed to random errors, which are caused by variability inherent in the process of making measurements. Other sources of variability are systematic errors (that is, biases in the measuring procedure) and variability intrinsic to the phenomena being measured, for example, weather conditions.

Galileo, in 1632, was the first to formulate clear, although not mathematical, concepts concerning errors of measurement. He used these concepts to guide his analysis of astronomical observations. In 1755 the British mathematician Thomas Simpson proved that the arithmetic mean of n repeated measurements, which was already in limited use, is superior to a single measurement in a precisely specifiable sense. Problems in the theory of errors were prominent in the classical development of probability theory initiated by Laplace early in the 19th century.

The most important of these problems concerned the probability distribution of the arithmetic mean of a set of measurements, under various possible mathematical assumptions about the individual measurements (or errors). Laplace and later writers proved that if the individual measurements are independent and approximately equally variable, the distribution of the arithmetic mean approaches the normal, or Gaussian, distribution as n increases. This provides an approximation to the distribution of errors incurred when the arithmetic mean is used as an estimate.
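
In modern notation this is the central limit theorem applied to the sample mean. Assuming the measurements x_i are independent with a common mean and a finite common variance (symbols introduced here for illustration, not stated in the article), the standardized mean tends to a standard normal distribution:

```latex
% Central limit theorem for the arithmetic mean of n independent measurements,
% assuming a common mean \mu and finite common variance \sigma^2
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i ,
\qquad
\frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \;\longrightarrow\; N(0,1)
\quad\text{in distribution as } n \to \infty ,
% so that, for large n, \bar{x} is approximately N(\mu, \sigma^2 / n).
```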


The same mathematical results make it plausible that if, as Galileo, Gauss, and others suggested, a measurement error is the net result of numerous independent, largely uncontrollable, and unknown disturbances, then in many circumstances the distribution of errors should be approximately normal.
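
A quick numerical sketch of this idea, using invented numbers rather than anything from the article: model each measurement error as the sum of many small, independent, non-normal disturbances and compare the resulting spread with the normal approximation. The disturbance count and scale below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_errors = 10_000        # number of simulated measurement errors
n_disturbances = 50      # small independent disturbances per error (arbitrary)

# Each error is the net result of many small, independent, uniformly
# distributed disturbances; their sum is approximately normal.
disturbances = rng.uniform(-0.01, 0.01, size=(n_errors, n_disturbances))
errors = disturbances.sum(axis=1)

print("sample mean:", errors.mean())
print("sample std :", errors.std())
# A uniform disturbance over an interval of width w has variance w**2 / 12.
print("normal  std:", np.sqrt(n_disturbances * (0.02**2) / 12))
```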

Among the remaining problems in the theory of errors are the following:

  • (1) The determination of the empirical and theoretical evidence for various assumptions concerning the form of an error distribution.
  • (2) On the basis of such assumptions, determining good or best estimates from available observations, and determining the precision of such estimates.
  • (3) Clarifying the conceptual problems involved in statistical inference.
  • (4) Extending all such considerations to more complicated problems of practical importance, notably the simultaneous estimation of several unknown constants from a given body of data (a brief numerical sketch follows this list).
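
For item (4), a minimal sketch with hypothetical data (the model, the true constants, and the noise level are all invented for illustration): two unknown constants are estimated simultaneously from one body of observations by ordinary least squares, using NumPy's lstsq solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: each observation y depends linearly on two unknown
# constants a and b through y = a * t + b, plus random measurement error.
a_true, b_true = 2.5, -1.0
t = np.linspace(0.0, 10.0, 25)
y = a_true * t + b_true + rng.normal(scale=0.3, size=t.size)

# Simultaneous least-squares estimation of both constants from the same data:
# build the design matrix [t, 1] and solve in the least-squares sense.
X = np.column_stack([t, np.ones_like(t)])
(a_hat, b_hat), residual_ss, _, _ = np.linalg.lstsq(X, y, rcond=None)

print("estimated a:", a_hat)
print("estimated b:", b_hat)
```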
