Least-Squares Approximations
Until now, we have assumed that the data are accurate, but when these values are derived from an experiment, there is some error in the measurements.
Figure 7.8: Resistance vs. temperature graph for the least-squares approximation.
Some students are assigned to find the effect of temperature on the resistance of a metal wire. They have recorded the temperature and resistance values in a table and have plotted their findings, as seen in Fig. 7.8. The graph suggests a linear relationship,

\[ R = aT + b, \]

and values for the parameters $a$ and $b$ can be obtained from the plot.
If someone else were given the same data and asked to draw the line, it is not likely that they would draw exactly the same line, and so they would get different values for $a$ and $b$.
A way of fitting a line to experimental data that avoids this ambiguity is to minimize the deviations of the points from the line. The usual method for doing this is called the least-squares method. The deviations are determined by the distances between the points and the line.
Figure 7.9: Minimizing the deviations by making the sum a minimum.
We might first suppose we could minimize the deviations by making their sum a minimum, but this is not an adequate criterion. Consider the case of only two points (see Fig. 7.9). Obviously, the best line passes through both points, but any line that passes through the midpoint of the segment connecting them also has a sum of errors equal to zero, because the two deviations are equal in magnitude and opposite in sign.
We might instead accept the criterion that we make the magnitude of the maximum error a minimum (the so-called minimax criterion). The usual criterion, however, is to minimize the sum of the squares of the errors, the least-squares principle: squaring the errors keeps positive and negative deviations from canceling and weights large deviations more heavily.
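To make these criteria concrete, the short sketch below computes all three measures of fit for a candidate line; the data points and the candidate parameters are made-up values for illustration only.

```python
# Compare the three goodness-of-fit criteria for a candidate line y = a*x + b.
# The data points and the candidate parameters are hypothetical.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.2, 2.8]   # "experimental" Y values
a, b = 1.0, 0.0             # candidate slope and intercept

errors = [y - (a * x + b) for x, y in zip(xs, ys)]

print("sum of errors:        ", sum(errors))                  # can be near zero even for a poor line
print("max |error| (minimax):", max(abs(e) for e in errors))
print("sum of squared errors:", sum(e * e for e in errors))   # the least-squares criterion
```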
Let $Y$ represent an experimental value, and let $y$ be a value from the equation

\[ y = ax + b, \]

where $x$ is a particular value of the variable assumed to be free of error. We wish to determine the best values for $a$ and $b$ so that the $y$'s predict the function values that correspond to the $x$-values.
Let the errors be defined by

\[ e_i = Y_i - y_i = Y_i - (ax_i + b). \]

The least-squares criterion requires that

\[ S = e_1^2 + e_2^2 + \cdots + e_N^2 = \sum e_i^2 = \sum \left( Y_i - ax_i - b \right)^2 \]

be a minimum, where $N$ is the number of $(x, Y)$-pairs.
We reach the minimum by proper choice of the parameters $a$ and $b$, so they are the variables of the problem.
At a minimum for $S$, the two partial derivatives $\partial S/\partial a$ and $\partial S/\partial b$ will be zero. Remembering that the $x_i$ and $Y_i$ are data points unaffected by our choice of values for $a$ and $b$, we have

\[ \frac{\partial S}{\partial a} = 0 = \sum 2 \left( Y_i - ax_i - b \right)(-x_i), \]
\[ \frac{\partial S}{\partial b} = 0 = \sum 2 \left( Y_i - ax_i - b \right)(-1). \]
Dividing each of these equations by $-2$ and expanding the summations, we get the so-called normal equations

\[ a \sum x_i^2 + b \sum x_i = \sum x_i Y_i, \]
\[ a \sum x_i + bN = \sum Y_i. \]

All summations run from $i = 1$ to $N$.
Solving these equations simultaneously gives the values for the slope $a$ and the intercept $b$.
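As a concrete sketch of this procedure, the following Python function solves the two normal equations in closed form, eliminating $b$ between them to get the slope $a$ and then back-substituting for the intercept:

```python
def least_squares_line(xs, ys):
    """Fit y = a*x + b by solving the normal equations:

        a*sum(x_i^2) + b*sum(x_i) = sum(x_i*Y_i)
        a*sum(x_i)   + b*N        = sum(Y_i)
    """
    n = len(xs)
    sx  = sum(xs)                              # sum of x_i
    sxx = sum(x * x for x in xs)               # sum of x_i^2
    sy  = sum(ys)                              # sum of Y_i
    sxy = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * Y_i

    # Eliminate b between the two equations to get the slope,
    # then back-substitute for the intercept.
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```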
For the data in Fig. 7.8 we find that

\[ N = 5, \quad \sum T_i = 273.1, \quad \sum T_i^2 = 18{,}607.27, \quad \sum R_i = 4438, \quad \sum T_i R_i = 254{,}932.5. \]

Our normal equations are then

\[ 18{,}607.27\,a + 273.1\,b = 254{,}932.5, \]
\[ 273.1\,a + 5\,b = 4438. \]

From these we find $a = 3.395$, $b = 702.2$, and

\[ R = 702 + 3.39\,T. \]
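As a quick check of this worked example, the same $2 \times 2$ system can be solved numerically; a sketch using numpy, with the coefficients taken from the normal equations above:

```python
# Solve the worked example's normal equations as a 2x2 linear system.
import numpy as np

A = np.array([[18607.27, 273.1],
              [273.1,      5.0]])
rhs = np.array([254932.5, 4438.0])

a, b = np.linalg.solve(A, rhs)
print(f"slope a = {a:.3f}, intercept b = {b:.1f}")  # a = 3.395, b = 702.2
```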
Subsections
Nonlinear Data (Curve Fitting)
Least-Squares Polynomials
Millikan oil-drop experiment