Lecture 10
Numerical Techniques: Solving Sets of Equations - Linear Algebra and Matrix Computing I
Kirchhoff's Rules
IKC-MH.55 Scientific Computing with Python, December 29, 2023
Dr. Cem Özdoğan
Engineering Sciences Department
İzmir Kâtip Çelebi University
Contents
1 Solving Sets of Equations
Matrices and Vectors
Some Special Matrices and Their Properties
Elimination Methods
Gaussian Elimination
Solving Sets of Equations
Solving sets of linear equations and eigenvalue problems are the most frequently used numerical procedures when real-world situations are modelled.
An analytical solution may be feasible when the number of unknowns is small.
However, computers can solve large systems of linear equations, for example with 100 unknowns, in a reasonable time.
1 Matrices and Vectors
Reviews the concepts of matrices and vectors in preparation.
2 Elimination Methods
Describes classical methods that transform a system of equations into forms that allow the solution to be obtained by back-substitution, and discusses how the errors of the solution can be minimized.
3 The Inverse of a Matrix
4 Iterative Methods
Matrices and Vectors I
When a system has more than two or three equations, it is difficult to discuss it without using matrices and vectors.
A matrix is a rectangular array of numbers in which not
only the value of the number is important but also its
position in the array.
A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1m} \\ a_{21} & a_{22} & \dots & a_{2m} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nm} \end{pmatrix} = [a_{ij}], \quad i = 1, 2, \dots, n, \quad j = 1, 2, \dots, m

An m x n matrix times an n x 1 vector gives an m x 1 product (m x n times n x 1).
The general relation for Ax = b is

b_i = \sum_{k=1}^{\text{no. of cols.}} a_{ik} x_k, \quad i = 1, 2, \dots, \text{no. of rows}

where A is a matrix, and x and b are (column) vectors.
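To make the component formula concrete, here is a minimal NumPy sketch (NumPy is assumed, as in the course's py-file examples) that accumulates b_i = sum_k a_{ik} x_k with explicit loops and checks the result against the built-in matrix-vector product.

import numpy as np

# A small m x n example: A is 2 x 3, x has 3 components, so b has 2 components.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([1.0, -1.0, 2.0])

m, n = A.shape          # m rows, n columns
b = np.zeros(m)
for i in range(m):      # i runs over the rows (0-based here)
    for k in range(n):  # k runs over the columns
        b[i] += A[i, k] * x[k]

print(b)                # [ 5. 11.]
print(A @ x)            # the same result via NumPy's matrix-vector product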
Matrices and Vectors II
This definition of matrix multiplication permits us to write
the set of linear equations
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2
\vdots
a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n
If the equations in any two rows are interchanged, the solution does not change.
Multiplying the equation in any row by a (nonzero) constant does not change the solution.
Adding or subtracting the equation in one row to or from another row does not change the solution.
Matrices and Vectors III
Much more simply, in matrix notation, Ax = b, where

A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}, \quad
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad
b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}

Example.
Matrix notation:
\begin{pmatrix} 3 & 2 & 4 \\ 1 & 2 & 0 \\ 1 & 3 & 2 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 14 \\ 7 \\ 2 \end{pmatrix}

Set of equations:
3x_1 + 2x_2 + 4x_3 = 14
x_1 + 2x_2 = 7
x_1 + 3x_2 + 2x_3 = 2
Square matrices are particularly important when a
system of equations is to be solved.
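As a quick check of the correspondence between the matrix form and the set of equations, the small system above can be fed directly to NumPy. This is only a sketch: np.linalg.solve is NumPy's built-in solver, used here just to verify the setup; the elimination methods developed in the following sections do the same job explicitly.

import numpy as np

# Coefficient matrix and right-hand-side vector of the example above.
A = np.array([[3.0, 2.0, 4.0],
              [1.0, 2.0, 0.0],
              [1.0, 3.0, 2.0]])
b = np.array([14.0, 7.0, 2.0])

x = np.linalg.solve(A, b)     # library solver, used here only as a check
print(x)
print(np.allclose(A @ x, b))  # True: the computed x satisfies Ax = b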
Some Special Matrices and Their Properties I
Symmetric matrix. A square matrix is called a symmetric matrix when the pairs of elements in similar positions across the diagonal are equal, e.g.,
\begin{pmatrix} 1 & x & y \\ x & 2 & z \\ y & z & 3 \end{pmatrix}
The transpose of a matrix is the matrix obtained by writing the rows as columns or by writing the columns as rows. The symbol for the transpose of matrix A is A^T.
A = \begin{pmatrix} 3 & 1 & 4 \\ 0 & 2 & 3 \\ 1 & 1 & 2 \end{pmatrix}, \quad
A^T = \begin{pmatrix} 3 & 0 & 1 \\ 1 & 2 & 1 \\ 4 & 3 & 2 \end{pmatrix}
If all the elements above/below the diagonal are zero, a matrix is called lower/upper-triangular (L/U):
L = \begin{pmatrix} x & 0 & 0 \\ x & x & 0 \\ x & x & x \end{pmatrix}, \quad
U = \begin{pmatrix} x & x & x \\ 0 & x & x \\ 0 & 0 & x \end{pmatrix}
We will deal with square matrices.
Some Special Matrices and Their Properties II
Sparse matrix. In some important applied problems, only
a few of the elements are nonzero.
Such a matrix is termed a sparse matrix and procedures
that take advantage of this sparseness are of value.
Division of matrices is not defined, but we will discuss the
inverse of a matrix.
The determinant of a square matrix is a number.
Calculating a determinant directly is a lot of work if the matrix is large.
Methods that triangularize a matrix, as described in the next section, are much better ways to get the determinant.
If a matrix B is triangular (either upper or lower), its determinant is just the product of the diagonal elements:
\det(B) = \prod_{i=1}^{n} b_{ii}
\det \begin{pmatrix} 4 & 0 & 0 \\ 6 & 2 & 0 \\ 1 & 3 & 5 \end{pmatrix} = 40
If the coefficient matrix is square and its determinant is nonzero, there is a unique solution.
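A minimal NumPy check of the triangular-determinant rule, using the lower-triangular matrix of the example above (4 * 2 * 5 = 40):

import numpy as np

B = np.array([[4.0, 0.0, 0.0],
              [6.0, 2.0, 0.0],
              [1.0, 3.0, 5.0]])

det_triangular = np.prod(np.diag(B))  # product of the diagonal elements
print(det_triangular)                 # 40.0
print(np.linalg.det(B))               # agrees, up to round-off, with the general routine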
Some Special Matrices and Their Properties III
Determinants can be used to obtain the characteristic polynomial and the eigenvalues of a matrix, which are the roots of that polynomial.
If a matrix is triangular, its eigenvalues are equal to the diagonal elements.
This follows from the fact that
its determinant is just the product of the diagonal elements and
its characteristic polynomial is the product of the terms (a_{ii} - \lambda) with i going from 1 to n, the number of rows of the matrix.
A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix}, \quad
\det(A - \lambda I) = \det \begin{pmatrix} 1-\lambda & 2 & 3 \\ 0 & 4-\lambda & 5 \\ 0 & 0 & 6-\lambda \end{pmatrix} = (1-\lambda)(4-\lambda)(6-\lambda)
whose roots are clearly 1, 4, and 6.
It does not matter if the matrix is upper- or lower-triangular.
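A quick NumPy check of this statement for the triangular matrix above (np.linalg.eigvals is NumPy's general eigenvalue routine):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])

print(np.diag(A))            # [1. 4. 6.], the diagonal elements
print(np.linalg.eigvals(A))  # the same values (possibly in a different order)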
Elimination Methods I
Our goal is to solve a set of linear equations (the variables appear only to the first power).
If we have a system of equations that is already in upper-triangular form,
5x_1 + 3x_2 - 2x_3 = -3
6x_2 + x_3 = -1
2x_3 = 10
then we have the solution immediately: x_1 = 2, x_2 = -1, x_3 = 5.
If not, we change the matrix of coefficients into upper-triangular form. Consider this example of three equations:
4x_1 - 2x_2 + x_3 = 15
-3x_1 - x_2 + 4x_3 = 8
x_1 - x_2 + 3x_3 = 13
Eliminating x_1 from the second and third equations gives
4x_1 - 2x_2 + x_3 = 15
-10x_2 + 19x_3 = 77
-2x_2 + 11x_3 = 37
and then eliminating x_2 from the third equation gives
4x_1 - 2x_2 + x_3 = 15
-10x_2 + 19x_3 = 77
-72x_3 = -216
Now we have a triangular system and the solution is readily obtained:
1 obviously x_3 = 3 from the third equation,
2 back-substitution into the second equation gives x_2 = -2,
3 we continue with back-substitution by substituting both x_2 and x_3 into the first equation to get x_1 = 2.
Notice the values in this example: they are getting bigger!
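Back-substitution on an upper-triangular system is easy to code. Below is a minimal NumPy sketch (the function and variable names are illustrative, not from the course files), applied to the upper-triangular system at the top of this slide; it recovers x = (2, -1, 5).

import numpy as np

def back_substitution(U, c):
    """Solve U x = c for an upper-triangular U with nonzero diagonal."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # solve the last equation first
        s = c[i] - U[i, i + 1:] @ x[i + 1:]  # subtract the already-known terms
        x[i] = s / U[i, i]
    return x

U = np.array([[5.0, 3.0, -2.0],
              [0.0, 6.0,  1.0],
              [0.0, 0.0,  2.0]])
c = np.array([-3.0, -1.0, 10.0])

print(back_substitution(U, c))   # [ 2. -1.  5.]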
Elimination Methods II
The essence of any elimination method is to reduce
the coefficient matrix to a triangular matrix and then
use back-substitution to get the solution.
We now present the same problem, solved in exactly the same way, in matrix notation:
\begin{pmatrix} 4 & -2 & 1 \\ -3 & -1 & 4 \\ 1 & -1 & 3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 15 \\ 8 \\ 13 \end{pmatrix}
So we work with the matrix of coefficients augmented with the right-hand-side vector.
We perform elementary row transformations to convert A to upper-triangular form:
\left(\begin{array}{rrr|r} 4 & -2 & 1 & 15 \\ -3 & -1 & 4 & 8 \\ 1 & -1 & 3 & 13 \end{array}\right)
\xrightarrow{\; 3R_1 + 4R_2, \;\; (-1)R_1 + 4R_3 \;}
\left(\begin{array}{rrr|r} 4 & -2 & 1 & 15 \\ 0 & -10 & 19 & 77 \\ 0 & -2 & 11 & 37 \end{array}\right)
\xrightarrow{\; 2R_2 - 10R_3 \;}
\left(\begin{array}{rrr|r} 4 & -2 & 1 & 15 \\ 0 & -10 & 19 & 77 \\ 0 & 0 & -72 & -216 \end{array}\right)
which corresponds to
4x_1 - 2x_2 + x_3 = 15
-10x_2 + 19x_3 = 77
-72x_3 = -216
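The same triangularization can be carried out on the augmented matrix with a few lines of NumPy. This is only a sketch that applies exactly the row combinations shown above:

import numpy as np

# Augmented matrix [A | b] of the example.
Ab = np.array([[ 4.0, -2.0, 1.0, 15.0],
               [-3.0, -1.0, 4.0,  8.0],
               [ 1.0, -1.0, 3.0, 13.0]])

Ab[1] = 3 * Ab[0] + 4 * Ab[1]    # 3R1 + 4R2  -> new R2
Ab[2] = -1 * Ab[0] + 4 * Ab[2]   # (-1)R1 + 4R3 -> new R3
Ab[2] = 2 * Ab[1] - 10 * Ab[2]   # 2R2 - 10R3 -> new R3
print(Ab)
# [[   4.   -2.    1.   15.]
#  [   0.  -10.   19.   77.]
#  [   0.    0.  -72. -216.]]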
Elimination Methods III
The back-substitution step can be performed quite
mechanically by solving the equations in reverse order.
That is, x_3 = 3, x_2 = -2, x_1 = 2, the same solution as with the non-matrix notation.
During the triangularization step, if a zero is encountered on the diagonal, we cannot use that row to eliminate coefficients below that zero element.
However, in that case, we can continue by interchanging rows and eventually achieve an upper-triangular matrix of coefficients.
The real trouble is finding a zero on the diagonal after we have triangularized.
If that occurs, the back-substitution fails, for we cannot divide by zero.
It also means that the determinant is zero: there is no unique solution.
Elimination Methods IV
Possible approaches for the solution of the following system of
equations with coefficient matrix A.
1 Cramer’s rule.
6x_1 - 3x_2 + x_3 = 11
2x_1 + x_2 - 8x_3 = -15
x_1 - 7x_2 + x_3 = 10
A = \begin{pmatrix} 6 & -3 & 1 \\ 2 & 1 & -8 \\ 1 & -7 & 1 \end{pmatrix}
Let A_j denote the matrix obtained by substituting the right-hand-side vector for the j-th column of A.
For example, the solution for x_1 is expressed as
x_1 = \frac{\det(A_1)}{\det(A)} = \frac{\det \begin{pmatrix} 11 & -3 & 1 \\ -15 & 1 & -8 \\ 10 & -7 & 1 \end{pmatrix}}{\det \begin{pmatrix} 6 & -3 & 1 \\ 2 & 1 & -8 \\ 1 & -7 & 1 \end{pmatrix}} = \frac{-315}{-315} = 1
Similarly, the solutions x_2 and x_3 can be written.
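A minimal NumPy sketch of Cramer's rule for this 3 x 3 system (np.linalg.det does the determinant work; column j of A is replaced by b to form A_j):

import numpy as np

A = np.array([[6.0, -3.0,  1.0],
              [2.0,  1.0, -8.0],
              [1.0, -7.0,  1.0]])
b = np.array([11.0, -15.0, 10.0])

detA = np.linalg.det(A)
x = np.zeros(3)
for j in range(3):
    Aj = A.copy()
    Aj[:, j] = b                    # replace the j-th column by the RHS vector
    x[j] = np.linalg.det(Aj) / detA

print(x)   # [ 1. -1.  2.]  (up to round-off)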
Elimination Methods V
This method is very convenient when the number of
unknowns is as few as 3-5.
However, it is not feasible for systems with a large number of unknowns, since the determinant calculation requires many multiplication operations.
For example, calculating a 20 x 20 determinant with Cramer's rule requires about 10^{20} multiplications!
2 Inverse matrix. Another approach is to use the matrix A^{-1}, the inverse of A.
Multiplying both sides of the equation A\vec{x} = \vec{b} by A^{-1} and using A^{-1}A = I,
A^{-1}A\vec{x} = A^{-1}\vec{b}
\underbrace{A^{-1}A}_{I}\,\vec{x} = A^{-1}\vec{b}
\vec{x} = A^{-1}\vec{b}
However, this approach is also not reasonable since
calculating matrix inverses requires a large number of
operations.
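For completeness, the inverse-matrix route looks like this in NumPy. This is only a sketch; in practice np.linalg.solve, which is based on an elimination/factorization method, is preferred over forming the inverse explicitly.

import numpy as np

A = np.array([[6.0, -3.0,  1.0],
              [2.0,  1.0, -8.0],
              [1.0, -7.0,  1.0]])
b = np.array([11.0, -15.0, 10.0])

x_inv = np.linalg.inv(A) @ b      # x = A^{-1} b  (costly and generally less accurate)
x_solve = np.linalg.solve(A, b)   # elimination-based solve, the recommended route

print(x_inv)     # [ 1. -1.  2.]
print(x_solve)   # [ 1. -1.  2.]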
Gaussian Elimination I
Therefore, methods that avoid computing determinants or inverse matrices should be used to solve systems of linear equations.
Two of the most useful methods are Gaussian elimination and LU decomposition.
Elimination Methods. While the procedure above may be satisfactory for hand computations with small systems, it is inadequate for a large system (the numbers keep getting bigger!).
The method called Gaussian elimination avoids this by subtracting a_{i1}/a_{11} times the first equation from the i-th equation to make the transformed numbers in the first column equal to zero.
We do similarly for the rest of the columns.
Observe that zeros may be created in the diagonal
positions even if they are not present in the original matrix
of coefficients.
Gaussian Elimination II
A useful strategy to avoid (if possible) such zero divisors in the diagonal positions is to rearrange the equations so as to put the coefficient of largest magnitude on the diagonal at each step.
This is called pivoting.
Generalization. Let the system of equations with n unknowns be given as
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2
\vdots
a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n
First stage. Let a_{11} \neq 0 (if not, pivot), then multiply the 1st equation by (a_{21}/a_{11}) and subtract it from the 2nd equation.
Next, multiply the 1st equation by (a_{31}/a_{11}) and subtract it from the 3rd equation.
Repeat this procedure up to the n-th equation.
Gaussian Elimination III
As a result, the variable x_1 is eliminated from the other equations and the new system of equations becomes:
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1
\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \dots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1
\left(a_{32} - \frac{a_{31}}{a_{11}}a_{12}\right)x_2 + \dots + \left(a_{3n} - \frac{a_{31}}{a_{11}}a_{1n}\right)x_n = b_3 - \frac{a_{31}}{a_{11}}b_1
\vdots
\left(a_{n2} - \frac{a_{n1}}{a_{11}}a_{12}\right)x_2 + \dots + \left(a_{nn} - \frac{a_{n1}}{a_{11}}a_{1n}\right)x_n = b_n - \frac{a_{n1}}{a_{11}}b_1
Notice that in this new system of equations after the first stage, the coefficient of the j-th term in the i-th row and the constant b_i are as follows:
a^{(1)}_{ij} = a_{ij} - \frac{a_{i1}}{a_{11}}a_{1j} \quad (i, j = 2, \dots, n)
b^{(1)}_{i} = b_i - \frac{a_{i1}}{a_{11}}b_1 \quad (i = 2, \dots, n)
Gaussian Elimination IV
Next, let a^{(1)}_{22} \neq 0 (if not, pivot) in this new system of equations, then multiply the 2nd equation by (a^{(1)}_{32}/a^{(1)}_{22}) and subtract it from the 3rd equation.
Repeat this procedure up to equation n; after n-1 such stages we obtain the upper-triangular form:
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1
a^{(1)}_{22}x_2 + a^{(1)}_{23}x_3 + \dots + a^{(1)}_{2n}x_n = b^{(1)}_2
a^{(2)}_{33}x_3 + \dots + a^{(2)}_{3n}x_n = b^{(2)}_3
\vdots
a^{(n-1)}_{nn}x_n = b^{(n-1)}_n
As seen, at the first stage the number of unknowns decreases by one in the second and subsequent equations; at the second stage it decreases once more in the third and subsequent equations, and so on. After the (n-1)-th stage, the last equation contains a single unknown.
Gaussian Elimination V
Then the j-th coefficient of the i-th equation at the k-th stage is as follows:
a^{(k)}_{ij} = a^{(k-1)}_{ij} - \frac{a^{(k-1)}_{ik}}{a^{(k-1)}_{kk}} a^{(k-1)}_{kj} \quad (i, j = k+1, \dots, n) \qquad (1)
b^{(k)}_{i} = b^{(k-1)}_{i} - \frac{a^{(k-1)}_{ik}}{a^{(k-1)}_{kk}} b^{(k-1)}_{k} \quad (i = k+1, \dots, n) \qquad (2)
After reaching the upper-triangular form, the solution is almost readily obtained.
From the last equation in the upper-triangular form:
x_n = b^{(n-1)}_n / a^{(n-1)}_{nn}
All other unknowns are then obtained in turn by backward substitution. The general expression is
x_k = \frac{1}{a^{(k-1)}_{kk}} \left( b^{(k-1)}_{k} - \sum_{j=k+1}^{n} a^{(k-1)}_{kj} x_j \right) \quad (k = n-1, \dots, 1) \qquad (3)
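The elimination formulas (1)-(2) and the back-substitution formula (3) translate almost line by line into NumPy. Below is a minimal sketch without pivoting (the function and variable names are illustrative, not taken from the course py-files); it reproduces the solution of the running example.

import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination without pivoting, followed by back-substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: stages k = 1, ..., n-1 (0-based: 0, ..., n-2)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier a_ik / a_kk
            A[i, k:] -= m * A[k, k:]       # Eq. (1)
            b[i] -= m * b[k]               # Eq. (2)
    # Back-substitution, Eq. (3)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x

A = np.array([[ 4.0, -2.0, 1.0],
              [-3.0, -1.0, 4.0],
              [ 1.0, -1.0, 3.0]])
b = np.array([15.0, 8.0, 13.0])
print(gauss_solve(A, b))   # [ 2. -2.  3.]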
Gaussian Elimination VI
Repeat the example of the previous section,
\left(\begin{array}{rrr|r} 4 & -2 & 1 & 15 \\ -3 & -1 & 4 & 8 \\ 1 & -1 & 3 & 13 \end{array}\right)
\xrightarrow{\; R_2 - (-3/4)R_1, \;\; R_3 - (1/4)R_1 \;}
\left(\begin{array}{rrr|r} 4 & -2 & 1 & 15 \\ 0 & -2.5 & 4.75 & 19.25 \\ 0 & -0.5 & 2.75 & 9.25 \end{array}\right)
\xrightarrow{\; R_3 - (-0.5/-2.5)R_2 \;}
\left(\begin{array}{rrr|r} 4 & -2 & 1 & 15 \\ 0 & -2.5 & 4.75 & 19.25 \\ 0 & 0 & 1.8 & 5.40 \end{array}\right)
The method we have just illustrated is called Gaussian elimination.
In this example, no pivoting was required to make the largest coefficients be on the diagonal.
Back-substitution gives us x_3 = 3, x_2 = -2, x_1 = 2.
Gaussian Elimination VII
Example py-file: Show steps in Gaussian elimination and back substitution without pivoting: myGEshow.py
Figure: Steps in Gaussian elimination and back substitution without
pivoting.
Gaussian Elimination VIII
If we had stored the ratio of coefficients in place of zero (we show these in parentheses), our final form would have been
\left(\begin{array}{rrr|r} 4 & -2 & 1 & 15 \\ (-0.75) & -2.5 & 4.75 & 19.25 \\ (0.25) & (0.20) & 1.8 & 5.40 \end{array}\right)
The original matrix can be written as the product
A = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ -0.75 & 1 & 0 \\ 0.25 & 0.20 & 1 \end{pmatrix}}_{L}
\underbrace{\begin{pmatrix} 4 & -2 & 1 \\ 0 & -2.5 & 4.75 \\ 0 & 0 & 1.8 \end{pmatrix}}_{U}
This procedure is called an LU-decomposition of A (A = LU).
We have \det(A) = \det(U) = (4)(-2.5)(1.8) = -18.
When there are m row interchanges,
\det(A) = (-1)^m u_{11} u_{22} \cdots u_{nn}
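A minimal sketch of this multiplier-storing LU-decomposition (unit lower-triangular L, no pivoting; the function name is illustrative), verifying A = LU and the determinant for the example above:

import numpy as np

def lu_nopivot(A):
    """Return L (unit lower-triangular) and U such that A = L U, without pivoting."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # store the multiplier
            U[i, k:] -= L[i, k] * U[k, k:]  # eliminate below the pivot
    return L, U

A = np.array([[ 4.0, -2.0, 1.0],
              [-3.0, -1.0, 4.0],
              [ 1.0, -1.0, 3.0]])
L, U = lu_nopivot(A)
print(L)                          # multipliers -0.75, 0.25, 0.20 below the unit diagonal
print(U)                          # the upper-triangular factor
print(np.allclose(L @ U, A))      # True
print(np.prod(np.diag(U)))        # det(A) = 4 * (-2.5) * 1.8 = -18.0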
Kirchhoff's Rules I
Find the current in each loop by using Kirchhoff's junction and loop rules: 3 unknowns, 3 equations.
(R_1 + R_2 + R_4)i_1 - R_2 i_2 - R_4 i_3 = 0
-R_2 i_1 + (R_2 + R_3 + R_6)i_2 - R_3 i_3 = V_1
-R_4 i_1 - R_3 i_2 + (R_3 + R_4 + R_5)i_3 = V_2
With the values R_1 = R_2 = 1 Ω, R_3 = R_4 = R_5 = R_6 = 2 Ω and V_1 = 1 V, V_2 = 5 V, the system of linear equations becomes:
4i_1 - i_2 - 2i_3 = 0
-i_1 + 5i_2 - 2i_3 = 1
-2i_1 - 2i_2 + 6i_3 = 5
Example py-file: Kirchhoff's Rules in Gaussian elimination & back substitution, no pivoting: myKirchhoff.py
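The course's myKirchhoff.py is not reproduced here; as a stand-in, the following minimal NumPy sketch solves the same 3 x 3 loop-current system (np.linalg.solve plays the role of the elimination-plus-back-substitution steps; the gauss_solve sketch given earlier would return the same currents).

import numpy as np

# Coefficient matrix and right-hand side of the loop-current equations above.
A = np.array([[ 4.0, -1.0, -2.0],
              [-1.0,  5.0, -2.0],
              [-2.0, -2.0,  6.0]])
V = np.array([0.0, 1.0, 5.0])

i_loops = np.linalg.solve(A, V)     # loop currents i1, i2, i3 (in amperes)
print(i_loops)                      # [1.  1.  1.5]
print(np.allclose(A @ i_loops, V))  # True: the currents satisfy all three loop equations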
Kirchhoff's Rules II
Figure: Kirchhoff's Rules in Gaussian elimination & back substitution, no pivoting.
Gaussian Elimination IX
Example. Solve the following system of equations using
Gaussian elimination.
2x_2 + x_4 = 0
2x_1 + 2x_2 + 3x_3 + 2x_4 = -2
4x_1 - 3x_2 + x_4 = -7
6x_1 + x_2 - 6x_3 - 5x_4 = 6
In addition, obtain the determinant of the coefficient matrix and
the LU decomposition of this matrix.
1 The augmented coefficient matrix is
\left(\begin{array}{rrrr|r} 0 & 2 & 0 & 1 & 0 \\ 2 & 2 & 3 & 2 & -2 \\ 4 & -3 & 0 & 1 & -7 \\ 6 & 1 & -6 & -5 & 6 \end{array}\right)
2 We cannot permit a zero in the a_{11} position because that element is the pivot in reducing the first column.
3 We could interchange the first row with any of the other rows to avoid a zero divisor, but interchanging the first and fourth rows is our best choice. This gives
\left(\begin{array}{rrrr|r} 6 & 1 & -6 & -5 & 6 \\ 2 & 2 & 3 & 2 & -2 \\ 4 & -3 & 0 & 1 & -7 \\ 0 & 2 & 0 & 1 & 0 \end{array}\right)
and, after reducing the first column,
\left(\begin{array}{rrrr|r} 6 & 1 & -6 & -5 & 6 \\ 0 & 1.6667 & 5 & 3.6667 & -4 \\ 0 & -3.6667 & 4 & 4.3333 & -11 \\ 0 & 2 & 0 & 1 & 0 \end{array}\right)
Gaussian Elimination X
4 We again interchange before reducing the second column, not because we have a zero divisor, but because we want to preserve accuracy. Interchanging the second and third rows puts the element of largest magnitude on the diagonal:
\left(\begin{array}{rrrr|r} 6 & 1 & -6 & -5 & 6 \\ 0 & -3.6667 & 4 & 4.3333 & -11 \\ 0 & 1.6667 & 5 & 3.6667 & -4 \\ 0 & 2 & 0 & 1 & 0 \end{array}\right)
5 Now we reduce in the second column:
\left(\begin{array}{rrrr|r} 6 & 1 & -6 & -5 & 6 \\ 0 & -3.6667 & 4 & 4.3333 & -11 \\ 0 & 0 & 6.8182 & 5.6364 & -9.0001 \\ 0 & 0 & 2.1818 & 3.3636 & -5.9999 \end{array}\right)
6 No interchange is indicated in the third column. Reducing, we get
\left(\begin{array}{rrrr|r} 6 & 1 & -6 & -5 & 6 \\ 0 & -3.6667 & 4 & 4.3333 & -11 \\ 0 & 0 & 6.8182 & 5.6364 & -9.0001 \\ 0 & 0 & 0 & 1.5600 & -3.1199 \end{array}\right)
Gaussian Elimination XI
7 Back-substitution gives
x_1 = -0.50000, x_2 = 1.0000, x_3 = 0.33325, x_4 = -1.9999.
The correct (exact) answers are
x_1 = -1/2, x_2 = 1, x_3 = 1/3, x_4 = -2.
In this calculation we have carried five significant figures
and rounded each calculation.
Even so, we do not have five-digit accuracy in the
answers. The discrepancy is due to round off.
Example py-file: Show steps in Gaussian elimination and back substitution with pivoting: myGEPivShow.py
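The course's myGEPivShow.py is not reproduced here; the sketch below (illustrative names, partial pivoting added to the earlier elimination routine) performs the row interchanges of steps 3 and 4 automatically and, working in double precision, returns essentially the exact answers of this example.

import numpy as np

def gauss_solve_pivot(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest-magnitude entry of column k
        # (on or below the diagonal) up to the pivot position.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x

A = np.array([[0.0,  2.0,  0.0,  1.0],
              [2.0,  2.0,  3.0,  2.0],
              [4.0, -3.0,  0.0,  1.0],
              [6.0,  1.0, -6.0, -5.0]])
b = np.array([0.0, -2.0, -7.0, 6.0])
print(gauss_solve_pivot(A, b))   # approximately [-0.5  1.  0.33333 -2.]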
Gaussian Elimination XII
Figure: Steps in Gaussian elimination and back substitution with
pivoting.