R

Fred E. Szabo PhD , in The Linear Algebra Survival Guide, 2015

Manipulation

Row reduction of 2-by-2 matrices

Manipulate[MatrixForm[RowReduce[{{a, -3, 5}, {b, 8, 2}}]], {a, -5, 5, 1}, {b, -5, 5, 1}]

We use Manipulate, MatrixForm, and RowReduce to reduce 2-by-3 matrices to reduced row echelon form and display them two-dimensionally. If we let a = b = −5, for example, the manipulation produces the matrix

MatrixForm[RowReduce[{{-5, -3, 5}, {-5, 8, 2}}]]

( 1   0   −46/55
  0   1   −3/11 )
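Outside Mathematica, the same exact reduction can be sketched with Python's standard-library Fraction type (a hypothetical helper of ours, not the book's code):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        # locate a usable pivot at or below row `pivot`
        r = next((i for i in range(pivot, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[pivot], m[r] = m[r], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for i in range(len(m)):
            if i != pivot and m[i][col] != 0:
                f = m[i][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[pivot])]
        pivot += 1
    return m

# the instance a = b = -5 from the manipulation above
# reduces to rows (1, 0, -46/55) and (0, 1, -3/11)
print(rref([[-5, -3, 5], [-5, 8, 2]]))
```

Because Fraction keeps every entry rational, the result matches RowReduce's exact output rather than a floating-point approximation.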

URL: https://www.sciencedirect.com/science/article/pii/B9780124095205500254

P

Fred E. Szabo PhD , in The Linear Algebra Survival Guide, 2015

Manipulation

Exploring the pivot columns of a matrix obtained by row reduction

Manipulate[RowReduce[{{1, 1, 1}, {2, 1, 1}, {a, b, c}}], {a, -5, 5, 1}, {b, -5, 5, 1}, {c, -5, 5, 1}]

We use Manipulate and RowReduce to explore the pivot columns of 3-by-3 matrices. The manipulation displays the pivot columns of the matrix

MatrixForm[A = {{1, 1, 1}, {2, 1, 1}, {-2, -3, -3}}]

(  1   1   1
   2   1   1
  −2  −3  −3 )

obtained by letting a = −2 and b = c = −3. The manipulation shows that the first and second columns of A are pivot columns.

URL: https://www.sciencedirect.com/science/article/pii/B9780124095205500230

Conditioning of Problems and Stability of Algorithms

William Ford , in Numerical Linear Algebra with Applications, 2015

Reasons Why the Study of Numerical Linear Algebra Is Necessary

Floating point roundoff and truncation error cause many problems. We have learned how to perform Gaussian elimination in order to row reduce a matrix to upper-triangular form. Unfortunately, if the pivot element is small, this can lead to serious errors in the solution. We will solve this problem in Chapter 11 by using partial pivoting. Sometimes an algorithm is simply far too slow, and Cramer's Rule is an excellent example. It is useful for theoretical purposes but, as a method of solving a linear system, should not be used for systems larger than 2 × 2. Solving Ax = b by finding A⁻¹ and then computing x = A⁻¹b is a poor approach. If the solution to a single system is required, one step of Gaussian elimination, properly performed, requires far fewer flops and results in less roundoff error. Even if the solution is required for many right-hand sides, we will show in Chapter 11 that first factoring A into a product of a lower- and an upper-triangular matrix and then performing forward and back substitution is much more effective. A classical mistake is to compute eigenvalues by finding the roots of the characteristic polynomial. Polynomial root finding can be very sensitive to roundoff error and give extraordinarily poor results. There are excellent algorithms for computing eigenvalues that we will study in Chapters 18 and 19. Singular values should not be found by computing the eigenvalues of AᵀA; there are excellent algorithms for that purpose that are not subject to as much roundoff error. Lastly, if m ≠ n, a theoretical linear algebra course deals with the system using a reduction to what is called reduced row echelon form, which will tell you whether the system has infinitely many solutions or no solution. These types of systems occur in least-squares problems, where we want a single meaningful solution; we will find one by requiring that x be such that ‖b − Ax‖₂ is minimized.
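As a minimal illustration of the small-pivot problem (a toy example of ours, not taken from the chapter), eliminating without row interchanges can destroy the computed solution of a 2-by-2 system, while partial pivoting recovers it:

```python
def solve2(a, b, pivot=True):
    """Solve a 2x2 system by Gaussian elimination; optionally swap rows so the
    largest entry in column 1 becomes the pivot (partial pivoting)."""
    a = [row[:] for row in a]
    b = b[:]
    if pivot and abs(a[1][0]) > abs(a[0][0]):
        a[0], a[1] = a[1], a[0]
        b[0], b[1] = b[1], b[0]
    f = a[1][0] / a[0][0]          # multiplier; huge when the pivot is tiny
    a[1][1] -= f * a[0][1]
    b[1] -= f * b[0]
    y = b[1] / a[1][1]             # back substitution
    x = (b[0] - a[0][1] * y) / a[0][0]
    return x, y

# exact solution of  1e-20*x + y = 1,  x + y = 2  is x ~ 1, y ~ 1
a, b = [[1e-20, 1.0], [1.0, 1.0]], [1.0, 2.0]
print(solve2(a, b, pivot=False))   # x comes out as 0.0: catastrophic
print(solve2(a, b, pivot=True))    # (1.0, 1.0)
```

Without the swap, the multiplier 1/1e-20 = 1e20 swamps the other entries in roundoff, and the computed x is 0 instead of roughly 1.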

URL: https://www.sciencedirect.com/science/article/pii/B9780123944351000107

Determinants and Eigenvalues

Stephen Andrilli , David Hecker , in Elementary Linear Algebra (Fourth Edition), 2010

Calculating the Determinant by Row Reduction

We will now illustrate how to use row operations to calculate the determinant of a given matrix A by finding an upper triangular matrix B that is row equivalent to A.

Example 4

Let

A = [  0  14   8
       1   3   2
      −2   0   6 ].

We row reduce A to upper triangular form, as follows, keeping track of the effect on the determinant at each step:

A = [  0  14   8
       1   3   2
      −2   0   6 ]

(III): ⟨1⟩ ↔ ⟨2⟩:
B1 = [  1   3   2
        0  14   8
       −2   0   6 ]        (|B1| = −|A|)

(II): ⟨3⟩ ← 2⟨1⟩ + ⟨3⟩:
B2 = [ 1   3   2
       0  14   8
       0   6  10 ]         (|B2| = |B1| = −|A|)

(I): ⟨2⟩ ← −(1/14)⟨2⟩:
B3 = [ 1   3    2
       0  −1  −4/7
       0   6   10 ]        (|B3| = −(1/14)|B2| = +(1/14)|A|)

(II): ⟨3⟩ ← 6⟨2⟩ + ⟨3⟩:
B4 = [ 1   3    2
       0  −1  −4/7
       0   0  46/7 ]       (|B4| = |B3| = +(1/14)|A|)

Because the last matrix B = B4 is in upper triangular form, we stop. (Notice that we do not target the entries above the main diagonal, as in reduced row echelon form.) From Theorem 3.2, |B| = (1)(−1)(46/7) = −46/7. Since |B| = +(1/14)|A|, we see that |A| = 14|B| = 14(−46/7) = −92.

A more convenient method of calculating |A| is to create a variable P (for "product") with initial value 1, and update P appropriately as each row operation is performed. That is, we replace the current value of P by

  P × c      for type (I) row operations
  P × (−1)   for type (III) row operations.

Of course, row operations of type (II) do not affect the determinant. Then, using the final value of P, we can solve for |A| using |B| = P|A|, where B is the upper triangular result of the row reduction process. This method is illustrated in the next example.

Example 5

Let us redo the calculation for |A| in Example 4. We create a variable P and initialize P to 1. Listed below are the row operations used in that example to convert A into upper triangular form B, with |B| = −46/7. After each operation, we update the value of P accordingly.

Row Operation              Effect                  P
(III): ⟨1⟩ ↔ ⟨2⟩           Multiply P by −1        −1
(II): ⟨3⟩ ← 2⟨1⟩ + ⟨3⟩     No change               −1
(I): ⟨2⟩ ← −(1/14)⟨2⟩      Multiply P by −1/14     1/14
(II): ⟨3⟩ ← 6⟨2⟩ + ⟨3⟩     No change               1/14

Then |A| equals the reciprocal of the final value of P times |B|; that is, |A| = (1/P)|B| = 14 × (−46/7) = −92.
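The bookkeeping with P is easy to automate. Here is a sketch in Python with exact rational arithmetic (our own code, not the book's); since it uses only row swaps and type (II) additions, P only ever picks up factors of −1, and |A| = (1/P)|B| as above:

```python
from fractions import Fraction

def det_by_row_reduction(rows):
    """Reduce A to upper triangular B, tracking P; then |A| = (1/P)|B|."""
    m = [[Fraction(x) for x in row] for row in rows]
    n = len(m)
    p = Fraction(1)
    for col in range(n):
        r = next((i for i in range(col, n) if m[i][col] != 0), None)
        if r is None:
            return Fraction(0)        # no pivot available: |A| = 0
        if r != col:                  # type (III) swap: multiply P by -1
            m[col], m[r] = m[r], m[col]
            p = -p
        for i in range(col + 1, n):   # type (II) additions: P unchanged
            f = m[i][col] / m[col][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[col])]
    det_b = Fraction(1)
    for i in range(n):                # |B| is the product of the diagonal
        det_b *= m[i][i]
    return det_b / p                  # |A| = (1/P)|B|

# a sample 3 x 3 matrix; its determinant is -92
print(det_by_row_reduction([[0, 14, 8], [1, 3, 2], [-2, 0, 6]]))
```

Skipping the type (I) scalings keeps P simple without changing the final answer, since |A| = (1/P)|B| holds for any mix of operations.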

URL: https://www.sciencedirect.com/science/article/pii/B9780123747518000226

Vectors and Matrices

Frank E. Harris , in Mathematics for Physical Science and Engineering, 2014

Exercises

For each of the following equation sets:

(a)

Compute the determinant of the coefficients, using Eq. (4.61).

(b)

Row-reduce the coefficient matrix to upper triangular form and either obtain the most general solution to the equations or explain why no solution exists.

(c)

Confirm that the existence and/or uniqueness of the solutions you found in part (b) correspond to the zero (or nonzero) value you found for the determinant.

4.6.1

x - y + 2 z = 5 , 2 x + z = 3 , x + 3 y - z = - 6 .

4.6.2

2 x + 4 y + 3 z = 4 , 3 x + y + 2 z = 3 , 4 x - 2 y + z = 2 .

4.6.3

2 x + 4 y + 3 z = 2 , 3 x + y + 2 z = 1 , 4 x - 2 y + z = - 1 .

4.6.4

x - y + 2 z = 0 , 2 x + z = 0 , x + 3 y - z = 0 .

4.6.5

√6 x + 2√3 y + 3√2 z = 0 , √2 x + 2 y + √6 z = 0 , ( 1 / 2 ) x + y + 1.5 z = 0 .

4.6.6

√6 x + 2√3 y + 3√2 z = 3 , √2 x + 2 y + √6 z = 2 , ( 1 / 2 ) x + y + 1.5 z = 1 .

URL: https://www.sciencedirect.com/science/article/pii/B9780128010006000043

Determinants and Eigenvalues

Stephen Andrilli , David Hecker , in Elementary Linear Algebra (Fifth Edition), 2016

Techniques for Finding the Determinant of an n × n Matrix A

2×2 case: |A| = a₁₁a₂₂ − a₁₂a₂₁ (Sections 2.4 and 3.1).

3×3 case: Basketweaving (Section 3.1).

Row reduction: Row reduce A to an upper triangular form matrix B, keeping track of the effect of each row operation on the determinant using a variable P. Then |A| = (1/P)|B|, using the final value of P. Advantages: easily computerized; relatively efficient (Section 3.2).

Cofactor expansion: Multiply each element along any row or column of A by its cofactor and sum the results. Advantage: useful for matrices with many zero entries. Disadvantage: not as fast as row reduction (Sections 3.1 and 3.3).

Also remember that |A| = 0 if A is row equivalent to a matrix with a row or column of zeroes, or with two identical rows, or with two identical columns.

URL: https://www.sciencedirect.com/science/article/pii/B9780128008539000037

Finite Dimensional Vector Spaces

Stephen Andrilli , David Hecker , in Elementary Linear Algebra (Fourth Edition), 2010

Using Row Reduction to Test for Linear Independence

Notice that in Example 6, the columns of the matrix to the left of the augmentation bar are just the vectors in S. In general, to test a finite set of vectors in ℝⁿ for linear independence, we row reduce the matrix whose columns are the vectors in the set, and then check whether the associated homogeneous system has only the trivial solution. In practice it is not necessary to include the augmentation bar and the column of zeroes to its right, since this column never changes in the row reduction process. Thus, we have

Method to Test for Linear Independence Using Row Reduction (Independence Test Method)

Let S be a finite nonempty set of vectors in ℝⁿ. To determine whether S is linearly independent, perform the following steps:

Step 1: Create the matrix A whose columns are the vectors in S.

Step 2: Find B, the reduced row echelon form of A.

Step 3: If there is a pivot in every column of B, then S is linearly independent. Otherwise, S is linearly dependent.
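The three steps can be sketched in Python with exact arithmetic (an illustrative translation of the method, not the book's code); counting pivots during elimination is equivalent to checking for a pivot in every column of B:

```python
from fractions import Fraction

def rank(rows):
    """Forward elimination in exact arithmetic; the rank is the pivot count."""
    m = [[Fraction(x) for x in row] for row in rows]
    piv = 0
    for col in range(len(m[0])):
        r = next((i for i in range(piv, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[piv], m[r] = m[r], m[piv]
        for i in range(piv + 1, len(m)):
            f = m[i][col] / m[piv][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[piv])]
        piv += 1
    return piv

def is_independent(vectors):
    """Steps 1-3: the vectors become columns; S is independent iff every
    column has a pivot, i.e. iff the rank equals the number of vectors."""
    rows = [list(col) for col in zip(*vectors)]
    return rank(rows) == len(vectors)

print(is_independent([[3, 1, -1], [-5, -2, 2], [2, 2, -1]]))   # True
print(is_independent([[2, 5], [3, 7], [4, -9], [-8, 3]]))      # False
```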

Example 7

Consider the subset S = {[3,1,−1],[−5,−2,2],[2,2,−1]} of ℝ³. Using the Independence Test Method, we row reduce

[  3  −5   2
   1  −2   2
  −1   2  −1 ]   to obtain   [ 1 0 0
                               0 1 0
                               0 0 1 ].

Since we found a pivot in every column, the set S is linearly independent.

Example 8

Consider the subset S = {[2,5],[3,7],[4,−9],[−8,3]} of ℝ². Using the Independence Test Method, we row reduce

[ 2  3   4  −8
  5  7  −9   3 ]   to obtain   [ 1  0  −55   65
                                 0  1   38  −46 ].

Since we have no pivots in columns 3 and 4, the set S is linearly dependent.

In the last example, there are more columns than rows in the matrix we row reduced. Hence, there must ultimately be some column without a pivot, since each pivot is in a different row. In such cases, the original set of vectors must be linearly dependent. This motivates the following result, which we ask you to formally prove as Exercise 16:

Theorem 4.7

If S is any set in ℝⁿ containing k distinct vectors, where k > n, then S is linearly dependent.

The Independence Test Method can be adapted for use on vector spaces other than ℝⁿ, as in the next example. We will prove that the Independence Test Method is actually valid in such cases in Section 5.5.

Example 9

Consider the following subset of M₂₂:

S = { [ −2  3
         1  4 ] , [  1  0
                    −1  1 ] , [ 6   1
                                3  −2 ] , [ 11  3
                                             2  2 ] }.

We determine whether S is linearly independent using the Independence Test Method. First, we represent the 2 × 2 matrices in S as 4-vectors. Placing them in a matrix, using each 4-vector as a column, we get

[ −2   1   6  11
   3   0   1   3
   1  −1   3   2
   4   1  −2   2 ] , which reduces to [ 1  0  0  1/2
                                        0  1  0   3
                                        0  0  1  3/2
                                        0  0  0   0 ].

There is no pivot in column 4. Hence, S is linearly dependent.

URL: https://www.sciencedirect.com/science/article/pii/B9780123747518000202

G

Fred E. Szabo PhD , in The Linear Algebra Survival Guide, 2015

Illustration

Gauss–Jordan elimination applied to a 3-by-5 real matrix

A = {{6, 6, 0, 3, 5}, {4, 0, 5, 3, 8}, {3, 7, 3, 4, 4}};

RowReduce[A]

{{1, 0, 0, 13/64, 199/192}, {0, 1, 0, 19/64, -13/64}, {0, 0, 1, 7/16, 37/48}}
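As a cross-check outside Mathematica, the same Gauss–Jordan reduction can be reproduced exactly with Python's Fraction type (a hypothetical helper of ours, not part of the book):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        r = next((i for i in range(pivot, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[pivot], m[r] = m[r], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for i in range(len(m)):
            if i != pivot and m[i][col] != 0:
                f = m[i][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[pivot])]
        pivot += 1
    return m

A = [[6, 6, 0, 3, 5], [4, 0, 5, 3, 8], [3, 7, 3, 4, 4]]
for row in rref(A):
    print([str(x) for x in row])
# rows: (1, 0, 0, 13/64, 199/192), (0, 1, 0, 19/64, -13/64), (0, 0, 1, 7/16, 37/48)
```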

Gauss–Jordan elimination applied to a 5-by-3 real matrix

A = {{4, 8, 3}, {3, 1, 7}, {0, 0, 1}, {3, 2, 5}, {3, 6, 9}};

RowReduce[A]

{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {0, 0, 0}, {0, 0, 0}}

Gauss–Jordan elimination applied to a 4-by-4 real matrix

A = {{0, 2, 3, 4}, {0, 0, 0, 0}, {6, 7, 0, 8}, {0, 4, 1, 8}};
B = {{6, 7, 0, 8}, {0, 4, 1, 8}, {0, 0, 5/2, 0}, {0, 0, 0, 0}};

MatrixForm[RowReduce[B]]

( 1  0  0  −1
  0  1  0   2
  0  0  1   0
  0  0  0   0 )

RowReduce[B] == RowReduce[A]

True

This last result shows that two different matrices can have the same reduced row echelon form.
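The same check can be carried out mechanically in other systems. A Python sketch with exact arithmetic (our own rref helper, applied to two row-equivalent 4-by-4 matrices as read here):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        r = next((i for i in range(pivot, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[pivot], m[r] = m[r], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for i in range(len(m)):
            if i != pivot and m[i][col] != 0:
                f = m[i][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[pivot])]
        pivot += 1
    return m

A = [[0, 2, 3, 4], [0, 0, 0, 0], [6, 7, 0, 8], [0, 4, 1, 8]]
B = [[6, 7, 0, 8], [0, 4, 1, 8], [0, 0, Fraction(5, 2), 0], [0, 0, 0, 0]]
# both reduce to rows (1,0,0,-1), (0,1,0,2), (0,0,1,0), (0,0,0,0)
print(rref(A) == rref(B))   # True
```

Because the reduced row echelon form of a matrix is unique, equality of the two reductions shows the matrices are row equivalent even though their entries differ.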

URL: https://www.sciencedirect.com/science/article/pii/B978012409520550014X

Systems of Ordinary Differential Equations

Martha L. Abell , James P. Braselton , in Differential Equations with Mathematica (Fourth Edition), 2016

6.1.4 Systems of Linear Equations

Mathematica offers numerous options for solving systems of linear equations. Usually, Solve[systemofequations, variables] will quickly solve a linear system. Other times, you may prefer different calculations. For example, the linear system Ax = b has a unique solution if |A| ≠ 0. In this case A is invertible and x = A⁻¹b. Thus, provided you have defined matrixa and vectorb, x is given by Inverse[matrixa].vectorb. Another option is to form the augmented matrix (A|b), or A if b = 0, and then use RowReduce, which row reduces a matrix to reduced row echelon form.

Example 6.1.7

Solve each of the following systems. (a) 3x − y + 3z = 0, −6x − y − 3z = 0, −6x + y − 5z = 0; (b) 2x − y + z = 3, x + 2y − z = 2, x − y + 2z = 5; and (c) 2x − y + z = 3, x + 2y − z = 2, 5x + 5y − 2z = 0.

Solution

(a) We first use Solve to solve the system of equations.

Solve[{3x - y + 3z == 0, -6x - y - 3z == 0, -6x + y - 5z == 0}]

{{y → −(3x)/2, z → −(3x)/2}}

The result indicates that x is a free variable and that y and z depend on x. Because x is free and the authors prefer to avoid fractions, we choose x = 2t. Then y = −3t and z = −3t, so x = (x, y, z)ᵀ = (2t, −3t, −3t)ᵀ = t(2, −3, −3)ᵀ. This problem has infinitely many solutions; notice, however, that they are all scalar multiples of the single vector (2, −3, −3)ᵀ.

A different way of solving the problem is to notice that the coefficient matrix for this linear homogeneous system (the augmented matrix with its column of zeroes omitted) is

(  3  −1   3
  −6  −1  −3
  −6   1  −5 ).

We use RowReduce to row reduce this matrix to reduced row echelon form.

capa={{2, −1, 3}, {−6, −2, −3}, {−6, 1, −6}}; capb=capa−(−1)IdentityMatrix[3]

{{3, −1, 3}, {−6, −1, −3}, {−6, 1, −5}}

MatrixForm[capb]

(  3  −1   3
  −6  −1  −3
  −6   1  −5 )

RowReduce[capb]

{{1, 0, 2/3}, {0, 1, -1}, {0, 0, 0}}

If there are no restrictions on a variable, you can choose any variable to be free. However, be cautious: sometimes there are conditions that eliminate one variable from being a free variable.

Using x = (x, y, z)ᵀ notation, the result indicates that x + (2/3)z = 0, y − z = 0, and 0 = 0, which means that there is one free variable. We choose z to be free and set z = 3t. With this, we have that y = 3t and x = −2t, so x = (x, y, z)ᵀ = (−2t, 3t, 3t)ᵀ = t(−2, 3, 3)ᵀ, which is equivalent to the result obtained previously.

To see that the results are equivalent, choose t = −1.

For (b) we proceed in a similar way. With Solve,

Clear[x, y, z]
Solve[{2x - y + z == 3, x + 2y - z == 2, x - y + 2z == 5}]

{{x → 1, y → 2, z → 3}}

we see that the solution to the system of equations is x = 1, y = 2, and z = 3. Alternatively, we write the augmented matrix for the system:

( 2  −1   1 | 3
  1   2  −1 | 2
  1  −1   2 | 5 )

and then use RowReduce to row reduce the augmented matrix to reduced row echelon form.

a={{2, −1, 1, 3}, {1, 2, −1, 2}, {1, −1, 2, 5}};

RowReduce[a]

{{1, 0, 0, 1}, {0, 1, 0, 2}, {0, 0, 1, 3}}

Row one indicates that x = 1, row two indicates that y = 2, and row three indicates that z = 3.

For (c), we proceed in the same way as before. First, we use Solve. The result indicates that there is not a solution to the system.

Clear[x, y, z]
Solve[{2x - y + z == 3, x + 2y - z == 2, 5x + 5y - 2z == 0}, {x, y, z}]

{}

On the other hand, forming the augmented matrix

( 2  −1   1 | 3
  1   2  −1 | 2
  5   5  −2 | 0 )

and using RowReduce,

step1=RowReduce[{{2, −1, 1, 3}, {1, 2, −1, 2}, {5, 5, −2, 0}}]

{{1, 0, 1/5, 0}, {0, 1, -3/5, 0}, {0, 0, 0, 1}}

MatrixForm[step1]

( 1  0   1/5  0
  0  1  −3/5  0
  0  0   0    1 )

shows us that x + (1/5)z = 0, y − (3/5)z = 0, and 0 = 1. The system is inconsistent and does not have a solution.

Usually, you will obtain good results with Solve or with functions like RowReduce or Inverse. However, the different approaches may give you different insight into the problem.
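The augmented-matrix approach is easy to mimic outside Mathematica. In this sketch (Python's Fraction and a hypothetical rref helper of ours), a row whose only nonzero entry sits in the last column, i.e. a row reading 0 = 1, flags an inconsistent system:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        r = next((i for i in range(pivot, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[pivot], m[r] = m[r], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for i in range(len(m)):
            if i != pivot and m[i][col] != 0:
                f = m[i][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[pivot])]
        pivot += 1
    return m

# system (b): unique solution; the last column reads x = 1, y = 2, z = 3
rb = rref([[2, -1, 1, 3], [1, 2, -1, 2], [1, -1, 2, 5]])

# system (c): a row (0 0 0 | 1) appears, so the system is inconsistent
rc = rref([[2, -1, 1, 3], [1, 2, -1, 2], [5, 5, -2, 0]])
inconsistent = any(all(v == 0 for v in row[:-1]) and row[-1] != 0 for row in rc)
print(rb, inconsistent)
```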

URL: https://www.sciencedirect.com/science/article/pii/B9780128047767000061

Matrices and vectors: topics from linear algebra and vector calculus

Martha L. Abell , James P. Braselton , in Mathematica by Example (Sixth Edition), 2022

5.3.1 Fundamental subspaces associated with matrices

Let A = (a_ij) be an n × m matrix with entry a_ij in the ith row and jth column. The row space of A, row(A), is the span of the rows of A; the column space of A, col(A), is the span of the columns of A. If A is any matrix, then the dimension of the column space of A is equal to the dimension of the row space of A. The dimension of the row space (column space) of a matrix A is called the rank of A. The nullspace of A is the set of solutions to the system of equations Ax = 0. The nullspace of A is a subspace, and its dimension is called the nullity of A. The rank of A is equal to the number of nonzero rows in the row echelon form of A; the nullity of A is equal to the number of columns of A minus its rank, which for a square matrix equals the number of zero rows in the row echelon form of A. Thus the sum of the rank of A and the nullity of A is equal to the number of columns of A.

1.

NullSpace[A] returns a list of vectors, which form a basis for the nullspace (or kernel) of the matrix A.

2.

RowReduce[A] yields the reduced row echelon form of the matrix A.

Example 5.23

Place the matrix

A = ( 1  1  2  0  1
      2  2  0  0  2
      2  1  1  0  1
      1  1  1  2  2
      1  2  2  2  0 )

in reduced row echelon form. What is the rank of A? Find a basis for the nullspace of A.

Solution

We begin by defining the matrix matrixa. Then RowReduce is used to place matrixa in reduced row echelon form.

capa = {{1, 1, 2, 0, 1}, {2, 2, 0, 0, 2}, {2, 1, 1, 0, 1}, {1, 1, 1, 2, 2}, {1, 2, 2, 2, 0}};

RowReduce[capa] // MatrixForm

( 1  0  0  −2  0
  0  1  0  −2  0
  0  0  1  −2  0
  0  0  0   0  1
  0  0  0   0  0 )

Because the row-reduced form of matrixa contains four nonzero rows, the rank of A is 4, and thus the nullity is 1. We obtain a basis for the nullspace with NullSpace.

NullSpace[capa]

{{2, 2, 2, 1, 0}}

As expected, because the nullity is 1, a basis for the nullspace contains one vector.  
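The rank and nullity bookkeeping can be sketched in Python for any small matrix; the matrix below is our own illustrative example (not the one above), reduced with a hypothetical rref helper:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        r = next((i for i in range(pivot, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[pivot], m[r] = m[r], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for i in range(len(m)):
            if i != pivot and m[i][col] != 0:
                f = m[i][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[pivot])]
        pivot += 1
    return m

A = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]   # illustrative: row 2 = 2 * row 1
R = rref(A)
rank = sum(1 for row in R if any(v != 0 for v in row))
nullity = len(A[0]) - rank               # rank + nullity = number of columns
v = [1, -2, 1]                           # spans the nullspace: check A v = 0
residual = [sum(a * x for a, x in zip(row, v)) for row in A]
print(rank, nullity, residual)           # 2 1 [0, 0, 0]
```

As in the example above, the rank is the number of nonzero rows of the reduced form, and the nullspace basis has nullity-many vectors.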

Example 5.24

Find a basis for the column space of

B = ( 1  2  2  1  2
      1  1  2  2  2
      1  0  0  2  1
      0  0  0  2  0
      2  1  0  1  2 ).

Solution

A basis for the column space of B is the same as a basis for the row space of the transpose of B. We begin by defining matrixb, and then using Transpose to compute the transpose of matrixb, naming the resulting output tb.

matrixb = {{1, 2, 2, 1, 2}, {1, 1, 2, 2, 2}, {1, 0, 0, 2, 1}, {0, 0, 0, 2, 0}, {2, 1, 0, 1, 2}};

tb = Transpose[matrixb]

{{1, 1, 1, 0, 2}, {2, 1, 0, 0, 1}, {2, 2, 0, 0, 0}, {1, 2, 2, 2, 1}, {2, 2, 1, 0, 2}}

Next, we use RowReduce to row reduce tb and name the result rrtb. A basis for the column space consists of the first four (nonzero) rows of rrtb. We also use Transpose to show that the first four rows of rrtb are the same as the first four columns of the transpose of rrtb. Thus the jth column of a matrix A can be extracted from A with Transpose[A][[j]].

rrtb = RowReduce[tb];

Transpose[rrtb] // MatrixForm

( 1    0    0  0  0
  0    1    0  0  0
  0    0    1  0  0
  0    0    0  1  0
  1/3  1/3  2  3  0 )

We extract the first four elements of rrtb with Take. The results correspond to a basis for the column space of B.

Take[rrtb, 4]

{{1, 0, 0, 0, 1/3}, {0, 1, 0, 0, 1/3}, {0, 0, 1, 0, 2}, {0, 0, 0, 1, 3}}

URL: https://www.sciencedirect.com/science/article/pii/B9780128241639000100