§4.8. Linear dependence of matrix rows and columns

Note that the rows and columns of an m×n matrix can obviously be viewed as arithmetic vectors of dimensions n and m, respectively. Thus, an m×n matrix can be interpreted as a collection of m n-dimensional or n m-dimensional arithmetic vectors. By analogy with geometric vectors, we introduce the notions of linear dependence and linear independence of the rows and columns of a matrix.

4.8.1. Definition. A row e is called a linear combination of the rows e1, e2, …, es of a matrix with coefficients α1, α2, …, αs if for all elements of these rows the following equality holds:

e = α1 e1 + α2 e2 + … + αs es,

i.e. elementwise, ej = α1 e1j + α2 e2j + … + αs esj, j = 1, 2, …, n.

4.8.2. Definition.

Rows e1, e2, …, es are called linearly dependent if there exists a non-trivial linear combination of them equal to the zero row, i.e. if there exist numbers α1, α2, …, αs, not all equal to zero, such that

α1 e1 + α2 e2 + … + αs es = 0,

where 0 denotes the zero row.

4.8.3. Definition.

Rows e1, e2, …, es are called linearly independent if only their trivial linear combination equals the zero row, i.e. if

α1 e1 + α2 e2 + … + αs es = 0 implies α1 = α2 = … = αs = 0.

4.8.4. Theorem. (Criterion of linear dependence of matrix rows)

Rows are linearly dependent if and only if one of them is a linear combination of the others.

Proof:

Necessity. Let the rows e1, …, es be linearly dependent; then there exists a non-trivial linear combination of them equal to the zero row:

α1 e1 + α2 e2 + … + αs es = 0.

Without loss of generality we may assume that the coefficient α1 of this linear combination is different from zero (otherwise the rows can be renumbered). Dividing this relation by α1, we obtain

e1 = (−α2/α1) e2 + … + (−αs/α1) es,

i.e. the first row is a linear combination of the others.

Sufficiency. Let one of the rows, say e1, be a linear combination of the others:

e1 = α2 e2 + … + αs es;

then the non-trivial linear combination 1·e1 + (−α2) e2 + … + (−αs) es of the rows
e1, …, es equals the zero row:

1·e1 + (−α2) e2 + … + (−αs) es = 0,

so the rows e1, …, es are linearly dependent, which is what had to be proved.

Remark.

Analogous definitions and statements can be formulated for the columns of a matrix.
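The definitions above can be checked mechanically. The sketch below is our own illustration (the names `rank` and `linearly_dependent` are not from the text): it tests rows for linear dependence via exact rational Gaussian elimination, using the fact that s rows are linearly dependent exactly when fewer than s of them are independent.

```python
from fractions import Fraction

def rank(rows):
    """Number of linearly independent rows, via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # find a pivot in column `col` at or below row `r`
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linearly_dependent(rows):
    # rows are dependent exactly when fewer than len(rows) are independent
    return rank(rows) < len(rows)

# e3 = e1 + 2*e2, so the three rows are linearly dependent
e1, e2, e3 = [1, 0, 1], [0, 1, 1], [1, 2, 3]
print(linearly_dependent([e1, e2, e3]))   # True
print(linearly_dependent([e1, e2]))       # False
```

Exact `Fraction` arithmetic avoids the false verdicts that floating-point round-off can produce on nearly dependent rows.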

§4.9. Matrix rank.

4.9.1. Definition. A minor of order k of a matrix of size m×n is a determinant of order k whose elements stand at the intersections of any k rows and any k columns of the matrix.

4.9.2. Definition. A nonzero minor of order r of a matrix of size m×n is called a basis minor if all minors of the matrix of order r + 1 are
equal to zero (or no minors of order r + 1 exist).

Remark. A matrix may have several basis minors. Obviously, they are all of the same order. Note also that if for a matrix of size m×n all minors of order k equal zero, then all minors of order k + 1 equal zero as well: expanding a minor of order k + 1 along a row expresses it through minors of order k, each of which is zero.

4.9.3. Definition. The rows (columns) that form a basis minor are called basis rows (columns).

4.9.4. Definition. The rank of a matrix is the order of its basis minor. The rank of a matrix A is denoted rg A
or
r(A).

Remark.

Note that, owing to the equal status of the rows and columns of a determinant, the rank of a matrix does not change under transposition.

4.9.5. Theorem. (Invariance of the rank of a matrix under elementary transformations)

The rank of a matrix does not change under elementary transformations.

Without proof.

4.9.6. Theorem. (On the basis minor)

The basis rows (columns) are linearly independent. Any row (column) of a matrix can be represented as a linear combination of its basis rows (columns).

Proof:

We give the proof for rows; for columns the argument is analogous.

Let the rank of a matrix of size m×n equal r, and let M be its basis minor. Without loss of generality we assume that the basis minor stands in the upper left corner of the matrix (otherwise the matrix can be brought to this form by elementary transformations):

.

First we prove the linear independence of the basis rows. The proof is by contradiction. Suppose the basis rows are linearly dependent. It then follows from Theorem 4.8.4 that one of these rows can be represented as a linear combination of the other basis rows. If we subtract this linear combination from that row, we obtain a zero row, which means that the minor M
equals zero; this contradicts the definition of the basis minor. We have thus reached a contradiction, and the linear independence of the basis rows is proved.

Let us now prove that every row of the matrix can be represented as a linear combination of the basis rows. If the number k of the row under consideration satisfies 1 ≤ k ≤ r, then, obviously, the row can be represented as a linear combination with coefficient 1 for that row and zero coefficients for the other rows. Let us now show that for r + 1 ≤ k ≤ m the k-th row can also be represented as a linear combination of the basis rows. Consider the minor M1 of the matrix obtained from the basis minor M
by adjoining the k-th row and the l-th column:

We show that M1 = 0
for any row number k from r + 1 to m
and for any column number l from 1 to n.

Indeed, if the column number l lies between 1 and r, then M1 is a determinant with two identical columns, which is obviously equal to zero. If l lies between r + 1 and n, and the row number k lies between r + 1
and m
, then
M1 is a minor of the original matrix of order higher than the order of the basis minor, and hence it equals zero by the definition of the basis minor. Thus M1
equals zero for any row number k from r + 1
to m
and for any column number l from 1 to n. Expanding M1 along its last column, we obtain:

a1l A1 + a2l A2 + … + arl Ar + akl Ak = 0.

Here
A1, …, Ar, Ak are the corresponding algebraic complements; they do not depend on the column number l, since they are formed from elements of the first r columns only. Note that
Ak = M ≠ 0, since M
is the basis minor. Hence the elements of the row k can be represented as a linear combination of the corresponding elements of the basis rows with coefficients that do not depend on the column number l:

akl = (−A1/M) a1l + (−A2/M) a2l + … + (−Ar/M) arl,  l = 1, 2, …, n.

Thus we have shown that an arbitrary row of the matrix can be represented as a linear combination of its basis rows. The theorem is proven.
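The construction used in the proof can be traced on a concrete example. The sketch below is our own illustration (the 3×4 matrix and all names are assumptions): for a matrix whose basis minor is the upper-left 2×2 corner, it recovers, by Cramer's rule on the basis columns, the coefficients that express the third row through the two basis rows, and then checks the expression on every column.

```python
from fractions import Fraction as F

# A 3x4 matrix of rank 2 whose basis minor sits in the upper-left 2x2 corner.
A = [[1, 2, 0, 1],
     [0, 1, 1, 1],
     [2, 7, 3, 5]]   # by construction, row 3 = 2*row1 + 3*row2

# Solve the 2x2 system given by the two basis columns (Cramer's rule):
#   c1*a11 + c2*a21 = a31,   c1*a12 + c2*a22 = a32
a, b = F(A[0][0]), F(A[1][0])
c, d = F(A[0][1]), F(A[1][1])
det = a * d - b * c                      # the basis minor, must be nonzero
c1 = (F(A[2][0]) * d - b * F(A[2][1])) / det
c2 = (a * F(A[2][1]) - F(A[2][0]) * c) / det

# The same coefficients must reproduce EVERY element of the third row,
# exactly as the theorem asserts (they do not depend on the column number).
ok = all(c1 * A[0][j] + c2 * A[1][j] == A[2][j] for j in range(4))
print(c1, c2, ok)   # 2 3 True
```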

Lecture 13

4.9.7. Theorem. (On the rank of a nonsingular square matrix)

A square matrix is nonsingular if and only if the rank of the matrix equals its size.

Proof:

Necessity. Let a square matrix of size n be nonsingular; then its determinant is different from zero, and therefore the determinant of the whole matrix is its basis minor, i.e. r = n.

Sufficiency. Let r = n.
Then the order of the basis minor equals the size of the matrix; consequently, the basis minor is the determinant of the whole matrix, and
it is different from zero by the definition of the basis minor.

Corollary.

A square matrix is nonsingular if and only if its rows are linearly independent.

Proof:

Necessity. Since the square matrix is nonsingular, its rank equals its size,
r = n, and the determinant of the whole matrix is its basis minor. Hence, by Theorem 4.9.6 on the basis minor, the rows of the matrix are linearly independent.

Sufficiency. Since all n rows of the matrix are linearly independent, its rank is not less than its size, which means
r = n. Hence, by the preceding Theorem 4.9.7, the matrix is nonsingular.

4.9.8. The method of bordering minors for finding the rank of a matrix.

Note that this method is already implicitly contained in the proof of the theorem on the basis minor.

4.9.8.1. Definition. A minor M1
is called a bordering minor for the minor M
if it is obtained from the minor M
by adjoining one new row and one new column of the original matrix.

4.9.8.2. Procedure for finding the rank of a matrix by the method of bordering minors.

    Find some minor of the matrix that is different from zero.

    Compute all the minors bordering it.

    If they all equal zero, then the current minor is a basis minor, and the rank of the matrix equals the order of the current minor.

    If among the bordering minors there is at least one different from zero, it is taken as the current minor and the procedure is repeated.

Let us find, by the method of bordering minors, the rank of the matrix

.

It is easy to point out a minor of the second order different from zero, for example,

.

We compute the minors that border it:



Since all the bordering minors of the third order equal zero, the chosen second-order minor
is a basis minor, and the rank of the matrix equals 2.

Remark. The example considered shows that the method is laborious. Therefore in practice the method of elementary transformations, described below, is used much more often.
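As an illustration, the bordering-minors procedure can be implemented directly. The sketch below is our own (the names `det`, `minor`, `rank_by_bordering` and the sample matrix are assumptions, not from the source); it uses exact rational arithmetic and enlarges the current minor until every bordering minor vanishes:

```python
from fractions import Fraction
from itertools import product

def det(m):
    """Determinant by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for c in range(n):
        piv = next((i for i in range(c, n) if m[i][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        d *= m[c][c]
        for i in range(c + 1, n):
            f = m[i][c] / m[c][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[c])]
    return sign * d

def minor(A, rows, cols):
    """Minor of A on the given row and column index lists."""
    return det([[A[i][j] for j in cols] for i in rows])

def rank_by_bordering(A):
    m, n = len(A), len(A[0])
    # step 1: find a nonzero first-order minor (a nonzero element)
    start = next(((i, j) for i in range(m) for j in range(n) if A[i][j] != 0), None)
    if start is None:
        return 0
    rows, cols = [start[0]], [start[1]]
    while True:
        # step 2: try every bordering minor (one extra row, one extra column)
        for i, j in product(range(m), range(n)):
            if i not in rows and j not in cols and minor(A, rows + [i], cols + [j]) != 0:
                rows, cols = rows + [i], cols + [j]   # nonzero border: enlarge and repeat
                break
        else:
            return len(rows)   # all bordering minors vanish: current minor is basic

A = [[2, -4, 3, 1],
     [1, -2, 1, -4],
     [0, 1, -1, 3],
     [4, -7, 4, -4]]   # here row4 = row1 + 2*row2 + row3, so the rank is 3
print(rank_by_bordering(A))   # 3
```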

4.9.9. Finding the rank of a matrix by means of elementary transformations.

By Theorem 4.9.5 the rank of a matrix does not change under elementary transformations (i.e., equivalent matrices have equal ranks). Therefore the rank of a matrix equals the rank of the echelon matrix obtained from the original one by elementary transformations. The rank of an echelon matrix is obviously equal to the number of its nonzero rows.

Let us determine the rank of the matrix

by means of elementary transformations.

We reduce the matrix to echelon form:

The number of nonzero rows of the resulting echelon matrix is three, so the rank of the matrix equals 3.
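A minimal sketch of this method (our own code, not from the source): reduce the matrix to echelon form by elementary row transformations, then count the nonzero rows.

```python
from fractions import Fraction

def echelon(rows):
    """Reduce a matrix to row echelon form by elementary row transformations."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]          # swap two rows
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            # subtract a multiple of the pivot row
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def rank(rows):
    # the rank equals the number of nonzero rows of the echelon matrix
    return sum(any(x != 0 for x in row) for row in echelon(rows))

A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]
print(rank(A))   # 2
```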

4.9.10. Rank of a system of vectors of a linear space.

Consider a system of vectors
of some linear space. If the system is not linearly independent, one can single out linearly independent subsystems of it.

4.9.10.1. Definition. The rank of a system of vectors
of a linear space is the maximum number of linearly independent vectors of the system. The rank of a system of vectors
is denoted
.

Remark. If a system of vectors is linearly independent, then its rank equals the number of vectors in the system.

Let us formulate a theorem establishing the connection between the rank of a system of vectors of a linear space and the rank of a matrix.

4.9.10.2. Theorem. (On the rank of a system of vectors of a linear space)

The rank of a system of vectors of a linear space equals the rank of the matrix whose rows are the coordinates of these vectors in any basis of the linear space.

Without proof.

Corollary.

A system of vectors of a linear space is linearly independent if and only if the rank of the matrix whose rows (or columns) are the coordinates of the vectors in any basis equals the number of vectors of the system.

The proof is obvious.

4.9.10.3. Theorem. (On the dimension of the linear span)

The dimension of the linear span of a system of vectors
of a linear space equals the rank of this system of vectors:

Without proof.
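By the theorem of 4.9.10.2, the rank of a system of vectors can be computed as the rank of its coordinate matrix. A small sketch (our own illustration; the vectors and the helper `rank` are assumptions):

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Coordinates of four vectors of R^3 in the standard basis, one per row.
x1, x2, x3, x4 = [1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 3, 0]
system = [x1, x2, x3, x4]

# Rank of the system = rank of the coordinate matrix.  Here every vector
# lies in the plane z = 0, so the rank (= dimension of the span) is 2.
print(rank(system))   # 2
```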

The rows and columns of a matrix can be viewed as row matrices and, respectively, column matrices, i.e. as matrices in their own right. Therefore linear operations can be performed on them, as on any other matrices. The only restriction on the addition operation is that the rows (columns) must be of the same length (height), but this condition is always satisfied for the rows (columns) of a single matrix.

Linear operations on rows (columns) make it possible to form expressions α1 a1 + … + αs as, where a1, …, as is an arbitrary set of rows (columns) of the same length (height) and α1, …, αs are real numbers. Expressions of this kind are called linear combinations of rows (columns).

Definition 12.3. Rows (columns) a1, …, as are called linearly independent if the equality

α1 a1 + … + αs as = 0, (12.1)

where 0 on the right-hand side is the zero row (column), is possible only with α1 = … = αs = 0. Otherwise, i.e. when there exist real numbers α1, …, αs, not all equal to zero simultaneously, for which equality (12.1) holds, these rows (columns) are called linearly dependent.

The following statement is known as the criterion of linear dependence.

Theorem 12.3. Rows (columns) a1, …, as, s > 1, are linearly dependent if and only if one of them is a linear combination of the others.

◄ We carry out the proof for rows; for columns it is analogous.

Necessity. If the rows a1, …, as are linearly dependent, then, by Definition 12.3, there exist real numbers α1, …, αs, not all equal to zero simultaneously, such that α1 a1 + … + αs as = 0. Choose a nonzero coefficient αi; for definiteness, let it be α1. Then α1 a1 = (−α2)a2 + … + (−αs)as and hence a1 = (−α2/α1)a2 + … + (−αs/α1)as, i.e. the row a1 is represented as a linear combination of the other rows.

Sufficiency. Let, for example, a1 = λ2 a2 + … + λs as. Then 1·a1 + (−λ2)a2 + … + (−λs)as = 0. The first coefficient of this linear combination equals 1, i.e. it is nonzero. By Definition 12.3, the rows a1, …, as are linearly dependent. ►

Theorem 12.4. Let the rows (columns) a1, …, as be given, and let at least one of the rows (columns) b1, …, bl be their linear combination. Then all the rows (columns) a1, …, as, b1, …, bl are linearly dependent.

◄ Let, for example, b1 be a linear combination of a1, …, as, i.e. b1 = α1 a1 + … + αs as. We add to this linear combination the rows (columns) b2, …, bl (for l > 1) with zero coefficients: b1 = α1 a1 + … + αs as + 0·b2 + … + 0·bl. By Theorem 12.3, the rows (columns) a1, …, as, b1, …, bl are linearly dependent. ►
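Theorem 12.4 is easy to check numerically: append to an independent pair a row that is their linear combination, and the enlarged system becomes dependent. A sketch (our own, with made-up rows and helper names):

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def dependent(rows):
    return rank(rows) < len(rows)

a1, a2 = [1, 0, 2], [0, 1, 1]
b1 = [3 * x + 2 * y for x, y in zip(a1, a2)]   # b1 = 3*a1 + 2*a2
b2 = [5, 5, 5]                                  # an unrelated extra row

print(dependent([a1, a2]))            # False: a1, a2 are independent
print(dependent([a1, a2, b1, b2]))    # True: the enlarged system is dependent
```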

with some numbers as coefficients (some of them, or even all, may equal zero). This means that the following equalities hold between the elements of the rows:

or .

From (3.3.1) it follows that

(3.3.2)

where 0 is the zero row.

Definition. The rows of a matrix are called linearly dependent if there exist numbers, not all equal to zero simultaneously, such that

(3.3.3)

If equality (3.3.3) is possible only when all the coefficients equal zero, the rows are called linearly independent. Relation (3.3.2) shows that if one of the rows is linearly expressed through the others, then the rows are linearly dependent.

The converse is also easy to see: if the rows are linearly dependent, then among them there is a row that is a linear combination of the other rows.

Indeed, let one of the coefficients in (3.3.3) be nonzero; dividing (3.3.3) by it, we express the corresponding row through the remaining ones.

Definition. Suppose that in the matrix A some minor of the r-th order is chosen, and that some minor of the (r+1)-th order of the same matrix entirely contains the chosen minor. Then we say that the latter minor borders the former (it is a bordering minor for it).

Let us now prove an important lemma.

Lemma on bordering minors. If a minor of order r of the matrix A is different from zero, and all the minors bordering it equal zero, then any row (column) of the matrix A is a linear combination of the rows (columns) in which that minor lies.

Proof. Without loss of generality we assume that the nonzero minor of the r-th order stands in the upper left corner of the matrix A =:

.

For the first r rows of the matrix A the assertion of the lemma is obvious: it suffices to include in a linear combination the row itself with coefficient equal to one and the remaining rows with coefficients equal to zero.

Let us now prove that the remaining rows of the matrix A are linearly expressed through the first r rows. To do this, we construct the minor of the (r+1)-th order obtained by adjoining to the chosen minor the k-th row (k > r) and the l-th column ():

.

The resulting minor equals zero for all k and l. Indeed, if l ≤ r, it equals zero because it contains two identical columns. If l > r, the resulting minor is a bordering minor for the chosen one and hence equals zero by the hypothesis of the lemma.

Let us expand this minor along the elements of the last, l-th column:

(3.3.4)

where the coefficients are the algebraic complements of the corresponding elements. The last algebraic complement coincides with the chosen nonzero minor of the matrix A. Dividing (3.3.4) by it, we express the adjoined element through the remaining ones:

(3.3.5)

where the coefficients do not depend on the column number l.

Since this holds for every l, writing (3.3.5) for all columns simultaneously, we obtain:

(3.3.6)

Expression (3.3.6) means that the k-th row of the matrix A is linearly expressed through the first r rows.

Since the values of minors do not change under transposition of the matrix (by the properties of determinants), everything proved remains valid for columns as well. The lemma is proven.

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is different from zero, and all the minors bordering it equal zero.

Corollary II. A determinant of the n-th order equals zero if and only if it contains linearly dependent rows (columns). The sufficiency of the linear dependence of the rows (columns) for the determinant to vanish was proved earlier as a property of determinants.

Let us prove the necessity. Suppose we are given a square matrix of the n-th order whose determinant equals zero. It follows that the rank of the matrix is less than n, i.e. there is at least one row that is a linear combination of the basis rows of this matrix.

Let us prove one more theorem on the rank of a matrix.

Theorem. The maximum number of linearly independent rows of a matrix equals the maximum number of its linearly independent columns and equals the rank of this matrix.

Proof. Let the rank of the matrix A equal r. Then its r basis rows are linearly independent, for otherwise the basis minor would equal zero. On the other hand, any r + 1 or more rows are linearly dependent. Indeed, assuming the contrary, we could, by Corollary II, find a nonzero minor of order greater than r. It remains to note that the maximal order of minors different from zero equals r. The same reasoning applies to columns.

Finally, we point out one more way of finding the rank of a matrix. The rank of a matrix can be computed by finding a minor of maximal order that is different from zero.

At first sight this requires the computation of a finite, but possibly very large, number of minors of the matrix.

The following theorem, however, allows a substantial simplification.

Theorem. If a minor of order r of the matrix A is different from zero, and all the minors bordering it equal zero, then the rank of the matrix equals r.

Proof. Let us show that any subsystem of S rows of the matrix with S > r is, under the hypotheses of the theorem, linearly dependent (it will then follow that r is the maximum number of linearly independent rows of the matrix, i.e. that all minors of order greater than r equal zero).

Suppose the contrary: let these S rows be linearly independent. By the lemma on bordering minors, each of them is linearly expressed through the rows in which the chosen minor lies, and those rows, containing a minor different from zero, are linearly independent:

(3.3.7)

Consider the matrix of coefficients of the linear expressions (3.3.7):

.

Denote the rows of this matrix accordingly. They are linearly dependent, since the rank of this matrix, i.e. the maximum number of its linearly independent rows, does not exceed r < S. Hence there exist numbers, not all equal to zero, such that

Passing to the equality of components, we obtain

(3.3.8)

Now consider the following linear combination:

or

Consider an arbitrary rectangular matrix of size m×n.

Matrix rank.

The notion of the rank of a matrix is connected with the notion of linear dependence (independence) of the rows (columns) of the matrix. We consider this notion for rows; for columns everything is similar.

Denote the rows of the matrix A:

e1 = (a11, a12, …, a1n); e2 = (a21, a22, …, a2n); …; em = (am1, am2, …, amn).

Two rows are equal, ek = es, if akj = asj, j = 1, 2, …, n.

Arithmetic operations on the rows of a matrix (addition, multiplication by a number) are introduced elementwise: λek = (λak1, λak2, …, λakn);

ek + es = ((ak1 + as1), (ak2 + as2), …, (akn + asn)).

A row e is called a linear combination of the rows e1, e2, …, ek if it equals the sum of the products of these rows by arbitrary real numbers:

e = λ1 e1 + λ2 e2 + … + λk ek.

The rows e1, e2, …, em are called linearly dependent if there exist real numbers λ1, λ2, …, λm, not all equal to zero, such that a linear combination of these rows equals the zero row: λ1 e1 + λ2 e2 + … + λm em = 0, where 0 = (0, 0, …, 0).  (1)

If the linear combination equals zero only when all the coefficients λi equal zero (λ1 = λ2 = … = λm = 0), the rows e1, e2, …, em are called linearly independent.

Theorem 1. The rows e1, e2, …, em are linearly dependent if and only if one of these rows is a linear combination of the other rows.

Proof. Necessity. Let the rows e1, e2, …, em be linearly dependent. Assume, for definiteness, that in (1) λm ≠ 0; then

em = (−λ1/λm) e1 + (−λ2/λm) e2 + … + (−λm−1/λm) em−1.

Thus the row em is a linear combination of the other rows, which was to be shown.

Sufficiency. Let one of the rows, for example em, be a linear combination of the other rows. Then there exist numbers for which the equality em = λ1 e1 + λ2 e2 + … + λm−1 em−1 holds, and it can be rewritten in the form

λ1 e1 + λ2 e2 + … + λm−1 em−1 + (−1)·em = 0,

where at least one of the coefficients, namely (−1), is different from zero. That is, the rows are linearly dependent, which was to be shown.

Definition. A minor of the k-th order of a matrix A of size m×n is a determinant of the k-th order whose elements lie at the intersections of any k rows and any k columns of the matrix A (k ≤ min(m, n)).

Example. Minors of the 1st order: , ;

minors of the 2nd order: , of the 3rd order

A matrix of the 3rd order has 9 minors of the 1st order, 9 minors of the 2nd order and 1 minor of the 3rd order (the determinant of this matrix).

Definition. The rank of a matrix A is the highest order of its minors different from zero. Notation: rg A or r(A).

Properties of the rank of a matrix.

1) The rank of a matrix A of size m×n does not exceed the smaller of its dimensions, i.e.

r(A) ≤ min(m, n).

2) r(A) = 0 if and only if all elements of the matrix equal 0, i.e. A = 0.

3) For a square matrix A of the n-th order, r(A) = n if and only if A is nonsingular.



(The rank of a diagonal matrix equals the number of its nonzero diagonal elements.)

4) If the rank of a matrix equals r, then the matrix has at least one minor of order r not equal to zero, while all its minors of higher orders equal zero.

For the ranks of matrices the following relations hold:

1) r(A + B) ≤ r(A) + r(B); 2) r(AB) ≤ min(r(A), r(B));

3) r(A + B) ≥ |r(A) − r(B)|; 4) r(A^T A) = r(A);

5) r(AB) = r(A) if B is a square nonsingular matrix;

6) r(AB) ≥ r(A) + r(B) − n, where n is the number of columns of the matrix A (equal to the number of rows of the matrix B).
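The rank relations listed above can be spot-checked on small matrices. The following sketch (our own code; one concrete example, not a proof) verifies relations 1)–6) for a particular pair A, B:

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [2, 4]]   # rank 1 (the second row is twice the first)
B = [[0, 1], [1, 0]]   # rank 2, nonsingular

rA, rB = rank(A), rank(B)
assert rank(madd(A, B)) <= rA + rB                 # relation 1)
assert rank(matmul(A, B)) <= min(rA, rB)           # relation 2)
assert rank(madd(A, B)) >= abs(rA - rB)            # relation 3)
assert rank(matmul(transpose(A), A)) == rA         # relation 4)
assert rank(matmul(A, B)) == rA                    # relation 5): B nonsingular
assert rank(matmul(A, B)) >= rA + rB - 2           # relation 6): n = 2 columns of A
print("all six rank relations hold for this example")
```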

Definition. A nonzero minor of order r(A) is called a basis minor. (A matrix A may have several basis minors.) The rows and columns at whose intersections a basis minor stands are called, respectively, basis rows and basis columns.

Theorem 2 (on the basis minor). The basis rows (columns) are linearly independent. Any row (any column) of the matrix A is a linear combination of the basis rows (columns).

Proof (for rows). If the basis rows were linearly dependent, then by Theorem 1 one of these rows would be a linear combination of the other basis rows; then, without changing the value of the basis minor, one could subtract from this row the indicated linear combination and obtain a zero row, which contradicts the fact that the basis minor is different from zero. Thus the basis rows are linearly independent.

Let us prove that any row of the matrix is a linear combination of the basis rows. Since under arbitrary interchanges of rows (columns) a determinant preserves the property of being equal (or not equal) to zero, we may assume without loss of generality that the basis minor stands in the upper left corner of the matrix

A =, i.e. on the first r rows and the first r columns. Let 1 ≤ j ≤ n, 1 ≤ i ≤ m. Let us show that the determinant of the (r+1)-th order, obtained by bordering the basis minor with elements of the i-th row and j-th column, equals zero.

If j ≤ r or i ≤ r, this determinant equals zero, because it contains two identical columns or two identical rows, respectively.

If j > r and i > r, this determinant is a minor of the (r+1)-th order of the matrix A. Since the rank of the matrix equals r, every minor of higher order equals 0.

Expanding along the elements of the last (adjoined) column, we obtain

a1j A1j + a2j A2j + … + arj Arj + aij Aij = 0, where the last algebraic complement Aij coincides with the basis minor Mr, and therefore Aij = Mr ≠ 0.

Dividing the last equality by Aij, we can express the element aij as a linear combination: aij = λ1 a1j + … + λr arj, where λk = −Akj/Mr does not depend on the number j.

Fix i (i > r). Then for any j (j = 1, 2, …, n) the elements of the i-th row ei are linearly expressed through the elements of the rows e1, e2, …, er, i.e. the i-th row is a linear combination of the basis rows, which was to be shown.

Theorem 3 (necessary and sufficient condition for a determinant to equal zero). A determinant D of the n-th order equals zero if and only if its rows (columns) are linearly dependent.

Proof. Necessity. If the determinant D of the n-th order equals zero, then the basis minor of its matrix is of order r < n.

Hence one of the rows is a linear combination of the others. Then, by Theorem 1, the rows of the determinant are linearly dependent.

Sufficiency. If the rows of D are linearly dependent, then by Theorem 1 some row Ai is a linear combination of the other rows. Subtracting from the row Ai the indicated linear combination, without changing the value of D, we obtain a zero row. Hence, by the properties of determinants, D = 0, which was to be shown.

Theorem 4. The rank of a matrix does not change under elementary transformations.

Proof. As was shown when studying the properties of determinants, under elementary transformations of square matrices their determinants either do not change, or are multiplied by a nonzero number, or change sign. As a result, the highest order of the nonzero minors of the original matrix is preserved, i.e. the rank of the matrix does not change, which was to be shown.

If r(A) = r(B), then the matrices A and B are called equivalent: A ~ B.

Theorem 5. By means of elementary transformations any matrix can be reduced to echelon form. A matrix is called an echelon matrix if it has the form:

A =, where aii ≠ 0, i = 1, 2, …, r; r ≤ k.

The condition r ≤ k can always be achieved by transposition.

Theorem 6. The rank of an echelon matrix equals the number of its nonzero rows.

That is, the rank of an echelon matrix equals r, since there is a nonzero minor of order r:


Using (3.3.7) and (3.3.8), we obtain

,

which contradicts the linear independence of the rows.

Hence our assumption is false, and under the hypotheses of the theorem the rows are linearly dependent. The theorem is proven.

Let us consider a rule for computing the rank of a matrix based on this theorem: the method of bordering minors.

When computing the rank of a matrix one passes from minors of lower orders to minors of higher orders. If a minor of the r-th order different from zero has already been found, then at the next step it suffices to compute only the minors of the (r+1)-th order that border it. If they all equal zero, the rank of the matrix equals r. The method is convenient also because along the way we not only compute the rank of the matrix but also determine which columns (rows) make up the basis minor of the matrix.

Example. Compute the rank of the matrix by the method of bordering minors.

Solution. The minor of the second order standing in the upper left corner of the matrix A is different from zero:

.

However, all the minors of the third order bordering it equal zero:

; ;
; ;
; .

Hence the rank of the matrix A equals two: r(A) = 2.

The first and second rows and the first and second columns of this matrix are basis ones. The remaining rows and columns are linear combinations of them. Indeed, for the rows the following equalities hold:

Finally, the justice of such authorities is significant:

1) the rank of a product of matrices does not exceed the rank of each of its factors;

2) the rank of the product of a matrix A by a nonsingular square matrix Q, on the right or on the left, equals the rank of the matrix A.
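A quick numerical check of these two properties, assuming numpy is available; the particular matrices are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 2)).astype(float)
rank = np.linalg.matrix_rank

# Property 1: the rank of a product never exceeds the rank of either factor.
assert rank(A @ B) <= min(rank(A), rank(B))

# Property 2: multiplying by a nonsingular square matrix Q preserves rank.
Q = np.array([[2., 1., 0.],
              [0., 1., 0.],
              [1., 0., 1.]])  # det(Q) = 2, so Q is nonsingular
assert rank(Q @ A) == rank(A)
```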

Polynomial matrices

Definition. A polynomial matrix, or λ-matrix, is a rectangular matrix whose elements are polynomials in one variable with numerical coefficients.

Elementary transformations can be performed on λ-matrices. These include:

interchanging two rows (columns);

multiplying a row (column) by a number different from zero;

adding to one row (column) another row (column) multiplied by an arbitrary polynomial.

Two λ-matrices of the same dimensions are called equivalent if it is possible to pass from one matrix to the other by a finite number of elementary transformations.

Example. Prove the equivalence of the matrices

, .

1. Interchange the first and second columns of the matrix:

.

2. From the second row, subtract the first multiplied by ( ):

.

3. Multiply the second row by (–1), obtaining

.

4. From the second column, subtract the first multiplied by :

.
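The elementary transformations used in this example can be sketched in code. Below, each polynomial entry is stored as a list of coefficients [c0, c1, …] (meaning c0 + c1·λ + …); all helper names and the sample matrix are illustrative assumptions, not the matrices of the example:

```python
def padd(p, q):
    """Add two polynomials given as coefficient lists."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def pmul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def add_multiple_of_row(m, dst, src, poly):
    """Elementary transformation of the third kind: row dst += poly * row src."""
    m[dst] = [padd(m[dst][k], pmul(poly, m[src][k])) for k in range(len(m[dst]))]

# Sample λ-matrix [[λ, 1], [λ², λ]]; its second row is λ times the first.
M = [[[0, 1], [1]],
     [[0, 0, 1], [0, 1]]]
add_multiple_of_row(M, 1, 0, [0, -1])  # second row -= λ · first row
# The second row is now identically zero.
```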

Thus, the set of all λ-matrices of given dimensions splits into disjoint classes of equivalent matrices: matrices that are equivalent to one another form one class, and those that are not equivalent belong to different classes.

Each class of equivalent λ-matrices is characterized by a canonical, or normal, λ-matrix of the given dimensions.

Definition. A canonical, or normal, λ-matrix of given dimensions is a λ-matrix on whose main diagonal stand p polynomials, where p is the smaller of the numbers m and n ( ); the leading coefficients of the polynomials that are not equal to zero equal 1, and each following polynomial is divisible by the preceding one. All elements off the main diagonal equal 0.

Note that if among the polynomials there are polynomials of degree zero, they all stand at the beginning of the main diagonal. If there are zeros, they all stand at the end of the main diagonal.

The matrix of the previous example is canonical. The matrix

is also canonical.

Each class of λ-matrices contains exactly one canonical λ-matrix; that is, each λ-matrix is equivalent to a unique canonical λ-matrix, which is called the canonical form, or normal form, of the given matrix.

The polynomials standing on the main diagonal of the canonical form of a given matrix are called the invariant factors of this matrix.

One method of computing the invariant factors consists in reducing the given matrix to canonical form.

Thus, for the matrix of the previous example the invariant factors are

From what has been said above, possession of one and the same set of invariant factors is a necessary and sufficient condition for the equivalence of λ-matrices.

The given matrix can also be reduced to canonical form by determining the invariant factors from the formulas

E1(λ) = D1(λ); Ek(λ) = Dk(λ) / Dk−1(λ), k = 2, 3, …, r,

where r is the rank of the matrix and Dk(λ) is the greatest common divisor of all its minors of order k, taken with leading coefficient equal to 1.
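The gcd-of-minors rule for the invariant factors can be checked with a short sketch. Polynomial gcds would require a computer-algebra library, so, purely as an illustrative assumption, the check below uses an integer matrix instead: the same rule (each invariant factor equals the gcd of the k-th order minors divided by the gcd of the (k−1)-th order minors) yields the invariant factors of an integer matrix:

```python
from math import gcd
from functools import reduce
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def D(a, k):
    """Greatest common divisor of all k-th order minors of a."""
    minors = [abs(det([[a[i][j] for j in cols] for i in rows]))
              for rows in combinations(range(len(a)), k)
              for cols in combinations(range(len(a[0])), k)]
    return reduce(gcd, minors)

A = [[2, 4], [6, 8]]
D1, D2 = D(A, 1), D(A, 2)      # gcds of 1st- and 2nd-order minors
d1, d2 = D1, D2 // D1          # invariant factors of A over the integers
```

Here D1 = 2 and D2 = 8, so the invariant factors are 2 and 4, and each one divides the next, as the definition of the canonical form requires.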

Example. Let there be given the λ-matrix

.

Solution. The greatest common divisor of the first-order minors is found at once: .

Let us determine the second-order minors:

, etc.

These data already suffice to find it: ; hence, .

Next we find

,

Hence, .

Thus, the canonical form of the given matrix is the matrix:

.

A matrix polynomial is an expression of the form

where λ is the variable and the coefficients are square matrices of order n with numerical elements.

The number s is called the degree of the matrix polynomial, and n is called the order of the matrix polynomial.

Any square λ-matrix can be represented as a matrix polynomial. The converse is obviously valid as well; that is, any matrix polynomial can be represented as a square λ-matrix.

The validity of these assertions follows clearly from the properties of operations over matrices. Let us verify this with the following examples:

Example. Represent the polynomial matrix

as a matrix polynomial. This can be done as follows:

.

Example. The matrix polynomial

can be represented as an ordinary polynomial matrix (λ-matrix)

.

This interchangeability of matrix polynomials and polynomial matrices plays an essential role in the mathematical apparatus of the methods of factor and component analysis.
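The passage from a polynomial matrix to a matrix polynomial can be sketched as a small conversion routine. The representation (coefficient lists for entries, numpy arrays for coefficient matrices) and all names are illustrative assumptions:

```python
import numpy as np

def lam_to_matpoly(M):
    """Convert a λ-matrix (entries given as coefficient lists [c0, c1, ...])
    into a matrix polynomial [A0, A1, ...] meaning A0 + A1·λ + ... ."""
    deg = max(len(e) for row in M for e in row) - 1
    rows, cols = len(M), len(M[0])
    coeffs = [np.zeros((rows, cols)) for _ in range(deg + 1)]
    for i, row in enumerate(M):
        for j, e in enumerate(row):
            for k, c in enumerate(e):
                coeffs[k][i, j] = c  # coefficient of λ**k in entry (i, j)
    return coeffs

# Sample λ-matrix [[λ² + 1, λ], [0, 2]]
M = [[[1, 0, 1], [0, 1]],
     [[0], [2]]]
A = lam_to_matpoly(M)
# A[0], A[1], A[2] are the constant, λ, and λ² coefficient matrices.
```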

Matrix polynomials of the same order can be added, subtracted, and multiplied in the same way as ordinary polynomials with numerical coefficients. One should remember, however, that multiplication of matrix polynomials is, in general, not commutative, since matrix multiplication is not commutative.

Two matrix polynomials are called equal if their coefficients are equal, i.e. if the coefficient matrices at the same powers of the variable coincide.

The sum (difference) of two matrix polynomials is the matrix polynomial whose coefficient at each power of the variable equals the sum (difference) of the coefficients at the same power in the two given polynomials.

To multiply one matrix polynomial by another, each term of the first must be multiplied by each term of the second, the products added, and like terms collected.

The degree of the product of matrix polynomials is less than or equal to the sum of the degrees of the factors.
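The multiplication rule and its non-commutativity can be illustrated with a sketch; the coefficient-list representation and all names are assumptions, not from the text:

```python
import numpy as np

def matpoly_mul(F, G):
    """Multiply matrix polynomials given as coefficient lists [A0, A1, ...]:
    term-by-term products are matrix products, then like powers are collected."""
    n = F[0].shape[0]
    out = [np.zeros((n, n)) for _ in range(len(F) + len(G) - 1)]
    for i, Ai in enumerate(F):
        for j, Bj in enumerate(G):
            out[i + j] += Ai @ Bj
    return out

N = np.array([[0., 1.], [0., 0.]])
F = [N, np.eye(2)]     # F(λ) = N + I·λ
G = [N.T, np.eye(2)]   # G(λ) = Nᵀ + I·λ
FG = matpoly_mul(F, G)
GF = matpoly_mul(G, F)
# The constant terms N·Nᵀ and Nᵀ·N differ, so F·G ≠ G·F,
# and deg(F·G) = 2 = deg F + deg G, consistent with the degree bound.
```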

Operations on matrix polynomials can be carried out via operations on the corresponding λ-matrices.

To add (subtract) matrix polynomials, it suffices to add (subtract) the corresponding λ-matrices. The same applies to multiplication: the λ-matrix of the product of matrix polynomials equals the product of the λ-matrices of the factors.

On the other hand, the matrix polynomial can be written in the form

where U0 is a nonsingular matrix.

Under this condition, the division uniquely determines the right quotient and the right remainder:

where the degree of R1 is less than the degree of the divisor, or (the division is without remainder); a left quotient and a left remainder are defined analogously and exist if and only if, respectively,
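The right division described here can be sketched as follows, under the stated condition that the leading coefficient of the divisor is nonsingular; all names and the sample polynomials are illustrative assumptions:

```python
import numpy as np

def right_divmod(F, G):
    """Right division F(λ) = Q(λ)·G(λ) + R(λ) with deg R < deg G,
    assuming the leading coefficient of G is nonsingular."""
    F = [c.astype(float) for c in F]          # work on copies
    n = F[0].shape[0]
    G_lead_inv = np.linalg.inv(G[-1])
    if len(F) < len(G):
        return [np.zeros((n, n))], F
    Q = [np.zeros((n, n)) for _ in range(len(F) - len(G) + 1)]
    while len(F) >= len(G):
        d = len(F) - len(G)
        T = F[-1] @ G_lead_inv                # cancels the current leading coefficient
        Q[d] = T
        for j in range(len(G)):
            F[d + j] -= T @ G[j]              # subtract (T·λ**d)·G(λ)
        F.pop()                               # leading coefficient is now zero
    return Q, F

# Check: build F = Q·G + R explicitly and recover Q and R by division.
I = np.eye(2)
N = np.array([[0., 1.], [0., 0.]])
Q0 = np.array([[1., 2.], [3., 4.]])
R0 = np.array([[5., 6.], [7., 8.]])
G = [N, I]                                    # G(λ) = N + I·λ, leading coeff I
F = [Q0 @ N + R0, Q0 + N, I]                  # (Q0 + I·λ)·G(λ) + R0
Qr, R = right_divmod(F, G)
```

Because the coefficients do not commute, the analogous left division (multiplying by the inverse of the leading coefficient on the left and subtracting G(λ)·(T·λ^d)) generally produces a different quotient and remainder.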