Matrix rank. Linear dependence and independence of the rows (columns) of a matrix. The method of bordering minors. Finding the rank of a matrix by the method of bordering minors.

Consider an arbitrary rectangular matrix A of size m×n.

Matrix rank.

The concept of matrix rank is closely related to the concept of linear dependence (independence) of the rows (columns) of the matrix. We state the concepts for rows; for columns everything is analogous.

Denote the rows of the matrix A:

e1 = (a11, a12, …, a1n); e2 = (a21, a22, …, a2n); …; em = (am1, am2, …, amn).

Two rows are equal, ek = es, if akj = asj for j = 1, 2, …, n.

Arithmetic operations on the rows of a matrix (addition, multiplication by a number) are defined element by element:

λek = (λak1, λak2, …, λakn);

ek + es = ((ak1 + as1), (ak2 + as2), …, (akn + asn)).

A row e is called a linear combination of the rows e1, e2, …, ek if it equals the sum of the products of these rows by some real numbers:

e = λ1e1 + λ2e2 + … + λkek.

The rows e1, e2, …, em are called linearly dependent if there exist real numbers λ1, λ2, …, λm, not all equal to zero, such that the linear combination of these rows equals the zero row:

λ1e1 + λ2e2 + … + λmem = 0, where 0 = (0, 0, …, 0).   (1)

If a linear combination of the rows equals the zero row only when all coefficients λi are zero (λ1 = λ2 = … = λm = 0), the rows e1, e2, …, em are called linearly independent.
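As a quick numerical illustration (a NumPy sketch added here, not part of the original notes; the sample rows are made up): rows are linearly dependent exactly when the rank of the matrix they form is less than the number of rows.

```python
import numpy as np

def rows_linearly_dependent(rows):
    """Rows are linearly dependent iff rank < number of rows."""
    a = np.array(rows, dtype=float)
    return np.linalg.matrix_rank(a) < a.shape[0]

# Here the third row equals row1 + 2*row2, so the system is dependent.
print(rows_linearly_dependent([[1, 0, 1], [0, 1, 1], [1, 2, 3]]))  # True
# These two rows are independent.
print(rows_linearly_dependent([[1, 0, 1], [0, 1, 1]]))             # False
```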

Theorem 1. For the rows e1, e2, …, em to be linearly dependent it is necessary and sufficient that one of these rows be a linear combination of the others.

Proof. Necessity. Let the rows e1, e2, …, em be linearly dependent. Without loss of generality assume that in (1) λm ≠ 0. Then

em = (−λ1/λm)e1 + (−λ2/λm)e2 + … + (−λm−1/λm)em−1.

Thus the row em is a linear combination of the other rows. Q.E.D.

Sufficiency. Let one of the rows, say em, be a linear combination of the others: em = λ1e1 + … + λm−1em−1. This equality can be rewritten as

λ1e1 + … + λm−1em−1 + (−1)em = 0,

in which at least one of the coefficients, namely (−1), is nonzero. Hence the rows are linearly dependent. Q.E.D.

Definition. A minor of order k of a matrix A of size m×n is the determinant of order k composed of the elements standing at the intersections of any k rows and any k columns of A (k ≤ min(m, n)).

Example. For a square matrix of order 3 the minors of order 1 are its individual entries, the minors of order 2 are the 2×2 determinants obtained by choosing two rows and two columns, and the single minor of order 3 is the determinant of the matrix itself. Such a matrix has 9 minors of order 1, 9 minors of order 2 and 1 minor of order 3.
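This count is easy to check directly (a sketch using NumPy and itertools, not from the notes; the sample matrix is arbitrary): a k-th order minor corresponds to a choice of k rows and k columns.

```python
from itertools import combinations
import numpy as np

def minors(a, k):
    """All k-th order minors: determinants of k x k submatrices."""
    m, n = a.shape
    return [np.linalg.det(a[np.ix_(r, c)])
            for r in combinations(range(m), k)
            for c in combinations(range(n), k)]

a = np.array([[1.0, 2, 3], [4, 5, 6], [7, 8, 10]])
print(len(minors(a, 1)), len(minors(a, 2)), len(minors(a, 3)))  # 9 9 1
```

The counts are C(3,k)² for each order k, matching the text.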

Definition. The rank of a matrix A is the highest order of its nonzero minors. Notation: rg A or r(A).

Properties of the rank of a matrix.

1) The rank of an m×n matrix A does not exceed the smaller of its dimensions, i.e.

r(A) ≤ min(m, n).

2) r(A) = 0 if and only if all elements of the matrix equal 0, i.e. A = 0.

3) For a square matrix A of order n, r(A) = n if and only if A is nonsingular.



(The rank of a diagonal matrix equals the number of its nonzero diagonal elements.)

4) If the rank of a matrix equals r, then the matrix has at least one nonzero minor of order r, while all minors of higher orders equal zero.

For matrix ranks the following relations hold:

1) r(A + B) ≤ r(A) + r(B);

2) r(AB) ≤ min(r(A), r(B));

3) r(A + B) ≥ |r(A) − r(B)|;

4) r(AᵀA) = r(A);

5) r(AB) = r(A) if B is a square nonsingular matrix;

6) r(AB) ≥ r(A) + r(B) − n, where n is the number of columns of the matrix A (equal to the number of rows of the matrix B).
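These inequalities can be spot-checked numerically (a NumPy sketch with randomly generated matrices; this is an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 4)).astype(float)
r = np.linalg.matrix_rank
n = A.shape[1]

assert r(A + B) <= r(A) + r(B)           # relation 1)
assert r(A @ B) <= min(r(A), r(B))       # relation 2)
assert r(A + B) >= abs(r(A) - r(B))      # relation 3)
assert r(A.T @ A) == r(A)                # relation 4)
assert r(A @ B) >= r(A) + r(B) - n       # relation 6), Sylvester's inequality
print("all rank relations hold for this sample")
```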

Definition. A nonzero minor of order r(A) is called a basis minor. (A matrix may have several basis minors.) The rows and columns at whose intersections a basis minor stands are called, respectively, basis rows and basis columns.

Theorem 2 (on the basis minor). The basis rows (columns) are linearly independent. Any row (column) of the matrix A is a linear combination of the basis rows (columns).

Proof (for rows). If the basis rows were linearly dependent, then by Theorem 1 one of them would be a linear combination of the other basis rows; subtracting that linear combination from the row would not change the value of the basis minor but would produce a zero row, and then the basis minor would equal zero, contradicting the fact that a basis minor is nonzero. Hence the basis rows are linearly independent.

Let us prove that every row of the matrix is a linear combination of the basis rows. Since arbitrary interchanges of rows (columns) preserve the property of a determinant being zero or nonzero, we may assume without loss of generality that the basis minor Mr stands in the upper left corner of the matrix A, i.e. on the first r rows and the first r columns. Let 1 ≤ j ≤ n, 1 ≤ i ≤ m. Consider the determinant of order r + 1 obtained by bordering the basis minor with the elements of the i-th row and the j-th column.

If j ≤ r or i ≤ r, this determinant equals zero, because it contains two identical columns or two identical rows.

If j > r and i > r, this determinant is a minor of order r + 1 of the matrix A; since the rank of the matrix equals r, every minor of higher order equals 0.

Expanding it along the elements of the last (bordering) column, we obtain

a1j·A1j + a2j·A2j + … + arj·Arj + aij·Aij = 0,

where the last algebraic cofactor Aij coincides with the basis minor Mr, so that Aij = Mr ≠ 0.

Dividing the last equality by Aij, we express the element aij as a linear combination:

aij = λ1·a1j + λ2·a2j + … + λr·arj, where λk = −Akj/Aij,

and the coefficients λk do not depend on j.

The value i (i > r) is fixed, and we obtain that for every j (j = 1, 2, …, n) the elements of the row ei are expressed linearly through the elements of the rows e1, e2, …, er, i.e. the i-th row is a linear combination of the basis rows: ei = λ1e1 + λ2e2 + … + λrer. Q.E.D.

Theorem 3 (necessary and sufficient condition for a determinant to vanish). For a determinant D of order n to equal zero it is necessary and sufficient that its rows (columns) be linearly dependent.

Proof. Necessity. If the determinant D of order n equals zero, the basis minor of its matrix has order r < n. Hence at least one row is a linear combination of the others, and by Theorem 1 the rows of the determinant are linearly dependent.

Sufficiency. If the rows of D are linearly dependent, then by Theorem 1 some row Ai is a linear combination of the other rows. Subtracting this linear combination from the row Ai does not change the value of D but produces a zero row; hence, by the properties of determinants, D = 0. Q.E.D.

Theorem 4. Under elementary transformations the rank of a matrix does not change.

Proof. As was shown when studying the properties of determinants, under elementary transformations of square matrices their determinants either remain unchanged, or are multiplied by a nonzero number, or change sign. Consequently the highest order of the nonzero minors of the original matrix is preserved, i.e. the rank of the matrix does not change. Q.E.D.

If r(A) = r(B), the matrices A and B are called equivalent: A ~ B.

Theorem 5. Using elementary transformations, any matrix can be reduced to row echelon form. A matrix is said to be in echelon form if it has the form

A = …, where aii ≠ 0, i = 1, 2, …, r; r ≤ k.

The condition r ≤ k can always be achieved by transposition.

Theorem 6. The rank of an echelon matrix equals the number of its nonzero rows.

Indeed, the rank of an echelon matrix equals r, since it contains a nonzero minor of order r (the one standing on the first r rows and the first r columns).
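The reduction of Theorem 5 is easy to carry out numerically. A sketch in NumPy (a helper of my own, not from the notes) performs Gaussian elimination with partial pivoting and returns the number of nonzero rows of the resulting echelon matrix:

```python
import numpy as np

def rank_by_echelon(a, tol=1e-10):
    """Reduce to row echelon form; the rank is the number of pivot rows."""
    a = np.array(a, dtype=float)
    m, n = a.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        # choose the largest pivot in this column at or below `row`
        pivot = row + int(np.argmax(np.abs(a[row:, col])))
        if abs(a[pivot, col]) < tol:
            continue  # no pivot in this column
        a[[row, pivot]] = a[[pivot, row]]
        # eliminate entries below the pivot
        a[row + 1:] -= np.outer(a[row + 1:, col] / a[row, col], a[row])
        row += 1
    return row

a = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]
print(rank_by_echelon(a))  # 2
```

Here the second row is twice the first, so only two nonzero rows survive the elimination.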

Note that the rows and columns of a matrix can be regarded as arithmetic vectors of dimensions n and m respectively. Thus an m×n matrix can be interpreted as a collection of m n-dimensional or n m-dimensional arithmetic vectors. By analogy with geometric vectors, we introduce the concepts of linear dependence and linear independence of the rows and columns of a matrix.

4.8.1. Definition. A row e is called a linear combination of the rows e1, e2, …, ek with coefficients λ1, λ2, …, λk if for all elements of this row the equality

e = λ1e1 + λ2e2 + … + λkek

holds, i.e. each element of e equals the corresponding combination of the elements of e1, …, ek.

4.8.2. Definition.

The rows e1, e2, …, em are called linearly dependent if there exists a nontrivial linear combination of them equal to the zero row, i.e. there exist numbers λ1, λ2, …, λm, not all equal to zero, such that

λ1e1 + λ2e2 + … + λmem = 0.

4.8.3. Definition.

The rows e1, e2, …, em are called linearly independent if only their trivial linear combination equals the zero row, i.e.

λ1e1 + λ2e2 + … + λmem = 0 only when λ1 = λ2 = … = λm = 0.

4.8.4. Theorem. (Criterion of linear dependence of the rows of a matrix.)

For the rows e1, e2, …, em to be linearly dependent it is necessary and sufficient that one of them be a linear combination of the others.

Proof.

Necessity. Let the rows e1, e2, …, em be linearly dependent; then there exists a nontrivial linear combination of them equal to the zero row:

λ1e1 + λ2e2 + … + λmem = 0.

Without loss of generality assume that the first coefficient of this combination is nonzero (otherwise the rows can be renumbered). Dividing the relation by λ1, we obtain

e1 = (−λ2/λ1)e2 + … + (−λm/λ1)em,

i.e. the first row is a linear combination of the others.

Sufficiency. Let one of the rows, say e1, be a linear combination of the others: e1 = λ2e2 + … + λmem. Then there exists a nontrivial linear combination of the rows e1, e2, …, em equal to the zero row:

(−1)e1 + λ2e2 + … + λmem = 0,

hence the rows e1, e2, …, em are linearly dependent, which is what had to be proved.

Remark.

Similar definitions and statements can be formulated for the columns of a matrix.

§4.9. Matrix rank.

4.9.1. Definition. A minor of order k of a matrix of size m×n is the determinant of order k composed of the elements standing at the intersections of any k of its rows and any k of its columns.

4.9.2. Definition. A nonzero minor of order r of a matrix of size m×n is called a basis minor if all minors of the matrix of order r + 1 and higher equal zero.

Remark. A matrix may contain several basis minors. Obviously, they are all of the same order. It is also possible that a matrix of size m×n has a nonzero minor of order r while minors of order r + 1 do not exist at all, namely when r = min(m, n).

4.9.3. Definition. The rows (columns) that form a basis minor are called basis rows (columns).

4.9.4. Definition. The rank of a matrix is the order of its basis minor. The rank of a matrix A is denoted rg A or r(A).

Remark.

Note that, owing to the equal standing of the rows and columns of a determinant, the rank of a matrix does not change under transposition.

4.9.5. Theorem. (Invariance of the rank of a matrix under elementary transformations.)

The rank of a matrix does not change under elementary transformations.

Without proof.

4.9.6. Theorem. (On the basis minor.)

The basis rows (columns) are linearly independent. Any row (column) of a matrix can be represented as a linear combination of its basis rows (columns).

Proof.

Let us carry out the proof for rows; for columns it is analogous.

Let the rank of a matrix A of size m×n equal r, and let Mr be a basis minor. Without loss of generality assume that the basis minor stands in the upper left corner (otherwise the matrix can be brought to this form by elementary transformations):

A = ….

Let us first prove the linear independence of the basis rows. The proof is by contradiction. Suppose the basis rows are linearly dependent. Then by Theorem 4.8.4 one of them can be represented as a linear combination of the other basis rows. Subtracting this linear combination from that row, we obtain a zero row, which means that the minor Mr equals zero, contradicting the definition of a basis minor. This contradiction establishes the linear independence of the basis rows.

Let us now prove that every row of the matrix can be represented as a linear combination of the basis rows. If the number k of the row under consideration lies between 1 and r, the row can obviously be represented as a linear combination with coefficient 1 for the row itself and zero coefficients for the other rows. Now let the row number k lie between r + 1 and m; we show that such a row, too, can be represented as a linear combination of the basis rows. Consider the minor M of the matrix obtained from the basis minor Mr by adjoining the k-th row and the j-th column:

M = ….

We show that this minor M equals zero for every row number k from r + 1 to m and for every column number j from 1 to n.

Indeed, if the column number j lies between 1 and r, then M is a determinant with two identical columns, which obviously equals zero. If the column number j lies between r + 1 and n (and the row number k between r + 1 and m), then M is a minor of the original matrix of order higher than that of the basis minor, and hence it equals zero by the definition of a basis minor. Thus the minor M equals zero for every row number k from r + 1 to m and every column number j from 1 to n. Expanding it along the last column, we obtain

a1j·A1j + a2j·A2j + … + arj·Arj + akj·Akj = 0.

Here A1j, …, Akj are the corresponding algebraic cofactors. Note that Akj ≠ 0, since Akj is the basis minor Mr. Hence the elements of the row k can be represented as a linear combination of the corresponding elements of the basis rows with coefficients that do not depend on the column number j:

akj = −(A1j/Akj)·a1j − (A2j/Akj)·a2j − … − (Arj/Akj)·arj, j = 1, 2, …, n.

Thus it has been proved that an arbitrary row of the matrix can be represented as a linear combination of its basis rows. The theorem is proved.
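The constructive part of the proof can be mimicked numerically (a NumPy sketch with a made-up matrix, not from the notes): when the basis minor sits in the top-left 2×2 corner, the coefficients expressing a non-basis row are found by solving a small linear system over the basis columns.

```python
import numpy as np

# The first two rows of `a` are basis rows (the top-left 2x2 minor is 1);
# the third row must then be a linear combination of them.
a = np.array([[1.0, 0, 2], [0, 1, 3], [2, 3, 13]])
basis = a[:2]
target = a[2]

# Solve coeffs @ basis = target using only the basis columns (the minor).
coeffs = np.linalg.solve(basis[:, :2].T, target[:2])
print(coeffs)  # [2. 3.]

# The same coefficients reproduce the whole row, as the theorem asserts.
assert np.allclose(coeffs @ basis, target)
```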

Lecture 13

4.9.7. Theorem. (On the rank of a nonsingular square matrix.)

For a square matrix to be nonsingular it is necessary and sufficient that the rank of the matrix equal its size.

Proof.

Necessity. Let a square matrix A of size n be nonsingular; then det A ≠ 0, so the determinant of the matrix is its basis minor, and r(A) = n.

Sufficiency. Let r(A) = n. Then the order of the basis minor equals the size of the matrix, so the basis minor is the determinant of the matrix A, and det A ≠ 0 by the definition of a basis minor.

Corollary.

For a square matrix to be nonsingular it is necessary and sufficient that its rows be linearly independent.

Proof.

Necessity. Since the square matrix is nonsingular, its rank equals its size, r(A) = n, and the determinant of the matrix is its basis minor. Hence, by Theorem 4.9.6 on the basis minor, the rows of the matrix are linearly independent.

Sufficiency. Since all n rows of the matrix are linearly independent, its rank is not less than its size, so r(A) = n. By the preceding Theorem 4.9.7 the matrix is nonsingular.

4.9.8. The method of bordering minors for finding the rank of a matrix.

Note that this method is already implicitly contained in the proof of the theorem on the basis minor.

4.9.8.1. Definition. A minor M′ is called a bordering minor of the minor M if it is obtained from M by adjoining one new row and one new column of the original matrix.

4.9.8.2. Procedure for finding the rank of a matrix by the method of bordering minors.

    Find some current minor of the matrix that differs from zero.

    Compute all the minors bordering it.

    If all of them equal zero, the current minor is a basis minor, and the rank of the matrix equals the order of the current minor.

    If among the bordering minors at least one is nonzero, it is taken as the current minor and the procedure continues.

Let us find the rank of a matrix by the method of bordering minors:

A = ….

It is easy to point out a current minor of order two that differs from zero, for example

M2 = ….

Let us compute the minors of order three bordering it:

….

Since all the bordering minors of order three equal zero, the minor M2 is a basis minor, and therefore r(A) = 2.

Remark. The example shows that the method is rather laborious. For this reason the method of elementary transformations, discussed below, is used much more often.
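For small matrices, however, the bordering procedure is easy to automate. A sketch (my own helper names, not from the notes) that grows a nonzero minor one bordering step at a time, exactly as in 4.9.8.2:

```python
import numpy as np

def rank_by_bordering_minors(a, tol=1e-10):
    """Grow a nonzero minor by bordering it with one extra row and column."""
    a = np.asarray(a, dtype=float)
    m, n = a.shape
    # start from any nonzero entry: a nonzero minor of order 1
    start = [(i, j) for i in range(m) for j in range(n) if abs(a[i, j]) > tol]
    if not start:
        return 0                      # zero matrix: rank 0
    rows, cols = [start[0][0]], [start[0][1]]
    grew = True
    while grew:
        grew = False
        for i in range(m):
            for j in range(n):
                if i in rows or j in cols:
                    continue
                sub = a[np.ix_(rows + [i], cols + [j])]
                if abs(np.linalg.det(sub)) > tol:   # nonzero bordering minor
                    rows.append(i)
                    cols.append(j)
                    grew = True
                    break
            if grew:
                break
    # every bordering minor vanished: the current minor is a basis minor
    return len(rows)

print(rank_by_bordering_minors([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))  # 2
```

By the theorem on bordering minors it is enough to border the current nonzero minor; minors that do not contain it never need to be computed.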

4.9.9. Finding the rank of a matrix by means of elementary transformations.

By Theorem 4.9.5 the rank of a matrix does not change under elementary transformations (that is, the ranks of equivalent matrices are equal). Therefore the rank of a matrix equals the rank of the echelon matrix obtained from it by elementary transformations. The rank of an echelon matrix, in turn, is obviously equal to the number of its nonzero rows.

Let us determine the rank of the matrix

A = …

by the method of elementary transformations.

Reducing the matrix to echelon form, we obtain

….

The number of nonzero rows of the resulting echelon matrix is three, hence r(A) = 3.

4.9.10. Rank of a system of vectors of a linear space.

Consider a system of vectors x1, x2, …, xk of some linear space L. If the system is linearly dependent, one can single out a linearly independent subsystem in it.

4.9.10.1. Definition. The rank of a system of vectors x1, x2, …, xk of a linear space L is the maximal number of linearly independent vectors of the system. The rank of a system of vectors is denoted r(x1, …, xk).

Remark. If a system of vectors is linearly independent, its rank equals the number of vectors in the system.

Let us formulate a theorem showing the connection between the rank of a system of vectors of a linear space and the rank of a matrix.

4.9.10.2. Theorem. (On the rank of a system of vectors of a linear space.)

The rank of a system of vectors of a linear space equals the rank of the matrix whose columns or rows are the coordinates of the vectors in any basis of the linear space.

Without proof.

Corollary.

For a system of vectors of a linear space to be linearly independent it is necessary and sufficient that the rank of the matrix whose columns or rows are the coordinates of the vectors in some basis equal the number of vectors of the system.

The proof is obvious.

4.9.10.3. Theorem. (On the dimension of the linear span.)

The dimension of the linear span of vectors x1, x2, …, xk of a linear space equals the rank of this system of vectors:

dim L(x1, …, xk) = r(x1, …, xk).

Without proof.
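Theorems 4.9.10.2 and 4.9.10.3 can be illustrated numerically (a NumPy sketch with made-up vectors, not from the notes): the rank of the system equals the rank of the coordinate matrix, and it also equals the dimension of the span.

```python
import numpy as np

# Coordinates of three vectors in some basis, written as matrix rows.
v1 = [1, 0, 1, 0]
v2 = [0, 1, 1, 0]
v3 = [1, 1, 2, 0]   # v3 = v1 + v2, so the system has rank 2
system = np.array([v1, v2, v3], dtype=float)

r = int(np.linalg.matrix_rank(system))
print(r)  # 2  -- rank of the system = dimension of its linear span
```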

A row e of a matrix is said to be linearly expressed through the rows e1, e2, …, es if

e = λ1e1 + λ2e2 + … + λses,   (3.3.1)

where λ1, …, λs are some numbers (any of them, or all, may equal zero). This means that the corresponding equalities hold between the elements of the rows, element by element.

From (3.3.1) it follows that

(−1)e + λ1e1 + … + λses = 0,   (3.3.2)

where 0 is the zero row.

Definition. The rows e1, e2, …, es of a matrix are called linearly dependent if there exist numbers λ1, …, λs, not all simultaneously equal to zero, such that

λ1e1 + λ2e2 + … + λses = 0.   (3.3.3)

If equality (3.3.3) holds only when λ1 = … = λs = 0, the rows are called linearly independent. Relation (3.3.2) shows that if one of the rows is linearly expressed through the others, then the rows are linearly dependent.

The converse is also easy to see: if the rows are linearly dependent, then one of them is a linear combination of the other rows.

Indeed, let, for example, λs ≠ 0 in (3.3.3). Then es = (−λ1/λs)e1 + … + (−λs−1/λs)es−1.

Definition. Let M be some minor of order r of the matrix A, and let M′ be a minor of order r + 1 of this matrix that contains M inside it. In this case we say that the minor M′ borders the minor M (M′ is a bordering minor for M).

Let us now prove an important lemma.

Lemma on bordering minors. If the minor M of order r of the matrix A = (aij) is nonzero, while all the minors bordering it equal zero, then any row (column) of the matrix A is a linear combination of its rows (columns) that form M.

Proof. Without loss of generality we may assume that the nonzero minor M of order r stands in the upper left corner of the matrix A = (aij):

M = ….

For the first r rows of the matrix A the assertion of the lemma is obvious: it suffices to include in the linear combination the row itself with coefficient equal to one and the others with coefficients equal to zero.

Let us now prove that the remaining rows of the matrix A are linearly expressed through the first r rows. To do this we construct a minor M′ of order r + 1 by adjoining to the minor M the k-th row (k > r) and the l-th column (l = 1, 2, …, n):

M′ = ….

The resulting minor equals zero for all k and l. Indeed, if l ≤ r, it equals zero as a determinant with two identical columns; if l > r, it is a bordering minor for M and hence equals zero by the hypothesis of the lemma.

Let us expand the minor M′ along the elements of its last, l-th column:

a1l·A1 + a2l·A2 + … + arl·Ar + akl·M = 0,   (3.3.4)

where A1, …, Ar are the algebraic cofactors of the elements a1l, …, arl, and the algebraic cofactor of akl is the minor M. Dividing (3.3.4) by M ≠ 0 and expressing akl, we obtain

akl = c1·a1l + c2·a2l + … + cr·arl,   (3.3.5)

where ci = −Ai/M, i = 1, 2, …, r.

Noting that the coefficients ci do not depend on l, we conclude that

ek = c1e1 + c2e2 + … + crer.   (3.3.6)

Expression (3.3.6) means that the k-th row of the matrix A is linearly expressed through its first r rows.

Since under transposition the values of the minors of a matrix do not change (by the properties of determinants), everything proved remains valid for columns as well. The lemma is proved.

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is nonzero, while all the minors bordering it equal zero.

Corollary II. A determinant of order n equals zero if and only if its rows (columns) are linearly dependent. The sufficiency of the linear dependence of the rows (columns) for the determinant to vanish was established earlier as a property of determinants.

Let us prove the necessity. Let a square matrix of order n be given whose only minor of order n equals zero. It follows that the rank of this matrix is less than n, i.e. there is at least one row that is a linear combination of the basis rows of this matrix.

Let us prove one more theorem about the rank of a matrix.

Theorem. The maximal number of linearly independent rows of a matrix equals the maximal number of its linearly independent columns and equals the rank of the matrix.

Proof. Let the rank of the matrix A = (aij) equal r. Then its r basis rows are linearly independent, for otherwise the basis minor would equal zero. On the other hand, any r + 1 or more rows are linearly dependent. Indeed, assuming the contrary, we could find, by Corollary II above, a nonzero minor of order greater than r. It remains to note that the maximal order of nonzero minors equals r. Everything proved for rows holds for columns as well.

Finally, let us give one more way of finding the rank of a matrix. The rank of a matrix can be determined by finding a nonzero minor of maximal order.

At first glance this requires the computation of a finite, but possibly very large, number of minors of the matrix.

The following theorem, however, allows a substantial simplification.

Theorem. If some minor of the matrix A of order r is nonzero, and all the minors bordering it equal zero, then the rank of the matrix equals r.

Proof. It suffices to show that any subsystem of S rows of the matrix with S > r is, under the hypotheses of the theorem, linearly dependent (it will then follow that r is the maximal number of linearly independent rows of the matrix, i.e. that every minor of order greater than r equals zero).

Suppose the contrary: let some rows e1, e2, …, eS (S > r) be linearly independent. By the lemma on bordering minors each of them is linearly expressed through the r rows in which the minor M stands, and which, being the rows of a nonzero minor, are linearly independent:

ei = ci1f1 + ci2f2 + … + cirfr, i = 1, 2, …, S.   (3.3.7)

Now consider the linear combination

λ1e1 + λ2e2 + … + λSeS,   (3.3.8)

or, substituting (3.3.7) into (3.3.8),

(λ1c11 + … + λScS1)f1 + … + (λ1c1r + … + λScSr)fr.

Since S > r, the homogeneous system of r equations λ1c1j + … + λScSj = 0 (j = 1, …, r) in the S unknowns λ1, …, λS has a nontrivial solution. For this choice of λ1, …, λS the combination (3.3.8) equals the zero row, which contradicts the linear independence of the rows e1, …, eS.

Hence our assumption is false, and any S > r rows are, under the hypotheses of the theorem, linearly dependent. The theorem is proved.

Let us consider the rule for computing the rank of a matrix based on this theorem: the method of bordering minors.

When computing the rank of a matrix one passes from minors of lower orders to minors of higher orders. If a nonzero minor of order r has already been found, only the minors of order r + 1 bordering it need to be computed. If all of them equal zero, the rank of the matrix equals r. This method is convenient also because in the course of it we not only compute the rank of the matrix but also determine which rows (columns) make up its basis minor.

Example. Compute the rank of the matrix A by the method of bordering minors.

Solution. The minor of order two standing in the upper left corner of the matrix A is nonzero:

M2 = ….

However, all the minors of order three bordering it equal zero:

….

Therefore the rank of the matrix A equals two: r(A) = 2.

The first and second rows and the first and second columns of this matrix are basis ones. The remaining rows are linear combinations of them. Indeed, the following equalities hold for the rows:

….

In conclusion we note the validity of the following properties:

1) the rank of a sum of matrices does not exceed the sum of the ranks of the summands;

2) the rank of the product of a matrix A, on the right or on the left, by a nonsingular square matrix Q equals the rank of the matrix A.
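Both properties are quick to verify on a concrete example (a NumPy sketch; the matrices are made up, with A of rank 2 and Q upper triangular with determinant 1):

```python
import numpy as np

r = np.linalg.matrix_rank
A = np.array([[1.0, 2, 3], [2, 4, 6], [0, 1, 1]])   # second row = 2 * first, rank 2
B = np.eye(3)
Q = np.array([[1.0, 1, 0], [0, 1, 1], [0, 0, 1]])   # det Q = 1, nonsingular

assert r(A + B) <= r(A) + r(B)          # property 1)
assert r(A @ Q) == r(A) == r(Q @ A)     # property 2)
print(r(A), r(A @ Q), r(Q @ A))         # 2 2 2
```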

Polynomial matrices

Definition. A polynomial matrix, or λ-matrix, is a rectangular matrix whose elements are polynomials in one variable λ with numerical coefficients.

Elementary transformations may be performed on λ-matrices. These include:

interchanging two rows (columns);

multiplying a row (column) by a number other than zero;

adding to one row (column) another row (column) multiplied by an arbitrary polynomial.

Two λ-matrices of the same dimensions are called equivalent if one can pass from one of them to the other by a finite number of elementary transformations.

Example. Prove the equivalence of the matrices

A(λ) = …, B(λ) = ….

1. Interchange the first and second columns of the matrix:

….

2. From the second row subtract the first multiplied by …:

….

3. Multiply the second row by (–1) and note that

….

4. Subtracting from the second column the first multiplied by …, we obtain

….

The set of all λ-matrices of given dimensions splits into disjoint classes of equivalent matrices. Matrices that are equivalent to one another form one class; matrices that are not equivalent belong to different classes.

Each class of equivalent λ-matrices is characterized by a canonical, or normal, λ-matrix of the given dimensions.

Definition. A canonical, or normal, λ-matrix of given dimensions is a λ-matrix on whose main diagonal stand p polynomials, where p is the smaller of the numbers m and n (p = min(m, n)); the leading coefficients of the polynomials that are not identically zero equal 1; and each successive polynomial is divisible by the preceding one. All elements off the main diagonal equal 0.

It follows from the definition that if among the polynomials there are polynomials of degree zero, they stand at the beginning of the main diagonal. If there are zeros, they stand at the end of the main diagonal.

The matrix of the preceding example is canonical. The matrix

…

is also canonical.

Each class of λ-matrices contains a unique canonical λ-matrix, i.e. every λ-matrix is equivalent to a unique canonical matrix, which is called the canonical form, or normal form, of the given matrix.

The polynomials standing on the main diagonal of the canonical form of a given λ-matrix are called the invariant factors of this matrix.

One method of computing the invariant factors consists in reducing the given λ-matrix to canonical form.

Thus, for the matrix of the preceding example the invariant factors are

….

It follows from the above that the presence of one and the same set of invariant factors is a necessary and sufficient condition for the equivalence of λ-matrices.

The reduction of a λ-matrix to canonical form thus reduces to the determination of the invariant factors

Ek = Dk / Dk−1, D0 = 1, k = 1, 2, …, r,

where r is the rank of the λ-matrix and Dk is the greatest common divisor of its minors of order k, taken with leading coefficient equal to 1.
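The formula Ek = Dk/Dk−1 can be implemented directly (a SymPy sketch of my own; the helper name and the sample 2×2 matrix are mine, not from the notes): compute all k-th order minors, take their monic gcd as Dk, and divide successive Dk.

```python
from itertools import combinations
import sympy as sp

lam = sp.symbols('lam')

def invariant_factors(M, var):
    """E_k = D_k / D_{k-1}, where D_k is the monic gcd of all
    k-th order minors of the lambda-matrix M (with D_0 = 1)."""
    m, n = M.shape
    D = [sp.Integer(1)]
    for k in range(1, min(m, n) + 1):
        g = sp.Integer(0)
        for r in combinations(range(m), k):
            for c in combinations(range(n), k):
                g = sp.gcd(g, M[list(r), list(c)].det())
        if g == 0:          # all k-th order minors vanish: rank is k - 1
            break
        D.append(sp.Poly(g, var).monic().as_expr())
    return [sp.cancel(D[k] / D[k - 1]) for k in range(1, len(D))]

M = sp.Matrix([[lam, 0], [0, lam**2 - lam]])
print(invariant_factors(M, lam))   # [lam, lam**2 - lam]
```

Each factor divides the next one, as the definition of the canonical form requires: here lam divides lam² − lam = lam(lam − 1).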

Example. Let the λ-matrix

A(λ) = …

be given.

Solution. Obviously, the greatest common divisor of the minors of order one is D1 = ….

Let us determine the minors of order two:

…, etc.

These data already suffice to find D2 = …, and hence E2 = D2/D1.

We determine D3:

D3 = …,

and hence E3 = D3/D2.

Thus the canonical form of the given matrix is the λ-matrix

….

A matrix polynomial is an expression of the form

A(λ) = A0 + A1λ + … + ASλ^S,

where λ is a variable and A0, A1, …, AS are square matrices of order n with numerical elements.

S is called the degree of the matrix polynomial, and n the order of the matrix polynomial.

Any square λ-matrix can be represented as a matrix polynomial. Obviously, the converse statement is also true, i.e. any matrix polynomial can be represented as a square λ-matrix.

The validity of these assertions follows clearly from the properties of operations on matrices. Let us illustrate with the following examples:

Example. The polynomial matrix

…

can be represented in the form of a matrix polynomial as follows:

….

Example. The matrix polynomial

…

can be represented in the form of a polynomial matrix (λ-matrix) as follows:

….

This interchangeability of matrix polynomials and polynomial matrices plays an essential role in the mathematical apparatus of factor analysis and principal component methods.

Matrix polynomials of the same order can be added, subtracted and multiplied in the same way as ordinary polynomials with numerical coefficients. One should remember, however, that multiplication of matrix polynomials is, in general, not commutative, since multiplication of matrices is not commutative.

Two matrix polynomials are called equal if their coefficients are equal, i.e. the corresponding matrices standing at the same powers of the variable.

The sum (difference) of two matrix polynomials is the matrix polynomial whose coefficient at each power of the variable equals the sum (difference) of the coefficients at the same power in the given polynomials.

To multiply a matrix polynomial by a matrix polynomial, one must multiply each term of the first matrix polynomial by each term of the second, add the products obtained and collect similar terms.

The degree of the product of matrix polynomials is less than or equal to the sum of the degrees of the factors.

Operations on matrix polynomials can be carried out through operations on the corresponding λ-matrices.

To add (subtract) matrix polynomials it suffices to add (subtract) the corresponding λ-matrices. The same holds for multiplication: the λ-matrix of the product of matrix polynomials equals the product of the λ-matrices of the factors.
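The multiplication rule above can be sketched in code (a NumPy illustration of my own, not from the notes): a matrix polynomial is stored as a list of coefficient matrices, and the product is the convolution of coefficients, with the order of the matrix factors preserved since matrix multiplication is not commutative.

```python
import numpy as np

def matpoly_mul(A, B):
    """Product of matrix polynomials given as coefficient lists
    [A0, A1, ...] meaning A0 + A1*x + A2*x**2 + ...; the order of the
    factors Ai @ Bj matters because matrices do not commute."""
    n = A[0].shape[0]
    C = [np.zeros((n, n)) for _ in range(len(A) + len(B) - 1)]
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            C[i + j] += Ai @ Bj    # coefficient of x**(i + j)
    return C

I = np.eye(2)
N = np.array([[0.0, 1], [0, 0]])   # nilpotent: N @ N = 0
# (I + N*x) * (I - N*x) = I, since the x and x**2 coefficients cancel
P = matpoly_mul([I, N], [I, -N])
assert np.allclose(P[0], I) and np.allclose(P[1], 0) and np.allclose(P[2], 0)
```

Note that the degree of the product here is less than the sum of the degrees of the factors, illustrating the inequality stated above.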

On the other hand, every matrix polynomial can be written in the form

B(λ) = B0λ^S + B1λ^(S−1) + … + BS,

where B0 is a nonsingular matrix.

When a matrix polynomial A(λ) is divided by B(λ) there exist a uniquely determined right quotient Q1(λ) and right remainder R1(λ),

A(λ) = Q1(λ)B(λ) + R1(λ),

where the degree of R1 is less than the degree of B, or R1 = 0 (division without remainder), and likewise a left quotient Q2(λ) and a left remainder R2(λ):

A(λ) = B(λ)Q2(λ) + R2(λ).

some numbers (any number or all can be equal to zero). This means the presence of such zeal between the elements of the doctrines:

or .

Z (3.3.1) vibrates, so

(3.3.2)

de – zero row.

Viznachennya. The rows of the matrix are linearly deposited, so that such numbers can be found that not all are equal to zero at the same time, so

(3.3.3)

If equality (3.3.3) is fair, then the rows are called linearly independent. The relationship (3.3.2) shows that if one of the rows is linearly expressed through the others, then the rows are linearly subordinate.

It is easy to draw and turn: since the rows are linearly laid down, then there will be a row that will be a linear combination of other rows.

Let him go, for example, in (3.3.3) .

Viznachennya. Let's go to the matrix And you saw a minor r th order and let minor ( r +1)-th order of the matrix and the whole place a minor. We will say that in this case the minor oblyamovaya minor (or oblyamovaya for ).

Now we’ll tell the important lema.

Lemmaabout framing minors. Yakshcho minor order r matrix A = equal to zero, and all the minors that it contains are equal to zero, then any row (series) of matrix A is a linear combination of its rows (series), which becomes .

Finished. Without disrupting the mellowness of the mercury, it is important that it is in the form of zero minor r The th order is located at the upper left corner of the matrix A =:

.

For first k rows of the matrix And Lemy’s assertion is obvious: add a linear combination to include this row with a coefficient equal to one, and others with coefficients equal to zero.

Let us now prove that other rows of matrix A are linearly expressed through the first ones k rows. For whom we will forget the minor ( r +1) the order of the way is added to the minor k-th row () ta l-th stovptsya():

.

Subtract minor to zero for all k and l . As a matter of fact, there is no difference between two new points. As a matter of fact, the removal of the minor and the oblique minor for and, therefore, equals zero behind the mind.

Let's lay out the minor after the elements of the restl-th stovptsya:

(3.3.4)

de - algebraic addition to elements. Algebraic addition is the minor of matrix A, that is. Divisible (3.3.4) by and expressible via:

(3.3.5)

de , .

Respectfully, we reject:

(3.3.6)

Viraz (3.3.6) means that k -th row of the matrix A is linearly expressed through the first ones r rows.

Since the values of the minors do not change when the matrix is transposed (by a property of determinants), everything proved holds for columns as well. The lemma is proved.
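The lemma admits a simple numerical check. The sketch below (assuming numpy is available; the matrix is illustrative, not taken from the text) takes a 3×4 matrix of rank 2 whose upper-left 2×2 minor is nonzero, recovers the coefficients c_i of (3.3.6) from the first r = 2 columns, and verifies that they reproduce the whole third row:

```python
import numpy as np

# Illustrative 3x4 matrix: its third row is 2*row1 + 1*row2.
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 1.0, 1.0, 2.0],
              [2.0, 5.0, 7.0, 10.0]])

# The 2x2 minor D in the upper-left corner is nonzero (the basis minor).
D = np.linalg.det(A[:2, :2])
assert abs(D) > 1e-12

# The coefficients c_i can be found from the first r = 2 columns alone:
# c must satisfy c1*e1 + c2*e2 = e3 restricted to those columns.
c = np.linalg.solve(A[:2, :2].T, A[2, :2])
print(c)                              # -> approximately [2. 1.]

# The same coefficients then reproduce the entire third row, as (3.3.6) states.
print(np.allclose(A[2], c @ A[:2]))   # -> True
```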

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is different from zero, while all the minors bordering it are equal to zero.

Corollary II. A determinant of order n equals zero if and only if it contains linearly dependent rows (columns). The sufficiency of the linear dependence of the rows (columns) for the determinant to vanish was proved earlier as a property of determinants.

Let us prove the necessity. If a determinant of order n equals zero, then its only minor of order n vanishes, so the rank of the matrix is less than n; that is, at least one row is a linear combination of the basis rows of this matrix.

Let us prove one more theorem about the rank of a matrix.

Theorem. The maximum number of linearly independent rows of a matrix equals the maximum number of its linearly independent columns and equals the rank of the matrix.

Proof. Let the rank of the matrix A equal r. Then its r basis rows are linearly independent, since otherwise the basis minor would equal zero. On the other hand, any r + 1 or more rows are linearly dependent: assuming the contrary, we could find a nonzero minor of order greater than r by Corollary II above. It remains to note that the maximum order of nonzero minors equals r. Everything proved for rows holds for columns as well.

Finally, let us mention one more way of finding the rank of a matrix. The rank of a matrix can be computed by finding a minor of the maximum order among those different from zero.

At first sight this requires computing a finite, but possibly very large, number of minors of the matrix.

The following theorem, however, permits a substantial simplification.

Theorem. If a minor of order r of the matrix A is different from zero, while all the minors bordering it are equal to zero, then the rank of the matrix equals r.

Proof. It suffices to show that any subsystem of S > r rows of the matrix is, under the hypotheses of the theorem, linearly dependent (it would then follow that r is the maximum number of linearly independent rows of the matrix, and that all its minors of order greater than r are equal to zero).

Suppose the contrary: let the rows e_{i_1}, …, e_{i_S} be linearly independent. By the lemma on bordering minors, each of them is linearly expressed through the rows passing through the given minor, which, since the minor is different from zero, are themselves linearly independent; without loss of generality let these be the first r rows e_1, …, e_r:

e_{i_k} = c_{k1}e_1 + c_{k2}e_2 + … + c_{kr}e_r,   k = 1, 2, …, S.   (3.3.7)

Consider the matrix of coefficients of the linear expressions (3.3.7):

C = \begin{pmatrix} c_{11} & c_{12} & \dots & c_{1r} \\ c_{21} & c_{22} & \dots & c_{2r} \\ \vdots & \vdots & & \vdots \\ c_{S1} & c_{S2} & \dots & c_{Sr} \end{pmatrix}.

Denote the rows of this matrix by c_1, …, c_S. They are linearly dependent, since the rank of C does not exceed r, i.e., the maximum number of its linearly independent rows does not exceed r < S. Hence there exist numbers λ_1, …, λ_S, not all equal to zero, such that λ_1c_1 + λ_2c_2 + … + λ_Sc_S = 0.

Passing to the equality of components, we obtain:

λ_1c_{1j} + λ_2c_{2j} + … + λ_Sc_{Sj} = 0,   j = 1, 2, …, r.   (3.3.8)

Now consider the linear combination

λ_1e_{i_1} + λ_2e_{i_2} + … + λ_Se_{i_S},

or, substituting the expressions (3.3.7) and regrouping by the rows e_j,

\sum_{k=1}^{S} λ_k e_{i_k} = \sum_{j=1}^{r} \Big( \sum_{k=1}^{S} λ_k c_{kj} \Big) e_j = 0

by (3.3.8), which contradicts the linear independence of the rows e_{i_1}, …, e_{i_S}. The theorem is proved.
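The theorem justifies the following search procedure, known as the method of bordering minors: grow a nonzero minor one order at a time, and stop as soon as every minor bordering the current one vanishes. A minimal sketch assuming numpy (the helper names are mine, and floating-point determinants are compared against a tolerance):

```python
import numpy as np

def minor_det(A, rows, cols):
    """Determinant of the submatrix on the given row/column index lists."""
    return np.linalg.det(A[np.ix_(rows, cols)])

def rank_by_bordering_minors(A, tol=1e-10):
    """Rank via bordering minors: enlarge a nonzero minor one order at a
    time; stop when all minors bordering the current one are zero."""
    m, n = A.shape
    # A nonzero element is a nonzero minor of order 1 (if none, rank is 0).
    nz = np.argwhere(np.abs(A) > tol)
    if nz.size == 0:
        return 0
    rows, cols = [int(nz[0][0])], [int(nz[0][1])]
    r = 1
    while r < min(m, n):
        found = False
        # Border the current minor with one extra row and one extra column.
        for i in range(m):
            if i in rows:
                continue
            for j in range(n):
                if j in cols:
                    continue
                if abs(minor_det(A, rows + [i], cols + [j])) > tol:
                    rows.append(i); cols.append(j)
                    r += 1
                    found = True
                    break
            if found:
                break
        if not found:   # every bordering minor vanishes -> rank is r
            break
    return r

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice the first row
              [1.0, 0.0, 1.0]])
print(rank_by_bordering_minors(A))   # -> 2
print(np.linalg.matrix_rank(A))      # -> 2
```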

Linear independence of matrix rows

Let a matrix of size m × n be given:

A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}.

Denote the rows of the matrix as follows:

e_1 = (a_{11}, a_{12}, …, a_{1n}), e_2 = (a_{21}, a_{22}, …, a_{2n}), …, e_m = (a_{m1}, a_{m2}, …, a_{mn}).

Two rows are called equal if their corresponding elements are equal: e_k = e_s if a_{kj} = a_{sj}, j = 1, 2, …, n.

We introduce the operations of multiplying a row by a number and of adding rows as operations carried out element by element:

λe_k = (λa_{k1}, λa_{k2}, …, λa_{kn});   e_k + e_s = (a_{k1} + a_{s1}, a_{k2} + a_{s2}, …, a_{kn} + a_{sn}).

Definition. A row e is called a linear combination of the rows e_1, e_2, …, e_k of the matrix if it equals the sum of the products of these rows by arbitrary real numbers:

e = λ_1e_1 + λ_2e_2 + … + λ_ke_k.

Definition. The rows of the matrix are called linearly dependent if there exist numbers λ_1, …, λ_m, not all simultaneously equal to zero, such that the linear combination of the rows of the matrix equals the zero row:

λ_1e_1 + λ_2e_2 + … + λ_me_m = 0, where 0 = (0, 0, …, 0).   (1.1)

Linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the others.

Definition. If the linear combination of rows (1.1) equals zero if and only if all the coefficients λ_i are zero, then the rows are called linearly independent.

Theorem on the rank of a matrix. The rank of a matrix equals the maximum number of its linearly independent rows or columns, through which all its other rows (columns) are linearly expressed.

This theorem plays an important role in matrix analysis, in particular in the study of systems of linear equations.
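In particular, the theorem implies that transposing a matrix never changes its rank, since transposition merely swaps rows and columns. A quick check assuming numpy, with an illustrative matrix:

```python
import numpy as np

# Illustrative 4x3 matrix: column 2 is twice column 1,
# row 3 = row 1 + row 2, row 4 = 5*(row 2 - 2*row 1), so the rank is 2.
A = np.array([[1, 2, 0],
              [2, 4, 1],
              [3, 6, 1],
              [0, 0, 5]])

print(np.linalg.matrix_rank(A))     # -> 2 (maximum number of independent rows)
print(np.linalg.matrix_rank(A.T))   # -> 2 (the same for the columns)
```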

6, 13, 14, 15, 16. Vectors. Operations on vectors (addition, subtraction, multiplication by a number). The n-dimensional vector. The notion of a vector space and its basis.

A vector is a directed segment with initial point A and terminal point B (which may be translated parallel to itself).

Vectors may be denoted either by two capital letters or by a single lowercase letter, with a bar or an arrow above.

The length (or modulus) of a vector is the number equal to the length of the segment AB that represents the vector.

Vectors lying on one line or on parallel lines are called collinear.

If the initial and terminal points of a vector coincide, the vector is called the zero vector and is denoted 0̄. The length of the zero vector equals zero: |0̄| = 0.

1) Multiplication of a vector by a number: the product of the vector a by the number λ is the vector λa whose length equals |λ||a|; it is collinear with a, codirectional with it if λ > 0 and oppositely directed if λ < 0.

2) The opposite vector −a is the product of the vector a by the number (−1), i.e. −a = (−1)a.

3) The sum of two vectors a and b is the vector a + b whose initial point coincides with the initial point of a and whose terminal point coincides with the terminal point of b, provided b is laid off from the end of a (the triangle rule). The sum of several vectors is defined similarly.



4) The difference of two vectors a and b is the sum of the vector a and the vector −b, opposite to b.
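In coordinates, all four operations above are performed componentwise. A small sketch assuming numpy, with arbitrary illustrative vectors:

```python
import numpy as np

a = np.array([3.0, -1.0, 2.0])
b = np.array([1.0, 4.0, -2.0])

print(2.5 * a)     # multiplication by a number, componentwise
print(-a)          # the opposite vector: a multiplied by (-1)
print(a + b)       # the sum (triangle rule, in coordinates)
print(a - b)       # the difference: the sum of a and the opposite of b
print(np.allclose(a - b, a + (-b)))   # -> True
```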

Scalar product

Definition: The scalar product of two vectors is the number equal to the product of the lengths of these vectors and the cosine of the angle between them: a · b = |a||b| cos φ.
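In coordinates the scalar product is the sum of the products of the corresponding components, and the two forms agree. A sketch assuming numpy, with illustrative vectors chosen so that |a| = |b| = 5:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([4.0, 3.0])

# Componentwise form of the scalar product: 3*4 + 4*3 = 24.
dot = float(np.dot(a, b))
print(dot)                            # -> 24.0

# |a| |b| cos(phi) gives the same number:
cos_phi = dot / (np.linalg.norm(a) * np.linalg.norm(b))
same = np.linalg.norm(a) * np.linalg.norm(b) * cos_phi
print(abs(dot - same) < 1e-9)         # -> True
```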

n-dimensional vector and vector space

Definition. An n-dimensional vector is an ordered collection of n real numbers, written in the form x = (x_1, x_2, …, x_n), where x_i is the i-th component of the vector x.

The notion of an n-dimensional vector is widely used in economics: for example, a certain set of goods can be characterized by the vector x = (x_1, x_2, …, x_n), and the corresponding prices by y = (y_1, y_2, …, y_n).

- Two n-dimensional vectors x and y are called equal if their corresponding components are equal, i.e. x = y if x_i = y_i, i = 1, 2, …, n.

- The sum of two vectors x and y of the same dimension n is the vector z = x + y whose components equal the sums of the corresponding components of the summands, i.e. z_i = x_i + y_i, i = 1, 2, …, n.

- The product of the vector x by the number λ is the vector λx whose components equal the products of λ by the corresponding components of x, i.e. (λx)_i = λx_i, i = 1, 2, …, n.

Linear operations on arbitrary vectors satisfy the following properties:



1) x + y = y + x — the commutative property of addition;

2) (x + y) + z = x + (y + z) — the associative property of addition;

3) α(βx) = (αβ)x — the associative property with respect to a numerical factor;

4) α(x + y) = αx + αy — the distributive property with respect to the sum of vectors;

5) (α + β)x = αx + βx — the distributive property with respect to the sum of numbers;

6) there exists a zero vector 0 = (0, 0, …, 0) such that x + 0 = x for any vector x (the special role of the zero vector);

7) for any vector x there exists an opposite vector −x such that x + (−x) = 0;

8) 1 · x = x for any vector x (the special role of the numerical factor 1).
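For vectors given in coordinates, all eight properties are direct consequences of componentwise arithmetic and can be checked numerically. A sketch assuming numpy, with arbitrary illustrative vectors and numbers:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([0.5, 4.0, -1.0])
z = np.array([2.0, 2.0, 2.0])
alpha, beta = 3.0, -2.0
zero = np.zeros(3)

checks = [
    np.allclose(x + y, y + x),                              # 1) commutativity
    np.allclose((x + y) + z, x + (y + z)),                  # 2) associativity
    np.allclose(alpha * (beta * x), (alpha * beta) * x),    # 3) numerical factor
    np.allclose(alpha * (x + y), alpha * x + alpha * y),    # 4) over vector sums
    np.allclose((alpha + beta) * x, alpha * x + beta * x),  # 5) over number sums
    np.allclose(x + zero, x),                               # 6) zero vector
    np.allclose(x + (-x), zero),                            # 7) opposite vector
    np.allclose(1 * x, x),                                  # 8) the factor 1
]
print(all(checks))   # -> True
```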

Definition. A set of vectors with real components, on which operations of vector addition and multiplication of a vector by a number satisfying the above eight properties (taken as axioms) are defined, is called a vector space.

Dimension and basis of a vector space

Definition. A linear space is called n-dimensional if it contains n linearly independent vectors, while any n + 1 of its vectors are linearly dependent. In other words, the dimension of a space is the maximum number of linearly independent vectors contained in it. The number n is called the dimension of the space.

A set of n linearly independent vectors of an n-dimensional space is called a basis.
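Whether n given vectors form a basis of the n-dimensional space can be tested by the determinant of the matrix composed of them: they are linearly independent exactly when it is nonzero. A sketch assuming numpy, with illustrative vectors in R^3:

```python
import numpy as np

# Three illustrative vectors in R^3.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])

# Put them into the columns of a matrix; det != 0 <=> linear independence.
M = np.column_stack([v1, v2, v3])
print(abs(np.linalg.det(M)) > 1e-12)   # -> True: the vectors form a basis

# Any further vector of R^3 then has (unique) coordinates in this basis:
w = np.array([2.0, 3.0, 4.0])
coords = np.linalg.solve(M, w)
print(np.allclose(M @ coords, w))      # -> True
```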

7. Eigenvectors and eigenvalues of a matrix. The characteristic equation of a matrix.

Definition. A nonzero vector x is called an eigenvector of the linear operator A if there exists a number λ such that:

Ax = λx.

The number λ is called the eigenvalue of the operator (of the matrix A) corresponding to the vector x.

This relation can be written in matrix form:

AX = λX,

where X is the column matrix of the coordinates of the vector x; in expanded form it is a system of n linear equations.

Let us rewrite the system so that the right-hand sides contain zeros:

(A − λE)X = 0.

We obtain a homogeneous system, which always has the zero solution. For a nonzero solution to exist, it is necessary and sufficient that the determinant of the system equal zero: |A − λE| = 0.

The determinant |A − λE| is a polynomial of degree n in λ. This polynomial is called the characteristic polynomial of the operator (of the matrix A), and the equation |A − λE| = 0 is called the characteristic equation of the operator or of the matrix A.

Example:

Find the eigenvalues and eigenvectors of the linear operator given by the matrix.

Solution: We set up the characteristic equation |A − λE| = 0 and find the eigenvalues of the linear operator.

We then find the eigenvector corresponding to each eigenvalue. For the eigenvalue λ we solve the matrix equation (A − λE)X = 0, from which the components of the vector X are determined (one of them may be chosen arbitrarily, since the system is homogeneous).

Choosing the free component arbitrarily, we conclude that the resulting vectors are eigenvectors of the linear operator with the corresponding eigenvalues.

The eigenvector for the second eigenvalue is found similarly.
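Since the matrices of this example were lost in extraction, the sketch below uses an assumed illustrative 2×2 matrix. For a 2×2 matrix the characteristic polynomial is λ² − (tr A)λ + det A; the code finds its roots and checks Ax = λx for the eigenvectors returned by numpy:

```python
import numpy as np

# Assumed illustrative matrix (the example's own matrix was lost).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Characteristic equation: lam^2 - (tr A)*lam + det A = 0,
# here lam^2 - 7*lam + 10 = 0, with roots 2 and 5.
lams = np.sort(np.roots([1.0, -np.trace(A), np.linalg.det(A)]))
print(np.round(lams, 6))   # -> [2. 5.]

# numpy's eig returns the same eigenvalues together with eigenvectors,
# and each eigenvector satisfies A v = lam v.
vals, vecs = np.linalg.eig(A)
ok = all(np.allclose(A @ v, lam * v) for lam, v in zip(vals, vecs.T))
print(ok)                  # -> True
```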

8. A system of n linear equations with n unknowns (general form). The matrix form of such a system. Solution of a system (definition). Consistent and inconsistent, determinate and indeterminate systems of linear equations.

Solving a system of linear equations in n unknowns

Systems of linear equations are widely applied in economics.

A system of m linear equations with n unknowns has the form:

a_{11}x_1 + a_{12}x_2 + … + a_{1n}x_n = b_1,
a_{21}x_1 + a_{22}x_2 + … + a_{2n}x_n = b_2,
…,
a_{m1}x_1 + a_{m2}x_2 + … + a_{mn}x_n = b_m,

where a_{ij} (i = 1, 2, …, m; j = 1, 2, …, n) are given numbers called the coefficients of the unknowns, and b_i are the free terms of the equations.

Short notation: \sum_{j=1}^{n} a_{ij}x_j = b_i (i = 1, 2, …, m).

Definition. A solution of the system is a set of values of the unknowns whose substitution turns every equation of the system into a true equality.

1) A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

2) A consistent system of equations is called determinate if it has exactly one solution, and indeterminate if it has more than one solution.

3) Two systems of equations are called equivalent if they have the same set of solutions.

Let us write the system in matrix form:

AX = B,

where A is the matrix of coefficients of the unknowns (the matrix of the system), X is the column matrix of the unknowns, and B is the column matrix of the free terms.

Since the number of columns of the matrix A equals the number of rows of the matrix X, their product AX is defined and is a column matrix. The elements of this product are the left-hand sides of the original system. By the definition of equality of matrices, the original system can therefore be written as AX = B.

Cramer's theorem. Let Δ be the determinant of the matrix of the system, and let Δ_j be the determinant of the matrix obtained from it by replacing the j-th column with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution given by the formulas:

x_j = Δ_j / Δ, j = 1, 2, …, n

(Cramer's formulas).

Example. Solve the system of equations by Cramer's formulas.

Solution. The determinant of the matrix of the system is Δ ≠ 0, so the system has a unique solution. We compute Δ_1, Δ_2, Δ_3, obtained from Δ by replacing the first, second, and third columns, respectively, with the column of free terms.

By Cramer's formulas: x_1 = Δ_1/Δ, x_2 = Δ_2/Δ, x_3 = Δ_3/Δ.
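The numbers of this example were lost in extraction, so the sketch below applies Cramer's formulas to an assumed illustrative 3×3 system (the function name is mine), assuming numpy:

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's formulas: x_j = det(A_j) / det(A),
    where A_j is A with its j-th column replaced by b."""
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) = 0: Cramer's formulas do not apply")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b          # replace the j-th column by the free terms
        x[j] = np.linalg.det(Aj) / d
    return x

# Assumed illustrative system with the unique solution (1, 2, 3).
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [3.0, 1.0,  1.0]])
b = np.array([1.0, 13.0, 8.0])

x = cramer(A, b)
print(np.round(x, 6))          # -> [1. 2. 3.]
print(np.allclose(A @ x, b))   # -> True
```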

9. The Gauss method for a system of n linear equations with n unknowns. The notion of the Jordan–Gauss method.

The Gauss method is the method of successive elimination of the unknowns.

The essence of the Gauss method is that, by means of elementary transformations of rows and permutations of columns, the system of equations is reduced to an equivalent system of step (triangular) form, from which all the remaining unknowns are found successively, starting with the last (by number) unknown.

It is convenient to carry out the Gauss transformations not on the equations themselves but on the augmented matrix of their coefficients, obtained by adjoining the column of free terms to the matrix of the system:

(A | B) = \begin{pmatrix} a_{11} & \dots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1} & \dots & a_{mn} & b_m \end{pmatrix}.

It should be noted that the Gauss method can be used to solve any system of equations of the form AX = B.

Example. Solve the system by the Gauss method:

We write out the augmented matrix of the system.

Step 1. Interchange the first and second rows so that the leading element equals 1.

Step 2. Multiply the elements of the first row by (−2) and (−1) and add them to the elements of the second and third rows respectively, so that zeros appear under the leading element in the first column.

For consistent systems of linear equations the following theorems hold:

Theorem 1. If the rank of the matrix of a consistent system equals the number of unknowns, i.e. r = n, then the system has a unique solution.

Theorem 2. If the rank of the matrix of a consistent system is less than the number of unknowns, i.e. r < n, then the system is indeterminate and has infinitely many solutions.

Definition. A basis minor of a matrix is any nonzero minor whose order equals the rank of the matrix.

Definition. The unknowns whose coefficients enter the basis minor are called basic, and the remaining unknowns are called free.

To solve the system in the indeterminate case means to express the basic unknowns (the determinant of whose coefficients is nonzero) in terms of the free unknowns.

Let us express the basic unknowns through the free ones.

From the second row of the resulting matrix we express one of the basic unknowns through the free unknowns.

From the first row we then express the remaining basic unknown.

This yields the general solution of the system of equations.
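The worked numbers of this example were lost in extraction; the sketch below implements the two stages of the Gauss method for a square system with an assumed illustrative matrix, assuming numpy: forward elimination to triangular form (with row interchanges, as in Step 1), then back substitution starting from the last unknown:

```python
import numpy as np

def gauss_solve(A, b):
    """Gauss's method for a square system A x = b: forward elimination
    with row interchanges to a triangular form, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):
        # Bring the largest remaining element of column k to the pivot row.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Create zeros under the pivot, as in Step 2 of the example.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution, starting from the last unknown.
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k+1:] @ x[k+1:]) / A[k, k]
    return x

# Assumed illustrative system (the example's own numbers were lost).
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

x = gauss_solve(A, b)
print(np.allclose(A @ x, b))   # -> True
```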