Heterogeneity of the system. Classification of heterogeneous systems


Solving systems of linear algebraic equations (SLAEs) is undoubtedly the most important topic in a linear algebra course. A huge number of problems from all branches of mathematics reduce to solving systems of linear equations. This motivates the present article. Its material is selected and structured so that it can help you

  • choose the optimal method for solving your system of linear algebraic equations,
  • study the theory of the chosen method,
  • solve your system of linear equations by considering detailed solutions to typical examples and problems.

A brief overview of the material.

First, we give all the necessary definitions and concepts and introduce notation.

Next, we consider methods for solving systems of linear algebraic equations in which the number of equations equals the number of unknown variables and which have a unique solution. First we focus on Cramer's method, then we show the matrix method for solving such systems, and finally we analyze the Gauss method (the method of sequential elimination of unknowns). To consolidate the theory, we will solve several SLAEs in different ways.

After that, we move on to solving systems of linear algebraic equations of general form, in which the number of equations does not coincide with the number of unknowns or the main matrix of the system is singular. We formulate the Kronecker-Capelli theorem, which allows us to establish whether an SLAE is consistent. We analyze the solution of systems (when they are consistent) using the concept of a basis minor of a matrix. We also consider the Gauss method and describe the solutions of the examples in detail.

We will also dwell on the structure of the general solution of homogeneous and inhomogeneous systems of linear algebraic equations. We introduce the concept of a fundamental system of solutions and show how the general solution of an SLAE is written in terms of the vectors of the fundamental system of solutions. For better understanding, we look at a few examples.

In conclusion, we will consider systems of equations that can be reduced to linear ones, as well as various problems in the solution of which SLAEs arise.


Definitions, concepts, and notation.

We will consider systems of p linear algebraic equations with n unknown variables (p may equal n) of the form

a11·x1 + a12·x2 + … + a1n·xn = b1
a21·x1 + a22·x2 + … + a2n·xn = b2
…
ap1·x1 + ap2·x2 + … + apn·xn = bp

Here x1, x2, …, xn are the unknown variables, aij (i = 1, …, p; j = 1, …, n) are the coefficients (some real or complex numbers), and b1, b2, …, bp are the free terms (also real or complex numbers).

This form of writing an SLAE is called the coordinate form.

In matrix form, this system of equations is written as A·X = B,
where A is the main matrix of the system, X is the column matrix of unknown variables, and B is the column matrix of free terms.

If we append the column matrix of free terms to the matrix A as an (n+1)-th column, we obtain the so-called extended matrix of the system of linear equations. The extended matrix is usually denoted by the letter T, and the column of free terms is separated from the remaining columns by a vertical line, that is, T = (A | B).

A solution of a system of linear algebraic equations is a set of values of the unknown variables that turns every equation of the system into an identity. For these values of the unknowns, the matrix equation also becomes an identity.

If a system of equations has at least one solution, it is called consistent.

If a system of equations has no solutions, it is called inconsistent.

If an SLAE has a unique solution, it is called definite; if it has more than one solution, it is called indefinite.

If the free terms of all equations of the system are equal to zero, the system is called homogeneous; otherwise it is called inhomogeneous.

Solving elementary systems of linear algebraic equations.

If the number of equations of a system equals the number of unknown variables and the determinant of its main matrix is nonzero, we will call such SLAEs elementary. Such systems of equations have a unique solution, and in the case of a homogeneous system all unknown variables are equal to zero.

We started studying such SLAEs in high school. When solving them, we took one equation, expressed one unknown in terms of the others, and substituted it into the remaining equations; then we took the next equation, expressed the next unknown, substituted it into the other equations, and so on. Alternatively, we used the addition method: we added two or more equations to eliminate some of the unknowns. We will not dwell on these methods in detail, since they are essentially modifications of the Gauss method.

The main methods for solving elementary systems of linear equations are Cramer's method, the matrix method, and the Gauss method. Let us examine each of them.

Solving systems of linear equations using Cramer's method.

Suppose we need to solve a system of linear algebraic equations

in which the number of equations equals the number of unknown variables and the determinant of the main matrix of the system is nonzero, that is, det(A) ≠ 0.

Let Δ = det(A) be the determinant of the main matrix of the system, and let Δ1, Δ2, …, Δn be the determinants of the matrices obtained from A by replacing its 1st, 2nd, …, n-th column, respectively, with the column of free terms.

With this notation, the unknown variables are calculated by the formulas of Cramer's method as xi = Δi/Δ, i = 1, 2, …, n. This is how the solution of a system of linear algebraic equations is found by Cramer's method.
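As an illustration, Cramer's formulas are easy to implement directly. The sketch below is pure Python with names of our own choosing (`det`, `cramer`); it assumes a square system with Δ ≠ 0 and is meant as a teaching aid, not an efficient solver.

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor obtained by deleting row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve A x = b via Cramer's formulas x_i = Delta_i / Delta."""
    d = det(A)
    if d == 0:
        raise ValueError("determinant is zero: Cramer's method does not apply")
    xs = []
    for i in range(len(A)):
        # Replace the i-th column of A with the column of free terms b.
        Ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        xs.append(det(Ai) / d)
    return xs
```

For example, for the system 2x1 + x2 = 3, x1 + 3x2 = 5 we get Δ = 5, Δ1 = 4, Δ2 = 7, hence x1 = 4/5 and x2 = 7/5.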

Example.

Solve the system of linear equations by Cramer's method.

Solution.

Let us write down the main matrix of the system and calculate its determinant (if necessary, see the article on calculating the determinant of a matrix):

Since the determinant of the main matrix of the system is nonzero, the system has a unique solution that can be found by Cramer’s method.

Let us compose and calculate the necessary determinants (we obtain Δ1 by replacing the first column of A with the column of free terms, Δ2 by replacing the second column, and Δ3 by replacing the third column):

We find the unknown variables using the formulas xi = Δi/Δ:

Answer:

The main disadvantage of Cramer's method (if it can be called a disadvantage) is the laboriousness of calculating determinants when the number of equations exceeds three.

Solving systems of linear algebraic equations using the matrix method (using an inverse matrix).

Let a system of linear algebraic equations be given in matrix form A·X = B, where the matrix A has size n×n and its determinant is nonzero.

Since det(A) ≠ 0, the matrix A is invertible, that is, the inverse matrix A⁻¹ exists. If we multiply both sides of the equality A·X = B by A⁻¹ on the left, we obtain the formula X = A⁻¹·B for finding the column matrix of unknown variables. This is how the matrix method yields the solution of a system of linear algebraic equations.
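For illustration, here is a pure-Python sketch of the matrix method (the helper names are ours). Instead of the cofactor construction used in the worked example below, it inverts A by Gauss-Jordan elimination, which produces the same A⁻¹, and then computes X = A⁻¹·B.

```python
def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the row with the largest pivot into place.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    # The right half of the augmented matrix is now A^{-1}.
    return [row[n:] for row in M]

def mat_vec(A, b):
    return [sum(a * x for a, x in zip(row, b)) for row in A]

def solve_matrix_method(A, b):
    """X = A^{-1} * B."""
    return mat_vec(inverse(A), b)
```

Gauss-Jordan inversion takes O(n³) operations, far fewer than the cofactor formula for large n, but the resulting solution X is the same.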

Example.

Solve system of linear equations matrix method.

Solution.

Let's rewrite the system of equations in matrix form:

Because

then the SLAE can be solved by the matrix method. Using the inverse matrix, the solution of this system can be found as X = A⁻¹·B.

Let us construct the inverse matrix using the matrix of cofactors (algebraic complements) of the elements of matrix A (if necessary, see the article on finding the inverse matrix):

It remains to calculate the column matrix of unknown variables by multiplying the inverse matrix by the column matrix of free terms (if necessary, see the article on matrix multiplication):

Answer:

or, in another notation, x1 = 4, x2 = 0, x3 = -1.

The main difficulty of the matrix method is the laboriousness of finding the inverse matrix, especially for square matrices of order higher than three.

Solving systems of linear equations using the Gauss method.

Suppose we need to find the solution of a system of n linear equations with n unknown variables whose main matrix has a nonzero determinant.

The essence of the Gauss method is the sequential elimination of unknown variables: first x1 is eliminated from all equations of the system starting from the second, then x2 is eliminated from all equations starting from the third, and so on, until only the unknown xn remains in the last equation. This process of transforming the equations of the system to eliminate the unknowns one after another is called the forward pass of the Gauss method. After the forward pass is completed, xn is found from the last equation; using this value, xn-1 is computed from the penultimate equation, and so on, until x1 is found from the first equation. The process of computing the unknowns while moving from the last equation of the system to the first is called the backward pass (back substitution) of the Gauss method.

Let us briefly describe the algorithm for eliminating unknown variables.

We will assume that a11 ≠ 0, since we can always achieve this by rearranging the equations of the system. Let us eliminate the unknown x1 from all equations of the system, starting with the second. To do this, we add to the second equation of the system the first multiplied by -a21/a11, to the third equation the first multiplied by -a31/a11, and so on, to the n-th equation the first multiplied by -an1/a11. After these transformations the system of equations takes the form

where aij(1) = aij - (ai1/a11)·a1j and bi(1) = bi - (ai1/a11)·b1 for i, j = 2, 3, …, n.

We would have arrived at the same result by expressing x1 in terms of the other unknowns in the first equation of the system and substituting the resulting expression into all the other equations. Thus, the variable x1 is eliminated from all equations starting from the second.

Next, we proceed in a similar way, but only with part of the resulting system, which is marked in the figure

To do this, we add to the third equation of the system the second multiplied by -a32(1)/a22(1), to the fourth equation the second multiplied by -a42(1)/a22(1), and so on, to the n-th equation the second multiplied by -an2(1)/a22(1). After these transformations the system of equations takes the form

where aij(2) = aij(1) - (ai2(1)/a22(1))·a2j(1) and bi(2) = bi(1) - (ai2(1)/a22(1))·b2(1) for i, j = 3, 4, …, n. Thus, the variable x2 is eliminated from all equations starting from the third.

Next we proceed to eliminating the unknown x3, acting similarly with the part of the system marked in the figure

We continue the forward pass of the Gauss method in this way until the system takes a triangular form

From this moment we begin the backward pass of the Gauss method: we compute xn from the last equation as xn = bn(n-1)/ann(n-1), then use the obtained value of xn to find xn-1 from the penultimate equation, and so on, until we find x1 from the first equation.
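The forward and backward passes described above can be sketched in pure Python (the function name is ours; partial pivoting is added for numerical safety, which also implements the "rearrange the equations" step):

```python
def gauss_solve(A, b):
    """Solve a square system A x = b with det(A) != 0 by the Gauss method."""
    n = len(A)
    # Work on a copy of the augmented matrix [A | b].
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]
    # Forward pass: eliminate x_k from equations k+1 .. n-1.
    for k in range(n):
        # Swap in the row with the largest pivot (possible since det != 0).
        pivot = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[pivot] = M[pivot], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    # Backward pass: from the last equation up to the first.
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x
```

The forward pass costs O(n³) operations and the backward pass O(n²), which is why Gaussian elimination scales much better than Cramer's method.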

Example.

Solve system of linear equations Gauss method.

Solution.

Let us eliminate the unknown x1 from the second and third equations of the system. To do this, we add to both sides of the second and third equations the corresponding sides of the first equation, multiplied by suitable factors:

Now we eliminate x2 from the third equation by adding to its left and right sides the left and right sides of the second equation, multiplied by a suitable factor:

This completes the forward pass of the Gauss method; we begin the backward pass.

From the last equation of the resulting system of equations we find x 3:

From the second equation we find x2.

From the first equation we find the remaining unknown, completing the backward pass of the Gauss method.

Answer:

x1 = 4, x2 = 0, x3 = -1.

Solving systems of linear algebraic equations of general form.

In general, the number of equations of the system p does not coincide with the number of unknown variables n:

Such SLAEs may have no solutions, a unique solution, or infinitely many solutions. This statement also applies to systems of equations whose main matrix is square and singular.

Kronecker–Capelli theorem.

Before finding a solution of a system of linear equations, it is necessary to establish whether it is consistent. The answer to the question of when an SLAE is consistent and when it is inconsistent is given by the Kronecker–Capelli theorem:
for a system of p equations with n unknowns (p may equal n) to be consistent, it is necessary and sufficient that the rank of the main matrix of the system be equal to the rank of the extended matrix, that is, Rank(A) = Rank(T).
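The criterion is straightforward to check numerically. Below is a pure-Python sketch (the function names are ours) that computes ranks by row reduction to echelon form and compares Rank(A) with the rank of the extended matrix:

```python
def rank(M, eps=1e-9):
    """Rank of a matrix via reduction to row echelon form."""
    M = [list(map(float, row)) for row in M]  # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a row at or below r with a nonzero entry in column c.
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def is_consistent(A, b):
    """Kronecker-Capelli check: Rank(A) == Rank(A | b)."""
    extended = [row + [bi] for row, bi in zip(A, b)]
    return rank(A) == rank(extended)
```

For instance, the system x1 + x2 = 1, x1 + x2 = 2 is inconsistent (Rank(A) = 1 but the extended matrix has rank 2), while x1 + x2 = 1, 2x1 + 2x2 = 2 is consistent.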

Let us consider, as an example, the application of the Kronecker–Capelli theorem to determine the compatibility of a system of linear equations.

Example.

Find out whether the system of linear equations has solutions.

Solution.

Let us use the method of bordering minors. There is a nonzero second-order minor. Let us examine the third-order minors bordering it:

Since all the bordering third-order minors are equal to zero, the rank of the main matrix equals two.

In turn, the rank of the extended matrix equals three, since it contains a nonzero third-order minor.

Thus, Rank(A) ≠ Rank(T), and therefore, by the Kronecker–Capelli theorem, the original system of linear equations is inconsistent.

Answer:

The system has no solutions.

So, we have learned to establish the inconsistency of a system using the Kronecker–Capelli theorem.

But how do we find a solution of an SLAE once its consistency has been established?

To do this, we need the concept of a basis minor of a matrix and a theorem about the rank of a matrix.

A nonzero minor of the highest order of a matrix A is called a basis minor.

From the definition of a basis minor it follows that its order equals the rank of the matrix. A nonzero matrix A may have several basis minors, but it always has at least one.

For example, consider the matrix .

All third-order minors of this matrix are equal to zero, since the elements of the third row of this matrix are the sum of the corresponding elements of the first and second rows.

The following second-order minors are basic, since they are non-zero

The minors that are equal to zero are not basis minors.

Matrix rank theorem.

If the rank of a matrix of size p by n equals r, then all rows (and columns) of the matrix that do not participate in forming the chosen basis minor are linearly expressed in terms of the rows (and columns, respectively) that form the basis minor.

What does the matrix rank theorem tell us?

If, by the Kronecker–Capelli theorem, we have established that the system is consistent, then we choose any basis minor of the main matrix of the system (its order equals r) and exclude from the system all equations that do not participate in forming the chosen basis minor. The SLAE obtained in this way is equivalent to the original one, since the discarded equations are redundant (by the matrix rank theorem, they are linear combinations of the remaining equations).

As a result, after discarding unnecessary equations of the system, two cases are possible.

    If the number of equations r in the resulting system equals the number of unknown variables, then the system is definite, and its unique solution can be found by Cramer's method, the matrix method, or the Gauss method.

    Example.

    Solve the system of linear algebraic equations.

    Solution.

    The rank of the main matrix of the system equals two, since there is a nonzero second-order minor. The rank of the extended matrix is also two, since its only third-order minor is zero

    and the second-order minor considered above is nonzero. By the Kronecker–Capelli theorem, the original system of linear equations is consistent, since Rank(A) = Rank(T) = 2.

    As the basis minor we take the nonzero second-order minor found above. It is formed by the coefficients of the first and second equations:

    The third equation of the system does not participate in the formation of the basis minor, so we exclude it from the system based on the theorem on the rank of the matrix:

    Thus we obtain an elementary system of linear algebraic equations. Let us solve it by Cramer's method:

    Answer:

    x1 = 1, x2 = 2.

    If the number of equations r in the resulting SLAE is less than the number of unknown variables n, then on the left sides of the equations we leave the terms that form the basis minor, and we transfer the remaining terms to the right sides of the equations of the system with the opposite sign.

    The unknown variables (r of them) remaining on the left sides of the equations are called the main unknowns.

    The unknown variables (there are n - r of them) on the right sides are called free.

    We now regard the free unknowns as taking arbitrary values, while the r main unknowns are expressed in terms of the free unknowns in a unique way. Their expressions can be found by solving the resulting SLAE by Cramer's method, the matrix method, or the Gauss method.

    Let us look at an example.

    Example.

    Solve the system of linear algebraic equations.

    Solution.

    Let us find the rank of the main matrix of the system by the method of bordering minors. We take a11 = 1 as a nonzero first-order minor. Let us search for a nonzero second-order minor bordering it:

    This is how we found a non-zero minor of the second order. Let's start searching for a non-zero bordering minor of the third order:

    Thus, the rank of the main matrix is three. The rank of the extended matrix is also three, that is, the system is consistent.

    We take the found non-zero minor of the third order as the basis one.

    For clarity, we show the elements that form the basis minor:

    We leave the terms involved in the basis minor on the left side of the system equations, and transfer the rest with opposite signs to the right sides:

    Let us assign the free unknowns x2 and x5 arbitrary values. The SLAE then takes the form

    Let us solve the resulting elementary system of linear algebraic equations using Cramer’s method:

    Hence, .

    In your answer, do not forget to indicate free unknown variables.

    Answer:

    where the free unknowns x2 and x5 are arbitrary numbers.

Let's summarize.

To solve a system of linear algebraic equations of general form, we first determine whether it is consistent using the Kronecker–Capelli theorem. If the rank of the main matrix does not equal the rank of the extended matrix, we conclude that the system is inconsistent.

If the rank of the main matrix equals the rank of the extended matrix, then we choose a basis minor and discard the equations of the system that do not participate in forming the chosen basis minor.

If the order of the basis minor is equal to the number of unknown variables, then the SLAE has a unique solution, which can be found by any method known to us.

If the order of the basis minor is less than the number of unknown variables, then on the left sides of the equations we keep the terms with the main unknowns, transfer the remaining terms to the right sides, and assign arbitrary values to the free unknowns. From the resulting system of linear equations we find the main unknowns by Cramer's method, the matrix method, or the Gauss method.

Gauss method for solving systems of linear algebraic equations of general form.

The Gauss method can be used to solve systems of linear algebraic equations of any kind without first testing them for consistency. The process of sequential elimination of unknowns makes it possible to conclude whether the SLAE is consistent or inconsistent, and if a solution exists, to find it.

From a computational point of view, the Gaussian method is preferable.

See its detailed description and analyzed examples in the article Gauss method for solving systems of linear algebraic equations of general form.

Writing a general solution to homogeneous and inhomogeneous linear algebraic systems using vectors of the fundamental system of solutions.

In this section we discuss consistent homogeneous and inhomogeneous systems of linear algebraic equations that have infinitely many solutions.

Let us first deal with homogeneous systems.

A fundamental system of solutions of a homogeneous system of p linear algebraic equations with n unknown variables is a collection of (n - r) linearly independent solutions of this system, where r is the order of a basis minor of the main matrix of the system.

If we denote the linearly independent solutions of a homogeneous SLAE as X(1), X(2), …, X(n-r) (each X(i) is a column matrix of size n by 1), then the general solution of this homogeneous system is represented as a linear combination of the vectors of the fundamental system of solutions with arbitrary constant coefficients C1, C2, …, C(n-r), that is, X = C1·X(1) + C2·X(2) + … + C(n-r)·X(n-r).

What does the term general solution of a homogeneous system of linear algebraic equations mean?

The meaning is simple: the formula specifies all possible solutions of the original SLAE. In other words, taking any set of values of the arbitrary constants C1, C2, …, C(n-r), we obtain by this formula one of the solutions of the original homogeneous SLAE, and every solution is obtained in this way.

Thus, once we find a fundamental system of solutions, we can describe all solutions of the homogeneous SLAE as X = C1·X(1) + C2·X(2) + … + C(n-r)·X(n-r).

Let us show the process of constructing a fundamental system of solutions to a homogeneous SLAE.

We select a basis minor of the original system of linear equations, exclude all other equations from the system, and transfer all terms containing free unknowns to the right-hand sides of the equations with opposite signs. Then we give the free unknowns the values 1, 0, 0, …, 0 and calculate the main unknowns by solving the resulting elementary system of linear equations in any way, for example by Cramer's method. This yields X(1), the first solution of the fundamental system. If we give the free unknowns the values 0, 1, 0, …, 0 and calculate the main unknowns, we obtain X(2), and so on. Finally, assigning the free unknowns the values 0, 0, …, 0, 1 and calculating the main unknowns, we obtain X(n-r). In this way a fundamental system of solutions of the homogeneous SLAE is constructed, and its general solution can be written as X = C1·X(1) + C2·X(2) + … + C(n-r)·X(n-r).

For inhomogeneous systems of linear algebraic equations, the general solution is represented as X = X* + C1·X(1) + C2·X(2) + … + C(n-r)·X(n-r), where C1·X(1) + … + C(n-r)·X(n-r) is the general solution of the corresponding homogeneous system and X* is a particular solution of the original inhomogeneous SLAE, obtained by giving the free unknowns the values 0, 0, …, 0 and calculating the values of the main unknowns.
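The construction just described can be sketched in pure Python for the homogeneous case (the names are ours). Instead of explicitly choosing a basis minor, we reduce the matrix to reduced row echelon form; the pivot columns then play the role of the main unknowns and the remaining columns the role of the free ones.

```python
def rref(A, eps=1e-9):
    """Reduced row echelon form; returns the reduced matrix and pivot columns."""
    M = [list(map(float, row)) for row in A]
    rows, cols = len(M), len(M[0])
    pivots = []
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(rows):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c]
                M[i] = [v - f * w for v, w in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

def fundamental_system(A):
    """Fundamental system of solutions of the homogeneous SLAE A x = 0."""
    M, pivots = rref(A)
    n = len(A[0])
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fc in free:
        # This free unknown takes the value 1, the other free unknowns 0.
        x = [0.0] * n
        x[fc] = 1.0
        for r, pc in enumerate(pivots):
            x[pc] = -M[r][fc]
        basis.append(x)
    return basis
```

For example, for the system x1 - x3 = 0, x2 - 2x3 = 0 (rank 2, three unknowns), the single free unknown x3 = 1 gives the fundamental system consisting of the one vector (1, 2, 1), and the general solution is C1·(1, 2, 1).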

Let's look at examples.

Example.

Find a fundamental system of solutions and the general solution of the homogeneous system of linear algebraic equations.

Solution.

The rank of the main matrix of a homogeneous system of linear equations is always equal to the rank of its extended matrix. Let us find the rank of the main matrix by the method of bordering minors. As a nonzero first-order minor we take the element a11 = 9 of the main matrix of the system. Let us find a nonzero bordering second-order minor:

A nonzero second-order minor has been found. Let us go through the third-order minors bordering it in search of a nonzero one:

All bordering third-order minors are equal to zero; therefore, the rank of both the main and the extended matrix equals two. Let us take the nonzero second-order minor found above as the basis minor. For clarity, we mark the elements of the system that form it:

The third equation of the original SLAE does not participate in forming the basis minor, so it can be excluded:

We keep the terms containing the main unknowns on the left sides of the equations and transfer the terms with the free unknowns to the right sides:

Let us construct a fundamental system of solutions of the original homogeneous system of linear equations. The fundamental system of solutions of this SLAE consists of two solutions, since the original SLAE contains four unknown variables and the order of its basis minor is two. To find X(1), we give the free unknowns the values x2 = 1, x4 = 0 and then find the main unknowns from the system of equations.

Walras' general equilibrium theory, the ideological basis of a centralized economy, has a number of undoubted advantages, namely the integrity and definiteness of its conclusions, which make it very attractive for economic analysis.

However, within the framework of this theory it is impossible to adequately describe a decentralized economy. We are talking about the coordination mechanism, the time aspect of economic processes, the nature of flows and agents.

The practice of "groping" for equilibrium in Walras's theory essentially implies that no one in the market can influence prices, that each agent has perfect knowledge of supply and demand, that the process of "groping" occurs instantly, and, finally, that executing transactions is absolutely impermissible until the "true prices" have been established by "groping", which amounts to centralized control over all flows. Thus this model, which involves very significant restrictions, is strongly reminiscent of the ideal image of the Soviet economy.

As the Polish economist Lange argued, “nothing is more important than understanding the laws of a decentralized economy. First of all, because it is the only reality with which we deal.”

The French economist Jean-Paul Fitoussi argues that between the state and the market there is something intermediate, by which he means the variety of forms of coordination of their relations and connections. These two-way connections are not limited to the transmission of an order, nor to direct contact between exchange participants within a specific contract. An order has meaning only to the extent that it is executed. This creates a certain asymmetry between the positions of the superior and the subordinate in favor of the latter: it is in the power of the subordinate whether to carry out the order. Of course, the boss can check the execution of orders and, as Stalin did in his time, punish the executor. But verification is itself an order, which reproduces the original asymmetry: each inspection is followed by an inspection of the inspection. Thus, the very foundation of the centralized economy already contains the origins of decentralization: operational and informational asymmetry, that is, heterogeneity.

According to Jacques Sapir, five such forms of heterogeneity can be distinguished.

1. Heterogeneity of products associated with unequal possibilities for their substitution. This is determined not only by the nature of the product, but also by the specific method of its inclusion in a particular technological or economic process.

2. Heterogeneity of economic agents, which is not limited to the differences between an employee, an entrepreneur, and a capitalist. Dominance means a situation in which other types of behavior or agents spontaneously organize around certain types of behavior or certain agents, i.e. a collective is formed. The transition from the individual to the collective level is carried out through cooperation within a group of organizations that act as economic agents. This, in turn, implies heterogeneity in the methods of interaction and coordination.

3. Heterogeneity of time. It can take two different and complementary forms. One arises because acts of consumption, saving, or production have different durations for different agents within a continuum; this is the problem of heterogeneity of "action time". The other form of time heterogeneity is associated with what we call the time frame within which each agent's decision remains valid; in this case we can speak of "time intervals".

4. Heterogeneity of enterprises as local production systems. Even if the products produced are identical, the behavior of a small enterprise differs significantly from the behavior of an enterprise with a large number of employees. In addition, there is a difference between the production of a simple and the production of a complex product, etc.

5. Heterogeneity of spaces in which economic actions take place. The unequal provision of different regions with factors of production, both material and human, naturally affects the relative price of these factors.

J. Sapir's typology of heterogeneities would be incomplete without two more:

6. Heterogeneity of the information space, due to the geographical, historical, and cultural characteristics of the economic space.

7. Political heterogeneity of regions and countries, which affects the security of investments and the accessibility of information sources and thereby significantly influences their investment attractiveness. The example of China's economic development illustrates this point very clearly.


The term "system" is used in various sciences, and accordingly different definitions of a system are used in different situations, from philosophical to formal. For the purposes of this course, the following definition is best suited: a system is a set of elements united by connections and functioning together to achieve a goal.

Systems are characterized by a number of properties, the main of which are divided into three groups: static, dynamic and synthetic.

1.1 Static properties of systems

Static properties are the features of a certain state of the system. This is what the system has at any given point in time.

Integrity. Every system appears as something unified, whole, and separate, different from everything else. This property is called the integrity of the system. It allows the whole world to be divided into two parts: the system and the environment.

Openness. The system, though distinguished from everything else, is not isolated from its environment. On the contrary, the two are connected and exchange various kinds of resources (matter, energy, information, etc.). This feature is designated by the term "openness".

The connections between the system and the environment are directed: through some, the environment influences the system (the system's inputs); through others, the system influences the environment, does something in the environment, and outputs something to it (the system's outputs). A description of a system's inputs and outputs is called a black-box model. Such a model contains no information about the internal features of the system. Despite its apparent simplicity, such a model is often quite sufficient for working with the system.

In many cases, when managing equipment or people, information only about the inputs and outputs of the system allows you to successfully achieve the goal. However, for this, the model must meet certain requirements. For example, the user may experience difficulties if he does not know that on some TV models the power button must be pulled out rather than pressed. Therefore, for successful management, the model must contain all the information necessary to achieve the goal. When trying to satisfy this requirement, four types of errors can occur, which stem from the fact that the model always contains a finite number of connections, whereas a real system has an unlimited number of connections.

An error of the first type occurs when a subject mistakenly views a relationship as significant and decides to include it in the model. This leads to the appearance of extra, unnecessary elements in the model. An error of the second type, on the contrary, is made when a decision is made to exclude a supposedly insignificant connection from the model, without which, in fact, achieving the goal is difficult or even impossible.

The answer to the question of which error is worse depends on the context in which it is asked. It is clear that using a model containing an error inevitably leads to losses. Losses can be small, acceptable, intolerable or unacceptable. The damage caused by a type I error is due to the fact that the information it contains is superfluous. When working with such a model, you will have to spend resources on recording and processing unnecessary information, for example, wasting computer memory and processing time on it. This may not affect the quality of the solution, but it will certainly affect the cost and timeliness. Losses from an error of the second type are damage from the fact that there is not enough information to fully achieve the goal; the goal cannot be fully achieved.

Now it is clear that the worse mistake is the one from which the losses are greater, and this depends on specific circumstances. For example, if time is a critical factor, then an error of the first type becomes much more dangerous than an error of the second type: a decision made on time, even if not the best, is preferable to an optimal, but late one.

An error of the third kind is a consequence of ignorance. To assess the significance of a certain connection, one needs to know that it exists at all. If this is not known, the question of including the connection in the model does not even arise. If such a connection is insignificant, then in practice its presence in reality and absence in the model will go unnoticed. If the connection is significant, difficulties arise similar to those with an error of the second kind. The difference is that an error of the third kind is more difficult to correct: doing so requires acquiring new knowledge.

An error of the fourth kind occurs when a known essential connection is erroneously attributed to the number of inputs or outputs of the system. For example, it is well established that in 19th-century England the health of men wearing top hats was significantly superior to that of men wearing caps. It hardly follows from this that the type of headdress can be considered as an input for a system for predicting health status.

Internal heterogeneity of systems, distinguishability of parts. If you look inside the “black box”, it turns out that the system is not monolithic but heterogeneous: its different parts differ in their qualities. The description of the internal heterogeneity of the system comes down to isolating relatively homogeneous areas and drawing boundaries between them. This is how the concept of the parts of the system appears. Upon closer examination, it turns out that the identified large parts are also heterogeneous, which requires identifying even smaller parts. The result is a hierarchical description of the parts of the system, which is called a composition model.

Information about the composition of the system can be used to work with the system. The goals of interaction with the system may differ, and therefore the composition models of the same system may also differ. At first glance, it is not difficult to distinguish the parts of a system; they “catch the eye.” In some systems, parts arise spontaneously, in the process of natural growth and development (organisms, societies, etc.). Artificial systems are deliberately assembled from previously known parts (mechanisms, buildings, etc.). There are also mixed types of systems, such as nature reserves and agricultural systems. At the same time, from the point of view of the rector, the student, the accountant, and the business manager, a university consists of different parts, and an airplane consists of different parts from the point of view of the pilot, the flight attendant, and a passenger. The difficulties of creating a composition model can be grouped under three headings.

First, the whole can be divided into parts in different ways. In this case, the method of division is determined by the goal. For example, the composition of a car is presented differently to novice car enthusiasts, future professional drivers, mechanics preparing to work in a car service center, and salespeople in car dealerships. It is natural to ask whether parts of the system “really” exist? The answer is contained in the formulation of the property in question: we are talking about distinguishability, and not about the separability of parts. You can distinguish between the parts of the system needed to achieve the goal, but you cannot separate them.

Secondly, the number of parts in the composition model also depends on the level at which the fragmentation of the system is stopped. The parts on the terminal branches of the resulting hierarchical tree are called elements. In different circumstances, decomposition is terminated at different levels. For example, when describing upcoming work, it is necessary to give an experienced worker and a novice instructions of varying degrees of detail. Thus, the composition model depends on what is considered elemental. There are cases when an element has a natural, absolute character (cell, individual, phoneme, electron).

Thirdly, any system is part of a larger system, and sometimes several systems at once. Such a metasystem can also be divided into subsystems in different ways. This means that the external boundary of the system is relative, conditional. The boundaries of the system are determined taking into account the goals of the subject who will use the system model.

Structure. The property of structuredness is that the parts of the system are not isolated, not independent of each other; they are interconnected and interact with each other. Moreover, the properties of the system significantly depend on how exactly its parts interact. Therefore, information about the connections of system elements is so important. The list of essential connections between system elements is called a system structure model. The endowment of any system with a certain structure is called structuring.

The concept of structure further deepens the idea of the integrity of the system: the connections, as it were, bind the parts and hold them together as a whole. Integrity, noted earlier as an external property, thus receives a supporting explanation from within the system, through its structure.

When constructing a structure model, certain difficulties are also encountered. The first of these is due to the fact that the structure model is determined after the composition model is selected, and depends on what exactly the composition of the system is. But even with a fixed composition, the structure model is variable. This is due to the possibility of defining the significance of connections in different ways. For example, a modern manager is recommended, along with the formal structure of his organization, to take into account the existence of informal relationships between employees, which also affect the functioning of the organization. The second difficulty stems from the fact that each element of the system, in turn, is a “little black box”. So all four types of errors are possible when defining the inputs and outputs of each element included in the structure model.

1.2 DYNAMIC PROPERTIES OF SYSTEMS

If we consider the state of the system at a new point in time, we can again detect all four static properties. But if you superimpose “photographs” of the system at different points in time on top of each other, you will find that they differ in detail: during the time between the two moments of observation, some changes occurred in the system and its environment. Such changes may be important when working with the system, and, therefore, must be reflected in system descriptions and taken into account when working with it. The features of changes over time inside and outside the system are called the dynamic properties of the system. Typically, four dynamic properties of a system are distinguished.

Functionality. Processes Y(t) occurring at the outputs of the system are considered as its functions. The functions of a system are its behavior in the external environment, the results of its activities, and the products produced by the system.

From the multiplicity of outputs follows a multiplicity of functions, each of which can be used by someone and for something. Therefore, the same system can serve different purposes. A subject using a system for his own purposes will naturally evaluate its functions and organize them in relation to his needs. This is how the concepts of main, secondary, neutral, undesirable, superfluous function, etc. appear.

Stimulability. Certain processes also occur at the system inputs X(t), affecting the system and turning after a series of transformations in the system into Y(t). Impacts X(t) are called stimuli, and the very susceptibility of any system to external influences and the change in its behavior under these influences is called stimulability.

Variability of the system over time. In any system, changes occur that must be taken into account. In terms of the system model, we can say that the values of the internal variables (parameters) Z(t), the composition and structure of the system, or any combination thereof can change. The nature of these changes may also differ, so further classifications of changes can be considered.
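The notation X(t), Z(t), Y(t) used above can be illustrated with a minimal discrete-time sketch (the averaging system below is our own toy example, not one from the text; all names are illustrative):

```python
def simulate(step, z0, xs):
    """Run a discrete-time system: given stimuli X(t) in xs and initial
    internal state z0, return the responses Y(t) produced at the output."""
    z, ys = z0, []
    for x in xs:
        z, y = step(z, x)   # the system transforms stimulus into response
        ys.append(y)
    return ys

def averager(z, x):
    """Toy system: internal parameters Z(t) = (running total, count);
    the output Y(t) is the running average of the stimuli so far."""
    total, count = z
    total, count = total + x, count + 1
    return (total, count), total / count

print(simulate(averager, (0, 0), [2, 4, 6]))  # [2.0, 3.0, 4.0]
```

The stimuli X(t) enter as `xs`, the internal parameters Z(t) change at each step (functioning, in the terminology above), and Y(t) appears at the output.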

The most obvious classification is by the speed of change (slow, fast). The speed of change is measured relative to some speed taken as a standard, and a large number of gradations of speed can be introduced. It is also possible to classify the trends of changes in the system with respect to its structure and composition.

We can talk about changes that do not affect the structure of the system: some elements are replaced by others that are equivalent; parameters Z(t) can change without changing the structure. This type of system dynamics is called its functioning. Changes can be quantitative in nature: the composition of the system increases, and although its structure automatically changes, this does not affect the properties of the system until a certain point (for example, the expansion of a landfill). Such changes are called system growth. With qualitative changes in the system, its essential properties change. If such changes go in a positive direction, they are called development. With the same resources, a developed system achieves better results, and new positive qualities (functions) may appear. This is due to an increase in the level of consistency and organization of the system.

Growth occurs mainly through the consumption of material resources; development, through the assimilation and use of information. Growth and development can occur simultaneously, but they are not necessarily connected. Growth is always limited (because material resources are limited), whereas development is not limited from outside, since information about the external environment is inexhaustible. Development is the result of learning, but learning cannot be carried out in place of the learner; hence there is an internal limitation on development. If the system “does not want” to learn, it cannot and will not develop.

In addition to the processes of growth and development, reverse processes can also occur in the system. Changes opposite to growth are called decline, contraction, decrease. A change that is opposite to development is called degradation, loss or weakening of beneficial properties.

The changes considered so far are monotonic, that is, directed “in one direction.” Obviously, monotonic changes cannot last forever. In the history of any system, one can distinguish periods of decline and rise, stability and instability, whose sequence forms the individual life cycle of the system.

Other classifications of the processes occurring in a system can also be used: by predictability, processes are divided into random and deterministic; by the type of time dependence, processes are divided into monotonic, periodic, harmonic, pulsed, etc.

Existence in a changing environment. Not only the system under consideration changes; so do all other systems. For the system under consideration, this looks like a continuous change of the environment. This circumstance has many consequences for the system itself, which must adapt to new conditions in order not to perish. When considering a specific system, attention is usually paid to the characteristics of the system’s particular reaction, for example, the reaction rate. If we consider systems that store information (books, magnetic media), the speed of their response to changes in the external environment should be minimal, to ensure the preservation of the information. The response speed of a control system, on the other hand, must be many times greater than the rate of change of the environment, since the system must select a control action before the state of the environment changes irreversibly.

1.3 SYNTHETIC PROPERTIES OF SYSTEMS

Synthetic properties include generalizing, integral, collective properties that describe the interaction of the system with the environment and take into account integrity in the most general sense.

Emergence. The combination of elements into a system leads to the emergence of qualitatively new properties that are not derivable from the properties of the parts, are inherent only in the system itself, and exist only as long as the system remains a whole. Such qualities of a system are called emergent (from the English “to emerge”, that is, “to arise”).

Examples of emergent properties can be found in various fields. For example, none of the parts of the plane can fly, but the plane, nevertheless, flies. The properties of water, many of which are not fully understood, do not follow from the properties of hydrogen and oxygen.

Let there be two black boxes, each of which has one input, one output, and performs one operation: adding one to the number at its input. When such elements are connected according to the diagram shown in the figure, we obtain a system with no inputs but two outputs. At each cycle of operation the system will produce ever larger numbers, while only odd numbers will appear at one output and only even numbers at the other.

Fig.1.1. Connection of system elements: a) system with two outputs; b) parallel connection of elements
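The even/odd behaviour of this two-element ring is easy to reproduce in a short simulation (a sketch, assuming the two elements fire alternately within each cycle; all names are illustrative):

```python
def plus_one(x):
    """A "black box" element with one input and one output: adds one."""
    return x + 1

def ring_system(cycles):
    """Two plus-one elements connected in a ring (cf. Fig. 1.1a).
    The resulting system has no external inputs and two outputs;
    one output yields only odd numbers, the other only even ones."""
    signal = 0                     # value circulating in the ring
    out1, out2 = [], []
    for _ in range(cycles):
        signal = plus_one(signal)  # output of the first element
        out1.append(signal)
        signal = plus_one(signal)  # output of the second element
        out2.append(signal)
    return out1, out2

print(ring_system(4))  # ([1, 3, 5, 7], [2, 4, 6, 8])
```

Neither element alone can produce an unbounded increasing sequence, yet the ring does: a small illustration of emergence.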

The emergent properties of a system are determined by its structure. This means that different combinations of elements give rise to different emergent properties. For example, if two identical elements are connected in parallel, the new system does not differ functionally from a single element; here emergence manifests itself as an increase in the reliability of the system, that is, through redundancy.

It is worth noting an important case when the elements of the system possess all its properties. This situation is typical for the fractal construction of a system. At the same time, the principles of structuring the parts are the same as those of the system as a whole. An example of a fractal system is an organization in which management is structured identically at all levels of the hierarchy.

Inseparability into parts. This property is, in fact, a consequence of emergence. It is especially emphasized because its practical importance is great, and underestimation is very common.

When a part is removed from the system, two important events occur. Firstly, this changes the composition of the system, and therefore its structure. This will be a different system with different properties. Secondly, an element removed from the system will behave differently due to the fact that its environment will change. All of this is to say that caution should be used when considering an element in isolation from the rest of the system.

Inherence. The more inherent a system is (from the English inherent, “being an integral part of something”), the better it is coordinated with, adapted to, and compatible with its environment. The degree of inherence varies and can change. The expediency of considering inherence as one of the properties of a system is due to the fact that the degree and quality of the system’s performance of its chosen function depend on it. In natural systems, inherence increases through natural selection. In artificial systems, inherence must be a special concern of the designer.

In some cases, inherence is ensured with the help of intermediate, intermediary systems. Examples include adapters for using foreign electrical appliances in conjunction with Soviet-style sockets; middleware (such as the COM service in Windows) that allows two programs from different manufacturers to communicate with each other.

Expediency. In systems created by man, the subordination of both structure and composition to achieving the set goal is so obvious that it can be recognized as a fundamental property of any artificial system. This property is called expediency. The goal for which the system is created determines what emergent property will ensure the achievement of the goal, and this, in turn, dictates the choice of structure and composition of the system. In order to extend the concept of expediency to natural systems, it is necessary to clarify the concept of purpose. The clarification is carried out using an artificial system as an example.

The history of any artificial system begins at some moment of time t = 0, when the existing value of the state vector Y_0 turns out to be unsatisfactory, that is, a problematic situation arises. The subject is dissatisfied with this state and would like to change it. Suppose he would be satisfied by the value Y* of the state vector. This is the first definition of the goal. It is then discovered that Y* does not exist now and, for a number of reasons, cannot be achieved in the near future. The second step in defining the goal is to recognize it as a desired future state. It immediately becomes clear that the future is not bounded. The third step in clarifying the concept of the goal is to estimate the time T* by which the desired state Y* can be achieved under the given conditions. The goal now becomes two-dimensional: it is a point (T*, Y*) on the graph. The task is to move from the point (0, Y_0) to the point (T*, Y*). But it turns out that this path can be traversed along different trajectories, only one of which can be realized. Let the choice fall on the trajectory Y*(t). Thus, the goal now means not only the final state (T*, Y*) but also the entire trajectory Y*(t) (“intermediate goals”, “the plan”). So, the goal is the desired future states Y*(t).

After time T*, state Y* becomes real. Therefore, it becomes possible to define the goal as a future real state. This makes it possible to say that natural systems also have the property of expediency, which allows us to approach the description of systems of any nature from a unified position. The main difference between natural and artificial systems is that natural systems, obeying the laws of nature, realize objective goals, and artificial systems are created to realize subjective goals.

The most common feature of any heterogeneous system is the presence of two (or more) phases separated from each other by a pronounced interface. This feature distinguishes heterogeneous systems from solutions, which also consist of several components but form a homogeneous mixture. The continuous phase is called the dispersion medium, and the finely divided phase distributed within it is called the dispersed phase. Depending on the type of dispersion medium, liquid and gas heterogeneous mixtures are distinguished. Table 5.1 shows the classification of inhomogeneous systems according to the type of dispersion medium and dispersed phase.

Table 5.1

Classification of heterogeneous systems

Classification and characteristics of heterogeneous systems

A heterogeneous system is a system that consists of two or more phases. Each phase has its own interface and can be mechanically separated from the others.

A heterogeneous system consists of an internal (dispersed) phase and an external phase (the dispersion medium) in which the particles of the dispersed phase are distributed. Systems whose external phase is a liquid are called inhomogeneous liquid systems; those whose external phase is a gas are called inhomogeneous gas systems. Inhomogeneous systems are called heterogeneous, and uniform ones homogeneous. A homogeneous liquid system is a pure liquid or a solution of some substances in it. A heterogeneous, or inhomogeneous, liquid system is a liquid containing undissolved substances in the form of tiny particles. Heterogeneous systems are often called dispersed systems.

The following types of inhomogeneous systems are distinguished: suspensions, emulsions, foams, dusts, smokes, and fogs.

Suspension is a system consisting of a continuous liquid phase in which solid particles are suspended. For example, sauces with flour, starch milk, molasses with sugar crystals.

Depending on the particle size, suspensions are divided into coarse (particle size more than 100 microns), fine (0.1-100 microns) and colloidal solutions containing solid particles 0.1 microns in size or less.
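The size thresholds above can be expressed as a small classification routine (a sketch; the function name and string labels are ours, not the text's):

```python
def classify_suspension(particle_size_um):
    """Classify a suspension by the size of its solid particles,
    in micrometres, using the thresholds given in the text:
    coarse (> 100 um), fine (0.1-100 um), colloidal (<= 0.1 um)."""
    if particle_size_um > 100:
        return "coarse"
    if particle_size_um > 0.1:
        return "fine"
    return "colloidal solution"

print(classify_suspension(250))   # coarse
print(classify_suspension(12))    # fine
print(classify_suspension(0.05))  # colloidal solution
```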

Emulsion is a system consisting of a liquid and drops of another liquid distributed in it that have not dissolved in the first. This is, for example, milk, a mixture of vegetable oil and water. There are gas emulsions in which the dispersion medium is liquid and the dispersed phase is gas.

Foam is a system consisting of a liquid and gas bubbles distributed in it. For example, creams and other whipped products. Foams are similar in properties to emulsions.

Emulsions and foams are characterized by the possibility of transition of the dispersed phase into a dispersion medium and vice versa. This transition, possible at a certain mass ratio of phases, is called phase inversion or simply inversion.

An aerosol is a dispersed system with a gaseous dispersion medium and a solid or liquid dispersed phase, consisting of particles ranging from quasi-molecular to microscopic size that have the property of remaining suspended for a more or less long time. This concept combines dust, smoke, and fog. Examples are the flour dust formed during grain grinding, sifting, and transportation of flour, and the sugar dust formed during the drying of sugar. Smoke is formed when solid fuel is burned; fog is formed when steam condenses.

In aerosols, the dispersion medium is gas or air, and the dispersed phase in dust and smoke is solids, and in fogs it is liquid.

Dust and smoke are systems consisting of a gas and solid particles distributed in it, with particle sizes of 5-50 microns and 0.3-5 microns, respectively. Fog is a system consisting of a gas and liquid droplets 0.3-3 microns in size distributed in it, formed as a result of condensation.

A qualitative indicator characterizing the uniformity of aerosol particles in size is the degree of dispersion. An aerosol is called monodisperse when its constituent particles are of the same size, and polydisperse when it contains particles of different sizes. Monodisperse aerosols practically do not exist in nature. There are only a few aerosols whose particle sizes only approach monodisperse systems (fungal hyphae, specially produced mists, etc.).

Depending on the number of dispersed phases, disperse (heterogeneous) systems can be single-component or multicomponent. For example, milk is a multicomponent system (it has two dispersed phases: fat and protein), as are sauces (whose dispersed phases are flour, fat, etc.).

Methods for separating heterogeneous systems are classified according to the size of the suspended particles of the dispersed phase, the difference in density between the dispersed and continuous phases, and the viscosity of the continuous phase. The following main separation methods are used: sedimentation, filtration, centrifugation, wet separation, and electrical purification.

Sedimentation is a separation process in which solid or liquid particles of the dispersed phase, suspended in a liquid or gas, are separated from the continuous phase under the action of gravity, centrifugal, or electrostatic forces. Sedimentation under the action of gravity is called settling.

Filtration is a separation process that uses a porous partition capable of passing liquid or gas while retaining the solid particles suspended in the medium. Filtration is carried out under the action of a pressure difference and is used for finer separation of suspensions and dusts than sedimentation provides.

Centrifugation is the process of separating suspensions and emulsions under the action of centrifugal force.

Wet separation is the process of trapping particles suspended in a gas by means of a liquid.

Electrical purification is the cleaning of gases under the action of electrical forces.

Methods for separating inhomogeneous liquid systems and inhomogeneous gas systems are based on the same principles, but the equipment used has a number of distinctive features.


§6. Inhomogeneous system of linear equations

    If in the system of linear equations (7.1) at least one of the free terms b_i is different from zero, then the system is called inhomogeneous.

    Let an inhomogeneous system of linear equations be given, which can be written as

    a_i1 x_1 + a_i2 x_2 + … + a_in x_n = b_i,    i = 1, 2, …, k.    (7.13)

    Consider the corresponding homogeneous system

    a_i1 x_1 + a_i2 x_2 + … + a_in x_n = 0,    i = 1, 2, …, k.    (7.14)

    Let the vector x* = (x*_1, x*_2, …, x*_n) be a solution to the inhomogeneous system (7.13), and the vector x^0 = (x^0_1, x^0_2, …, x^0_n) a solution to the homogeneous system (7.14). Then it is easy to see that the vector x* + x^0 is also a solution to the inhomogeneous system (7.13). Indeed,

    a_i1(x*_1 + x^0_1) + … + a_in(x*_n + x^0_n) = (a_i1 x*_1 + … + a_in x*_n) + (a_i1 x^0_1 + … + a_in x^0_n) = b_i + 0 = b_i.

    Now, using formula (7.12) for the general solution of the homogeneous system, we obtain

    x = x* + c_1 e_1 + c_2 e_2 + … + c_(n–r) e_(n–r),    (7.15)

    where c_1, …, c_(n–r) are any numbers from R, and e_1, …, e_(n–r) are the fundamental solutions of the homogeneous system.

    Thus, the general solution of an inhomogeneous system is the sum of a particular solution of it and the general solution of the corresponding homogeneous system.

    Solution (7.15) is called the general solution of the inhomogeneous system of linear equations. From (7.15) it follows that a consistent inhomogeneous system of linear equations has a unique solution if the rank r(A) of the main matrix A equals the number n of unknowns (a Cramer system); if r(A) < n, the system has an infinite number of solutions, and this set of solutions is a translate of the solution subspace of the corresponding homogeneous system, which has dimension n – r.

    Examples.

    1. Let an inhomogeneous system of equations be given in which the number of equations is k = 3 and the number of unknowns is n = 4:

    x1 – x2 + x3 – 2x4 = 1,

    x1 – x2 + 2x3 – x4 = 2,

    5x1 – 5x2 + 8x3 – 7x4 = 3.

    Let us determine the ranks of the main matrix A and the extended matrix A* of this system. Since A and A* are nonzero matrices and k = 3 < n, we have 1 ≤ r(A), r(A*) ≤ 3. Consider the second-order minors of the matrices A and A*: for example, the minor formed by rows 1, 2 and columns 2, 3 is

    | –1  1 |
    | –1  2 | = (–1)(2) – (1)(–1) = –1 ≠ 0.

    Thus, among the second-order minors of the matrices A and A* there is a nonzero minor, so 2 ≤ r(A), r(A*) ≤ 3. Now consider the third-order minors of A. The minors composed of columns 1, 2, 3 and of columns 1, 2, 4 are zero, since the first and second columns are proportional; direct computation shows that the minors composed of columns 1, 3, 4 and of columns 2, 3, 4 also vanish.

    So all the third-order minors of the main matrix A are equal to zero, and therefore r(A) = 2. The extended matrix A* also has third-order minors that involve the column of free terms; for example,

    | 1  –2  1 |
    | 2  –1  2 |
    | 8  –7  3 | = –15 ≠ 0.

    Therefore, among the third-order minors of the extended matrix A* there is a nonzero one, so r(A*) = 3. This means that r(A) ≠ r(A*), and, by the Kronecker–Capelli theorem, we conclude that this system is inconsistent.
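The rank computations above can be double-checked numerically. The sketch below is a minimal pure-Python rank routine using exact rational arithmetic (the function name `rank` is ours, not the text's); it confirms that r(A) = 2 while r(A*) = 3:

```python
from fractions import Fraction

def rank(mat):
    """Rank of a matrix, computed by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    n_rows, n_cols = len(m), len(m[0])
    r = 0
    for c in range(n_cols):
        pivot = next((i for i in range(r, n_rows) if m[i][c] != 0), None)
        if pivot is None:
            continue                # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(n_rows):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, -1, 1, -2],
     [1, -1, 2, -1],
     [5, -5, 8, -7]]
A_star = [row + [b] for row, b in zip(A, [1, 2, 3])]  # extended matrix

print(rank(A), rank(A_star))  # 2 3  -> ranks differ, system inconsistent
```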

    2. Solve the system of equations

    3x1 + 2x2 + x3 + x4 = 1,

    3x1 + 2x2 – x3 – 2x4 = 2.

    For this system, A and A* are nonzero matrices, and therefore 1 ≤ r(A), r(A*) ≤ 2. Consider the second-order minors of A and A*. The minor composed of the coefficients of x3 and x4 is

    |  1   1 |
    | –1  –2 | = (1)(–2) – (1)(–1) = –1 ≠ 0.

    Thus, r(A) = r(A*) = 2, and, therefore, the system is consistent. As basic variables we choose any two variables for which the second-order minor composed of the coefficients of these variables is nonzero; such variables are, for example, x3 and x4, by the minor just computed. Then we have

    x3 + x4 = 1 – 3x1 – 2x2,

    –x3 – 2x4 = 2 – 3x1 – 2x2.

    Let us find a particular solution of the inhomogeneous system. To do this, set x1 = x2 = 0:

    x3 + x4 = 1,

    –x3 – 2x4 = 2.

    The solution of this system is x3 = 4, x4 = –3; therefore, x* = (0, 0, 4, –3).

    Now we determine the general solution of the corresponding homogeneous system

    x3 + x4 = –3x1 – 2x2,

    –x3 – 2x4 = –3x1 – 2x2.

    Set x1 = 1, x2 = 0:

    x3 + x4 = –3,

    –x3 – 2x4 = –3.

    The solution of this system is x3 = –9, x4 = 6. Thus e1 = (1, 0, –9, 6).

    Now set x1 = 0, x2 = 1:

    x3 + x4 = –2,

    –x3 – 2x4 = –2.

    The solution is x3 = –6, x4 = 4, and then e2 = (0, 1, –6, 4).

    Having determined the particular solution x* = (0, 0, 4, –3) of the inhomogeneous system and the fundamental solutions e1 = (1, 0, –9, 6) and e2 = (0, 1, –6, 4) of the corresponding homogeneous system, we write down the general solution of the inhomogeneous system:

    x = (0, 0, 4, –3) + c1(1, 0, –9, 6) + c2(0, 1, –6, 4),

    where c1 and c2 are any numbers from R.
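As a check, the general solution obtained above can be substituted back into the system. The sketch below (the helper names `residual` and `general_solution` are illustrative) verifies that every choice of c1 and c2 yields a solution:

```python
def residual(x):
    """Left minus right sides of the system
    3x1 + 2x2 + x3 + x4 = 1,  3x1 + 2x2 - x3 - 2x4 = 2."""
    x1, x2, x3, x4 = x
    return (3*x1 + 2*x2 + x3 + x4 - 1,
            3*x1 + 2*x2 - x3 - 2*x4 - 2)

def general_solution(c1, c2):
    """Particular solution plus a linear combination of the
    fundamental solutions of the homogeneous system."""
    part = (0, 0, 4, -3)
    e1 = (1, 0, -9, 6)
    e2 = (0, 1, -6, 4)
    return tuple(p + c1*u + c2*v for p, u, v in zip(part, e1, e2))

for c1, c2 in [(0, 0), (1, 0), (0, 1), (2, -3)]:
    assert residual(general_solution(c1, c2)) == (0, 0)
print("all residuals are zero")
```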