Using the Lagrange method of undetermined multipliers. Solving a system of nonlinear equations in two unknowns using the Find Solution tool

Article topic: Lagrange method.
Rubric (thematic category): Mathematics.

Finding the polynomial means determining the values of its coefficients. To do this, the interpolation conditions can be used to form a system of linear algebraic equations (SLAE).

The determinant of this SLAE is called the Vandermonde determinant. The Vandermonde determinant is nonzero when x_i ≠ x_j for i ≠ j, that is, when the lookup table contains no repeated nodes. Hence the SLAE has a solution, and this solution is unique. Having solved the SLAE and determined the unknown coefficients, one can construct the interpolation polynomial.
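As an illustration, a minimal sketch of forming and solving this SLAE numerically with NumPy; the interpolation table used here is assumed purely for the example:

```python
import numpy as np

# Assumed interpolation table: nodes x_i and values y_i
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

# Vandermonde matrix: row i is [1, x_i, x_i^2, ..., x_i^n]
V = np.vander(x, increasing=True)

# Solve V a = y for the polynomial coefficients a_0, ..., a_n
a = np.linalg.solve(V, y)

# Evaluate the resulting interpolation polynomial at an arbitrary point
p = np.polynomial.Polynomial(a)
print(a, p(1.5))
```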

When interpolating by the Lagrange method, the polynomial satisfying the interpolation conditions is constructed as a linear combination of polynomials of degree n:

L_n(x) = ∑_{i=0}^{n} y_i·l_i(x).

The polynomials l_i(x) are usually called the basis polynomials. For the Lagrange polynomial to satisfy the interpolation conditions, the following conditions must hold for its basis polynomials:

l_i(x_i) = 1 and l_i(x_j) = 0 for j ≠ i, i, j = 0, 1, …, n.

If these conditions are met, then for any j = 0, 1, …, n we have

L_n(x_j) = ∑_{i=0}^{n} y_i·l_i(x_j) = y_j.

Thus, fulfilment of these conditions by the basis polynomials means that the interpolation conditions are also satisfied.

Let us determine the form of the basis polynomials from the restrictions imposed on them.

1st condition: l_i(x_j) = 0 for j ≠ i. Hence the nodes x_j (j ≠ i) are the n roots of l_i(x), so l_i(x) = C_i·(x − x_0)·…·(x − x_{i−1})·(x − x_{i+1})·…·(x − x_n).

2nd condition: l_i(x_i) = 1, which fixes the constant: C_i = 1/[(x_i − x_0)·…·(x_i − x_{i−1})·(x_i − x_{i+1})·…·(x_i − x_n)].

Finally, for the basis polynomial we can write:

l_i(x) = ∏_{j≠i} (x − x_j)/(x_i − x_j).

Then, substituting the resulting expression for the basis polynomials into the original polynomial, we obtain the final form of the Lagrange polynomial:

L_n(x) = ∑_{i=0}^{n} y_i·∏_{j≠i} (x − x_j)/(x_i − x_j).

A particular case of the Lagrange polynomial at n = 1 is usually called the linear interpolation formula:

L_1(x) = y_0·(x − x_1)/(x_0 − x_1) + y_1·(x − x_0)/(x_1 − x_0).

The Lagrange polynomial at n = 2 is usually called the quadratic interpolation formula:

L_2(x) = y_0·(x − x_1)(x − x_2)/[(x_0 − x_1)(x_0 − x_2)] + y_1·(x − x_0)(x − x_2)/[(x_1 − x_0)(x_1 − x_2)] + y_2·(x − x_0)(x − x_1)/[(x_2 − x_0)(x_2 − x_1)].
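A minimal sketch of evaluating the Lagrange form directly, without solving the SLAE (the nodes and values are again assumed only for illustration):

```python
def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolation polynomial at the point x."""
    n = len(x_nodes)
    total = 0.0
    for i in range(n):
        # basis polynomial l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        li = 1.0
        for j in range(n):
            if j != i:
                li *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        total += y_nodes[i] * li
    return total

# Linear interpolation (n = 1) between two points as a quick check
print(lagrange_eval([0.0, 2.0], [1.0, 5.0], 1.0))  # expected 3.0
```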



Solution scheme for the transport problem

    Let us list the main stages of solving the transport problem.

1. Check the balance (closedness) condition. If the problem is open, the transport table is supplemented with either a column for a fictitious consumer or a row for a fictitious supplier. 2. Construct an initial reference plan.

3. Check the reference plan for non-degeneracy. If there are not enough occupied cells to satisfy the non-degeneracy condition, one of the cells of the transport table is filled with a supply equal to zero. If necessary, zero supplies may be recorded in several cells. 4. Check the plan for optimality. 5. If the optimality conditions are not met, move on to the next plan by redistributing supplies.

The computation is repeated until the optimal plan is obtained (a numerical cross-check is sketched below).
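For a quick numerical cross-check of a transport problem (using linear programming via SciPy rather than the potential method described above), a minimal sketch with assumed supply, demand and cost data:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed balanced data: 2 suppliers, 3 consumers
supply = [30, 70]
demand = [40, 20, 40]
cost = np.array([[2.0, 3.0, 1.0],
                 [5.0, 4.0, 8.0]])

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                        # each supplier ships exactly its supply
    row = np.zeros(m * n); row[i*n:(i+1)*n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                        # each consumer receives exactly its demand
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(m, n), res.fun)       # optimal shipment plan and its total cost
```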

Control questions for section 4

1. What is the meaning of the objective function in the mathematical model of the transport problem?

7. In what case, and how, should supplies be redistributed in the transportation plan?

    8. Suppose the constructed transportation plan is degenerate. Is it possible to continue solving the problem using the potential method and what needs to be done for this?

Classification of mathematical programming problems

The general mathematical programming problem was formulated in Section 1.1. Depending on the type of functions included in the model (1.1)-(1.3), the problem is classified as one or another type of mathematical programming: linear programming (all functions are linear), integer programming (the solution is represented by integers), quadratic programming (the objective function is a quadratic form), nonlinear programming (at least one of the functions of the problem is nonlinear) and stochastic programming (the model includes parameters of a probabilistic nature).

The class of nonlinear programming problems is wider than the class of linear models. For example, production costs are in most cases not proportional to the output volume but depend on it nonlinearly; income from the sale of products turns out to be a nonlinear function of prices, and so on. The criteria in optimal planning problems are often maximum profit, minimum cost, or minimum capital expenditure. The variables are the output volumes of the various types of products. The restrictions include production functions that characterize the relationship between product output and the costs of labor and material resources, the volumes of which are limited.



Methods for solving nonlinear problems

Unlike linear programming, which uses a universal solution method (the simplex method), for solving nonlinear problems there is a whole range of methods depending on the form of the functions included in the model. Of this variety, we will consider only two: the Lagrange method and the dynamic programming method.

The essence of the Lagrange method is to reduce the conditional extremum problem to the solution of an unconditional extremum problem. Consider the nonlinear programming model:

Z = f(x1, …, xn) → extremum (5.1)

φi(x1, …, xn) = bi, i = 1, …, m, (5.2)

where f, φi are known functions, and bi are given coefficients.

Note that in this formulation of the problem the constraints are specified by equalities and there is no non-negativity condition on the variables. In addition, we assume that the functions f and φi are continuous together with their first partial derivatives.

Let us transform conditions (5.2) so that zero stands on one side of each equality:

φi(x1, …, xn) − bi = 0, i = 1, …, m. (5.3)

Let us compose the Lagrange function. It includes the objective function (5.1) and the left-hand sides of constraints (5.3), each taken with a coefficient λi; there are as many Lagrange coefficients as there are constraints in the problem:

L(x1, …, xn, λ1, …, λm) = f(x1, …, xn) + ∑i=1..m λi·[φi(x1, …, xn) − bi]. (5.4)

The extremum points of function (5.4) are the extremum points of the original problem and vice versa: the optimal plan of problem (5.1)-(5.2) is a global extremum point of the Lagrange function.

Indeed, let a solution x* of problem (5.1)-(5.2) be found; then conditions (5.3) are satisfied. Substituting the plan x* into function (5.4), the bracketed terms vanish, and we verify the validity of the equality L(x*, λ) = f(x*). (5.5)

Thus, in order to find the optimal plan of the original problem, it is necessary to examine the Lagrange function for an extremum. The function has extreme values at points where its partial derivatives are equal to zero. Such points are called stationary.

Let us write out the partial derivatives of function (5.4):

∂L/∂xj = ∂f/∂xj + ∑i=1..m λi·∂φi/∂xj, j = 1, …, n,

∂L/∂λi = φi(x1, …, xn) − bi, i = 1, …, m.

Setting these derivatives equal to zero, we obtain a system of m + n equations with m + n unknowns:

∂f/∂xj + ∑i=1..m λi·∂φi/∂xj = 0, j = 1, …, n, (5.6)

φi(x1, …, xn) − bi = 0, i = 1, …, m. (5.7)

In the general case, system (5.6)-(5.7) will have several solutions, which include all the maxima and minima of the Lagrange function. In order to single out the global maximum or minimum, the values of the objective function are calculated at all the points found. The largest of these values is the global maximum, and the smallest is the global minimum. In some cases it is possible to use the sufficient conditions for a strict extremum of continuous functions (see Problem 5.2 below):

let the function be continuous and twice differentiable in some neighborhood of its stationary point. Then:

a) if condition (5.8) holds, the stationary point is a point of strict maximum of the function;

b) if condition (5.9) holds, the stationary point is a point of strict minimum of the function;

c) otherwise, the question of the presence of an extremum remains open.

In addition, some solutions of system (5.6)-(5.7) may be negative, which is inconsistent with the economic meaning of the variables. In this case, replacing the negative values with zero should be considered.

Economic meaning of the Lagrange multipliers. The optimal value of the multiplier λj shows by how much the criterion value Z will change when resource bj increases or decreases by one unit, since λj = ∂Z/∂bj.

The Lagrange method can also be used when the constraints are inequalities. Finding the extremum of the function f(x1, …, xn) under the conditions

φi(x1, …, xn) ≤ bi, i = 1, …, m,

is performed in several stages:

1. Determine the stationary points of the objective function, for which the system of equations

∂f/∂xj = 0, j = 1, …, n, is solved.

2. From the stationary points, select those whose coordinates satisfy the inequality constraints.

3. Using the Lagrange method, solve the problem with equality constraints (5.1)-(5.2).

4. The points found in the second and third stages are examined for the global maximum: the values of the objective function at these points are compared, and the largest value corresponds to the optimal plan (see the sketch below).
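As an illustration of these four stages on an assumed toy problem, a SymPy sketch (the objective and constraint are chosen only for the example):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
f = -(x1 - 3)**2 - (x2 - 3)**2          # assumed objective to maximize
g = x1 + x2 - 4                          # assumed constraint g <= 0

# Stage 1: stationary points of the objective function itself
free_pts = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], [x1, x2], dict=True)

# Stage 2: keep only the points satisfying the inequality constraint
candidates = [p for p in free_pts if g.subs(p) <= 0]

# Stage 3: solve the problem with the constraint treated as an equality (Lagrange)
L = f + lam * g
eq_pts = sp.solve([sp.diff(L, x1), sp.diff(L, x2), g], [x1, x2, lam], dict=True)
candidates += [{x1: p[x1], x2: p[x2]} for p in eq_pts]

# Stage 4: compare the objective values; the largest corresponds to the optimal plan
best = max(candidates, key=lambda p: f.subs(p))
print(best, f.subs(best))                # {x1: 2, x2: 2} and -2
```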

Problem 5.1. Let us solve problem 1.3, considered in the first section, using the Lagrange method. The optimal distribution of water resources is described by a mathematical model.

Let us compose the Lagrange function.

Let us find the unconditional maximum of this function. To do this, we calculate its partial derivatives and set them equal to zero.

As a result we obtain a system of linear equations.

The solution of this system of equations is the optimal plan for distributing water resources over the irrigated areas.

The values xi are measured in hundreds of thousands of cubic meters; λ is the amount of net income per hundred thousand cubic meters of irrigation water, so the marginal price of 1 m³ of irrigation water is λ/100 000 den. units.

    The maximum additional net income from irrigation will be

−160·12.26² + 7600·12.26 − 130·8.55² + 5900·8.55 − 10·16.19² + 4000·16.19 = 172391.02 (den. units)

Problem 5.2. Solve the nonlinear programming problem.

Let us represent the constraint in the form with zero on the right-hand side.

Let us compose the Lagrange function and determine its partial derivatives.

To determine the stationary points of the Lagrange function, its partial derivatives should be set equal to zero. As a result we obtain a system of equations.

    • Tutorial

Good day, everyone. In this article I want to show one of the graphical methods for constructing mathematical models of dynamic systems, called the bond graph ("bond" - connection, "graph" - graph). In Russian literature I found a description of this method only in the textbook of Tomsk Polytechnic University, A.V. Voronin, "MODELING OF MECHATRONIC SYSTEMS," 2008. I will also show the classical approach via the Lagrange equation of the 2nd kind.

    Lagrange method

I will not describe the theory; I will show the stages of calculation with a few comments. Personally, it is easier for me to learn from examples than to read theory ten times. It seemed to me that in Russian literature the explanation of this method, and of mathematics or physics in general, is very heavy on complex formulas and accordingly requires a serious mathematical background. While studying the Lagrange method (I study at the Polytechnic University of Turin, Italy), I looked through Russian literature to compare calculation methods, and it was hard for me to follow the course of the solution. Even recalling the modeling courses at the Kharkov Aviation Institute, the derivation of such methods was very cumbersome, and nobody bothered trying to make the topic accessible. So I decided to write a manual for constructing mathematical models according to Lagrange; as it turned out, it is not difficult at all: it is enough to know how to compute time derivatives and partial derivatives. For more complex models rotation matrices are also added, but there is nothing complicated in them either.

    Features of modeling methods:

    • Newton-Euler: vector equations based on the dynamic equilibrium of forces and moments
    • Lagrange: scalar equations based on state functions associated with the kinetic and potential energies
    • Bond graph: a method based on the flow of power between the elements of the system

Let's start with a simple example: a mass with a spring and a damper. We ignore the force of gravity.


    Fig 1. Mass with spring and damper

    First of all, we designate:

    • the initial (fixed) coordinate system (NSK) R0(i0, j0, k0). Where to put it? You can point your finger at the sky, but intuition suggests placing the NSK on the line of motion of the body M1.
    • a coordinate system for each body with mass (we have M1, so R1(i1, j1, k1)); the orientation can be arbitrary, but why complicate your life; set it with minimal difference from the NSK.
    • the generalized coordinates q_i (the minimum number of variables that can describe the motion); in this example there is one generalized coordinate, motion only along the j axis.


    Fig 2. We put down coordinate systems and generalized coordinates


    Fig 3. Position and speed of body M1

Then we will find the kinetic (K) and potential (P) energies and the dissipative function (D) for the damper using the formulas:


    Fig 4. Complete formula for kinetic energy

    In our example there is no rotation, the second component is equal to 0.




    Fig 5. Calculation of kinetic, potential energy and dissipative function

    The Lagrange equation has the following form:


    Fig 6. Lagrange Equation and Lagrangian

Delta W_i is the virtual work performed by the applied forces and moments. Let us find it:


    Fig 7. Calculation of virtual work

where delta q_1 is the virtual displacement.

    We substitute everything into the Lagrange equation:


    Fig 8. The resulting mass model with spring and damper

This is where the Lagrange method ends. As you can see, it is not that complicated, but this is still a very simple example, for which the Newton-Euler method would most likely be even simpler. For more complex systems, where several bodies are rotated relative to each other by different angles, the Lagrange method will be easier.
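As an illustration, a minimal sketch of integrating the resulting model, assuming it has the standard form m·q″ + c·q′ + k·q = F; the parameter values are chosen only for the example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters of the mass-spring-damper model m*q'' + c*q' + k*q = F
m, c, k, F = 1.0, 0.5, 4.0, 1.0

def rhs(t, y):
    q, qdot = y
    qddot = (F - c * qdot - k * q) / m   # the equation of motion solved for the acceleration
    return [qdot, qddot]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0])
print(sol.y[0][-1])                      # the displacement approaches the static value F/k = 0.25
```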

    Bond graph method

I will show right away what the bond-graph model looks like for the example with a mass, a spring and a damper:


Fig 9. Bond graph of the mass with spring and damper

Here I will have to give a little theory, which will be enough to build simple models. If anyone is interested, you can read the book (Bond Graph Methodology) or (Voronin A.V. Modeling of mechatronic systems: a textbook. Tomsk: Tomsk Polytechnic University Publishing House, 2008).

Let us first note that complex systems consist of several domains. For example, an electric motor consists of electrical and mechanical parts, or domains.

The bond graph is based on the exchange of power between these domains (subsystems). Note that the exchange of power, in any form, is always determined by two variables (the power variables), with the help of which we can study the interaction of various subsystems within a dynamic system (see the table).

As can be seen from the table, the expression for power is almost the same everywhere. In summary, power is the product of "flow, f" and "effort, e".

Effort in the electrical domain is voltage (e), in the mechanical domain it is force (F) or torque (T), and in hydraulics it is pressure (p).

Flow in the electrical domain is current (i), in the mechanical domain it is velocity (v) or angular velocity (omega), and in hydraulics it is the flow rate of fluid (Q).

    Taking these notations, we obtain an expression for power:


    Fig 10. Power formula through power variables

In the bond-graph language, the connection between two subsystems that exchange power is represented by a bond. That is why this method is called bond graph, or graph of connections. Let us consider the block diagram of connections in a model with an electric motor (this is not a bond graph yet):


    Fig 11. Block diagram of power flow between domains

If we have a voltage source, then it generates a voltage and transfers it to the motor winding (which is why the arrow is directed towards the motor); depending on the resistance of the winding, a current appears according to Ohm's law (directed from the motor to the source). Accordingly, one variable is an input to the subsystem, and the second must be an output from the subsystem. Here the voltage (effort) is the input and the current (flow) is the output.

If you use a current source instead, how will the diagram change? Right: the current will be directed to the motor, and the voltage to the source. Then the current (flow) is the input and the voltage (effort) is the output.

    Let's look at an example in mechanics. Force acting on a mass.


    Fig 12. Force applied to mass

    The block diagram will be as follows:


    Fig 13. Block diagram

In this example, force (effort) is the input variable for the mass (the force applied to the mass).
    According to Newton's second law:

    The mass responds with a velocity:

    In this example, if one power variable (force, effort) is the input to the mechanical domain, then the other power variable (velocity, flow) automatically becomes the output.

To distinguish the input from the output, a vertical stroke is used at the end of the arrow (bond) between the elements; this stroke is called the causality sign (causal stroke). It turns out that the applied force is the cause and the velocity is the effect. This sign is very important for the correct construction of a system model, since causality is a consequence of the physical behavior and power exchange of the two subsystems, so the placement of the causality sign cannot be arbitrary.


    Fig 14. Designation of causality

This vertical stroke shows which subsystem receives the effort and, as a result, produces a flow. In the example with the mass it would look like this:


    Fig 14. Causal relationship for the force acting on the mass

It is clear from the arrow that the input for the mass is force and the output is velocity. This is done so as not to clutter the diagram with arrows and to systematize the construction of the model.

Next important point: the generalized momentum (quantity of motion) and the generalized displacement (the energy variables).

    Table of power and energy variables in different domains



The table above introduces two additional physical quantities used in the bond-graph method. They are called the generalized momentum (p) and the generalized displacement (q), or the energy variables, and they can be obtained by integrating the power variables over time:


    Fig 15. Relationship between power and energy variables
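In symbols (a restatement of the relationship shown in Fig. 15):

```latex
p(t) = \int e(t)\,dt, \qquad q(t) = \int f(t)\,dt
```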

In the electrical domain:

    According to Faraday's law, the voltage at the ends of a conductor is equal to the derivative of the magnetic flux through this conductor.


    And current is the physical quantity equal to the ratio of the amount of charge Q passing in a time t through the cross-section of the conductor to the duration of this time interval.

    Mechanical domain:

From Newton's 2nd law, force is the time derivative of momentum:


    And correspondingly, velocity is the time derivative of displacement:

    Let's summarize:

    Basic elements

All elements of dynamic systems can be divided into two-pole and four-pole components.
    Let us consider the two-pole components:

    Sources
There are sources of both effort and flow. The analogy in the electrical domain: a source of effort is a voltage source, a source of flow is a current source. The causal strokes for sources can only be as shown.


    Fig 16. Causal connections and designation of sources

    Component R – dissipative element

    Component I – inertial element

    Component C – capacitive element

As can be seen from the figures, different elements of the same type (R, C, I) are described by the same equations. Only the electrical capacitance differs slightly; you just need to remember this!

    Quadrupole components:

    Let's look at two components: a transformer and a gyrator.

The last important components of the bond-graph method are the junctions. There are two types of nodes:




    That's it with the components.

    The main steps for establishing causal relationships after constructing a bond-graph:

1. Assign causal strokes to all sources
    2. Go through all the nodes and assign the causal strokes that follow from step 1
    3. For I components assign input causality (effort enters the component); for C components assign output causality (effort comes out of the component)
    4. Repeat step 2
    5. Assign causal strokes to the R components
    This concludes the mini-course on theory. Now we have everything we need to build models.
Let's solve a couple of examples. Let's start with an electrical circuit; it is easier to understand the analogy of constructing a bond graph there.

    Example 1


    Let's start building a bond-graph with a voltage source. Just write Se and put an arrow.


See, everything is simple! Let's look further: R and L are connected in series, which means the same current flows through them, or, in terms of power variables, the same flow. Which node has the same flow? The correct answer is the 1-node. We connect the source, the resistance (R component) and the inductance (I component) to the 1-node.


Next, we have a capacitance and a resistance in parallel, which means they have the same voltage, or effort. A 0-node suits here like no other. We connect the capacitance (C component) and the resistance (R component) to the 0-node.


    We also connect nodes 1 and 0 to each other. The direction of the arrows is chosen arbitrarily; the direction of the connection affects only the sign in the equations.

    You will get the following connection graph:

    Now we need to establish causal relationships. Following the instructions for the sequence of their placement, let's start with the source.

1. We have a voltage (effort) source; such a source has only one causality option: output. We set it.
    2. Next there is the I component; let's see what the rules recommend for it. We set it.
    3. We set causality for the 1-node. Done.
    4. A 0-node must have one input causal stroke and all the others output. So far we have one output, so we look for a C or I component. Found one. We set it.
    5. We set whatever causal strokes remain.


    That's all. Bond graph is built. Hurray, Comrades!

    All that remains is to write the equations that describe our system. To do this, create a table with 3 columns. The first will contain all the components of the system, the second will contain the input variable for each element, and the third will contain the output variable for the same component. We have already defined the input and output by causal relationships. So there shouldn't be any problems.

Let's number each bond for convenience of writing the equations. We take the equations for each element from the list of C, R, I components.



Having compiled the table, we define the state variables; in this example there are two of them, p3 and q5. Next we need to write down the state equations:


    That's it, the model is ready.
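As an illustration, a minimal sketch of integrating state equations of the kind such a bond graph typically yields for this circuit, with p3 the inductor flux linkage and q5 the capacitor charge; the element values and the exact form of the equations are assumptions made for the example:

```python
from scipy.integrate import solve_ivp

# Assumed element values for the illustrative circuit
Se, R1, L, C, R2 = 10.0, 1.0, 0.1, 1e-2, 5.0

def state_equations(t, x):
    p3, q5 = x                      # inductor flux linkage, capacitor charge
    i_L = p3 / L                    # common flow at the 1-node
    u_C = q5 / C                    # common effort at the 0-node
    dp3 = Se - R1 * i_L - u_C       # effort balance at the 1-node
    dq5 = i_L - u_C / R2            # flow balance at the 0-node
    return [dp3, dq5]

sol = solve_ivp(state_equations, (0.0, 1.0), [0.0, 0.0])
print(sol.y[:, -1])                 # steady state: i_L -> Se/(R1+R2), u_C -> Se*R2/(R1+R2)
```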

Example 2. I would like to apologize right away for the quality of the photos; the main thing is that they are readable.

Let's solve another example for a mechanical system, the same one that we solved using the Lagrange method. I will show the solution without comments. Let's see which of these methods is simpler and easier.

In Matlab, both mathematical models with the same parameters were compiled, one obtained by the Lagrange method and one by the bond graph. The result is below:

    Lagrange multiplier method.

    The Lagrange multiplier method is one of the methods that allows you to solve nonlinear programming problems.

Nonlinear programming is a branch of mathematical programming that studies methods for solving extremal problems with a nonlinear objective function and a domain of admissible solutions defined by nonlinear constraints. In economics, this corresponds to the fact that results (efficiency) increase or decrease disproportionately to changes in the scale of resource use (or, equivalently, the scale of production): for example, due to the division of production costs in enterprises into variable and semi-fixed; due to saturation of demand for goods, when each subsequent unit is more difficult to sell than the previous one, etc.

The nonlinear programming problem is posed as the problem of finding the optimum of a certain objective function

F(x1, …, xn) → max

when the conditions

gj(x1, …, xn) ≥ 0 (equivalently, g(x) ≤ b), x ≥ 0

are met, where x is the vector of the required variables, F(x) is the objective function, g(x) is the constraint function (continuously differentiable), and b is the vector of constraint constants.

    The solution to a nonlinear programming problem (global maximum or minimum) can belong either to the boundary or to the interior of the admissible set.

Unlike a linear programming problem, in a nonlinear programming problem the optimum does not necessarily lie on the boundary of the region defined by the constraints. In other words, the task is to choose non-negative values of the variables, subject to a system of constraints in the form of inequalities, for which the maximum (or minimum) of the given function is attained. In this case, the forms of neither the objective function nor the inequalities are specified. Different cases are possible: the objective function is nonlinear and the constraints are linear; the objective function is linear and the constraints (at least one of them) are nonlinear; both the objective function and the constraints are nonlinear.

    The nonlinear programming problem is found in the natural sciences, engineering, economics, mathematics, business relations, and government.



    Nonlinear programming, for example, is related to a basic economic problem. Thus, in the problem of allocating limited resources, either efficiency or, if the consumer is being studied, consumption is maximized in the presence of restrictions that express the conditions of resource scarcity. In such a general formulation, the mathematical formulation of the problem may be impossible, but in specific applications the quantitative form of all functions can be determined directly. For example, an industrial enterprise produces plastic products. Production efficiency here is measured by profit, and constraints are interpreted as available labor, production space, equipment productivity, etc.

The cost-effectiveness method also fits into the nonlinear programming scheme. This method was developed for use in decision making in government. The general efficiency function is welfare. Here two nonlinear programming problems arise: the first is maximizing the effect at limited cost, the second is minimizing cost provided that the effect is above a certain minimum level. This problem is usually well modeled using nonlinear programming.

    The results of solving a nonlinear programming problem are helpful in making government decisions. The resulting solution is, of course, recommended, so it is necessary to examine the assumptions and accuracy of the nonlinear programming problem before making a final decision.

Nonlinear problems are complex; they are often simplified by reducing them to linear ones. To do this, it is conventionally assumed that in a particular region the objective function increases or decreases in proportion to the change in the independent variables. This approach is called the method of piecewise-linear approximation; however, it is applicable only to certain types of nonlinear problems.

Nonlinear problems under certain conditions are solved using the Lagrange function: by finding its saddle point, the solution to the problem is thereby found. Among the computational algorithms of nonlinear programming, gradient methods occupy a large place. There is no universal method for nonlinear problems and, apparently, there may not be one, since they are extremely diverse. Multi-extremal problems are especially difficult to solve.

One of the methods that allows reducing a nonlinear programming problem to solving a system of equations is the Lagrange method of undetermined multipliers.

The Lagrange multiplier method essentially establishes necessary conditions that allow identifying optimal points in optimization problems with constraints in the form of equalities. In this case, the constrained problem is transformed into an equivalent unconstrained optimization problem involving some unknown parameters called Lagrange multipliers.

The Lagrange multiplier method consists in reducing conditional extremum problems to problems on the unconditional extremum of an auxiliary function, the so-called Lagrange function.

For the problem of the extremum of a function f(x1, x2, ..., xn) under the conditions (constraint equations) φi(x1, x2, ..., xn) = 0, i = 1, 2, ..., m, the Lagrange function has the form

L(x1, x2, …, xn, λ1, λ2, …, λm) = f(x1, x2, …, xn) + ∑i=1..m λi·φi(x1, x2, …, xn)

The multipliers λ1, λ2, ..., λm are called Lagrange multipliers.

If the values x1, x2, ..., xn, λ1, λ2, ..., λm are solutions of the equations that determine the stationary points of the Lagrange function, namely, for differentiable functions, solutions of the system of equations

∂L/∂xj = 0, j = 1, …, n; ∂L/∂λi = φi(x1, …, xn) = 0, i = 1, …, m,

then, under fairly general assumptions, x1, x2, ..., xn provide an extremum of the function f.

    Consider the problem of minimizing a function of n variables subject to one constraint in the form of equality:

Minimize f(x1, x2, …, xn) (1)

under the constraint h1(x1, x2, …, xn) = 0 (2)

    According to the Lagrange multiplier method, this problem is transformed into the following unconstrained optimization problem:

minimize L(x, λ) = f(x) − λ·h1(x) (3)

where the function L(x; λ) is called the Lagrange function,

    λ is an unknown constant, which is called the Lagrange multiplier. There are no requirements for the sign of λ.

Let, for a given value λ = λ0, the unconditional minimum of the function L(x, λ) with respect to x be attained at the point x = x0, and let x0 satisfy the equation h1(x0) = 0. Then, as is easy to see, x0 minimizes (1) subject to (2), since for all values of x satisfying (2), h1(x) = 0 and L(x, λ) = f(x).

Of course, the value λ = λ0 must be selected so that the coordinates of the unconditional minimum point x0 satisfy equality (2). This can be done if, treating λ as a variable, we find the unconditional minimum of function (3) as a function of λ and then choose the value of λ for which equality (2) is satisfied. Let us illustrate this with a specific example.

Minimize f(x) = x1² + x2²

under the constraint h1(x) = 2x1 + x2 − 2 = 0

    The corresponding unconstrained optimization problem is written as follows:

minimize L(x, λ) = x1² + x2² − λ(2x1 + x2 − 2)

Solution. Setting the two components of the gradient of L to zero, we obtain

∂L/∂x1 = 2x1 − 2λ = 0 → x1⁰ = λ

∂L/∂x2 = 2x2 − λ = 0 → x2⁰ = λ/2

In order to check whether the stationary point x⁰ corresponds to a minimum, we calculate the elements of the Hessian matrix of the function L(x; λ), considered as a function of x,

which turns out to be positive definite.

This means that L(x, λ) is a convex function of x. Consequently, the coordinates x1⁰ = λ, x2⁰ = λ/2 determine the global minimum point. The optimal value of λ is found by substituting the values x1⁰ and x2⁰ into the equation 2x1 + x2 = 2, from which 2λ + λ/2 = 2, or λ⁰ = 4/5. Thus, the conditional minimum is attained at x1⁰ = 4/5, x2⁰ = 2/5 and equals min f(x) = 4/5.
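A minimal sketch verifying this example symbolically with SymPy (used here only to illustrate the stationarity conditions):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)

f = x1**2 + x2**2                      # objective
h = 2*x1 + x2 - 2                      # equality constraint h(x) = 0
L = f - lam*h                          # Lagrange function (3)

# Stationarity in x1, x2 together with the constraint itself
solution = sp.solve([sp.diff(L, x1), sp.diff(L, x2), h], [x1, x2, lam], dict=True)[0]
print(solution)                        # {x1: 4/5, x2: 2/5, lam: 4/5}
print(f.subs(solution))                # 4/5
```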

When solving the example problem, we considered L(x; λ) as a function of two variables x1 and x2 and, in addition, assumed that the value of the parameter λ was chosen so that the constraint was satisfied. If the solution of the system

∂L/∂xj = 0, j = 1, 2, 3, …, n,

cannot be obtained in the form of explicit functions of λ, then the values of x and λ are found by solving the following system consisting of n + 1 equations with n + 1 unknowns:

∂L/∂xj = 0, j = 1, 2, 3, …, n; h1(x) = 0

To find all possible solutions of this system, numerical search methods can be used (for example, Newton's method). For each of the solutions, one should calculate the elements of the Hessian matrix of the function L, considered as a function of x, and find out whether this matrix is positive definite (local minimum) or negative definite (local maximum).

    The Lagrange multiplier method can be extended to the case where the problem has several constraints in the form of equalities. Consider a general problem that requires

    Minimize f(x)

subject to hk(x) = 0, k = 1, 2, ..., K.

The Lagrange function takes the following form:

L(x, λ) = f(x) − ∑k=1..K λk·hk(x)

Here λ1, λ2, ..., λK are Lagrange multipliers, i.e., unknown parameters whose values need to be determined. Equating the partial derivatives of L with respect to x to zero, we obtain the following system of n equations with n unknowns:

∂L/∂xj = 0, j = 1, 2, ..., n.

If it turns out to be difficult to find a solution of this system in the form of functions of the vector λ, then the system can be extended by including the constraints in the form of equalities

hk(x) = 0, k = 1, 2, ..., K.

The solution of the extended system, consisting of n + K equations with n + K unknowns, determines the stationary point of the function L. Then a procedure for checking for a minimum or maximum is carried out on the basis of calculating the elements of the Hessian matrix of the function L, considered as a function of x, similarly to what was done in the case of a problem with one constraint. For some problems, the extended system of n + K equations with n + K unknowns may have no solutions, and the Lagrange multiplier method turns out to be inapplicable. It should be noted, however, that such problems are quite rare in practice.

Let us consider a special case of the general nonlinear programming problem, assuming that the system of constraints contains only equations, there are no non-negativity conditions on the variables, and the functions are continuous together with their partial derivatives. Then, by solving the system of equations (7), we obtain all the points at which function (6) can have extreme values.

    Algorithm for the Lagrange multiplier method

    1. Compose the Lagrange function.

2. Find the partial derivatives of the Lagrange function with respect to the variables xj, λi and equate them to zero.

3. Solve the system of equations (7) and find the points at which the objective function of the problem can have an extremum.

4. Among the points suspected of being extrema, find those at which the extremum is attained, and calculate the values of function (6) at these points.

    Example.

Initial data: according to the production plan, the company needs to produce 180 products. These products can be manufactured in two technological ways. When producing x1 products using the 1st method, the costs are 4x1 + x1² rubles, and when producing x2 products using the 2nd method, they are 8x2 + x2² rubles. Determine how many products should be produced using each method so that the total cost of production is minimal.

The objective function for the stated problem has the form
    F = 4x1 + x1² + 8x2 + x2² → min under the conditions x1 + x2 = 180, x2 ≥ 0.
    1. Compose the Lagrange function
    L(x1, x2, λ) = 4x1 + x1² + 8x2 + x2² + λ(180 − x1 − x2).
    2. Calculate the partial derivatives with respect to x1, x2, λ and equate them to zero:

    ∂L/∂x1 = 4 + 2x1 − λ = 0, ∂L/∂x2 = 8 + 2x2 − λ = 0, ∂L/∂λ = 180 − x1 − x2 = 0.

    3. Solving the resulting system of equations, we find x1 = 91, x2 = 89.

4. Substituting x2 = 180 − x1 into the objective function, we obtain a function of one variable, namely f1 = 4x1 + x1² + 8(180 − x1) + (180 − x1)².

    We calculate f1′ = 4 + 2x1 − 8 − 2(180 − x1) = 4x1 − 364 = 0,

    whence we have x1* = 91, x2* = 89.

    Answer: the number of products produced by the first method is x1 = 91, by the second method x2 = 89, and the value of the objective function equals 17,278 rubles.
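A quick numerical check of this example (a sketch; the variable names are illustrative):

```python
from scipy.optimize import minimize

cost = lambda x: 4*x[0] + x[0]**2 + 8*x[1] + x[1]**2             # production costs
plan = {'type': 'eq', 'fun': lambda x: x[0] + x[1] - 180}        # plan: 180 products in total

res = minimize(cost, x0=[90.0, 90.0], constraints=[plan], bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # approximately [91, 89] and 17278
```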

The Lagrange method for the linear inhomogeneous differential equation

an(t)z(n)(t) + an−1(t)z(n−1)(t) + ... + a1(t)z′(t) + a0(t)z(t) = f(t)

    consists in replacing arbitrary constants ck in the general solution

z(t) = c1z1(t) + c2z2(t) + ... + cnzn(t)

of the corresponding homogeneous equation

an(t)z(n)(t) + an−1(t)z(n−1)(t) + ... + a1(t)z′(t) + a0(t)z(t) = 0

by auxiliary functions ck(t) whose derivatives satisfy the linear algebraic system (1).
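For reference, in the standard formulation system (1) reads:

```latex
\begin{aligned}
c_1'(t)\,z_1(t) + \dots + c_n'(t)\,z_n(t) &= 0,\\
c_1'(t)\,z_1'(t) + \dots + c_n'(t)\,z_n'(t) &= 0,\\
&\ \ \vdots\\
c_1'(t)\,z_1^{(n-2)}(t) + \dots + c_n'(t)\,z_n^{(n-2)}(t) &= 0,\\
c_1'(t)\,z_1^{(n-1)}(t) + \dots + c_n'(t)\,z_n^{(n-1)}(t) &= \frac{f(t)}{a_n(t)}.
\end{aligned}
```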

The determinant of system (1) is the Wronskian of the functions z1, z2, ..., zn, which ensures its unique solvability with respect to the derivatives ck′(t).

If ck(t) are antiderivatives of ck′(t), taken at fixed values of the integration constants, then the function z(t) = c1(t)z1(t) + c2(t)z2(t) + ... + cn(t)zn(t)

    is a solution to the original linear inhomogeneous differential equation. Integration of an inhomogeneous equation in the presence of a general solution to the corresponding homogeneous equation is thus reduced to quadratures.

    Lagrange method (method of variation of arbitrary constants)

    A method for obtaining a general solution to an inhomogeneous equation, knowing the general solution to a homogeneous equation without finding a particular solution.

    For a linear homogeneous differential equation of nth order

y(n) + a1(x) y(n-1) + ... + an-1(x) y′ + an(x) y = 0,

    where y = y(x) is an unknown function and a1(x), a2(x), ..., an-1(x), an(x) are known continuous functions, the following holds: 1) there exist n linearly independent solutions of the equation y1(x), y2(x), ..., yn(x); 2) for any values of the constants c1, c2, ..., cn, the function y(x) = c1 y1(x) + c2 y2(x) + ... + cn yn(x) is a solution of the equation; 3) for any initial values x0, y0, y0,1, ..., y0,n-1 there are values c*1, c*2, ..., c*n such that the solution y*(x) = c*1 y1(x) + c*2 y2(x) + ... + c*n yn(x) satisfies at x = x0 the initial conditions y*(x0) = y0, (y*)′(x0) = y0,1, ..., (y*)(n-1)(x0) = y0,n-1.

The expression y(x) = c1 y1(x) + c2 y2(x) + ... + cn yn(x) is called the general solution of a linear homogeneous differential equation of the nth order.

    The set of n linearly independent solutions of a linear homogeneous differential equation of the nth order y1(x), y2(x), ..., yn(x) is called the fundamental system of solutions to the equation.

For a linear homogeneous differential equation with constant coefficients there is a simple algorithm for constructing a fundamental system of solutions. We look for a solution of the equation in the form y(x) = exp(lx): (exp(lx))(n) + a1(exp(lx))(n-1) + ... + an-1(exp(lx))′ + an exp(lx) = (ln + a1ln-1 + ... + an-1l + an)exp(lx) = 0, i.e. the number l is a root of the characteristic equation ln + a1ln-1 + ... + an-1l + an = 0. The left-hand side of the characteristic equation is called the characteristic polynomial of the linear differential equation: P(l) = ln + a1ln-1 + ... + an-1l + an. Thus, the problem of solving a linear homogeneous equation of the nth order with constant coefficients reduces to solving an algebraic equation.

If the characteristic equation has n different real roots l1 ≠ l2 ≠ ... ≠ ln, then the fundamental system of solutions consists of the functions y1(x) = exp(l1x), y2(x) = exp(l2x), ..., yn(x) = exp(lnx), and the general solution of the homogeneous equation is y(x) = c1 exp(l1x) + c2 exp(l2x) + ... + cn exp(lnx).

EXAMPLE 1. Fundamental system of solutions and general solution for the case of simple real roots.

If any of the real roots of the characteristic equation is repeated r times (an r-fold root), then in the fundamental system of solutions there are r functions corresponding to it; if lk = lk+1 = ... = lk+r-1, then the fundamental system of solutions of the equation includes r functions: yk(x) = exp(lkx), yk+1(x) = x exp(lkx), yk+2(x) = x2 exp(lkx), ..., yk+r-1(x) = xr-1 exp(lkx).

    EXAMPLE 2. Fundamental system of solutions and general solution for the case of multiple real roots.

    If the characteristic equation has complex roots, then each pair of simple (with multiplicity 1) complex roots lk,k+1=ak ± ibk in the fundamental system of solutions corresponds to a pair of functions yk(x) = exp(akx)cos(bkx), yk+ 1(x) = exp(akx)sin(bkx).

    EXAMPLE 4. Fundamental system of solutions and general solution for the case of simple complex roots. Imaginary roots.

If a complex pair of roots ak ± ibk has multiplicity r, then to such a pair lk = lk+1 = ... = lk+2r-1 there correspond in the fundamental system of solutions the functions exp(akx)cos(bkx), exp(akx)sin(bkx), x exp(akx)cos(bkx), x exp(akx)sin(bkx), x2 exp(akx)cos(bkx), x2 exp(akx)sin(bkx), ..., xr-1 exp(akx)cos(bkx), xr-1 exp(akx)sin(bkx).

    EXAMPLE 5. Fundamental system of solutions and general solution for the case of multiple complex roots.

Thus, to find the general solution of a linear homogeneous differential equation with constant coefficients one should: write down the characteristic equation; find all roots of the characteristic equation l1, l2, ..., ln; write down the fundamental system of solutions y1(x), y2(x), ..., yn(x); write down the expression for the general solution y(x) = c1 y1(x) + c2 y2(x) + ... + cn yn(x). To solve the Cauchy problem, one needs to substitute the expression for the general solution into the initial conditions and determine the values of the constants c1, ..., cn, which are solutions of the system of linear algebraic equations c1 y1(x0) + c2 y2(x0) + ... + cn yn(x0) = y0, c1 y′1(x0) + c2 y′2(x0) + ... + cn y′n(x0) = y0,1, ..., c1 y1(n-1)(x0) + c2 y2(n-1)(x0) + ... + cn yn(n-1)(x0) = y0,n-1.
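A short worked instance of this procedure, given here as an illustration: for y″ − 3y′ + 2y = 0 the characteristic equation is l² − 3l + 2 = 0 with roots l1 = 1, l2 = 2, so

```latex
y_1(x) = e^{x}, \quad y_2(x) = e^{2x}, \qquad y(x) = c_1 e^{x} + c_2 e^{2x}.
```

With the initial conditions y(0) = 0, y′(0) = 1 the system c1 + c2 = 0, c1 + 2c2 = 1 gives c1 = −1, c2 = 1, i.e. y(x) = e^{2x} − e^{x}.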

    For a linear inhomogeneous differential equation of nth order

y(n) + a1(x) y(n-1) + ... + an-1(x) y′ + an(x) y = f(x),

where y = y(x) is an unknown function and a1(x), a2(x), ..., an-1(x), an(x), f(x) are known continuous functions, the following holds: 1) if y1(x) and y2(x) are two solutions of the inhomogeneous equation, then the function y(x) = y1(x) − y2(x) is a solution of the corresponding homogeneous equation; 2) if y1(x) is a solution of the inhomogeneous equation and y2(x) is a solution of the corresponding homogeneous equation, then the function y(x) = y1(x) + y2(x) is a solution of the inhomogeneous equation; 3) if y1(x), y2(x), ..., yn(x) are n linearly independent solutions of the homogeneous equation, and y_p(x) is an arbitrary solution of the inhomogeneous equation, then for any initial values x0, y0, y0,1, ..., y0,n-1 there are values c*1, c*2, ..., c*n such that the solution y*(x) = c*1 y1(x) + c*2 y2(x) + ... + c*n yn(x) + y_p(x) satisfies at x = x0 the initial conditions y*(x0) = y0, (y*)′(x0) = y0,1, ..., (y*)(n-1)(x0) = y0,n-1.

The expression y(x) = c1 y1(x) + c2 y2(x) + ... + cn yn(x) + y_p(x) is called the general solution of a linear inhomogeneous differential equation of the nth order.

To find particular solutions of inhomogeneous differential equations with constant coefficients with right-hand sides of the form Pk(x)exp(ax)cos(bx) + Qm(x)exp(ax)sin(bx), where Pk(x), Qm(x) are polynomials of degree k and m respectively, there is a simple algorithm for constructing a particular solution, called the selection method.

The selection method, or the method of undetermined coefficients, is as follows. The required solution of the equation is written in the form (Pr(x)exp(ax)cos(bx) + Qr(x)exp(ax)sin(bx))xs, where Pr(x), Qr(x) are polynomials of degree r = max(k, m) with unknown coefficients pr, pr-1, ..., p1, p0, qr, qr-1, ..., q1, q0. The factor xs is called the resonance factor. Resonance occurs in cases where among the roots of the characteristic equation there is a root l = a ± ib of multiplicity s. That is, if among the roots of the characteristic equation of the corresponding homogeneous equation there is one whose real part coincides with the coefficient in the exponent and whose imaginary part coincides with the coefficient in the argument of the trigonometric function on the right-hand side of the equation, and the multiplicity of this root is s, then the required particular solution contains the resonance factor xs. If there is no such coincidence (s = 0), then there is no resonance factor.

    Substituting the expression for a particular solution into the left side of the equation, we obtain a generalized polynomial of the same form as the polynomial on the right side of the equation, the coefficients of which are unknown.

    Two generalized polynomials are equal if and only if the coefficients of factors of the form xtexp(ax)sin(bx), xtexp(ax)cos(bx) with the same powers t are equal. Equating the coefficients of such factors, we obtain a system of 2(r+1) linear algebraic equations for 2(r+1) unknowns. It can be shown that such a system is consistent and has a unique solution.
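A small illustration of the resonance case: for y″ + y = cos x the characteristic roots are ±i, which coincide with a ± ib = 0 ± i·1 on the right-hand side, so s = 1 and the particular solution is sought with the resonance factor x:

```latex
y_p(x) = x\,(p_0\cos x + q_0\sin x)
\;\;\Rightarrow\;\;
p_0 = 0,\ q_0 = \tfrac12,
\qquad
y_p(x) = \tfrac{x}{2}\,\sin x .
```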