Finding a conditional extremum by the method of undetermined Lagrange multipliers. The Lagrange multiplier method for functions of two variables

The Lagrange multiplier method is a classical method for solving mathematical programming problems (in particular, convex ones). Unfortunately, its practical application may run into significant computational difficulties, which narrows its scope of use. We consider the Lagrange method here mainly because it is an apparatus actively used to substantiate various modern numerical methods that are widely applied in practice. As for the Lagrange function and the Lagrange multipliers themselves, they play an independent and extremely important role in the theory and applications of mathematical programming and beyond.

Let us consider the classical optimization problem

max (min) z = f(x), (7.20)

φ_i(x) = b_i, i = 1, …, m. (7.21)

This problem differs from problem (7.18), (7.19) in that among the constraints (7.21) there are no inequalities, there are no conditions of non-negativity or discreteness of the variables, and the functions f(x) and φ_i(x) are continuous and have partial derivatives of at least the second order.

The classical approach to solving problem (7.20), (7.21) yields a system of equations (the necessary conditions) that must be satisfied by the point x* which gives the function f(x) a local extremum on the set of points satisfying constraints (7.21) (for a convex programming problem, by Theorem 7.6 the found point x* will simultaneously be a global extremum point).

Let us assume that at the point x* function (7.20) has a local conditional extremum and that the rank of the matrix (∂φ_i/∂x_j) equals m. Then the necessary conditions can be written in the form

∂L/∂x_j = 0, j = 1, …, n; ∂L/∂λ_i = 0, i = 1, …, m, (7.22)

where

L(x, λ) = f(x) + Σ_{i=1}^{m} λ_i (b_i − φ_i(x)) (7.23)

is the Lagrange function and λ_1, …, λ_m are the Lagrange multipliers.

There are also sufficient conditions under which the solution of the system of equations (7.22) determines the extremum point of the function f(x). This question is resolved by studying the sign of the second differential of the Lagrange function. However, the sufficient conditions are mainly of theoretical interest.

You can specify the following procedure for solving problem (7.20), (7.21) using the Lagrange multiplier method:

1) compose the Lagrange function (7.23);

2) find the partial derivatives of the Lagrange function with respect to all variables and set them equal to zero. This yields system (7.22), consisting of n + m equations. Solve the resulting system (if this turns out to be possible!) and thus find all the stationary points of the Lagrange function;

3) from the stationary points (x*, λ*), taken without their λ-coordinates, select the points x* at which the function f(x) has conditional local extrema in the presence of constraints (7.21). This selection is made, for example, using the sufficient conditions for a local extremum. The study is often simplified if the specific conditions of the problem are used.
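The three steps above can be sketched on a toy problem. The example below is my own (not from the text); the Lagrange system is solved by hand in the comments, and the code only verifies numerically that the gradient of the Lagrange function vanishes at the stationary point.

```python
# A minimal sketch of the three-step procedure on a toy problem (my own
# example, not from the text): maximize f(x, y) = x*y subject to x + y = 1.
#
# Step 1: L(x, y, lam) = x*y + lam*(1 - x - y).
# Step 2: dL/dx = y - lam = 0, dL/dy = x - lam = 0, dL/dlam = 1 - x - y = 0.
#         Solving by hand gives the stationary point x = y = 1/2, lam = 1/2.
# Step 3: examine the candidate (here we only verify stationarity numerically).

def grad_L(x, y, lam, h=1e-6):
    """Central finite-difference gradient of the Lagrange function."""
    L = lambda x, y, lam: x * y + lam * (1 - x - y)
    return (
        (L(x + h, y, lam) - L(x - h, y, lam)) / (2 * h),
        (L(x, y + h, lam) - L(x, y - h, lam)) / (2 * h),
        (L(x, y, lam + h) - L(x, y, lam - h)) / (2 * h),
    )

g = grad_L(0.5, 0.5, 0.5)
assert all(abs(v) < 1e-8 for v in g)   # all partial derivatives vanish
```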



Example 7.3. Find the optimal distribution of a limited resource of b units among n consumers, if the profit received from allocating x_j units of the resource to the j-th consumer is calculated by the given formula.

Solution. The mathematical model of the problem has the following form:


We compose the Lagrange function:

.

We find partial derivatives of the Lagrange function and equate them to zero:

Solving this system of equations, we get:

Thus, if the j-th consumer is allocated the found number of units of the resource, the total profit reaches its maximum value (in monetary units).
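The profit formula of Example 7.3 is not preserved in this text, so the following sketch assumes, purely for illustration, a profit of c_j·√x_j for the j-th consumer and a total resource of b units; the coefficients c_j and the value of b below are hypothetical. Under that assumption the stationarity conditions give a closed-form allocation.

```python
import math

# Assumed model (not the original formula): profit c_j * sqrt(x_j) for
# consumer j, subject to sum(x_j) = b.  Stationarity,
# c_j / (2*sqrt(x_j)) = lam for all j, makes x_j proportional to c_j**2,
# and the constraint fixes the scale:
#     x_j = b * c_j**2 / sum(c_k**2)

def allocate(c, b):
    s = sum(cj ** 2 for cj in c)
    return [b * cj ** 2 / s for cj in c]

c = [3.0, 4.0]      # hypothetical profit coefficients
b = 100.0           # hypothetical resource total
x = allocate(c, b)
profit = sum(cj * math.sqrt(xj) for cj, xj in zip(c, x))

assert abs(sum(x) - b) < 1e-9   # the allocation exhausts the resource
# it also beats a nearby feasible allocation
alt = [x[0] + 1.0, x[1] - 1.0]
assert profit > sum(cj * math.sqrt(xj) for cj, xj in zip(c, alt))
```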

We examined the Lagrange method as applied to a classical optimization problem. This method can be generalized to the case where the variables are non-negative and some constraints are given in the form of inequalities. However, this generalization is primarily theoretical and does not lead to specific computational algorithms.

In conclusion, let us give an economic interpretation of the Lagrange multipliers. To do this, consider the simplest classical optimization problem

max (min) z = f(x1, x2); (7.24)

φ(x1, x2) = b. (7.25)

Let us assume that the conditional extremum is reached at the point (x1*, x2*), with the corresponding extreme value of the function f* = f(x1*, x2*). Suppose that in constraint (7.25) the quantity b may change; then the coordinates of the extremum point, and hence the extreme value f*, become quantities depending on b. Differentiating, it turns out that df*/db = λ, i.e. the Lagrange multiplier equals the rate of change of the extreme value of function (7.24) with respect to the constraint constant b.

The Lagrange multiplier method (in the English literature, "Lagrange's method of undetermined multipliers") is a method for solving optimization problems that makes it possible to determine a "conditional" extremum of the objective function (a minimum or maximum value) in the presence of given constraints on its variables in the form of equalities (which define the region of admissible values).

A conditional extremum is attained at those values of the function's arguments (the controllable parameters) within the admissible region at which the function reaches an extremum. The name "conditional" extremum reflects the fact that the variables are subject to an additional condition that restricts the region of admissible values when searching for the extremum of the function.

The Lagrange multiplier method allows the problem of searching for a conditional extremum of the objective function on the set of admissible values to be transformed into a problem of unconditional optimization of a function.

If the functions f and φ are continuous together with their partial derivatives, then there exist multipliers λ, not all simultaneously equal to zero, for which the following condition is satisfied at the extremum point: the gradient of the objective function is a linear combination of the gradients of the constraints.

Thus, in accordance with the Lagrange multiplier method, to find the extremum of the objective function on the set of admissible values one composes the Lagrange function L(x, λ), which is then optimized:

where λ is a vector of additional variables called undetermined Lagrange multipliers.

Thus, the problem of finding the conditional extremum of the function f(x) has been reduced to the problem of finding the unconditional extremum of the function L(x, λ).


The necessary condition for the extremum of the Lagrange function is given by a system of equations (the system consists of “n + m” equations):

Solving this system of equations makes it possible to determine the arguments x at which the value of the function L(x, λ), and hence of the objective function f(x), corresponds to an extremum.

The magnitude of the Lagrange multipliers λ is of practical interest when the constraints are presented with a free (constant) term in the equation. In that case one can estimate the further increase or decrease of the objective function's optimal value caused by changing the value of that constant in the system of equations. Thus, the Lagrange multiplier characterizes the rate of change of the optimum of the objective function when the limiting constant changes.
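This sensitivity interpretation can be checked numerically on a toy problem of my own choosing (not from the text): maximize f(x, y) = xy subject to x + y = b, whose Lagrange system gives x = y = b/2 and multiplier λ = b/2.

```python
# Optimal value f*(b) = b**2 / 4 and multiplier lam(b) = b / 2 for the
# toy problem max x*y subject to x + y = b (both derived by hand).

def f_star(b):
    return (b / 2) * (b / 2)

def lam(b):
    return b / 2

b, h = 10.0, 1e-6
dfdb = (f_star(b + h) - f_star(b - h)) / (2 * h)  # numerical df*/db
assert abs(dfdb - lam(b)) < 1e-6   # rate of change of the optimum equals lam
```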

There are several ways to determine the nature of the extremum of the resulting function:

First method: Let x* be the coordinates of the extremum point and f* = f(x*) the corresponding value of the objective function. A point x̃ close to x* (and satisfying the constraints) is taken and the value of the objective function f̃ = f(x̃) is calculated:

If f* > f̃, then there is a maximum at the point x*.

If f* < f̃, then there is a minimum at the point x*.

Second method: A sufficient condition from which the nature of the extremum can be determined is the sign of the second differential of the Lagrange function, d²L = Σ_i Σ_j (∂²L/∂x_i ∂x_j) dx_i dx_j.

If at the given point d²L > 0, the objective function f(x) has a conditional minimum; if d²L < 0, a conditional maximum.

Third method: The nature of the extremum of the function can also be determined from the Hessian of the Lagrange function. The Hessian is a symmetric square matrix of the second partial derivatives of the function at the given point, whose elements are symmetric about the main diagonal.

To determine the type of extremum (maximum or minimum of a function), you can use Sylvester’s rule:

1. For the second differential of the Lagrange function to be positive in sign, the angular minors of the matrix must all be positive. Under such conditions, the function has a minimum at this point.

2. For the second differential of the Lagrange function to be negative in sign, the angular minors of the matrix must alternate in sign, starting with a negative first element. Under such conditions, the function has a maximum at this point.

By an angular minor we mean the minor formed by the first k rows and k columns of the original matrix.
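Sylvester's rule can be applied mechanically to the leading principal (angular) minors. The sketch below is a plain illustration for an ordinary symmetric matrix; note that for conditional extrema the rigorous tool is the bordered Hessian, which this simple check does not implement.

```python
def det(m):
    """Determinant by Laplace expansion along the first row
    (adequate for the small matrices used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def classify(H):
    """Sylvester's rule applied to the angular (leading principal) minors."""
    minors = [det([row[:k] for row in H[:k]]) for k in range(1, len(H) + 1)]
    if all(mk > 0 for mk in minors):
        return "positive definite (minimum)"
    if all(mk > 0 if k % 2 == 0 else mk < 0
           for k, mk in enumerate(minors, 1)):
        return "negative definite (maximum)"
    return "indefinite or semidefinite"

assert classify([[2, 0], [0, 3]]) == "positive definite (minimum)"
assert classify([[-2, 0], [0, -3]]) == "negative definite (maximum)"
```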

The main practical significance of the Lagrange method is that it allows one to pass from constrained to unconstrained optimization and thereby enlarge the arsenal of methods available for solving the problem. However, the problem of solving the resulting system of equations, to which the method reduces, is in general no simpler than the original extremum problem. Such methods are called indirect. Their use is justified by the need to obtain the solution of the extremal problem in analytical form (for example, for certain theoretical calculations). When solving specific practical problems, direct methods are usually used, based on iterative processes of computing and comparing the values of the functions being optimized.

Calculation method

Step 1: Determine the Lagrange function from the given objective function and system of constraints:


Consider a constrained optimization problem containing only constraints in the form of equalities

min f(x)

subject to the constraints

h_k(x) = 0, k = 1, …, m.

This problem can, in principle, be solved as an unconstrained optimization problem obtained by eliminating m independent variables from the objective function using given equalities. The presence of constraints in the form of equalities actually makes it possible to reduce the dimension of the original problem.

The new problem can then be solved using a suitable unconstrained optimization method.

Example. It is required to minimize a function of three variables subject to a single equality constraint. By eliminating one of the variables with the help of the constraint equation, we obtain an optimization problem in two variables without constraints.

However, the variable-elimination method is applicable only when the constraint equations can be solved for a particular set of variables. With a large number of equality constraints, eliminating variables becomes a very labor-intensive procedure. Moreover, there may be situations where an equation cannot be solved for a variable. In these cases it is advisable to use the Lagrange multiplier method.
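Since the formulas of the example above were lost, here is a self-contained sketch of variable elimination on a toy problem of my own choosing: minimize x² + y² + z² subject to x + y + z = 1. Substituting z = 1 − x − y leaves an unconstrained two-variable problem, minimized here by plain gradient descent.

```python
def g(x, y):
    """Reduced objective after eliminating z = 1 - x - y."""
    z = 1 - x - y
    return x * x + y * y + z * z

# simple gradient descent on the reduced problem;
# the gradient of g is (2x - 2z, 2y - 2z) with z = 1 - x - y
x = y = 0.0
step = 0.1
for _ in range(500):
    z = 1 - x - y
    x -= step * (2 * x - 2 * z)
    y -= step * (2 * y - 2 * z)

# the minimizer of the original constrained problem is x = y = z = 1/3
assert abs(x - 1 / 3) < 1e-9 and abs(y - 1 / 3) < 1e-9
```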

The Lagrange multiplier method essentially establishes necessary conditions that make it possible to identify optimum points in optimization problems with equality constraints.

Let's consider the problem

min f(x)

subject to the constraints

h_k(x) = 0, k = 1, …, m.

From the course of mathematical analysis it is well known that the conditional minimum point of the function coincides with a saddle point of the Lagrange function

L(x, λ) = f(x) + Σ_{k=1}^{m} λ_k h_k(x),

where the saddle point must provide a minimum with respect to the variables x and a maximum with respect to the parameters λ. These parameters are called Lagrange multipliers. Setting the partial derivatives of L with respect to x and with respect to λ equal to zero, we obtain the necessary conditions for a stationary point:

∂L/∂x_i = ∂f/∂x_i + Σ_{k=1}^{m} λ_k ∂h_k/∂x_i = 0, i = 1, …, n;

∂L/∂λ_k = h_k(x) = 0, k = 1, …, m.

The solution of this system of n + m equations determines the stationary point of the Lagrange function. Sufficient conditions for a minimum of the original problem contain, in addition to the conditions above, the positive definiteness of the Hessian matrix of the objective function.

4.2. Kuhn-Tucker conditions

Let us consider the nonlinear programming problem

min f(x)

with constraints in the form of inequalities

g_j(x) ≤ 0, j = 1, …, m.

Let us reduce the inequality constraints to equality constraints by adding a slack variable to each of them:

g_j(x) + s_j² = 0, j = 1, …, m.

Let us form the Lagrange function:

L(x, s, λ) = f(x) + Σ_{j=1}^{m} λ_j (g_j(x) + s_j²).

Then the necessary conditions for a minimum take the form

∂L/∂x_i = 0, i = 1, …, n;

g_j(x) + s_j² = 0, j = 1, …, m;

2 λ_j s_j = 0, j = 1, …, m.

One can multiply the last equation by s_j/2 and eliminate the slack variables by expressing them from the second equation; the second equation can then be transformed by discarding the slack variables and passing back to the inequality constraints. One more condition should be added, λ_j ≥ 0, which must be fulfilled at the conditional minimum point.

Finally, we obtain the necessary conditions for a minimum of a nonlinear programming problem with inequality constraints, which are called the Kuhn-Tucker conditions:

∂f/∂x_i + Σ_{j=1}^{m} λ_j ∂g_j/∂x_i = 0, i = 1, …, n; (1)

g_j(x) ≤ 0, j = 1, …, m; (2)

λ_j g_j(x) = 0, j = 1, …, m; (3)

λ_j ≥ 0, j = 1, …, m. (4)
An inequality constraint g_j(x) ≤ 0 is called active at a point x* if it turns into an equality there, g_j(x*) = 0, and inactive if g_j(x*) < 0. If the constraints that are inactive at the optimum point could be detected before the problem is solved directly, these constraints could be excluded from the model, reducing its size.

By condition (3), if λ_j > 0, then g_j(x*) = 0: the constraint is active and behaves as an equality constraint. On the other hand, if the constraint holds as a strict inequality, g_j(x*) < 0, then the Lagrange multiplier is λ_j = 0, i.e. the constraint is inactive and can be ignored.

Of course, it is not known in advance which constraints can be neglected.
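The four Kuhn-Tucker conditions can be verified numerically at a candidate point. A minimal sketch on a toy problem of my own choosing (not from the text):

```python
# Numerical check of conditions (1)-(4) on a toy problem (my own example):
# minimize f = (x-1)**2 + (y-1)**2 subject to g = x + y - 1 <= 0.
# Candidate solution: x = y = 1/2 with multiplier lam = 1.

x, y, lam = 0.5, 0.5, 1.0
grad_f = (2 * (x - 1), 2 * (y - 1))   # = (-1, -1)
grad_g = (1.0, 1.0)
g_val = x + y - 1                     # = 0: the constraint is active

stationarity = all(abs(df + lam * dg) < 1e-12
                   for df, dg in zip(grad_f, grad_g))    # condition (1)
feasibility = g_val <= 1e-12                             # condition (2)
complementarity = abs(lam * g_val) < 1e-12               # condition (3)
nonnegativity = lam >= 0                                 # condition (4)

assert stationarity and feasibility and complementarity and nonnegativity
```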

Description of the method

Rationale. The following justification of the Lagrange multiplier method is not a rigorous proof; it contains heuristic reasoning that helps to understand the geometric meaning of the method.

Two-dimensional case. Suppose it is required to find the extremum of a function f of two variables under a condition specified by the equation ψ(x, y) = 0. We assume that all functions are continuously differentiable and that this equation defines a smooth curve S in the plane, not passing through points where the gradient of f vanishes. The problem then reduces to finding the extremum of f on the curve S.

Draw the level lines of f in the plane, that is, the curves f(x, y) = const. Geometric considerations show that an extremum of f on S is possible only at points where the tangent to S and the tangent to the corresponding level line coincide. Indeed, if the curve S crosses a level line of f transversally (that is, at some non-zero angle), then, moving along S from that point, we can reach level lines corresponding both to a larger value of f and to a smaller one; such a point therefore cannot be an extremum point.

Thus, a necessary condition for an extremum in our case is the coincidence of the tangents. To write it in analytical form, note that it is equivalent to the parallelism of the gradients of f and ψ at the given point, since the gradient vector is perpendicular to the tangent of the level line. This condition is expressed in the form ∇f = λ∇ψ, where λ is a number, the Lagrange multiplier.

Now consider the Lagrange function, depending on x, y and λ:

L(x, y, λ) = f(x, y) + λψ(x, y).

A necessary condition for its extremum is that its gradient be equal to zero. In accordance with the rules of differentiation, this gives a system whose first two equations are equivalent to the necessary condition for a conditional extremum, ∇f = λ∇ψ, and whose third equation is equivalent to the condition ψ(x, y) = 0. The points found in this way need not be the desired conditional extremum points: the condition considered is necessary but not sufficient. Finding a conditional extremum with the help of the auxiliary function L forms the basis of the Lagrange multiplier method, applied here in the simplest case of two variables. The reasoning above generalizes to any number of variables and condition equations.
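The parallel-gradient condition ∇f = λ∇ψ can be checked numerically on a concrete toy problem (my own example, not from the text):

```python
import math

# Toy illustration of the two-dimensional condition grad f = lam * grad psi:
# extremum of f(x, y) = x + y on the circle psi(x, y) = x**2 + y**2 - 1 = 0,
# attained at x = y = sqrt(2)/2.

x = y = math.sqrt(2) / 2
grad_f = (1.0, 1.0)
grad_psi = (2 * x, 2 * y)

# parallel plane vectors have a zero scalar "cross product"
cross = grad_f[0] * grad_psi[1] - grad_f[1] * grad_psi[0]
lam = grad_f[0] / grad_psi[0]    # the multiplier in grad f = lam * grad psi

assert abs(cross) < 1e-12                      # gradients are parallel
assert abs(x * x + y * y - 1) < 1e-12          # the point lies on the curve
```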

Based on the Lagrange multiplier method, it is possible to prove some sufficient conditions for a conditional extremum, which require analysis of the second derivatives of the Lagrange function.

Application

  • The Lagrange multiplier method is used to solve nonlinear programming problems that arise in many fields (for example, in economics).
  • It is the main method for solving the problem of optimizing the quality of audio and video encoding at a given average bitrate (rate-distortion optimization).




Example of problem solution

The task

A company produces two types of goods in certain quantities. The profit function is determined by a given relation, and the market prices of the two goods are given. Determine at what output volumes the maximum profit is achieved, and what it equals, if the total costs must not exceed a given amount.


The solution of the problem

Economic and mathematical model of the problem

Profit function:

Cost restrictions:

We get the following economic and mathematical model:

Moreover, by the meaning of the problem, the variables are non-negative.

Lagrange multiplier method

Let's compose the Lagrange function:

We find the 1st order partial derivatives:

Let's create and solve a system of equations:

Since the variables are non-negative by the meaning of the problem, we obtain:

Maximum profit:

Answer
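The concrete functions of this example were not preserved, so the following worked sketch uses an assumed model, purely for illustration: maximize revenue p1·x1 + p2·x2 subject to the cost budget x1² + x2² = b; the prices and budget below are hypothetical. The Lagrange system p_j = 2λx_j together with the budget gives λ = √((p1² + p2²)/(4b)) and x_j = p_j/(2λ).

```python
import math

# Hypothetical data (not from the original example): prices 3 and 4,
# cost budget 25.
p1, p2, b = 3.0, 4.0, 25.0

lam = math.sqrt((p1 ** 2 + p2 ** 2) / (4 * b))   # Lagrange multiplier
x1, x2 = p1 / (2 * lam), p2 / (2 * lam)          # optimal outputs
profit = p1 * x1 + p2 * x2                       # maximum revenue

assert abs(x1 ** 2 + x2 ** 2 - b) < 1e-9   # the budget is used exactly
assert abs(profit - 25.0) < 1e-9
```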