Numerical methods: the method of undetermined Lagrange multipliers. Economic and mathematical model of the problem

The determination of a conditional extremum begins with constructing an auxiliary Lagrange function, which, in the region of feasible solutions, reaches a maximum for the same values of the variables x1, x2, ..., xn as the objective function z. Let the problem be to determine the conditional extremum of the function z = f(X) under the constraints φi(x1, x2, ..., xn) = 0, i = 1, 2, ..., m, with m < n.

Let's compose a function

which is called the Lagrange function; λi are constant factors (Lagrange multipliers). Note that the Lagrange multipliers can be given an economic meaning. If f(x1, x2, ..., xn) is the income corresponding to the plan X = (x1, x2, ..., xn), and φi(x1, x2, ..., xn) is the cost of the i-th resource corresponding to this plan, then λi is the price (valuation) of the i-th resource, characterizing how the extreme value of the objective function changes with the amount of the i-th resource (a marginal estimate). L(X) is a function of n + m variables (x1, x2, ..., xn, λ1, λ2, ..., λm). Determining the stationary points of this function leads to solving the system of equations

It is easy to see that . Thus, the problem of finding the conditional extremum of the function z = f(X) reduces to finding a local extremum of the function L(X). If a stationary point has been found, the question of the existence of an extremum is, in the simplest cases, resolved using the sufficient conditions for an extremum: studying the sign of the second differential d2L(X) at the stationary point, provided that the increments Δxi of the variables are related by the relations

obtained by differentiating the constraint equations.

Solving a system of nonlinear equations in two unknowns using the Solver add-in

The Solver add-in allows you to find a solution of a system of nonlinear equations in two unknowns:

where fi(x, y) is a nonlinear function of the variables x and y, and Ci is an arbitrary constant (i = 1, 2).

It is known that a pair (x, y) is a solution of system (10) if and only if it is a solution of the following equation in two unknowns:

On the other hand, the solutions of system (10) are the intersection points of the two curves f1(x, y) = C1 and f2(x, y) = C2 in the XOY plane.

This leads to the following method for finding the roots of a system of nonlinear equations:

    1. Determine (at least approximately) the interval in which a solution of the system of equations (10) or of equation (11) exists. Here one must take into account the form of the equations in the system, the domain of definition of each equation, and so on. Sometimes an initial approximation to the solution is selected;

    2. Tabulate the solution of equation (11) in the variables x and y on the selected interval, or plot the graphs of the functions f1(x, y) = C1 and f2(x, y) = C2 (system (10)).

    3. Localize the presumed roots of the system of equations: find several smallest values in the tabulation of equation (11), or determine the intersection points of the curves in system (10).

    4. Find the roots of the system of equations (10) using the Solver add-in.
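The tabulate-localize-refine procedure above can also be carried out programmatically instead of with the spreadsheet add-in. Below is a minimal sketch in Python; the system x² + y² = 10, 3x − y = 0 is an assumed illustration, and the refinement step uses Newton's method with a finite-difference Jacobian starting from a rough guess of the kind step 3 produces:

```python
def newton2(f1, f2, x0, y0, h=1e-6, tol=1e-10, max_iter=50):
    """Newton's method for the system f1(x, y) = 0, f2(x, y) = 0."""
    x, y = x0, y0
    for _ in range(max_iter):
        F1, F2 = f1(x, y), f2(x, y)
        # finite-difference Jacobian
        a = (f1(x + h, y) - F1) / h   # df1/dx
        b = (f1(x, y + h) - F1) / h   # df1/dy
        c = (f2(x + h, y) - F2) / h   # df2/dx
        d = (f2(x, y + h) - F2) / h   # df2/dy
        det = a * d - b * c
        # solve J * (dx, dy) = -(F1, F2) by Cramer's rule
        dx = (-F1 * d + F2 * b) / det
        dy = (-F2 * a + F1 * c) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# illustrative system: f1(x, y) = x^2 + y^2 - 10 = 0, f2(x, y) = 3x - y = 0
x, y = newton2(lambda x, y: x**2 + y**2 - 10,
               lambda x, y: 3 * x - y,
               1.0, 1.0)   # rough initial guess from the tabulation step
print(x, y)  # converges to the root near (1, 3)
```

The other root of this illustrative system, (−1, −3), is found the same way by starting the iteration from a guess in the opposite quadrant.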

The Lagrange multiplier method is a classical method for solving mathematical programming problems (in particular, convex ones). Unfortunately, in practical applications the method may run into significant computational difficulties, which narrows its scope. We consider the Lagrange method here mainly because it is an apparatus actively used to justify various modern numerical methods that are widely applied in practice. As for the Lagrange function and Lagrange multipliers, they play an independent and extremely important role in the theory and applications of mathematical programming and beyond.

Consider the classical optimization problem

max (min) z=f(x) (7.20)

This problem differs from problem (7.18), (7.19) in that among the constraints (7.21) there are no inequalities, there are no conditions that the variables be non-negative or discrete, and the functions are continuous and have partial derivatives of at least the second order.

The classical approach to solving problem (7.20), (7.21) yields a system of equations (necessary conditions) that must be satisfied by a point x* that provides the function f(x) with a local extremum on the set of points satisfying constraints (7.21) (for a convex programming problem, by Theorem 7.6 the point x* found will simultaneously be a global extremum point).

Let us assume that at the point x* the function (7.20) has a local conditional extremum and the rank of the constraint matrix is equal to m. Then the necessary conditions can be written in the form:

(7.22)

where L is the Lagrange function and λi are the Lagrange multipliers.

There are also sufficient conditions under which the solution of the system of equations (7.22) determines an extremum point of the function f(x). This question is resolved by studying the sign of the second differential of the Lagrange function. However, the sufficient conditions are mainly of theoretical interest.

The following procedure can be used to solve problem (7.20), (7.21) by the Lagrange multiplier method:

1) compose the Lagrange function (7.23);

2) find the partial derivatives of the Lagrange function with respect to all variables and set them equal to zero. This yields system (7.22), consisting of n + m equations. Solve the resulting system (if this turns out to be possible!) and thus find all stationary points of the Lagrange function;

3) from the stationary points, taken without their λ-coordinates, select the points at which the function f(x) has conditional local extrema under constraints (7.21). This selection is made, for example, using sufficient conditions for a local extremum. Often the analysis is simplified if the specific conditions of the problem are used.



Example 7.3. Find the optimal distribution of a limited resource among n consumers, if the profit received from allocating xj units of the resource to the j-th consumer is calculated by the formula .

Solution. The mathematical model of the problem is as follows:


We compose the Lagrange function:

.

We find partial derivatives of the Lagrange function and equate them to zero:

Solving this system of equations, we get:

Thus, if the j-th consumer is allocated units of the resource, then the total profit reaches its maximum value and amounts to monetary units.

We have examined the Lagrange method as applied to the classical optimization problem. The method can be generalized to the case where the variables are non-negative and some constraints are given in the form of inequalities. However, this generalization is mainly theoretical and does not lead to specific computational algorithms.

In conclusion, let us give the Lagrange multipliers an economic interpretation. To do this, consider the simplest classical optimization problem:

max (min) z = f(x1, x2); (7.24)

φ(x1, x2) = b. (7.25)

Let us assume that the conditional extremum is attained at the point . The corresponding extreme value of the function f(x) is

Let us assume that in constraint (7.25) the quantity b may change; then the coordinates of the extremum point, and hence the extreme value f* of the function f(x), become quantities depending on b, and therefore one can consider the derivative of the function (7.24) with respect to b.
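The standard result behind this interpretation, df*/db = λ (the multiplier as a marginal estimate), can be checked numerically. The sketch below uses an assumed illustrative problem, max xy subject to x + y = b, whose solution x = y = b/2 and multiplier λ = b/2 are easy to derive by hand from the stationarity conditions y + λ·(−1)·(−1)... more simply, from y = λ, x = λ, 2λ = b:

```python
def f_star(b):
    # max xy subject to x + y = b has solution x = y = b/2,
    # so the optimal value is f*(b) = b^2 / 4
    return (b / 2) ** 2

b = 10.0
lam = b / 2          # Lagrange multiplier at the optimum (derived by hand)
eps = 1e-6
marginal = (f_star(b + eps) - f_star(b)) / eps   # numerical df*/db
print(marginal, lam)  # the two values agree: the multiplier is the marginal estimate
```

Increasing the "resource" b by a small amount increases the optimal value by approximately λ times that amount, which is exactly the "price of the resource" reading given above.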

First, let us consider the case of a function of two variables. A conditional extremum of the function $z=f(x,y)$ at the point $M_0(x_0;y_0)$ is an extremum of this function attained under the condition that the variables $x$ and $y$ in a neighborhood of this point satisfy the constraint equation $\varphi(x,y)=0$.

The extremum is called "conditional" because an additional condition $\varphi(x,y)=0$ is imposed on the variables. If one variable can be expressed through the other from the constraint equation, the problem of determining a conditional extremum reduces to the problem of the ordinary extremum of a function of one variable. For example, if the constraint equation yields $y=\psi(x)$, then substituting $y=\psi(x)$ into $z=f(x,y)$ gives a function of one variable $z=f\left(x,\psi(x)\right)$. In the general case, however, this method is of little use, so a new algorithm is required.

Lagrange multiplier method for functions of two variables.

The Lagrange multiplier method consists in constructing the Lagrange function for finding a conditional extremum: $F(x,y)=f(x,y)+\lambda\varphi(x,y)$ (the parameter $\lambda$ is called the Lagrange multiplier). The necessary conditions for an extremum are given by a system of equations from which the stationary points are determined:

$$ \left\{ \begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0. \end{aligned} \right. $$

A sufficient condition, from which the nature of the extremum can be determined, is the sign of $d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2$. If at a stationary point $d^2F > 0$, then the function $z=f(x,y)$ has a conditional minimum at this point; if $d^2F < 0$, a conditional maximum.

There is another way to determine the nature of the extremum. From the constraint equation we obtain $\varphi_{x}^{'}dx+\varphi_{y}^{'}dy=0$, i.e. $dy=-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx$, therefore at any stationary point we have:

$$d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=F_{xx}^{''}dx^2+2F_{xy}^{''}dx\left(-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx\right)+F_{yy}^{''}\left(-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx\right)^2=\\ =-\frac{dx^2}{\left(\varphi_{y}^{'}\right)^2}\cdot\left(-\left(\varphi_{y}^{'}\right)^2 F_{xx}^{''}+2\varphi_{x}^{'}\varphi_{y}^{'}F_{xy}^{''}-\left(\varphi_{x}^{'}\right)^2 F_{yy}^{''}\right)$$

The second factor (in parentheses) can be represented in the form:

The elements of the determinant $\left|\begin{array}{cc} F_{xx}^{''} & F_{xy}^{''} \\ F_{xy}^{''} & F_{yy}^{''} \end{array}\right|$, which is the Hessian of the Lagrange function, are highlighted in red. If $H > 0$, then $d^2F < 0$, which indicates a conditional maximum. Similarly, if $H < 0$, then $d^2F > 0$, i.e. we have a conditional minimum of the function $z=f(x,y)$.

A note regarding the notation of the determinant $H$.

$$ H=-\left|\begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right| $$

In this situation the rule formulated above changes as follows: if $H > 0$, the function has a conditional minimum, and if $H < 0$, a conditional maximum of the function $z=f(x,y)$. When solving problems, these nuances should be taken into account.

Algorithm for studying a function of two variables for a conditional extremum

  1. Compose the Lagrange function $F(x,y)=f(x,y)+\lambda\varphi(x,y)$.
  2. Solve the system $\left\{ \begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0. \end{aligned} \right.$
  3. Determine the nature of the extremum at each of the stationary points found in the previous step. To do this, use either of the following methods:
    • Compose the determinant $H$ and find out its sign
    • Taking the constraint equation into account, determine the sign of $d^2F$
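A minimal Python sketch of step 3, assuming the bordered-determinant convention used above ($H>0$ conditional maximum, $H<0$ conditional minimum): it evaluates $H$ at a given stationary point. The sample data ($z=x+3y$ with constraint $x^2+y^2-10=0$, stationary point $(1;3)$, $\lambda=-\frac{1}{2}$) match Example No. 1 below, where the same value $H=40$ is obtained by hand:

```python
def bordered_H(phi_x, phi_y, F_xx, F_xy, F_yy):
    """Determinant of the bordered Hessian
        | 0      phi_x  phi_y |
        | phi_x  F_xx   F_xy  |
        | phi_y  F_xy   F_yy  |
    expanded along the first row.
    H > 0 -> conditional maximum, H < 0 -> conditional minimum."""
    return (-phi_x * (phi_x * F_yy - F_xy * phi_y)
            + phi_y * (phi_x * F_xy - F_xx * phi_y))

# data for F(x, y) = x + 3y + lam*(x^2 + y^2 - 10) at the point (1; 3), lam = -1/2:
x, y, lam = 1.0, 3.0, -0.5
H = bordered_H(phi_x=2 * x, phi_y=2 * y, F_xx=2 * lam, F_xy=0.0, F_yy=2 * lam)
print(H)  # 40.0 -> conditional maximum at (1; 3)
```

Calling the same function with the data of the second stationary point $(-1;-3)$, $\lambda=\frac{1}{2}$, gives $H=-40$, a conditional minimum.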

Lagrange multiplier method for functions of n variables

Suppose we have a function of $n$ variables $z=f(x_1,x_2,\ldots,x_n)$ and $m$ constraint equations ($n > m$):

$$\varphi_1(x_1,x_2,\ldots,x_n)=0; \; \varphi_2(x_1,x_2,\ldots,x_n)=0; \; \ldots; \; \varphi_m(x_1,x_2,\ldots,x_n)=0.$$

Denoting the Lagrange multipliers by $\lambda_1,\lambda_2,\ldots,\lambda_m$, we form the Lagrange function:

$$F(x_1,x_2,\ldots,x_n,\lambda_1,\lambda_2,\ldots,\lambda_m)=f+\lambda_1\varphi_1+\lambda_2\varphi_2+\ldots+\lambda_m\varphi_m$$

The necessary conditions for a conditional extremum are given by a system of equations from which the coordinates of the stationary points and the values of the Lagrange multipliers are found:

$$\left\{\begin{aligned} & \frac{\partial F}{\partial x_i}=0; \; (i=\overline{1,n})\\ & \varphi_j=0; \; (j=\overline{1,m}) \end{aligned} \right.$$

Whether the function has a conditional minimum or a conditional maximum at a point found can be determined, as before, from the sign of $d^2F$. If at the point found $d^2F > 0$, the function has a conditional minimum; if $d^2F < 0$, a conditional maximum. One can also proceed differently by considering the following matrix:

The determinant of the matrix $\left|\begin{array}{ccccc} \frac{\partial^2F}{\partial x_1^2} & \frac{\partial^2F}{\partial x_1\partial x_2} & \frac{\partial^2F}{\partial x_1\partial x_3} & \ldots & \frac{\partial^2F}{\partial x_1\partial x_n}\\ \frac{\partial^2F}{\partial x_2\partial x_1} & \frac{\partial^2F}{\partial x_2^2} & \frac{\partial^2F}{\partial x_2\partial x_3} & \ldots & \frac{\partial^2F}{\partial x_2\partial x_n}\\ \frac{\partial^2F}{\partial x_3\partial x_1} & \frac{\partial^2F}{\partial x_3\partial x_2} & \frac{\partial^2F}{\partial x_3^2} & \ldots & \frac{\partial^2F}{\partial x_3\partial x_n}\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ \frac{\partial^2F}{\partial x_n\partial x_1} & \frac{\partial^2F}{\partial x_n\partial x_2} & \frac{\partial^2F}{\partial x_n\partial x_3} & \ldots & \frac{\partial^2F}{\partial x_n^2} \end{array}\right|$, highlighted in red in the matrix $L$, is the Hessian of the Lagrange function. We use the following rule:

  • If the signs of the leading principal minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ of the matrix $L$ coincide with the sign of $(-1)^m$, then the stationary point under study is a conditional minimum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$.
  • If the signs of the leading principal minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ alternate, and the sign of the minor $H_{2m+1}$ coincides with the sign of $(-1)^{m+1}$, then the stationary point is a conditional maximum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$.

Example No. 1

Find the conditional extremum of the function $z(x,y)=x+3y$ under the condition $x^2+y^2=10$.

The geometric interpretation of this problem is as follows: it is required to find the largest and smallest values of the $z$-coordinate of the plane $z=x+3y$ at the points of its intersection with the cylinder $x^2+y^2=10$.

It is somewhat cumbersome to express one variable through the other from the constraint equation and substitute it into the function $z(x,y)=x+3y$, so we will use the Lagrange method.

Denoting $\varphi(x,y)=x^2+y^2-10$, we form the Lagrange function:

$$ F(x,y)=z(x,y)+\lambda \varphi(x,y)=x+3y+\lambda(x^2+y^2-10);\\ \frac{\partial F}{\partial x}=1+2\lambda x; \; \frac{\partial F}{\partial y}=3+2\lambda y. $$

Let us write the system of equations for determining the stationary points of the Lagrange function:

$$ \left\{ \begin{aligned} & 1+2\lambda x=0;\\ & 3+2\lambda y=0;\\ & x^2+y^2-10=0. \end{aligned}\right.$$

If we assume $\lambda=0$, the first equation becomes $1=0$. This contradiction shows that $\lambda\neq 0$. With $\lambda\neq 0$, from the first and second equations we have $x=-\frac{1}{2\lambda}$, $y=-\frac{3}{2\lambda}$. Substituting these values into the third equation, we obtain:

$$ \left(-\frac{1}{2\lambda} \right)^2+\left(-\frac{3}{2\lambda} \right)^2-10=0;\\ \frac{1}{4\lambda^2}+\frac{9}{4\lambda^2}=10; \; \lambda^2=\frac{1}{4}; \; \left[ \begin{aligned} & \lambda_1=-\frac{1}{2};\\ & \lambda_2=\frac{1}{2}. \end{aligned} \right.\\ \begin{aligned} & \lambda_1=-\frac{1}{2}; \; x_1=-\frac{1}{2\lambda_1}=1; \; y_1=-\frac{3}{2\lambda_1}=3;\\ & \lambda_2=\frac{1}{2}; \; x_2=-\frac{1}{2\lambda_2}=-1; \; y_2=-\frac{3}{2\lambda_2}=-3.\end{aligned} $$

So, the system has two solutions: $x_1=1;\; y_1=3;\; \lambda_1=-\frac{1}{2}$ and $x_2=-1;\; y_2=-3;\; \lambda_2=\frac{1}{2}$. Let us determine the nature of the extremum at each stationary point, $M_1(1;3)$ and $M_2(-1;-3)$. To do this, we calculate the determinant $H$ at each point.

$$ \varphi_{x}^{'}=2x;\; \varphi_{y}^{'}=2y;\; F_{xx}^{''}=2\lambda;\; F_{xy}^{''}=0;\; F_{yy}^{''}=2\lambda.\\ H=\left| \begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right|= \left| \begin{array}{ccc} 0 & 2x & 2y\\ 2x & 2\lambda & 0 \\ 2y & 0 & 2\lambda \end{array} \right|= 8\cdot\left| \begin{array}{ccc} 0 & x & y\\ x & \lambda & 0 \\ y & 0 & \lambda \end{array} \right| $$

At the point $M_1(1;3)$ we get: $H=8\cdot\left| \begin{array}{ccc} 0 & 1 & 3\\ 1 & -1/2 & 0 \\ 3 & 0 & -1/2 \end{array} \right|=40 > 0$, so at the point $M_1(1;3)$ the function $z(x,y)=x+3y$ has a conditional maximum, $z_{\max}=z(1;3)=10$.

Similarly, at the point $M_2(-1;-3)$ we find: $H=8\cdot\left| \begin{array}{ccc} 0 & -1 & -3\\ -1 & 1/2 & 0 \\ -3 & 0 & 1/2 \end{array} \right|=-40$. Since $H < 0$, at the point $M_2(-1;-3)$ we have a conditional minimum of the function $z(x,y)=x+3y$, namely $z_{\min}=z(-1;-3)=-10$.

Note that instead of calculating the value of the determinant $H$ at each point, it is much more convenient to expand it in general form. In order not to clutter the text with details, I will hide this method in a note.

Writing the determinant $H$ in general form.

$$ H=8\cdot\left|\begin{array}{ccc}0&x&y\\x&\lambda&0\\y&0&\lambda\end{array}\right| =8\cdot\left(-\lambda y^2-\lambda x^2\right) =-8\lambda\cdot\left(y^2+x^2\right). $$

In principle, it is already obvious what sign $H$ has. Since neither of the points $M_1$, $M_2$ coincides with the origin, we have $y^2+x^2>0$, so the sign of $H$ is opposite to the sign of $\lambda$. We can complete the calculations:

$$ \begin{aligned} &H(M_1)=-8\cdot\left(-\frac{1}{2}\right)\cdot\left(3^2+1^2\right)=40;\\ &H(M_2)=-8\cdot\frac{1}{2}\cdot\left((-3)^2+(-1)^2\right)=-40. \end{aligned} $$

The question of the nature of the extremum at the stationary points $M_1(1;3)$ and $M_2(-1;-3)$ can also be settled without the determinant $H$. Let us find the sign of $d^2F$ at each stationary point:

$$ d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=2\lambda \left( dx^2+dy^2\right) $$

Note that the notation $dx^2$ means exactly $dx$ raised to the second power, i.e. $\left(dx\right)^2$. Hence $dx^2+dy^2>0$, so for $\lambda_1=-\frac{1}{2}$ we get $d^2F < 0$. Consequently, the function has a conditional maximum at the point $M_1(1;3)$. Similarly, at the point $M_2(-1;-3)$ we obtain a conditional minimum of the function $z(x,y)=x+3y$. Note that to determine the sign of $d^2F$ it was not necessary to take into account the relation between $dx$ and $dy$, since the sign of $d^2F$ is obvious without additional transformations. In the next example, determining the sign of $d^2F$ will already require taking that relation into account.

Answer: at the point $(-1;-3)$ the function has a conditional minimum, $z_{\min}=-10$. At the point $(1;3)$ the function has a conditional maximum, $z_{\max}=10$.

Example No. 2

Find the conditional extremum of the function $z(x,y)=3y^3+4x^2-xy$ under the condition $x+y=0$.

First method (Lagrange multiplier method)

Denoting $\varphi(x,y)=x+y$, we form the Lagrange function: $F(x,y)=z(x,y)+\lambda \varphi(x,y)=3y^3+4x^2-xy+\lambda(x+y)$.

$$ \frac{\partial F}{\partial x}=8x-y+\lambda; \; \frac{\partial F}{\partial y}=9y^2-x+\lambda.\\ \left\{ \begin{aligned} & 8x-y+\lambda=0;\\ & 9y^2-x+\lambda=0; \\ & x+y=0. \end{aligned} \right. $$

Having solved the system, we get: $x_1=0$, $y_1=0$, $\lambda_1=0$ and $x_2=\frac{10}{9}$, $y_2=-\frac{10}{9}$, $\lambda_2=-10$. We have two stationary points: $M_1(0;0)$ and $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$. Let us determine the nature of the extremum at each stationary point using the determinant $H$.

$$H=\left| \begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right|= \left| \begin{array}{ccc} 0 & 1 & 1\\ 1 & 8 & -1 \\ 1 & -1 & 18y \end{array} \right|=-10-18y $$

At the point $M_1(0;0)$ we have $H=-10-18\cdot 0=-10 < 0$, so $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$, $z_{\min}=0$. At the point $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$ we have $H=10 > 0$, so at this point the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.

Let us investigate the nature of the extremum at each point by a different method, based on the sign of $d^2F$:

$$ d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=8dx^2-2dxdy+18ydy^2 $$

From the constraint equation $x+y=0$ we have: $d(x+y)=0$, $dx+dy=0$, $dy=-dx$.

$$ d^2 F=8dx^2-2dxdy+18ydy^2=8dx^2-2dx(-dx)+18y(-dx)^2=(10+18y)dx^2 $$

Since $d^2F \Bigr|_{M_1}=10dx^2 > 0$, the point $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$. Similarly, $d^2F \Bigr|_{M_2}=-10dx^2 < 0$, i.e. $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$ is a conditional maximum point.

Second method

From the constraint equation $x+y=0$ we get: $y=-x$. Substituting $y=-x$ into the function $z(x,y)=3y^3+4x^2-xy$, we obtain a function of the single variable $x$. Let us denote this function by $u(x)$:

$$ u(x)=z(x,-x)=3\cdot(-x)^3+4x^2-x\cdot(-x)=-3x^3+5x^2. $$

Thus, we have reduced the problem of finding the conditional extremum of a function of two variables to the problem of determining the extremum of a function of one variable.

$$ u_{x}^{'}=-9x^2+10x;\\ -9x^2+10x=0; \; x\cdot(-9x+10)=0;\\ x_1=0; \; y_1=-x_1=0;\\ x_2=\frac{10}{9}; \; y_2=-x_2=-\frac{10}{9}. $$

We obtained the points $M_1(0;0)$ and $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$. Further investigation is familiar from the differential calculus of functions of one variable. By examining the sign of $u_{xx}^{''}$ at each stationary point, or by checking the change of sign of $u_{x}^{'}$ at the points found, we reach the same conclusions as in the first method. For example, let us check the sign of $u_{xx}^{''}$:

$$u_{xx}^{''}=-18x+10;\\ u_{xx}^{''}(M_1)=10;\; u_{xx}^{''}(M_2)=-10.$$

Since $u_{xx}^{''}(M_1)>0$, $M_1$ is a minimum point of the function $u(x)$, with $u_{\min}=u(0)=0$. Since $u_{xx}^{''}(M_2)<0$, $M_2$ is a maximum point of the function $u(x)$, with $u_{\max}=u\left(\frac{10}{9}\right)=\frac{500}{243}$.

The values of the function $u(x)$ under the given constraint coincide with the values of the function $z(x,y)$, i.e. the extrema found for $u(x)$ are the sought conditional extrema of $z(x,y)$.

Answer: at the point $(0;0)$ the function has a conditional minimum, $z_{\min}=0$. At the point $\left(\frac{10}{9};-\frac{10}{9}\right)$ the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.
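The reduction used in the second method is easy to check numerically. The sketch below evaluates $u(x)=-3x^3+5x^2$ and its first two derivatives (entered by hand) at the two critical points:

```python
def u(x):
    # z(x, -x) = 3*(-x)**3 + 4*x**2 - x*(-x) = -3x^3 + 5x^2
    return -3 * x**3 + 5 * x**2

def du(x):
    return -9 * x**2 + 10 * x     # u'(x)

def d2u(x):
    return -18 * x + 10           # u''(x)

x1, x2 = 0.0, 10.0 / 9.0          # roots of u'(x) = x*(10 - 9x) = 0
print(du(x1), du(x2))             # both derivatives vanish (up to rounding)
print(d2u(x1) > 0, d2u(x2) < 0)   # minimum at x1, maximum at x2
print(u(x2))                      # maximum value 500/243
```

The printed maximum value agrees with $z_{\max}=\frac{500}{243}$ found above.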

Let us consider one more example, in which we determine the nature of the extremum from the sign of $d^2F$.

Example No. 3

Find the largest and smallest values of the function $z=5xy-4$ if the variables $x$ and $y$ are positive and satisfy the constraint equation $\frac{x^2}{8}+\frac{y^2}{2}-1=0$.

Let us form the Lagrange function: $F=5xy-4+\lambda\left(\frac{x^2}{8}+\frac{y^2}{2}-1\right)$. Let us find the stationary points of the Lagrange function:

$$ F_{x}^{'}=5y+\frac{\lambda x}{4}; \; F_{y}^{'}=5x+\lambda y.\\ \left\{ \begin{aligned} & 5y+\frac{\lambda x}{4}=0;\\ & 5x+\lambda y=0;\\ & \frac{x^2}{8}+\frac{y^2}{2}-1=0;\\ & x > 0; \; y > 0. \end{aligned} \right. $$

All further transformations are carried out taking into account $x > 0$, $y > 0$ (this is specified in the problem statement). From the second equation we express $\lambda=-\frac{5x}{y}$ and substitute the found value into the first equation: $5y-\frac{5x}{y}\cdot\frac{x}{4}=0$, $4y^2-x^2=0$, $x=2y$. Substituting $x=2y$ into the third equation, we get: $\frac{4y^2}{8}+\frac{y^2}{2}-1=0$, $y^2=1$, $y=1$.

Since $y=1$, we have $x=2$ and $\lambda=-10$. We determine the nature of the extremum at the point $(2;1)$ from the sign of $d^2F$.

$$ F_{xx}^{''}=\frac{\lambda}{4}; \; F_{xy}^{''}=5; \; F_{yy}^{''}=\lambda. $$

Since $\frac{x^2}{8}+\frac{y^2}{2}-1=0$, we have:

$$ d\left(\frac{x^2}{8}+\frac{y^2}{2}-1\right)=0; \; d\left(\frac{x^2}{8}\right)+d\left(\frac{y^2}{2}\right)=0; \; \frac{x}{4}dx+ydy=0; \; dy=-\frac{xdx}{4y}. $$

In principle, here one can immediately substitute the coordinates of the stationary point $x=2$, $y=1$ and the parameter $\lambda=-10$, obtaining:

$$ F_{xx}^{''}=-\frac{5}{2}; \; F_{yy}^{''}=-10; \; dy=-\frac{dx}{2}.\\ d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=-\frac{5}{2}dx^2+10dx\cdot\left(-\frac{dx}{2}\right)-10\cdot\left(-\frac{dx}{2}\right)^2=\\ =-\frac{5}{2}dx^2-5dx^2-\frac{5}{2}dx^2=-10dx^2. $$

However, in other conditional-extremum problems there may be several stationary points. In such cases it is better to represent $d^2F$ in general form and then substitute into the resulting expression the coordinates of each of the stationary points found:

$$ d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=\frac{\lambda}{4}dx^2+10\cdot dx\cdot \frac{-xdx}{4y}+\lambda\cdot \left(-\frac{xdx}{4y}\right)^2=\\ =\frac{\lambda}{4}dx^2-\frac{5x}{2y}dx^2+\lambda\cdot\frac{x^2dx^2}{16y^2}=\left(\frac{\lambda}{4}-\frac{5x}{2y}+\frac{\lambda x^2}{16y^2}\right)\cdot dx^2 $$

Substituting $x=2$, $y=1$, $\lambda=-10$, we get:

$$ d^2 F=\left(\frac{-10}{4}-\frac{10}{2}-\frac{10\cdot 4}{16}\right)\cdot dx^2=-10dx^2. $$

Since $d^2F=-10\cdot dx^2 < 0$, the point $(2;1)$ is a conditional maximum point of the function $z=5xy-4$, with $z_{\max}=10-4=6$.

Answer: at the point $(2;1)$ the function has a conditional maximum, $z_{\max}=6$.
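The answer can be cross-checked without calculus: parametrize the ellipse as $x=2\sqrt{2}\cos t$, $y=\sqrt{2}\sin t$ and scan the positive quadrant. A brute-force sketch (the grid size is an arbitrary choice) confirms $z_{\max}=6$ near the point $(2;1)$:

```python
import math

best_z, best_pt = float("-inf"), None
n = 200_000
for k in range(1, n):
    t = (math.pi / 2) * k / n           # positive quadrant: 0 < t < pi/2
    x = 2 * math.sqrt(2) * math.cos(t)  # parametrization of x^2/8 + y^2/2 = 1
    y = math.sqrt(2) * math.sin(t)
    z = 5 * x * y - 4
    if z > best_z:
        best_z, best_pt = z, (x, y)

print(best_z, best_pt)  # close to 6 at approximately (2, 1)
```

On this parametrization $z=10\sin 2t-4$, so the maximum $6$ at $t=\pi/4$ is also visible analytically.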

In the next part we will consider the application of the Lagrange method to functions of more variables.


Let the objective function and the constraint functions be twice continuously differentiable scalar functions of a vector argument. It is required to find an extremum of the objective function provided that the argument satisfies the system of constraints:

(the last condition is also called the coupling condition).

The simplest method of finding a conditional extremum is to reduce the problem to finding an unconditional extremum: solve the coupling equations for m variables and then substitute the result into the objective function.

Example 3. Find the extremum of the function under the condition .

Solution. From the coupling equation we express x2 through x1 and substitute the resulting expression into the function:

This function has a unique extremum (a minimum) at x1 = 2; correspondingly, x2 = 1. Thus, the conditional extremum (minimum) point of the given function is the point .

In the example considered, the coupling equation is easy to solve for one of the variables. In more complicated cases, however, it is not always possible to express the variables, so the approach described above is not applicable to all problems.

A more universal method for solving conditional-extremum problems is the Lagrange multiplier method. It is based on the following theorem. If a point is an extremum point of a function in the region defined by the equations, then (under certain additional conditions) there exists an m-dimensional vector such that the point is a stationary point of the function

Algorithm for the Lagrange multiplier method

Step 1. Compose the Lagrange function:

where λi is the Lagrange multiplier corresponding to the i-th constraint.

Step 2. Find the partial derivatives of the Lagrange function and equate them to zero.

Step 3. Solving the resulting system of n + m equations, find the stationary points.
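Steps 1-3 can be sketched numerically for a small assumed problem (min x² + y² subject to x + y = 2, with the hand-derived answer x = y = 1 and λ = −2 for L = f + λφ): Newton's iteration applied to the gradient of the Lagrange function over all n + m variables, with a finite-difference Jacobian and Gaussian elimination so that no external libraries are needed.

```python
def grad_L(v):
    # L(x, y, lam) = x^2 + y^2 + lam * (x + y - 2)
    x, y, lam = v
    return [2 * x + lam,   # dL/dx
            2 * y + lam,   # dL/dy
            x + y - 2]     # dL/dlam (the constraint itself)

def newton_system(g, v, h=1e-6, tol=1e-10, max_iter=50):
    """Newton's method for g(v) = 0 with a finite-difference Jacobian."""
    n = len(v)
    for _ in range(max_iter):
        G = g(v)
        # Jacobian columns by forward differences
        cols = []
        for j in range(n):
            vp = v[:]; vp[j] += h
            Gp = g(vp)
            cols.append([(Gp[i] - G[i]) / h for i in range(n)])
        J = [[cols[j][i] for j in range(n)] for i in range(n)]  # J[i][j] = dG_i/dv_j
        # solve J * d = -G by Gaussian elimination with partial pivoting
        A = [row[:] + [-G[i]] for i, row in enumerate(J)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[p] = A[p], A[c]
            for r in range(c + 1, n):
                f = A[r][c] / A[c][c]
                for k in range(c, n + 1):
                    A[r][k] -= f * A[c][k]
        d = [0.0] * n
        for i in range(n - 1, -1, -1):
            d[i] = (A[i][n] - sum(A[i][k] * d[k] for k in range(i + 1, n))) / A[i][i]
        v = [v[i] + d[i] for i in range(n)]
        if sum(abs(s) for s in d) < tol:
            break
    return v

x, y, lam = newton_system(grad_L, [0.0, 0.0, 0.0])
print(x, y, lam)  # stationary point: x = y = 1, lam = -2
```

For this quadratic objective with a linear constraint the system is linear, so the iteration converges in essentially one step; for genuinely nonlinear problems the same loop performs the usual Newton refinement.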

Note that at the stationary points only the necessary, but not the sufficient, condition for an extremum of the function is satisfied. Analyzing a stationary point for the presence of an extremum is rather complicated in this case. Therefore, the Lagrange multiplier method is mainly used when the existence of a minimum or maximum of the function under study is known in advance from geometric or substantive considerations.

When solving some economic problems, the Lagrange multipliers have a specific meaning. Thus, if f is the profit of an enterprise under a production plan for n goods and φi is the cost of the i-th resource, then λi is the valuation of this resource, characterizing the rate of change of the optimum of the objective function with respect to changes in the i-th resource.

Example 4. Find the extrema of the function under the condition .

Solution. The functions are continuous and have continuous partial derivatives. Let us form the Lagrange function:

Let's find the partial derivatives and equate them to zero.

We get two stationary points:

Taking into account the nature of the objective function, whose level lines are planes, and of the constraint function (an ellipse), we conclude that at the one point the function takes its minimum value and at the other its maximum.

Example 5. In the region of feasible solutions of the system

find the maximum and minimum values of the function subject to the given condition.

Solution. The intersection of the region of feasible solutions with the straight line is the segment MN: M(0; 6), N(6; 0). Therefore the function can take its extreme values either at stationary points or at the points M and N. To find a stationary point, we apply the Lagrange method. Let us form the Lagrange function

Let us find the partial derivatives of the Lagrange function and equate them to zero.

Solving the system, we obtain the stationary point K(2.2; 3.8). Let us compare the values of the objective function at the points K, M, N:

Hence,

Example 6. The market demand for a certain product is 180 units. The product can be manufactured by two enterprises of the same concern using different technologies. If the first enterprise produces x1 units, its costs amount to rub., and if the second enterprise produces x2 units, its costs amount to rub.

Determine how many units manufactured with each technology the concern should offer so that the total production costs are minimal.

Solution. Mathematical model of the problem:

To find the minimum value of the objective function subject to x1 + x2 = 180, i.e. without taking into account the requirement that the variables be non-negative, we form the Lagrange function:

Let us find the first partial derivatives of the Lagrange function with respect to x1, x2, λ and equate them to 0. We obtain the system of equations:

Solving this system, we find the following roots: , i.e. we obtain the coordinates of a point suspected of being an extremum.

To determine whether the point ( ) is a local minimum, we examine the Hessian, for which we calculate the second partial derivatives of the objective function:

Since

the Hessian matrix is positive definite, the objective function is convex, and at the point ( ) we have a local minimum:

Consider a constrained optimization problem containing only equality constraints:

min

subject to restrictions

,
.

In principle, this problem can be solved as an unconstrained optimization problem obtained by eliminating m independent variables from the objective function using the given equalities. The presence of equality constraints thus effectively allows the dimension of the original problem to be reduced.

The new problem can be solved using a suitable unconstrained optimization method. Example

.

It is required to minimize the function subject to the constraint

By eliminating the variable

using the equation, we obtain an unconstrained optimization problem in two variables:

However, the variable-elimination method is applicable only when the equations representing the constraints can be solved for a certain set of variables. With a large number of equality constraints, the elimination process becomes very laborious. Moreover, situations are possible in which an equation cannot be solved for a variable. In such cases it is advisable to use the Lagrange multiplier method.

The Lagrange multiplier method essentially establishes necessary conditions that allow optimum points to be identified in optimization problems with equality constraints.

Let's consider the problem

min

subject to restrictions

,
.

It is well known from mathematical analysis that a conditional minimum point of the function coincides with a saddle point of the Lagrange function:

,

in this case, the saddle point must provide a minimum with respect to the variables and a maximum with respect to the parameters; these parameters are called Lagrange multipliers. Equating to zero the partial derivatives of the Lagrange function with respect to the variables and with respect to the multipliers,

,
,

,
.

we obtain the necessary conditions for a stationary point. The solution of this system of equations determines a stationary point of the Lagrange function. The sufficient conditions for a minimum of the original problem include, in addition to those mentioned above, the positive definiteness of the Hessian matrix of the objective function.

4.2. Kuhn-Tucker conditions

Let us consider a nonlinear programming problem with constraints in the form of inequalities:

min

subject to the constraints

,
.

Let us reduce the inequality constraints to equality constraints by adding a slack variable to each of them.

Let's form the Lagrange function:

,
;

,
;

,
.

Then the necessary conditions for a minimum take the form given below. One can multiply the last equation by the corresponding multiplier and eliminate the slack variables by expressing them from the second equation; the second equation can then be transformed by discarding the slack variables and passing to inequality constraints. One more condition should be added,

, which must be satisfied at the conditional minimum point.

,
; (1)

,
; (2)

,
; (3)

,
. (4)

Finally, we obtain the necessary conditions for a minimum of a nonlinear programming problem with inequality constraints; conditions (1)-(4) above are called the Kuhn-Tucker conditions.

An inequality constraint is called active at a point if it holds there with equality, and inactive if it holds as a strict inequality. If it is possible to identify, before solving the problem, the constraints that are inactive at the optimum point, then these constraints can be excluded from the model, thereby reducing its size. If the Lagrange multiplier of a constraint is positive, the constraint is active and acts as an equality constraint. On the other hand, if a constraint is a strict inequality at the optimum, the corresponding Lagrange multiplier equals zero, i.e. the constraint does not affect the solution.
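A minimal sketch of checking the Kuhn-Tucker conditions at a candidate point, for an assumed one-dimensional example: min (x − 2)² subject to g(x) = x − 1 ≤ 0. At the candidate x* = 1 the constraint is active and the multiplier λ = 2 ≥ 0, so all four conditions hold:

```python
def f_prime(x):
    return 2 * (x - 2)   # derivative of the objective (x - 2)^2

def g(x):
    return x - 1         # inequality constraint g(x) <= 0

def g_prime(x):
    return 1.0

def kuhn_tucker_holds(x, lam, tol=1e-9):
    stationarity = abs(f_prime(x) + lam * g_prime(x)) < tol   # (1) grad of Lagrangian
    feasibility = g(x) <= tol                                 # (2) constraint holds
    complementarity = abs(lam * g(x)) < tol                   # (3) lam * g = 0
    nonnegativity = lam >= -tol                               # (4) lam >= 0
    return stationarity and feasibility and complementarity and nonnegativity

print(kuhn_tucker_holds(1.0, 2.0))   # x* = 1, lam = 2 satisfy all four conditions
print(kuhn_tucker_holds(1.5, 0.0))   # an infeasible point fails the check
```

Note the sign convention assumed here (min f, g ≤ 0, L = f + λg with λ ≥ 0); with other conventions the signs in conditions (1) and (4) change accordingly.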