Recursion in logic. Recursion in programming

Functions: a function is defined recursively if its definition refers to the function itself; in particular, any function given by a recurrence formula is recursive. A single finite expression can thus describe an infinite set of ways to compute a function: the object is defined through itself, relying on previously given particular (base) definitions.

Data

struct element_of_list
{
    struct element_of_list *next; /* pointer to the next element of the same type */
    int data;                     /* some data */
};

The recursive structure of data often dictates the use of recursion to process the data.
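The recursive shape of such a structure can be mirrored directly by the code that processes it. A minimal Python sketch (the `Node` class and `sum_list` function are illustrative names, not from the original):

```python
class Node:
    """One element of a singly linked list: some data plus a reference
    to the next element of the same type (None marks the end)."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def sum_list(node):
    # The structure of the function repeats the structure of the data:
    # a list is either empty, or a head element followed by a smaller list.
    if node is None:
        return 0
    return node.data + sum_list(node.next)

# 3 -> 1 -> 4
lst = Node(3, Node(1, Node(4)))
```

The empty case (`None`) plays the role of the null pointer in the terminal cell of the C structure above.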

In physics

A classic example of infinite recursion is two mirrors placed opposite each other: they form two corridors of ever-diminishing reflections.

Another example of infinite recursion is the self-excitation effect (positive feedback) in electronic amplifier circuits: the signal from the output reaches the input, is amplified, reaches the input again, and is amplified again. Amplifiers for which this operating mode is standard are called self-oscillators.

In linguistics

The ability of a language to generate nested sentences and constructions. The basic sentence "the cat ate the mouse" can be expanded by recursion: "Vanya guessed that the cat ate the mouse", then "Katya knows that Vanya guessed that the cat ate the mouse", and so on. Recursion is considered one of the linguistic universals, that is, characteristic of every natural language. Recently, however, the possible absence of recursion in one of the languages of the Amazon, Pirahã, noted by the linguist Daniel Everett, has been actively discussed.

In culture

Most jokes about recursion concern infinite recursion, in which there is no exit condition, for example, the famous saying: “to understand recursion, you must first understand recursion.”

A very popular joke about recursion is reminiscent of a dictionary entry: "Recursion: see recursion."

Several stories by Stanislaw Lem are devoted to (possible) incidents with infinite recursion:

  • the story "The Fourteenth Voyage" from "The Star Diaries of Ijon Tichy", in which the hero successively moves from an article about sepulki to an article about sepulcaria, from there to an article about sepulation, which again contains a reference to the article "sepulki":

I found the following brief information:
"SEPULKI: an important element of the civilization of the Ardrites (q.v.) from the planet Enteropia (q.v.). See SEPULCARIA."
I followed this advice and read:
"SEPULCARIA: devices for sepulation (q.v.)."
I looked up "sepulation"; it read:
"SEPULATION: an occupation of the Ardrites (q.v.) from the planet Enteropia (q.v.). See SEPULKI."

Lem S., "The Star Diaries of Ijon Tichy. The Fourteenth Voyage."

  • a story from "The Cyberiad" about an intelligent machine that was clever and lazy enough to build a similar machine and entrust the solution of the given problem to it (the result was infinite recursion: each new machine built a machine like itself and delegated the task to it).
  • Recursive acronyms: GNU ("GNU's Not Unix"), PHP ("PHP: Hypertext Preprocessor"), etc.

See also

  • Recurrent sequence

Notes


Wikimedia Foundation. 2010.


See what “Recursion” is in other dictionaries:

    recursion: return, repetition. Noun, number of synonyms: 1 ... (Dictionary of Russian Synonyms)

    recursion: in a general sense, the calculation of a function by a specific algorithm. Examples of such algorithms are recurrence formulas that derive the calculation of a given term ... (Technical Translator's Guide)

    Recursion: in a general sense, the calculation of a function by a specific algorithm. Examples of such algorithms are recurrence formulas, which derive the calculation of a given term of a sequence (most often numerical) from the calculation of several previous ... (Dictionary of Economics and Mathematics)

    Recursion: a therapeutic pattern in which a condition or criterion formulated in the original statement is taken and applied to the statement itself. For example: "I don't have time." How much time did you have to spend to make sure that you ... (Great Psychological Encyclopedia)

    RECURSION: a method of defining functions that is an object of study in the theory of algorithms and other branches of mathematical logic. This method has long been used in arithmetic to define numerical sequences (progressions, Fibonacci numbers, etc.) ... (Mathematical Encyclopedia)

    recursion (phonetics) (Latin recursio, return): one of the three phases of sound articulation, the release. The transition of the speech organs to a neutral state or to the articulation of the next sound. In the word "rest", the recursion (release) when articulating [t] can overlap ... (T. V. Zherebilo, Dictionary of Linguistic Terms)


    In mathematics, recursion relates to a method of defining functions and number series: a recursively defined function determines its value by calling itself with other arguments. In this case, two options are possible:

    \(e = 2+\cfrac{2}{2+\cfrac{3}{3+\cfrac{4}{4+\ldots}}} = 2 + f(2)\), where \(f(n)=\cfrac{n}{n+f(n+1)}\). A direct calculation by this formula would cause infinite recursion, but it can be proven that the value of \(f(n)\) tends to 1 as n grows (so, despite the infinity of the continued fraction, the value of Euler's number is finite). To calculate e approximately, it is enough to artificially limit the recursion depth to some predetermined number and, on reaching it, use 1 instead of \(f(n)\).
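The depth-limited computation can be sketched in Python (the cutoff value 15 is an arbitrary choice for this sketch, not from the original):

```python
def f(n, depth=15):
    # f(n) = n / (n + f(n+1)); the recursion is cut off at a fixed depth,
    # substituting the limit value 1 for the deepest term.
    if depth == 0:
        return 1.0
    return n / (n + f(n + 1, depth - 1))

e_approx = 2 + f(2)   # approximates Euler's number e = 2.71828...
```

Increasing the depth improves the approximation; even a modest cutoff already agrees with e to many decimal places.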

    Another example of recursion in mathematics is a number series given by a recurrence formula, where each next term is computed as a function of the n previous terms. Thus a finite expression (the combination of a recurrence formula and the values of the first n terms of the series) defines an infinite series.

    struct element_of_list
    {
        struct element_of_list *next; /* pointer to the next element of the same type */
        int data;                     /* some data */
    };

    Since an infinite number of nested objects cannot fit into finite memory, such recursively defined structures are in practice always finite. The terminal cells usually hold null pointers, which also serve as flags telling the program processing the structure that its end has been reached.

    Recursion is the property of an object to be similar to itself: an object is recursive if its parts look the same as the whole. Recursion is used very widely in mathematics and programming:

    • data structures:
      • a graph (in particular trees and lists) can be viewed as a collection of a single node and a subgraph (smaller graph);
      • the string consists of the first character and a substring (smaller string);
    • design patterns, e.g. Decorator: a decorator object can include other objects that are themselves decorators. Jason McColm Smith studied recursive patterns in detail, identifying in his book a general design pattern, Recursion;
    • recursive functions (algorithms) call themselves.

    This article analyzes the complexity of recursive algorithms; the necessary mathematical background is given and examples are considered. It also describes the possibility of replacing recursion with a loop, and tail recursion.

    Examples of recursive algorithms

    A recursive algorithm always breaks a problem into parts that are the same in structure as the original problem, but simpler. To solve subproblems, the function is called recursively, and their results are somehow combined. Division of a task occurs only when it cannot be solved immediately (it is too complex).

    For example, the task of processing an array can often be reduced to processing its parts. Division into parts is carried out until they become elementary, i.e. simple enough to get the result without further simplification.

    Finding an Array Element

    start; search(array, begin, end, element)
    ; searches for an element with value element in array between the indices begin and end
      if begin > end
        result := false  ; element not found
      else if array[begin] = element
        result := true   ; element found
      else
        result := search(array, begin+1, end, element)
    end; return result

    The algorithm divides the original array into two parts - the first element and an array of remaining elements. There are two simple cases when separation is not required - all elements have been processed or the first element is the one being sought.
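The pseudocode above translates almost line for line into Python; a minimal sketch (the function name is illustrative):

```python
def search(array, begin, end, element):
    """Recursively look for element in array[begin..end] (inclusive)."""
    if begin > end:
        return False                 # the whole range was processed, not found
    if array[begin] == element:
        return True                  # the first element is the one sought
    # otherwise search the "smaller" array of remaining elements
    return search(array, begin + 1, end, element)
```

Each call strips off one element, so the recursion depth equals the length of the searched range.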

    In the search algorithm the array could be divided differently (for example, in half), but this would not affect the efficiency. If the array is sorted, however, dividing it in half is worthwhile, because at each step the amount of data to be processed can be halved.

    Binary search in an array

    Binary search is performed on a sorted array. At each step, the searched element is compared with the value located in the middle of the array. Depending on the results of the comparison, either the left or right sides can be "discarded".

    start; binary_search(array, begin, end, element)
    ; searches for an element with value element in the ascending-ordered array
    ; between the indices begin and end
      if begin > end
        end; return false  ; element not found
      mid := (end + begin) div 2  ; index of the element in the middle of the considered part of the array
      if array[mid] = element
        end; return true  ; element found
      if array[mid] < element
        result := binary_search(array, mid+1, end, element)
      else
        result := binary_search(array, begin, mid-1, element)
    end; return result
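A Python sketch of the same algorithm (note the `mid - 1` upper bound: recursing on the unchanged range `(begin, mid)` could loop forever once the range shrinks to a single element):

```python
def binary_search(array, begin, end, element):
    """Recursive binary search in array[begin..end], array sorted ascending."""
    if begin > end:
        return False                       # element not found
    mid = (begin + end) // 2               # middle of the considered part
    if array[mid] == element:
        return True                        # element found
    if array[mid] < element:
        return binary_search(array, mid + 1, end, element)
    return binary_search(array, begin, mid - 1, element)
```

Every call discards half of the remaining range, which is where the logarithmic complexity analyzed later comes from.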

    Calculating Fibonacci numbers

    Fibonacci numbers are defined by a recurrence expression, i.e. one in which the calculation of an element is expressed through previous elements: \(F_0 = 0,\ F_1 = 1,\ F_n = F_{n-1} + F_{n-2},\ n > 1\).

    start; fibonacci(number)
      if number = 0
        end; return 0
      if number = 1
        end; return 1
      fib_1 := fibonacci(number-1)
      fib_2 := fibonacci(number-2)
      result := fib_1 + fib_2
    end; return result
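The same direct transcription of the recurrence in Python:

```python
def fibonacci(number):
    # F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2).
    # Simple, but each call spawns two more: exponentially slow.
    if number == 0:
        return 0
    if number == 1:
        return 1
    return fibonacci(number - 1) + fibonacci(number - 2)
```

The cost of this double recursion is analyzed at the end of the article.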

    Quick sort

    The quicksort algorithm at each step selects one of the elements (the pivot) and partitions the array around it into two parts, which are processed recursively. Elements smaller than the pivot are placed in one part, the rest in the other.

    Flowchart of the quicksort algorithm
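In place of the flowchart, a compact Python sketch of the idea (taking the first element as the pivot is an arbitrary choice for this sketch; this copying version trades the usual in-place partition for brevity):

```python
def quick_sort(items):
    # The pivot is the first element; the rest of the array is
    # partitioned around it and both parts are sorted recursively.
    if len(items) <= 1:
        return items                 # an empty or one-element array is sorted
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```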

    Merge sort

    The basis of the merge sort algorithm is the ability to quickly combine ordered arrays (or lists) so that the result is ordered. The algorithm splits the original array into two parts arbitrarily (usually in half), sorts them recursively and combines the results. Division continues as long as the array size is greater than one, because an empty array and a one-element array are always sorted.

    Block diagram of merge sort

    At each merge step the first unprocessed element of each list is examined; the smaller of the two is added to the result and marked as processed. Merging continues until one of the lists is empty.

    start; merge(Array1, Size1, Array2, Size2)
    ; the source arrays are ordered
    ; the result is an ordered array of length Size1+Size2
      i := 0, j := 0
      endless_loop
        if i >= Size1
          append elements j to Size2 of Array2 to the end of the result
          exit the loop
        if j >= Size2
          append elements i to Size1 of Array1 to the end of the result
          exit the loop
        if Array1[i] < Array2[j]
          result := result + Array1[i]
          i := i + 1
        else (if Array1[i] >= Array2[j])
          result := result + Array2[j]
          j := j + 1
    end; return result
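The merge step and the recursive sort around it can be sketched in Python (function names are illustrative):

```python
def merge(a, b):
    """Combine two already ordered lists into one ordered list."""
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        # Take the smaller of the two front elements, as in the pseudocode.
        if a[i] < b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    result.extend(a[i:])   # one of these tails is already empty
    result.extend(b[j:])
    return result

def merge_sort(items):
    # Split in half, sort both parts recursively, then merge.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))
```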

    Analysis of recursive algorithms

    When analyzing an iterative algorithm, one calculates the complexity of an iteration and the number of iterations in the worst, best and average cases. This approach cannot be applied directly to a recursive function, because the result will be a recurrence relation. For example, for the element search function:

    \(
    \begin{equation*}
    T^{search}_n = \begin{cases}
    \mathcal{O}(1) \quad &\text{$n = 0$} \\
    \mathcal{O}(1) + \mathcal{O}(T^{search}_{n-1}) \quad &\text{$n > 0$}
    \end{cases}
    \end{equation*}
    \)

    Recurrence relations do not let us evaluate complexity directly: we cannot simply compare them, and therefore cannot compare the efficiency of the corresponding algorithms. We need a closed formula that describes the recurrence relation; a universal way to obtain one is to guess the formula by the substitution method and then prove that it matches the relation by mathematical induction.

    Substitution (iteration) method

    It consists of sequentially substituting the recurrent part of the expression to obtain new expressions. Substitution is carried out until the general pattern can be spotted and expressed as a non-recurrent formula. For example, for searching an element in an array:

    \(
    T^{search}_n = \mathcal{O}(1) + \mathcal{O}(T^{search}_{n-1}) =
    2\times\mathcal{O}(1) + \mathcal{O}(T^{search}_{n-2}) =
    3\times\mathcal{O}(1) + \mathcal{O}(T^{search}_{n-3})
    \)

    We can assume that \(T^{search}_n = T^{search}_{n-k} + k\times\mathcal{O}(1)\), but then \(T^{search}_n = T^{search}_{0} + n\times\mathcal{O}(1) = \mathcal{O}(n)\).

    We have derived the formula, but the first step contains an assumption, i.e. the formula has not been proven to match the recurrence expression; the method of mathematical induction allows us to obtain the proof.

    Method of mathematical induction

    It allows one to prove the truth of a statement \(P_n\) and consists of two steps:

    1. proving the statement for one or more base cases \(P_0, P_1, \ldots\);
    2. deriving the truth of \(P_{n+1}\) from the truth of \(P_n\) (the induction hypothesis) and the base cases.

    Let us prove the correctness of the assumption made when estimating the complexity of the search function (\(T^{search}_n = (n+1)\times\mathcal{O}(1)\)):

    1. \(T^{search}_{1} = 2\times\mathcal{O}(1)\) is true by the condition (it can be substituted into the original recurrence formula);
    2. assume the truth of \(T^{search}_n = (n+1)\times\mathcal{O}(1)\);
    3. we need to prove that \(T^{search}_{n+1} = ((n+1)+1)\times\mathcal{O}(1) = (n+2)\times\mathcal{O}(1)\);
      1. substitute \(n+1\) into the recurrence relation: \(T^{search}_{n+1} = \mathcal{O}(1) + T^{search}_n\);
      2. on the right-hand side we can make a substitution based on the induction hypothesis: \(T^{search}_{n+1} = \mathcal{O}(1) + (n+1)\times\mathcal{O}(1) = (n+2)\times\mathcal{O}(1)\);
      3. the statement is proven.

    Such a proof is often a rather labor-intensive process, and spotting the pattern by the substitution method is harder still. For this reason the so-called general (master) method was developed.

    General (basic) method for solving recurrence relations

    The general method is not universal: for example, it cannot be used to estimate the complexity of the above algorithm for calculating Fibonacci numbers. However, it applies to all cases of the divide-and-conquer approach:

    \(T_n = a\cdot T_{\frac{n}{b}}+f_n;\quad a, b = const,\ a \geq 1,\ b > 1,\ f_n > 0\ \forall n\).

    Relations of this type arise when the original problem is divided into \(a\) subtasks, each of which processes \(\frac{n}{b}\) elements. \(f_n\) is the complexity of the operations of dividing the problem into parts and combining the solutions. Besides the form of the relation, the general method imposes restrictions on the function \(f_n\), distinguishing three cases:

    1. \(\exists \varepsilon > 0: f_n = \mathcal{O}(n^{\log_b a - \varepsilon}) \Rightarrow T_n = \Theta(n^{\log_b a})\);
    2. \(f_n = \Theta(n^{\log_b a}) \Rightarrow T_n = \Theta(n^{\log_b a} \cdot \log n)\);
    3. \(\exists \varepsilon > 0,\ c < 1: f_n = \Omega(n^{\log_b a + \varepsilon}),\ f_{\frac{n}{b}} \leq c \cdot f_n \Rightarrow T_n = \Theta(f_n)\).

    The correctness of the statements for each case is proven formally. The task of analyzing a recursive algorithm thus reduces to determining which case of the master theorem the recurrence relation falls under.

    Analysis of the binary search algorithm

    The algorithm splits the source data into 2 parts (b = 2) but processes only one of them (a = 1), with \(f_n = 1\). \(n^{\log_b a} = n^{\log_2 1} = n^0 = 1\). The function for dividing the task and combining the result grows at the same rate as \(n^{\log_b a}\), so the second case of the theorem applies:

    \(T^{binarySearch}_n = \Theta(n^{\log_b a} \cdot \log n) = \Theta(1 \cdot \log n) = \Theta(\log n)\).

    Search algorithm analysis

    The recursive function splits the original problem into a single subtask (a = 1), and the data is "divided" into one part (b = 1). We cannot use the master theorem to analyze this algorithm, because the condition \(b > 1\) is not satisfied.

    The analysis can be carried out by the substitution method, or by the following reasoning: each recursive call reduces the size of the input data by one, so there will be n calls in total, each of complexity \(\mathcal{O}(1)\). Then \(T^{search}_n = n \cdot \mathcal{O}(1) = \mathcal{O}(n)\).

    Analysis of Merge Sort Algorithm

    The source data is divided into two parts, both of which are processed: \(a = 2,\ b = 2,\ n^{\log_b a} = n\).

    When processing a list, division may require \(\Theta(n)\) operations, while for an array it takes constant time (\(\Theta(1)\)). In either case, however, \(\Theta(n)\) is spent on combining the results, so \(f_n = n\).

    The second case of the theorem applies: \(T^{mergeSort}_n = \Theta(n^{\log_b a} \cdot \log n) = \Theta(n \cdot \log n)\).

    Analysis of the complexity of quicksort

    In the best case, the original array is split into two parts, each containing half of the original data. The division requires n operations. The complexity of combining the result depends on the data structures used: \(\mathcal{O}(n)\) for an array, \(\mathcal{O}(1)\) for a linked list. \(a = 2,\ b = 2,\ f_n = n\), so the complexity of the algorithm is the same as that of merge sort: \(T^{quickSort}_n = \mathcal{O}(n \cdot \log n)\).

    In the worst case, however, the minimum or maximum element of the array is always selected as the pivot. Then \(b = 1\), so again we cannot use the master theorem. We do know, however, that in this case n recursive calls will be made, each of which partitions the array (\(\mathcal{O}(n)\)); hence the complexity of the algorithm is \(T^{quickSort}_n = \mathcal{O}(n^2)\).

    When analyzing quick sort by substitution, one would also have to consider the best and worst cases separately.

    Tail recursion and loop

    Analyzing the complexity of recursive functions is much harder than evaluating loops, but the main reason loops are preferable is the high cost of a function call.

    When a function is called, control is transferred to it. Transferring control only requires changing the program counter register, in which the processor stores the address of the currently executed instruction; control is transferred to branches of an algorithm in the same way, for example when using a conditional statement. A call, however, is not just a transfer of control: after the called function completes its calculations, it must return control to the point where the call was made and restore the values of the local variables that existed there before the call.

    To implement this behavior a call stack is used: the return address and information about local variables are pushed onto it. The stack is not infinite, so recursive algorithms can overflow it, and in any case working with it can take a significant share of the running time.

    In some cases a recursive function is quite easy to replace with a loop, for example those discussed above. In other cases a more creative approach is required, but most often such a replacement is possible. There is also a special kind of recursion in which the recursive call is the last operation performed by the function. Obviously the calling function will not change the result in any way afterwards, so there is no point in returning control to it. Such recursion is called tail recursion; compilers automatically replace it with a loop.

    A function can often be brought to tail-recursive form by the accumulating-parameter method: an extra accumulator argument is added to the function, and the result is accumulated in it. The function performs its calculations on the accumulator before the recursive call. A good example of this technique is the factorial function:
    \(fact_n = n \cdot fact_{n-1} \\
    fact_3 = 3 \cdot fact_2 = 3 \cdot (2 \cdot fact_1) = 3\cdot (2 \cdot (1 \cdot fact_0)) = 6 \\
    fact_n = factTail(n, 1) \\
    \\
    factTail(n, accumulator) = factTail(n-1, accumulator \cdot n)\\
    factTail(3, 1) = factTail(2, 3) = factTail(1, 6) = factTail(0, 6) = 6
    \)

    As a more complex example, consider the function computing Fibonacci numbers. The main function calls an auxiliary one that uses the accumulating-parameter method, passing as arguments the initial value of the iterator and two accumulators (the two previous Fibonacci numbers).

    start; fibonacci(number)
      return fibonacci(number, 1, 1, 0)
    end
    start; fibonacci(number, iterator, fib1, fib2)
      if iterator == number
        return fib1
      return fibonacci(number, iterator + 1, fib1 + fib2, fib1)
    end

    The function with the accumulating parameter returns the accumulated result once the required count of numbers has been computed; otherwise it increments the counter, computes a new Fibonacci number and makes a recursive call. Optimizing compilers can detect that the result of the recursive call is passed unchanged to the function's output and replace it with a loop. This technique is especially relevant in functional and logic programming languages, where the programmer cannot use loop constructs explicitly.
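The pseudocode above maps directly to Python (the helper is split out under a separate name since Python does not overload by arity):

```python
def fib_tail(number, iterator, fib1, fib2):
    # fib1 and fib2 accumulate the two previous Fibonacci numbers;
    # the recursive call is in tail position.
    if iterator == number:
        return fib1
    return fib_tail(number, iterator + 1, fib1 + fib2, fib1)

def fibonacci(number):
    """Nth Fibonacci number (1, 1, 2, 3, 5, ...) for number >= 1."""
    return fib_tail(number, 1, 1, 0)
```

Unlike the naive double recursion, this version makes only one recursive call per step, so the number of calls is linear in `number`.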

    Literature

    1. Multithreaded Qt server. Thread pool. Decorator pattern [Electronic resource]. Access mode: https://site/archives/1390 (accessed 21.02.2015).
    2. Smith, Jason McColm: trans. from English. M.: I.D. Williams LLC, 2013. 304 p.
    3. Skiena, S. The Algorithm Design Manual. 2nd ed.: trans. from English. SPb.: BHV-Petersburg, 2011. 720 p.: ill.
    4. Vasiliev, V. S. Analysis of the complexity of algorithms. Examples [Electronic resource]. Access mode: https://site/archives/1660 (accessed 21.02.2015).
    5. Aho, A., Hopcroft, J., Ullman, J. Data Structures and Algorithms. M.: Williams, 2007.
    6. Miller, R., Boxer, L. Sequential and Parallel Algorithms: A Unified Approach: trans. from English. M.: BINOM. Knowledge Laboratory, 2006. 406 p.
    7. Sergievsky, G. M., Volchenkov, N. G. Functional and Logic Programming: textbook for university students. M.: Academia Publishing Center, 2010. 320 p.

    Recursion is when a subroutine calls itself. When faced with such an algorithmic design for the first time, most people experience certain difficulties, but with a little practice, recursion will become an understandable and very useful tool in your programming arsenal.

    1. The essence of recursion

    A procedure or function may contain calls to other procedures or functions. A procedure can also call itself. There is no paradox here: the computer simply executes sequentially the commands it encounters in the program, and when it encounters a procedure call it just begins executing that procedure. It makes no difference which procedure issued the command to do so.

    Example of a recursive procedure:

    Procedure Rec(a: integer);
    begin
      if a > 0 then Rec(a - 1);
      writeln(a);
    end;

    Let's consider what happens if a call, for example, of the form Rec(3) is made in the main program. Below is a flowchart showing the execution sequence of the statements.

    Fig. 1. Flowchart of the recursive procedure.

    Procedure Rec is called with parameter a = 3. It contains a call to procedure Rec with parameter a = 2. The previous call has not yet completed, so you can think of another instance of the procedure being created; the first one does not finish its work until the new one finishes. The calling process ends when a = 0: at that moment 4 instances of the procedure are being executed simultaneously. The number of simultaneously executing procedure instances is called the recursion depth.

    The fourth procedure called (Rec(0)) will print the number 0 and finish its work. After that, control returns to the procedure that called it (Rec(1)) and the number 1 is printed, and so on until all the procedures complete. The original call will print four numbers: 0, 1, 2, 3.

    Another visual image of what is happening is shown in Fig. 2.

    Fig. 2. Executing the Rec procedure with parameter 3 consists of executing the Rec procedure with parameter 2 and printing the number 3. In turn, executing the Rec procedure with parameter 2 consists of executing the Rec procedure with parameter 1 and printing the number 2. Etc.

    As an independent exercise, think about what happens when you call Rec(4). Also consider what would happen when calling the procedure Rec2(4) below, in which the statements are in the opposite order.

    Procedure Rec2(a: integer);
    begin
      writeln(a);
      if a > 0 then Rec2(a - 1);
    end;

    Note that in these examples the recursive call is placed inside a conditional statement. This is a necessary condition for the recursion to ever terminate. Also note that the procedure calls itself with a parameter different from the one it was called with. If the procedure does not use global variables, this is also necessary so that the recursion does not continue indefinitely.

    A slightly more complex scheme is also possible: function A calls function B, which in turn calls A. This is called complex (mutual) recursion. It turns out that the procedure described first must call a procedure that has not yet been described. For this to be possible, you need to use a forward declaration.

    Procedure A(n: integer); forward; {Forward declaration (header) of the first procedure}
    Procedure B(n: integer); forward; {Forward declaration of the second procedure}

    Procedure A(n: integer); {Full description of procedure A}
    begin
      writeln(n);
      B(n - 1);
    end;

    Procedure B(n: integer); {Full description of procedure B}
    begin
      writeln(n);
      if n > 0 then A(n - 1);
    end;

    The forward declaration of procedure B allows it to be called from procedure A. The forward declaration of procedure A is not required in this example and is added for aesthetic reasons.

    If ordinary recursion can be likened to an ouroboros (Fig. 3), then the image of complex recursion can be drawn from the famous children's poem in which "the wolves got frightened and ate each other." Imagine two wolves eating each other and you will understand complex recursion.

    Fig. 3. Ouroboros, a snake devouring its own tail. Drawing from the alchemical treatise "Synosius" by Theodore Pelecanos (1478).

    Fig. 4. Complex recursion.

    3. Simulating a loop using recursion

    If a procedure calls itself, it essentially causes the instructions it contains to be executed again, similar to a loop. Some programming languages ​​do not contain looping constructs at all, leaving programmers to organize repetitions using recursion (for example, Prolog, where recursion is a basic programming technique).

    For example, let us simulate the operation of a for loop. We need a step-counter variable, which can be implemented, for example, as a procedure parameter.

    Example 1.

    Procedure LoopImitation(i, n: integer);
    {The first parameter is the step counter, the second parameter is the total number of steps}
    begin
      writeln('Hello N ', i); //Here are any instructions that will be repeated
      if i < n then
        LoopImitation(i + 1, n);
    end;

    The result of a call of the form LoopImitation(1, 10) will be the execution of the instructions ten times, with the counter changing from 1 to 10. In this case the following will be printed:

    Hello N 1
    Hello N 2
    ...
    Hello N 10

    In general, it is not difficult to see that the parameters of the procedure are the limits for changing the counter values.

    You can swap the recursive call and the instructions to be repeated, as in the following example.

    Example 2.

    Procedure LoopImitation2(i, n: integer);
    begin
      if i < n then
        LoopImitation2(i + 1, n);
      writeln('Hello N ', i);
    end;

    In this case, the recursive call occurs before the instructions begin to be executed. The new instance of the procedure also first of all calls yet another instance, and so on until the maximum counter value is reached. Only then does the last of the called procedures execute its instructions, then the second-to-last one, etc. The result of calling LoopImitation2(1, 10) will be the greetings printed in reverse order:

    Hello N 10
    ...
    Hello N 1

    If we imagine a chain of recursively called procedures, then in example 1 we go through it from earlier called procedures to later ones. In example 2, on the contrary, from later to earlier.

    Finally, a recursive call can be placed between two blocks of instructions. For example:

    Procedure LoopImitation3(i, n: integer);
    begin
      writeln('Hello N ', i);
      {The first block of instructions may be located here}
      if i < n then
        LoopImitation3(i + 1, n);
      writeln('Hello N ', i);
      {The second block of instructions may be located here}
    end;

    Here, the instructions from the first block are first executed sequentially, then the instructions from the second block are executed in reverse order. When calling LoopImitation3(1, 10) we get:

    Hello N 1
    ...
    Hello N 10
    Hello N 10
    ...
    Hello N 1

    It would take two loops to do the same thing without recursion.

    You can take advantage of the fact that the execution of parts of the same procedure is spaced out over time. For example:

    Example 3: Converting a number to binary.

    Obtaining the digits of a binary number, as is known, is done by division with remainder by the base of the number system, 2. If we have a number x, then its last digit in binary representation is equal to

    x mod 2.

    Taking the integer part of the division by 2,

    x div 2,

    we get a number that has the same binary representation, but without the last digit. It is enough to repeat these two operations until the next division yields an integer part equal to 0. Without recursion it looks like this:

    while x > 0 do
    begin
      c := x mod 2;
      x := x div 2;
      write(c);
    end;

    The problem here is that the digits of the binary representation are computed in reverse order (the last one first). To print the number in normal form, one would have to remember all the digits in an array and print them in a separate loop.

    Using recursion, it is not difficult to achieve output in the correct order without an array and a second loop. Namely:

    Procedure BinaryRepresentation(x: integer);
    var
      c: integer;
    begin
      {First block. Executed in the order of the procedure calls}
      c := x mod 2;
      x := x div 2;
      {Recursive call}
      if x > 0 then
        BinaryRepresentation(x);
      {Second block. Executed in reverse order}
      write(c);
    end;

    Generally speaking, we did not gain anything. The digits of the binary representation are stored in local variables, which are different for each running instance of the recursive procedure, so we did not save memory. On the contrary, we spend extra memory storing many local copies of x. Nevertheless, this solution seems beautiful to me.

    4. Recurrence relations. Recursion and Iteration

    A sequence of vectors is said to be given by a recurrence relation if the initial vector and the functional dependence of the subsequent vector on the previous one are given

    A simple example of a quantity computed using a recurrence relation is the factorial: n! = 1 · 2 · … · n.

    The next factorial can be computed from the previous one as: n! = n · (n−1)!.

    Introducing the notation x_n = n!, we obtain the relation: x_n = n · x_(n−1), with x_1 = 1.

    The vectors from formula (1) can be interpreted as sets of variable values. Computing the required element of the sequence then consists of repeatedly updating those values. In particular, for the factorial:

    x := 1;
    for i := 2 to n do
      x := x * i;
    writeln(x);

    Each such update (x := x * i) is called an iteration, and the repetition of iterations is called iterating.

    Note, however, that relation (1) is a purely recursive definition of the sequence, and computing the n-th element is in fact the repeated application of the function f to itself: x(n) = f(f(…f(x(1))…)).

    In particular, for the factorial one can write:

    function Factorial(n: integer): integer;
    begin
      if n > 1 then
        Factorial := n * Factorial(n - 1)
      else
        Factorial := 1;
    end;

    It should be understood that function calls entail additional overhead, so the first (iterative) way of computing the factorial will be somewhat faster. In general, iterative solutions run faster than recursive ones.

    Before moving on to situations where recursion is useful, let's look at one more example where it should not be used.

    Let's consider a special case of recurrence relations, where the next value in the sequence depends not on one but on several previous values at once. An example is the famous Fibonacci sequence, in which each next element is the sum of the previous two: F(n) = F(n−1) + F(n−2), with F(0) = F(1) = 1.

    With a head-on approach, you can write:

    function Fib(n: integer): integer;
    begin
      if n > 1 then
        Fib := Fib(n - 1) + Fib(n - 2)
      else
        Fib := 1;
    end;

    Each Fib call spawns two copies of itself, each copy spawns two more, and so on. The number of operations grows exponentially with n, whereas an iterative solution needs a number of operations that is only linear in n.
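    The blowup is easy to observe by counting how many times the naive recursive function is entered, compared with a linear iterative version. A sketch in Python (the article's code is Pascal; function names are ours):

    ```python
    calls = 0

    def fib_naive(n):
        """Direct translation of the head-on recursive definition."""
        global calls
        calls += 1
        if n > 1:
            return fib_naive(n - 1) + fib_naive(n - 2)
        return 1

    def fib_iter(n):
        """Linear-time iterative version: one pass, two variables."""
        x1, x2 = 1, 1
        for _ in range(n - 1):
            x1, x2 = x2, x1 + x2
        return x2

    assert fib_naive(20) == fib_iter(20) == 10946
    # fib_naive(20) entered the function 21891 times to produce
    # a value that the loop reaches in 19 iterations.
    assert calls == 21891
    ```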

    In fact, the above example teaches us not WHEN recursion should not be used, but HOW it should not be used. After all, if a fast iterative (loop-based) solution exists, the same loop can be implemented with a recursive procedure or function. For example:

    // x1, x2 – initial conditions (1, 1)
    // n – number of the required Fibonacci number
    function Fib(x1, x2, n: integer): integer;
    var
      x3: integer;
    begin
      if n > 1 then
      begin
        x3 := x2 + x1;
        x1 := x2;
        x2 := x3;
        Fib := Fib(x1, x2, n - 1);
      end
      else
        Fib := x2;
    end;

    Still, iterative solutions are preferable. The question is: when, then, should recursion be used?

    Any recursive procedure or function that contains exactly one recursive call to itself is easily replaced by an iterative loop. To get something that has no simple non-recursive counterpart, one has to turn to procedures and functions that call themselves two or more times. In that case the set of called procedures no longer forms a chain, as in Fig. 1, but a whole tree. There are wide classes of problems where the computation must be organized in exactly this way; for them, recursion is the simplest and most natural solution.
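    The first claim above, that a single self-call converts mechanically into a loop, can be sketched as follows (Python for brevity; the transformation replaces the tail call by an update of the parameters, names ours):

    ```python
    def fib_tail(x1, x2, n):
        """Accumulator-style recursion: the self-call is the last action."""
        if n > 1:
            return fib_tail(x2, x1 + x2, n - 1)   # the single recursive call
        return x2

    def fib_loop(x1, x2, n):
        """Same computation; the call became a parameter update in a loop."""
        while n > 1:
            x1, x2, n = x2, x1 + x2, n - 1
        return x2

    assert fib_tail(1, 1, 20) == fib_loop(1, 1, 20)
    ```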

    5. Trees

    The theoretical basis for recursive functions that call themselves more than once is the branch of discrete mathematics that studies trees.

    5.1. Basic definitions. Ways to depict trees

    Definition: a tree is a finite set T of one or more nodes such that:
    a) there is one distinguished node, called the root of the tree;
    b) the remaining nodes (excluding the root) are divided into pairwise disjoint subsets, each of which is in turn a tree. These trees are called the subtrees of the given tree.

    This definition is recursive. In short, a tree is a set consisting of a root and subtrees attached to it, which are themselves trees: a tree is defined through itself. Nevertheless, the definition makes sense, because the recursion is finite: each subtree contains fewer nodes than the tree containing it. Eventually we arrive at subtrees containing just one node, and those are clear enough.
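    The definition is directly executable: a tree is a root plus a list of subtrees, and any whole-tree computation recurses in exactly the same shape. A minimal Python sketch (the tuple encoding is our own choice) using the seven-node tree of Fig. 3, i.e. A(B, C(D, E, F(G))):

    ```python
    def tree(root, *subtrees):
        """A tree is a root together with zero or more subtrees."""
        return (root, list(subtrees))

    def node_count(t):
        root, subtrees = t
        # 1 for the root, plus the counts of the pairwise disjoint subtrees.
        return 1 + sum(node_count(s) for s in subtrees)

    t = tree('A',
             tree('B'),
             tree('C', tree('D'), tree('E'), tree('F', tree('G'))))
    assert node_count(t) == 7
    ```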

    Fig. 3. Tree.

    Fig. 3 shows a tree with seven nodes. Although real trees grow from the bottom up, it is customary to draw them the other way around; when drawing a diagram by hand this is obviously more convenient. Because of this inconsistency, confusion sometimes arises when one node is said to be above or below another. It is therefore more convenient to use the terminology of family trees, calling nodes closer to the root ancestors and more distant ones descendants.

    A tree can be depicted graphically in other ways as well; some of them are shown in Fig. 4. According to the definition, a tree is a system of nested sets, where the sets either do not intersect or are entirely contained in one another. Such sets can be drawn as regions on a plane (Fig. 4a). In Fig. 4b the nested sets are not laid out on a plane but stretched into a single line; Fig. 4b can also be viewed as a diagram of an algebraic formula containing nested parentheses. Fig. 4c gives one more popular way of depicting a tree structure, as an indented list.

    Fig. 4. Other ways to depict tree structures: (a) nested sets; (b) nested parentheses; (c) indented list.

    The indented list has an obvious similarity to the way program code is formatted. Indeed, a program written within the structured programming paradigm can be represented as a tree of nested constructs.

    One can also draw an analogy between the indented list and tables of contents in books, where sections contain subsections, which in turn contain sub-subsections, and so on. The traditional way of numbering such sections (section 1, subsections 1.1 and 1.2, sub-subsection 1.1.2, etc.) is called the Dewey decimal system. Applied to the tree in Figs. 3 and 4, this system gives:

    1 A; 1.1 B; 1.2 C; 1.2.1 D; 1.2.2 E; 1.2.3 F; 1.2.3.1 G.

    5.2. Tree traversal

    In all algorithms dealing with tree structures, one and the same idea keeps reappearing: the idea of traversing a tree. This is a way of visiting the tree's nodes in which each node is visited exactly once; it produces a linear arrangement of the tree's nodes. In particular, there are three orders in which the nodes can be visited: forward, reverse and end order.

    Forward traversal algorithm:

    • Visit the root.
    • Traverse all the subtrees, from left to right, each in forward order.

    This algorithm is recursive, since the traversal of a tree contains the traversal of subtrees, and they, in turn, are traversed using the same algorithm.

    In particular, for the tree in Figs. 3 and 4, forward traversal gives the sequence of nodes: A, B, C, D, E, F, G.

    The resulting sequence corresponds to enumerating the nodes left to right in the nested-parentheses representation and in the Dewey decimal system, and to a top-down pass through the indented list.

    When implementing this algorithm in a programming language, visiting the root corresponds to the procedure or function performing some action, and traversing the subtrees corresponds to recursive calls to itself. In particular, for a binary tree (where each node has at most two subtrees) the corresponding procedure looks like this:

    // Preorder Traversal – the English name for forward order
    procedure PreorderTraversal((Arguments));
    begin
      // Visit the root
      DoSomething((Arguments));
      // Traverse the left subtree
      if (there is a left subtree) then
        PreorderTraversal((Arguments 2));
      // Traverse the right subtree
      if (there is a right subtree) then
        PreorderTraversal((Arguments 3));
    end;

    That is, the procedure first performs all of its actions, and only then makes all of its recursive calls.

    Reverse traversal algorithm:

    • Traverse the leftmost subtree.
    • Visit the root.
    • Traverse the next subtree.
    • Visit the root.
    • … and so on, until the rightmost subtree has been traversed.

    That is, all the subtrees are traversed from left to right, with visits to the root placed between these traversals. For the tree in Figs. 3 and 4 this gives the sequence of nodes: B, A, D, C, E, G, F.

    In a corresponding recursive procedure, the actions will be located in the spaces between the recursive calls. Specifically for a binary tree:

    // Inorder Traversal – the English name for reverse order
    procedure InorderTraversal((Arguments));
    begin
      // Traverse the left subtree
      if (there is a left subtree) then
        InorderTraversal((Arguments 2));
      // Visit the root
      DoSomething((Arguments));
      // Traverse the right subtree
      if (there is a right subtree) then
        InorderTraversal((Arguments 3));
    end;

    End-order traversal algorithm:

    • Traverse all the subtrees from left to right.
    • Visit the root.

    For the tree in Fig. 3 and 4 this will give the sequence of nodes: B, D, E, G, F, C, A.

    In a corresponding recursive procedure, the actions will be located after the recursive calls. Specifically for a binary tree:

    // Postorder Traversal – the English name for end order
    procedure PostorderTraversal((Arguments));
    begin
      // Traverse the left subtree
      if (there is a left subtree) then
        PostorderTraversal((Arguments 2));
      // Traverse the right subtree
      if (there is a right subtree) then
        PostorderTraversal((Arguments 3));
      // Visit the root
      DoSomething((Arguments));
    end;
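    The three orders differ only in where the root action sits relative to the recursive calls. The difference is easy to see on a small binary tree (Python sketch; the tuple encoding and the five-node tree are ours, not the tree of Fig. 3, which is not binary):

    ```python
    def preorder(t, out):
        if t is None:
            return
        root, left, right = t
        out.append(root)          # root first
        preorder(left, out)
        preorder(right, out)

    def inorder(t, out):
        if t is None:
            return
        root, left, right = t
        inorder(left, out)
        out.append(root)          # root between the subtrees
        inorder(right, out)

    def postorder(t, out):
        if t is None:
            return
        root, left, right = t
        postorder(left, out)
        postorder(right, out)
        out.append(root)          # root last

    # A tree with root A, left subtree B (children D, E) and leaf C.
    t = ('A', ('B', ('D', None, None), ('E', None, None)), ('C', None, None))

    pre, ino, post = [], [], []
    preorder(t, pre); inorder(t, ino); postorder(t, post)
    assert pre  == ['A', 'B', 'D', 'E', 'C']
    assert ino  == ['D', 'B', 'E', 'A', 'C']
    assert post == ['D', 'E', 'B', 'C', 'A']
    ```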

    5.3. Representation of a tree in computer memory

    If the tree nodes contain some information, an appropriate dynamic data structure can be used to store it. In Pascal this is done with a record type containing pointers to subtrees of the same type. For example, a binary tree in which each node contains an integer can be stored in a variable of type PTree, declared as follows:

    type
      PTree = ^TTree;
      TTree = record
        Inf: integer;
        LeftSubTree, RightSubTree: PTree;
      end;

    Each node is of type PTree. This is a pointer, so each node must be created by calling the New procedure. If a node is a leaf, its LeftSubTree and RightSubTree fields are assigned the value nil. Otherwise, the LeftSubTree and RightSubTree nodes are in turn created by calling New.
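    A Python counterpart of the same structure may help if the Pascal pointer machinery is unfamiliar (a sketch; the class is ours, with field names mirroring the record, and None playing the role of nil):

    ```python
    class TTree:
        """Python analogue of the Pascal TTree record."""
        def __init__(self, inf, left=None, right=None):
            self.Inf = inf
            self.LeftSubTree = left      # None stands for nil
            self.RightSubTree = right

    def total(t):
        """Sum of all Inf fields, by the same recursion as the structure."""
        if t is None:
            return 0
        return t.Inf + total(t.LeftSubTree) + total(t.RightSubTree)

    root = TTree(1, TTree(2, TTree(4)), TTree(3))
    assert total(root) == 10
    ```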

    One such record is shown schematically in Fig. 5.

    Fig. 5. Schematic representation of a record of type TTree. The record has three fields: Inf, a number; LeftSubTree and RightSubTree, pointers to records of the same type TTree.

    An example of a tree made up of such records is shown in Figure 6.

    Fig. 6. A tree made up of records of type TTree. Each record stores a number and two pointers, each containing either nil or the address of another record of the same type.

    If you have not previously worked with structures built from records that reference records of the same type, we recommend that you first study the material on linked dynamic data structures.

    6. Examples of recursive algorithms

    6.1. Drawing a tree

    Let's consider an algorithm for drawing the tree shown in Fig. 6. If each line segment is considered a node, this image fully satisfies the definition of a tree given in the previous section.

    Fig. 6. Tree.

    The recursive procedure obviously draws one segment (the trunk up to the first fork), and then calls itself to draw the two subtrees. A subtree differs from the tree containing it in the coordinates of its starting point, its rotation angle, its trunk length, and the number of branches it contains (one fewer). All these differences should become parameters of the recursive procedure.

    An example of such a procedure, written in Delphi, is presented below:

    procedure Tree(
      Canvas: TCanvas;        // canvas on which the tree is drawn
      x, y: extended;         // root coordinates
      Angle: extended;        // angle at which the tree grows
      TrunkLength: extended;  // trunk length
      n: integer              // number of branchings (how many recursive calls remain)
    );
    var
      x2, y2: extended;       // trunk end (branching point)
    begin
      x2 := x + TrunkLength * cos(Angle);
      y2 := y - TrunkLength * sin(Angle);
      Canvas.MoveTo(round(x), round(y));
      Canvas.LineTo(round(x2), round(y2));
      if n > 1 then
      begin
        Tree(Canvas, x2, y2, Angle + Pi/4, 0.55 * TrunkLength, n - 1);
        Tree(Canvas, x2, y2, Angle - Pi/4, 0.55 * TrunkLength, n - 1);
      end;
    end;

    To obtain Fig. 6 this procedure was called with the following parameters:

    Tree(Image1.Canvas, 175, 325, Pi/2, 120, 15);

    Note that the drawing happens before the recursive calls; that is, the tree is drawn in forward order.

    6.2. The Towers of Hanoi

    According to legend, in the Great Temple of Benares, beneath the dome that marks the middle of the world, rests a bronze disc on which three diamond rods are fixed, one cubit high and as thick as a bee's body. Long ago, at the very beginning of time, the monks of this monastery gave offense to the god Brahma. Enraged, Brahma erected three tall rods and placed 64 disks of pure gold on one of them, each smaller disk resting on a larger one. As soon as all 64 disks have been moved from the rod on which Brahma placed them at the creation of the world onto another rod, the tower together with the temple will crumble into dust and the world will perish amid thunderclaps.
    The process requires that a larger disk never end up on top of a smaller one. The monks face a dilemma: in what order should they make the moves? We are asked to provide them with software to compute this sequence.

    Independently of Brahma, this puzzle was proposed at the end of the 19th century by the French mathematician Édouard Lucas. The versions sold in shops usually used 7-8 disks (Fig. 7).

    Fig. 7. The Towers of Hanoi puzzle.

    Let's assume there is a solution for n−1 disks. Then, to move n disks, proceed as follows:

    1) Move the top n−1 disks (onto the auxiliary pin).
    2) Move the n-th disk onto the remaining free pin.
    3) Move the stack of n−1 disks obtained in step (1) on top of the n-th disk.

    Since for the case n = 1 the rearrangement is obvious, by induction, using steps (1)-(3), we can rearrange an arbitrary number of disks.

    Let's write a recursive procedure that prints the entire sequence of moves for a given number of disks. Each call of the procedure prints one move (step 2 of the algorithm). For the moves of steps (1) and (3), the procedure calls itself with the number of disks reduced by one.

    // n – number of disks
    // a, b, c – pin numbers: disks are moved from pin a
    // to pin b, with pin c as auxiliary
    procedure Hanoi(n, a, b, c: integer);
    begin
      if n > 1 then
      begin
        Hanoi(n - 1, a, c, b);
        writeln(a, ' -> ', b);
        Hanoi(n - 1, c, b, a);
      end
      else
        writeln(a, ' -> ', b);
    end;

    Note that the set of recursively called procedures in this case forms a tree that is traversed in reverse order.
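    The recursion performs one move plus two rearrangements of n−1 disks, so the number of printed moves is 2^n − 1 (which is why the monks' 64 disks take so long). A Python sketch mirroring the Pascal procedure, collecting moves instead of printing them (names ours):

    ```python
    def hanoi(n, a, b, c, moves):
        """Move n disks from pin a to pin b, using pin c as auxiliary."""
        if n > 1:
            hanoi(n - 1, a, c, b, moves)
            moves.append((a, b))
            hanoi(n - 1, c, b, a, moves)
        else:
            moves.append((a, b))

    moves = []
    hanoi(3, 1, 2, 3, moves)
    assert len(moves) == 2**3 - 1   # 7 moves for three disks
    ```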

    6.3. Parsing Arithmetic Expressions

    The parsing task is this: given a string containing an arithmetic expression, and the known values of the variables occurring in it, compute the value of the expression.

    The process of evaluating an arithmetic expression can be represented as a binary tree. Indeed, each of the arithmetic operators (+, −, *, /) requires two operands, which are themselves arithmetic expressions and can accordingly be regarded as subtrees. Fig. 8 shows an example of the tree corresponding to expression (6).

    Fig. 8. The syntax tree corresponding to arithmetic expression (6).

    In such a tree the leaf nodes are always variables (here x) or numeric constants, while the internal nodes contain arithmetic operators. To execute an operator, its operands must first be evaluated. Thus, the tree in the figure should be traversed in end order. The corresponding sequence of nodes

    is called the reverse Polish notation of the arithmetic expression.
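    End-order output is exactly the order in which a stack machine can evaluate the expression: operands are pushed, and each operator pops its two operands and pushes the result. A minimal Python sketch (the token format and names are ours):

    ```python
    def eval_rpn(tokens, variables):
        """Evaluate a reverse Polish notation token list with a stack."""
        ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
               '*': lambda a, b: a * b, '/': lambda a, b: a / b}
        stack = []
        for tok in tokens:
            if tok in ops:
                b = stack.pop()          # right operand is on top
                a = stack.pop()
                stack.append(ops[tok](a, b))
            elif isinstance(tok, str):
                stack.append(variables[tok])   # a variable
            else:
                stack.append(tok)              # a numeric constant
        return stack.pop()

    # x*2 + 3 in reverse Polish notation: x 2 * 3 +
    assert eval_rpn(['x', 2, '*', 3, '+'], {'x': 5}) == 13
    ```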

    When constructing the syntax tree, pay attention to the following feature. Suppose we have the expression a − b + c (8)

    and we process the addition and subtraction operations from left to right. Then the correct syntax tree would have to contain a minus instead of the plus (Fig. 9a): in essence, such a tree corresponds to the expression a − (b − c). Creating the tree is easier if expression (8) is analyzed in reverse, from right to left. In this case the result is the tree of Fig. 9b, which is equivalent to the tree of Fig. 9a but requires no replacement of signs.

    Expressions containing multiplication and division operators must likewise be analyzed from right to left.

    Fig. 9. Syntax trees for the expression a − b + c when read left to right (a) and right to left (b).
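    The right-to-left splitting rule can be sketched in Python (our own minimal version: it handles flat expressions of single-character tokens without parentheses; a real parser would also skip over parenthesized subexpressions):

    ```python
    def build(expr):
        """Build a syntax tree: the root is the first +/- found scanning
        right to left; failing that, the first */ ; otherwise a leaf."""
        for operators in ('+-', '*/'):
            for i in range(len(expr) - 1, -1, -1):   # scan right to left
                if expr[i] in operators:
                    return (expr[i], build(expr[:i]), build(expr[i+1:]))
        return expr   # a single variable or digit: a leaf

    def evaluate(t, env):
        if isinstance(t, str):
            return env[t] if t in env else float(t)
        op, left, right = t
        ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
               '*': lambda a, b: a * b, '/': lambda a, b: a / b}
        return ops[op](evaluate(left, env), evaluate(right, env))

    # For "a-b+c" the root becomes '+', with 'a-b' as its left subtree,
    # so left-associativity is preserved without any replacement of signs.
    assert build('a-b+c') == ('+', ('-', 'a', 'b'), 'c')
    assert evaluate(build('a-b+c'), {'a': 10, 'b': 4, 'c': 1}) == 7
    ```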

    This approach does not eliminate recursion completely. However, it allows you to limit yourself to a single call of the recursive procedure, which may be sufficient when the motive is maximizing performance.

    7.3. Determining a tree node by its number

    The idea of this approach is to replace the recursive calls with a simple loop that is executed as many times as there are nodes in the tree formed by the recursive procedures. What exactly is done at each step must be determined from the step number. Matching step numbers with the necessary actions is not a trivial task, and in each case it has to be solved separately.

    For example, suppose you want to execute k nested loops of n steps each:

    for i1 := 0 to n - 1 do
      for i2 := 0 to n - 1 do
        for i3 := 0 to n - 1 do
          …

    If k is not known in advance, the loops cannot be written out explicitly as above. Using the technique shown in Section 6.5, you can obtain the required number of nested loops with a recursive procedure:

    procedure NestedCycles(Indexes: array of integer; n, k, depth: integer);
    var
      i: integer;
    begin
      if depth < k then
        for i := 0 to n - 1 do
        begin
          Indexes[depth] := i;
          NestedCycles(Indexes, n, k, depth + 1);
        end
      else
        DoSomething(Indexes);
    end;

    To get rid of the recursion and reduce everything to a single loop, note that if the steps are numbered in the base-n number system, then each step has a number consisting of the digits i1, i2, i3, …, that is, of the corresponding values from the Indexes array. The digits of the number thus correspond to the values of the loop counters. The step number in ordinary decimal notation is:

    i1·n^(k−1) + i2·n^(k−2) + … + ik.

    There will be n^k steps in total. Going through their numbers in the decimal system and converting each of them to base n, we obtain the index values:

    M := round(IntPower(n, k));
    for i := 0 to M - 1 do
    begin
      Number := i;
      for p := 0 to k - 1 do
      begin
        Indexes[p] := Number mod n;
        Number := Number div n;
      end;
      DoSomething(Indexes);
    end;
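    The numbering idea is easy to check in Python: enumerating i from 0 to n^k − 1 and taking base-n digits reproduces exactly the index tuples of k nested loops (itertools.product plays the role of the hand-written loops; names ours):

    ```python
    from itertools import product

    def digit_indexes(i, n, k):
        """Base-n digits of i, lowest digit first: Indexes[p] = i mod n."""
        indexes = [0] * k
        for p in range(k):
            indexes[p] = i % n    # Indexes[p] := Number mod n
            i //= n               # Number := Number div n
        return indexes

    n, k = 3, 4
    steps = [digit_indexes(i, n, k) for i in range(n ** k)]
    # product's last component varies fastest, so reversing each tuple
    # matches the lowest-digit-first order produced above.
    expected = [list(reversed(t)) for t in product(range(n), repeat=k)]
    assert steps == expected
    assert len(steps) == n ** k
    ```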

    Let us note once again that the method is not universal: something different will have to be invented for each task.

    Review questions

    1. Determine what the following recursive procedures and functions will do.

    (a) What will the following procedure print when Rec(4) is called?

    procedure Rec(a: integer);
    begin
      writeln(a);
      if a > 0 then
        Rec(a - 1);
      writeln(a);
    end;

    (b) What will be the value of the function Nod(78, 26)?

    function Nod(a, b: integer): integer;
    begin
      if a > b then
        Nod := Nod(a - b, b)
      else if b > a then
        Nod := Nod(a, b - a)
      else
        Nod := a;
    end;

    (c) What will be printed by the procedures below when A(1) is called?

    procedure A(n: integer); forward;
    procedure B(n: integer); forward;

    procedure A(n: integer);
    begin
      writeln(n);
      B(n - 1);
    end;

    procedure B(n: integer);
    begin
      writeln(n);
      if n

    (d) What will the procedure below print when calling BT(0, 1, 3)?

    procedure BT(x: real; D, MaxD: integer);
    begin
      if D = MaxD then
        writeln(x)
      else
      begin
        BT(x - 1, D + 1, MaxD);
        BT(x + 1, D + 1, MaxD);
      end;
    end;

    2. An ouroboros is a snake devouring its own tail (Fig. 14). When unfolded, it has length L, diameter near the head D, and abdominal wall thickness d. Determine how much of its tail it can squeeze into itself, and in how many layers the tail will be coiled afterwards.

    Fig. 14. The unfolded ouroboros.

    3. For the tree in Fig. 10a, give the sequence of node visits in forward, reverse and end traversal order.

    4. Draw the tree defined by the nested parentheses: (A(B(C, D), E), F, G).

    5. Graphically depict the syntax tree for the following arithmetic expression:

    Write this expression in reverse Polish notation.

    6. For the graph below (Fig. 15), write down the adjacency matrix and the incidence matrix.

    Tasks

    1. By computing the factorial a sufficiently large number of times (a million or more), compare the efficiency of the recursive and iterative algorithms. How much will the execution times differ, and how does the ratio depend on the number whose factorial is being computed?

    2. Write a recursive function that checks whether the brackets in a string are placed correctly. The arrangement is correct if the following conditions hold:

    (a) the number of opening and closing brackets is equal;
    (b) inside any pair (an opening bracket and its corresponding closing bracket) the brackets are placed correctly as well.

    Examples of incorrect placement:)(, ())(, ())((), etc.

    3. A string may contain both parentheses and square brackets. Each opening bracket has a corresponding closing bracket of the same type (round with round, square with square). Write a recursive function that checks whether the brackets are placed correctly in this case.

    Example of incorrect placement: ([)].

    4. The number of regular bracket structures of length 6 is 5: ()()(), (())(), ()(()), ((())), (()()).
    Write a recursive program that generates all regular bracket structures of length 2n.

    Note: the regular bracket structure of minimal length is "()". Longer structures are obtained from shorter ones in two ways:

    (a) by enclosing a shorter structure in brackets,
    (b) by writing two shorter structures one after the other.

    5. Create a procedure that prints all possible permutations for integers from 1 to N.

    6. Create a procedure that prints all subsets of the set (1, 2, ..., N).

    7. Create a procedure that prints all possible representations of the natural number N as the sum of other natural numbers.

    8. Create a function that computes the sum of the elements of an array using the following algorithm: the array is divided in half, the sums of the elements of each half are computed and added together. The sum of the elements in a half is computed by the same algorithm, that is, again by halving. The halving continues until the resulting pieces of the array contain one element each, so that computing the sum becomes trivial.

    Comment: this algorithm is an alternative to straightforward sequential summation. For arrays of real numbers it usually gives smaller rounding errors.

    10. Create a procedure that draws the Koch curve (Fig. 12).

    11. Reproduce Fig. 16. In the figure, at each successive iteration the circle becomes 2.5 times smaller (this coefficient can be made a parameter).

    Literature

    1. D. Knuth. The Art of Computer Programming. Vol. 1 (section 2.3, “Trees”).
    2. N. Wirth. Algorithms and Data Structures.

    Recursion is an interesting phenomenon in its own right, but in programming it is especially important in specific cases. Quite a few people encountering it for the first time have trouble understanding it. This is due to the huge range of potential uses of the term itself, depending on the context in which "recursion" is used. But one can hope that this article will help avoid possible misunderstandings.

    What is "recursion" anyway?

    The word "recursion" has a whole range of meanings, which depend on the field in which it is applied. The universal sense is this: recursion is a definition, image or description of an object or process that contains that object or process itself. It is possible only when an object is part of itself. Mathematics, physics, programming and a number of other scientific disciplines define recursion in their own ways. It has found practical application in information systems and in physical experiments.

    What is meant by recursion in programming?

    A recursive situation, or recursion in programming, is a moment when a procedure or function of a program calls itself. However strange it may sound to those just starting to learn programming, there is nothing odd about it. It is worth remembering that recursion is not difficult, and in some cases it replaces loops. If a computer is correctly instructed to call a procedure or function, it will simply start executing it.

    Recursion can be finite or infinite. For a finite recursion to stop calling itself, it must contain a termination condition. For instance, the value of a variable can be decreased on each call, and when it reaches a certain value the calls stop and the program terminates or proceeds to the subsequent code, depending on what needs to be achieved. Infinite recursion means that the function will keep calling itself for as long as the computer, or the program it runs in, keeps working.
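    The termination condition described above can be sketched with a minimal countdown (Python; the example is ours): the parameter decreases on every call, and once it reaches zero the function stops calling itself.

    ```python
    def countdown(n, collected):
        if n <= 0:                            # termination condition
            return collected
        collected.append(n)
        return countdown(n - 1, collected)    # the value decreases each call

    assert countdown(3, []) == [3, 2, 1]
    ```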

    It is also possible to organize complex (mutual) recursion using two functions. Say there are functions A and B: function A contains a call to B in its code, and B in turn tells the computer to execute A. Complex recursion is a way out of a number of logical situations that are awkward for simple recursion.
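    The A-and-B situation can be sketched with the classic even/odd pair (a hypothetical example of ours, not from the text): each function's only way forward is to call the other.

    ```python
    def is_even(n):
        if n == 0:
            return True
        return is_odd(n - 1)     # A calls B

    def is_odd(n):
        if n == 0:
            return False
        return is_even(n - 1)    # B calls A

    assert is_even(10) and is_odd(7)
    ```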

    A reader who has studied program loops has probably already noticed their similarity to recursion. In general, they can indeed perform similar or identical tasks. It is convenient to simulate the operation of a loop with recursion, which is especially useful where the loops themselves are awkward to use. The implementation scheme does not differ much between high-level programming languages; still, recursion in Pascal and recursion in C or another language have their own particularities. It can also be implemented successfully in low-level languages such as assembly, but that is more problematic and time-consuming.

    Recursion trees

    What is a "tree" in programming? It is a finite set consisting of at least one node, such that:

    1. There is an initial, distinguished node called the root of the whole tree.
    2. The remaining nodes fall into a number of pairwise disjoint subsets, each of which is itself a tree. All these forms of organization are called subtrees of the main tree.

    In other words: trees contain subtrees, which contain further trees, each with fewer nodes than the tree before it. This continues until some node can go no further, which marks the end of the recursion. There is one more nuance in the schematic representation: real trees grow from the bottom up, but in programming they are drawn the other way around. Nodes that have no continuation are called leaf nodes. For ease of notation and convenience, genealogical terminology (ancestors, descendants) is used.

    Why is it used in programming?

    Recursion in programming has found application in solving a number of complex problems. If only a single call is needed, it is easier to use an iterative loop; but when calls must repeat two or more times, recursive situations are used to avoid building a chain and to arrange execution in the form of a tree. For a wide class of problems, organizing the computation this way is optimal in terms of resource consumption. So recursion in Pascal, or in any other high-level programming language, is a call of a function or procedure that repeats until its conditions are met, regardless of the number of external calls. In other words, a program may contain only a single call to a subroutine, yet it will occur repeatedly until a predetermined moment. In some ways this is an analogue of a loop, with its own specifics of use.

    Differences between recursion in different programming languages

    Despite the general scheme of implementation and its specific application in each individual case, recursion in programming has its own characteristics. This can cause difficulties when searching for the necessary material, but one should always remember: if a programming language can call functions or procedures, then recursion is feasible in it. Its most significant differences appear between low-level and high-level languages, particularly in the available implementation facilities. Execution ultimately depends on the task posed, and the recursion is written accordingly. The functions and procedures used differ, but their purpose is always the same: to make them call themselves.

    Recursion is easy. How easy is it to remember the contents of an article?

    For beginners it may be hard to grasp at first, so at least one example of recursion is needed. Here is a small everyday example that helps convey the essence of this mechanism. Take two or more mirrors and arrange them so that all the others are reflected in one of them. You will see the mirrors reflecting themselves many times over, creating an effect of infinity. Recursion is, figuratively speaking, such a reflection (and there will be many of them). As you can see, it is not hard to understand, given the desire. And by studying programming materials further, you will find that recursion is also a very easy thing to do.