Linear regression plot in Excel. Regression in Excel

The regression line is a graphical representation of the relationship between two phenomena, and you can build one very clearly in Excel.

To do this you need:

1. Open Excel.

2. Create the data columns. In our example we will build a regression line, or relationship, between aggressiveness and self-doubt in first-graders. 30 children participated in the experiment; the data are presented in the Excel table:

Column 1 - subject number

Column 2 - aggressiveness, in points

Column 3 - self-doubt, in points

3. Then select both columns (without the column headers), open the Insert tab, choose Scatter, and pick the very first of the proposed layouts, Scatter with Only Markers.

4. We now have a template for the regression line - the so-called scatter plot. To get to the regression line, click on the resulting chart, open the Design tab, find Chart Layouts on the ribbon, and choose Layout 9 (the one marked f(x)).

5. We now have a regression line. The chart also displays its equation and the square of the correlation coefficient (R²).

6. All that remains is to add the chart title and the axis titles. If desired, you can also remove the legend and reduce the number of horizontal gridlines (the Layout tab, then Gridlines). Basic changes and settings are made on the Layout tab.

The regression line is now constructed in MS Excel, and you can add it to the text of your work.
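
If you want to double-check the equation shown on the chart, the same slope, intercept, and R² can be obtained with worksheet functions. A minimal sketch, assuming the aggressiveness scores sit in B2:B31 and the self-doubt scores in C2:C31 (these ranges are assumptions about your sheet):

Slope: =SLOPE(C2:C31, B2:B31)
Y-intercept: =INTERCEPT(C2:C31, B2:B31)
R²: =RSQ(C2:C31, B2:B31)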

This is the most common way to show the dependence of one variable on others - for example, how GDP depends on the volume of foreign investment, on the National Bank's lending rate, or on the prices of key energy resources.

Modeling allows you to show the magnitude of this dependence (the coefficients), which makes it possible to make a direct forecast and carry out planning based on those forecasts. Regression analysis also supports management decisions aimed at stimulating the priority causes that influence the final result; the model itself helps identify these priority factors.

General view of the linear regression model:

Y = a0 + a1x1 + ... + akxk

where a are the regression parameters (coefficients), x are the influencing factors, and k is the number of factors in the model.

Initial data

Among the initial data, we need a set of several consecutive or interconnected values of the final parameter Y (for example, GDP) and the same number of values of each indicator whose influence we are studying (for example, foreign investment).

The figure above shows a table with these initial data: Y is the economically active population, while the number of enterprises, the volume of capital investment, and household income are the influencing factors, that is, the X's.

The figure might suggest the erroneous conclusion that modeling applies only to time series, that is, observations recorded sequentially in time. This is not so: one can model across a structure just as well; for example, the values in the table could be broken down not by year but by region.

To build adequate linear models, it is desirable that the source data contain no sharp spikes or drops; in such cases it is advisable to smooth the data first, but smoothing is a topic for another time.

The Analysis ToolPak

The parameters of a linear regression model can be calculated manually using ordinary least squares (OLS), but this is quite time-consuming. The same method can be applied somewhat faster with Excel formulas, where the program does the arithmetic itself, although you still have to enter the formulas manually.
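
As a sketch of that formulas route (the ranges below are assumptions, not the layout from the figure): with Y in B2:B17 and the three factors in C2:E17, select a horizontal range of four cells, type

=LINEST(B2:B17, C2:E17)

and press Ctrl+Shift+Enter. LINEST returns the OLS coefficients in one step, in reverse order: a3, a2, a1, a0.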

Excel has an add-in called the Analysis ToolPak, a fairly powerful tool that helps the analyst. Among other things, it can calculate regression parameters by the same least squares method in just a few clicks. How to use this tool is discussed below.

Activating the Analysis ToolPak

By default this add-in is disabled and you won't find it on any ribbon tab, so let's walk through how to activate it step by step.

In Excel, at the top left, open the File tab, find Options in the menu that opens, and click it.

In the window that opens, find Add-ins on the left and select it. At the bottom of this tab there is a drop-down list that reads Excel Add-ins by default; to the right of the list is a Go button, which you need to click.

A pop-up window will prompt you to select the available add-ins. Check Analysis ToolPak and, just in case, Solver Add-in (also a useful thing), then confirm your choice by clicking OK.

Instructions for finding linear regression parameters using the Analysis ToolPak

After the Analysis ToolPak add-in is activated, it is always available on the Data tab of the main menu under the Data Analysis button.

In the Data Analysis tool window, find and select Regression from the list of options.

Next, a window opens for configuring and selecting the source data for calculating the parameters of the regression model. Here you must specify the ranges of the initial data, namely the parameter being described (Y) and the factors influencing it (X), as shown in the figure below; the remaining parameters are, in principle, optional.

After you have selected the source data and clicked OK, Excel performs the calculations on a new sheet of the active workbook (unless set otherwise in the settings). The output looks like this:

The key cells are highlighted in yellow; these are the ones to look at first. The other significance parameters also matter, but their detailed analysis deserves a separate post.

So, 0.865 is R², the coefficient of determination. It shows that the model explains 86.5% of the variation in the studied parameter Y in terms of the studied factors X. Roughly speaking, this is an indicator of model quality: the higher, the better. Naturally, it cannot exceed 1; a model is considered good when R² is above 0.8, while below 0.5 the adequacy of the model can safely be questioned.
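
Incidentally, the same R² can be extracted from the LINEST statistics array without the add-in; a hedged example, reusing the assumed ranges above (Y in B2:B17, factors in C2:E17):

=INDEX(LINEST(B2:B17, C2:E17, TRUE, TRUE), 3, 1)

The third row, first column of the statistics array holds r2.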

Now let's move on to model coefficients:
2079.85 is a0, the coefficient showing what Y will be if all factors used in the model equal 0; implicitly this captures dependence on factors not described in the model;
-0.0056 is a1, the coefficient showing the weight of factor x1's influence on Y. That is, within this model the number of enterprises affects the economically active population with a weight of only -0.0056 (a rather small degree of influence). The minus sign indicates that this influence is negative: the more enterprises, the smaller the economically active population, however paradoxical that may seem;
-0.0026 is a2, the coefficient of influence of capital investment on the size of the economically active population; according to the model, this influence is also negative;
0.0028 is a3, the coefficient of influence of household income on the size of the economically active population; here the influence is positive, that is, according to the model, income growth would contribute to growth of the economically active population.

Let's collect the calculated coefficients into the model:

Y = 2079.85 - 0.0056x1 - 0.0026x2 + 0.0028x3

This is the linear regression model, which for the source data used in the example looks exactly like this.

Model estimates and forecast

As we have already discussed above, the model is built not only to show the magnitude of the dependence of the studied parameter on the influencing factors, but also so that, knowing those factors, a forecast can be made. Making the forecast is quite simple: you just substitute the values of the influencing factors for the corresponding X's in the resulting model equation. In the figure below, these calculations are made in Excel in a separate column.
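
For example, if the factor values for the forecast period were placed in cells C18:E18 (hypothetical cells), the model forecast would be a single formula:

=2079.85 - 0.0056*C18 - 0.0026*D18 + 0.0028*E18

In practice it is better to reference the coefficient cells from the Regression output rather than retype the rounded values.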

The actual values (those observed in reality) and the values calculated by the model are displayed as graphs in the same figure, to show the difference between them - that is, the model error.

To repeat: to make a forecast with the model, the influencing factors must be known. If we are dealing with a time series and hence a forecast for the future - say, the next year or month - it is not always possible to know in advance what the influencing factors will be. In such cases, the influencing factors themselves must also be forecast, most often with an autoregressive model - one in which the influencing factors are the studied indicator itself and time, that is, the indicator is modeled as a function of its own past values.

We will look at how to build an autoregressive model in the next article; for now, let's assume we know the values of the influencing factors in the future period (2008 in the example), and by substituting these values into the calculations we obtain our forecast for 2008.

Statistical data processing can also be carried out with the Analysis ToolPak add-in (Fig. 62).

From the suggested items, select "Regression" and click it with the left mouse button. Then click OK.

A window will appear as shown in Fig. 63.

The "Regression" analysis tool fits a line to a set of observations using the least squares method. Regression is used to analyze the effect of one or more independent variables on a single dependent variable. For example, several factors influence an athlete's performance, including age, height, and weight. Regression can estimate the degree to which each of these three factors influences performance, and the result can then be used to predict the performance of another athlete.

The Regression tool uses the LINEST function.

REGRESSION Dialog Box

Labels. Select the check box if the first row or first column of the input range contains headings; clear it if there are no headings, in which case suitable headings for the output table will be created automatically.

Confidence Level. Select the check box to include an additional confidence level in the output summary table. In the corresponding field, enter the confidence level you want to apply in addition to the default 95% level.

Constant is Zero. Select the check box to force the regression line to pass through the origin.

Output Range. Enter a reference to the top-left cell of the output range. Allow at least seven columns for the output summary table, which will include: ANOVA results, coefficients, the standard error of the Y estimate, standard deviations, the number of observations, and standard errors of the coefficients.

New Worksheet. Select this option to open a new sheet in the workbook and paste the analysis results starting at cell A1. If necessary, enter a name for the new sheet in the field opposite the option button.

New Workbook. Select this option to create a new workbook in which the results will be added on a new sheet.

Residuals. Select the check box to include residuals in the output table.

Standardized Residuals. Select the check box to include standardized residuals in the output table.

Residual Plots. Select the check box to plot the residuals for each independent variable.

Line Fit Plots. Select the check box to plot the predicted versus observed values.

Normal Probability Plots. Select the check box to plot a normal probability graph.

The LINEST function

To carry out a calculation, select the cell in which the result should appear and press the = key on the keyboard. Then, in the Name box, choose the desired function, for example AVERAGE (Fig. 22).

The LINEST function calculates statistics for a series using least squares to find the straight line that best fits the available data, and returns an array describing that line. LINEST can also be combined with other functions to compute other kinds of models that are linear in their unknown parameters, including polynomial, logarithmic, exponential, and power models. Because it returns an array of values, the function must be entered as an array formula.
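
For instance, a logarithmic model y = m·ln(x) + b is linear in m and b, so it can be fitted by transforming x inside the call. A minimal sketch with hypothetical ranges (y in B2:B16, x in A2:A16), entered as an array formula with Ctrl+Shift+Enter:

=LINEST(B2:B16, LN(A2:A16))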

The equation for a straight line is:

y = m1x1 + m2x2 + ... + b (when there are several ranges of x-values),

where the dependent value y is a function of the independent value x, the m values are coefficients corresponding to each independent variable x, and b is a constant. Note that y, x, and m can be vectors. LINEST returns the array {mn; mn-1; ...; m1; b}. LINEST can also return additional regression statistics.

LINEST(known_y_values; known_x_values; const; statistics)

Known_y_values - the set of y-values already known for the relationship y = mx + b.

If the known_y_values array occupies one column, each column of the known_x_values array is treated as a separate variable.

If the known_y_values array occupies one row, each row of the known_x_values array is treated as a separate variable.

Known_x_values - an optional set of x-values already known for the relationship y = mx + b.

The known_x_values array can contain one or more sets of variables. If only one variable is used, known_y_values and known_x_values can be ranges of any shape, as long as they have the same dimensions. If more than one variable is used, known_y_values must be a vector (that is, a range one row high or one column wide).

If known_x_values is omitted, it is assumed to be the array {1;2;3;...} of the same size as known_y_values.

Const - a logical value specifying whether the constant b is forced to equal 0.

If const is TRUE or omitted, b is calculated normally.

If const is FALSE, b is set to 0 and the m-values are fitted so that the relationship y = mx is satisfied.

Statistics - a logical value specifying whether additional regression statistics should be returned.

If statistics is TRUE, LINEST returns the additional regression statistics; the returned array has the form {mn; mn-1; ...; m1; b : sen; sen-1; ...; se1; seb : r2; sey : F; df : ssreg; ssresid}.

If statistics is FALSE or omitted, LINEST returns only the coefficients m and the constant b.
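
To see the full statistics array, the formula must be entered over a block of cells. A minimal sketch, assuming one x-variable in A2:A16 and y-values in B2:B16 (hypothetical ranges): select a block 5 rows high and 2 columns wide, type

=LINEST(B2:B16, A2:A16, TRUE, TRUE)

and press Ctrl+Shift+Enter. With k independent variables, the block should be 5 rows by k+1 columns.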

Additional regression statistics (Table 17)

Value - Description
se1, se2, ..., sen - Standard error values for the coefficients m1, m2, ..., mn.
seb - Standard error value for the constant b (seb = #N/A if the const argument is FALSE).
r2 - Coefficient of determination. The actual y-values are compared with the values obtained from the line equation, and from this comparison the coefficient of determination is computed, normalized from 0 to 1. If it equals 1, there is perfect correlation with the model, that is, no difference between the actual and estimated y-values. At the other extreme, if the coefficient of determination is 0, the regression equation is useless for predicting y-values. For more information on how r2 is calculated, see the Notes at the end of this section.
sey - Standard error of the y estimate.
F - F-statistic, or the F-observed value. The F-statistic is used to determine whether the observed relationship between the dependent and independent variables is due to chance.
df - Degrees of freedom. Degrees of freedom are useful for finding F-critical values in a statistical table. To determine the confidence level of the model, compare the values in the table with the F-statistic returned by LINEST. For more information about calculating df, see the Notes at the end of this section. Example 4 below shows the use of the F and df values.
ssreg - Regression sum of squares.
ssresid - Residual sum of squares. For more information about calculating ssreg and ssresid, see the Notes at the end of this section.

The figure below shows the order in which additional regression statistics are returned (Figure 64).

Notes:

Any straight line can be described by its slope and its y-intercept:

Slope (m): To determine the slope of a line, usually denoted by m, take two points on the line, (x1, y1) and (x2, y2); the slope equals (y2 - y1)/(x2 - x1).

Y-intercept (b): The y-intercept of a line, usually denoted by b, is the y-value for the point at which the line intersects the y-axis.

The equation of the straight line is y = mx + b. Once m and b are known, any point on the line can be calculated by substituting a y- or x-value into the equation. You can also use the TREND function.

If there is only one independent variable x, you can obtain the slope and y-intercept directly using the following formulas:

Slope: INDEX(LINEST(known_y_values; known_x_values); 1)

Y-intercept: INDEX(LINEST(known_y_values; known_x_values); 2)

The accuracy of the line calculated by the LINEST function depends on the degree of scatter in the data: the closer the data are to a straight line, the more accurate the LINEST model. The LINEST function uses least squares to determine the best fit to the data. When there is only one independent variable x, m and b are calculated from the following formulas:

m = Σ((x - x̄)(y - ȳ)) / Σ((x - x̄)²),   b = ȳ - m·x̄,

where x̄ and ȳ are the sample means, for example x̄ = AVERAGE(known_x's) and ȳ = AVERAGE(known_y's).

The LINEST and LOGEST fitting functions can calculate the straight line or exponential curve that best fits the data, but they do not answer the question of which of the two results suits the problem better. You can also evaluate the TREND(known_y_values; known_x_values) function for a straight line, or the GROWTH(known_y_values; known_x_values) function for an exponential curve. These functions, unless new_x_values are specified, return an array of y-values predicted along the line or curve for the actual x-values. You can then compare the predicted values with the actual values, or chart both for visual comparison.
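
A sketch of such a comparison (the ranges are assumptions): with x-values in A2:A16 and y-values in B2:B16, enter each formula below as an array formula over a column of 15 cells, then plot both results against the actual y-values:

=TREND(B2:B16, A2:A16) - points along the fitted straight line
=GROWTH(B2:B16, A2:A16) - points along the fitted exponential curve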

When performing regression analysis, Microsoft Excel calculates, for each point, the squared difference between the y-value predicted for that point and its actual y-value. The sum of these squared differences is called the residual sum of squares (ssresid). Microsoft Excel then calculates the total sum of squares (sstotal). If const = TRUE or omitted, the total sum of squares equals the sum of the squared differences between the actual y-values and the mean y-value. If const = FALSE, the total sum of squares equals the sum of the squares of the actual y-values (without subtracting the mean y-value). The regression sum of squares is then ssreg = sstotal - ssresid. The smaller the residual sum of squares, the larger the coefficient of determination r2, which shows how well the equation obtained by regression analysis explains the relationships between the variables. The coefficient r2 equals ssreg/sstotal.

In some cases, one or more X columns (assume the Y and X values are in columns) have no additional predictive value in the presence of the other X columns; in other words, removing one or more X columns may lead to Y values calculated with the same precision. In that case, the redundant X columns are excluded from the regression model. This phenomenon is called collinearity, because the redundant X columns can be represented as a sum of the non-redundant ones. The LINEST function checks for collinearity and removes any redundant X columns it detects from the regression model. Removed X columns can be recognized in the LINEST output by a coefficient of 0 and an se value of 0. Removing one or more columns as redundant changes df, because df depends on the number of X columns actually used for prediction; for details on calculating df, see Example 4 below. When df changes because of the removal of redundant columns, the sey and F values also change. In practice, collinearity should be relatively rare; however, it is more likely to arise when some X columns contain only 0 and 1 values as indicators of whether a subject of the experiment belongs to a particular group. If const = TRUE or omitted, LINEST inserts an additional X column of ones to model the intercept. If there is a column with 1 for each male and 0 for each female, and also a column with 1 for each female and 0 for each male, the latter column is removed, because its values can be obtained from the "male indicator" column.

When no X columns are removed from the model due to collinearity, df is calculated as follows: if there are k columns of known_x_values and const = TRUE or omitted, then df = n - k - 1; if const = FALSE, then df = n - k. In both cases, each X column removed due to collinearity increases df by 1.

Formulas that return arrays must be entered as array formulas.

When entering an array constant as an argument (for example, known_x_values), use semicolons to separate values within a row and colons to separate rows. The separator characters may differ depending on the regional settings in the Control Panel.

Note that the y-values predicted by the regression equation may not be valid if they fall outside the range of the y-values used to determine the equation.

The underlying algorithm used in the LINEST function differs from the algorithm of the SLOPE and INTERCEPT functions. The difference between the algorithms can lead to different results with indeterminate or collinear data. For example, if the data points of the known_y_values argument are all 0 and the data points of the known_x_values argument are all 1, then:

The LINEST function returns a value of 0. The LINEST algorithm is designed to return reasonable values for collinear data, and in this case at least one answer can be found.

The SLOPE and INTERCEPT functions return the #DIV/0! error. Their algorithm is designed to find exactly one answer, and in this case there may be several.

LINEST can also be used to fit other types of regression by entering functions of the x and y variables as the x and y series for LINEST. For example, the following formula:

LINEST(y_values, x_values^COLUMN($A:$C))

works when you have a single column of y-values and a single column of x-values, and calculates a cubic approximation (a third-degree polynomial) of the form:

y = m1x + m2x² + m3x³ + b
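
An equivalent, perhaps more transparent, way to request the same cubic fit is to raise the x-column to the powers 1, 2, and 3 with an array constant (the ranges are hypothetical):

=LINEST(B2:B16, A2:A16^{1,2,3})

entered as an array formula over a range 1 row high and 4 columns wide; the result is m3, m2, m1, b.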

The formula can be modified to calculate other types of regression, but in some cases the output values ​​and other statistics may need to be adjusted.

The MS Excel package allows you to do most of the work very quickly when constructing a linear regression equation. It is important to understand how to interpret the results obtained.

This requires the Analysis ToolPak add-in, which must be enabled via the menu item Tools\Add-Ins.

In Excel 2007, to enable the Analysis ToolPak you need to go to the Excel Options block by clicking the Office button in the top left corner and then the Excel Options button at the bottom of the window:



To build a regression model, select Tools\Data Analysis\Regression. (In Excel 2007, this mode is in the Data/Data Analysis/Regression block.) A dialog box will appear that needs to be filled in:

1) Input Range Y - a reference to the cells containing the values of the resulting characteristic y; the values must be arranged in a single column;

2) Input Range X - a reference to the cells containing the factor values; the values must be arranged in columns;

3) The Labels flag is set if the first cells contain explanatory text (data labels);

4) Confidence Level - the confidence level, taken as 95% by default. If this value does not suit you, enable this flag and enter the required value;

5) The Constant is Zero flag is set if the equation must be constructed with a zero constant term, b = 0;

6) Output Options determine where the results are placed; the default mode is New Worksheet;

7) The Residuals block enables the output of the residuals and the construction of their plots.

As a result, output is produced containing all the necessary information, grouped into three blocks: Regression Statistics, Analysis of Variance, and Residual Output. Let's take a closer look at them.

1. Regression statistics:

Multiple R - the Pearson correlation coefficient (the square root of R-square);

R-square - the coefficient of determination, R² = ssreg/sstotal;

Adjusted R-square - calculated as 1 - (1 - R²)·(n - 1)/(n - m - 1) (used for multiple regression);

Standard error S - calculated as S = √(ssresid/(n - m - 1));

Observations - the number of data points n.

2. Analysis of variance, Regression line:

Parameter df equals m (the number of factors x in the model);

Parameter SS is the regression sum of squares, SS = Σ(ŷ - ȳ)²;

Parameter MS is determined as MS = SS/df;

Statistic F is determined as F = MS(regression)/MS(residual);

Significance F is the p-value of the F statistic. If this number exceeds the chosen significance level α, the hypothesis that there is no linear relationship is accepted; otherwise, the hypothesis that there is a linear relationship is accepted.


3. Analysis of variance, Residual line:

Parameter df equals n - m - 1;

Parameter SS is the residual sum of squares, SS = Σ(y - ŷ)²;

Parameter MS is determined as MS = SS/df.

4. Analysis of variance, Total line: contains the column-wise sums (df and SS) of the previous two lines.

5. Analysis of variance, Y-intercept line: contains the coefficient a0, its standard error, and its t-statistic.

The P-value is the significance level corresponding to the computed t-statistic; it is determined by the function TDIST(t-statistic; n - m - 1; 2). If the P-value exceeds the chosen significance level α, the corresponding variable is statistically insignificant and can be excluded from the model.
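
As a sketch (the cell reference and degrees of freedom here are hypothetical): if a coefficient's t-statistic sits in cell D18 and the residual degrees of freedom equal 11, the P-value can be reproduced with

=TDIST(ABS(D18), 11, 2)

and compared with α = 0.05; a value above 0.05 marks the variable as statistically insignificant.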

Lower 95% and Upper 95% - the lower and upper limits of the 95-percent confidence intervals for the coefficients of the theoretical linear regression equation. If the confidence level in the input block was left at its default, the last two columns duplicate the previous ones; if the user entered a custom confidence level, the last two columns contain the lower and upper bounds for that confidence level.

6. Analysis of variance, factor lines: contain the coefficient values, standard errors, t-statistics, P-values, and confidence intervals for the corresponding coefficients ai.

7. The Residual Output block contains the predicted values of y (ŷ in our notation) and the residuals e = y - ŷ.


Good afternoon, dear blog readers! Today we will talk about nonlinear regressions. The solution of linear regressions can be viewed via the LINK.

This method is used mainly in economic modeling and forecasting. Its goal is to observe and identify dependencies between two indicators.

The main types of nonlinear regressions are:

  • polynomial (quadratic, cubic);
  • hyperbolic;
  • power;
  • exponential;
  • logarithmic.

Various combinations can also be used. For example, for time series analytics in banking, insurance, and demographic studies, the Gompertz curve, a type of logarithmic regression, is used.

When forecasting with nonlinear regressions, the main thing is to find the correlation coefficient, which shows whether there is a close relationship between the two parameters. As a rule, if the correlation coefficient is close to 1, there is a relationship and the forecast will be fairly accurate. Another important element of nonlinear regressions is the average relative error (A): if it falls within 8-10%, the model is sufficiently accurate.

This is where we will probably finish the theoretical block and move on to practical calculations.

We have a table of car sales over a 15-year period (denote it X); the number of the measurement step will be the argument n. We also have revenue for these periods (denote it Y), and we need to predict future revenue. Let's build the following table:

For the study, we need to solve the equation (the dependence of Y on X): y = ax² + bx + c + e. This is paired quadratic regression. We apply the least squares method to find the unknown coefficients a, b, and c, which leads to a system of algebraic equations of the form:

a·Σx⁴ + b·Σx³ + c·Σx² = Σx²y
a·Σx³ + b·Σx² + c·Σx = Σxy
a·Σx² + b·Σx + c·n = Σy

To solve this system, we will use, for example, Cramer's method. The sums appearing in the system are the coefficients of the unknowns. To calculate them, we add several columns to the table (D, E, F, G, H), labeled according to the meaning of the calculations: in column D we square x, in E we cube it, in F we raise it to the fourth power, in G we multiply x and y, and in H we multiply x² by y.

The result is a table filled in with everything needed to solve the system.
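
The helper columns can be filled with one row of formulas dragged down to row 16, with the sums taken in row 17; a sketch, assuming x sits in column B and y in column C of rows 2-16, as in the figures (adjust if your layout differs):

in D2: =B2^2
in E2: =B2^3
in F2: =B2^4
in G2: =B2*C2
in H2: =B2^2*C2
and the sums in row 17, for example in D17: =SUM(D2:D16)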

Let's form matrix A of the system, consisting of the coefficients of the unknowns on the left-hand sides of the equations. Place it at cell A22 and label it "A=". We follow the system of equations that we chose for the regression.

That is, in cell B21 we place the sum of the column where X was raised to the fourth power, by simply referencing the cell: "=F17". Next we need the sum of the column where X was cubed, E17, and then we proceed strictly according to the system. In this way we fill out the entire matrix.

In accordance with Cramer's algorithm, we build matrix A1, identical to A except that the elements of its first column are replaced by the right-hand sides of the system's equations: the sum of the x²·y column, the sum of the x·y column, and the sum of the Y column.

We will also need two more matrices - call them A2 and A3 - in which the second and third columns, respectively, are replaced by the right-hand sides of the equations. The picture will look like this.

Following the chosen algorithm, we calculate the determinants (D) of the resulting matrices using the MDETERM function, placing the results in cells J21:K24.
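
For example, if matrix A occupies B21:D23 and the matrices A1, A2, A3 are laid out in blocks of the same size (the exact ranges below are assumptions about the sheet layout), the four determinants could be computed as:

in K21: =MDETERM(B21:D23) - determinant of A
in K22: =MDETERM(F21:H23) - determinant of A1
in K23: =MDETERM(B26:D28) - determinant of A2
in K24: =MDETERM(F26:H28) - determinant of A3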

We calculate the coefficients of the equation, following Cramer, in the cells opposite the corresponding determinants: a (in cell M22) as "=K22/K21"; b (in cell M23) as "=K23/K21"; c (in cell M24) as "=K24/K21".

We get our desired equation of paired quadratic regression:

y = -0.074x² + 2.151x + 6.523

Let us evaluate the closeness of the relationship using the correlation index.

For the calculation, add a column J to the table (call it y*). It is computed from the regression equation we obtained: "=$M$22*B2*B2+$M$23*B2+$M$24". Place this in cell J2, then drag the autofill handle down to cell J16.

To calculate the sums Σ(y - y*)² and Σ(y - ȳ)², add columns K and L to the table with the corresponding formulas. We calculate the mean of the Y column using the AVERAGE function.

In cell K25 we place the formula for the correlation index: "=SQRT(1-(K17/L17))".

We see that the value of 0.959 is very close to 1, which means there is a close nonlinear relationship between sales and years.

It remains to evaluate the quality of fit of the resulting quadratic regression equation using the determination index, which is calculated as the square of the correlation index. The formula in cell K26 is therefore very simple: "=K25*K25".

The coefficient of 0.920 is close to 1, which indicates a high quality of fit.

The last step is to calculate the relative error. Add a column and enter the formula "=ABS((C2-J2)/C2)" (ABS returns the absolute value). Drag the fill handle down, display the average in cell M18 using AVERAGE, and give the cells a percentage format. The result, 7.79%, falls within the acceptable error range of 8-10%, so the calculations are sufficiently accurate.

If the need arises, we can build a graph using the obtained values.

An example file is attached - LINK!
