The LINEST function in Excel is a statistical function that calculates the statistics for a straight line and returns an array, based on the selected data, that describes that line. In other words, LINEST fits a simple line equation (y = mx + b) that explains the relationship between the dependent and independent variables, using the least-squares method to find the best fit for the data.
Syntax: =LINEST(known_y’s, [known_x’s], [const], [stats])
The LINEST function syntax has the following arguments:
 known_y’s Required. The set of y-values that you already know in the relationship y = mx + b.
 If the range of known_y’s is in a single column, each column of known_x’s is interpreted as a separate variable.
 If the range of known_y’s is contained in a single row, each row of known_x’s is interpreted as a separate variable.
 known_x’s Optional. A set of x-values that you may already know in the relationship y = mx + b.
 The range of known_x’s can include one or more sets of variables. If only one variable is used, known_y’s and known_x’s can be ranges of any shape, as long as they have equal dimensions. If more than one variable is used, known_y’s must be a vector (that is, a range with a height of one row or a width of one column).
 If known_x’s is omitted, it is assumed to be the array {1,2,3,…} that is the same size as known_y’s.
 const Optional. A logical value specifying whether to force the constant b to equal 0.
 If const is TRUE or omitted, b is calculated normally.
 If const is FALSE, b is set equal to 0 and the m-values are adjusted to fit y = mx.
 stats Optional. A logical value specifying whether to return additional regression statistics.
 If stats is TRUE, LINEST returns the additional regression statistics; as a result, the returned array is {mn, mn-1, …, m1, b; sen, sen-1, …, se1, seb; r², sey; F, df; ssreg, ssresid}.
 If stats is FALSE or omitted, LINEST returns only the m-coefficients and the constant b. The additional regression statistics are as follows.
se1, se2, …, sen: The standard error values for the coefficients m1, m2, …, mn.
seb: The standard error value for the constant b (seb = #N/A when const is FALSE).
r²: The coefficient of determination. Compares estimated and actual y-values, and ranges in value from 0 to 1. If it is 1, there is a perfect correlation in the sample: there is no difference between the estimated y-value and the actual y-value. At the other extreme, if the coefficient of determination is 0, the regression equation is not helpful in predicting a y-value. For information about how r² is calculated, see the notes later in this topic.
sey: The standard error for the y estimate.
F: The F statistic, or the F-observed value. Use the F statistic to determine whether the observed relationship between the dependent and independent variables occurs by chance.
df: The degrees of freedom. Use the degrees of freedom to help you find F-critical values in a statistical table. Compare the values you find in the table to the F statistic returned by LINEST to determine a confidence level for the model. For information about how df is calculated, see the notes later in this topic.
ssreg: The regression sum of squares.
ssresid: The residual sum of squares. For information about how ssreg and ssresid are calculated, see the notes later in this topic.
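For intuition, these statistics can be reproduced by hand for the simple one-variable, const = TRUE case. The Python sketch below uses made-up sample data (an assumption for illustration, not the worksheet data from the examples below) and the standard least-squares formulas:

```python
from math import sqrt

# Hypothetical sample data (an assumption for illustration)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 5.0, 4.0, 5.0]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Least-squares slope m and intercept b (const = TRUE case)
sxx = sum((x - x_bar) ** 2 for x in xs)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
m = sxy / sxx
b = y_bar - m * x_bar

# Sums of squares
ss_resid = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))  # ssresid
ss_total = sum((y - y_bar) ** 2 for y in ys)                    # sstotal
ss_reg = ss_total - ss_resid                                    # ssreg

# Additional statistics for k = 1 independent variable
df = n - 2                                 # degrees of freedom (n - k - 1)
r2 = ss_reg / ss_total                     # coefficient of determination
sey = sqrt(ss_resid / df)                  # standard error of the y estimate
se_m = sey / sqrt(sxx)                     # standard error of the slope
se_b = sey * sqrt(sum(x * x for x in xs) / (n * sxx))  # standard error of b
f_stat = ss_reg / (ss_resid / df)          # F statistic (k = 1)
```

With this data, the formulas give m = 0.6, b = 2.2, r² = 0.6, F = 4.5, and df = 3, which is the same layout of values LINEST(…, TRUE, TRUE) returns in its array.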
Example: Let’s look at some Excel LINEST function examples and explore how to use the LINEST function as a worksheet function in Microsoft Excel:
Example 1:
To enter LINEST as an array formula, follow these steps:
 Select the cell containing the formula and press F2.
 Press CTRL+SHIFT+ENTER.
In this LINEST function in Excel example, we are going to see how the LINEST function works with the data. Enter the data in Excel in two columns captioned X and Y.
Syntax: =LINEST(B2:B14,A2:A14,TRUE,TRUE)
Result: 0.931868132
As mentioned above, we need to press CTRL+SHIFT+ENTER to enter the formula as an array formula. We can now see that the formula is enclosed in curly braces, i.e. { }, showing that LINEST has been evaluated as an array formula.
We can describe a straight line by its slope and y-intercept. To get the slope and intercept of the regression line, we can use the LINEST function; let's see an example with a step-by-step procedure.
Example 2: Simple Linear Regression
Example 3: Multiple Linear Regression
Syntax: =LINEST(E2:E12,A2:D12,TRUE,TRUE)
Result: 234.2371645
Dynamic array formula entered in B14
Note:
 You can describe any straight line with the slope and the y-intercept:
Slope (m): To find the slope of a line, often written as m, take two points on the line, (x1, y1) and (x2, y2); the slope is equal to (y2 – y1)/(x2 – x1).
Y-intercept (b): The y-intercept of a line, often written as b, is the value of y at the point where the line crosses the y-axis.
The equation of a straight line is y = mx + b. Once you know the values of m and b, you can calculate any point on the line by plugging the y-value or x-value into that equation. You can also use the TREND function.
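A minimal Python sketch of the two definitions above (the function names are our own, for illustration):

```python
def slope_from_points(p1, p2):
    """Slope m between two points (x1, y1) and (x2, y2): (y2 - y1)/(x2 - x1)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

def y_intercept(p, m):
    """y-intercept b = y - m*x, using any point (x, y) on the line."""
    x, y = p
    return y - m * x
```

For the line through (1, 3) and (3, 7), these return m = 2.0 and b = 1.0, i.e. y = 2x + 1.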
 When you have only one independent x-variable, you can obtain the slope and y-intercept values directly by using the following formulas:
Slope: =INDEX(LINEST(known_y’s,known_x’s),1)
Y-intercept: =INDEX(LINEST(known_y’s,known_x’s),2)
The accuracy of the line calculated by the LINEST function depends on the degree of scatter in your data. The more linear the data, the more accurate the LINEST model. LINEST uses the method of least squares for determining the best fit for the data. When you have only one independent x-variable, the calculations for m and b are based on the following formulas:
m = Σ((x – x̄)(y – ȳ)) / Σ((x – x̄)²)
b = ȳ – m·x̄
where x̄ and ȳ are the sample means; that is, x̄ = AVERAGE(known_x’s) and ȳ = AVERAGE(known_y’s).

The line-fitting and curve-fitting functions LINEST and LOGEST can calculate the best straight line or exponential curve that fits your data. However, you have to decide which of the two results best fits your data. You can calculate TREND(known_y’s, known_x’s) for a straight line, or GROWTH(known_y’s, known_x’s) for an exponential curve. These functions, without the new_x’s argument, return an array of y-values predicted along that line or curve at your actual data points. You can then compare the predicted values with the actual values. You may want to chart them both for a visual comparison.
 In regression analysis, Excel calculates for each point the squared difference between the y-value estimated for that point and its actual y-value. The sum of these squared differences is called the residual sum of squares, ssresid. Excel then calculates the total sum of squares, sstotal. When the const argument = TRUE or is omitted, the total sum of squares is the sum of the squared differences between the actual y-values and the average of the y-values. When the const argument = FALSE, the total sum of squares is the sum of the squares of the actual y-values (without subtracting the average y-value from each individual y-value). The regression sum of squares, ssreg, can then be found from: ssreg = sstotal – ssresid. The smaller the residual sum of squares is, compared with the total sum of squares, the larger the value of the coefficient of determination, r², which is an indicator of how well the equation resulting from the regression analysis explains the relationship among the variables. The value of r² equals ssreg/sstotal.
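The const = FALSE case changes both the fitted slope and the definition of sstotal. With b forced to 0, the least-squares slope becomes Σxy/Σx², and sstotal is the raw sum of squares of y. A sketch with made-up data (an assumption, not the worksheet data from the examples):

```python
# Hypothetical data
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 5.0, 4.0, 5.0]

# const = FALSE: the line is forced through the origin (b = 0), so the
# slope that minimises sum((y - m*x)^2) is sum(x*y) / sum(x^2).
m = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

ss_resid = sum((y - m * x) ** 2 for x, y in zip(xs, ys))
# With const = FALSE, sstotal is the raw sum of squares of the y-values,
# without subtracting the mean.
ss_total = sum(y * y for y in ys)
ss_reg = ss_total - ss_resid
r2 = ss_reg / ss_total
```

For this data m = 1.2 and r² ≈ 0.921, noticeably higher than the const = TRUE value for the same points, precisely because sstotal is defined differently.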
 In some cases, one or more of the X columns (assume that Y’s and X’s are in columns) may have no additional predictive value in the presence of the other X columns. In other words, eliminating one or more X columns might lead to predicted Y values that are equally accurate. In that case these redundant X columns should be omitted from the regression model. This phenomenon is called “collinearity” because any redundant X column can be expressed as a sum of multiples of the non-redundant X columns. The LINEST function checks for collinearity and removes any redundant X columns from the regression model when it identifies them. Removed X columns can be recognized in LINEST output as having 0 coefficients in addition to 0 se values. If one or more columns are removed as redundant, df is affected, because df depends on the number of X columns actually used for predictive purposes; the calculation of df is described in the next note. If df is changed because redundant X columns are removed, the values of sey and F are also affected. Collinearity should be relatively rare in practice. However, one case where it is more likely to arise is when some X columns contain only 0 and 1 values as indicators of whether a subject in an experiment is or is not a member of a particular group. If const = TRUE or is omitted, the LINEST function effectively inserts an additional X column of all 1 values to model the intercept. If you have a column with a 1 for each subject if male, or 0 if not, and you also have a column with a 1 for each subject if female, or 0 if not, the latter column is redundant because its entries can be obtained by subtracting the entry in the “male indicator” column from the entry in the additional column of all 1 values added by the LINEST function.
 The value of df is calculated as follows when no X columns are removed from the model due to collinearity: if there are k columns of known_x’s and const = TRUE or is omitted, df = n – k – 1; if const = FALSE, df = n – k. In both cases, each X column that was removed due to collinearity increases the value of df by 1.
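The df rule above is easy to encode (a trivial sketch; the function name is our own):

```python
def linest_df(n, k, const=True, removed=0):
    """Degrees of freedom as described above: n data points, k columns of
    known_x's; each X column removed for collinearity adds 1 back."""
    base = n - k - 1 if const else n - k
    return base + removed
```

For instance, with the n = 11 rows and k = 4 X columns of Example 3 above, df = 11 - 4 - 1 = 6 when const is TRUE or omitted.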
 When entering an array constant (such as known_x’s) as an argument, use commas to separate values that are contained in the same row and semicolons to separate rows. Separator characters may be different depending on your regional settings.
 Note that the yvalues predicted by the regression equation may not be valid if they are outside the range of the yvalues you used to determine the equation.
 The underlying algorithm used in the LINEST function is different from the underlying algorithm used in the SLOPE and INTERCEPT functions. The difference between these algorithms can lead to different results when data is undetermined and collinear. For example, if the data points of the known_y’s argument are all 0 and the data points of the known_x’s argument are all 1:
 LINEST returns a value of 0. The algorithm of the LINEST function is designed to return reasonable results for collinear data and, in this case, at least one answer can be found.
 SLOPE and INTERCEPT return a #DIV/0! error. The algorithm of the SLOPE and INTERCEPT functions is designed to look for only one answer, and in this case there can be more than one answer.
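In Python terms, the closed-form slope denominator Σ(x – x̄)² vanishes for this data, which is exactly the #DIV/0! situation; returning 0 when that happens is one way to mimic LINEST's convention (an illustration of the behaviour, not Excel's actual algorithm):

```python
def slope(xs, ys):
    """Closed-form least-squares slope; raises ZeroDivisionError for
    collinear x-values, mirroring SLOPE's #DIV/0! error."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    sxx = sum((x - x_bar) ** 2 for x in xs)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    return sxy / sxx

def slope_linest_style(xs, ys):
    """Guarded variant: returns 0.0 when the denominator vanishes,
    mimicking LINEST's choice of a 'reasonable' answer for this
    degenerate case (an assumption, not Excel's internal algorithm)."""
    try:
        return slope(xs, ys)
    except ZeroDivisionError:
        return 0.0
```

With xs = [1, 1, 1] and ys = [0, 0, 0], slope() raises the division error (SLOPE's behaviour) while slope_linest_style() returns 0.0 (LINEST's result for this case).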
 In addition to using LOGEST to calculate statistics for other regression types, you can use LINEST to calculate a range of other regression types by entering functions of the x and y variables as the x and y series for LINEST. For example, the following formula: =LINEST(yvalues, xvalues^COLUMN($A:$C))
works when you have a single column of yvalues and a single column of xvalues to calculate the cubic (polynomial of order 3) approximation of the form:
y = m1*x + m2*x^2 + m3*x^3 + b
You can adjust this formula to calculate other types of regression, but in some cases it requires the adjustment of the output values and other statistics.
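The cubic formula above amounts to ordinary least squares on the columns x, x², x³ plus LINEST's implicit column of 1s for the intercept. A self-contained sketch via the normal equations, with hypothetical data chosen to lie exactly on y = x³ so the fit should recover m3 ≈ 1 and the other coefficients ≈ 0:

```python
# Hypothetical data lying exactly on y = x^3
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [x ** 3 for x in xs]

# Design matrix with columns 1, x, x^2, x^3
A = [[x ** p for p in range(4)] for x in xs]

# Normal equations: (A^T A) c = A^T y
ata = [[sum(row[i] * row[j] for row in A) for j in range(4)] for i in range(4)]
aty = [sum(row[i] * y for row, y in zip(A, ys)) for i in range(4)]

def solve(mat, vec):
    """Gaussian elimination with partial pivoting."""
    n = len(vec)
    aug = [row[:] + [vec[i]] for i, row in enumerate(mat)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c] for c in range(r + 1, n))) / aug[r][r]
    return x

b, m1, m2, m3 = solve(ata, aty)  # coefficients of 1, x, x^2, x^3
```

Note that LINEST itself orders the coefficients in the returned array from highest power to lowest (m3, m2, m1, b); the unpacking above is just our chosen variable order.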
 The F-test value that is returned by the LINEST function differs from the F-test value that is returned by the FTEST function. LINEST returns the F statistic, whereas FTEST returns the probability.