Suppose I have some data, and I fit a model to that data (non-linear regression). Then I calculate R-squared ($R^2$).
When $R^2$ is negative, what does that mean? Does that mean my model is bad? I know the range of $R^2$ can be $[-1, 1]$. Also, what does it mean when $R^2$ is $0$?
Answer:
$R^2$ can be negative, and here is what that means.
For the people saying that $R^2$ is between 0 and 1, this is not the case. While a negative value for something with the word "squared" in it might sound like it breaks the rules of math, it can happen in an $R^2$ model without an intercept. To understand why, we need to look at how $R^2$ is calculated.
This is a bit long; if you want the answer without understanding it, skip to the end. Otherwise, I have tried to write this in simple words.
First, let's define three variables: $RSS$, $TSS$, and $ESS$.
Calculating RSS:
For every independent variable $x$, we have a dependent variable $y$. We plot a linear line of best fit, which predicts the value of $y$ for each value of $x$. Let's call the values of $y$ that the line predicts $\hat{y}$. The error between what your line predicts and what the actual $y$ value is can be calculated by subtraction. All of these differences are squared and summed up, and this gives the Residual Sum of Squares, $RSS$.
Putting that into an equation, $$RSS = \sum (y - \hat{y})^2$$
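For illustration, here is a minimal Python sketch of the $RSS$ computation (the data values are invented for the example, not from the original answer):

```python
import numpy as np

# Invented example data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Line of best fit (with intercept); np.polyfit returns [slope, intercept]
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

# Residual Sum of Squares: squared gaps between actual and predicted y
rss = np.sum((y - y_hat) ** 2)
print(rss)
```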
Calculating TSS:
We can calculate the average value of $y$, which is called $\bar{y}$. If we plot $\bar{y}$, it is just a horizontal line through the data, because it is constant. What we can do with it, though, is subtract $\bar{y}$ (the average value of $y$) from every actual value of $y$. The result is squared and summed up, and this gives the Total Sum of Squares, $TSS$.
Putting that into an equation, $$TSS = \sum (y - \bar{y})^2$$
Calculating ESS:
The differences between $\hat{y}$ (the values of $y$ predicted by the line) and the average value $\bar{y}$ are squared and added up. This is the Explained Sum of Squares, which equals $\sum (\hat{y} - \bar{y})^2$.
Remember that $TSS = \sum (y - \bar{y})^2$. We can add $+\,\hat{y} - \hat{y}$ inside, because it cancels itself out. So, $TSS = \sum (y - \hat{y} + \hat{y} - \bar{y})^2$. Expanding these brackets, we get $TSS = \sum (y - \hat{y})^2 + 2\sum (y - \hat{y})(\hat{y} - \bar{y}) + \sum (\hat{y} - \bar{y})^2$.
When, and only when, the line is plotted with an intercept, the following is always true: $2\sum (y - \hat{y})(\hat{y} - \bar{y}) = 0$. Therefore, $TSS = RSS + ESS$, which you may notice just means that $\sum (y - \bar{y})^2 = \sum (y - \hat{y})^2 + \sum (\hat{y} - \bar{y})^2$.
Here's the important part:
$$R^2 = \frac{ESS}{TSS} = 1 - \frac{RSS}{TSS}$$ Since both the numerator and denominator are sums of squares, $R^2$ must be positive.
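To make the algebra concrete, here is a small NumPy check (my own sketch with made-up numbers, not part of the original answer) that the cross term vanishes when the fit includes an intercept, so that $TSS = RSS + ESS$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, deg=1)  # fitted WITH an intercept
y_hat = slope * x + intercept
y_bar = y.mean()

rss = np.sum((y - y_hat) ** 2)      # residual sum of squares
ess = np.sum((y_hat - y_bar) ** 2)  # explained sum of squares
tss = np.sum((y - y_bar) ** 2)      # total sum of squares
cross = 2 * np.sum((y - y_hat) * (y_hat - y_bar))

print(cross)           # ~0, up to floating-point error
print(tss, rss + ess)  # equal, so R^2 = ESS/TSS = 1 - RSS/TSS
```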
BUT
When we don't specify an intercept, $2\sum (y - \hat{y})(\hat{y} - \bar{y})$ does not necessarily equal $0$. This means that $TSS = RSS + ESS + 2\sum (y - \hat{y})(\hat{y} - \bar{y})$.
Dividing all terms by $TSS$, we get $1 = \frac{RSS}{TSS} + \frac{ESS + 2\sum (y - \hat{y})(\hat{y} - \bar{y})}{TSS}$.
Finally, we substitute $R^2 = 1 - \frac{RSS}{TSS}$ to get $R^2 = \frac{ESS + 2\sum (y - \hat{y})(\hat{y} - \bar{y})}{TSS}$. This time, the numerator has a term in it which is not a sum of squares, so it can be negative. This would make $R^2$ negative. When would this happen? $2\sum (y - \hat{y})(\hat{y} - \bar{y})$ would be negative when $(y - \hat{y})$ is negative and $(\hat{y} - \bar{y})$ is positive, or vice versa. This occurs when the horizontal line of $\bar{y}$ actually explains the data better than the line of best fit.
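As a quick numeric sanity check (again with made-up data): fitting a no-intercept line to data that sit far above the origin yields a negative $R^2$:

```python
import numpy as np

# Data that a horizontal line fits well, but that sit far from the origin
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([10.1, 9.8, 10.2, 9.9, 10.0])

# Least-squares slope with NO intercept: minimize sum of (y - b*x)^2
b = np.sum(x * y) / np.sum(x * x)
y_hat = b * x

rss = np.sum((y - y_hat) ** 2)
tss = np.sum((y - y.mean()) ** 2)
print(1 - rss / tss)  # strongly negative: the mean line beats this fit
```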
Here's an exaggerated example of when $R^2$ is negative (Source: University of Houston Clear Lake).
Put simply: when $R^2 < 0$, a horizontal line explains the data better than your model.
You also asked about $R^2 = 0$. When $R^2 = 0$, a horizontal line explains the data exactly as well as your model.
I commend you for making it through that. If you found this helpful, you should also upvote fcop's answer here which I had to refer to, because it's been a while.
Neither answer so far is entirely correct, so I will try to give my understanding of R-Squared. I have given a more detailed explanation of this in my blog post here, "What is R-Squared".
Sum Squared Error
The objective of ordinary least squares regression is to get a line which minimizes the sum squared error. The default line with minimum sum squared error is a horizontal line through the mean. Basically, if you can't do better, you can just predict the mean value, and that will give you the minimum sum squared error.
R-Squared is a way of measuring how much better than the mean line you have done, based on sum squared error. The equation for R-Squared is $$R^2 = 1 - \frac{SS_{\text{Regression}}}{SS_{\text{Total}}}$$
Now SS Regression and SS Total are both sums of squared terms, so both of those are always positive. This means we are taking 1 and subtracting a positive value. So the maximum R-Squared value is positive 1, but the minimum is negative infinity. Yes, that is correct: the range of R-Squared is between negative infinity and 1, not -1 and 1, and not 0 and 1.
What Is Sum Squared Error
Sum squared error is taking the error at every point, squaring it, and adding all the squares. For the total error, it uses the horizontal line through the mean, because that gives the lowest sum squared error if you don't have any other information, i.e. if you can't do a regression.
As an equation, it is this: $$SS_{\text{Total}} = \sum (y - \bar{y})^2$$
Now with regression, our objective is to do better than the mean. For instance, a fitted regression line will give a lower sum squared error than using the horizontal line.
The equation for regression sum squared error is this: $$SS_{\text{Regression}} = \sum (y - \hat{y})^2$$
Ideally, you would have zero regression error, i.e. your regression line would perfectly match the data. In that case, you would get an R-Squared value of 1.
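Here is a short sketch (invented data) computing SS Total, SS Regression, and R-Squared exactly as defined above:

```python
import numpy as np

# Invented, nearly linear data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

ss_total = np.sum((y - y.mean()) ** 2)    # error of the mean line
ss_regression = np.sum((y - y_hat) ** 2)  # error of the regression line
r_squared = 1 - ss_regression / ss_total
print(r_squared)  # close to 1: the line does much better than the mean
```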
Negative R Squared
All the information above is pretty standard. Now what about negative R-Squared?
Well, it turns out that there is no reason that your regression equation must give a lower sum squared error than the mean value. It is generally thought that if you can't make a better prediction than the mean value, you would just use the mean value, but there is nothing forcing that to be the case. You could, for instance, predict the median for everything.
In actual practice, with ordinary least squares regression, the most common time to get a negative R-Squared value is when you force the regression line to go through a specific point. This is typically done by setting the intercept, but you can force the regression line through any point.
When you do that, the regression line goes through that point and attempts to get the minimum sum squared error while still going through that point.
By default, the regression equations use the average x and average y as the point that the regression line goes through. But if you force it through a point that is far away from where the regression line would normally be, you can get a sum squared error that is higher than using the horizontal line.
In the image below, both regression lines were forced to have a y-intercept of 0. This caused a negative R-Squared for the data that is far offset from the origin.
For the top set of points, the red ones, the regression line is the best possible regression line that also passes through the origin. It just happens that that regression line is worse than using a horizontal line, and hence gives a negative R-Squared.
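A sketch of that scenario (with made-up offset data standing in for the red points): forcing the least-squares line through the origin yields a negative R-Squared:

```python
import numpy as np

# Made-up points far offset from the origin, like the red set described above
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([50.5, 50.0, 51.0, 50.2, 50.8])

# Best slope for a line forced through the origin (no intercept column)
coef = np.linalg.lstsq(x[:, None], y, rcond=None)[0]
y_hat = coef[0] * x

ss_total = np.sum((y - y.mean()) ** 2)
ss_regression = np.sum((y - y_hat) ** 2)
print(1 - ss_regression / ss_total)  # negative: worse than the horizontal line
```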
Undefined R-Squared
There is one special case no one has mentioned, where you can get an undefined R-Squared: if your data is completely horizontal, then your total sum squared error is zero. As a result, you would have zero divided by zero in the R-Squared equation, which is undefined.
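A minimal sketch of that edge case (constant, made-up data); with NumPy floats the zero-divided-by-zero shows up as `nan`:

```python
import numpy as np

y = np.array([5.0, 5.0, 5.0, 5.0])  # completely horizontal data
y_hat = np.full_like(y, 5.0)        # a prediction that matches it exactly

ss_total = np.sum((y - y.mean()) ** 2)    # 0.0
ss_regression = np.sum((y - y_hat) ** 2)  # 0.0
print(1 - ss_regression / ss_total)       # nan (0/0), with a RuntimeWarning
```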
As the previous commenter notes, r^2 lies in [0, 1], not [-1, +1], so it is impossible for it to be negative. You cannot square a value and get a negative number. Perhaps you are looking at r, the correlation? It can be between [-1, +1], where zero means there is no relationship between the variables, -1 means there is a perfect negative relationship (as one variable increases, the other decreases), and +1 is a perfect positive relationship (both variables go up or down concordantly).
If indeed you are looking at r^2, then, as the previous commenter describes, you are probably seeing the adjusted r^2, not the actual r^2. Consider what the statistic means: I teach behavioral science statistics, and the easiest way that I've learned to teach my students the meaning of r^2 is "% variance explained." So if you have r^2 = 0.5, the model explains 50% of the variation of the dependent (outcome) variable. If you have a negative r^2, it would mean that the model explains a negative % of the outcome variable, which is not an intuitively reasonable suggestion. However, adjusted r^2 takes the sample size (n) and the number of predictors (p) into consideration. A formula for calculating it is here. If you have a very low r^2, then it is reasonably easy to get negative values. Granted, a negative adjusted r^2 does not have any more intuitive meaning than regular r^2, but, as the previous commenter says, it just means your model is very poor, if not just plain useless.
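For reference, the standard adjusted r^2 formula the commenter alludes to is $\bar{R}^2 = 1 - (1 - R^2)\frac{n-1}{n-p-1}$. A tiny sketch (made-up values) showing how a low r^2 turns negative after adjustment:

```python
def adjusted_r_squared(r2, n, p):
    """Adjusted r^2 for n samples and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# A weak model: low r^2, small sample, several predictors
print(adjusted_r_squared(r2=0.10, n=20, p=5))  # about -0.22
```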