What makes an error surface convex? Is it determined by the covariance matrix or by the Hessian?



I am currently learning about least-squares (and other) estimation for regression, and from reading some of the adaptive-algorithm literature I keep running into the phrase "... and the error surface is convex ...", with no depth anywhere on why it is convex to begin with.

...so what is it, exactly, that makes it convex?

This repeated omission bothers me because I want to be able to design my own adaptive algorithms, with my own cost functions. But if I cannot tell whether my cost function yields a convex error surface, I will not get very far applying something like gradient descent, since there is no guarantee of a global minimum. Maybe I want to get creative; maybe I do not want to use least squares as my error criterion, for example.

Digging further (and this is where my questions begin), I learned that in order to determine whether you have a convex error surface, you must make sure that your Hessian matrix is positive semi-definite. For symmetric matrices this test is simple: just check that all the eigenvalues of the Hessian are non-negative. (If your matrix is not symmetric, you can make it symmetric by adding it to its own transpose and performing the same eigenvalue test, by virtue of the Gramian, but that is not important here).
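As a concrete illustration of that test, here is a minimal numpy sketch (the Hessian below is just a made-up example matrix, not derived from any particular cost function):

```python
import numpy as np

def is_convex_psd(hessian, tol=1e-10):
    """Convexity test: a twice-differentiable cost function is convex iff its
    Hessian is positive semi-definite everywhere, i.e. all eigenvalues >= 0.
    A non-symmetric Hessian is first symmetrized, which changes neither the
    quadratic form nor the outcome of the test."""
    h_sym = (hessian + hessian.T) / 2.0
    eigenvalues = np.linalg.eigvalsh(h_sym)  # eigenvalues of a symmetric matrix
    return bool(np.all(eigenvalues >= -tol))

# Made-up 2x2 Hessian for illustration: eigenvalues are 1 and 3, so it is PSD.
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(is_convex_psd(H))  # True
```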

What is the Hessian matrix? The Hessian matrix codifies all the possible combinations of the partials of the cost function. How many partials are there? As many as the number of features in your feature vector. How do you compute the partials? Take the partial derivatives "by hand" from the original cost function.

So that is exactly what I did: suppose we have an m x n data matrix, denoted by the matrix X, where m denotes the number of examples and n denotes the number of features per example. (This will also be the number of partials). We could say that we have m temporal samples and n spatial samples from sensors, but the physical application is not too important here.

Furthermore, we also have a vector y of size m x 1. (This is the "label" vector, or the "answer" corresponding to every row of X). For simplicity, I have assumed m = n = 2 for this particular example, so 2 "examples" and 2 "features".

Now suppose that you want to ascertain the best "line" or polynomial fit here. That is, you project your input data features against your polynomial coefficient vector θ such that your cost function is:

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\big[\theta_0 x_0[i] + \theta_1 x_1[i] - y[i]\big]^2$$

Now let us take the first partial derivative with respect to θ0 (feature 0):

$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\big[\theta_0 x_0[i] + \theta_1 x_1[i] - y[i]\big]\, x_0[i]$$

$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\big[\theta_0 x_0^2[i] + \theta_1 x_1[i]\, x_0[i] - y[i]\, x_0[i]\big]$$

Now, let us compute all the second partials:

$$\frac{\partial^2 J(\theta)}{\partial \theta_0^2} = \frac{1}{m}\sum_{i=1}^{m} x_0^2[i]$$

$$\frac{\partial^2 J(\theta)}{\partial \theta_0\, \partial \theta_1} = \frac{1}{m}\sum_{i=1}^{m} x_0[i]\, x_1[i]$$

$$\frac{\partial^2 J(\theta)}{\partial \theta_1\, \partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m} x_1[i]\, x_0[i]$$

$$\frac{\partial^2 J(\theta)}{\partial \theta_1^2} = \frac{1}{m}\sum_{i=1}^{m} x_1^2[i]$$

We know that the Hessian matrix is nothing but:

$$H(J(\theta)) = \begin{bmatrix} \frac{\partial^2 J(\theta)}{\partial \theta_0^2} & \frac{\partial^2 J(\theta)}{\partial \theta_0\, \partial \theta_1} \\ \frac{\partial^2 J(\theta)}{\partial \theta_1\, \partial \theta_0} & \frac{\partial^2 J(\theta)}{\partial \theta_1^2} \end{bmatrix}$$

$$H(J(\theta)) = \begin{bmatrix} \frac{1}{m}\sum_{i=1}^{m} x_0^2[i] & \frac{1}{m}\sum_{i=1}^{m} x_0[i]\, x_1[i] \\ \frac{1}{m}\sum_{i=1}^{m} x_1[i]\, x_0[i] & \frac{1}{m}\sum_{i=1}^{m} x_1^2[i] \end{bmatrix}$$

Now, based on how I have constructed the data matrix X, (my 'features' go by columns, and my examples go by rows), the Hessian appears to be:

$$H(J(\theta)) = \frac{1}{m} X^T X = \Sigma$$

...which is nothing but the sample covariance matrix!
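For what it's worth, this identity is easy to sanity-check numerically; the following sketch uses a small made-up data matrix (numbers are purely illustrative):

```python
import numpy as np

# Made-up 2x2 example: rows are examples, columns are features.
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
m = X.shape[0]

# Hessian entries exactly as derived by hand above: (1/m) * sum_i x_j[i] * x_k[i]
H_manual = np.array([[np.sum(X[:, 0] * X[:, 0]), np.sum(X[:, 0] * X[:, 1])],
                     [np.sum(X[:, 1] * X[:, 0]), np.sum(X[:, 1] * X[:, 1])]]) / m

# The same thing written as a matrix product.
H_matrix = (X.T @ X) / m

print(np.allclose(H_manual, H_matrix))   # True
print(np.linalg.eigvalsh(H_matrix))      # both eigenvalues are non-negative
```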

So I am not quite sure how to interpret this - or I should say, I am not quite sure how far I should generalize here. But I think I can say that:

  • Always true:

    • The Hessian matrix always controls whether or not your error/cost surface is convex.
    • If your Hessian matrix is pos-semi-def, you are convex (and can happily use algorithms like gradient descent to converge to the optimal solution).
  • True for LSE only:

    • The Hessian matrix for the LSE cost criterion is nothing but the original covariance matrix. (!).
    • To me this means that, if I use the LSE criterion, the data itself determines whether or not I have a convex surface? ... Which would then mean that the eigenvectors of my covariance matrix somehow have the capability to 'shape' the cost surface? Is this always true? Or did it just work out for the LSE criterion? It just doesn't sit right with me that the convexity of an error surface should be dependent on the data. (A small numerical sketch of this data dependence follows this list.)
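To make the data dependence concrete, here is a sketch with made-up numbers: when two features are perfectly collinear, the LSE Hessian is positive semi-definite but not positive definite, so the error surface is still convex, just flat along one direction (no unique minimum):

```python
import numpy as np

m = 4
x0 = np.array([1.0, 2.0, 3.0, 4.0])
x1 = 2.0 * x0                       # second feature is an exact multiple of the first
X = np.column_stack([x0, x1])

H = (X.T @ X) / m                   # LSE Hessian = sample covariance of the features
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)                  # one eigenvalue is (numerically) zero

# A zero eigenvalue means H is PSD but not PD: the surface is a convex "valley"
# with a flat floor, so gradient descent converges, but not to a unique theta.
```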

So, putting it back in the context of the original question: how does one determine whether or not an error surface (based on some cost function you select) is convex? Is this determination based on the data, or on the Hessian?

Thanks

TLDR: How, exactly, and practically do I go about determining whether my cost-function and/or data-set yield a convex or non-convex error surface?

Answers:



You can think of linear least squares in a single dimension. The cost function is something like a². The first derivative (Jacobian) is then 2a, hence linear in a. The second derivative (Hessian) is the constant 2.

Since the second derivative is positive, you are dealing with a convex cost function. This is equivalent to a positive definite Hessian matrix in multivariate calculus.
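Written out in the question's notation for a single feature, the same observation reads: the second derivative of the least-squares cost is a sum of squares, so it can never be negative:

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\big(\theta\, x[i] - y[i]\big)^2, \qquad \frac{dJ}{d\theta} = \frac{1}{m}\sum_{i=1}^{m}\big(\theta\, x[i] - y[i]\big)\, x[i], \qquad \frac{d^2 J}{d\theta^2} = \frac{1}{m}\sum_{i=1}^{m} x^2[i] \;\geq\; 0.$$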

You deal with just two variables (θ0, θ1), so the Hessian is particularly simple.

In practice, however, there are often many variables involved, so it is impractical to build and inspect the Hessian.

A more efficient method is to work directly on the Jacobian matrix J of the least-squares problem:

$$Jx = b$$

J can be rank-deficient, singular, or near-singular. In such cases, the quadratic surface of the cost function is almost flat and/or wildly stretched in some direction. You can also find that your matrix is theoretically solvable, but the solution is numerically unstable. Preconditioning can be used to cope with such cases.

Some algorithms simply run a Cholesky decomposition of the normal-equations matrix J^T J. If the factorization fails, it means that J^T J is not positive definite, i.e. J is rank-deficient (or ill-conditioned).
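A minimal numpy sketch of that check, applied to the normal-equations matrix J^T J (the matrix below is made up for illustration):

```python
import numpy as np

# Made-up 3x2 Jacobian / data matrix.
J = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

A = J.T @ J  # normal-equations matrix; positive definite iff J has full column rank
try:
    np.linalg.cholesky(A)  # raises LinAlgError if A is not positive definite
    print("Cholesky succeeded: full column rank, the quadratic surface is strictly convex")
except np.linalg.LinAlgError:
    print("Cholesky failed: J is rank-deficient (or numerically ill-conditioned)")
```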

Numerically more stable, but more expensive, is a QR decomposition, which yields a unique least-squares solution only if J has full column rank.

Finally, the state-of-the-art method is the Singular Value Decomposition (SVD), which is the most expensive, can be applied to any matrix, reveals the numerical rank of J, and allows you to treat rank-deficient cases separately.
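A sketch of the SVD-based rank check (the tolerance follows the common eps * max(m, n) * sigma_max convention; the matrix is deliberately made rank-deficient):

```python
import numpy as np

# Made-up 3x2 matrix whose second column is an exact multiple of the first.
J = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])

singular_values = np.linalg.svd(J, compute_uv=False)
tol = np.finfo(float).eps * max(J.shape) * singular_values[0]
numerical_rank = int(np.sum(singular_values > tol))

print(singular_values)                              # the second singular value is ~0
print("numerical rank:", numerical_rank, "of", J.shape[1])
# A numerical rank below the number of columns means the least-squares problem
# has no unique solution; the rank-deficient directions must be handled separately.
```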

I wrote an article about linear and non-linear least squares solutions that covers these topics in detail:

Linear and Nonlinear Least-Squares with Math.NET

There are also references to great books that deal with advanced topics related to least-squares (covariance in parameters/data points, preconditioning, scaling, orthogonal distance regression - total least-squares, determining precision and accuracy of the least-squares estimator etc.).

I have made a sample project for the article, which is open source:

LeastSquaresDemo - binary

LeastSquaresDemo - source (C#)


Thanks Libor: 1) Tangential, but Cholesky is like a matrix square root, it seems, yes? 2) Not sure I understand your point about how the Hessian tells you about convexity at each point on the error surface - are you saying in general? Because from the LSE derivation above, the Hessian does not depend on the θ parameters at all, only on the data. Perhaps you mean in general? 3) Finally, in total, how does one then determine if an error surface is convex - just stick to making sure the Hessian is SPD? But you mentioned that it might depend on θ... so how can one know for sure? Thanks!
Spacey

2) Yes, I mean in general. In linear least squares, the whole error surface has a constant Hessian: the second derivative of a quadratic is constant, and the same applies to the Hessian. 3) It depends on the conditioning of your data matrix. If the Hessian is SPD, there is a single closed-form solution and the error surface is convex in all directions. Otherwise the data matrix is ill-conditioned or singular. I have never used the Hessian to probe this; I rather inspect the singular values of the data matrix or check whether it has a Cholesky decomposition. Both ways will tell you whether there is a solution.
Libor

Libor - 1) If you can, please add how you have used the SVD of the X data matrix, or how you have used the Cholesky decomposition, to check that you have a single closed-form solution; they seem to be very useful, that is a good point, and I would be curious to learn how to use them. 2) Last thing, just to make sure I understand you about the Hessian: so the Hessian is, in general, a function of θ and/or X. If it is SPD, we have a convex surface. (If the Hessian has θ in it, however, we would have to evaluate it everywhere, it seems). Thanks again.
Spacey

Mohammad: 1) I have rewritten the answer and added links to my article about least squares (there may be some errors, I have not published it officially yet), including a working sample project. I hope it will help you understand the problem more deeply... 2) In linear least squares, the Hessian is constant and depends on the data points only. In general it depends on the model parameters as well, but that is only the case for non-linear least squares.
Libor