This question deals with restricted maximum likelihood (REML) estimation in a particular version of the linear model.
Specifically, consider

$$y = X(\theta)\beta + \varepsilon, \qquad \varepsilon \sim N(0, \Sigma(\theta)),$$

where $X(\theta)$ is an $(n \times p)$ matrix parameterized by $\theta$, as is the covariance matrix $\Sigma(\theta)$, and $\beta$ is an unknown vector of nuisance parameters. Interest is in estimating $\theta$. Estimating the model by maximum likelihood is no problem, but I would like to use REML. It is well known, see e.g. LaMotte, that the restricted likelihood, i.e. the likelihood of $A'y$ where $A$ is any semi-orthogonal matrix such that $A'X(\theta) = 0$, can be written

$$L_R(\theta) \propto |X'X|^{1/2}\,|\Sigma|^{-1/2}\,|X'\Sigma^{-1}X|^{-1/2}\exp\!\left(-\tfrac{1}{2}\,y'\big[\Sigma^{-1} - \Sigma^{-1}X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}\big]y\right)$$

(suppressing the dependence on $\theta$) when $X(\theta)$ is of full column rank.
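As a numerical sanity check (my own sketch, not from any reference; it assumes the standard closed form $L_R \propto |X'X|^{1/2}|\Sigma|^{-1/2}|X'\Sigma^{-1}X|^{-1/2}\exp(\cdot)$ above), one can verify that the likelihood of $A'y$ and the closed-form expression coincide when $X$ has full column rank:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, p = 8, 3
X = rng.standard_normal((n, p))          # full column rank (almost surely)
G = rng.standard_normal((n, n))
Sigma = G @ G.T + n * np.eye(n)          # positive definite covariance
y = rng.standard_normal(n)

# Semi-orthogonal A with A'X = 0, A'A = I: orthonormal basis of C(X)^perp.
A = null_space(X.T)                      # n x (n - p)

# Direct restricted log-likelihood: A'y ~ N(0, A' Sigma A).
AS = A.T @ Sigma @ A
z = A.T @ y
ll_direct = -0.5 * (np.linalg.slogdet(AS)[1]
                    + z @ np.linalg.solve(AS, z)
                    + (n - p) * np.log(2 * np.pi))

# Closed-form version, using the determinant identity
# |A' Sigma A| = |X'X|^{-1} |Sigma| |X' Sigma^{-1} X|
# and A (A'SigmaA)^{-1} A' = Sigma^{-1} - Sigma^{-1}X(X'Sigma^{-1}X)^{-1}X'Sigma^{-1}.
Si = np.linalg.inv(Sigma)
XSiX = X.T @ Si @ X
Pi = Si - Si @ X @ np.linalg.solve(XSiX, X.T @ Si)
ll_closed = -0.5 * (-np.linalg.slogdet(X.T @ X)[1]
                    + np.linalg.slogdet(Sigma)[1]
                    + np.linalg.slogdet(XSiX)[1]
                    + y @ Pi @ y
                    + (n - p) * np.log(2 * np.pi))

print(np.isclose(ll_direct, ll_closed))  # the two expressions agree
```

Both determinant identities used in the second computation require $X'X$ and $X'\Sigma^{-1}X$ to be invertible, which is exactly where full column rank enters.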
My problem is that for some perfectly reasonable, and scientifically interesting, $\theta$, the matrix $X(\theta)$ is not of full column rank. All the derivations I have seen of the restricted likelihood above make use of determinant equalities that are not applicable when $\operatorname{rank}(X(\theta)) < p$, i.e. they assume full column rank of $X(\theta)$. This means that the above restricted likelihood is correct only on part of the parameter space in my setting, and thus is not what I want to optimize.
Question: Are there more general restricted likelihoods, derived in the statistical literature or elsewhere, without the assumption that $X(\theta)$ be of full column rank? If so, what do they look like?
Some observations:
- Deriving the exponential part is no problem for any $\theta$, and it may be written in terms of the Moore-Penrose inverse $(X'\Sigma^{-1}X)^{+}$ in place of $(X'\Sigma^{-1}X)^{-1}$ above
- The columns of $A$ are an (any) orthonormal basis for the orthogonal complement of the column space of $X(\theta)$
- For any given $\theta$, the likelihood of $A'y$ can easily be written down, but of course the number of basis vectors, i.e. columns, in $A$ depends on the column rank of $X(\theta)$
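To illustrate the last two observations numerically (again my own sketch, under the same generic notation): with a rank-deficient $X$, an orthonormal basis $A$ for the orthogonal complement of $\mathcal{C}(X)$ still exists and the likelihood of $A'y$ is perfectly well defined; only the number of columns of $A$ changes, while the closed-form expression degenerates because $X'X$ and $X'\Sigma^{-1}X$ become singular:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, p = 8, 3
W = rng.standard_normal((n, 2))
X = np.column_stack([W, W[:, 0] + W[:, 1]])      # p = 3 columns, rank 2
G = rng.standard_normal((n, n))
Sigma = G @ G.T + n * np.eye(n)
y = rng.standard_normal(n)

r = np.linalg.matrix_rank(X)                     # column rank of X (here 2 < p)
A = null_space(X.T)                              # orthonormal basis of C(X)^perp
print(A.shape)                                   # (n, n - r), not (n, n - p)

# The likelihood of A'y is still perfectly well defined...
AS = A.T @ Sigma @ A
z = A.T @ y
ll_direct = -0.5 * (np.linalg.slogdet(AS)[1]
                    + z @ np.linalg.solve(AS, z)
                    + (n - r) * np.log(2 * np.pi))

# ...while the determinants in the closed-form expression vanish,
# so the usual full-rank formula cannot be evaluated.
Si = np.linalg.inv(Sigma)
print(np.isclose(np.linalg.det(X.T @ X), 0.0),
      np.isclose(np.linalg.det(X.T @ Si @ X), 0.0))
```

This is exactly why a single expression valid across the whole parameter space, where the rank of $X(\theta)$ may jump, is not obvious.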
If anyone interested in this question believes the exact parameterization of $X(\theta)$ would help, let me know and I'll write it down. At this point, though, I'm mostly interested in a REML for a general $X(\theta)$ of the correct dimensions.
A more detailed description of the model follows here. Let $\{y_t\}$ be an $m$-dimensional first order vector autoregression [VAR(1)], $y_t = \mu + B y_{t-1} + \varepsilon_t$, where $\varepsilon_t \sim N(0, \Omega)$ independently over $t$. Suppose the process is started in some fixed value $y_0$ at time $t = 0$.
Define $y = (y_1', \dots, y_T')'$ and $\varepsilon = (\varepsilon_1', \dots, \varepsilon_T')'$. The model may be written in the linear model form $y = X(\theta)\beta + u$ using the following definitions and notation:

$$\beta = (\mu', y_0')', \qquad X(\theta) = C(B)\big[\iota \otimes I_m,\; (e_1 \otimes I_m)B\big], \qquad u = C(B)\varepsilon,$$

where $C(B)$ is the lower block-triangular matrix whose $(t, s)$ block is $B^{t-s}$ for $t \geq s$, $\iota$ denotes a $T$-dimensional vector of ones, and $e_1$ the first standard basis vector of $\mathbb{R}^T$; consequently $\Sigma(\theta) = C(B)(I_T \otimes \Omega)C(B)'$.
Denote $\theta = (B, \Omega)$. Notice that if $B$ is not of full rank, then $X(\theta)$ is not of full column rank. This includes, for example, cases where one of the components of $y_t$ does not depend on the past.
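The rank claim can be checked directly. The sketch below assumes the stacked form $X(\theta) = C(B)[\iota \otimes I_m,\ (e_1 \otimes I_m)B]$ described above (the helper name `stacked_X` is mine); since $C(B)$ is invertible, the column rank of $X(\theta)$ is $m + \operatorname{rank}(B)$, which drops below $2m$ exactly when $B$ is rank deficient:

```python
import numpy as np

def stacked_X(Bmat, T):
    """X(theta) = C(B) [iota kron I_m, (e1 kron I_m) B] for the stacked VAR(1)."""
    m = Bmat.shape[0]
    # C(B): (T*m) x (T*m) lower block-triangular matrix with (t, s) block B^(t-s)
    C = np.zeros((T * m, T * m))
    for t in range(T):
        blk = np.eye(m)
        for s in range(t, -1, -1):
            C[t*m:(t+1)*m, s*m:(s+1)*m] = blk
            blk = blk @ Bmat
    iota = np.ones((T, 1))
    e1 = np.zeros((T, 1)); e1[0, 0] = 1.0
    left = np.kron(iota, np.eye(m))           # columns multiplying mu
    right = np.kron(e1, np.eye(m)) @ Bmat     # columns multiplying y_0
    return C @ np.column_stack([left, right])

m, T = 3, 6
B_full = np.array([[0.5, 0.1, 0.0],
                   [0.0, 0.4, 0.2],
                   [0.1, 0.0, 0.3]])          # invertible: rank 3
B_def = B_full.copy()
B_def[2, :] = 0.0                             # third component ignores the past

print(np.linalg.matrix_rank(stacked_X(B_full, T)))  # 2m = 6: full column rank
print(np.linalg.matrix_rank(stacked_X(B_def, T)))   # m + rank(B) = 5: deficient
```

So zeroing a row of $B$, a perfectly sensible hypothesis scientifically, is enough to leave the full-rank REML formula inapplicable.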
The idea of estimating VARs using REML is well known in, for example, the predictive regressions literature (see, e.g., Phillips and Chen and the references therein).
It may be worthwhile to clarify that the matrix $X(\theta)$ is not a design matrix in the usual sense; it simply falls out of the model, and unless there is a priori knowledge about $\theta$ there is, as far as I can tell, no way to reparameterize it to be of full rank.
I have posted a question on math.stackexchange that is related to this one in the sense that an answer to the math question may help in deriving a likelihood that would answer this question.