The above answer using stochastic equicontinuity works very well, but here I show that the observed information matrix is a strongly consistent estimator of the information matrix, i.e. that $N^{-1}J_N(\hat{\theta}_N(Y)) \xrightarrow{a.s.} I(\theta_0)$, provided we plug in a strongly consistent sequence of estimators. I hope all the details are correct.
Let $I_N = \{1, 2, \ldots, N\}$ be the index set we use. To make the dependence of $J(\theta)$ on the random vector $\tilde{Y}$ explicit, we temporarily adopt the notation $J(\tilde{Y}, \theta) := J(\theta)$. We also work element-wise, with $(J(\tilde{Y},\theta))_{rs}$ and $(J_N(\theta))_{rs} = \sum_{i=1}^N (J(Y_i,\theta))_{rs}$, $r,s = 1,\ldots,k$, for this argument. The function $(J(\cdot,\theta))_{rs}$ is real-valued on the set $\mathbb{R}^n \times \Theta^\circ$, and we will suppose that it is Lebesgue measurable for every $\theta \in \Theta^\circ$. A uniform (strong) law of large numbers defines a set of conditions under which
$$\sup_{\theta \in \Theta} \left| N^{-1}(J_N(\theta))_{rs} - E_\theta\!\left[(J(Y_1,\theta))_{rs}\right] \right| = \sup_{\theta \in \Theta} \left| N^{-1}\sum_{i=1}^N (J(Y_i,\theta))_{rs} - (I(\theta))_{rs} \right| \xrightarrow{a.s.} 0. \tag{1}$$
The conditions that must be satisfied in order that (1) holds are: (a) $\Theta^\circ$ is a compact set; (b) $(J(\tilde{Y},\theta))_{rs}$ is a continuous function on $\Theta^\circ$ with probability 1; (c) for each $\theta \in \Theta^\circ$, $(J(\tilde{Y},\theta))_{rs}$ is dominated by a function $h(\tilde{Y})$, i.e. $|(J(\tilde{Y},\theta))_{rs}| < h(\tilde{Y})$; and (d) for each $\theta \in \Theta^\circ$, $E_\theta[h(\tilde{Y})] < \infty$. These conditions come from Jennrich (1969, Theorem 2).
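As a quick illustration of how (a)-(d) can be checked in practice (this example is my own and not part of the argument above), consider a scalar Poisson model with parameter set $\Theta^\circ = [a, b]$, $0 < a < b < \infty$, assumed compact. Then
$$\ell(\theta; y) = y\log\theta - \theta - \log y!, \qquad J(y,\theta) = -\frac{\partial^2 \ell(\theta; y)}{\partial\theta^2} = \frac{y}{\theta^2},$$
which is continuous in $\theta$ on $[a,b]$ for every $y$, is dominated by $h(y) = y/a^2$, and satisfies $E_\theta[h(Y_1)] = \theta/a^2 < \infty$, so conditions (b)-(d) hold on this compact set.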
Now for any $y_i \in \mathbb{R}^n$, $i \in I_N$ and $\theta' \in S \subseteq \Theta^\circ$, the following inequality obviously holds
$$\left| N^{-1}\sum_{i=1}^N (J(y_i,\theta'))_{rs} - (I(\theta'))_{rs} \right| \le \sup_{\theta \in S} \left| N^{-1}\sum_{i=1}^N (J(y_i,\theta))_{rs} - (I(\theta))_{rs} \right|. \tag{2}$$
Suppose that $\{\hat{\theta}_N(Y)\}$ is a strongly consistent sequence of estimators for $\theta_0$, and let $\Theta_{N_1} = B_{\delta_{N_1}}(\theta_0) \subseteq K \subseteq \Theta^\circ$ be an open ball in $\mathbb{R}^k$ with radius $\delta_{N_1} \to 0$ as $N_1 \to \infty$, where $K$ is compact. Then, since $\hat{\theta}_N(Y) \in \Theta_{N_1}$ for $N$ sufficiently large, we have $P[\lim_N \{\hat{\theta}_N(Y) \in \Theta_{N_1}\}] = 1$. Together with (2) this implies
$$P\left[\lim_{N\to\infty}\left\{ \left| N^{-1}\sum_{i=1}^N (J(Y_i,\hat{\theta}_N(Y)))_{rs} - (I(\hat{\theta}_N(Y)))_{rs} \right| \le \sup_{\theta \in \Theta_{N_1}} \left| N^{-1}\sum_{i=1}^N (J(Y_i,\theta))_{rs} - (I(\theta))_{rs} \right| \right\}\right] = 1. \tag{3}$$
Now $\Theta_{N_1} \subseteq \Theta^\circ$ implies that conditions (a)-(d) of Jennrich (1969, Theorem 2) apply to $\Theta_{N_1}$. Thus (1) and (3) imply
$$P\left[\lim_{N\to\infty}\left\{ \left| N^{-1}\sum_{i=1}^N (J(Y_i,\hat{\theta}_N(Y)))_{rs} - (I(\hat{\theta}_N(Y)))_{rs} \right| = 0 \right\}\right] = 1. \tag{4}$$
Since $(I(\hat{\theta}_N(Y)))_{rs} \xrightarrow{a.s.} (I(\theta_0))_{rs}$ (by the strong consistency of $\hat{\theta}_N(Y)$ and the continuity of $I(\cdot)$), (4) implies that $N^{-1}(J_N(\hat{\theta}_N(Y)))_{rs} \xrightarrow{a.s.} (I(\theta_0))_{rs}$. Note that (3) holds however small $\Theta_{N_1}$ is, and so the result in (4) is independent of the choice of $N_1$, other than that $N_1$ must be chosen such that $\Theta_{N_1} \subseteq \Theta^\circ$. This result holds for all $r,s = 1,\ldots,k$, and so in terms of matrices we have $N^{-1}J_N(\hat{\theta}_N(Y)) \xrightarrow{a.s.} I(\theta_0)$.
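To make the conclusion concrete, here is a minimal numerical sketch (an illustration under assumed choices, not part of the proof) for the same scalar Poisson model as above: the per-observation observed information is $y/\theta^2$, the MLE is the sample mean, and $I(\theta_0) = 1/\theta_0$; the parameter value, seed, and sample sizes are arbitrary.

```python
# Numerical sketch (illustration only): for an assumed Poisson(theta_0) model,
# N^-1 J_N(theta_hat_N) should approach the Fisher information I(theta_0) = 1/theta_0
# as N grows, when theta_hat_N is the (strongly consistent) MLE, i.e. the sample mean.
import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.5                       # assumed true parameter
I_theta0 = 1.0 / theta0            # Fisher information of Poisson(theta0)

for N in (10, 100, 1_000, 10_000, 100_000):
    y = rng.poisson(theta0, size=N)
    theta_hat = y.mean()                      # MLE of theta
    J_N = np.sum(y / theta_hat**2)            # observed information J_N(theta_hat)
    print(f"N={N:>7}: N^-1 J_N(theta_hat) = {J_N / N:.5f}, I(theta0) = {I_theta0:.5f}")
```

In this particular model $N^{-1}J_N(\hat{\theta}_N) = 1/\bar{y}$, so the printed values should settle near $1/\theta_0 = 0.4$ as $N$ increases, in line with the almost-sure convergence derived above.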