On convergence in probability


12

Let $\{X_n\}_{n \geq 1}$ be a sequence of random variables s.t. $X_n \to a$ in probability, where $a > 0$ is a fixed constant. I am trying to show the following:

$$\sqrt{X_n} \to \sqrt{a}$$

$$\frac{a}{X_n} \to 1$$

both in probability. I am here to check whether my logic is sound. Here is my work.

Attempt

For the first part, we have

$$|\sqrt{X_n} - \sqrt{a}| < \epsilon \implies |X_n - a| = |\sqrt{X_n} - \sqrt{a}|\,|\sqrt{X_n} + \sqrt{a}| = |\sqrt{X_n} - \sqrt{a}|\,\big|(\sqrt{X_n} - \sqrt{a}) + 2\sqrt{a}\big| \leq \epsilon\,|\sqrt{X_n} - \sqrt{a}| + 2\epsilon\sqrt{a} < \epsilon^2 + 2\epsilon\sqrt{a}$$

Notice that

$$\epsilon^2 + 2\epsilon\sqrt{a} > \epsilon\sqrt{a}$$

Then,

$$P\big(|\sqrt{X_n} - \sqrt{a}| < \epsilon\big) \geq P\big(|X_n - a| < \epsilon\sqrt{a}\big) \to 1 \text{ as } n \to \infty$$

$$\implies \sqrt{X_n} \to \sqrt{a} \text{ in probability}$$
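As a numerical sanity check of the algebraic bound above, here is a quick brute-force sketch (the values of $a$ and $\epsilon$ are arbitrary illustrative choices):

```python
# Brute-force check of the bound: if |sqrt(x) - sqrt(a)| < eps, then
# |x - a| < eps^2 + 2*eps*sqrt(a).  The values of a and eps are arbitrary.
a, eps = 2.0, 0.1
bound = eps ** 2 + 2 * eps * a ** 0.5
ok = True
for i in range(1, 5000):
    x = i / 1000.0  # grid of positive candidate values
    if abs(x ** 0.5 - a ** 0.5) < eps and abs(x - a) >= bound:
        ok = False
print(ok)
```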

For the second part,

$$\left|\frac{a}{X_n} - 1\right| = \left|\frac{X_n - a}{X_n}\right| < \epsilon \iff |X_n - a| < \epsilon\,|X_n|$$

Here, $X_n \to a$ as $n \to \infty$, so $X_n$ is a bounded sequence; that is, there exists a real number $M < \infty$ s.t. $|X_n| \leq M$. Therefore,

$$|X_n - a| < \epsilon\,|X_n| \leq \epsilon M$$

$$P\left(\left|\frac{a}{X_n} - 1\right| > \epsilon\right) = P\big(|X_n - a| > \epsilon\,|X_n|\big) \leq P\big(|X_n - a| > \epsilon M\big) \to 0 \text{ as } n \to \infty$$

I am fairly confident about the first one, but quite unsure about the second. Was my logic sound?
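To reassure myself numerically (this checks the claims, not the proof logic), here is a hypothetical Monte Carlo sketch using the illustrative sequence $X_n = a + N(0, 1/n)$, which converges to $a$ in probability:

```python
import random

# Monte Carlo sketch: X_n = a + N(0, 1/n) is a hypothetical sequence that
# converges to a in probability; estimate the two tail probabilities.
random.seed(0)
a, eps, trials = 4.0, 0.1, 20000

def tail_probs(n):
    bad_sqrt = bad_ratio = 0
    for _ in range(trials):
        x = a + random.gauss(0, 1 / n ** 0.5)
        if abs(x ** 0.5 - a ** 0.5) >= eps:
            bad_sqrt += 1
        if abs(a / x - 1) >= eps:
            bad_ratio += 1
    return bad_sqrt / trials, bad_ratio / trials

p_sqrt_small_n, p_ratio_small_n = tail_probs(5)
p_sqrt_big_n, p_ratio_big_n = tail_probs(500)
print(p_sqrt_small_n, p_sqrt_big_n)    # P(|sqrt(Xn)-sqrt(a)| >= eps) shrinks
print(p_ratio_small_n, p_ratio_big_n)  # P(|a/Xn - 1| >= eps) shrinks
```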


6
Consider the sequence where $\Pr(X_n = a) = 1 - 1/n$ and $\Pr(X_n = n) = 1/n$. Since $1 - 1/n \to 1$, this sequence converges to $a$ in probability, but $\sup(X_n) = \max(a, n)$, so the sequence is not bounded.
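This counterexample can be tabulated exactly (a small illustrative computation; the particular $a$ and $\epsilon$ are arbitrary):

```python
# Exact tabulation of the counterexample: P(X_n = a) = 1 - 1/n and
# P(X_n = n) = 1/n.  Illustrative constants a and eps.
a, eps = 2.0, 0.5
ns = (10, 100, 10000)
# For each n here, n is farther than eps from a, so P(|X_n - a| > eps) = 1/n.
tails = [1.0 / n for n in ns]
sups = [float(max(a, n)) for n in ns]
print(tails)  # tends to 0: convergence in probability
print(sups)   # grows without bound: the sequence is not bounded
```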
whuber

2
The continuous mapping theorem?
Christoph Hanck

Answers:


13

The details of the proofs matter less than developing appropriate intuition and techniques. This answer focuses on an approach designed to help do that. It consists of three steps: a "setup" in which the assumption and definitions are introduced; the "body" (or a "crucial step") in which the assumptions are somehow related to what is to be proven, and the "denouement" in which the proof is completed. As in many cases with probability proofs, the crucial step here is a matter of working with numbers (the possible values of random variables) rather than dealing with the much more complicated random variables themselves.


Convergence in probability of a sequence of random variables $Y_n$ to a constant $a$ means that no matter what neighborhood of $0$ you pick, eventually each $Y_n - a$ lies in this neighborhood with a probability that is arbitrarily close to 1. (I won't spell out how to translate "eventually" and "arbitrarily close" into formal mathematics--anybody interested in this post already knows that.)

Recall that a neighborhood of 0 is any set of real numbers containing an open set of which 0 is a member.

The setup is routine. Consider the sequence $Y_n = a/X_n$ and let $O$ be any neighborhood of $0$. The objective is to show that eventually $Y_n - 1$ will lie in $O$ with arbitrarily high probability. Since $O$ is a neighborhood, there must exist an $\epsilon > 0$ for which the open interval $(-\epsilon, \epsilon) \subset O$. We may shrink $\epsilon$, if necessary, to ensure $\epsilon < 1$, too. This will assure that subsequent manipulations are legitimate and useful.

The crucial step will be to connect $Y_n$ with $X_n$. That requires no knowledge of random variables at all. The algebra of numeric inequalities (exploiting the assumption $a > 0$) tells us that the set of numbers $\{Y_n(\omega) \mid Y_n(\omega) - 1 \in (-\epsilon, \epsilon)\}$, for any $\epsilon > 0$, is in one-to-one correspondence with the set of all $X_n(\omega)$ for which

$$\frac{a}{1+\epsilon} < X_n(\omega) < \frac{a}{1-\epsilon}.$$

Equivalently,

$$X_n(\omega) - a \in \left(-\frac{a\epsilon}{1+\epsilon},\; \frac{a\epsilon}{1-\epsilon}\right) = U.$$

Since $a \neq 0$, the right hand side $U$ indeed is a neighborhood of $0$. (This clearly shows what breaks down when $a = 0$.)

We are ready for the denouement.

Because $X_n \to a$ in probability, we know that eventually each $X_n - a$ will lie within $U$ with arbitrarily high probability. Equivalently, $Y_n - 1$ will eventually lie within $(-\epsilon, \epsilon) \subset O$ with arbitrarily high probability, QED.
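As a sanity check, the one-to-one correspondence in the crucial step can be verified by brute force on a grid of positive numbers (an illustrative sketch; the particular $a$ and $\epsilon$ are arbitrary):

```python
# Brute-force verification of the crucial-step correspondence:
# for x > 0, |a/x - 1| < eps  iff  x - a lies in U = (-a*eps/(1+eps), a*eps/(1-eps)).
a, eps = 3.0, 0.3            # arbitrary illustrative choices, 0 < eps < 1
lo, hi = -a * eps / (1 + eps), a * eps / (1 - eps)
ok = True
for i in range(1, 20000):
    x = i / 1000.0                # grid of positive candidate values of X_n(omega)
    in_O = abs(a / x - 1) < eps   # Y_n - 1 falls in (-eps, eps)
    in_U = lo < x - a < hi        # X_n - a falls in U
    if in_O != in_U:
        ok = False
print(ok)
```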


I apologize for accepting a best answer so late. It's been a busy week. Thank you so much for this!!!
Savage Henry

5

We are given that

$$\lim_{n\to\infty} P\big(|X_n - \alpha| > \epsilon\big) = 0$$

and we want to show that

$$\lim_{n\to\infty} P\left(\left|\frac{\alpha}{X_n} - 1\right| > \epsilon\right) = 0$$

We have that

$$\left|\frac{\alpha}{X_n} - 1\right| = \left|\frac{1}{X_n}(\alpha - X_n)\right| = \left|\frac{1}{X_n}\right|\,|X_n - \alpha|$$

So equivalently, we are examining the probability limit

$$\lim_{n\to\infty} P\left(\left|\frac{1}{X_n}\right|\,|X_n - \alpha| > \epsilon\right) \stackrel{?}{=} 0$$

We can break the probability into two mutually exclusive joint probabilities

$$P\left(\left|\frac{1}{X_n}\right|\,|X_n - \alpha| > \epsilon\right) = P\left(\left|\frac{1}{X_n}\right|\,|X_n - \alpha| > \epsilon,\; |X_n| \geq 1\right) + P\left(\left|\frac{1}{X_n}\right|\,|X_n - \alpha| > \epsilon,\; |X_n| < 1\right)$$

For the first element we have the series of inequalities

$$P\left(\left|\frac{1}{X_n}\right|\,|X_n - \alpha| > \epsilon,\; |X_n| \geq 1\right) \leq P\big[|X_n - \alpha| > \epsilon,\; |X_n| \geq 1\big] \leq P\big[|X_n - \alpha| > \epsilon\big]$$

The first inequality comes from the fact that we are considering the region where $|X_n|$ is at least unity, and so its reciprocal is at most unity. The second inequality holds because the joint probability of a set of events cannot be greater than the probability of a subset of those events.
The limit of the rightmost term is zero (this is the premise), so the limit of the leftmost term is also zero. So the first element of the probability that interests us is zero.

For the second element we have

$$P\left(\left|\frac{1}{X_n}\right|\,|X_n - \alpha| > \epsilon,\; |X_n| < 1\right) = P\big(|X_n - \alpha| > \epsilon\,|X_n|,\; |X_n| < 1\big)$$

Define $\delta \equiv \epsilon \max |X_n|$. Since here $|X_n|$ is bounded, it follows that $\delta$ can be made arbitrarily small or large, and so it is equivalent to $\epsilon$. So we have the inequality

$$P\big[|X_n - \alpha| > \delta,\; |X_n| < 1\big] \leq P\big[|X_n - \alpha| > \delta\big]$$

Again, the limit on the right side is zero by our premise, so the limit on the left side is also zero. Therefore the second element of the probability that interests us is also zero. QED.
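The two-piece decomposition can be illustrated with a hypothetical Monte Carlo sketch (the sequence $X_n = \alpha + N(0, 1/n)$ and the constants below are illustrative assumptions, not part of the proof):

```python
import random

# Monte Carlo sketch of the two mutually exclusive pieces, using the
# hypothetical sequence X_n = alpha + N(0, 1/n).
random.seed(1)
alpha, eps, trials = 2.0, 0.2, 20000

def split_tail(n):
    big = small = 0  # counts for the |X_n| >= 1 and |X_n| < 1 pieces
    for _ in range(trials):
        x = alpha + random.gauss(0, 1 / n ** 0.5)
        if abs(x - alpha) / abs(x) > eps:
            if abs(x) >= 1:
                big += 1
            else:
                small += 1
    return big / trials, small / trials

b5, s5 = split_tail(5)
b500, s500 = split_tail(500)
print(b5, s5)      # the pieces are visible for small n
print(b500, s500)  # both pieces vanish as n grows
```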


5

For the first part, take $x, a, \epsilon > 0$, and note that

$$|\sqrt{x} - \sqrt{a}| \geq \epsilon \;\Rightarrow\; |\sqrt{x} - \sqrt{a}|\,\sqrt{a} \geq \epsilon\sqrt{a} \;\Rightarrow\; |\sqrt{x} - \sqrt{a}|\,(\sqrt{x} + \sqrt{a}) \geq \epsilon\sqrt{a} \;\Rightarrow\; |(\sqrt{x} - \sqrt{a})(\sqrt{x} + \sqrt{a})| \geq \epsilon\sqrt{a} \;\Rightarrow\; |x - a| \geq \epsilon\sqrt{a}.$$

Hence, for any $\epsilon > 0$, defining $\delta = \epsilon\sqrt{a}$, we have

$$\Pr\big(|\sqrt{X_n} - \sqrt{a}| \geq \epsilon\big) \leq \Pr\big(|X_n - a| \geq \delta\big) \to 0,$$

when $n \to \infty$, implying that $\sqrt{X_n} \xrightarrow{\Pr} \sqrt{a}$.

For the second part, take again $x, a, \epsilon > 0$, and cheat from whuber's answer (this is the key step ;-) to define

$$\delta = \min\left\{\frac{a\epsilon}{1+\epsilon},\; \frac{a\epsilon}{1-\epsilon}\right\}.$$
Now,
$$|x - a| < \delta \;\Rightarrow\; a - \delta < x < a + \delta \;\Rightarrow\; a - \frac{a\epsilon}{1+\epsilon} < x < a + \frac{a\epsilon}{1-\epsilon} \;\Rightarrow\; \frac{a}{1+\epsilon} < x < \frac{a}{1-\epsilon} \;\Rightarrow\; 1 - \epsilon < \frac{a}{x} < 1 + \epsilon \;\Rightarrow\; \left|\frac{a}{x} - 1\right| < \epsilon.$$
The contrapositive of this statement is
$$\left|\frac{a}{x} - 1\right| \geq \epsilon \;\Rightarrow\; |x - a| \geq \delta.$$

Therefore,

$$\Pr\left(\left|\frac{a}{X_n} - 1\right| \geq \epsilon\right) \leq \Pr\big(|X_n - a| \geq \delta\big) \to 0,$$

when $n \to \infty$, implying that $\frac{a}{X_n} \xrightarrow{\Pr} 1$.
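The contrapositive step can be checked by brute force on a grid (an illustrative sketch with arbitrary $a$ and $\epsilon$):

```python
# Grid check of the contrapositive: |a/x - 1| >= eps  implies  |x - a| >= delta,
# with delta = min(a*eps/(1+eps), a*eps/(1-eps)).  Arbitrary a and eps.
a, eps = 2.0, 0.3
delta = min(a * eps / (1 + eps), a * eps / (1 - eps))
ok = True
for i in range(1, 10000):
    x = i / 500.0  # grid of positive candidate values
    if abs(a / x - 1) >= eps and abs(x - a) < delta:
        ok = False
print(ok)
```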

Note: both items are consequences of a more general result. First of all, remember this Lemma: $X_n \xrightarrow{\Pr} X$ if and only if for any subsequence $\{n_i\} \subseteq \mathbb{N}$ there is a subsequence $\{n_{i_j}\} \subseteq \{n_i\}$ such that $X_{n_{i_j}} \to X$ almost surely when $j \to \infty$. Also, remember from Real Analysis that $g: A \to \mathbb{R}$ is continuous at a limit point $x$ of $A$ if and only if for every sequence $\{x_n\}$ in $A$ it holds that $x_n \to x$ implies $g(x_n) \to g(x)$. Hence, if $g$ is continuous and $X_n \to X$ almost surely, then

$$\Pr\left(\lim_{n\to\infty} g(X_n) = g(X)\right) \geq \Pr\left(\lim_{n\to\infty} X_n = X\right) = 1,$$
and it follows that $g(X_n) \to g(X)$ almost surely. Moreover, $g$ being continuous and $X_n \xrightarrow{\Pr} X$, if we pick any subsequence $\{n_i\} \subseteq \mathbb{N}$, then, using the Lemma, there is a subsequence $\{n_{i_j}\} \subseteq \{n_i\}$ such that $X_{n_{i_j}} \to X$ almost surely when $j \to \infty$. But then, as we have seen, it follows that $g(X_{n_{i_j}}) \to g(X)$ almost surely when $j \to \infty$. Since this argument holds for every subsequence $\{n_i\} \subseteq \mathbb{N}$, using the Lemma in the other direction, we conclude that $g(X_n) \xrightarrow{\Pr} g(X)$. Hence, to answer your question you can just define the continuous functions $g(x) = \sqrt{x}$ and $h(x) = a/x$, for $x > 0$, and apply this result.
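A deterministic illustration of the continuity argument: along any path converging to $a$, the images under $g$ and $h$ converge as well (the particular path below is a made-up example):

```python
# Deterministic illustration of the continuity argument: along the path
# x_n = a + (-1)^n / n (which converges to a), both g(x) = sqrt(x) and
# h(x) = a/x converge to their limits.  The path itself is a made-up example.
a = 4.0
ns = (10, 100, 1000)
errs_g = [abs((a + (-1) ** n / n) ** 0.5 - a ** 0.5) for n in ns]
errs_h = [abs(a / (a + (-1) ** n / n) - 1) for n in ns]
print(errs_g)  # decreases toward 0
print(errs_h)  # decreases toward 0
```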

Zen, thank you for your answer. This was very clear!
Savage Henry
Licensed under cc by-sa 3.0 with attribution required.