Has anyone solved PTLOS exercise 4.1?


19

This is an exercise given in Probability Theory: The Logic of Science by Edwin Jaynes (2003). There is a partial solution here. I have worked out a more general partial solution, and was wondering if anyone else has solved it. I will wait a bit before posting my answer, to give others a go.

Okay, so suppose we have $n$ mutually exclusive and exhaustive hypotheses, denoted by $H_i\ (i=1,\dots,n)$. Further suppose we have $m$ data sets, denoted by $D_j\ (j=1,\dots,m)$. The likelihood ratio for the $i$-th hypothesis is given by:

$$LR(H_i)=\frac{P(D_1D_2\dots D_m|H_i)}{P(D_1D_2\dots D_m|\overline{H}_i)}$$

Note that these are conditional probabilities. Now suppose that, given the $i$-th hypothesis $H_i$, the $m$ data sets are independent:

$$P(D_1D_2\dots D_m|H_i)=\prod_{j=1}^{m}P(D_j|H_i)\quad(i=1,\dots,n)\qquad\text{Condition 1}$$

Now it would be very convenient if the denominator also factored in this situation:

$$P(D_1D_2\dots D_m|\overline{H}_i)=\prod_{j=1}^{m}P(D_j|\overline{H}_i)\quad(i=1,\dots,n)\qquad\text{Condition 2}$$

For in this case the likelihood ratio splits into a product of smaller factors, one for each data set, so that we have:

$$LR(H_i)=\prod_{j=1}^{m}\frac{P(D_j|H_i)}{P(D_j|\overline{H}_i)}$$

So in this case, each data set would "vote for $H_i$" or "vote against $H_i$" independently of any other data set.

The exercise is to prove that if $n>2$ (more than two hypotheses), there is no non-trivial way in which this factoring can occur. That is, if you assume that Condition 1 and Condition 2 hold, then at most one of the factors

$$\frac{P(D_1|H_i)}{P(D_1|\overline{H}_i)}\ \frac{P(D_2|H_i)}{P(D_2|\overline{H}_i)}\dots\frac{P(D_m|H_i)}{P(D_m|\overline{H}_i)}$$

is different from 1, and thus only one data set can contribute to the likelihood ratio.

Personally, I found this result quite fascinating, because it basically shows that multiple hypothesis testing is nothing more than a series of binary hypothesis tests.
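To make the two conditions concrete, here is a minimal numerical sketch (my own illustration, not part of the exercise; all numbers are made up). It builds Condition 1 in by construction and then checks Condition 2: with $n=2$ hypotheses it holds automatically, while with $n=3$ it generally fails.

```python
import numpy as np

def condition_2_violation(h, d):
    """h[k] = P(H_k), d[j, k] = P(D_j | H_k); Condition 1 is built in,
    i.e. P(D_1...D_m | H_k) = prod_j d[j, k]. Returns the largest gap,
    over i, between P(D_1...D_m | not H_i) and prod_j P(D_j | not H_i)."""
    m, n = d.shape
    worst = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        w = h[mask] / h[mask].sum()                    # P(H_k | not H_i)
        joint = (d[:, mask].prod(axis=0) * w).sum()    # P(D_1...D_m | not H_i)
        factored = np.prod([(d[j, mask] * w).sum() for j in range(m)])
        worst = max(worst, abs(joint - factored))
    return worst

rng = np.random.default_rng(0)
for n in (2, 3):
    h = rng.dirichlet(np.ones(n))              # random prior over hypotheses
    d = rng.uniform(0.1, 0.9, size=(2, n))     # random likelihoods, m = 2
    print(n, "hypotheses:", condition_2_violation(h, d))
# n = 2 gives 0 (up to rounding): Condition 2 follows from Condition 1.
# n = 3 gives a non-zero violation for generic numbers, as the exercise predicts.
```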


I'm a little confused by the index on $\overline{H}_i$; is it $\overline{H}_i=\arg\max_{h\neq H_i}P(D_1\dots D_m|h)$, or is it $\overline{H}_i=\arg\max_{h\in\{H_1,\dots,H_n\}}P(D_1\dots D_m|h)$? It seems like it should be the latter, but then I'm not sure why the subscript. Or maybe I'm missing something else completely :)
JMS

@JMS - $\overline{H}_i$ stands for the logical statement "$H_i$ is false", or equivalently that one of the other hypotheses is true. So in "Boolean algebra" we have $\overline{H}_i=H_1+H_2+\dots+H_{i-1}+H_{i+1}+\dots+H_n$ (because the hypotheses are exclusive and exhaustive).
probabilityislogic

I would think there has to be a more intuitive solution than the algebra given in Saunders' partial solution. If the data are independent given each hypothesis, this continues to hold as the priors on the hypotheses change; and yet, somehow, the result is that the same does not carry over to the conclusions.
charles

@charles - I know exactly how you feel. I thought I would be able to derive it using some kind of qualitative contradiction (reductio ad absurdum), but I couldn't; I could only extend Saunders' maths. And it is Condition 2 which is the "dodgy one" in terms of what the result means.
probabilityislogic

@probabilityislogic "it basically shows that multiple hypothesis testing is nothing more than a series of binary hypothesis tests" - could you please expand on this sentence? Reading page 98 of Jaynes' book, I understand that you can reduce a test of $H_1,\dots,H_n$ to testing $H_1$ against each other hypothesis and then somehow normalise to obtain the posterior for $H_1$, but I don't understand why this follows from the result of exercise 4.1.
Martin Drozdik

Answers:


7

The reason we accepted eq. 4.28 (in the book; your Condition 1) was that we assumed the probability of the data given a certain hypothesis $H_a$ and background information $X$ is independent; in other words, for any $D_i$ and $D_j$ with $i\neq j$:

$$P(D_i|D_jH_aX)=P(D_i|H_aX)\tag{1}$$
Nonextensibility beyond the binary case can therefore be discussed like this: if we assume eq. 1 to be true, is eq. 2 also true?

$$P(D_i|D_j\overline{H}_aX)\overset{?}{=}P(D_i|\overline{H}_aX)\tag{2}$$
First let's look at the left-hand side of eq. 2, using the multiplication rule:

$$P(D_i|D_j\overline{H}_aX)=\frac{P(D_iD_j\overline{H}_a|X)}{P(D_j\overline{H}_a|X)}\tag{3}$$
Since the $n$ hypotheses $\{H_1,\dots,H_n\}$ are mutually exclusive and exhaustive, we can write:
$$\overline{H}_a=\sum_{b\neq a}H_b$$
Expanding eq. 3 over these hypotheses and applying eq. 1 to each term:
$$P(D_i|D_j\overline{H}_aX)=\frac{\sum_{b\neq a}P(D_i|D_jH_bX)P(D_jH_b|X)}{\sum_{b\neq a}P(D_jH_b|X)}=\frac{\sum_{b\neq a}P(D_i|H_bX)P(D_jH_b|X)}{\sum_{b\neq a}P(D_jH_b|X)}$$
If we have only two hypotheses, the summations reduce to a single term, the factors $P(D_jH_b|X)$ cancel out, and eq. 2 is proved correct, since then $H_b=\overline{H}_a$. Therefore equation 4.29 can be derived from equation 4.28 in the book. But when we have more than two hypotheses this doesn't happen; for example, if we have three hypotheses $\{H_1,H_2,H_3\}$, the equation above becomes:
$$P(D_i|D_j\overline{H}_1X)=\frac{P(D_i|H_2X)P(D_jH_2|X)+P(D_i|H_3X)P(D_jH_3|X)}{P(D_jH_2|X)+P(D_jH_3|X)}$$
In other words:
$$P(D_i|D_j\overline{H}_1X)=\frac{P(D_i|H_2X)}{1+\dfrac{P(D_jH_3|X)}{P(D_jH_2|X)}}+\frac{P(D_i|H_3X)}{1+\dfrac{P(D_jH_2|X)}{P(D_jH_3|X)}}$$
The only way this equation can yield eq. 2 is for both denominators to equal 1, i.e. for both fractions in the denominators to equal zero. But that is impossible: they are ratios of strictly positive probabilities.
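To see the failure numerically, here is a small sketch with made-up numbers (purely illustrative; any generic values behave the same way). It evaluates the weighted-average expression above and compares it with $P(D_i|\overline{H}_1X)$ computed without conditioning on $D_j$:

```python
# Three hypotheses H_1, H_2, H_3, indexed 0, 1, 2; all numbers invented:
h   = [0.2, 0.5, 0.3]   # h[b]   = P(H_b | X)
p_i = [0.1, 0.8, 0.3]   # p_i[b] = P(D_i | H_b X)
p_j = [0.4, 0.2, 0.9]   # p_j[b] = P(D_j | H_b X)

others = [1, 2]         # the hypotheses making up "not H_1"

# Left side of eq. 2, the weighted average above (Condition 1 supplies
# P(D_i D_j H_b | X) = p_i[b] * p_j[b] * h[b]):
lhs = (sum(p_i[b] * p_j[b] * h[b] for b in others)
       / sum(p_j[b] * h[b] for b in others))

# Right side of eq. 2, with no conditioning on D_j:
rhs = (sum(p_i[b] * h[b] for b in others)
       / sum(h[b] for b in others))

print(lhs, rhs)   # ~0.435 vs 0.6125: conditioning on D_j reshuffles the weights
```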

1
I think the fourth equation is incorrect. We should have $P(D_iD_jH_b|X)=P(D_iH_b|X)P(D_j|H_bX)$
probabilityislogic

Thank you very much probabilityislogic, I was able to correct the solution. What do you think now?
astroboy

I just don't understand how Jaynes says: "Those who fail to distinguish between logical independence and causal independence would suppose that (4.29) is always valid".
astroboy

I think I found the answer to my last comment: right after the sentence above, Jaynes says: "provided only that no $D_i$ exerts a physical influence on any other $D_j$". So essentially Jaynes is saying that even if they don't exert physical influence on each other, there is a logical limitation that doesn't allow the generalization to more than two hypotheses.
astroboy

After reading the text again I feel my last comment was not a good answer. As I understand it now, Jaynes wanted to say: "those who fail to distinguish between logical independence and causal independence" would argue that $D_i$ and $D_j$ are assumed to have no physical influence on each other; thus they have causal independence, which for them implies logical independence over any set of hypotheses. So they find all this discussion meaningless and simply proceed to generalize the binary case.
astroboy

1

Okay, so rather than go and re-derive Saunders' equation (5), I will just state it here. Conditions 1 and 2 imply the following equality:

$$\prod_{j=1}^{m}\left(\sum_{k\neq i}h_kd_{jk}\right)=\left(\sum_{k\neq i}h_k\right)^{m-1}\left(\sum_{k\neq i}h_k\prod_{j=1}^{m}d_{jk}\right)$$
where
$$d_{jk}=P(D_j|H_k,I)\qquad h_k=P(H_k|I)$$
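For intuition about where this equality comes from: by the sum rule, $P(D_j|\overline{H}_i)=\sum_{k\neq i}d_{jk}h_k/\sum_{k\neq i}h_k$, and with Condition 1, $P(D_1\dots D_m|\overline{H}_i)=\sum_{k\neq i}h_k\prod_j d_{jk}/\sum_{k\neq i}h_k$; equation (5) is then just Condition 2 with the common factor $\left(\sum_{k\neq i}h_k\right)^m$ cleared. A throwaway numerical sketch of that equivalence (random made-up inputs):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, i = 4, 3, 0                       # n hypotheses, m data sets, test i = 0
h = rng.dirichlet(np.ones(n))           # h[k]    = P(H_k | I)
d = rng.uniform(0.1, 0.9, (m, n))       # d[j, k] = P(D_j | H_k, I)
k = np.arange(n) != i                   # hypotheses making up "not H_i"

# Saunders' eq. (5):
lhs = np.prod([(h[k] * d[j, k]).sum() for j in range(m)])
rhs = h[k].sum() ** (m - 1) * (h[k] * d[:, k].prod(axis=0)).sum()

# Condition 2 directly, with Condition 1 supplying the joint likelihoods:
c2_lhs = np.prod([(h[k] * d[j, k]).sum() / h[k].sum() for j in range(m)])
c2_rhs = (h[k] * d[:, k].prod(axis=0)).sum() / h[k].sum()

# The two gaps agree exactly: eq. (5) holds precisely when Condition 2 holds.
print(lhs - rhs, (c2_lhs - c2_rhs) * h[k].sum() ** m)
```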

Now we can specialise to the case $m=2$ (two data sets) by taking $D_1^{(1)}\equiv D_1$ and relabeling $D_2^{(1)}\equiv D_2D_3\dots D_m$. Note that these two data sets still satisfy Conditions 1 and 2, so the result above applies to them as well. Expanding the $m=2$ case we get:

$$\left(\sum_{k\neq i}h_kd_{1k}\right)\left(\sum_{l\neq i}h_ld_{2l}\right)=\left(\sum_{k\neq i}h_k\right)\left(\sum_{l\neq i}h_ld_{1l}d_{2l}\right)$$

$$\sum_{k\neq i}\sum_{l\neq i}h_kh_ld_{1k}d_{2l}=\sum_{k\neq i}\sum_{l\neq i}h_kh_ld_{1l}d_{2l}$$

$$\sum_{k\neq i}\sum_{l\neq i}h_kh_ld_{2l}\left(d_{1k}-d_{1l}\right)=0\quad(i=1,\dots,n)$$

The term $(d_{1a}-d_{1b})$ occurs twice in the above double summation, once when $k=a$ and $l=b$, and once again when $k=b$ and $l=a$; this occurs as long as $a,b\neq i$, and the coefficients of the two occurrences are $d_{2b}$ and $d_{2a}$. Now because there is one of these equations for each $i=1,\dots,n$, we can actually remove the restriction $k,l\neq i$: every pair $(a,b)$ appears in the equation for some $i\neq a,b$, and such an $i$ exists whenever there are at least three hypotheses. To illustrate, take $i=1$: this gives all the conditions except those for pairs involving 1. Now take $i=3$, and we recover the pair $(1,2)$. So the equation can be re-written as:

$$\sum_{l>k}h_kh_l\left(d_{2l}-d_{2k}\right)\left(d_{1k}-d_{1l}\right)=0$$
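The rearrangement from the double sum to this pairwise form is pure algebra and can be checked symbolically; here is a small sketch using sympy (fixed small $n$, with the $k,l\neq i$ restriction dropped as argued above):

```python
import sympy as sp

n = 4  # check at a small fixed n; the cancellation pattern is the same for any n
h  = sp.symbols(f'h1:{n+1}', positive=True)
d1 = sp.symbols(f'a1:{n+1}')   # a_k stands for d_{1k}
d2 = sp.symbols(f'b1:{n+1}')   # b_k stands for d_{2k}

# The unrestricted double sum:
double_sum = sum(h[k]*h[l]*d2[l]*(d1[k] - d1[l])
                 for k in range(n) for l in range(n))

# The pairwise (l > k) form:
pair_sum = sum(h[k]*h[l]*(d2[l] - d2[k])*(d1[k] - d1[l])
               for k in range(n) for l in range(n) if l > k)

print(sp.simplify(double_sum - pair_sum))   # prints 0: the two forms agree
```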

Now each of the $h_k$ terms must be greater than zero, for otherwise we would be dealing with $n_1<n$ hypotheses, and the answer could be reformulated in terms of $n_1$. So these can be removed from the above set of conditions:

$$\sum_{l>k}\left(d_{2l}-d_{2k}\right)\left(d_{1k}-d_{1l}\right)=0$$

Thus, there are $\frac{n(n-1)}{2}$ conditions that must be satisfied, and each condition implies one of two "sub-conditions": that $d_{jk}=d_{jl}$ for either $j=1$ or $j=2$ (but not necessarily both). Now consider the set of all the unique pairs $(k,l)$ with $d_{jk}=d_{jl}$. If we were to take $n-1$ of these pairs for one of the $j$, then we would have all the numbers $1,\dots,n$ in the set, and $d_{j1}=d_{j2}=\dots=d_{j,n-1}=d_{jn}$. This is because the first pair has 2 elements, and each additional pair brings at least one additional element to the set.*

But note that because there are $\frac{n(n-1)}{2}$ conditions, for one of $j=1$ or $j=2$ we must choose at least $\left\lceil\frac{1}{2}\times\frac{n(n-1)}{2}\right\rceil=\left\lceil\frac{n(n-1)}{4}\right\rceil$ of the pairs. If $n>4$ then the number of pairs chosen is greater than $n-1$; if $n=4$ or $n=3$ then we must choose exactly $n-1$ pairs. Either way this implies $d_{j1}=d_{j2}=\dots=d_{jn}$. Only with two hypotheses ($n=2$) does this not occur. But from the last equation in Saunders' article this equality condition implies:
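A quick check of this counting claim (a throwaway sketch; nothing assumed beyond the arithmetic): compare $\lceil n(n-1)/4\rceil$, the minimum number of pairs forced onto one of the $j$, against the $n-1$ pairs needed to chain all the indices $1,\dots,n$ together.

```python
from math import ceil

# n = 2 never reaches this step: dropping the k, l != i restriction above
# already required a third hypothesis, so start at n = 3.
for n in range(3, 9):
    forced = ceil(n * (n - 1) / 4)   # pairs forced onto one of j = 1 or j = 2
    needed = n - 1                   # pairs needed to link all of 1..n
    print(n, forced, needed, forced >= needed)
# n = 3, 4: forced == needed; n > 4: forced > needed, as claimed in the text.
```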

$$P(D_j|\overline{H}_i)=\frac{\sum_{k\neq i}d_{jk}h_k}{\sum_{k\neq i}h_k}=\frac{d_{ji}\sum_{k\neq i}h_k}{\sum_{k\neq i}h_k}=d_{ji}=P(D_j|H_i)$$

Thus, in the likelihood ratio we have:

$$\frac{P(D_1^{(1)}|H_i)}{P(D_1^{(1)}|\overline{H}_i)}=\frac{P(D_1|H_i)}{P(D_1|\overline{H}_i)}=1\quad\text{OR}\quad\frac{P(D_2^{(1)}|H_i)}{P(D_2^{(1)}|\overline{H}_i)}=\frac{P(D_2D_3\dots D_m|H_i)}{P(D_2D_3\dots D_m|\overline{H}_i)}=1$$

To complete the proof, note that if the second condition holds, the result is already proved, and only one ratio can be different from 1. If the first condition holds, then we can repeat the above analysis after relabeling $D_1^{(2)}\equiv D_2$ and $D_2^{(2)}\equiv D_3\dots D_m$: either $D_1$ and $D_2$ both do not contribute, or $D_2$ is the only contributor. In the former case we would make a third relabeling, and so on. Thus, only one data set can contribute to the likelihood ratio when Condition 1 and Condition 2 hold and there are more than two hypotheses.

*NOTE: An additional pair might bring no new elements, but this would be offset by a pair which brings two new elements. E.g. take $d_{j1}=d_{j2}$ first [+2], then $d_{j1}=d_{j3}$ [+1] and $d_{j2}=d_{j3}$ [+0]; but the next pair must then have $d_{jk}=d_{jl}$ for both $k,l\notin\{1,2,3\}$, which adds two new elements [+2]. If $n=4$ then we don't need to choose any more, but for the "other" $j$ we must choose the 3 pairs which are not $(1,2),(2,3),(1,3)$. These are $(1,4),(2,4),(3,4)$, and thus the equality holds, because all the numbers $(1,2,3,4)$ are in the set.


I am beginning to doubt the accuracy of this proof. The result in Saunders' maths implies only $n$ non-linear constraints on the $d_{jk}$, which gives the $d_{jk}$ only $n$ degrees of freedom instead of $2n$. However, to get to the $\frac{n(n-1)}{2}$ conditions a different argument is required.
probabilityislogic

0

For the record, here is a somewhat more extensive proof. It also contains some background information. Maybe this is helpful for others studying the topic.

The main idea of the proof is to show that Jaynes' Conditions 1 and 2 imply that
$$P(D_{m_k}|H_iX)=P(D_{m_k}|X)$$
for all but one data set $m_k\in\{1,\dots,m\}$. It then shows that for all these data sets, we also have
$$P(D_{m_k}|\overline{H}_iX)=P(D_{m_k}|X).$$
Thus we have, for all but one data set,
$$\frac{P(D_{m_k}|H_iX)}{P(D_{m_k}|\overline{H}_iX)}=\frac{P(D_{m_k}|X)}{P(D_{m_k}|X)}=1.$$
The reason that I wanted to include the proof here is that some of the steps involved are not at all obvious, and one needs to take care not to use anything other than Conditions 1 and 2 and the product rule (as many of the other proofs implicitly do). The link above includes all these steps in detail. It is on my Google Drive and I will make sure it stays accessible.


Welcome to Cross Validated. Thank you for your answer. Can you please edit your answer to expand it, in order to include the main points of the link you provide? It will be more helpful both for people searching this site and in case the link breaks. By the way, take the opportunity to take the Tour, if you haven't done it already. See also some tips on How to Answer, on formatting help and on writing down equations using LaTeX / MathJax.
Ertxiem - reinstate Monica

Thanks for your comment. I edited the post and sketched the main steps of the proof.
dennis