Probability distribution for varying probabilities


36

If I want to get the probability of 9 successes in 16 trials, with each trial having probability 0.6, I can use a binomial distribution. What can I use if each of the 16 trials has a different probability of success?


1
@whuber In your description of the normal approximation, the mean and standard deviation calculations differ from Wikipedia's explanation. There, the mean is np and the variance is np(1-p). So in this problem, for the normal approximation to a binomial with varying probabilities of success, the mean is p1+p2+⋯+pn and the variance is p1(1−p1)+p2(1−p2)+⋯+pn(1−pn). Am I right?
David

1
See the Wikipedia article on the Poisson binomial distribution. It's also a search term that finds several hits here.
Glen_b -Reinstate Monica

@David When all the pi equal a common value p, then p1+p2+⋯+pn=np and p1(1−p1)+⋯+pn(1−pn)=np(1−p), showing that the Wikipedia description you refer to is just a special case.
whuber


Answers:


22

This is the sum of 16 (presumably independent) Bernoulli trials. The assumption of independence allows us to multiply probabilities. Whence, after two trials with probabilities $p_1$ and $p_2$ of success, the chance of success on both trials is $p_1 p_2$, the chance of no successes is $(1-p_1)(1-p_2)$, and the chance of exactly one success is $p_1(1-p_2)+(1-p_1)p_2$. That last expression owes its validity to the fact that the two ways of getting exactly one success are mutually exclusive: at most one of them can actually happen. That means their probabilities add.

By means of these two rules - multiplying independent probabilities and adding mutually exclusive ones - you can work out the answer for, say, 16 trials with probabilities $p_1,\dots,p_{16}$. To do so, you need to account for all the ways of obtaining a given number of successes (such as 9). There are $\binom{16}{9}=11440$ ways to achieve 9 successes. One of them, for example, occurs when trials 1, 2, 4, 5, 6, 11, 12, 14, and 15 are successes and the others are failures. The successes had probabilities $p_1,p_2,p_4,p_5,p_6,p_{11},p_{12},p_{14},p_{15}$ and the failures had probabilities $1-p_3,1-p_7,\dots,1-p_{13},1-p_{16}$. Multiplying these 16 numbers gives the chance of this particular sequence of outcomes. Summing this number along with the probabilities of the 11,439 remaining such sequences gives the answer.

Naturally, you would use a computer.
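The enumeration just described can be sketched in a few lines of Python (the thread's own code, further down, is in R and Mathematica). The probabilities $p_i=i/17$ are the example used later in the thread, not part of this answer:

```python
from itertools import combinations
from math import prod

def exact_enumeration(p, k):
    """Sum, over every size-k subset A of trials, the product of the
    success probabilities over A and the failure probabilities off A."""
    n = len(p)
    total = 0.0
    for A in combinations(range(n), k):
        in_A = set(A)
        total += prod(p[i] if i in in_A else 1 - p[i] for i in range(n))
    return total

p = [i / 17 for i in range(1, 17)]   # illustrative example values
print(exact_enumeration(p, 9))       # ~0.1982677
```

For 16 trials this loops over all 11440 subsets; it is transparent but exponential in $n$, which is why the answers below use generating functions or convolutions for larger problems.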

With many more than 16 trials, you would need to approximate the distribution. Provided none of the probabilities $p_i$ and $1-p_i$ become too small, a normal approximation tends to work well. With this method you note that the expectation of the sum of $n$ trials is $\mu=p_1+p_2+\cdots+p_n$ and (because the trials are independent) the variance is $\sigma^2=p_1(1-p_1)+p_2(1-p_2)+\cdots+p_n(1-p_n)$. You then pretend the distribution of the sum is normal with mean $\mu$ and standard deviation $\sigma$. The answers tend to be good for computing probabilities corresponding to a proportion of successes that differs from $\mu$ by no more than a few multiples of $\sigma$. As $n$ grows large, this approximation gets ever more accurate and works for even larger multiples of $\sigma$ away from $\mu$.


9
Computer scientists call these "Poisson trials" to distinguish them from Bernoulli trials. In addition to central limit theorem approximations, there are also good tail bounds available. Here is one. A Google search on "Chernoff bounds for Poisson trials" will turn up the results you're likely to find in a typical CS treatment.
cardinal
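One standard Chernoff-style bound for Poisson trials is $P(S\ge a)\le\min_{s>0}e^{-sa}\prod_i(1-p_i+p_ie^s)$. A Python sketch of it (my own illustration, not from the linked treatment; the $p_i=i/17$ values are the example used elsewhere in the thread), compared against the exact tail:

```python
from math import exp, log

def chernoff_tail_bound(p, a, grid=2000):
    """Upper-bound P(S >= a) by minimizing, over a grid of s > 0,
    exp(-s*a) * prod(1 - p_i + p_i * e^s)."""
    best = 1.0
    for j in range(1, grid + 1):
        s = 5.0 * j / grid          # search s on (0, 5]
        logmgf = sum(log(1 - pi + pi * exp(s)) for pi in p)
        best = min(best, exp(logmgf - s * a))
    return best

def exact_tail(p, a):
    """Exact P(S >= a) via the standard convolution recursion."""
    dist = [1.0]
    for pi in p:
        dist = [(dist[k] if k < len(dist) else 0.0) * (1 - pi)
                + (dist[k - 1] * pi if k > 0 else 0.0)
                for k in range(len(dist) + 1)]
    return sum(dist[a:])

p = [i / 17 for i in range(1, 17)]
print(chernoff_tail_bound(p, 12), ">=", exact_tail(p, 12))
```

The bound is valid for every $s>0$, so a grid search is enough for an illustration; it is loose compared to the exact tail but needs no enumeration.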

@cardinal That nomenclature is interesting. It seems misleading, though, because the distribution is not approximated by the Poisson distribution except when all the pi are very small. (There's another discussion on CV about this question, where "16" is replaced by 10,000 and we examine the tail probabilities, but I haven't been able to find it again.)
whuber

1
Yes, I agree about the name; it felt a little odd when I first encountered it. I gave it here as a useful term for searching. Computer scientists seem to consider these probabilities often in dealing with certain algorithms. I'd be interested to read that other question if you find it. Is it this one, maybe?
cardinal

2
@cardinal is right that we "CS folks" call them Poisson trials. In fact, in this case the standard Chernoff-Hoeffding bound gives exactly the bound the OP is asking for.
Suresh Venkatasubramanian

1
As per the comment made yesterday by @David, something seems wrong with your statement approximating the mean as $\mu=(p_1+p_2+\cdots+p_n)/n$ under the normal approximation: we are summing 16 Bernoulli RVs, each of which takes the value 0 or 1, so the support of the sum is 0 to 16, not 0 to 1. The SD is also worth checking.
wolfies

12

One alternative to @whuber's normal approximation is to use "mixing" probabilities, or a hierarchical model. This would apply when the $p_i$ are similar in some way, and you can model this by a probability distribution $p_i\sim\text{Dist}(\theta)$ with density function $g(p|\theta)$ indexed by some parameter $\theta$. You get an integral equation:

$$\Pr(s=9\mid n=16,\theta)=\binom{16}{9}\int_0^1 p^9(1-p)^7\,g(p|\theta)\,dp$$

The binomial probability comes from setting $g(p|\theta)=\delta(p-\theta)$, and the normal approximation comes (I think) from setting $g(p|\theta)=g(p|\mu,\sigma)=\frac{1}{\sigma}\phi\left(\frac{p-\mu}{\sigma}\right)$ (with $\mu$ and $\sigma$ as defined in @whuber's answer) and noting that the "tails" of this PDF fall off sharply around the peak.

You could also use a beta distribution, which leads to a simple analytic form and which does not suffer from the "small p" problem that the normal approximation does - the beta is quite flexible. Use a $\text{beta}(\alpha,\beta)$ distribution with $\alpha,\beta$ set by solving the following equations (this is the "minimum KL divergence" estimate):

$$\psi(\alpha)-\psi(\alpha+\beta)=\frac{1}{n}\sum_{i=1}^n\log[p_i]$$
$$\psi(\beta)-\psi(\alpha+\beta)=\frac{1}{n}\sum_{i=1}^n\log[1-p_i]$$

Where $\psi(\cdot)$ is the digamma function, closely related to the harmonic series.
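These two equations can be solved numerically. A Python sketch (my own illustration, not from this answer): for the symmetric example $p_i=i/17$ the two right-hand sides coincide, so $\alpha=\beta$ and a single bisection suffices; the digamma function is implemented via the usual recurrence plus its asymptotic series:

```python
from math import log

def digamma(x):
    """psi(x) via the recurrence psi(x) = psi(x+1) - 1/x
    plus the asymptotic series once x >= 6."""
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def fit_symmetric_alpha(p, lo=1e-3, hi=100.0):
    """Solve psi(a) - psi(2a) = mean(log p_i) by bisection; this is
    the minimum-KL fit in the symmetric case where alpha = beta."""
    target = sum(log(pi) for pi in p) / len(p)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if digamma(mid) - digamma(2 * mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = [i / 17 for i in range(1, 17)]
print(round(fit_symmetric_alpha(p), 4))  # ~1.3206
```

The bisection works because $\psi(a)-\psi(2a)$ increases monotonically from $-\infty$ toward $-\log 2$; the asymmetric case would need a two-dimensional root finder instead.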

We get the "beta-binomial" compound distribution:

$$\binom{16}{9}\frac{1}{B(\alpha,\beta)}\int_0^1 p^{9+\alpha-1}(1-p)^{7+\beta-1}\,dp=\binom{16}{9}\frac{B(\alpha+9,\beta+7)}{B(\alpha,\beta)}$$

This distribution converges towards a normal distribution in the case that @whuber points out - but it should give reasonable answers for small $n$ and skewed $p_i$ - though not for multimodal $p_i$, as a beta distribution only has one peak. But you can easily fix this, by simply using $M$ beta distributions for the $M$ modes. You break up the integral from $0<p<1$ into $M$ pieces so that each piece has a unique mode (and enough data to estimate parameters), and fit a beta distribution within each piece. Then add up the results, noting that with the change of variables $p=\frac{x-L}{U-L}$ for $L<x<U$, the beta integral transforms to:

$$B(\alpha,\beta)=\int_L^U\frac{(x-L)^{\alpha-1}(U-x)^{\beta-1}}{(U-L)^{\alpha+\beta-1}}\,dx$$
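The resulting beta-binomial probability is easy to evaluate via log-gamma. A Python sketch, plugging in the $\alpha=\beta=1.3206$ that these equations yield for the $p_i=i/17$ example (the value reported in another answer in this thread):

```python
from math import lgamma, exp, comb

def log_beta(a, b):
    """log of the Beta function via log-gamma, to avoid overflow."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(s, n, a, b):
    """P(S = s) = C(n, s) * B(a + s, b + n - s) / B(a, b)."""
    return comb(n, s) * exp(log_beta(a + s, b + n - s) - log_beta(a, b))

print(beta_binomial_pmf(9, 16, 1.3206, 1.3206))  # ~0.068
```

As discussed further down the thread, this particular estimate is nearly uniform in $s$ and far from the exact 0.198, which is what motivates the "General Case" analysis below.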

+1 This answer contains some interesting and clever suggestions. The last one looks particularly flexible and powerful.
whuber

Just to take something very simple and concrete, suppose (i) $p_i=i/17$ and (ii) $p_i=\sqrt{i}/17$, for $i=1$ to $16$. What would be the solution to your $\alpha$ and $\beta$ estimates, and thus your estimates for $P(X=9)$ given $n=16$, as per the OP's problem?
wolfies

Great answer and proposal, especially the beta! It'd be cool to see this answer written in its general form with n and s.
pglpm

8

Let $X_i\sim\text{Bernoulli}(p_i)$ with probability generating function (pgf):

$$\text{pgf}=E[t^{X_i}]=1-p_i(1-t)$$

Let $S=\sum_{i=1}^n X_i$ denote the sum of $n$ such independent random variables. Then, the pgf for the sum $S$ of $n=16$ such variables is:

$$\text{pgf}_S=E[t^S]=E[t^{X_1}]E[t^{X_2}]\cdots E[t^{X_{16}}]\quad(\text{by independence})=\prod_{i=1}^{16}\bigl(1-p_i(1-t)\bigr)$$

We seek P(S=9), which is:

$$\frac{1}{9!}\left.\frac{d^9\,\text{pgf}_S}{dt^9}\right|_{t=0}$$

ALL DONE. This produces the exact symbolic solution as a function of the $p_i$. The answer is rather long to print on screen, but it is entirely tractable, and takes less than 1/100th of a second to evaluate using Mathematica on my computer.

Examples

If $p_i=\frac{i}{17}$, $i=1$ to $16$, then: $P(S=9)=\frac{9647941854334808184}{48661191875666868481}=0.198268$

If $p_i=\frac{\sqrt{i}}{17}$, $i=1$ to $16$, then: $P(S=9)=0.000228613$

More than 16 trials?

With more than 16 trials, there is no need to approximate the distribution. The above exact method works just as easily for examples with, say, n=50 or n=100. For instance, when n=50, it takes less than 1/10th of a second to evaluate the entire pmf (i.e. at every value s=0,1,…,50) using the code below.

Mathematica code

Given a vector of pi values, say:

n = 16;   pvals = Table[Subscript[p, i] -> i/(n+1), {i, n}];

... here is some Mathematica code to do everything required:

pgfS = Expand[ Product[1-(1-t)Subscript[p,i], {i, n}] /. pvals];
D[pgfS, {t, 9}]/9! /. t -> 0  // N

0.198268

To derive the entire pmf:

Table[D[pgfS, {t,s}]/s! /. t -> 0 // N, {s, 0, n}]

... or use the even neater and faster (thanks to a suggestion from Ray Koopman below):

CoefficientList[pgfS, t] // N

For an example with n=1000, it takes just 1 second to calculate pgfS, and then 0.002 seconds to derive the entire pmf using CoefficientList, so it is extremely efficient.


1
It can be even simpler. With[{p = Range@16/17}, N@Coefficient[Times@@(1-p+p*t),t,9]] gives the probability of 9 successes, and With[{p = Range@16/17}, N@CoefficientList[Times@@(1-p+p*t),t]] gives the probabilities of 0,...,16 successes.
Ray Koopman

@RayKoopman That is cool. The Table for the p-values is intentional to allow for more general forms not suitable with Range. Your use of CoefficientList is very nice! I've added an Expand to the code above which speeds the direct approach up enormously. Even so, CoefficientList is even faster than a ParallelTable. It does not make much difference for n under 50 (both approaches take just a tiny fraction of a second either way to generate the entire pmf), but your CoefficientList will also be a real practical advantage when n is really large.
wolfies

5

@wolfies' comment, and my attempt at a response to it, revealed an important problem with my other answer, which I will discuss later.

Specific Case (n=16)

There is a fairly efficient way to code up the full distribution by using the "trick" of using base 2 (binary) numbers in the calculation. It only requires 4 lines of R code to get the full distribution of $Y=\sum_{i=1}^n Z_i$ where $\Pr(Z_i=1)=p_i$. Basically, there are a total of $2^n$ choices of the vector $z=(z_1,\dots,z_n)$ that the binary variables $Z_i$ could take. Now suppose we number each distinct choice from $1$ up to $2^n$. This on its own is nothing special, but now suppose that we represent the "choice number" using base 2 arithmetic. Now take $n=3$ so I can write down all the choices: there are $2^3=8$ of them. Then $1,2,3,4,5,6,7,8$ in "ordinary numbers" becomes $1,10,11,100,101,110,111,1000$ in "binary numbers". Now suppose we write these as four-digit numbers, so we have $0001,0010,0011,0100,0101,0110,0111,1000$. Now look at the last 3 digits of each number - $001$ can be thought of as $(Z_1=0,Z_2=0,Z_3=1)$, i.e. $Y=1$, and so on. Counting in binary form provides an efficient way to organise the summation. Fortunately, there is an R function which can do this binary conversion for us, called intToBits(x); we convert the raw binary form into a numeric via as.numeric(intToBits(x)), and we get a vector with 32 elements, each element being a digit of the base 2 version of our number (read from right to left, not left to right). Using this trick combined with some other R vectorisations, we can calculate the probability that $y=9$ in 4 lines of R code:

exact_calc <- function(y,p){
    n       <- length(p)
    z       <- t(matrix(as.numeric(intToBits(1:2^n)),ncol=2^n))[,1:n] #don't need columns n+1,...,32 as these are always 0
    pz      <- z%*%log(p/(1-p))+sum(log(1-p))
    ydist   <- rowsum(exp(pz),rowSums(z))
    return(ydist[y+1])
}

Plugging in the uniform case $p_i^{(1)}=i/17$ and the square-root case $p_i^{(2)}=\sqrt{i}/17$ gives a full distribution for $y$ as:

 y    Pr(Y=y | p_i=i/17)    Pr(Y=y | p_i=√i/17)
 0          0.0000                 0.0558
 1          0.0000                 0.1784
 2          0.0003                 0.2652
 3          0.0026                 0.2430
 4          0.0139                 0.1536
 5          0.0491                 0.0710
 6          0.1181                 0.0248
 7          0.1983                 0.0067
 8          0.2353                 0.0014
 9          0.1983                 0.0002
10          0.1181                 0.0000
11          0.0491                 0.0000
12          0.0139                 0.0000
13          0.0026                 0.0000
14          0.0003                 0.0000
15          0.0000                 0.0000
16          0.0000                 0.0000

So for the specific problem of y successes in 16 trials, the exact calculations are straightforward. This also works for a number of probabilities up to about n=20 - beyond that, you are likely to start running into memory problems, and different computing tricks are needed.

Note that by applying my suggested "beta distribution" approximation we get parameter estimates of α=β=1.3206, and this gives a probability estimate that is nearly uniform in y, with an approximate value of Pr(y=9)=0.06799117. This seems strange, given that the density of a beta distribution with α=β=1.3206 closely approximates the histogram of the $p_i$ values. What went wrong?

General Case

I will now discuss the more general case, and why my simple beta approximation failed. Basically, by writing $(y|n,p)\sim\text{Binom}(n,p)$ and then mixing over $p$ with another distribution $p\sim f(\theta)$, we are actually making an important assumption - that we can approximate the actual probability with a single binomial probability - and the only problem that remains is which value of $p$ to use. One way to see this is to use a mixing density which is discrete uniform over the actual $p_i$. So we replace the beta distribution $p\sim\text{Beta}(a,b)$ with a discrete density $\sum_{i=1}^{16}w_i\,\delta(p-p_i)$. Then using the mixing approximation can be expressed in words as "choose a $p_i$ value with probability $w_i$, and assume all Bernoulli trials have this probability". Clearly, for such an approximation to work well, most of the $p_i$ values should be similar to each other. This basically means that for @wolfies' uniform set of values, $p_i=i/17$ results in a woefully bad approximation when using the beta mixing distribution. This also explains why the approximation is much better for $p_i=\sqrt{i}/17$ - those values are less spread out.

The mixing then uses the observed $p_i$ to average over all possible choices of a single $p$. Now because "mixing" is like a weighted average, it cannot possibly do any better than using the single best $p$. So if the $p_i$ are sufficiently spread out, there can be no single $p$ that could provide a good approximation to all of them.

One thing I did say in my other answer was that it may be better to use a mixture of beta distributions over a restricted range - but this still won't help here, because it is still mixing over a single $p$. What makes more sense is to split the interval $(0,1)$ up into pieces and have a binomial within each piece. For example, we could choose $(0,0.1,0.2,\dots,0.9,1)$ as our splits and fit a binomial within each 0.1 range of probability. Basically, within each split, we would fit a simple approximation, such as using a binomial with probability equal to the average of the $p_i$ in that range. If we make the intervals small enough, the approximation becomes arbitrarily good. But note that all this does is leave us with having to deal with a sum of independent binomial trials with different probabilities, instead of Bernoulli trials. However, the previous part of this answer showed that we can do the exact calculations provided that the number of binomials is sufficiently small, say 10-15 or so.
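A Python sketch of this grouped-binomial idea (my own illustration of the scheme just described, using the thread's $p_i=i/17$ example split into four groups of four): each group is replaced by a binomial with the group's average probability, and the group pmfs are then convolved:

```python
from math import comb

def binom_pmf(n, p):
    """pmf of Bin(n, p) as a list of length n + 1."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def convolve(a, b):
    """Distribution of the sum of two independent integer variables."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def grouped_approx(p, group_size):
    """Replace each block of `group_size` sorted probabilities by a
    Bin(group_size, mean p), then convolve the group pmfs."""
    dist = [1.0]
    for start in range(0, len(p), group_size):
        block = p[start:start + group_size]
        dist = convolve(dist, binom_pmf(len(block), sum(block) / len(block)))
    return dist

p = sorted(i / 17 for i in range(1, 17))
print(round(grouped_approx(p, 4)[9], 4))  # near the exact 0.1983
```

Even this crude four-group version lands within about 0.003 of the exact answer here, because within each group the probabilities are close to their average.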

To extend the Bernoulli-based answer to a binomial-based one, we simply "re-interpret" what the $Z_i$ variables are. We simply state that $Z_i=I(X_i>0)$ - this reduces to the original Bernoulli-based $Z_i$, but now says which binomials the successes are coming from. So the case $(Z_1=0,Z_2=0,Z_3=1)$ now means that all the "successes" come from the third binomial, and none from the first two.

Note that this is still "exponential" in that the number of calculations is something like $k^g$, where $g$ is the number of binomials and $k$ is the group size - so you have $Y=\sum_{j=1}^g X_j$ where $X_j\sim\text{Bin}(k,p_j)$. But this is better than the $2^{gk}$ that you'd be dealing with by using Bernoulli random variables. For example, suppose we split the $n=16$ probabilities into $g=4$ groups with $k=4$ probabilities in each group. This gives $4^4=256$ calculations, compared to $2^{16}=65536$.

By choosing $g=10$ groups, and noting that the limit was about $n=20$, which is about $10^7$ cells, we can effectively use this method to increase the maximum $n$ to $n=50$.

If we make a cruder approximation, by lowering g, we will increase the "feasible" size for n. g=5 means that you can have an effective n of about 125. Beyond this the normal approximation should be extremely accurate.


@momo - I think this is ok, as my answers are two different ways to approach the problem. This answer is not an edited version of my first one - it is just a different answer
probabilityislogic

1
For a solution in R that is extremely efficient and handles much, much larger values of n, please see stats.stackexchange.com/a/41263. For instance, it solved this problem for $n=10^4$, giving the full distribution, in under three seconds. (A comparable Mathematica 9 solution - see @wolfies' answer - also performs well for smaller n but could not complete the execution with such a large value of n.)
whuber

5

The (in general intractable) pmf is

$$\Pr(S=k)=\sum_{\substack{A\subseteq\{1,\dots,n\}\\|A|=k}}\left(\prod_{i\in A}p_i\right)\left(\prod_{j\in\{1,\dots,n\}\setminus A}(1-p_j)\right).$$
R code:
p <- seq(1, 16) / 17
cat(p, "\n")
n <- length(p)
k <- 9
S <- seq(1, n)
A <- combn(S, k)
pr <- 0
for (i in 1:choose(n, k)) {
    pr <- pr + exp(sum(log(p[A[,i]])) + sum(log(1 - p[setdiff(S, A[,i])])))
}
cat("Pr(S = ", k, ") = ", pr, "\n", sep = "")

For the $p_i$'s used in wolfies' answer, we have:

Pr(S = 9) = 0.1982677

When n grows, use a convolution.
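That convolution runs in $O(n^2)$ and scales to very large $n$. A Python sketch (my own illustration of the same recursion the R solutions implement; the $p_i=i/17$ values are the example above):

```python
def poisson_binomial_pmf(p):
    """Full pmf of S = sum of independent Bernoulli(p_i) variables,
    built by convolving in one trial at a time: O(n^2) overall."""
    dist = [1.0]
    for pi in p:
        new = [0.0] * (len(dist) + 1)
        for k, mass in enumerate(dist):
            new[k] += mass * (1 - pi)      # this trial fails
            new[k + 1] += mass * pi        # this trial succeeds
        dist = new
    return dist

pmf = poisson_binomial_pmf([i / 17 for i in range(1, 17)])
print(round(pmf[9], 7))  # 0.1982677
```

Each pass folds one more trial into the running distribution, so memory stays at $n+1$ probabilities and no subset enumeration is needed.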


1
Doing that with R code was really helpful. Some of us are more concrete-thinkers and it greatly helps to have an operational version of the generating function.
DWin

@DWin I provide efficient R code in the solution to the same problem (with different values of the pi) at stats.stackexchange.com/a/41263. The problem here is solved in 0.00012 seconds total computation time (estimated by solving it 1000 times) compared to 0.53 seconds (estimated by solving it once) for this R code and 0.00058 seconds using Wolfies' Mathematica code (estimated by solving it 1000 times).
whuber

So P(S=k) would follow a Poisson-Binomial Distribution.
fccoelho

+1 Very useful post in my attempt at answering this question. I was wondering if using logs is more of a cool mathematical formulation than a real need. I am not too concerned about running times...
Antoni Parellada
Licensed under cc by-sa 3.0 with attribution required.