Why is entropy maximized when the probability distribution is uniform?


32

I know that entropy is a measure of the randomness of a process/variable, and that for a random variable $X$ taking values in a set $A$ it can be defined as $H(X) = -\sum_{x_i \in A} p(x_i)\log p(x_i)$. In the book on Entropy and Information Theory by MacKay, he provides this statement in Ch. 2:

Entropy is maximized if p is uniform.

Intuitively, I am able to understand it: if all datapoints in set $A$ are picked with equal probability $1/m$ ($m$ being the cardinality of $A$), then the randomness or entropy increases. But if we know that some points in $A$ will occur with higher probability than others (say, a normal distribution, where most of the mass is concentrated within a small band of standard deviations around the mean), then the randomness or entropy should decrease.
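A quick numeric check of this intuition (the `entropy` helper below is my own sketch, computing Shannon entropy in bits; the example distributions are made up):

```python
import math

def entropy(p):
    """Shannon entropy in bits; terms with p_i = 0 contribute 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # every point equally likely
skewed  = [0.7, 0.1, 0.1, 0.1]       # mass concentrated on one point

print(entropy(uniform))  # 2.0 bits = log2(4), the maximum for 4 outcomes
print(entropy(skewed))   # lower, since the outcome is more predictable
```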

But is there a mathematical proof of this? For example, can I take the equation for $H(X)$, differentiate it with respect to $p(x)$, and set the derivative to zero, or something like that?

On a side note, is there any connection between the entropy that occurs in information theory and the entropy calculations in chemistry (thermodynamics)?


2
This question is answered (in passing) at stats.stackexchange.com/a/49174/919.
whuber

I am getting quite confused by another statement, given in Christopher Bishop's book, which states that "for a single real variable, the distribution that maximizes the entropy is the Gaussian." It also states that "the multivariate distribution with maximum entropy, for a given covariance, is a Gaussian." How is this statement valid? Isn't the entropy of the uniform distribution always the maximum?
user76170

6
Maximization is always performed subject to constraints on the possible solution. When the constraints are that all probability must vanish beyond predefined limits, the maximum entropy solution is uniform. When instead the constraints are that the expectation and variance must equal predefined values, the ME solution is Gaussian. The statements you quote must have been made within particular contexts where these constraints were stated or at least implicitly understood.
whuber

2
I probably also should mention that the word "entropy" means something different in the Gaussian setting than it does in the original question here, for then we are discussing entropy of continuous distributions. This "differential entropy" is a different animal than the entropy of discrete distributions. The chief difference is that the differential entropy is not invariant under a change of variables.
whuber
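whuber's point can be checked numerically. Assuming `scipy` is available, the sketch below compares the differential entropy of three distributions all scaled to unit variance; the Gaussian comes out highest, while among densities confined to a fixed interval the uniform would win instead.

```python
import math
from scipy import stats

# Three distributions, each parametrized to have variance 1.
gaussian = float(stats.norm(scale=1.0).entropy())               # 0.5*ln(2*pi*e)
uniform  = float(stats.uniform(scale=math.sqrt(12)).entropy())  # width sqrt(12) -> variance 1
laplace  = float(stats.laplace(scale=1/math.sqrt(2)).entropy()) # b = 1/sqrt(2) -> variance 1

print(gaussian, laplace, uniform)  # the Gaussian is the largest
```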

So that means maximization is always with respect to constraints? What if there are no constraints? I mean, can't there be a question like this: which probability distribution has maximum entropy?
user76170

Answers:


25

Heuristically, the probability density function on $\{x_1, x_2, \dots, x_n\}$ with maximum entropy turns out to be the one that corresponds to the least amount of knowledge of $\{x_1, x_2, \dots, x_n\}$, in other words the uniform distribution.

Now, for a more formal proof consider the following:

A probability density function on $\{x_1, x_2, \dots, x_n\}$ is a set of nonnegative real numbers $p_1, \dots, p_n$ that add up to 1. Entropy is a continuous function of the $n$-tuples $(p_1, \dots, p_n)$, and these points lie in a compact subset of $\mathbb{R}^n$, so there is an $n$-tuple where entropy is maximized. We want to show this occurs at $(1/n, \dots, 1/n)$ and nowhere else.

Suppose the $p_j$ are not all equal, say $p_1 < p_2$. (Clearly $n \neq 1$.) We will find a new probability density with higher entropy. It then follows, since entropy is maximized at some $n$-tuple, that entropy is uniquely maximized at the $n$-tuple with $p_i = 1/n$ for all $i$.

Since $p_1 < p_2$, for small positive $\varepsilon$ we have $p_1 + \varepsilon < p_2 - \varepsilon$. The entropy of $\{p_1+\varepsilon, p_2-\varepsilon, p_3, \dots, p_n\}$ minus the entropy of $\{p_1, p_2, p_3, \dots, p_n\}$ equals

$$-p_1\log\left(\frac{p_1+\varepsilon}{p_1}\right)-\varepsilon\log(p_1+\varepsilon)-p_2\log\left(\frac{p_2-\varepsilon}{p_2}\right)+\varepsilon\log(p_2-\varepsilon)$$
To complete the proof, we want to show this is positive for small enough ε. Rewrite the above equation as
$$-p_1\log\left(1+\frac{\varepsilon}{p_1}\right)-\varepsilon\left(\log p_1+\log\left(1+\frac{\varepsilon}{p_1}\right)\right)-p_2\log\left(1-\frac{\varepsilon}{p_2}\right)+\varepsilon\left(\log p_2+\log\left(1-\frac{\varepsilon}{p_2}\right)\right)$$

Recalling that $\log(1+x) = x + O(x^2)$ for small $x$, the above equation is

$$-\varepsilon-\varepsilon\log p_1+\varepsilon+\varepsilon\log p_2+O(\varepsilon^2)=\varepsilon\log(p_2/p_1)+O(\varepsilon^2)$$
which is positive when $\varepsilon$ is small enough, since $p_1 < p_2$.
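The perturbation step can be checked numerically; a minimal sketch (natural-log entropy, example probabilities are my own):

```python
import math

def entropy(p):
    """Entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p1, p2, p3 = 0.2, 0.5, 0.3   # p1 < p2, as in the proof
eps = 0.01                    # small positive epsilon with p1 + eps < p2 - eps

# Entropy gain from moving eps of mass from the larger p2 to the smaller p1.
diff = entropy([p1 + eps, p2 - eps, p3]) - entropy([p1, p2, p3])
leading_term = eps * math.log(p2 / p1)   # the eps*log(p2/p1) term in the proof

print(diff, leading_term)  # both positive, agreeing up to O(eps^2)
```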

A less rigorous proof is the following:

Consider first the following Lemma:

Let $p(x)$ and $q(x)$ be continuous probability density functions on an interval $I$ in the real numbers, with $p \ge 0$ and $q > 0$ on $I$. We have

$$-\int_I p\log p\,dx \le -\int_I p\log q\,dx$$
if both integrals exist. Moreover, there is equality if and only if $p(x) = q(x)$ for all $x$.

Now, let $p$ be any probability density function on $\{x_1, \dots, x_n\}$, with $p_i = p(x_i)$. Letting $q_i = 1/n$ for all $i$,

$$-\sum_{i=1}^n p_i\log q_i = \sum_{i=1}^n p_i\log n = \log n$$
which is the entropy of $q$. Therefore our Lemma says $h(p) \le h(q)$, with equality if and only if $p$ is uniform.
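In the discrete case the Lemma says entropy is bounded by the cross-entropy against any reference distribution; a small numeric sketch with a uniform reference (the distribution $p$ is my own example):

```python
import math

def cross_entropy(p, q):
    """-sum p_i log q_i, in nats."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

n = 4
q = [1 / n] * n               # uniform reference distribution
p = [0.4, 0.3, 0.2, 0.1]      # any other distribution on the same n points

h_p = cross_entropy(p, p)     # the entropy of p
print(h_p, cross_entropy(p, q), math.log(n))  # h_p <= cross_entropy(p, q) = log(n)
```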

Also, wikipedia has a brief discussion on this as well: wiki


11
I admire the effort to present an elementary (Calculus-free) proof. A rigorous one-line demonstration is available via the weighted AM-GM inequality, by noting that $\exp(H) = \prod\left(\frac{1}{p_i}\right)^{p_i} \le \sum p_i\frac{1}{p_i} = n$, with equality holding iff all the $1/p_i$ are equal, QED.
whuber
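The identity behind this one-liner, $\exp(H)$ as a weighted geometric mean of the $1/p_i$, can be verified directly (the example distribution is my own):

```python
import math

p = [0.5, 0.25, 0.125, 0.125]                # any probability vector
H = -sum(pi * math.log(pi) for pi in p)      # entropy in nats

geo_mean = math.prod((1 / pi) ** pi for pi in p)   # weighted GM of the 1/p_i
arith_mean = sum(pi * (1 / pi) for pi in p)        # weighted AM of the 1/p_i = n

print(math.exp(H), geo_mean, arith_mean)  # exp(H) == GM <= AM == n
```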

I don't understand how $\sum_{i=1}^n p_i\log n$ can be equal to $\log n$.
user1603472

4
@user1603472 Do you mean $\sum_{i=1}^n p_i\log n = \log n$? It's because $\sum_{i=1}^n p_i\log n = \log n\sum_{i=1}^n p_i = \log n \times 1$.
HBeel

@Roland I pulled the $\log n$ outside of the sum since it does not depend on $i$. Then the sum is equal to $1$ because $p_1, \dots, p_n$ are the densities of a probability mass function.
HBeel

Same explanation with more details can be found here: math.uconn.edu/~kconrad/blurbs/analysis/entropypost.pdf
Roland

14

Entropy in physics and information theory are not unrelated. They're more different than the name suggests, yet there's clearly a link between them. The purpose of the entropy metric is to measure the amount of information. See my answer with graphs here to show how entropy changes from a uniform distribution to a humped one.

The reason entropy is maximized for a uniform distribution is that it was designed that way! Yes, we're constructing a measure for the lack of information, so we want to assign its highest value to the least informative distribution.

Example: I asked you, "Dude, where's my car?" Your answer is "It's somewhere in the USA between the Atlantic and Pacific Oceans." This is an example of the uniform distribution. My car could be anywhere in the USA. I didn't get much information from this answer.

However, if you told me, "I saw your car one hour ago on Route 66 heading from Washington, DC," this is not a uniform distribution anymore. The car is more likely to be within 60 miles of DC than anywhere near Los Angeles. There's clearly more information here.

Hence, our measure must assign high entropy to the first answer and lower entropy to the second. The uniform distribution must be the least informative one; it's basically the "I've no idea" answer.


7

The mathematical argument is based on Jensen's inequality for concave functions. That is, if $f(x)$ is a concave function on $[a,b]$ and $y_1, \dots, y_n$ are points in $[a,b]$, then:
$$n f\left(\frac{y_1+\cdots+y_n}{n}\right) \ge f(y_1)+\cdots+f(y_n)$$

Apply this for the concave function $f(x) = -x\log(x)$ and Jensen's inequality with $y_i = p(x_i)$, and you have the proof. Note that the $p(x_i)$ define a discrete probability distribution, so their sum is 1. What you get is $\log(n) \ge -\sum_{i=1}^n p(x_i)\log(p(x_i))$, with equality for the uniform distribution.
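A numeric check of this application of Jensen's inequality (the distribution is my own example):

```python
import math

def f(x):
    """f(x) = -x log x, concave on (0, 1]."""
    return -x * math.log(x)

p = [0.5, 0.2, 0.2, 0.1]      # y_i = p(x_i), summing to 1
n = len(p)

lhs = n * f(sum(p) / n)       # n * f(1/n) = log(n)
rhs = sum(f(pi) for pi in p)  # the entropy of p

print(lhs, rhs)  # lhs = log(n) >= rhs = H(p)
```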


1
I actually find the Jensen's inequality proof to be conceptually much deeper than the AM-GM one.
Casebash

4

On a side note, is there any connection between the entropy that occurs in information theory and the entropy calculations in chemistry (thermodynamics)?

Yes, there is! You can see the work of Jaynes and many others following his work (such as here and here, for instance).

But the main idea is that statistical mechanics (and other fields of science as well) can be viewed as the inference we do about the world.

As a further reading I'd recommend Ariel Caticha's book on this topic.


1

An intuitive explanation:

If we put more probability mass onto one event of a random variable, we will have to take some away from the other events. That event will have less information content and more weight; the others, more information content and less weight. Therefore the entropy, being the expected information content, will go down, since the event with lower information content is weighted more heavily.

As an extreme case, imagine one event getting a probability of almost one; then the other events will have a combined probability of almost zero, and the entropy will be very low.
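This extreme case is easy to see numerically; a sketch with four outcomes, one of which takes almost all the mass (the helper and numbers are my own):

```python
import math

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

for big in [0.25, 0.9, 0.999]:
    rest = (1 - big) / 3          # split the remainder over the other 3 events
    print(big, entropy([big, rest, rest, rest]))
# entropy falls from 2.0 bits (uniform) toward 0 as one event dominates
```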


0

Main idea: take the partial derivative with respect to each $p_i$, set them all to zero, and solve the resulting system of equations.

Take a finite number of $p_i$, $i = 1, \dots, n$, as an example. Denote $q = 1 - \sum_{i=1}^{n-1} p_i$ (so $q = p_n$).

$$H = -\sum_{i=1}^{n-1} p_i\log p_i - q\log q \qquad\Longrightarrow\qquad H\ln 2 = -\sum_{i=1}^{n-1} p_i\ln p_i - q\ln q$$
$$\frac{\partial (H\ln 2)}{\partial p_i} = \ln\frac{q}{p_i} = 0$$
Then $q = p_i$ for every $i$, i.e., $p_1 = p_2 = \cdots = p_n$.
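The derivative formula can be verified with a finite-difference check (the example probabilities are my own; entropy is taken in nats, which matches the $H\ln 2$ version and changes nothing but a constant factor):

```python
import math

def H(ps):
    """Entropy in nats."""
    return -sum(p * math.log(p) for p in ps)

def H_free(free):
    """Entropy as a function of the free parameters p_1..p_{n-1}; q absorbs the rest."""
    return H(list(free) + [1 - sum(free)])

free = [0.2, 0.3, 0.1]
q = 1 - sum(free)
h = 1e-7
for i, pi in enumerate(free):
    bumped = list(free)
    bumped[i] += h
    numeric = (H_free(bumped) - H_free(free)) / h
    analytic = math.log(q / pi)   # the answer's derivative; zero exactly when p_i = q
    print(numeric, analytic)
```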


I am glad you pointed out this is the "main idea," because it's only a part of the analysis. The other part, which might not be intuitive and actually is a little trickier, is to verify this is a global maximum by studying the behavior of the entropy as one or more of the $p_i$ shrinks to zero.
whuber
Licensed under cc by-sa 3.0 with attribution required.