Entropy: An Introduction
As the first digital computers were being invented in the 1940s and researchers like Alan Turing were formalizing their capabilities, the scientific and engineering communities needed a way to quantify previously informal, intuitive notions of information and communication. In 1948 Claude Shannon, during a stint at Bell Labs (also the birthplace of the transistor and the Unix operating system, among other major achievements in computer science), provided the first comprehensive framework for doing so in his landmark paper A Mathematical Theory of Communication. With the tools provided by this theory (now known as Information Theory), we can answer important questions about the limits of modern computers, such as: How can we store information (say, a file on our hard drive) in the most efficient way? Or, how do we successfully communicate information (say, a text message) over a channel (e.g., our cell network) that may not be 100% reliable?
We’ve provided a couple of examples of “information” in the abstract sense, but how can we define information in a formal, precise way? We’ll start with an example. Let’s say the city of Seattle has decided to introduce a new service, whereby every morning each resident gets a text message with the day’s weather forecast. If it’s wintertime, when the weather is overcast ~100% of the time, sending the text is redundant; a resident will always be correct if they have basic knowledge of the city’s weather patterns and assume that every day will be overcast. In other words, the forecast won’t provide any new information for city residents, since there’s no uncertainty in the outcome.
Now, suppose instead we’re in a different season with multiple possibilities for the day’s weather. In this case our service will actually be useful: residents can no longer just assume that the weather will be overcast every day and be correct 100% of the time. Just how useful is a given forecast? Suppose we’re in the summer, and the weather (optimistically) is sunny 75% of the time and overcast 25% of the time. In this case, a resident would be correct on most days if they naively assumed the day’s weather would be sunny, and a forecast of a sunny day thus doesn’t provide much new information compared to the naive baseline. Correspondingly, due to its relative rarity, a forecast indicating a cloudy day would be much more informative, as the naive prediction would be incorrect in this case.
From the previous example we see that the information transmitted when communicating the outcome of an event is intimately related to the frequency of that outcome. With this idea in mind, we introduce our first concept in information theory, self-information.
Let’s say we have some discrete random variable $X \sim p(x)$ taking on a set of potential values $x \in \mathcal{X}$ with a corresponding probability mass function $p(x)$. We then define the self-information of an outcome $x$ as

$I(x) = -\log p(x)$
We usually take the $\log$ in the above definition to be base 2, so self-information is measured in bits. At first glance our definition of $I(x)$ may seem arbitrary, but we can see that it captures the ideas we had in our thought experiment earlier, along with some other nice properties. As $p(x) \to 1$, we have that $I(x) \to 0$. This aligns with our intuition that information about an event is only transferred from a source to a recipient if the recipient was not already aware of that event’s outcome. Similarly, as $p(x) \to 0$, we have that $I(x) \to \infty$, reflecting that the communication of a rare outcome is more informative than that of a common one.
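These limiting behaviors are easy to check numerically. Below is a minimal sketch in Python (the helper name `self_information` is our own, not part of any standard library):

```python
import math

def self_information(p: float) -> float:
    """Self-information I(x) = -log2 p(x), measured in bits."""
    return -math.log2(p)

# A near-certain outcome carries almost no information,
# while rarer outcomes carry more and more.
for p in (0.99, 0.5, 0.25, 0.01):
    print(p, self_information(p))
```

As $p$ shrinks from near 1 toward 0, the printed values grow from roughly 0 bits toward infinity, matching the limits above.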
Finally, suppose we wish to communicate the outcomes of two independent events $X_1, X_2 \stackrel{\text{i.i.d.}}{\sim} p(x)$. Since the outcomes of $X_1$ and $X_2$ are independent of each other, we’d expect that a compound message containing both outcomes would convey an amount of information equal to the sum of the information in the two individual messages. The self-information satisfies this property too, since for some joint outcome $(x_1, x_2) \in \mathcal{X} \times \mathcal{X}$ we have
$\begin{aligned} I(x_1, x_2) &= -\log p(x_1, x_2)\\ &= -\log\big(p(x_1)p(x_2)\big)\\ &= -[\log p(x_1) + \log p(x_2)]\\ &= I(x_1) + I(x_2) \end{aligned}$
which matches what we’d expect of a measure of information.
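The additivity property can be verified directly. In this sketch (probabilities chosen purely for illustration), the probability of the joint outcome is the product of the marginals, so the self-information of the pair equals the sum of the individual values:

```python
import math

def self_information(p):
    """I(x) = -log2 p(x) in bits."""
    return -math.log2(p)

# Two independent outcomes with probabilities 1/4 and 1/8:
p1, p2 = 0.25, 0.125
joint = self_information(p1 * p2)  # joint probability is the product
separate = self_information(p1) + self_information(p2)
print(joint, separate)  # 5.0 5.0
```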
Now, suppose the outcome isn’t known ahead of time, but we still want a sense of how much information we’ll be transmitting. One simple way to do this is to weight the self-information $I(x)$ of each outcome by the probability of that outcome $p(x)$. This gives us our second major concept in information theory, entropy:

$H(X) = -\sum_{x \in \mathcal{X}}p(x)\log p(x)$
By convention we assign a value of $0$ to $p(x)\log p(x)$ when $p(x) = 0$, even though technically $0\log 0$ is undefined. We can justify this by continuity, as it can be shown that $p(x) \log p(x) \to 0$ as $p(x) \to 0^{+}$.^{1} This too aligns with our intuition: an outcome with no chance of occurring doesn’t have any effect on the variability of our event $X$.
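A direct implementation makes this convention concrete. The sketch below (the function name is our own) simply skips zero-probability terms, which is exactly the $0\log 0 = 0$ convention:

```python
import math

def entropy(probs):
    """H(X) = -sum_x p(x) log2 p(x), with the 0*log(0) = 0 convention."""
    # Skipping p == 0 terms implements the convention justified by continuity.
    return sum(-p * math.log2(p) for p in probs if p > 0)

# The zero-probability outcome contributes nothing:
print(entropy([0.5, 0.5, 0.0]))  # 1.0, same as a fair coin
print(entropy([1.0]))            # 0.0, a deterministic outcome
```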
Entropy is quite possibly the “fundamental unit” of information theory, and it’ll continue coming up in all kinds of interesting ways. We can think of $H(X)$ as quantifying how much we’ll be “surprised” by the outcome of $X$ on average. An example will help illustrate this idea.
Suppose $X \sim \text{Bern}(p)$, so that $P(X = 1) = p$ and $P(X = 0) = 1 - p$. Plugging into our formula for entropy, we have
$H(X) = -p\log p - (1-p)\log(1-p)$
To build up our intuition we can visualize the entropy for different values of $p$ in the following graph (where we let $H(p)$ denote $H(X)$ as a function of $p$):
As we would expect, $H(X)$ is 0 when $X$ is deterministic ($p = 0$ or $p = 1$), as there’s no potential for surprise in such cases. On the other hand, it’s maximized at $p = 1/2$, where the variability in outcomes is greatest. We’ll work through another, more complicated example before moving on.
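We can confirm the shape of this curve numerically. A small sketch (function name ours) evaluates the binary entropy on a grid and locates its maximum:

```python
import math

def binary_entropy(p):
    """H(p) for X ~ Bern(p), with H(0) = H(1) = 0 by the 0*log(0) convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Entropy vanishes at the deterministic endpoints and peaks at p = 1/2.
grid = [k / 100 for k in range(101)]
best = max(grid, key=binary_entropy)
print(best, binary_entropy(best))  # 0.5 1.0
```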
Suppose now $X \sim \text{Geom}(p)$, i.e., $X$ counts the number of independent trials needed until the first success, where each trial succeeds with probability $p$, so that $P(X = k) = q^{k-1}p$ for $k = 1, 2, \ldots$ with $q = 1 - p$. Plugging into our formula for entropy we have
$\begin{aligned} H(X) &= -\sum_{k=1}^{\infty}q^{k-1}p\cdot \log\big(q^{k-1}p\big) \\ &= -\bigg[\sum_{k=1}^{\infty}pq^{k-1}\log p + \sum_{k=1}^{\infty}(k-1)pq^{k-1}\log q\bigg]\\ &= -\bigg[\sum_{k=0}^{\infty}pq^{k}\log p + \sum_{k=0}^{\infty}kpq^{k}\log q\bigg] \\ &= -\bigg[p\log p\sum_{k=0}^{\infty}q^{k} + p\log q\sum_{k=0}^{\infty}kq^{k}\bigg] \end{aligned}$
From calculus, we know that $\sum_{n=0}^{\infty}r^{n} = \frac{1}{1-r}$ and $\sum_{n=0}^{\infty}nr^{n} = \frac{r}{(1-r)^2}$ for any real number $r$ with $|r| < 1$. This gives us
$\begin{aligned} H(X) &= -\bigg[\frac{p\log p}{p} + \frac{pq\log q}{p^2}\bigg] \\ &= -\bigg[\frac{p\log p + q\log q}{p}\bigg]\\ &= -\bigg[\frac{p\log p + (1-p)\log (1-p)}{p}\bigg]\\ \end{aligned}$
If we let $Y \sim \text{Bern}(p)$, this further simplifies to
$\begin{aligned} H(X) &= \frac{H(Y)}{p} \end{aligned}$
and we visualize our entropy for different values of $p$ below:
As we might expect, $H(p) \to 0$ as $p \to 1$ since there’s less variability in the number of trials we need until our first success. On the other hand $H(p) \to \infty$ as $p \to 0$, which reflects the fact that it’s much harder to predict the number of trials required to reach a success as the probability of success is lowered.
One immediate consequence of the definition of entropy is the following: the entropy of any discrete random variable is nonnegative, i.e., $H(X) \geq 0$.
Proof:
Going back to our definition of entropy:
$H(X) = -\sum_{x \in \mathcal{X}}p(x)\log p(x)$
Since our $p(x)$ terms are probabilities, we must have $0 \leq p(x) \leq 1$. Since $\log x \leq 0$ for $0 < x \leq 1$, it follows that $\sum_{x \in \mathcal{X}}p(x)\log p(x) \leq 0$. The negative sign in the definition of $H(X)$ then gives $H(X) \geq 0$, completing the proof. $\square$
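We can spot-check this fact on randomly generated distributions. The sketch below (an illustration only, not part of the proof) normalizes random weights into a probability vector and verifies the entropy is never negative:

```python
import math
import random

def entropy(probs):
    """H(X) = -sum_x p(x) log2 p(x), skipping zero-probability terms."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

random.seed(0)
for _ in range(1_000):
    weights = [random.random() for _ in range(6)]
    total = sum(weights)
    probs = [w / total for w in weights]  # a valid distribution
    assert entropy(probs) >= 0
print("H(X) >= 0 held for all sampled distributions")
```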
This fact will prove very useful as we proceed in the course, so it’s good to commit to memory now.
The reader may have noticed that our definition of entropy is nearly identical to that of the expected value of a random variable, differing only in the presence of the $\log$ function. This raises the question: can we reconcile these two concepts? First, as a reminder, we define the expected value of a discrete random variable $X$ as

$E[X] = \sum_{x \in \mathcal{X}} x\,p(x)$
Now, suppose we apply some deterministic function $g$ to our random variable $X$; this gives us a new random variable $g(X)$. Since the randomness in $g(X)$ is entirely due to the randomness in $X$, intuitively we should be able to compute $E[g(X)]$ using $X$’s distribution $p(x)$ without needing to compute $g(X)$’s distribution. It turns out that our intuition here is correct; this is a well-known result in probability theory that is used enough to have its own name, the Law of the Unconscious Statistician (LOTUS):

$E[g(X)] = \sum_{x \in \mathcal{X}} g(x)p(x)$
Despite the intuitive nature of this theorem, proving it requires some work, which we omit here and leave as an exercise for the reader.^{2} Setting $g(X) = -\log p(X) = I(X)$ and using LOTUS we then have

$H(X) = -\sum_{x \in \mathcal{X}} p(x)\log p(x) = E[-\log p(X)] = E[I(X)]$
which allows us to formally justify our notion of entropy as the average amount of information contained in the outcome of a random variable.
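The identity $H(X) = E[I(X)]$ can be seen empirically: by the law of large numbers, the sample average of the surprise $-\log_2 p(X)$ over many draws converges to its expectation, which LOTUS tells us is the entropy. A sketch with a small toy distribution of our own choosing:

```python
import math
import random

# A toy distribution (our own example).
pmf = {"a": 0.5, "b": 0.25, "c": 0.25}

# Entropy computed from the definition...
H = sum(-p * math.log2(p) for p in pmf.values())

# ...and as the empirical average of the surprise -log2 p(X) over samples.
random.seed(42)
samples = random.choices(list(pmf), weights=list(pmf.values()), k=200_000)
avg_surprise = sum(-math.log2(pmf[x]) for x in samples) / len(samples)

print(H, avg_surprise)  # both ≈ 1.5 bits
```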
We’ll go over one final example before concluding these notes. Suppose we want to communicate the outcome of an event represented by a random variable $X$. Ideally (to save money, power, etc.) we’d like to do so with as short a message as possible on average. Naturally, our encoding scheme should use fewer bits to represent high-probability events and more bits to encode low-probability ones. It turns out (we’ll prove this later) that any coding scheme for this problem requires at least $E[I(X)]$ bits on average to send our message without corrupting its contents. In other words, $H(X) = E[I(X)]$ provides a lower bound on the average number of bits required to guarantee that our message will be transmitted successfully. While we won’t have the tools to prove this until later, a simple example will help illustrate this idea.
Suppose $X$ takes one of four values $a, b, c, d$ with probabilities $1/2$, $1/4$, $1/8$, and $1/8$, respectively. How could we devise a code to transmit the outcome of $X$ in this case? One possible scheme is as follows: If $X = a$, we send a 1 and end transmission; otherwise we send a 0 and continue. Similarly, if $X = b$, we then send a 1 and stop; otherwise we send another 0. Finally, if $X = c$ we send a 1 and stop; otherwise we send a 0 and stop. Under this scheme, we send a total of 1 bit half the time, 2 bits 1/4 of the time, and 3 bits 1/4 of the time, giving us a total of 1.75 bits on average.
Each time we ask our “questions” in this scheme, we’re eliminating half of the remaining probability space. This is very similar to the way that binary search eliminates half the possible solution space at each step of the algorithm. Moreover, just as binary search is an optimal algorithm (i.e., takes the fewest number of steps possible to still guarantee a correct answer), we would expect our encoding scheme to be optimal for $X$.
We can verify this intuition by plugging into our formula for entropy
$H(X) = -\frac{1}{2}\log\frac{1}{2}-\frac{1}{4}\log\frac{1}{4}-\frac{1}{8}\log\frac{1}{8}-\frac{1}{8}\log\frac{1}{8} = 1.75 \text{ bits}$
Since our average number of bits needed to encode the outcome of $X$ can’t be less than $H(X)$, our proposed scheme is indeed optimal.
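We can check this bookkeeping in a few lines of Python (labeling the fourth outcome $d$, and writing out the codewords that the scheme above implies):

```python
import math

# Codewords implied by the scheme: send 0s until a 1 (or three 0s total).
code = {"a": "1", "b": "01", "c": "001", "d": "000"}
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Average codeword length under the source distribution...
avg_bits = sum(probs[s] * len(code[s]) for s in code)

# ...compared against the entropy lower bound.
H = sum(-p * math.log2(p) for p in probs.values())
print(avg_bits, H)  # 1.75 1.75
```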
Footnotes:

One potential proof is to apply L’Hôpital’s rule from calculus ↩