Suppose that \(T \subseteq \R^n\) and that \(d_1\) and \(d_2\) are distances on \(T\).
Show that if \(e_h(T,d_1), e_h(T,d_2) \leq \alpha\), then \(e_{h+1}(T, d_3) \leq \alpha\), where \(d_3(x,y) \seteq \max(d_1(x,y), d_2(x,y))\).
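A hint on the counting, under the assumption (common in chaining arguments; adjust to the definition from lecture if it differs) that \(e_h\) is defined via partitions or nets of cardinality at most \(2^{2^h}\): combining a witness for \(e_h(T,d_1) \leq \alpha\) with a witness for \(e_h(T,d_2) \leq \alpha\), e.g., by taking the common refinement of two partitions, costs exactly one increment of \(h\), since
\[2^{2^h} \cdot 2^{2^h} = 2^{2^{h+1}}\,,\]
while each common cell inherits both a \(d_1\)- and a \(d_2\)-diameter bound, and hence a \(d_3\)-diameter bound, of \(\alpha\).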
Suppose that \(d_1\) and \(d_2\) are both induced by norms on \(\R^n\), i.e., \(d_1(x,y)=N_1(x-y)\) and \(d_2(x,y)=N_2(x-y)\).
Prove that \(e_{h+1}(T,d_2) \leq 2 e_h(T,d_1) e_h(U_1, d_2)\), where \(U_1\) is the unit ball of \(N_1\).
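A possible skeleton, stated loosely under the same cardinality convention as above: if \(S\) is a \(d_1\)-net for \(T\) at radius \(r \seteq e_h(T,d_1)\), then each ball \(\{x : d_1(x,s) \leq r\}\) is exactly the translate \(s + r\,U_1\), and a \(d_2\)-net for \(U_1\) at radius \(e_h(U_1,d_2)\) rescales to a \(d_2\)-net for \(s + r\,U_1\) at radius
\[r \cdot e_h(U_1,d_2) = e_h(T,d_1)\, e_h(U_1,d_2)\,.\]
The cardinalities multiply as in the previous part, and the factor \(2\) leaves room for a triangle inequality (e.g., for moving the resulting net points back into \(T\)).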
Suppose that \(X_1,X_2,\ldots,X_m\) is a family of centered (\(\E X_i = 0\) for every \(i \in [m]\)) random variables that are subgaussian with respect to a distance \(d\) on \(\{1,\ldots,m\}\) (with constant \(c = \Theta(1)\)). Suppose furthermore that \(d(i,j) \leq \Delta\) for all \(1 \leq i,j \leq m\).
First show that
\[\E \max_{i,j \in \{1,\ldots,m\}} |X_i-X_j| \lesssim \Delta \sqrt{\log m}\,.\]
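For this first bound, assuming the convention from lecture that subgaussian with respect to \(d\) means tails of the form \(\P[|X_i-X_j| > t] \leq 2 e^{-t^2/(C\, d(i,j)^2)}\) with \(C = \Theta(1)\), a union bound over the at most \(m^2\) pairs gives, for every \(s > 0\),
\[\P\left[\max_{i,j \in \{1,\ldots,m\}} |X_i-X_j| > s\right] \leq 2 m^2\, e^{-s^2/(C \Delta^2)}\,,\]
and integrating this tail starting from \(s \asymp \Delta \sqrt{\log m}\) yields the claim.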
Show that the first part implies
\[\E \max_{i \in \{1,\ldots,m\}} X_i \lesssim \Delta \sqrt{\log m}\,.\]
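One way to see the reduction (a hint, not the only route): since each \(X_j\) is centered, for any fixed \(j\),
\[\E \max_{i} X_i = \E \max_{i} (X_i - X_j) \leq \E \max_{i,j} |X_i - X_j|\,.\]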
Give an example to show that in general one cannot always have
\[\E \max_{i \in \{1,\ldots,m\}} |X_i| \lesssim \Delta \sqrt{\log m}\,.\]
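A standard example to keep in mind here: take \(X_1 = X_2 = \cdots = X_m = X\) for a single centered random variable \(X\) taking values \(\pm M\) with probability \(1/2\) each. All increments vanish, so the family is subgaussian with respect to any distance with \(\Delta\) arbitrarily small, while \(\E \max_i |X_i| = M\) can be made as large as desired.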
Suppose that \(X_1,\ldots,X_m\) are symmetric in the sense that \(X_j\) and \(-X_j\) have the same distribution for every \(j\). Show that
\[\E \max_{i \in \{1,\ldots,m\}} |X_i| \asymp \E |X_1| + \E \max_{i,j \in \{1,\ldots,m\}} |X_i-X_j|\,.\]
A function \(f : \R^n \to \R\) is log-concave if \(f(x)=e^{-h(x)}\) for a convex function \(h : \R^n \to \R\).
A random variable \(\mathbf{Z}\) on \(\R^n\) is called log-concave if its density is a log-concave function. A classic example is the \(n\)-dimensional Gaussian, whose density is proportional to \(e^{-\|x\|_2^2/2}\).
Suppose that \(\mu\) is a probability measure on \(\R^n\). One says that \(\mu\) is log-concave if, for all compact sets \(A,B \subseteq \R^n\), we have
\[\begin{equation}\label{eq:log-concave-measure} \log \mu(\lambda A + (1-\lambda )B) \geq \lambda \log \mu(A) + (1-\lambda) \log \mu(B)\,, \quad \forall \lambda \in [0,1]\,. \end{equation}\]A classic example of a log-concave measure is the volume measure restricted to a convex body \(K \subseteq \R^n\) of volume one:
\[\mu(A) \seteq \mathrm{vol}_n(A \cap K)\,.\]The two notions coincide: \(\mathbf{Z}\) is a log-concave random variable if and only if its distribution \(\mu\) is a log-concave probability measure.
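For the example \(\mu(A) = \mathrm{vol}_n(A \cap K)\), the inequality \eqref{eq:log-concave-measure} can be traced back to the Brunn–Minkowski inequality
\[\mathrm{vol}_n\!\left(\lambda A' + (1-\lambda) B'\right)^{1/n} \geq \lambda\, \mathrm{vol}_n(A')^{1/n} + (1-\lambda)\, \mathrm{vol}_n(B')^{1/n}\,,\]
applied to \(A' = A \cap K\) and \(B' = B \cap K\), together with the convexity of \(K\) (which gives \(\lambda A' + (1-\lambda) B' \subseteq (\lambda A + (1-\lambda) B) \cap K\)) and the weighted AM–GM inequality.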
In this problem, you will prove that if \(\mathbf{Z}\) is a log-concave random variable, then every 1-dimensional marginal is exponentially concentrated. More formally: For every \(u \in \R^n\), you want to show that there is some universal constant \(c > 0\) such that for all \(t > 0\),
\[\begin{equation}\label{eq:psi1} \P\left[\abs{\langle u,\mathbf{Z}\rangle} > t\, \E \abs{\langle u,\mathbf{Z}\rangle}\right] \leq 2 e^{-c t}\,. \end{equation}\]
As a tool, you will use a generalization of Borell’s lemma, which says that if a symmetric convex set \(A\) takes up a substantial portion of a convex body \(K\), then the dilate \(tA\) captures almost all of \(K\), with the excluded mass decaying exponentially in \(t\). We use the notation \(S^c = \R^n \setminus S\) for the complement of \(S\).
Borell’s Lemma: Let \(K\) be a convex body of volume \(1\) in \(\R^n\), and let \(A\) be a closed, convex, symmetric set such that \(\mathrm{vol}_n(K \cap A) = \delta > 1/2\). Then for every \(t > 1\)
\[\mathrm{vol}_n\left(K \cap (tA)^c\right) \leq \delta \left(\frac{1-\delta}{\delta}\right)^{(t+1)/2}\]First, you should prove a generalization of Borell’s Lemma to the setting of log-concave probability distributions.
Lemma: Let \(\mu\) be a log-concave probability distribution on \(\R^n\). Then for any symmetric convex set \(A \subseteq \mathbb{R}^n\) with \(\alpha \seteq \mu(A) \in (0,1)\), and any \(t > 1\), we have
\[1 - \mu(t A) \leq \alpha \left(\frac{1-\alpha}{\alpha}\right)^{(t+1)/2} \leq \left(\frac{1}{\alpha} - 1\right)^{(t-1)/2}\]To prove it, first show that
\[\frac{2}{t+1} (tA)^c + \frac{t-1}{t+1} A \subseteq A^c\,,\]and then apply \eqref{eq:log-concave-measure}.
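To verify the displayed inclusion (a short computation, in case the quantifiers are confusing): suppose \(z = \frac{2}{t+1} x + \frac{t-1}{t+1} y\) with \(x \in (tA)^c\) and \(y \in A\). If \(z\) were in \(A\), then
\[x = \frac{t+1}{2}\, z - \frac{t-1}{2}\, y \in \frac{t+1}{2} A + \frac{t-1}{2} A = t A\,,\]
using that \(A\) is symmetric (so \(-y \in A\)) and convex (so \(\lambda A + \mu A = (\lambda + \mu) A\) for \(\lambda,\mu \geq 0\)), contradicting \(x \notin tA\).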
Now prove \eqref{eq:psi1} by applying your lemma to the set
\[A \seteq \left\{ x \in \R^n : \abs{\langle u, x\rangle} < 3 \E \abs{\langle u,\mathbf{Z}\rangle} \right\}.\](Note that Markov's inequality guarantees \(\mu(A) \geq 2/3\).) Suppose that \(\mathbf{X}\) is an isotropic, log-concave random variable with \(\E[\mathbf{X}]=0\). This means that the density of \(\mathbf{X}\) is \(e^{-f(x)}\) for some convex function \(f : \R^n \to \R\), and that \(\E[\mathbf{X}\mathbf{X}^{\top}] = I\).
If \(\psi_n\) is the KLS constant in \(n\) dimensions, then it is known (from early work of Gromov and Milman) that for any \(L\)-Lipschitz function \(F : \R^n \to \R\),
\[\begin{equation}\label{eq:gm} \P\left[|F(\mathbf{X})-\E F(\mathbf{X})| > t\right] \leq 2 \exp\left(-\frac{c t}{\psi_n L}\right). \end{equation}\]Here \(c > 0\) is some fixed constant, and a function \(F\) is \(L\)-Lipschitz if \(|F(x)-F(y)|\leq L \|x-y\|_2\) for all \(x,y \in \R^n\).
Suppose that \(\mathbf{Z}\) is a log-concave (but not necessarily isotropic) random variable with \(\E[\mathbf{Z}]=0\). Define the covariance matrix \(A = \E[\mathbf{Z} \mathbf{Z}^{\top}]\) and let \(\mathbf{X} \seteq A^{-1/2} \mathbf{Z}\). Prove that if \(A\) has full rank, then \(\mathbf{X}\) is isotropic.
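The covariance computation itself is one line:
\[\E[\mathbf{X}\mathbf{X}^{\top}] = A^{-1/2}\, \E[\mathbf{Z}\mathbf{Z}^{\top}]\, A^{-1/2} = A^{-1/2} A\, A^{-1/2} = I\,;\]
the substance of the problem is checking that \(\mathbf{X}\) is still log-concave (how does a density transform under an invertible linear map?).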
Let \(\mathcal{N}\) be a norm on \(\R^n\) and denote by \(\mathcal{N}^*\) its dual norm, which is defined by
\[\mathcal{N}^*(y) \seteq \max \left\{ \langle x,y\rangle : \mathcal{N}(x) \leq 1 \right\}.\]Prove that \(\mathcal{N}(x) = \max \left \{ \langle x,y\rangle : \mathcal{N}^*(y) \leq 1 \right\}\).
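As a concrete example to anchor the definitions: for \(\mathcal{N} = \|\cdot\|_1\) one gets \(\mathcal{N}^* = \|\cdot\|_{\infty}\), and the claimed identity reads
\[\|x\|_1 = \max\left\{ \langle x,y\rangle : \|y\|_{\infty} \leq 1 \right\}\,,\]
with the maximum attained at \(y = (\mathrm{sign}(x_1),\ldots,\mathrm{sign}(x_n))\).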
Use this formulation of \(\mathcal{N}(x)\) to prove that
\[\mathcal{N}(A^{1/2} x) \leq \|x\|_2 \max \left\{ \|A^{1/2} w\|_2 : \mathcal{N}^*(w) \leq 1 \right\}.\]
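A sketch of the chain one might aim for here (using that \(A^{1/2}\) is symmetric):
\[\mathcal{N}(A^{1/2} x) = \max_{\mathcal{N}^*(w) \leq 1} \langle A^{1/2} x, w\rangle = \max_{\mathcal{N}^*(w) \leq 1} \langle x, A^{1/2} w\rangle \leq \|x\|_2 \max_{\mathcal{N}^*(w) \leq 1} \|A^{1/2} w\|_2\,,\]
where the last step is Cauchy–Schwarz.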
Show that for any \(w \in \R^n\),
\[\|A^{1/2} w\|_2 \leq 2 \mathcal{N}^*(w) \E[\mathcal{N}(\mathbf{Z})].\]You should use the following fact: If \(\mathbf{Y}\) is a mean-zero, real-valued, log-concave random variable, then
\[(\E[\mathbf{Y}^2])^{1/2} \leq 2 \E[\abs{\mathbf{Y}}]\,.\]You may use the fact that if \(\mathbf{Z}\) is log-concave, then the random variable \(\langle w,\mathbf{Z}\rangle\) is also log-concave for any \(w \in \R^n\).
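A useful identity when applying this fact to \(\mathbf{Y} = \langle w, \mathbf{Z}\rangle\):
\[\|A^{1/2} w\|_2^2 = \langle w, A w\rangle = \E\left[\langle w, \mathbf{Z}\rangle^2\right]\,,\]
after which it remains to compare \(\E\abs{\langle w,\mathbf{Z}\rangle}\) with \(\mathcal{N}^*(w)\, \E[\mathcal{N}(\mathbf{Z})]\) via the duality inequality \(\langle w, z\rangle \leq \mathcal{N}^*(w)\, \mathcal{N}(z)\).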
Combine parts (C) and (D) to show that the map \(x \mapsto \mathcal{N}(A^{1/2} x)\) is \(L\)-Lipschitz for \(L = 2 \E[\mathcal{N}(\mathbf{Z})]\).
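The Lipschitz claim reduces, via the triangle inequality for \(\mathcal{N}\), to a bound on a single difference:
\[\left|\mathcal{N}(A^{1/2} x) - \mathcal{N}(A^{1/2} y)\right| \leq \mathcal{N}\!\left(A^{1/2}(x-y)\right) \leq \|x-y\|_2 \cdot 2\, \E[\mathcal{N}(\mathbf{Z})]\,,\]
where the last inequality combines parts (C) and (D).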
Use part (E) and \eqref{eq:gm} to prove an upper bound on
\[\Pr\left[\left|\mathcal{N}(\mathbf{Z})-\E[\mathcal{N}(\mathbf{Z})]\right| \geq t\right],\]where \(\mathbf{Z}\) is any log-concave random variable.
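For orientation, the shape of the bound one expects, after centering and reducing to the case of full-rank \(A\): applying \eqref{eq:gm} to \(F(x) \seteq \mathcal{N}(A^{1/2} x)\) and \(\mathbf{X} = A^{-1/2}\mathbf{Z}\) (so that \(F(\mathbf{X}) = \mathcal{N}(\mathbf{Z})\)) with \(L = 2\E[\mathcal{N}(\mathbf{Z})]\) gives
\[\Pr\left[\left|\mathcal{N}(\mathbf{Z})-\E[\mathcal{N}(\mathbf{Z})]\right| \geq t\right] \leq 2 \exp\left(-\frac{c\, t}{2\, \psi_n\, \E[\mathcal{N}(\mathbf{Z})]}\right).\]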