We studied the \(\ell_1\) norm sparsification problem:
Given norms \(N_1,\ldots,N_m\) on \(\R^n\), define \(N(x) \seteq N_1(x) + \cdots + N_m(x)\). Then we seek a weight vector \(c \in \R_+^m\) with \(\# \{ j : c_j > 0 \}\) small such that
\[\max_{N(x) \leq 1} \left|N(x) - \sum_{j=1}^m c_j N_j(x)\right| \leq \e\,.\]
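One generic route to such a sparse \(c\) (a minimal sketch of importance sampling, not necessarily the construction from lecture) is to fix a probability vector \(p \in \R_+^m\), draw \(s\) indices i.i.d. from \(p\), and give each sampled index \(j\) weight \(1/(s p_j)\). Then \(\E\bigl[\sum_{j=1}^m c_j N_j(x)\bigr] = N(x)\) for every \(x\), so the question becomes how small \(s\) can be while the error concentrates. The function name and the input \(p\) below are illustrative, not taken from the lecture:

\begin{verbatim}
import numpy as np

def importance_sample_weights(p, s, seed=None):
    """Draw s indices i.i.d. from the probability vector p over
    {0, ..., m-1} and add 1/(s * p[j]) to c[j] for each draw, so
    E[sum_j c[j] * N_j(x)] = sum_j N_j(x) = N(x) for every x.
    The returned c has at most s nonzero entries.  Choosing p well
    is the whole game; here it is simply assumed to be given."""
    rng = np.random.default_rng(seed)
    m = len(p)
    c = np.zeros(m)
    for j in rng.choice(m, size=s, p=p):
        c[j] += 1.0 / (s * p[j])
    return c
\end{verbatim}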
The role of concentration was essential: if \(\mathbf{Z}\) is a log-concave random vector on \(\R^n\) and \(\|\cdot\|\) is any norm, then
\[\Pr\left(\|\mathbf{Z}\| > t\, \E[\|\mathbf{Z}\|]\right) \leq e^{-c_0 t \psi_n}\,,\]
where \(c_0 > 0\) is a universal constant and \(\psi_n\) is the KLS constant.
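As a quick numerical sanity check (a toy illustration, not from the lecture): a vector \(\mathbf{Z}\) with i.i.d. Laplace coordinates is log-concave, and with \(\|\cdot\| = \|\cdot\|_\infty\) the empirical tail \(\Pr(\|\mathbf{Z}\| > t\,\E\|\mathbf{Z}\|)\) indeed decays exponentially in \(t\):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 50_000
# i.i.d. Laplace coordinates: density prop. to exp(-||x||_1),
# so Z is log-concave.
Z = rng.laplace(size=(trials, n))
norms = np.abs(Z).max(axis=1)   # the l_infinity norm of each sample
mean = norms.mean()
for t in (1.0, 1.5, 2.0, 2.5):
    tail = (norms > t * mean).mean()
    print(f"t = {t:.1f}: empirical tail {tail:.1e}")
\end{verbatim}

Of course this probes only one distribution and one norm; the content of the theorem is the uniformity over all log-concave \(\mathbf{Z}\) and all norms.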
This sort of concentration is essentially equivalent to a uniform lower bound on the Cheeger constant of isotropic log-concave measures on \(\R^n\). (Next lecture.)