CSE 422: Assignment #7

Compressive sensing and convex programming

Due: Wed, Feb 28, 11:59pm
Gradescope Link
Dropbox for code

You should use the Python cvxpy package for convex programming. See the installation instructions.

Problem 1: Compressive sensing

Consider the image wonderland-tree.png. If it’s easier to work with, the same image is provided as a 2D binary array in the file wonderland-tree.txt.

In the writeup, include your code for parts (b), (c), and (e).

(a) [4 points]

Let \(n\) be the total number of pixels and let \(k\) be the total number of 1’s. What is \(k/n\)? Recall that compressive sensing only works when the image is sparse, so we’re hoping \(k/n\) is much less than \(1\).
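Computing the sparsity ratio is a one-liner once the image is loaded. A minimal sketch, using a small synthetic binary array in place of the array you would load from wonderland-tree.txt (e.g. via np.loadtxt):

```python
import numpy as np

# A tiny synthetic binary "image" stands in for the array loaded
# from wonderland-tree.txt (e.g. x = np.loadtxt("wonderland-tree.txt")).
x = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])
n = x.size        # total number of pixels
k = int(x.sum())  # total number of 1's (entries are 0/1, so sum counts them)
print(k / n)      # for the real image, we hope this is much less than 1
```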

(b) [9 points]

Fix a \(1200 \times 1200\) matrix \(A\) of independently chosen \(N(0,1)\) entries. Let \(A_r\) denote the first \(r\) rows of \(A\). Let \(x\) be the image wonderland-tree represented as a vector of \(1200\) pixels. Our compressed image will be \(b_r = A_r x\).

Based on Lecture 12, write a linear program that recovers an approximation \(x_r\) to \(x\) from \(A_r\) and \(b_r\), and verify using numpy.allclose() that \(x_{700}=x\) up to numerical precision (in other words, that your recovery is “exact”).

(c) [7 points]

Let \(r^*\) be the smallest \(r\) such that \(\|x-x_r\|_1 < 0.001\). Find \(r^*\).

[Hint: Use binary search]
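Since recovery success is (empirically) monotone in \(r\), the hint amounts to a standard binary search for the first success. A sketch of the search itself, where succeeds(r) is an assumed helper (not provided here) that would solve the LP with \(A_r, b_r\) and check \(\|x-x_r\|_1 < 0.001\); a stand-in threshold predicate is used below so the sketch runs:

```python
def smallest_success(succeeds, lo, hi):
    """Return the smallest r in [lo, hi] with succeeds(r) True,
    assuming succeeds is monotone (False ... False True ... True)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if succeeds(mid):
            hi = mid       # mid works, so the answer is <= mid
        else:
            lo = mid + 1   # mid fails, so the answer is > mid
    return lo

# Stand-in predicate: pretend recovery first succeeds at r = 487
# (a made-up number, purely to exercise the search).
r_star = smallest_success(lambda r: r >= 487, 1, 1200)
print(r_star)  # -> 487
```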

(d) [5 points]

Plot \(\|x_i-x\|_1\) for \(i \in \{r^*-10,r^*-9,\ldots,r^*-1,r^*,r^*+1,r^*+2\}\). Is there a sharp drop off?
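The plotting itself can follow the matplotlib pattern used later in this assignment. A sketch with made-up placeholder errors (and a hypothetical \(r^* = 487\)) standing in for the actual values \(\|x_i - x\|_1\) from your LP recoveries:

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend for this sketch
import matplotlib.pyplot as plt

r_star = 487                                   # hypothetical value of r*
i_vals = list(range(r_star - 10, r_star + 3))  # r*-10, ..., r*+2 (13 points)
# Placeholder errors; in the assignment each entry is ||x_i - x||_1.
errors = [max(0.0, r_star - i) * 5.0 for i in i_vals]
plt.plot(i_vals, errors, marker="o")
plt.xlabel("number of measurements $i$")
plt.ylabel(r"$\|x_i - x\|_1$")
plt.savefig("recovery_error.png")
print(len(i_vals))
```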

(e) [7 points]

Now let \(A\) be a \(1200 \times 1200\) matrix whose entries are chosen independently from the distribution:

\[A_{ij} = \begin{cases} -1 & \textrm{ with probability } \frac{p}{2} \\ +1 & \textrm{ with probability } \frac{p}{2} \\ 0 & \textrm{ with probability } 1-p\,. \end{cases}\]

For each value of \(p \in [0,1]\), let \(\bar{r}(p)\) denote the average of \(r^*\) over \(5\) trials, where \(r^*\) is, as in part (c), the smallest \(r\) such that \(\|x-x_r\|_1 < 0.001\).
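Sampling a matrix from this three-point distribution is direct with numpy. A minimal sketch (the choice of generator seed is arbitrary):

```python
import numpy as np

# Draw a 1200 x 1200 matrix with i.i.d. entries that are -1 or +1
# with probability p/2 each and 0 with probability 1 - p.
rng = np.random.default_rng(0)
p = 0.5
A = rng.choice([-1, 1, 0], size=(1200, 1200), p=[p / 2, p / 2, 1 - p])
print(A.shape)
```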

Report the value of \(\bar{r}(0.5)\).

(f) [5 points]

Plot the value \(\bar{r}(p)\) for \(p \in \{0.2, 0.4, 0.6, 0.8, 1.0\}\).

(g) [6 points]

How does the average number of measurements \(\bar{r}(p)\) behave as a function of \(p\)? Can you offer an explanation?

Why would smaller values of \(p\) be better for implementing the measurements?

(g.EC) [2 points]

What is the smallest value \(p > 0\) such that \(r^*\) is still “reasonable” (in light of your previous experiments)?

Problem 2: Image reconstruction

See the PNG file bezos-corrupted.png. You will explore some methods for reconstructing the original image (which you can find here).

The following Python code will read in the image and build a mask of the known pixels:

from PIL import Image
from numpy import array
img = array(Image.open("bezos-corrupted.png"), dtype=int)
Known = (img > 0).astype(int)  # 1 where the pixel is known, 0 where corrupted

Using matplotlib, you should display Known and make sure it matches what you expect:

import matplotlib.pyplot as plt
plt.gray()
plt.imshow(Known)
plt.show()

(a) [8 points]

First explore the naive solution for image reconstruction. For every pixel in img that is 0 (unknown), replace it with the average of its (up to 4) known neighboring pixels. If it has no known neighbors, leave the pixel at 0. Submit your recovered image and a two- to three-sentence explanation of why the naive solution seems to perform poorly. How might you improve it?

Submit your code.
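The neighbor-averaging step above can be sketched as a single pass over the pixels. A minimal version on a tiny synthetic image standing in for bezos-corrupted.png (averages use only the original known values, one interpretation of the single-pass rule):

```python
import numpy as np

# Tiny synthetic corrupted image: 0 means the pixel is unknown.
img = np.array([[10, 0, 30],
                [ 0, 0,  0],
                [70, 0, 90]])
known = img > 0
out = img.astype(float).copy()
rows, cols = img.shape
for i in range(rows):
    for j in range(cols):
        if known[i, j]:
            continue  # known pixels are kept as-is
        # Collect the values of the (up to 4) known neighbors.
        neighbors = [img[a, b]
                     for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= a < rows and 0 <= b < cols and known[a, b]]
        # Average them; with no known neighbors, leave the pixel at 0.
        out[i, j] = sum(neighbors) / len(neighbors) if neighbors else 0.0
print(out)
```

Note that the center pixel stays 0 here because all four of its neighbors are also unknown, which illustrates why the naive method struggles on large corrupted regions.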

(b) [8 points]

The following code should reconstruct a decent Bezos.

from cvxpy import Variable, Minimize, Problem, multiply, tv
U = Variable(img.shape)
obj = Minimize(tv(U))
constraints = [multiply(Known, U) == multiply(Known, img)]
prob = Problem(obj, constraints)
prob.solve(verbose=True)
# the recovered image is now in U.value

Submit your recovered image and a 1-2 sentence comparison between this image and the one you reconstructed naively.

(c) [9 points]

Read about what the function tv does here.

Give a two- to three-sentence explanation of why it makes sense, conceptually, to use the \(\ell_2\) norm (as opposed to the \(\ell_1\) norm) for recovering a corrupted image.