We can use Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) to poorly compress an image of a rubber ducky and learn some concepts about eigenvectors and eigenvalues at the same time.
SVD breaks an MxN matrix A up into the product of 3 matrices (written UΣV*), which are:
- U - An MxM unitary matrix. Each column is an eigenvector of AA* (a left singular vector).
- Σ (or S) - An MxN diagonal matrix. Its diagonal entries are the singular values, the square roots of the eigenvalues of A*A.
- V* - The conjugate transpose of V, an NxN unitary matrix. Each row is an eigenvector of A*A (a right singular vector).
Multiplied together, these give you back the original matrix. Matlab is a decent choice of language here, since it's literally two lines to get the SVD.
| %% Do SVD on a grayscale MxN matrix |
| img = imread('Rubber_Ducky.jpg'); |
| [U,S,V] = svd(double(rgb2gray(img))); % svd needs a single or double matrix |
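We can sanity-check both claims, that the factors multiply back to the original and that the columns of U are eigenvectors, with a couple more lines (a quick check, assuming the variables above are still in the workspace):
| A = double(rgb2gray(img)); |
| norm(A - U*S*V', 'fro') % ~0: the product recovers A up to floating point error |
| norm(A*A'*U(:,1) - S(1,1)^2*U(:,1)) % ~0: column 1 of U is an eigenvector of AA* with eigenvalue S(1,1)^2 |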
It wasn't immediately clear to me what this decomposition means, or why it's useful. One of my friends had done a quick project to build intuition for PCA by using it to compress images. Note that this type of compression doesn't provide any guarantees about preserving features.
Why does this work? What are eigenvalues and eigenvectors all about? Wikipedia comes to the rescue. One way to visualize it: if we have a bunch of points in an n-dimensional space (for simplicity we can imagine just a 2D space), what sort of statistical inferences could we make about these points? Suppose the points are not uniformly distributed, but form a blob (say, a normal distribution). One statistic would be the average center of all the points, which would be the centroid of this blob. Now if we look at the way the points are distributed about this center, say the majority of the points fit in an ovaloid shape:
[Figure: From the wiki page on PCA]
We can find two directions, or vectors, which define this ovaloid, and two values describing the strength of those directions. Since we're in a 2D space, these are the 2 eigenvectors and eigenvalues, which are ways of describing the data distribution. With just the eigenvectors and eigenvalues we have gained an understanding of the cluster of points that goes beyond storing the raw values, and we can use that understanding to recreate an approximation of the system.
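This is easy to play with directly. Here's a minimal sketch (the point cloud is made up for illustration) that builds a 2D blob and pulls its two eigenvectors and eigenvalues out of the covariance matrix:
| %% Eigenvectors/eigenvalues of a 2D blob of points |
| pts = randn(1000, 2) * [2 0; 1 0.5]; % a correlated, ovaloid cloud |
| centered = pts - mean(pts); % move the centroid to the origin |
| [vecs, vals] = eig(cov(centered)); % eigen-decompose the covariance |
| % columns of vecs are the ovaloid's two directions, |
| % diag(vals) their strengths (the variance along each direction) |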
This concept can be applied to many forms of data, such as the large cluster of points that are the RGB or grayscale values of an image. We're finding a way of describing the data distribution of the image's pixels, such that we can recreate the image from that description.
PCA allows you to take a subset of columns from U, values along the diagonal of Σ, and rows from V* instead of the whole matrices. When we multiply these subsets of eigenvector columns, singular values, and eigenvector rows together, we get an approximation of the original image; the more components we keep, the closer to the original we get. This means we can store less data and still recreate the original image at varying levels of approximation, a type of lossy compression.
| % Create regenerated image from the first p components |
| Ug = U(:, 1:p); |
| Sg = S(1:p, 1:p); |
| Vgt = V(:, 1:p)'; % the first p rows of V* |
| regenerated_img = Ug*Sg*Vgt; |
Where p is the number of components we wish to keep. Using Matlab I wrote a little script to generate an approximate image for every possible p and put them together into a little gif.
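To see why this counts as compression: the truncated factors take p*(M + N + 1) numbers instead of M*N, so for a hypothetical 512x512 image, keeping p = 50 components stores 51,250 values instead of 262,144, roughly a fifth of the data. The gif loop itself could look something like this (a sketch rather than the original script; the file name and frame delay are made up):
| %% Write one gif frame per rank-p approximation |
| A = double(rgb2gray(imread('Rubber_Ducky.jpg'))); |
| [U, S, V] = svd(A); |
| for p = 1:min(size(S)) |
|     frame = uint8(U(:, 1:p) * S(1:p, 1:p) * V(:, 1:p)'); |
|     [ind, map] = gray2ind(frame, 256); % gif needs an indexed image |
|     if p == 1 |
|         imwrite(ind, map, 'ducky.gif', 'gif', 'LoopCount', Inf, 'DelayTime', 0.05); |
|     else |
|         imwrite(ind, map, 'ducky.gif', 'gif', 'WriteMode', 'append', 'DelayTime', 0.05); |
|     end |
| end |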
[Gif: the regenerated image for each value of p]
We can also break the RGB channels up and apply PCA to each channel independently.
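A sketch of that per-channel variant, assuming the same Rubber_Ducky.jpg and an arbitrary choice of p:
| %% Truncate each RGB channel separately, then stack them back up |
| img = imread('Rubber_Ducky.jpg'); |
| p = 50; % number of components to keep (arbitrary) |
| approx = zeros(size(img), 'uint8'); |
| for c = 1:3 |
|     [U, S, V] = svd(double(img(:, :, c))); |
|     approx(:, :, c) = uint8(U(:, 1:p) * S(1:p, 1:p) * V(:, 1:p)'); |
| end |
| imshow(approx) |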