Linear Programming and Random Projections
Tags: CS161
Linear programming
The big idea: maximize a linear function subject to a bunch of linear constraints. This is a very general problem, and in fact, max flow is one of them! Maximize the total flow leaving the source $s$ (a linear function), subject to the capacity and conservation constraints on the flow.
Geometric intuition
Linear programming is like taking a bunch of hyperplanes and chopping space up into a polytope: the feasible region, which is convex (an intersection of half-spaces).
![](Linear%20Programming%20and%20Random%20Projections%2076bd593f908a4752bea401bda9d72b23/Untitled.png)
The direction of optimization is fixed because the objective is linear: its gradient is a constant vector. In other words, to solve the problem, you just step as far as possible in that one direction, subject to the bounds of the constraints, like a fence. Therefore the optimal value over the polytope is always achieved at a vertex. Think about this for a second!
![](Linear%20Programming%20and%20Random%20Projections%2076bd593f908a4752bea401bda9d72b23/Untitled%201.png)
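The "step as far as you can, land on a vertex" picture can be checked numerically. Here is a minimal sketch on a tiny two-variable LP, using `scipy.optimize.linprog` (my choice of solver, not something the notes prescribe):

```python
# Maximize x + y subject to x + 2y <= 4, 3x + y <= 6, x, y >= 0.
# linprog minimizes, so we negate the objective.
from scipy.optimize import linprog

c = [-1, -1]                    # minimize -(x + y)
A_ub = [[1, 2], [3, 1]]         # left-hand sides of the constraints
b_ub = [4, 6]                   # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print(res.x)     # the optimizer: a vertex of the feasible polytope
print(-res.fun)  # the optimal value of x + y
```

The optimum lands exactly where the two constraint "fences" meet, i.e., at the vertex $(1.6, 1.2)$ where both constraints are tight.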
Duality
Here comes the key problem. If I gave you the optimal value, can you show that it is correct? Well, say that you had the constraints $Ax \le b$ with $x \ge 0$. Also say that you wanted to maximize $c^T x$.
You can bound $c^T x$ from above by taking a nonnegative combination of the constraints: pick $y \ge 0$ with $A^T y \ge c$, and then $c^T x \le y^T A x \le y^T b$. Finding the best such coefficients $y$ forms another constrained optimization problem. Below is a rough slide on what we are trying to do here
![](Linear%20Programming%20and%20Random%20Projections%2076bd593f908a4752bea401bda9d72b23/Untitled%202.png)
The general upshot
Each linear program has a primal and a dual objective. By strong duality, their optimal values are equal, and this is why the max flow is the same as the min cut!
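This matching of primal and dual values can be verified numerically. A sketch, again using `scipy.optimize.linprog` (my own choice of tool) on the standard primal/dual pair:

```python
# Primal:  max c^T x  s.t.  A x <= b,    x >= 0
# Dual:    min b^T y  s.t.  A^T y >= c,  y >= 0
# linprog minimizes and uses <= constraints, so signs get flipped.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

primal_opt = -primal.fun  # best value of c^T x
dual_opt = dual.fun       # best value of b^T y
print(primal_opt, dual_opt)  # strong duality: these agree
```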
Random projections
The goal here is to reduce a high-dimensional thing into a low-dimensional representation. A surprisingly effective way of doing this is to project onto a random subspace
![](Linear%20Programming%20and%20Random%20Projections%2076bd593f908a4752bea401bda9d72b23/Untitled%203.png)
It turns out that the Johnson-Lindenstrauss lemma says that Euclidean distance is approximately preserved by random projections, which means that you can figure out the nearest neighbors approximately through projections. This is nice!
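Here is a small numerical sketch of the distance-preservation claim, using a scaled Gaussian matrix as the random projection (one common JL construction; the dimensions are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 10_000, 1_000  # 50 points in 10k dims, projected to 1k dims
X = rng.normal(size=(n, d))

# Random Gaussian projection, scaled by 1/sqrt(k) so that squared
# norms are preserved in expectation.
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

# Compare a few pairwise distances before and after projecting.
ratios = []
for i, j in [(0, 1), (2, 3), (4, 5)]:
    orig = np.linalg.norm(X[i] - X[j])
    proj = np.linalg.norm(Y[i] - Y[j])
    ratios.append(proj / orig)
print(ratios)  # each ratio should be close to 1
```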
Compression
Say that you had a long but sparse vector. You can compress it by multiplying with a compression matrix, producing a shorter but dense vector. Why is this possible? First, the set of sparse vectors is not actually a subspace, mathematically speaking (it is a union of low-dimensional subspaces, one per choice of support). Second, it forms more of a "ribbed ball"
![](Linear%20Programming%20and%20Random%20Projections%2076bd593f908a4752bea401bda9d72b23/Untitled%204.png)
and so if you were to project this onto a random subspace, it's likely that distinct sparse vectors stay separated
![](Linear%20Programming%20and%20Random%20Projections%2076bd593f908a4752bea401bda9d72b23/Untitled%205.png)
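A sketch of that separation claim: compress two different sparse vectors with a random Gaussian matrix (my stand-in for "a compression matrix"; the dimensions and sparsity level are arbitrary choices) and check that their distance survives compression:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, s = 5_000, 200, 5  # ambient dim, compressed dim, sparsity

def sparse_vec():
    # An s-sparse vector: s random entries, zeros elsewhere.
    v = np.zeros(d)
    idx = rng.choice(d, size=s, replace=False)
    v[idx] = rng.normal(size=s)
    return v

x1, x2 = sparse_vec(), sparse_vec()

# Compression matrix: random Gaussian, mapping R^d -> R^k.
A = rng.normal(size=(k, d)) / np.sqrt(k)
y1, y2 = A @ x1, A @ x2

# The compressed images of distinct sparse vectors stay separated:
# their distance is close to the original distance.
print(np.linalg.norm(y1 - y2), np.linalg.norm(x1 - x2))
```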
Polynomial interpolation
Given a set of $d+1$ points, can you fit a degree-$d$ polynomial to them? Yes, and uniquely. This is actually really useful because you can encode a message in a polynomial and send its evaluations across a noisy channel; with extra evaluation points there is a greater chance that the "idea" of the polynomial is preserved and therefore the message is also preserved.
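A toy sketch of the encoding idea (the message, evaluation points, and use of `numpy.polyfit` for interpolation are my own illustrative choices):

```python
import numpy as np

# Encode a "message" as the coefficients of a degree-3 polynomial,
# then evaluate it at more points than strictly needed.
message = [3, 1, 4, 1]            # 4 coefficients -> degree 3
xs = np.arange(7.0)               # 7 evaluation points; only 4 are needed
ys = np.polyval(message, xs)

# Any 4 of the 7 points determine the polynomial, so even if some
# points are lost in transit, interpolation recovers the message.
recovered = np.polyfit(xs[:4], ys[:4], deg=3)
print(np.round(recovered).astype(int))
```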
This is used a lot in practice, known as Reed-Solomon encoding.