# Linear Programming and Random Projections

| Tags | CS161 |
|---|---|

# Linear programming

The big idea: maximize a linear function subject to a bunch of linear constraints. This is a very general problem, and in fact, max flow is one of them! Maximize the sum of flows leaving $s$ (linear), subject to a bunch of linear constraints on the flow (capacity and conservation).

## Geometric intuition

Linear programming is like taking a bunch of lines (more generally, hyperplanes) and chopping the space into a `polytope`, which is a convex region.

The direction of optimization is always one direction because it's a linear function. In other words, to optimize a linear program, you just need to step as far as possible in that direction, subject to the bounds of the constraints, like a fence. Therefore the highest value in the polytope is always achieved at a vertex. Think about this for a second!
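
A brute-force sketch of the vertex idea in 2-D (a toy instance of my own, not a real LP solver): every vertex lies where two constraint boundaries meet, so we can enumerate those intersections, keep the feasible ones, and take the best.

```python
from itertools import combinations

# Maximize c . x subject to A x <= rhs, on a small made-up instance:
# maximize x + 2y with x >= 0, y >= 0, x + y <= 4, y <= 3.
A = [(-1, 0), (0, -1), (1, 1), (0, 1)]
rhs = [0, 0, 4, 3]
c = (1, 2)

def intersect(i, j):
    """Solve the 2x2 system where constraint boundaries i and j meet."""
    (a1, a2), (b1, b2) = A[i], A[j]
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundaries, no vertex here
    x = (rhs[i] * b2 - rhs[j] * a2) / det
    y = (a1 * rhs[j] - b1 * rhs[i]) / det
    return (x, y)

def feasible(p):
    return all(a1 * p[0] + a2 * p[1] <= bound + 1e-9
               for (a1, a2), bound in zip(A, rhs))

# candidate optima = feasible intersections of constraint pairs
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]
best = max(vertices, key=lambda v: c[0] * v[0] + c[1] * v[1])
```

Here `best` comes out to the vertex $(1, 3)$ with objective value $7$; the interior never wins.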

## Duality

Here comes the key problem. If I gave you the optimal value, could you show that it is correct? Well, say that you had certain restrictions $g(x) \leq c$ and $g'(x) \leq c'$. Also say that you wanted to maximize $f(x)$.

You can try to express $f(x)$ as a nonnegative combination of $g(x)$ and $g'(x)$; that combination then gives an upper bound on $f(x)$, and finding the best coefficients forms another constrained optimization problem. Below is a rough slide on what we are trying to do here
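
A concrete toy certificate (my own numbers, not from the slide): maximize $f(x, y) = x + 2y$ subject to $g_1(x, y) = x + y \leq 4$ and $g_2(x, y) = y \leq 3$. Add the constraints with multipliers $1$ and $1$:

$$1 \cdot (x + y) + 1 \cdot y \leq 1 \cdot 4 + 1 \cdot 3 \implies x + 2y \leq 7$$

The left side matches $f$ exactly, so no feasible point can beat $7$; the feasible point $(1, 3)$ achieves $7$, so it is certified optimal. Searching for the multipliers giving the tightest such bound is itself a linear program: the dual.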

## The general upshot

Each linear program has a `primal` and a `dual` objective. These have the same optimal value (strong duality), and this is why the max flow is the same as the min cut!
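
A quick numeric sanity check on a made-up 4-node network (a sketch, not production code): Edmonds-Karp for the flow, brute-force subset enumeration for the cut, which is fine at this size.

```python
from collections import deque
from itertools import combinations

# hypothetical capacities: cap[u][v] = capacity of edge u -> v
cap = {
    's': {'a': 3, 'b': 2},
    'a': {'b': 1, 't': 2},
    'b': {'t': 3},
    't': {},
}

def max_flow(cap, s, t):
    # residual capacities, with reverse edges initialized to 0
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path (Edmonds-Karp)
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, r in res[u].items():
                if r > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # walk back from t, find the bottleneck, and augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

def min_cut(cap, s, t):
    # try every s-side subset; the cut value is the capacity crossing it
    inner = [n for n in cap if n not in (s, t)]
    best = float('inf')
    for r in range(len(inner) + 1):
        for extra in combinations(inner, r):
            S = {s, *extra}
            crossing = sum(c for u in S
                           for v, c in cap[u].items() if v not in S)
            best = min(best, crossing)
    return best
```

On this network both come out to $5$, as duality promises.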

# Random projections

The goal here is to reduce a high-dimensional thing into a low-dimensional representation. A surprisingly good way of doing this is to project onto a random subspace

It turns out that the `Johnson-Lindenstrauss` lemma says that Euclidean distance is approximately preserved by random projections, which means that you can figure out the nearest neighbors approximately through projections. This is nice!
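
A small experiment in the JL spirit (a sketch with made-up sizes, not a tuned implementation): project random 1000-dimensional points down to 100 dimensions with a Gaussian matrix and compare pairwise distances before and after.

```python
import math
import random

random.seed(0)  # deterministic run

def random_projection(dim_in, dim_out):
    # Gaussian entries scaled by 1/sqrt(dim_out) keep norms roughly unchanged
    s = 1 / math.sqrt(dim_out)
    return [[random.gauss(0, s) for _ in range(dim_in)]
            for _ in range(dim_out)]

def project(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

d, k, n = 1000, 100, 5  # original dim, projected dim, number of points
points = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
M = random_projection(d, k)
low = [project(M, p) for p in points]

# ratios of projected distance to original distance concentrate near 1
ratios = [dist(low[i], low[j]) / dist(points[i], points[j])
          for i in range(n) for j in range(i + 1, n)]
```

With $k = 100$ the ratios typically land within a few percent of $1$, which is exactly the distortion guarantee the lemma quantifies.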

## Compressions

Say that you had a long but sparse vector. You can try to compress this using a compression matrix into a shorter but dense vector. This is possible because the set of sparse vectors is, firstly, not a subspace mathematically speaking (the sum of two sparse vectors with different supports is less sparse). Second, it forms more of a "ribbed ball", a union of low-dimensional subspaces, and so if you were to project it onto a random subspace, it's likely that distinct sparse vectors stay separated
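
A minimal sketch of this compression step (hypothetical sizes and a random sign matrix as the compression matrix; real schemes are more careful): two different 5-sparse vectors in $\mathbb{R}^{1000}$ get 40-entry sketches that still tell them apart.

```python
import random

random.seed(1)

d, k = 1000, 40  # ambient dimension, compressed length

def sparse_vector(support):
    # mostly zeros, with a few random nonzero entries
    v = [0.0] * d
    for i in support:
        v[i] = random.uniform(1, 2)
    return v

x = sparse_vector((3, 97, 412, 640, 888))
y = sparse_vector((12, 97, 300, 640, 901))

# random +/-1 matrix playing the role of the compression matrix
M = [[random.choice((-1.0, 1.0)) for _ in range(d)] for _ in range(k)]

def sketch(v):
    # only nonzero coordinates contribute, so sketching a sparse v is cheap
    return [sum(row[i] * v[i] for i in range(d) if v[i] != 0.0)
            for row in M]

sx, sy = sketch(x), sketch(y)
# sx and sy are short and dense, yet differ: the sparse vectors stay separated
```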

# Polynomial interpolation

Given a set of points, can you fit a degree-$n$ polynomial to them? ($n+1$ points determine a unique degree-$n$ polynomial.) This is actually really useful because you can encode a message in a polynomial, and if you send it across a noisy channel, there is a greater chance that the "idea" of the polynomial is preserved and therefore the message is also preserved.
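
One standard way to do the fit is Lagrange interpolation; a small sketch using exact rational arithmetic to avoid float noise:

```python
from fractions import Fraction

def interpolate(points):
    """Return a function evaluating the unique polynomial of degree
    < len(points) passing through the given (x, y) points."""
    def p(x):
        x = Fraction(x)
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            # basis polynomial: 1 at xi, 0 at every other xj
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / Fraction(xi - xj)
            total += term
        return total
    return p

# three points determine a unique quadratic; these lie on y = x^2 + 1
p = interpolate([(0, 1), (1, 2), (2, 5)])
```

Evaluating `p` at the original points returns them exactly, and `p(3)` gives $10$, consistent with $y = x^2 + 1$.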

This is used a lot in practice, known as `Reed-Solomon encoding`.
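
The core trick can be sketched as an erasure code (a toy version: real Reed-Solomon works over finite fields and also corrects corrupted symbols, not just lost ones): evaluate the message polynomial at more points than its degree requires, so any large-enough subset of survivors pins it back down.

```python
from fractions import Fraction

message = [5, 0, 3]  # coefficients of p(x) = 5 + 0x + 3x^2

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

# oversample: 6 evaluation points for a degree-2 polynomial
codeword = [(x, evaluate(message, x)) for x in range(6)]

# suppose 3 of the 6 symbols are lost in transit; any 3 survivors suffice
survivors = [codeword[0], codeword[2], codeword[5]]

def decode(points, x):
    """Lagrange-evaluate the unique polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# re-evaluating the decoded polynomial reproduces the full codeword
decoded = [decode(survivors, x) for x in range(6)]
```

The three surviving points determine the quadratic uniquely, so `decoded` matches the six original evaluations exactly.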