For the layman: the issue is exact reconstruction of 3D world models, stored as polygonal meshes, by solving linear programming problems. The claim is that such models can be used in real computer graphics applications, and the Donoho-Tanner theorems pin down the phase transition: the boundary beyond which exact recovery by solving linear programs (tractable even on a $300 Intel i5 machine) becomes impossible. In other words, they tell us the REDUNDANCY and SPARSITY we need relative to a grid size N. Concretely, think of N as something like 10^9 (a 1000×1000×1000 volume for 3D). If the modelled objects occupy 0.1 of that or less, say 10^6 nonzero points (roughly a 1000×1000-pixel image), then we want to know what percent of the 10^6 model parameters must be zero for exact parameter estimation to be possible. The Donoho-Tanner theorems give exact answers to this problem, fully settling the issue.

[Figure: phase-transition-sparse-recovery]

At 10× compression, i.e. d = 0.1·N measurements, the phase transition for exact recovery sits at less than 24% nonzeros (counted as a fraction of d).
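This is easy to probe empirically. Below is a toy sketch of my own (not from the post): generate a k-sparse vector, take d random Gaussian measurements, and solve the basis-pursuit linear program min ‖x‖₁ subject to Ax = b in split variables x = u − v. The dimensions here are tiny and chosen to sit comfortably below the threshold, so recovery should be exact.

```python
# Toy check of exact sparse recovery by linear programming (basis pursuit).
# Dimensions are illustrative, chosen well below the Donoho-Tanner threshold.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, k = 60, 20, 2                    # ambient dim, measurements, nonzeros

# A k-sparse ground-truth vector.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((d, n)) / np.sqrt(d)   # random Gaussian sensing matrix
b = A @ x_true

# Basis pursuit as an LP in split variables x = u - v with u, v >= 0:
#   minimize 1'u + 1'v   subject to   [A, -A] [u; v] = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

With k/d = 0.1 here, well under the quoted 24%, the LP recovers x exactly (up to solver precision); push k/d past the threshold and recovery starts to fail.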

This applies whenever a grid of size N holds an image with 0.1·N parameters, which gives around 4× redundancy (measurements relative to nonzeros). That will probably be the case almost all the time for the following computer graphics scene model. Suppose K objects are stored in PLY format, each holding (vertices, edges) data for a 3D wireframe. The model maps these objects onto an (x, y, z) grid of fixed dimensions whose product is our N; the Donoho-Tanner theory then applies exactly to such images, as a function of K and the size of the model wireframes as a fraction of N.
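To see what sparsity fraction such a scene actually produces, here is a hypothetical sketch (the cube wireframes and grid size are my own stand-ins for PLY content): voxelize K wireframes onto a small (x, y, z) grid and measure the nonzero fraction of the N cells, which is the number the phase-transition threshold is compared against.

```python
# Hypothetical scene: voxelize K cube wireframes onto an (x, y, z) grid and
# measure what fraction of the N = prod(shape) cells is nonzero.
import numpy as np

def voxelize_edges(vertices, edges, grid, shape):
    # Sample points densely along each wireframe edge and mark the voxels hit.
    for i, j in edges:
        for t in np.linspace(0.0, 1.0, 200):
            p = (1 - t) * vertices[i] + t * vertices[j]
            idx = tuple(np.clip(p.astype(int), 0, np.array(shape) - 1))
            grid[idx] = 1.0

shape = (32, 32, 32)                  # grid dimensions; their product is N
N = int(np.prod(shape))
grid = np.zeros(shape)

# Unit-cube wireframe: 8 corners, 12 axis-aligned edges (stand-in for PLY data).
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
cube_edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
              if np.sum(np.abs(corners[a] - corners[b])) == 1]

K = 3
rng = np.random.default_rng(1)
for _ in range(K):
    scale = rng.uniform(6, 12)                     # cube side length in voxels
    offset = rng.uniform(0, 32 - 12, size=3)       # keep the cube inside the grid
    voxelize_edges(corners * scale + offset, cube_edges, grid, shape)

fraction = np.count_nonzero(grid) / N
print(f"nonzero fraction: {fraction:.4f} of N = {N}")
```

Wireframes are one-dimensional curves in a three-dimensional grid, so the nonzero fraction comes out far below any of the thresholds above; solid or textured objects would occupy much more of N and sit closer to the transition.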
