Matrix sparsification and the sparse null space problem
APPROX/RANDOM'10: Proceedings of the 13th International Conference on Approximation, and 14th International Conference on Randomization, and Combinatorial Optimization: Algorithms and Techniques
Let $A$ be a $k \times n$ underdetermined matrix. The sparse basis problem for the row space $W$ of $A$ is to find a basis of $W$ with the fewest nonzeros. Suppose that all the entries of $A$ are nonzero and algebraically independent over the rational number field. Then every nonzero vector in $W$ has at least $n-k+1$ nonzero entries. The vectors in $W$ with exactly $n-k+1$ nonzero entries are the elementary vectors of $W$. A simple combinatorial condition that is both necessary and sufficient for a set of $k$ elementary vectors of $W$ to form a basis of $W$ is presented here. A similar result holds for the null space of $A$, where the elementary vectors now have exactly $k+1$ nonzero entries. These results follow from a theorem about nonzero minors of order $m$ of the $(m-1)$st compound of an $m \times n$ matrix with algebraically independent entries, which is proved using multilinear algebra techniques. This combinatorial condition for linear independence is a first step towards the design of algorithms that compute sparse bases for the row and null space without imposing artificial structure constraints to ensure linear independence.
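The sparsity bound above can be illustrated numerically: for a generic $k \times n$ matrix (random Gaussian entries stand in for algebraic independence, which holds with probability 1), an elementary vector of the row space $W$ is obtained by forcing zeros at $k-1$ chosen columns, which leaves a one-dimensional space of row coefficients and a vector with exactly $n-(k-1) = n-k+1$ nonzeros. The following sketch is illustrative only; the matrix size, column choice, and tolerance are assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 6
# Generic entries: random Gaussians are algebraically independent a.s.
A = rng.standard_normal((k, n))

# Build an elementary vector of the row space W = rowspace(A):
# force w to vanish on k-1 = 2 chosen columns. The constraints
# c @ A[:, cols] = 0 on the coefficient vector c in R^k are k-1
# generic linear equations, leaving a 1-dimensional solution space.
cols = [0, 1]                       # hypothetical choice of zero positions
M = A[:, cols].T                    # (k-1) x k constraint matrix
_, _, Vt = np.linalg.svd(M)
c = Vt[-1]                          # spans the null space of M
w = c @ A                           # elementary vector of W

nnz = int(np.sum(np.abs(w) > 1e-10))
# Generically w is zero exactly on `cols`, so nnz == n - k + 1.
print(nnz)
```

Running this prints `4`, matching the bound $n-k+1 = 6-3+1$; choosing a different set of $k-1$ columns yields another elementary vector, and the paper's combinatorial condition characterizes which $k$ such vectors form a basis of $W$.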