Minimizing the ℓ0-seminorm of a vector under convex constraints is a combinatorial (NP-hard) problem. Replacing the ℓ0-seminorm with the ℓ1-norm is a commonly used approach to compute an approximate solution of the original ℓ0-minimization problem by means of convex programming. In the theory of compressive sensing, the Restricted Isometry Property (RIP) of the sensing matrix is a sufficient condition guaranteeing that the solution of the ℓ1-relaxed problem coincides with the solution of the original ℓ0-minimization problem. However, assessing the conservativeness of ℓ1-relaxation approaches is recognized to be difficult when the RIP is not satisfied. In this paper, we present an alternative approach to minimizing the ℓ0-seminorm of a vector under given constraints. In particular, we show that an ℓ0-minimization problem can be relaxed into a sequence of semidefinite programming problems whose solutions are guaranteed to converge to the optimizer (if unique) of the original combinatorial problem even when the RIP is not satisfied. Segmentation of ARX models is then discussed in order to show, through a relevant problem in system identification, that the proposed approach outperforms the ℓ1-based relaxation in detecting piecewise-constant parameter changes in the estimated model.
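The ℓ1-relaxation baseline discussed above can be sketched numerically. The fragment below is a minimal illustration (not the paper's SDP method): it draws a random underdetermined system A x = b with a sparse ground truth and solves min ||x||_1 subject to A x = b by casting it as a linear program via the standard split x = u − v with u, v ≥ 0. The problem sizes, the random seed, and the use of `scipy.optimize.linprog` are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Underdetermined system A x = b with a k-sparse ground truth x0
# (illustrative sizes; Gaussian sensing matrices typically satisfy
# the RIP with high probability in this regime).
m, n, k = 25, 50, 3
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0

# l1 relaxation: min ||x||_1  s.t.  A x = b.
# Split x = u - v with u, v >= 0, so ||x||_1 = 1'u + 1'v and the
# problem becomes a linear program in the stacked variable [u; v].
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("max recovery error:", np.max(np.abs(x_hat - x0)))
```

When the sensing matrix is favorable (as a Gaussian matrix of these dimensions usually is), the ℓ1 minimizer coincides with the sparse ground truth; the paper's point is precisely that when such conditions fail, the ℓ1 solution can differ from the ℓ0 one, motivating the SDP relaxation hierarchy.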