The Lasso is a popular technique for joint estimation and continuous variable selection, especially well suited for sparse and possibly under-determined linear regression problems. This paper develops algorithms to estimate the regression coefficients via the Lasso when the training data are distributed across different agents and their communication to a central processing unit is prohibited due to, e.g., communication cost or privacy reasons. A motivating application is explored in the context of wireless communications, whereby sensing cognitive radios collaborate to estimate the radio-frequency power spectral density. Attaining different tradeoffs between complexity and convergence speed, three novel algorithms are obtained after reformulating the Lasso into a separable form, which is iteratively minimized using the alternating-direction method of multipliers (ADMM) so as to gain the desired degree of parallelization. Interestingly, the per-agent estimate updates reduce to simple soft-thresholding operations, and the inter-agent communication overhead remains affordable. Without exchanging elements of their training sets, the agents' local estimates reach consensus on the global Lasso solution, i.e., the fit that would be obtained if the entire data set were centrally available. Numerical experiments with both simulated and real data demonstrate the merits of the proposed distributed schemes, corroborating their convergence and global optimality. The ideas in this paper readily extend to fitting related models in a distributed fashion, including the adaptive Lasso, the elastic net, the fused Lasso, and the nonnegative garrote.
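To make the ADMM mechanics concrete, the sketch below implements the standard consensus-ADMM splitting for the Lasso: each agent j keeps a local copy b_j of the coefficient vector, agreement b_j = z is enforced through the augmented Lagrangian, the per-agent updates are small ridge-type solves that can run in parallel, and the shared update reduces to a soft-thresholding of the averaged local iterates. This is a minimal illustration of the separable reformulation the abstract describes, not the authors' exact message-passing algorithm; the function and variable names (consensus_lasso_admm, soft_threshold, rho) are hypothetical.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding: proximal operator of kappa * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def consensus_lasso_admm(X_parts, y_parts, lam, rho=1.0, n_iter=200):
    """Consensus-ADMM sketch for min_b 0.5*||y - X b||^2 + lam*||b||_1,
    with rows of (X, y) partitioned across J agents as (X_parts, y_parts)."""
    J = len(X_parts)
    p = X_parts[0].shape[1]
    z = np.zeros(p)                # shared (consensus) estimate
    B = np.zeros((J, p))           # local estimates b_j
    U = np.zeros((J, p))           # scaled dual variables u_j
    # Pre-factor each agent's ridge system (X_j^T X_j + rho I) once.
    chols = [np.linalg.cholesky(X.T @ X + rho * np.eye(p)) for X in X_parts]
    Xty = [X.T @ y for X, y in zip(X_parts, y_parts)]
    for _ in range(n_iter):
        for j in range(J):         # local updates; parallelizable across agents
            rhs = Xty[j] + rho * (z - U[j])
            B[j] = np.linalg.solve(chols[j].T, np.linalg.solve(chols[j], rhs))
        # Consensus update: average the local iterates, then soft-threshold.
        z = soft_threshold((B + U).mean(axis=0), lam / (rho * J))
        U += B - z                 # dual ascent on the consensus constraints
    return z
```

On synthetic data this sketch should agree, up to solver tolerance, with a centralized Lasso fit of the stacked data (for instance scikit-learn's Lasso with alpha = lam / n, since that implementation scales the squared loss by 1/(2n)), illustrating the abstract's point that the local estimates coincide with the fit obtained when the entire data set is centrally available.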