Regularization theory and neural networks architectures
Neural Computation
Approximation of scattered data using smooth grid functions
Journal of Computational and Applied Mathematics
The nature of statistical learning theory
Information complexity of multivariate Fredholm integral equations in Sobolev classes
Journal of Complexity
2D spiral pattern recognition with possibilistic measures
Pattern Recognition Letters
An equivalence between sparse approximation and support vector machines
Neural Computation
Data mining methods for knowledge discovery
SSVM: A Smooth Support Vector Machine for Classification
Computational Optimization and Applications
Learning from Data: Concepts, Theory, and Methods
On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach
Data Mining and Knowledge Discovery
On the Parallel Solution of 3D PDEs on a Network of Workstations and on Vector Computers
Parallel Computer Architectures: Theory, Hardware, Software, Applications
Estimation of Dependences Based on Empirical Data (Springer Series in Statistics)
Computing
On the Parallelization of the Sparse Grid Approach for Data Mining
LSSC '01 Proceedings of the Third International Conference on Large-Scale Scientific Computing-Revised Papers
Shrinkage estimator generalizations of Proximal Support Vector Machines
Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining
Dynamic Cluster Formation Using Level Set Methods
IEEE Transactions on Pattern Analysis and Machine Intelligence
Regression with the optimised combination technique
ICML '06 Proceedings of the 23rd international conference on Machine learning
Recently we presented a new approach [18] to the classification problem arising in data mining. It is based on the regularization network approach, but, in contrast to other methods which employ ansatz functions associated with data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [49]. Thus, only O(h_n^(-1) n^(d-1)) instead of O(h_n^(-d)) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^(-n) gives the mesh size. We use the sparse grid combination technique [28], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination of these partial solutions. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows us to handle more dimensions, and the algorithm needs fewer operations per data point. We describe the sparse grid combination technique for the classification problem, give implementational details, and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally, we report on the quality of the classifier built by our new method on data sets with up to 10 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.
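The combination technique mentioned in the abstract solves the problem on a sequence of coarse anisotropic grids and adds the partial solutions with alternating binomial coefficients. A minimal Python sketch of the bookkeeping involved (the function names `combination_terms` and `grid_points` are illustrative, not from the paper; the coefficient formula (-1)^q C(d-1, q) over level multi-indices l with |l|_1 = n + (d-1) - q is the standard form of the technique):

```python
from itertools import product
from math import comb

def combination_terms(n, d):
    """List the (coefficient, level) pairs of the sparse grid combination
    technique: each anisotropic grid of level multi-index l with
    |l|_1 = n + (d-1) - q enters with coefficient (-1)^q * C(d-1, q),
    for q = 0, ..., d-1."""
    terms = []
    for q in range(d):
        coeff = (-1) ** q * comb(d - 1, q)
        target = n + (d - 1) - q
        # all level vectors l with l_i >= 1 and |l|_1 == target
        for l in product(range(1, target + 1), repeat=d):
            if sum(l) == target:
                terms.append((coeff, l))
    return terms

def grid_points(l):
    """Number of points of a grid with 2**l_i + 1 points (including the
    boundary) in dimension i."""
    pts = 1
    for li in l:
        pts *= 2 ** li + 1
    return pts

terms = combination_terms(4, 2)
# the coefficients sum to 1, so constants are reproduced exactly
print(sum(c for c, l in terms))
# largest subproblem vs. the full uniform grid of level (4, 4)
print(max(grid_points(l) for c, l in terms), grid_points((4, 4)))
```

Each subproblem is a conventional grid that is fine in some dimensions and coarse in the others, so even the largest one is far smaller than the full uniform grid; this is where the O(h_n^(-1) n^(d-1)) cost comes from.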