Energy-efficient computing has become a key challenge not only for data-center operations but also for many other energy-driven systems, with a focus on reducing energy-related costs and operational expenses, as well as the corresponding environmental impact. However, current intelligent data models are typically performance-driven. For instance, most data-driven machine-learning approaches are known to incur high computational cost in searching for global optima. Designing ever more accurate intelligent data models to satisfy market needs therefore raises the likelihood of energy waste through increased computational cost. This paper introduces an energy-efficient framework for large-scale data modeling and classification/prediction. It achieves predictive accuracy comparable to or better than state-of-the-art machine-learning models while maintaining a low computational cost on large-scale data. The effectiveness of the proposed approach is demonstrated by experiments on two large-scale KDD data sets, Mtv-1 and Mtv-2.
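The accuracy-versus-compute trade-off motivating the abstract can be made concrete with a small, hypothetical sketch (this is not the paper's framework): train a deliberately cheap classifier on synthetic data and report both its test accuracy and its wall-clock training time, the latter serving as a rough proxy for energy cost. All names and data below are illustrative assumptions.

```python
# Hypothetical sketch of the accuracy-vs-compute trade-off; not the
# paper's method. Wall-clock training time is used as a crude proxy
# for energy cost. Data and model are illustrative only.
import random
import time

random.seed(0)

# Synthetic two-class 1-D data: class 0 centered at -1, class 1 at +1.
train = [(random.gauss(2 * c - 1, 1.0), c) for c in (0, 1) for _ in range(500)]
test = [(random.gauss(2 * c - 1, 1.0), c) for c in (0, 1) for _ in range(200)]

def fit_centroids(data):
    """A deliberately low-cost model: one centroid per class."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    # Assign x to the class with the nearest centroid.
    return min(centroids, key=lambda y: abs(x - centroids[y]))

start = time.perf_counter()
model = fit_centroids(train)
train_seconds = time.perf_counter() - start  # compute-cost proxy

accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"accuracy={accuracy:.2f}, train_seconds={train_seconds:.4f}")
```

An energy-efficient framework in the abstract's sense would aim to keep `accuracy` competitive with heavier models while keeping the `train_seconds`-style cost term small as the data scale grows.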