This paper presents methods for learning a compact representation of the optimal decision policy in a Markov Decision Process (MDP) framework for sensor-network-based human health monitoring systems. Learning a small decision policy is key to deploying the model on small sensor nodes with limited memory. The decision process enables distributed sensor nodes to adapt their sampling rates in response to changing event criticality and the energy available at each node. The globally optimal policy is first computed offline using an MDP and then deployed onto each node; however, the space complexity of the tabular policy representation is exponential in the number of sensor nodes and the discretization grain of the problem. We compare the ability of different base supervised learners to compactly represent the optimal decision policy. The results show that unpruned decision trees and high-confidence pruned decision trees yield the lowest error rates, while the resulting trees remain small enough to be stored on the sensor nodes. Ensembles of lower-confidence trees achieve perfect representation with only an order-of-magnitude increase in classifier size compared to individual pruned trees.
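To make the offline-policy step concrete, the sketch below runs value iteration on a toy single-node MDP whose state is (discretized energy level, event criticality) and whose action is a sampling rate. The state space, reward, and transition function here are illustrative assumptions, not the paper's actual model; the point is that the extracted greedy policy is a lookup table whose size grows with the discretization grain (and, with multiple nodes, exponentially in the node count), which is what motivates compressing it with a supervised learner.

```python
# Hypothetical sketch: offline value iteration for a toy single-node MDP.
# State = (energy level, event criticality); action = sampling rate.
# All rewards and transitions are illustrative assumptions.

ENERGY = range(4)    # discretized battery levels 0..3
CRIT = range(3)      # event criticality 0..2
ACTIONS = (0, 1, 2)  # sampling rates: low, medium, high
GAMMA = 0.9

def reward(energy, crit, rate):
    # Reward matching sensing rate to criticality; penalize energy use,
    # more heavily when the battery is empty.
    return -abs(crit - rate) - (0.5 * rate if energy == 0 else 0.1 * rate)

def step(energy, crit, rate):
    # Deterministic toy transition: the highest rate drains one energy level.
    return max(energy - (1 if rate == 2 else 0), 0), crit

def value_iteration(iters=100):
    V = {(e, c): 0.0 for e in ENERGY for c in CRIT}
    for _ in range(iters):
        V = {s: max(reward(*s, a) + GAMMA * V[step(*s, a)] for a in ACTIONS)
             for s in V}
    # Greedy policy extraction: one stored action per discretized state.
    return {s: max(ACTIONS, key=lambda a: reward(*s, a) + GAMMA * V[step(*s, a)])
            for s in V}

policy = value_iteration()
print(len(policy))  # prints 12: one table entry per (energy, criticality) pair
```

The 12-entry table is trivial here, but with many nodes and finer discretization the table explodes, whereas a decision tree fitted to the (state, action) pairs can exploit regularities in the policy (e.g. ignoring energy level at high criticality) to store it far more compactly.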