An intelligent situation awareness support system for safety-critical environments
Decision Support Systems
Domains such as force protection require an effective decision maker to maintain a high level of situation awareness. A system that combines human operators with neural networks is a desirable approach to this problem. It is further advantageous for the computational engine to operate in three learning modes: supervised for initial training and for incorporating known updates, reinforcement for online operational improvement, and unsupervised in the absence of any external signaling. We discuss an Adaptive Resonance Theory (ART) based architecture that can switch seamlessly among these three types of learning and can be used to help optimize the decision making of a human operator in such a scenario. Its output then feeds a situation assessment module.
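The three-mode switching described above can be illustrated with a minimal sketch. This is not the paper's ART architecture; it is a hypothetical prototype-based learner (all class and parameter names, including `ThreeModeLearner`, `vigilance`, and `lr`, are illustrative assumptions) that dispatches on whichever external signals are available: a label triggers a supervised update, a reward triggers a reinforcement update, and neither triggers an unsupervised, ART-style vigilance-gated update.

```python
class ThreeModeLearner:
    """Illustrative prototype-based learner that switches among
    supervised, reinforcement, and unsupervised updates depending
    on which external signals (label, reward) are present."""

    def __init__(self, vigilance=0.7, lr=0.5):
        self.vigilance = vigilance   # ART-style match threshold (assumed value)
        self.lr = lr                 # learning rate for prototype updates
        self.prototypes = []         # list of (vector, label-or-None) pairs

    @staticmethod
    def _similarity(a, b):
        # Cosine similarity; an illustrative choice of match function.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5 or 1.0
        nb = sum(x * x for x in b) ** 0.5 or 1.0
        return dot / (na * nb)

    def _best_match(self, x):
        best_i, best_s = None, -1.0
        for i, (p, _) in enumerate(self.prototypes):
            s = self._similarity(x, p)
            if s > best_s:
                best_i, best_s = i, s
        return best_i, best_s

    def _move(self, i, x, direction=1.0):
        # Nudge prototype i toward (or away from) input x.
        p, lab = self.prototypes[i]
        step = self.lr * direction
        self.prototypes[i] = ([pi + step * (xi - pi) for pi, xi in zip(p, x)], lab)

    def update(self, x, label=None, reward=None):
        """Dispatch: supervised > reinforcement > unsupervised."""
        i, s = self._best_match(x)
        if label is not None:
            # Supervised mode: refine a matching labeled prototype,
            # otherwise commit a new category for the label.
            if i is not None and self.prototypes[i][1] == label and s >= self.vigilance:
                self._move(i, x)
            else:
                self.prototypes.append((list(x), label))
        elif reward is not None and i is not None:
            # Reinforcement mode: reward sign steers the winning prototype.
            self._move(i, x, direction=1.0 if reward > 0 else -1.0)
        else:
            # Unsupervised mode: vigilance test decides refine vs. new category.
            if i is not None and s >= self.vigilance:
                self._move(i, x)
            else:
                self.prototypes.append((list(x), None))

    def classify(self, x):
        i, _ = self._best_match(x)
        return self.prototypes[i][1] if i is not None else None
```

For example, the same `update` call serves all three modes: `update(x, label="A")` trains supervised, `update(x, reward=-1)` penalizes the winning category online, and `update(x)` self-organizes with no external signal, mirroring the seamless switching the abstract describes.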