A reinforcement learning approach for host-based intrusion detection using sequences of system calls

  • Authors:
  • Xin Xu; Tao Xie

  • Affiliations:
  • School of Computer, National University of Defense Technology, Changsha, P. R. China (both authors)

  • Venue:
  • ICIC'05 Proceedings of the 2005 international conference on Advances in Intelligent Computing - Volume Part I
  • Year:
  • 2005

Abstract

Intrusion detection has emerged as an important technique for network security. Due to the complex and dynamic properties of intrusion behaviors, machine learning and data mining methods have been widely employed to optimize the performance of intrusion detection systems (IDSs). However, the results of existing work still need to be improved, both in accuracy and in computational efficiency. In this paper, a novel reinforcement learning approach is presented for host-based intrusion detection using sequences of system calls. A Markov reward process model is introduced for modeling the behaviors of system call sequences, and the intrusion detection problem is converted to predicting the value functions of the Markov reward process. A temporal difference learning algorithm using linear basis functions is used for value function prediction, so that abnormal temporal behaviors of host processes can be predicted accurately and efficiently. The proposed method has advantages over previous algorithms in that the temporal property of system call data is captured in a natural and simple way, and better intrusion detection performance can be achieved. Experimental results on the MIT system call data illustrate that, compared with previous work, the proposed method achieves better detection accuracy with low training costs.
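The abstract's core idea can be illustrated with a small sketch: treat sliding windows of system calls as states of a Markov reward process, attach a terminal reward to each trace (+1 for normal, -1 for intrusive), and learn a linear value function V(s) = w·φ(s) with temporal difference (TD(0)) updates. This is a hypothetical minimal illustration, not the authors' implementation; the window length `k`, the one-hot-per-position features, the reward scheme, and all parameter values are assumptions made for the example.

```python
import numpy as np

def window_features(trace, t, k, n_calls):
    """One-hot-per-position feature vector for the length-k window ending at t."""
    phi = np.zeros(k * n_calls)
    for i, call in enumerate(trace[t - k + 1 : t + 1]):
        phi[i * n_calls + call] = 1.0
    return phi

def td_train(traces, labels, k=3, n_calls=10, alpha=0.05, gamma=0.9, epochs=20):
    """TD(0) with linear basis functions: V(s) = w . phi(s).

    Interior transitions carry zero reward; the terminal transition of each
    trace receives +1 (normal, label 0) or -1 (intrusion, label 1).
    """
    w = np.zeros(k * n_calls)
    for _ in range(epochs):
        for trace, label in zip(traces, labels):
            r_terminal = 1.0 if label == 0 else -1.0
            for t in range(k - 1, len(trace)):
                phi = window_features(trace, t, k, n_calls)
                if t == len(trace) - 1:
                    target = r_terminal                 # terminal reward
                else:
                    phi_next = window_features(trace, t + 1, k, n_calls)
                    target = gamma * (w @ phi_next)     # r = 0 mid-trace
                w += alpha * (target - w @ phi) * phi   # TD(0) update
    return w

def anomaly_score(trace, w, k=3, n_calls=10):
    """Mean predicted value over windows; low values suggest intrusion."""
    vals = [w @ window_features(trace, t, k, n_calls)
            for t in range(k - 1, len(trace))]
    return float(np.mean(vals))
```

A trace whose windows resemble the normal training data then receives a higher mean value than one built from unseen call patterns, so thresholding `anomaly_score` yields a detector; the temporal ordering of calls enters through the window features and the bootstrapped TD target.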