A Kernel-Based Reinforcement Learning Approach to Dynamic Behavior Modeling of Intrusion Detection

  • Authors:
  • Xin Xu; Yirong Luo

  • Affiliations:
  • Institute of Automation, National University of Defense Technology, 410073, Changsha, P.R. China; Network Center, Hunan Agriculture University, 410080, Changsha, P.R. China

  • Venue:
  • ISNN '07: Proceedings of the 4th International Symposium on Neural Networks: Advances in Neural Networks
  • Year:
  • 2007


Abstract

As an important active defense technique for computer networks, intrusion detection has received considerable attention in recent years. However, the performance of current intrusion detection systems (IDSs) remains far from satisfactory due to the increasing number of complex sequential attacks. To address this problem, this paper proposes a novel kernel-based reinforcement learning method for sequential behavior modeling in host-based IDSs. Based on Markov process modeling of host-based intrusion detection using sequences of system calls, the performance optimization of IDSs is transformed into a sequential prediction problem driven by evaluative reward signals. By using a kernel-based learning prediction algorithm, the kernel least-squares temporal-difference (kernel LS-TD) algorithm, which implements LS-TD learning in a kernel-induced feature space, the nonlinear modeling and prediction problem for sequential behaviors in IDSs is solved efficiently. Experiments on system call data from the University of New Mexico show that the proposed kernel-based RL approach achieves better detection accuracy than previous sequential behavior modeling methods, including Hidden Markov Models (HMMs) and linear TD algorithms.
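The kernel LS-TD idea described above can be sketched in a few lines. This is a minimal, hedged illustration only: it assumes an RBF kernel over toy numeric states and a plain pseudo-inverse solve of the kernelized LS-TD system, whereas the paper works on system-call sequences with its own kernel and does not publish this exact code. All function names here are invented for the sketch.

```python
# Hedged sketch of kernel least-squares temporal-difference (kernel LS-TD)
# value prediction. Assumption: states are small numeric vectors and an RBF
# (Gaussian) kernel is used; the paper's state encoding for system-call
# sequences differs.
import numpy as np

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel between two state vectors."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def kernel_lstd(states, next_states, rewards, gamma=0.9, sigma=1.0):
    """Solve the kernelized LS-TD system (K - gamma * K') alpha = r.

    K[t, j]  = k(s_t, s_j)       -- Gram matrix over visited states
    K'[t, j] = k(s_{t+1}, s_j)   -- successor-state Gram matrix
    The learned value function is V(s) = sum_j alpha_j * k(s_j, s).
    A pseudo-inverse is used for numerical robustness.
    """
    T = len(states)
    K = np.array([[rbf(states[t], states[j], sigma) for j in range(T)]
                  for t in range(T)])
    Kn = np.array([[rbf(next_states[t], states[j], sigma) for j in range(T)]
                   for t in range(T)])
    alpha = np.linalg.pinv(K - gamma * Kn) @ np.asarray(rewards, float)
    return alpha, K, Kn

def value(s, states, alpha, sigma=1.0):
    """Predict V(s) from the learned kernel expansion coefficients."""
    return float(sum(a * rbf(si, s, sigma) for a, si in zip(alpha, states)))

# Toy trajectory: a 4-state deterministic chain with reward 1 per transition
# and an absorbing final state (invented data, not from the paper).
states = [[0.0], [1.0], [2.0], [3.0]]
next_states = [[1.0], [2.0], [3.0], [3.0]]
rewards = [1.0, 1.0, 1.0, 0.0]
alpha, K, Kn = kernel_lstd(states, next_states, rewards, gamma=0.9)
print(value([0.0], states, alpha))
```

In an anomaly-detection setting such as the one the abstract describes, the predicted value of an observed sequence could then be thresholded to flag behavior that deviates from the normal-trace model.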