Extracting linguistic quantitative rules from supervised neural networks

  • Authors:
  • W. Wettayaprasit; C. Lursinsap; C. H. Chu

  • Affiliations:
  • Dept. of Comp. Sci., Fac. of Sci., Prince of Songkla Univ., Songkhla, 90112, Thailand (Correspd. E-mail: wwettayaprasit@yahoo.com) and Adv. Vir. and Intell. Comp. (AVIC) Res. Ctr., Dept. of Math., ...
  • Advanced Virtual and Intelligent Computing (AVIC) Research Center, Department of Mathematics, Faculty of Science, Chulalongkorn University, Bangkok, 10330, Thailand
  • The Center for Advanced Computer Studies (CACS), University of Louisiana at Lafayette, Lafayette, LA, 70504, USA

  • Venue:
  • International Journal of Knowledge-based and Intelligent Engineering Systems
  • Year:
  • 2004

Abstract

Extracting meaningful and understandable knowledge from a trained neural network is one of the ultimate goals in the area of data mining. In this paper, we propose a technique for extracting such knowledge with little mathematical elaboration, based on projecting activation intervals onto each dimensional axis and refining them with certainty factors. The knowledge is captured in the form of if-then rules whose premises are conjunctions of input feature intervals expressed as linguistic quantities. Our experiments show that the extracted rules are accurate when compared with the trained neural network.
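
The following is a minimal illustrative sketch of the general idea, not the authors' algorithm: it assumes a small scikit-learn classifier on the Iris data, projects the inputs that the network assigns to each class onto each feature axis to obtain intervals, maps each interval to a coarse linguistic label, and attaches a simple certainty factor. The helper names (linguistic_label, feature_names) and the low/medium/high discretization are hypothetical choices made for this example.

```python
# Hypothetical sketch of interval-based rule extraction with linguistic labels and
# certainty factors; it illustrates the abstract's idea, not the paper's exact method.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

def linguistic_label(low, high, feat_min, feat_max):
    """Map an interval's midpoint to a coarse linguistic quantity (low/medium/high)."""
    mid = (low + high) / 2.0
    third = (feat_max - feat_min) / 3.0
    if mid < feat_min + third:
        return "low"
    if mid < feat_min + 2 * third:
        return "medium"
    return "high"

X, y = load_iris(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
pred = net.predict(X)

feature_names = ["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
for cls in np.unique(y):
    # Project the samples the network assigns to this class onto each feature axis.
    members = X[pred == cls]
    if len(members) == 0:
        continue
    premises = []
    for j, name in enumerate(feature_names):
        low, high = members[:, j].min(), members[:, j].max()
        label = linguistic_label(low, high, X[:, j].min(), X[:, j].max())
        premises.append(f"{name} is {label} [{low:.1f}, {high:.1f}]")
    # Certainty factor: fraction of samples falling inside all premise intervals
    # whose true label matches the rule's conclusion.
    inside = np.all((X >= members.min(axis=0)) & (X <= members.max(axis=0)), axis=1)
    cf = (y[inside] == cls).mean() if inside.any() else 0.0
    print(f"IF {' AND '.join(premises)} THEN class {cls}  (CF={cf:.2f})")
```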