Adaptive Web Document Classification with MCRDR
ITCC '04 Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC'04), Volume 2
The World Wide Web (Web) was not designed to ‘push’ information to clients; rather, clients ‘pull’ information from servers (providers). This model is inefficient for promptly delivering information from frequently changing sources. Recently, XML-based ‘RSS’ feeds, typically published by weblogs, have become popular because they simulate real-time delivery through automated client pull. However, this is still inefficient, because people must manually manage large quantities of Web information, which causes information overload. Moreover, most current Web information is still published in HTML rather than XML. Our automated information mediator (AIMS) collects new information from both traditional HTML sites and XML sites and alleviates the information overload problem by combining narrowcasting on the server side with information filtering on the client side, using Multiple Classification Ripple-Down Rules (MCRDR) knowledge acquisition for document classification. This approach overcomes the traditional knowledge acquisition bottleneck through an exception-based knowledge representation and case-based validation and verification. As a result, the system allows domain experts, and even naive end users, to manage their own knowledge and personalize their agent system without help from a knowledge engineer.
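To make the filtering mechanism concrete, the following is a minimal sketch of MCRDR-style document classification: rules form a tree in which a child rule is an exception that refines (and overrides) its parent's conclusion, and several top-level rules may fire at once, yielding multiple classifications. The keyword conditions, rule structure, and labels here are illustrative assumptions, not the paper's actual knowledge base.

```python
# Sketch of Multiple Classification Ripple-Down Rules (MCRDR) for
# keyword-based document classification. All rules and labels below
# are hypothetical examples, not the AIMS knowledge base.

class Rule:
    def __init__(self, keywords, conclusion=None):
        self.keywords = set(keywords)   # condition: all keywords present
        self.conclusion = conclusion    # classification label (None = root)
        self.exceptions = []            # child rules refining this rule

    def fires(self, doc_words):
        return self.keywords <= doc_words

def classify(rule, doc_words, conclusions):
    """Collect conclusions; a firing exception overrides its parent."""
    refined = False
    for child in rule.exceptions:
        if child.fires(doc_words):
            classify(child, doc_words, conclusions)
            refined = True
    if not refined and rule.conclusion is not None:
        conclusions.add(rule.conclusion)

# Tiny knowledge base: the root always fires; in MCRDR an expert adds
# exception rules case by case, validated against stored cornerstone cases.
root = Rule([], None)
sports = Rule(["football"], "Sports")
finance = Rule(["market"], "Finance")
sports.exceptions.append(Rule(["football", "transfer", "fee"], "Sports/Business"))
root.exceptions += [sports, finance]

doc = set("football transfer fee agreed after market talks".split())
labels = set()
classify(root, doc, labels)
print(sorted(labels))   # → ['Finance', 'Sports/Business']
```

Note how the document receives two independent classifications, and how the more specific transfer-fee rule suppresses the generic "Sports" conclusion; this exception structure is what lets end users correct misclassifications incrementally without a knowledge engineer.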