In an ideal world, a legal knowledge-based system would be evaluated by an evaluator with expertise in both the legal domain and software engineering evaluation processes. In practice, however, this task is typically undertaken by legal professionals, who may lack software engineering expertise; where software engineers carry this responsibility, they may lack legal domain knowledge. We extend the ISO 14598 evaluation process with a novel evaluation framework that satisfies three important requirements: elements of existing software engineering evaluation methodologies are integrated and subsumed, making them more readily accessible to the evaluator; requirements specific to the legal domain are included; and the intended users do not necessarily require extensive software engineering expertise. The framework emphasises the importance of the evaluation context and goals, and integrates these with system properties and contingency guidelines to suggest appropriate evaluation criteria. The evaluation process supports the selection of criteria by manual, semi-automated or automated methods, and a design for an architecture supporting this choice of criteria is presented. Two evaluations conducted using the process and its associated framework are discussed. With ongoing research and development in the field of Artificial Intelligence and Law, the need for an easily accessible, specialised evaluation methodology is apparent. Such a methodology would assist legal professionals in framing evaluations of legal knowledge-based systems and help software engineers understand the evaluation requirements of legal professionals.
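The automated path of the criteria-selection step described above can be pictured as a simple matching process: the evaluation context and goals, together with observed system properties, are checked against contingency guidelines, and each matching guideline contributes its evaluation criteria. The following is a minimal sketch under assumed names; `EvaluationContext`, `suggest_criteria`, and the example guideline entries are hypothetical illustrations, not the framework's actual implementation.

```python
# Hypothetical sketch of automated criteria suggestion: contingency
# guidelines map context/goal/property triggers to evaluation criteria.
from dataclasses import dataclass, field


@dataclass
class EvaluationContext:
    """Goals of the evaluation plus observed system properties."""
    goals: set = field(default_factory=set)       # e.g. {"certification"}
    properties: set = field(default_factory=set)  # e.g. {"web-based"}


# Each guideline pairs a trigger set with the criteria it suggests
# (illustrative entries only).
GUIDELINES = [
    ({"certification"}, {"correctness of legal inferences",
                         "traceability to legal sources"}),
    ({"usability"}, {"comprehensibility to legal professionals"}),
    ({"web-based"}, {"response time", "availability"}),
]


def suggest_criteria(ctx: EvaluationContext) -> set:
    """Union of criteria from every guideline whose trigger set
    is satisfied by the context's goals and properties."""
    triggers = ctx.goals | ctx.properties
    suggested = set()
    for required, criteria in GUIDELINES:
        if required <= triggers:  # all trigger conditions present
            suggested |= criteria
    return suggested
```

A semi-automated mode would present this suggested set to the evaluator for review and adjustment, while a fully manual mode would leave selection to the evaluator's judgement.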