This paper examines the presence of performance evaluation in works published at ICAIL conferences since 2000. As such, it is a self-reflexive, meta-level study that investigates the proportion of works whose contribution includes some form of performance assessment; it also reports the categories of evaluation present and their degree. In addition, the paper compares current trends in performance measurement with those of earlier ICAILs, as reported in Hall and Zeleznikow's work on the same topic (ICAIL 2001). The paper also develops an argument for why evaluation is imperative in formal Artificial Intelligence and Law publications such as the ICAIL proceedings. It underscores the importance of answering the questions: how good is the system? how reliable is the approach? or, more succinctly, does it work? The paper argues that the presence of a performance-based ethic within a scientific research community is a sign of maturity and of essential scientific rigor. Finally, the work references an evaluation checklist and presents a set of recommended best practices for the inclusion of evaluation methods going forward.