A comparison of machine learning with human judgment

  • Authors:
  • Michael W. Kattan; Dennis A. Adams; Michael S. Parks

  • Venue:
  • Journal of Management Information Systems - Special section: Research in integrating learning capabilities into information systems
  • Year:
  • 1993

Abstract

This paper compares human judgment with machine learning in a check-processing context. An experiment was conducted comparing teams of subjects with three commercially available machine learning algorithms: recursive partitioning, ID3, and a back-propagation neural network. The statistical technique of discriminant analysis was also included in the comparison. The subjects were allowed to induce rules from historical data under ideal human conditions, such as adequate time and the opportunity to sort the data as desired. The results on multiple holdout samples indicate that human judgment, recursive partitioning, and the ID3 algorithm were equally accurate, and all were more accurate than the back-propagation neural network. Subjects who chose mixed judgment strategies were more accurate than those using noncompensatory strategies; no subjects chose compensatory strategies. Large decision trees were not more accurate than smaller ones. There appeared to be a time threshold for humans to form accurate decision rules. Holdout-sample accuracy tended to increase with primary-sample accuracy. ID3 built larger trees than either humans or recursive partitioning did. The conclusion of this research is that a knowledge engineer faced with available historical data for a classification problem should not spend time discerning rules by hand, since doing so will take longer and be no more accurate than a good learning tool. Knowledge of these tools will be the requisite skill for the knowledge engineer of the 1990s. Implications for IS design and further research are discussed.
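
The experimental comparison described above can be illustrated with a rough modern sketch. The block below uses scikit-learn stand-ins (DecisionTreeClassifier for recursive partitioning and ID3, MLPClassifier for the back-propagation network, LinearDiscriminantAnalysis for discriminant analysis) and a synthetic data set in place of the check-processing data; it shows only the train-on-primary-sample, score-on-holdout methodology, not the paper's actual 1993 tools, data, or results.

```python
# Minimal sketch of a holdout-accuracy comparison, assuming modern scikit-learn
# stand-ins for the algorithms named in the abstract; the synthetic data set is
# a placeholder for the historical check-processing data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

# Synthetic binary classification problem in place of the historical data.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           random_state=0)

# Split into a primary (training) sample and a holdout sample, mirroring the
# experiment's evaluation design.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "decision tree (recursive partitioning / ID3 stand-in)":
        DecisionTreeClassifier(random_state=0),
    "back-propagation neural network (MLP stand-in)":
        MLPClassifier(max_iter=1000, random_state=0),
    "discriminant analysis":
        LinearDiscriminantAnalysis(),
}

# Fit each model on the primary sample and report holdout accuracy.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_hold, model.predict(X_hold))
    print(f"{name}: holdout accuracy = {acc:.3f}")
```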