Learning to use word processors: problems and prospects
ACM Transactions on Information Systems (TOIS)
Using Online Catalogs: A Nationwide Survey
Learning text editor semantics by analogy
CHI '83 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Mental models and problem solving in using a calculator
CHI '83 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Reducing manual labor: An experimental analysis of learning aids for a text editor
CHI '82 Proceedings of the 1982 Conference on Human Factors in Computing Systems
Why do some people have more difficulty learning to use an information retrieval system than others?
SIGIR '87 Proceedings of the 10th annual international ACM SIGIR conference on Research and development in information retrieval
Transparent Queries: investigating users' mental models of search engines
Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval
Conceptualizing institutional repositories: using co-discovery to uncover mental models
Proceedings of the third symposium on Information interaction in context
Active support for query formulation in virtual digital libraries: a case study with DAFFODIL
ECDL'05 Proceedings of the 9th European conference on Research and Advanced Technology for Digital Libraries
Barriers to task-based information access in molecular medicine
Journal of the American Society for Information Science and Technology
What would 'google' do? users' mental models of a digital library search engine
TPDL'12 Proceedings of the Second international conference on Theory and Practice of Digital Libraries
An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks; performance on simple tasks was equal. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal across training conditions and genders; the only differences found between those passing and failing the benchmark test were in academic major and frequency of library use.