Organizations today face great pressure to maximize the benefits from their investments in information technology (IT). They are challenged not just to use IT, but to use it as effectively as possible. Understanding how to assess the competence of users is critical to maximizing the effectiveness of IT use. Yet the user competence construct is largely absent from prominent technology acceptance and fit models, poorly conceptualized, and inconsistently measured. We begin by presenting a conceptual model of the assessment of user competence to organize and clarify the diverse literature regarding what user competence means and the problems of assessment. As an illustrative study, we then report the findings from an experiment involving 66 participants. The experiment was conducted to compare empirically two measurement methods (paper-and-pencil test versus self-report questionnaire), across two different types of software, or domains of knowledge (word processing versus spreadsheet packages), and two different conceptualizations of competence (software knowledge versus self-efficacy). The analysis shows statistical significance in all three main effects: how user competence is measured, what is measured, and what measurement context is employed all influence the measurement outcome. Furthermore, significant interaction effects indicate that different combinations of measurement methods, conceptualizations, and knowledge domains produce different results. The concept of frame of reference, and its anchoring effect on subjects' responses, explains a number of these findings. The study demonstrates the need for clarity both in defining what type of competence is being assessed and in drawing conclusions regarding competence based upon the types of measures used.
Since the results suggest that the definition and measurement of the user competence construct can change the ability score being captured, existing information systems (IS) models of usage must incorporate the concept of an ability rating with care. We conclude by discussing how user competence can be incorporated into the Task-Technology Fit model, as well as additional theoretical and practical implications of our research.