A critiquing-based recommender system acts like an artificial salesperson: it engages users in a conversational dialog in which they give feedback, in the form of critiques, on the sample items shown to them. This feedback enables the system to refine its model of the user's preferences and its prediction of what the user truly wants, so that it can recommend products more likely to interest the user in the next interaction cycle. In this paper, we report an extensive investigation comparing various approaches to designing critiquing opportunities in such recommender systems. Specifically, we investigated two design elements essential to a critiquing-based recommender system: critiquing coverage (one vs. multiple items returned for critiquing in each recommendation cycle) and critiquing aid (system-suggested critiques, i.e., a set of critique suggestions for users to select from, vs. a user-initiated critiquing facility that lets users create critiques on their own). Through a series of three user trials, we measured how real users reacted to systems with varied configurations of these two elements. In particular, we found that letting users critique one of multiple items (as opposed to just one) significantly increases users' decision accuracy (particularly in the first recommendation cycle) and reduces their objective effort (in later critiquing cycles). As for critiquing aids, a hybrid design combining system-suggested critiques with user-initiated critiquing support performed best at inspiring users' decision confidence and increasing their intention to return, compared with either approach alone.
Therefore, the results of our studies shed light on design guidelines for finding the sweet spot between user initiative and system support when developing an effective, user-centric critiquing-based recommender system.
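The interaction cycle described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the product data, feature names, and utility weights are assumptions, not taken from the paper): a user-initiated critique filters the candidate set relative to the currently shown item, while a system-suggested critiquing aid proposes only those critique directions that still leave matching candidates.

```python
# Minimal sketch of one critiquing cycle (hypothetical data and names).
# A critique is a (feature, direction) pair relative to the shown item.

def apply_critique(candidates, shown, feature, direction):
    """Keep candidates satisfying the critique relative to the shown item."""
    ref = shown[feature]
    if direction == "lower":
        return [c for c in candidates if c[feature] < ref]
    if direction == "higher":
        return [c for c in candidates if c[feature] > ref]
    raise ValueError(f"unknown direction: {direction}")

def suggest_critiques(candidates, shown, features):
    """System-suggested aid: only directions that leave some candidates."""
    return [(f, d)
            for f in features
            for d in ("lower", "higher")
            if apply_critique(candidates, shown, f, d)]

def recommend(candidates, utility):
    """Next recommendation: candidate maximizing a simple utility model."""
    return max(candidates, key=utility)

laptops = [
    {"id": "A", "price": 1200, "weight": 2.3},
    {"id": "B", "price": 900,  "weight": 1.8},
    {"id": "C", "price": 700,  "weight": 2.9},
]
shown = laptops[0]  # current recommendation

# Suggested critiques the system could display for the shown item
suggestions = suggest_critiques(laptops, shown, ("price", "weight"))

# User-initiated critique: "cheaper than the one shown"
remaining = apply_critique(laptops, shown, "price", "lower")

# Toy utility: prefer cheap and light (the weights are arbitrary)
best = recommend(remaining, lambda c: -c["price"] - 300 * c["weight"])
print(best["id"])  # -> B
```

In a hybrid design of the kind the abstract favors, both paths are available in each cycle: the user may pick one of the displayed `suggestions` or construct a critique directly, and the refined candidate set feeds the next recommendation.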