Navigating between chaos and bureaucracy: backgrounding trust in open-content communities
SocInfo'12: Proceedings of the 4th International Conference on Social Informatics
In this paper I use philosophical accounts of the relationship between trust and knowledge in science to examine this relationship on the Web. I argue that trust and knowledge are fundamentally entangled in our epistemic practices. Yet despite this entanglement, we do not trust blindly; we use knowledge to rationally place or withdraw trust. We draw on knowledge about the sources of epistemic content, as well as general background knowledge, to assess epistemic claims. Hence, although we may trust by default, we remain, and should remain, epistemically vigilant: we watch, and need to watch, for signs of insincerity and dishonesty in our attempts to know. A fundamental requirement for such vigilance is transparency: to critically assess epistemic agents, content, and processes, we must be able to access and address them. On the Web, this demand for transparency becomes particularly pressing when (a) trust is placed in unknown human epistemic agents and (b) trust is placed in non-human agents, such as algorithms. I give examples of the entanglement between knowledge and trust on the Web and draw conclusions about the forms of transparency such systems need in order to support epistemically vigilant behaviour, which empowers users to become responsible and accountable knowers.