Trust can be viewed as an instrument both for an agent selecting the right partners in order to achieve its own goals (the point of view of the trustier) and for an agent being selected by other potential partners in order to establish cooperation or collaboration with them and to take advantage of the accumulated trust (the point of view of the trustee). In our previous works we focused our attention mainly on the first point of view. In this paper we analyze trust as the agents' relational capital. Starting from the classical dependence network (in which needs, goals, abilities and resources are distributed among the agents) with potential partners, we introduce an analysis of what it means for an agent to be trusted and how this condition can be strategically used by that agent for achieving its own goals, that is, why being trusted represents a form of power. Although there is great interest in the literature about ‘social capital' and its powerful effects on the well-being of both societies and individuals, it is often not clear enough what the object under analysis is. Individual trust capital (relational capital) and collective trust capital should not only be disentangled; their relations are also quite complicated and even conflicting. To overcome this gap, we propose a study that first attempts to understand what trust is as capital of individuals: in which sense “trust” is a capital; how this capital is built, managed and saved; and, in particular, how this capital is the result of the others' beliefs and goals. We then aim to analytically study the cognitive dynamics of this object.
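The dependence network described above can be illustrated with a small sketch. This is not the authors' formal model, only a minimal illustration under simplifying assumptions: each agent is reduced to a set of goals, a set of abilities, and numeric trust evaluations in [0, 1]; an agent depends on another when the other can perform a task it needs but cannot perform itself; and a trustee's relational capital is approximated as the trust accumulated from the agents that depend on it. All names (`Agent`, `depends_on`, `relational_capital`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    goals: set = field(default_factory=set)      # tasks the agent wants achieved
    abilities: set = field(default_factory=set)  # tasks the agent can perform itself
    trust: dict = field(default_factory=dict)    # trust[other_name] in [0, 1]

def depends_on(a: "Agent", b: "Agent") -> bool:
    """a depends on b if b can perform some task that a needs
    but cannot perform with its own abilities."""
    return bool((a.goals - a.abilities) & b.abilities)

def relational_capital(trustee: "Agent", society: list) -> float:
    """Rough proxy for the trustee's relational capital: the trust
    it has accumulated from agents that depend on it, i.e. its
    'power' of being selected as a partner."""
    return sum(a.trust.get(trustee.name, 0.0)
               for a in society
               if a is not trustee and depends_on(a, trustee))

# Example: alice needs a translation she cannot do herself; bob can.
alice = Agent("alice", goals={"translate"}, trust={"bob": 0.9})
bob = Agent("bob", abilities={"translate"})
print(depends_on(alice, bob))                    # True
print(relational_capital(bob, [alice, bob]))     # 0.9
```

On this toy reading, an agent can raise its relational capital either by acquiring abilities that others need (widening the dependence network) or by improving others' trust evaluations of it, which mirrors the strategic use of being trusted discussed in the abstract.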