In her paper "Trusting social robots", Paula Sweeney, lecturer at the University of Aberdeen, discusses the need for a more robust account of our ability and willingness to trust social robots. The author argues that existing accounts of trust, and of trusting social robots in particular, are inadequate. The façade, or element of deception, inherent in our engagement with social robots both facilitates trust and threatens to undermine it. The author draws on the fictional dualism model of social robots to argue that trust in social robots, unlike trust in humans, must rest on an independent judgment of product reliability.
The author identifies the robots' ability to mimic human-to-human social behavior as the apparent facilitator of the trust we bestow on them. Between humans, however, trust is facilitated by an expected link between behavior (the outer) and attitudes (the inner). Because this link is missing in the case of social robots, our willingness to trust them may be undermined by what amounts to a façade of agency. What we need, and what the author provides, is a way to hold our attitude of trust accountable to both the robot's social behavior (the outer) and its functionality and design (the inner). In this way, we can temper a judgment of trust based on the appearances of social robots with a judgment of reliability regarding the robot as a product.
In summary, Paula Sweeney’s article “Trusting social robots” highlights the need for a more robust account of our ability and willingness to trust social robots. The author argues that existing accounts of trust and of trusting social robots are inadequate, and we need more sophisticated models of trust that take into account both the outer and inner aspects of social robots.