THE BROAD VIEW

Could more human-like AI undermine trust?

Designing modern AI in a way that makes it appear to have more agency could backfire by making people trust the technology less

    A self-driving robotaxi in Los Angeles, California. Human trust in AI is multifaceted and significantly influenced by how agentic the AI is perceived to be. PHOTO: EPA-EFE
    Published Fri, Nov 22, 2024 · 08:00 AM

    FROM virtual assistants to self-driving cars to medical imaging analysis, the use of modern artificial intelligence (AI) technologies based on deep-learning architectures has increased dramatically in recent years. But what makes us put our faith in such technologies to, say, get us safely to our destination without running off the road? And why do we perceive some forms of AI to be more trustworthy than others?

    It may depend on how much agency we believe the technology possesses. Modern AI systems are often perceived as agentic (that is, displaying the capacity to think, plan and act) to varying degrees. In our research, recently published in the Academy of Management Review, we explore how different levels of perceived agency affect human trust in AI.

    Modern AI appears agentic

    The perception of agency is crucial for trust. In human relationships, trust is built on the belief that the person you trust can act independently and make their own choices.
