Trust in Healthcare AI Can Be Hurt Intentionally or Innocuously
The race for supremacy among major artificial intelligence (AI) providers, including OpenAI, Anthropic, and Google, is approaching peak intensity. Alongside this growth, concerns about customer trust and distrust have become paramount. These concerns are well founded: our own work suggests that, faced with the ambiguity and uncertainty that typically accompany a new technology such as healthcare AI, customers and users rely heavily on their trust in the provider to dampen perceived risk and gain peace of mind.
Healthcare buyers' and users' trust in their AI providers will likely drive their consumption of, and long-term loyalty to, specific providers. Companies that stay mindful of how their actions and strategies shape user trust while developing and deploying AI products can avoid damaging missteps and course-correct, benefiting users and ultimately fostering adoption of their products.
Contrary to common belief, trust in companies is not depleted only through wanton and Machiavellian behavior. Trust can be hurt, and distrust can be built, for reasons ranging from relatively innocuous missteps to deliberately bad actions.
Below, I outline a range of factors that can affect trust among AI providers’ buyers and clinical users. This framework can be extended further down the user chain to patients and their families.
Trust among nascent clinical users of AI can be affected by the reputational halo from AI-related news in…
