Sixty percent of respondents said they would feel uncomfortable if their healthcare provider relied on AI to diagnose and treat them, according to a survey conducted by the Pew Research Center.
The 2022 survey, which included responses from more than 11,000 U.S. adults, found that 66% of women said they would feel uncomfortable if their provider relied on AI for their medical care, while 54% of men expressed discomfort.
Still, 38% of respondents said AI's use to diagnose and treat patients would generally lead to better patient health outcomes, while 33% said it would lead to worse results. Twenty-seven percent said it would make little difference.
Among respondents who saw a problem with racial and ethnic bias in healthcare, 51% said the problem would improve if AI were used to diagnose disease and recommend treatments, while 15% said its use would make it worse. One-third said the problem of bias and unfair treatment would stay about the same.
At the same time, 57% of respondents said using AI to diagnose disease and recommend treatment would worsen the patient-provider relationship. However, 40% said AI in healthcare would reduce provider mistakes.
Three-quarters of the adults surveyed expressed concern that providers will adopt AI too quickly, before fully understanding the risks to patients, and 37% said using AI would worsen the security of healthcare records.
Regarding specific use cases of AI, 79% of respondents said they would not want to use an AI chatbot if they sought mental health support, while only 20% said they would.
Among respondents who were aware of mental health chatbots before the survey, 71% said they would not want to use an AI chatbot for their own mental healthcare. Meanwhile, 46% of U.S. adults said AI chatbots should only be used by people who are also seeing a therapist, and 28% said they should not be available at all.
WHY IT MATTERS
Investment in AI-based tools for healthcare has increased since the COVID-19 pandemic.
Still, doctors and patients need to build trust in the effectiveness and accuracy of machine learning models and be assured that AI tools are free of bias.