June 14, 2023 / Thought Leadership

Consumer Acceptance of AI in Healthcare

Michele Hart-Henry, Managing Director, Connelly Partners Health

Are We Ready for the Future?

The headlines from the most recent global healthcare meetings, such as the annual Healthcare Information and Management Systems Society (HIMSS) meeting or The MedTech Forum, are notable for their focus on artificial intelligence (AI) and its application across the care continuum. These events and others like them feature presentations on AI developments, new collaborations built around AI tools, and partnerships among health technology companies introducing new uses of AI in care settings.

Missing from those headlines and articles, however, is any conversation about how patients and consumers understand and view AI in healthcare.

Back to the Future?

AI is already in use in multiple healthcare arenas. Algorithms help predict the onset or potential severity of disease, read and interpret radiology studies, screen for complications from diseases like diabetes and assist in the drug development process, just to name a few typical applications. However, the conversations are now expanding to include generative AI.

As described by TechTarget, generative AI is “a type of artificial intelligence technology that can produce various types of content including text, imagery, audio and synthetic data. The recent buzz around generative AI has been driven by the simplicity of new user interfaces for creating high-quality text, graphics and videos in a matter of seconds.”

Seeking to learn more, we recently went to ChatGPT (GPT-4) and asked: “How do patients feel about AI in healthcare?” The answer generated by the technology:

“The use of AI in healthcare is a relatively new and rapidly evolving field, so there is no single answer to how patients feel about it. However, research suggests that patients generally have a positive attitude towards AI use in healthcare, especially when it comes to improving the accuracy and speed of diagnosis, treatment, and disease prevention.”

Based on this AI-generated answer, all is well in healthcare, and patients generally accept artificial intelligence in their care. But is that really the case? Do we, as healthcare consumers and patients, know enough about how AI is used in our care? You may be surprised by some of the research findings.
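The exchange above came from the ChatGPT web interface, but for readers who want to reproduce it programmatically, here is a minimal sketch using the OpenAI Python SDK (v1.x). The model name and the assumption that an API key is available in the OPENAI_API_KEY environment variable are ours, not part of the original exchange.

```python
from openai import OpenAI

# Assumes an API key is set in the OPENAI_API_KEY environment variable.
client = OpenAI()

# Ask the same question the article posed in the ChatGPT web interface.
response = client.chat.completions.create(
    model="gpt-4",  # model name is our assumption; any chat-capable model works
    messages=[
        {"role": "user", "content": "How do patients feel about AI in healthcare?"}
    ],
)

print(response.choices[0].message.content)
```

Because the model's output is generated fresh each time, the wording of the answer will vary from run to run, which is worth keeping in mind when citing such responses.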

AI Can Do Anything You Can Do Better?

One recent study published in the journal JAMA Internal Medicine reported that evaluators actually preferred the responses of a generative AI chatbot to those written by physicians. Researchers took 195 patient questions posted on a public social media forum, along with the physicians’ original answers, and used generative AI to create new answers to the same questions. A team of licensed healthcare professionals then compared the two sets of responses, choosing “which response was better” and judging both “the quality of the information provided” and “the empathy or bedside manner provided.” In the large majority of cases, the evaluators preferred the generative AI answers, rating them as higher in quality and significantly higher in empathy.

If generative AI is rated more empathetic and thorough than physicians in answering questions, are healthcare consumers more likely to view its use favorably? Not if it’s a replacement for actual providers, according to a study published in The Lancet Digital Health. In a review of more than twenty studies from eight countries, researchers found that AI was more acceptable in care settings when it was used as a support tool rather than a substitute for providers. In many of the reviewed studies, participants envisioned AI as a second opinion or as a means to simplify notes or instructions provided by their caregivers. However, participants expressed concern about depersonalization, lack of privacy and loss of provider control in health decision-making.

One application of AI as a supplement to providers is the recent collaboration between Microsoft and Epic, which brings Azure OpenAI into Epic’s electronic health record (EHR) platform. In this use case, the generative AI will fill in missing information in patient records, but it could also suggest diagnoses and be used to predict disease outcomes based on analysis of historical data. According to the companies’ joint announcement, the integration “is meant to increase providers’ productivity, reduce administrative burden and improve care by giving clinicians more time to spend with their patients.”
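To make the general pattern concrete, here is a hypothetical sketch of what a documentation-assist call might look like, written against the Azure OpenAI Python SDK (v1.x). The endpoint, credential, deployment name and visit note are invented placeholders; this is our illustration of the approach, not Epic’s actual integration.

```python
from openai import AzureOpenAI

# Hypothetical illustration only -- not Epic's actual integration.
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder endpoint
    api_key="YOUR_AZURE_OPENAI_KEY",                             # placeholder credential
    api_version="2024-02-01",
)

# Invented example of a terse clinician note pulled from a patient record.
visit_note = (
    "Pt c/o intermittent chest tightness x2 wks; denies SOB; "
    "FHx CAD; BP 142/90; advised stress test, f/u 2 wks."
)

response = client.chat.completions.create(
    model="gpt-4",  # name of the model deployment in the Azure resource (placeholder)
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the visit note below as a plain-language summary for the "
                "patient record, and flag anything a clinician should review."
            ),
        },
        {"role": "user", "content": visit_note},
    ],
)

print(response.choices[0].message.content)  # draft text a clinician would still review
```

The key design point, consistent with the companies’ framing, is that the model drafts text for a clinician to review rather than acting on its own.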

Proceed With Caution

Last month, the World Health Organization (WHO) issued a call for what it refers to as the “safe and ethical application of AI” for health, citing concerns that “precipitous adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.”

But as developments like the partnership between Microsoft’s Azure OpenAI and Epic become more mainstream, how will they affect consumers and healthcare consumerism? According to a recent Pew Research Center survey, 60% of Americans indicated discomfort with providers relying on AI in their own healthcare. A primary driver of this view is skepticism that AI will improve health outcomes: only 38% of those surveyed said that using AI to diagnose disease and recommend treatments would lead to better health outcomes, while 33% said it would lead to worse outcomes and 27% said it wouldn’t make much difference.

Interestingly, in that same Pew survey, 51% of respondents who indicated concern about health equity said that AI could help reduce bias and unfair treatment if it were used to diagnose disease and recommend treatments for patients. Survey respondents also believe that AI use in healthcare could reduce medical mistakes.

Pew survey respondents also expressed concern about the impact of AI on the personal connection between a provider and a patient. With increased consumer involvement in a patient-centered healthcare model, patients might question clinicians’ decisions and want to be informed whether those decisions are based on AI recommendations.

In Ireland, a country championing AI and its benefits to the economy, the government is building a coordinated approach to help drive public trust, including creating “an AI ambassador to promote awareness among the public and businesses of the potential that AI offers, serving as a champion of AI as a positive force for the economy and society, and emphasizing an ethical approach.”

Irish acceptance of AI in all facets of its economy, including healthcare, is driven by the country’s early identification of deep tech, like AI, as a driver of economic growth. But, as in the US, there are recently reported concerns about privacy, misuse, the erosion of personal relationships and potential medical errors arising from the use of AI in patient care.

Brand and Broader Implications

When discussing AI in the context of health brands, it’s important to be transparent and clear about what AI is, how it is used and how it can benefit health organizations, providers and consumers. Doing so requires: 

  1. Defining AI: Begin by explaining what AI is and how it works in simple, accessible language. It’s important to avoid jargon and technical terms that may confuse audiences.
  2. Focusing on Benefits: Highlight the benefits that AI can bring, such as improved diagnosis, more accurate treatment planning, and better outcomes. It’s important to emphasize that AI is a tool that can help provide better care rather than a replacement for humans, clinicians or experts.
  3. Addressing Concerns: Acknowledge concerns about AI, such as fears of job loss or privacy concerns. Be transparent about how AI is used and what data is being collected and used.
  4. Being Honest: It’s important to be honest about the limitations of AI and its potential benefits. AI is not a magic bullet that can solve all healthcare problems; acknowledging this is important.
  5. Providing Examples or Success Stories: Use real-world examples to illustrate how AI is used in healthcare and makes a difference for organizations and patients. This can help your audiences understand the potential of AI and how it can benefit them.

Overall, health brands should strive to be transparent, honest and informative about AI, especially since the Pew survey suggests that three-quarters of Americans question the pace of adoption of AI in healthcare, fearing that the system is moving too fast before fully understanding the implications and usefulness of this technology.

This article is co-authored by Michele Hart-Henry, Global Managing Director of Connelly Partners Health and Mary McMahon, Group Strategy Director and Lead for Connelly Partners Health in Ireland.