April 20, 2026
A new Gallup poll shows 1 in 4 Americans are turning to artificial intelligence for health information and advice. But research shows that chatbots sometimes provide inaccurate or misleading information.
American adults are increasingly relying on artificial intelligence chatbots like ChatGPT for health advice, viewing AI as a quick and convenient research tool. But researchers caution that the information they're receiving may not always be accurate.
A new Gallup poll shows 25% of Americans report using AI for health information or advice in the past 30 days, with most of them using it in conjunction with doctor visits or to answer questions. Though AI can help people become more informed, it is also prone to giving out misinformation, prior research shows.
The Gallup poll surveyed 5,000 Americans last fall about how often they used AI, and why they did so.
Most people who used AI for health care advice viewed it as a supplemental tool, the poll found, with 59% using it before visiting their doctors and 56% using it afterward. Most of these respondents said they used it to answer questions about nutrition, exercise and physical health symptoms.
However, some adults said they relied on AI due to cost and access barriers. Fourteen percent of respondents who used AI for health advice said they did so because they could not afford a doctor's visit, and 16% said they used AI because they could not access a health care provider.
The reasons for using AI for health research varied across income and age groups. Younger adults were much more likely than older adults to use AI to conduct research before visiting their doctors. Similarly, 32% of people from households earning less than $24,000 annually said they used AI because they could not afford to visit a doctor. Only 2% of respondents earning at least $180,000 used AI for the same reason.
About half of respondents who used AI said it made them feel more confident about visiting their doctors and asking follow-up questions.
Another poll on AI use for health questions, published last month by KFF, found similar results. It showed that one-third of adults are turning to AI for health information and advice, especially young adults, uninsured people and people of color.
Dr. Karandeep Singh, chief health AI officer at University of California, San Diego, told PBS that AI has the potential to serve as upgraded versions of the online searches people have been conducting for a long time.
"I almost view it like a better entry portal into web search," Singh said. "Instead of someone having to comb through the top, you know, 10, 20, 30 links in a web search, they can now have an executive summary."
Tech companies appear to be taking note of the trend. Both OpenAI's ChatGPT and Microsoft Copilot have rolled out platforms dedicated to answering health-related questions, and OpenAI estimated that more than 230 million people globally ask ChatGPT health and wellness-related questions every week.
However, Duke University researchers have found that even when chatbots like ChatGPT provide accurate information, it may not be medically appropriate.
"The objective (for chatbots) is to provide an answer the user will like," said Monica Agrawal, an assistant professor of biostatistics and bioinformatics. "People like models that agree with them, so chatbots won't necessarily push back."
A study published in the British Medical Journal in February analyzed the responses to health questions from five popular chatbots, including ChatGPT. The researchers found that about half of the responses were ineffective or potentially harmful. The chatbots also consistently expressed certainty in their answers, often without offering disclaimers or caveats.
AI can be an effective tool for seeking general information, looking up medical definitions or cross-checking facts, experts at the Mayo Clinic say. But relying on it for diagnoses or sharing private medical information with it can be dangerous.