A new study finds a troubling problem with popular AI chatbots: when it comes to medical advice, they are often inaccurate or incomplete.

Researchers tested five widely used tools: ChatGPT, Gemini, DeepSeek, Meta AI, and Grok. Each chatbot was prompted with 10 questions covering cancer, vaccines, stem cells, nutrition, and athletic performance. The responses were scored for accuracy and completeness, and for whether they blurred the line between science and misinformation.

The results? Half of the answers to clear, evidence-based questions were rated "somewhat" or "highly" problematic, meaning they could mislead users or even cause harm if followed. The chatbots did best on vaccines and cancer but struggled with stem cells, athletic performance, and nutrition. Open-ended questions led to more inaccurate or misleading answers.

Researchers say the responses were often delivered with confidence, but without important caveats. In some cases, sources were incomplete or even made up.

The authors warn that without "public education, professional training, and regulatory oversight," AI could erode public health.

Source: BMJ Open

Author affiliations: Harbor-UCLA Medical Center, University of Alberta, University of Ottawa, Wake Forest School of Medicine, Loughborough University.