Millions Turn to ChatGPT for Medical Advice as Healthcare Costs Soar, Raising Safety Concerns

BigGo Editorial Team

A new report from OpenAI reveals a staggering trend: over 40 million people globally are now using ChatGPT for healthcare-related queries. This surge coincides with a critical moment in the United States, where the expiration of Affordable Care Act subsidies has left millions facing dramatically higher insurance premiums. As AI steps into the role of an always-available medical advisor, experts are urgently questioning the safety and reliability of this digital shift, highlighting the chatbot's potential for dangerous inaccuracies.

The Scale of AI's Role in Healthcare

According to an OpenAI report shared with Axios, more than 5% of all messages sent to ChatGPT globally concern healthcare topics. With the platform processing an estimated 2.5 billion prompts daily as of mid-2025, this translates to at least 125 million health-related questions answered by the AI every single day. The report, based on anonymized interaction data and user surveys, indicates that people are using the chatbot for a wide range of sensitive tasks. These include describing symptoms to seek diagnoses, asking for treatment advice, navigating insurance denial appeals, and checking medical bills for potential overcharges. Notably, around 70% of these healthcare conversations occur outside standard clinic hours, underscoring the AI's role as a 24/7 resource for users who cannot access human professionals.

Global ChatGPT Healthcare Usage (OpenAI Report Data):

  • Total Users for Medical Advice: Over 40 million people globally.
  • Share of Total Queries: More than 5% of all ChatGPT messages are healthcare-related.
  • Estimated Daily Health Queries: At least 125 million (based on 2.5 billion total daily prompts in July 2025).
  • After-Hours Use: ~70% of health conversations happen outside normal clinic hours.
  • U.S. Insurance Queries: 1.5 to 2 million messages per week.
  • Queries from "Hospital Deserts": Nearly 600,000 per week.
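The daily-query estimate in the figures above follows from straightforward arithmetic on the two numbers OpenAI reported; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the report's daily health-query estimate.
# Inputs are the figures cited in the article: ~2.5 billion total daily
# prompts (mid-2025) and a healthcare share of "more than 5%".
total_daily_prompts = 2_500_000_000
healthcare_share = 0.05  # treated as a floor, per "more than 5%"

daily_health_queries = int(total_daily_prompts * healthcare_share)
print(f"{daily_health_queries:,}")  # 125,000,000
```

Because the 5% figure is a lower bound, the 125 million result is likewise a floor, consistent with the report's "at least 125 million" wording.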

A Perfect Storm: Rising Costs and AI Accessibility

This reliance on AI for medical guidance is intensifying against a backdrop of growing healthcare insecurity, particularly in the U.S. The recent expiration of enhanced Affordable Care Act subsidies has led to a reported 114% average increase in monthly premiums for over 20 million enrollees. A December 2025 Gallup poll found a record low 16% of Americans were satisfied with the healthcare system. With an already complex system bogged down by high administrative costs and confusing coverage, many are turning to AI out of necessity. OpenAI's data shows that in the U.S. alone, ChatGPT handles between 1.5 and 2 million health insurance-related messages weekly. The trend is also pronounced in underserved areas, with nearly 600,000 weekly healthcare queries originating from "hospital deserts"—rural communities more than a 30-minute drive from the nearest medical center.

U.S. Healthcare Context (Early 2026):

  • ACA Subsidy Impact: Expiry led to an average 114% increase in monthly premiums for over 20 million enrollees.
  • System Satisfaction: A December 2025 Gallup poll found only 16% of Americans were satisfied with the healthcare system.
  • Uninsured Population: Approximately 27 million Americans, per CDC data.

The Critical Risks of AI Hallucination and Bias

While AI offers constant availability, it carries significant and potentially dangerous flaws. The core risk lies in "hallucination," where large language models like ChatGPT generate plausible-sounding but completely fabricated information. A July 2025 study posted to arXiv found that leading chatbots, including OpenAI's own GPT-4o and Meta's Llama, responded to medical questions with dangerously inaccurate information 13% of the time. The study's authors warned that "millions of patients could be receiving unsafe medical advice." Furthermore, research has shown these models can produce biased recommendations based on a patient's race, income, or sexual orientation. The legal risks are also mounting: OpenAI faced multiple lawsuits in 2025 alleging ChatGPT contributed to psychological harm and suicides, prompting the company to explicitly restrict the chatbot's provision of medical advice in its usage policies that November.

AI Model Safety Data (arXiv Study, July 2025):

  • Dangerously Inaccurate Response Rate: 13% for both OpenAI's GPT-4o and Meta's Llama models when answering medical questions.

Navigating the Future of AI in Medicine

For now, the consensus among observers is that generative AI should be treated as a preliminary resource, not a definitive authority. It can be useful for understanding basic medical terminology or the broad strokes of insurance processes, much like a more interactive WebMD. However, it is no substitute for a licensed medical professional, especially for diagnosing chronic conditions or treating serious injuries. OpenAI has stated it is working to improve its models' safety in responding to health queries. As the healthcare landscape grows more challenging for consumers, the tension between the convenience of AI and the imperative for accurate, safe medical information will only become more acute. The current situation serves as a stark reminder that while AI can be a powerful tool for information, it requires vigilant human oversight, particularly in matters of life and health.