OpenAI has been hinting this week at AI’s evolving role as a “healthcare ally”—and today, the company unveiled a product that aligns with that vision: ChatGPT Health.
ChatGPT Health is a sandboxed tab within ChatGPT designed to let users ask health-related questions in a more secure and personalized environment, with chat history and memory kept separate from the rest of ChatGPT. OpenAI encourages users to link personal medical records and wellness apps such as Apple Health, Peloton, MyFitnessPal, Weight Watchers, and Function to receive more tailored, context-aware responses. Connecting medical records allows ChatGPT to analyze lab results, visit summaries, and clinical histories; nutrition apps offer dietary insights; Apple Health contributes fitness data including "exercise, sleep, and activity patterns"; and Function provides deeper analysis of lab tests.
For medical record integration, OpenAI is partnering with b.well, which offers backend infrastructure for uploading health data through its network of approximately 2.2 million healthcare providers. Access to ChatGPT Health currently requires signing up for a waitlist, as the service launches with a limited group of early testers. However, OpenAI plans to roll it out gradually to all users regardless of subscription tier.
The company clearly states in its blog post that ChatGPT Health is not intended for diagnosis or treatment, yet it cannot fully control how users act on AI-generated advice once they leave the chat interface. OpenAI acknowledges that underserved rural communities already send nearly 600,000 health-related messages per week on average, and that about 70% of medical conversations in ChatGPT take place outside regular clinic hours.

The risks are not hypothetical. Last August, doctors reported the case of a man who followed ChatGPT's suggestion to replace dietary salt with sodium bromide and was hospitalized with bromism, a condition rarely seen since the early 20th century. Weeks after Google launched its AI Overviews, dangerous recommendations made headlines, including advice to spread glue on pizza. A recent investigation by The Guardian found ongoing problems with harmful health guidance, including false information about liver function tests and women's cancer screenings, as well as inappropriate diet recommendations for pancreatic cancer patients.
In a blog post, OpenAI revealed that over 230 million people globally ask ChatGPT health- and wellness-related questions each week, based on anonymized conversation analysis. The company also shared that over the past two years, it has collaborated with more than 260 physicians who provided feedback on over 600,000 model outputs across 30 key health domains to refine the accuracy and relevance of responses.
According to OpenAI, “ChatGPT can help you understand recent test results, prepare for doctor appointments, get guidance on adjusting your diet and exercise routine, or evaluate trade-offs between different insurance options based on your medical profile.”
One aspect notably downplayed in OpenAI’s announcement is mental health. There have been documented cases of both adults and minors turning to ChatGPT during emotional crises, some with tragic outcomes. In the blog post, OpenAI mentions only in passing that users can customize the health experience to “avoid sensitive topics.” During a press briefing on Wednesday, when asked whether ChatGPT Health would summarize mental health sessions or offer related advice, OpenAI Applications CEO Fidji Simo responded: “Mental health is undoubtedly part of overall health, and we see many people already turning to ChatGPT for these conversations.” She added, “This new product can support any aspect of your health, including mental well-being. We’re deeply focused on ensuring appropriate responses in difficult situations and providing pathways to connect with medical professionals, loved ones, or other support resources.”
The product may also risk exacerbating conditions like hypochondria and other forms of health anxiety. When asked what safeguards are in place to keep ChatGPT Health from worsening symptoms for such users, Simo stated: “We’ve done extensive work fine-tuning the model to ensure responses are informative without being alarmist. If action is needed, we direct users straight to the healthcare system.”