In January 2026, OpenAI introduced ChatGPT Health, a specialized feature integrated into the ChatGPT platform aimed at delivering AI-assisted insights for health and wellness. The tool lets users connect personal health data, such as electronic medical records (via integrations like b.well), Apple Health, MyFitnessPal, and various fitness and wellness apps, to receive more tailored responses. Users can request explanations of lab results, prepare questions for doctor appointments, seek dietary or workout recommendations, and get general guidance on managing their well-being.
OpenAI positions ChatGPT Health as a supportive companion rather than a substitute for professional medical care. The company stresses that it is not designed for diagnosis, treatment, or emergency advice, and consistently urges users to consult qualified healthcare providers for serious concerns.
To address the sensitivity of health information, OpenAI has implemented several enhanced privacy measures:
- Health-related conversations are isolated in a dedicated, compartmentalized section separate from regular chats.
- Data is protected with purpose-built encryption both at rest and in transit, along with additional isolation layers.
- Health data and associated conversations are explicitly excluded from training OpenAI’s core AI models.
- Users retain full control, including options to view, delete, or manage stored memories and connected data sources.
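The isolation and control guarantees described above can be sketched as a toy data model. To be clear, this is an illustrative assumption, not OpenAI's actual architecture; the class and method names (`HealthStore`, `training_export`, and so on) are hypothetical, and real systems would add encryption, access control, and auditing on top of this separation:

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    conv_id: str
    text: str
    # Health conversations are flagged so downstream pipelines can skip them.
    is_health: bool = False

class HealthStore:
    """Toy model of a compartmentalized store: health chats live in a
    separate bucket, are excluded from training exports, and can be
    listed or deleted by the user at any time."""

    def __init__(self) -> None:
        self._regular: dict[str, Conversation] = {}
        self._health: dict[str, Conversation] = {}  # isolated compartment

    def save(self, conv: Conversation) -> None:
        bucket = self._health if conv.is_health else self._regular
        bucket[conv.conv_id] = conv

    def training_export(self) -> list[str]:
        # Health data never leaves its compartment: only regular chats
        # are eligible for model-training pipelines.
        return [c.text for c in self._regular.values()]

    def list_health(self) -> list[str]:
        # User-facing view of stored health conversations.
        return sorted(self._health)

    def delete_health(self, conv_id: str) -> None:
        # User-initiated deletion; a no-op if the id is unknown.
        self._health.pop(conv_id, None)
```

In this sketch, the training-exclusion guarantee falls out of the structure itself: the export path simply never touches the health compartment, rather than relying on per-record filtering.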
Despite these safeguards, the launch has triggered widespread debate among privacy advocates, cybersecurity experts, medical professionals, and researchers regarding both privacy vulnerabilities and potential safety risks.
Major Privacy Concerns
A primary point of contention is the absence of HIPAA (Health Insurance Portability and Accountability Act) protections. Unlike information managed by hospitals, physicians, or other HIPAA-covered entities, data shared with ChatGPT Health falls outside these federal regulations in the U.S. Once connected or uploaded, the data is governed solely by OpenAI's internal policies and commitments, which could change in the future. Advocacy groups and experts have described this as a significant gap, especially given the lack of a comprehensive national privacy law in the U.S. to enforce long-term safeguards. Potential risks include data breaches, unintended leakage, or shifts in company practices that could affect user information.
Even with strong encryption and user controls, critics argue that entrusting highly sensitive health details—particularly those involving mental health, chronic conditions, or substance use—to a non-HIPAA entity inherently carries elevated exposure.
Significant Safety and Reliability Issues
On the safety front, concerns center on the inherent limitations of large language models. AI systems like ChatGPT can produce plausible-sounding but inaccurate information (often called “hallucinations”), leading to inconsistent or unreliable medical guidance. Studies and early evaluations have highlighted risks, such as under-triaging urgent cases or mishandling critical mental health scenarios. Professionals warn that over-reliance on the tool could result in delayed medical attention, false reassurance in emergencies, or misguided self-management of conditions.
While OpenAI continues to refine the feature through gradual rollout and user feedback—initially limited to select users outside regions like the EU, UK, and Switzerland—these limitations underscore the challenges of applying general-purpose AI to personalized health contexts.
Balancing Innovation and Caution
ChatGPT Health represents a bold advancement in making AI a more accessible health companion, potentially empowering users with better information and preparation for healthcare interactions. However, the combination of non-HIPAA coverage and AI’s probabilistic nature has amplified calls for caution.
Experts recommend approaching the feature conservatively: limit connections to non-sensitive data, avoid relying on it for critical decisions, and always prioritize consultations with licensed medical professionals. As AI tools evolve in healthcare, the ongoing tension between convenience, personalization, and robust protection remains a key area of scrutiny.