Can AI Make You Healthier? The Wellness Industry Assesses ChatGPT And Claude’s New Tools
Millions of people are already using artificial intelligence platforms to answer their burning (sometimes literally) health and wellness questions, and the platforms are hungry for even more health and wellness-related visits.
OpenAI is making an aggressive move into the health care arena with plans to launch ChatGPT Health. Announced on Jan. 7, it’s a version of the company’s wildly popular chatbot dedicated to all things health and wellness. Interested parties can sign up for the waitlist.
According to the announcement, ChatGPT Health has been designed in collaboration with physicians to “support, not replace, medical care” and isn’t “intended for diagnosis or treatment.” It’s likely that users will wield the tool to do just that and much more. ChatGPT Health promises to streamline health and wellness information by allowing the secure uploading and integration of medical records and inputs from wellness apps like Apple Health, Function Health and MyFitnessPal, providing a comprehensive picture of health.
Befitting the cutthroat AI arms race, OpenAI has competitors in the battle to become people’s one-stop shop for health and wellness assistance. Anthropic introduced Claude for Healthcare on Jan. 11, its own approach to a health care-focused chatbot offering tools and resources for health care providers, payers and patients.
As AI startups duke it out for health and wellness hegemony, brands have a unique opportunity to reach curious, highly motivated users seeking customized, actionable insights, as chatbots like ChatGPT and Claude increasingly surface products in their answers. At the same time, they are sorting through how best to leverage that opportunity, especially given the risks posed by large language models, which are prone to inaccuracies, a particularly perilous complication in health and wellness.
To make sense of the implications of AI’s advances in health and wellness, for the latest edition of our ongoing series exploring developments relevant to indie beauty, we asked 14 founders, doctors, executives and investors to share their takes on the launch of ChatGPT Health.
- Nathalie Walton, General Manager, BetterSleep
As a parent in my 40s and the GM of BetterSleep, I spend a lot of time thinking about sleep, not just for the millions of people who use our product, but on a very personal level, too: my own, my kids’ and how little of it any of us seem to get.
Like a lot of people, when something feels off at 2 a.m., I’m far more likely to reach for my phone than a medical textbook. That’s already how most of us behave. AI doesn’t change that. It just changes the interface. Instead of scrolling through search results, people are now having conversations with AI.
ChatGPT Health doesn’t create a new problem. It reflects an old one. People have always looked for reassurance, context and answers when they’re tired, stressed or overwhelmed. In the sleep space, we see this constantly. People aren’t looking for a diagnosis. They’re looking for something that helps them calm their mind, understand what might be happening and feel a little less alone in the dark.
Used thoughtfully, AI can help with that. It can explain why stress affects sleep, why routines matter or why waking up at night is so common. At BetterSleep, we think about how to use technology to support healthier sleep habits, not replace professional care or pretend there’s one perfect solution for everyone.
Where it gets tricky is when AI starts to feel like an authority instead of a tool. Sleep and wellness live in gray areas. What works for one person or one family might not work for another. No AI, no app, no article can replace a doctor, a therapist or a deep understanding of your own body. That line needs to stay clear.
From a business perspective, this raises the bar for wellness brands. Consumers will expect experiences that feel more personal and more relevant, but personalization without responsibility is a problem. In wellness, trust matters more than speed. Brands that rush to use AI without grounding it in real expertise and care are going to lose credibility fast.
For me, the question isn’t whether people should use tools like ChatGPT Health. They will. The real question is whether those tools help people sleep better, feel better and make healthier choices over time or whether they add more noise to an already crowded mental space.
As a parent, a consumer and someone building in this space, I want technology that helps me wind down, not wind up. AI has the potential to do that, but only if we use it with humility and clear boundaries. In wellness, being helpful is more important than being impressive.
- Chris Bustamante, Founder, Lushful Aesthetics
ChatGPT is our age's Google search. It pulls various data across all platforms to try to answer your questions versus you digging into Google searches to try to interpret the findings. I think, when it's used as a top-of-funnel exploration tool, it's a great system for consumers to become educated and learn what questions to ask their providers for more clarity and insight on treatments and their overall health, but it should not be used as a standalone tool for making important decisions such as medical treatments and major lifestyle changes.
I think it's good for consumers to keep up with technology and learn to access these tools when appropriate, but not to rely completely on them. Use it as a tool to grow and expand your knowledge base, but, more importantly, to ask stronger questions of professionals who can deliver experience-backed answers.
My concerns are only around those who use ChatGPT as a standalone tool and take its advice over health care professionals and other experts. It has been wrong many, many times and can lead to major risk and confusion if you're not willing to question its feedback.
Since ChatGPT pulls from various websites, ads, socials, media, etc., as businesses, we're learning how to get our information noticed in ChatGPT. Sort of like SEO for Google, we're learning how to get the system to notice us. With my personal brand InjectorChris and my clinic, Lushful Aesthetics, we focus on education-first treatments that are innovative, science-backed and artistic. So, naturally, we want to make sure consumers are accessing the best information in their searches.
In summary, learn to use it as a tool for baseline knowledge and to grasp concepts. Build off of that by writing down higher-level questions to then follow up with your health care provider so you're able to better assess risk/benefit scenarios for future treatments and understand pre/post-care and all components of procedures/health care plans you're navigating.
- Amanda Eilian, Partner, _Able
I’m still on the waitlist, so my thoughts are preliminary, but ChatGPT Health feels like a formal recognition of what 230 million people are already doing: using AI to navigate a health care system that is failing them. In a country that spends more than any developed nation on health care only to net some of the worst outcomes, I’m all for democratizing access to information and guidance.
While there is plenty of handwringing about data privacy, let’s not pretend our records are safe in the current system. Just look at the endless breaches in Epic/MyChart. OpenAI’s promise to use stronger encryption and not train foundation models on health data is actually a huge win for those of us who have been tempted to dump our entire lab history into a chat box.
That said, consumers should “trust but verify.” We should be skeptical of getting medical advice from the same AI that once suggested Elmer’s glue to keep cheese on pizza.
From a startup perspective, the bar for consumer value has officially been raised. I’ve already seen one company forced into a hard pivot because ChatGPT Health essentially vaporized their value proposition overnight.
For those founders too young to remember Zynga and Facebook, it’s an important reminder not to build your growth engine on a platform you don't control. On the flip side, at least one company has come out a winner with OpenAI acquiring Torch, which was building a "unified medical memory" for AI.
Right now, I'm watching the regulatory gray area. ChatGPT Health sits in ambiguous territory. Is it a wellness tool providing general information (largely unregulated) or is it providing personalized medical guidance, which could trigger FDA oversight?
OpenAI appears to be positioning it as the former, which aligns with FDA Commissioner Marty Makary's recent signals about limiting regulation of wellness-focused software. But the line is blurry, and we haven't yet seen how regulators will treat conversational AI that gives health recommendations.
The bigger question is liability. If ChatGPT Health gives harmful advice, who's responsible? This uncertainty creates both opportunity and risk for the market. For venture-backed digital health companies that have invested heavily in regulatory compliance, there's a question of whether that investment still provides a moat, or if the regulatory bar has effectively shifted. That's what will reshape the competitive landscape.
- Monica Cepak, CEO, Wisp
People are already using tools like ChatGPT to answer everyday health and wellness questions, from nutrition to supplements to daily routines. The reality is that most general chat tools weren’t built or qualified to give health guidance.
ChatGPT Health is a step in the right direction because it treats health questions differently. It gives people clearer, more consistent information around everyday wellness, which is genuinely helpful as long as it’s not mistaken for medical care.
From a brand perspective, this can take a lot of pressure off support teams by handling the basics well. The downside is when people start treating general guidance like a diagnosis; that’s where things can go sideways.
ChatGPT Health has the potential to create more interactive, personalized experiences for people, even at scale. Brands just need to be thoughtful about how they use it, whether that’s for content, answering FAQs or offering basic wellness guidance, and make sure the boundaries are clear so consumers aren’t misled.
- Jon Cohen, CMO, Pure Daily Care and Aquasonic
ChatGPT Health gives consumers a more structured and responsible way to explore health questions than random Google searches or unregulated AI tools. That matters in areas like skin, hair and preventative care, where misinformation is common. It’s not a replacement for professionals, but it’s a safer place to start.
From a product marketing standpoint, this is less about immediate sales and more about understanding how people are learning about their health. As consumers increasingly use AI to ask questions about skin, hair and wellness, we want to learn from that behavior and use it to shape how we explain our products more clearly, more accurately and more in line with what people actually want to know. Over time, that leads to better-informed customers and stronger trust.
Right now, it’s a wait-and-see moment. How ChatGPT ultimately chooses to structure or monetize this will matter and so will how consumers respond to it. As brands, the important thing is to watch how people’s behavior changes, what they ask, what they trust and how they make decisions and adapt thoughtfully from there.
- Shlomi Madar, CEO, SpotitEarly
OpenAI’s entry into the health and wellness space has already been underway for some time, with many users leveraging ChatGPT across a wide range of applications, from personal nutrition and diet optimization to exploring symptoms and disease-related questions. The recent feature allowing users to upload personal health data marks an important milestone. It enables ChatGPT to operate in a far more personalized and potentially more accurate way.
The key question is not whether consumers will use it, because adoption at scale is inevitable, but rather how trustworthy and accurate LLM-based platforms will be in health-related use cases, and what impact they will ultimately have on human health and wellness. The most obvious concerns are data security and the risk of incorrect or misleading medical interpretations. These are issues that OpenAI will need to address directly and transparently.
For our cancer screening brand, this development is not a direct threat, but rather an opportunity. If tools like ChatGPT Health become widely adopted and sufficiently de-risked, they will increase demand for high quality, real-world health data and clinically validated diagnostic inputs. That creates potential for alignment between AI-driven consumer platforms and evidence-based screening technologies.
While caution is clearly warranted, this development also represents a new chapter for patients and consumers. It shifts more control and agency toward the individual, which, if implemented responsibly, will have meaningful consequences for the health care industry as a whole. However, its ultimate impact on patients and outcomes remains to be seen.
- Allie Egan, Founder and CEO, Veracity
Overall, tools like ChatGPT Health are a good thing when they help people better understand their options, risks and tradeoffs, and fold those into their own risk assessment to become better CEOs of their own health. The limitation, as with many AI and health-testing tools, is context.
Labs and data don’t mean much without understanding the human behind them, including lifestyle, stress, history and personal preferences. For example, on my Function Health tests, my MCH has been flagged twice, but I am a marathon runner, so having more hemoglobin in my red blood cells is a direct result of training, not a problem to be solved.
The other big issue I worry about is ChatGPT Health providing insights that have been established with weak or single-source data that is not personalized. Even today, you can get an AI assessment of your running form, but if you ask any orthopedic doctor who has been practicing for decades, they will confirm there is no perfect form because our bodies are built so differently.
We could be trying to chase improvements or perfection that may hurt versus help us in the end. If we take the insights from ChatGPT Health as experiments to conduct rather than gospel, it can be a useful tool.
Data privacy is often cited as the main downside, but (even as an older millennial) I realize my data is not truly secure. I do, however, want to restrict my data to organizations that I feel will use it to benefit the greater good. I don’t want tech companies benefiting off my data for advertising, but I do want to use my data to help other women uncover early signs of Hashimoto’s and hopefully reverse disease before symptoms even surface.
- Amir Karam, Founder, KaramMD Skincare
AI lowers the barrier to information. People will come into conversations more informed, but also more opinionated. That raises the standard for professionals and brands to be clearer, more evidence-based and more honest.
This represents a shift from brand-led storytelling to knowledge-led discovery. Consumers won’t just be marketed to; they’ll arrive already informed, asking deeper questions. That fundamentally changes how trust is built in health and wellness.
This should serve as an educational starting point, but not as a replacement for professional judgment. It’s useful for understanding concepts and asking better questions, but health decisions still require human expertise and context. My main concern is false confidence. Health is nuanced, and AI can unintentionally oversimplify complex issues. Without proper framing, people may mistake information for diagnosis or advice.
There is also a risk in mistaking access to information for true understanding. Health isn’t binary and context matters. Without guidance, AI can unintentionally flatten nuance, which is where real expertise still matters most.
This is a reckoning moment for the industry. AI will reward brands with real clinical grounding, consistency and long-term results because those patterns surface over time. Brands built on hype or trend-driven claims will be exposed, while those built on integrity and outcomes will become the default answers.
- Hallie McDonald, Co-Founder, Erly Skincare
As a dermatologist and co-founder of Erly, a science-driven skincare brand, I see ChatGPT Health as both an important opportunity and something that needs to be used thoughtfully. From an opportunity standpoint, this is a meaningful shift in how consumers engage with their health. Patients now have full access to their medical records thanks to the 21st Century Cures Act, but most people are not trained to interpret lab results, visit notes or care plans.
AI tools like ChatGPT Health can help bridge that gap by translating complex medical information into something more understandable, helping patients prepare better questions for their doctors and identify potential gaps in care. That is a significant improvement over isolated Google searches that often create unnecessary anxiety without context.
At a broader level, consumers are already using AI for health. The question is no longer whether people should use tools like this, but how we can help them do so safely, with appropriate expectations and guardrails. AI can be a helpful educational assistant, but it is not a replacement for medical care or clinical judgment.
My biggest concern is privacy. Health data shared with AI platforms is not protected by HIPAA, and unlike conversations with physicians, there is no legal privilege. That data could theoretically be accessed through legal processes.
This is particularly concerning at a time when access to reproductive health care and gender-affirming care is under threat at both the state and federal levels. Consumers need to understand that, while AI can be useful for education, it is not a protected clinical space, especially for highly sensitive health topics.
Looking ahead, I am encouraged by the rapid advancement of on-device AI. Running health-focused AI models directly on a patient’s phone or wearable, without sending data to the cloud, has the potential to address many of these privacy concerns. Within the next few years, it is realistic to imagine patients using fully private, local AI assistants to analyze their own medical records without subscriptions or data-sharing tradeoffs. That represents true democratization of health AI.
From a business perspective, this shift has real implications for health, wellness and beauty brands like Erly. Consumers are becoming more educated, more data literate and more skeptical. They will increasingly cross-check product claims, ingredients and recommendations through AI tools. Brands built on dermatologic science, transparency and conservative claims are well-positioned in this environment. Hype, exaggerated promises and vague “clean” language will not hold up.
Overall, I view ChatGPT Health as a powerful educational tool with enormous potential, but one that requires thoughtful use, clear consumer education and continued innovation around privacy. For brands and founders, the opportunity lies in earning trust, staying scientifically grounded and meeting a more informed consumer where they are.
- Ellen Marmur, Founder, MMSkincare
AI tools like ChatGPT and Gemini 3 are excellent resources for the health and wellness industry that consumers should use, but not rely upon entirely. AI may hallucinate or prioritize information promoted by paid marketing over PubMed research. Most AI platforms will say they cannot provide medical advice, but with beauty treatments, especially skincare, there is abundant and skewed advice.
The first time a new patient came to me and said they were recommended to me from AI, I was stunned. I am not a big social media personality and have not invested in a strategy for AI. Since then, I have a steady stream of smart, informed patients for procedures. It's especially nice because they come to Marmur Medical with trust and are prepared to have a consultation in person.
With AI, I haven't had the same concerns we had with TikTok and facetuning, when patients wanted to look more anime or like a celebrity. All in all, I see AI as a net good resource for physicians in private practice.
My prediction is that this space is already being commercialized, and paid ads will push information that is biased by money spent on the platform. For example, I recently searched Gemini 3 for "Four Days in Taipei," and instantly Google began pushing ads for luxury travel guides in Taipei. Most physicians will not have the budget to compete with big industry, so consumers will be fed biased information.
- Sachin M. Shridharani, Founder, Luxurgery
I think the use of AI in health care is 100% a double-edged scalpel, so to speak. Of course, it's great to have consolidated information at a very high level, but, at the same time, there are several elements of clinical scenarios, patient presentations, patient manifestations and the physical exam that play a tremendous role in helping guide patients on diagnosis and a treatment plan.
All too often already, people use, as I say, Dr. Google to go down a rabbit hole to diagnose and treat certain conditions. I think diagnosis and treatment require a certain element of critical thought by qualified physicians who take symptomology, timing and the very important physical exam into consideration for various health conditions.
The bread-and-butter cold might be able to be treated in a certain capacity and understood by AI with up-to-date resources on antibiotics or antiviral therapies. However, certain very challenging conditions often require a multitude of tests and imaging to give us a good diagnosis. So, a little information sometimes can be helpful, and a little too much information becomes dangerous, where people try to self-diagnose and self-treat.
This is already the case in therapeutics and even in aesthetics, where patients are now searching online to look for the best way to inject Botox, buying counterfeit toxins from the Internet to save money and injecting themselves with backyard Botox only to suffer catastrophic complications, immune responses and even death. So, across the board, there can be some tremendous problems and issues with using AI to try to solve, diagnose and treat.
I think where it will be helpful is to understand a clinical diagnosis more and realize what optionalities there are or advances that not necessarily every clinician is up to speed on, and especially because not everyone always has access to the best health care and the best doctors in the world. So, I think when used responsibly to understand a condition, to understand a diagnosis and seek out a qualified clinician and treatment plan, AI can be very helpful.
When we're using ChatGPT Health to self-diagnose, think about our own treatment plans and challenge things in a way that may not be the most reasonable without all the information, a physical exam and imaging, it can be dangerous.
- Clint Weiler, CEO, Milan Laser Hair Removal
ChatGPT Health represents a real shift in power toward the consumer, and I think that's a good thing. AI can help consumers get oriented, discover the right questions to ask their medical provider and do their research more quickly.
That said, I often describe AI as being like your neighbor. You can sometimes get good information if you ask the right questions, but it can also be wrong. That's why licensed medical professionals will always provide the most accurate guidance.
Should people use it? Absolutely, as long as they understand it's a starting point for informed decision-making, not a replacement for professional care.
For Milan Laser, this level of transparency is a competitive advantage. We find that customers who come to us from AI sources are more ready to buy because they've done the research that matters to them. As AI helps consumers look beyond marketing claims, the difference between outcome-driven care and session-based sales models becomes clearer.
AI doesn't replace trust. It helps you discover it. As consumers become more informed, brands grounded in medical integrity and real outcomes will stand out, while those with misaligned incentives will be exposed.
We're pleased that ChatGPT Health gives everyone access to AI technology in a safe and responsible way to research laser hair removal and other health services. Milan Laser welcomes this shift because our model has always aligned our mission with our clients' goal of becoming hair-free as efficiently as possible. Informed customers make better decisions, and we're prepared for the future that AI is accelerating.
- Lindsay Wynn, Founder, Momotaro Apothecary
This is pretty complicated. As somebody who encourages the democratization of health and wellness education, I think there can be wonderful aspects of AI question and answer modalities. People can ask questions in the comfort of their own homes about things that they might not be comfortable with in a professional setting.
Without deep diving into our entire flawed medical health care system, the fact that you can upload imaging and records from your doctor could be potentially beneficial for folks who need more time and have more questions beyond what they are given at a doctor's office.
While ChatGPT should never be a replacement for physicians and specialized medical practitioners, I believe we have to approach AI with the reality that people will be using it. We should better educate on how to use it rather than suggest people don't at all.
With that, I think it's extremely important that everybody knows AI, just like "Dr. Google," is incredibly flawed. AI cannot diagnose you, and no matter how specific your questions are, AI's answer will often include much of the medical bias and misinformation that the internet holds.
What also interests me is how, with ChatGPT, we now recognize the markers of its tone, voice and punctuation. How could something like that show up in medical advice from an AI model in a way that could be harmful?
But, of course, as someone who relied heavily on the internet and personal research to find my own pathway to healing and eventually to the start of my company, I would encourage folks, if they do take that route, to do additional research or bring questions from their conversations with ChatGPT Health back to their doctor. If and when used at all, like so many things on the internet, ChatGPT Health should be taken with a grain of salt, not as an absolute truth.
If I imagine the best possible version of this, knowing there are very real limitations, I hope ChatGPT Health can serve as a starting point, helping people put language to their concerns, ask better questions and feel inspired to seek the care, conversations and answers they ultimately deserve.
- Daniel Abell, Digital Technical Manager, Apothekary
Anthropic [recently] announced Claude for Healthcare, timing probably not coincidental. They did lean into HIPAA messaging, mentioned three times in the announcement. OpenAI seems to be ahead on everyday third-party integrations (Apple Health, MyFitnessPal, Weight Watchers, Peloton, etc.), where Anthropic has invested more in clinical integrations through what they're calling “Connectors” to streamline the patient-provider relationship and assist with claims/coverage/compliance. Anthropic integrations with Apple Health and Android Health were to be released in beta last week.
That tells me OpenAI wants to appeal to everyday users, where Anthropic wants primarily "white coat"/clinical adoption, with personal wearables a secondary concern. Either way, AI companies consider AI wellness inevitable.
Biggest concern, in my view, is privacy. Users would effectively be giving unfettered intimate access to Silicon Valley. Both OpenAI and Anthropic assure privacy, but does anyone actually believe them?
And even if data is kept private, do I as a consumer want my AI agent to know me better than I know myself? This could also have sobering implications for women living in states with strict reproductive laws. Would state DAs have legal authority to access OpenAI/Anthropic data?
Flip side, this means the tech infrastructure is now available for brands to easily present themselves as a solution to problems/deficiencies identified by your personal AI physician. This removes the guesswork, potentially reduces the effort I need to expend determining what I need and takes me directly to the fix.
There's also a question of how widely this will be adopted. According to OpenAI, "Over 230 million people globally ask health and wellness related questions on ChatGPT every week." But it doesn't necessarily follow that those people will want to integrate Apple Health into ChatGPT, whether because of privacy concerns or just because they like the more noncommittal nature of asking one-off questions.
- Nadya Okamoto, Co-Founder, August
Given how complicated and often unaffordable health care is in this country, I think there are many benefits to something like ChatGPT Health in democratizing access to basic health questions. I do think this will basically be a WebMD on steroids in the sense that I could see there being more manufactured paranoia as people have the tools to over/self-diagnose with various medical conditions.
As a consumer myself, I'm nervous about the privacy risks of sharing personal medical information. I think back to when Roe v. Wade was overturned and there was heightened fear about how menstrual cycle tracking could be used against you, and there are probably a lot of question marks about privacy here.
As a period advocate, I'm excited about how this might be a tool for people to learn more about their bodies. It's no secret that our sex education system in the U.S. is faulty and basically non-existent, and this seems like it could be a great tool for people to learn more about their bodies' basic functions if they are curious.
I'm curious to see if this becomes an opportunity for brand discovery for health brands. Given the stigma around periods, I imagine that people will be searching for recommendations about taking care of their periods.
If someone asks about the importance of using clean/organic tampons, and for product recommendations, will there be an opportunity to suggest August? On the flip side, there is also still so much misinformation about periods and period products, and we don't want this to magnify that.
I am curious about how this tool will use language around periods and menstrual health. Will it use gender-inclusive language to talk about menstruation? Will it perpetuate stigma around periods or encourage people to be proud of this natural bodily function?
- Robin Berzin, Founder, Parsley Health
Tools like ChatGPT Health reflect a real shift in consumer expectations. People want faster, more accessible health insight. Used well, these tools can support education, improve health literacy and help people ask more informed questions of their clinicians.
But information alone is not care. AI cannot replace clinical judgment, longitudinal data or individualized risk assessment, and any patient who “consults” ChatGPT and then talks to an expert, trained clinician can immediately see and feel the difference.
We can’t mistake confident language for accurate guidance, especially when it comes to nuanced decisions around hormones, medications, or chronic disease. AI can and does empower better conversations with clinicians, but does not replace them.
AI will absolutely reshape health care, but it will reward companies grounded in real clinical infrastructure, validated data and human oversight. For Parsley Health, the opportunity is in understanding and integrating AI responsibly to help patients interpret labs, track trends and ask better questions, while trained clinicians remain accountable for decisions and outcomes.
The future belongs to companies that combine the best of AI technology with evidence-based care, and Parsley will be one of those companies.
If you have a question you'd like Beauty Independent to ask founders, doctors, executives and investors, send it to [email protected].