
News Desk
WASHINGTON: A new study has cautioned against relying entirely on artificial intelligence tools like ChatGPT for medical guidance, highlighting potential risks when seeking health-related advice from AI platforms.
Researchers noted that over 230 million people worldwide consult ChatGPT each week with questions about symptoms, safe foods, and home remedies. While the AI often correctly identified clear medical emergencies, it underestimated the severity of more than half of the cases requiring immediate care.
The study examined 60 medical scenarios spanning 21 specialties, from minor ailments to severe emergencies. Responses were least reliable on sensitive issues such as self-harm, where the guidance was sometimes inconsistent or contradictory.
Co-authors stressed that AI can still be useful if used cautiously alongside professional medical consultation. Experts advised that anyone experiencing chest pain, severe allergic reactions, or rapidly worsening conditions should seek immediate medical attention rather than relying solely on AI.
The research also noted that AI language models are continuously updated, so their performance may improve over time, underscoring the need for ongoing monitoring of their medical advice.


