ChatGPT can't pass these medical exams - yet

ChatGPT has failed to pass the American College of Gastroenterology exams and is not capable of generating accurate medical information for patients, doctors have warned.

A study led by physicians at the Feinstein Institutes for Medical Research tested both variants of ChatGPT - one powered by OpenAI's older GPT-3.5 model and the other by the latest GPT-4 system. The academic team copied and pasted the multiple-choice questions from the 2021 and 2022 American College of Gastroenterology (ACG) Self-Assessment Tests into the bot, and analyzed the software's responses.

Interestingly, the less advanced GPT-3.5-based version answered 65.1 percent of the 455 questions correctly, while the more powerful GPT-4 scored 62.4 percent. Why that happened is hard to explain, as OpenAI is secretive about how it trains its models. Its spokespeople told us, at least, that both models were trained on data up to September 2021.

In any case, neither result was good enough to reach the 70 percent threshold to pass the exams.

Arvind Trindade, an associate professor at the Feinstein Institutes for Medical Research and senior author of the study, published in the American Journal of Gastroenterology, told The Register:

"Although the score is not far away from passing or obtaining a 70 percent, I would argue that for medical advice or medical education, the score should be over 95."

"I don't think a patient would be comfortable with a doctor that only knows 70 percent of his or her medical field. If we demand this high standard for our doctors, we should demand this high standard from medical chatbots," he added.

The American College of Gastroenterology trains physicians, and its tests are used as practice for official exams. To become a board-certified gastroenterologist, doctors need to pass the American Board of Internal Medicine Gastroenterology examination. That takes knowledge and study - not just gut feeling.

ChatGPT generates responses by predicting the next word in a given sentence. The model learns common patterns in its training data to figure out which word should come next, which makes it only partially reliable at recalling information. Although the technology has improved rapidly, it isn't perfect and is prone to hallucinating false facts - especially when quizzed on niche subjects that may be sparse in its training data.

"ChatGPT's basic function is to predict the next word in a string of text to produce an expected response based on available information, regardless of whether such a response is factually correct or not. It does not have any intrinsic understanding of a topic or issue," the paper explains.

Trindade told us it's possible that the gastroenterology-related information on the webpages used to train the software is inaccurate, and that better resources - such as medical journals and databases - should be used instead.

These resources, however, are not readily available and are often locked behind paywalls, so ChatGPT may not have been sufficiently exposed to that expert knowledge.

"The results are only applicable to ChatGPT - other chatbots need to be validated. The crux of the issue is where these chatbots are obtaining the information. In its current form ChatGPT should not be used for medical advice or medical education," Trindade concluded. ®
