Since its introduction to the general public in November 2022, ChatGPT, an artificial intelligence system, has grown substantially in use, producing written stories, graphics, art and more from just a brief prompt by the user. But when it comes to scientific, peer-reviewed research, could the tool be useful?
“Right now, many journals don’t want people to use ChatGPT to write their articles, but a lot of people are still trying to use it,” said Melissa Kacena, PhD, vice chair of research and a professor of orthopaedic surgery at the Indiana University School of Medicine. “We wanted to study whether ChatGPT is able to write a scientific article and what are the different ways you could successfully use it.”
The researchers took three different topics (fractures and the nervous system, Alzheimer’s disease and bone health, and COVID-19 and bone health) and prompted the subscription version of ChatGPT ($20/month) to create scientific articles about them. They used three different approaches for the original draft of each article: all human, all ChatGPT, or a combination. The study is published as a compilation of 12 articles in a new, special edition of Current Osteoporosis Reports.
“The standard way of writing a review article is to do a literature search, write an outline, start writing, and then faculty members revise and edit the draft,” Kacena said. “We collected data about how much time this human method takes and how much time it takes for ChatGPT to write and then for faculty to edit the different articles.”
In the articles written only by ChatGPT, up to 70% of the references were incorrect. But when the researchers used an AI-assisted approach with more human involvement, they saw more plagiarism, especially when they gave the tool more references up front. Overall, using AI decreased the time spent writing an article but required more extensive fact checking.
Another concern is the writing style ChatGPT uses. Even though the tool was prompted to use a higher level of scientific writing, the words and phrases were not necessarily written at the level someone would expect from a researcher.
“It was repetitive writing, and even though it was structured the way you learn to write in school, it was scary to know there were possibly incorrect references or inaccurate information,” said Lilian Plotkin, PhD, professor of anatomy, cell biology and physiology at the IU School of Medicine and coauthor on five of the papers.
Jill Fehrenbacher, PhD, associate professor of pharmacology and toxicology at the school and coauthor on nine of the papers, said she believes that even though many scientific journals don’t want authors to use ChatGPT, many people still will, especially non-native English speakers.
“People may still write everything themselves, but then put it into ChatGPT to fix their grammar or help with their writing, so I think we need to look at how we can shepherd people into using it appropriately and even help them,” Fehrenbacher said. “We hope to provide a guide for the scientific community so that if people are going to use it, here are some tips and advice.”
“I think it’s here to stay, but we need to understand how we can use it in an appropriate manner that won’t compromise someone’s reputation or spread misinformation,” Kacena said.
Faculty and students from several departments and centers across the IU School of Medicine were involved, including orthopaedic surgery; anatomy, cell biology and physiology; pharmacology and toxicology; radiology and imaging sciences; anesthesia; the Stark Neuroscience Research Institute; the Indiana Center for Musculoskeletal Health; and the IU School of Dentistry. Authors are also affiliated with the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Eastern Virginia Medical School in Norfolk, Virginia, and Mount Holyoke College in South Hadley, Massachusetts.