
What is the rate of text generated by artificial intelligence over a year of publication in Orthopedics & Traumatology: Surgery & Research? Analysis of 425 articles before versus after the launch of ChatGPT in November 2022 - 29/11/23

DOI: 10.1016/j.otsr.2023.103694
Théophile Bisi a, b, Anthony Risser c, Philippe Clavert c, d, Henri Migaud a, b, Julien Dartus a, b
a Département universitaire de chirurgie orthopédique, université de Lille, CHU de Lille, 59000 Lille, France 
b Service de chirurgie orthopédique, centre hospitalier universitaire (CHU) de Lille, hôpital Roger-Salengro, place de Verdun, 59000 Lille, France 
c Service de chirurgie du membre supérieur, Hautepierre 2, CHRU Strasbourg, 1, avenue Molière, 67200 Strasbourg, France 
d Faculté de médecine, institut d’anatomie normale, 4, rue Kirschleger, 67085 Strasbourg, France 

Corresponding author.


Abstract

Background

The use of artificial intelligence (AI) is soaring, and the launch of ChatGPT in November 2022 has accelerated this trend. This "chatbot" can generate complete scientific articles, with a risk of plagiarism by mining existing data, or of outright fraud by fabricating studies with no real data at all. Tools exist to detect AI-generated text, but to our knowledge their use for screening publications in scientific journals has not been systematically assessed. We therefore conducted a retrospective study of articles published in Orthopaedics & Traumatology: Surgery & Research (OTSR): firstly, to screen for AI-generated content before and after the publicized launch of ChatGPT; secondly, to assess whether AI was more often used in some countries than in others to generate content; thirdly, to determine whether the plagiarism rate correlated with AI generation; and lastly, to determine whether elements other than text generation, notably the translation procedure, could raise suspicion of AI use.

Hypothesis

The rate of AI use increased after the publicized launch of ChatGPT v3.5 in November 2022.

Material and methods

In all, 425 articles published between February 2022 and September 2023 (221 before and 204 after November 1, 2022) underwent ZeroGPT assessment of the level of AI generation in the final English-language version (abstract and body of the article). Two scores were obtained: probability of AI generation, in six grades from Human to AI; and percentage of AI-generated text. Plagiarism was assessed at submission using the iThenticate application. Articles submitted in French were assessed in their English-language version as translated by a human translator, with comparison to automatic translation by Google Translate and DeepL.

Results

AI-generated text was detected mainly in abstracts, with a 10.1% rate of "AI" or "considerable AI" generation, compared to only 1.9% for the body of the article and 5.6% for the body plus abstract. Analysis before versus after November 2022 found an increase in AI generation in the body plus abstract, from 10.30±15.95% (range, 0–100%) to 15.64±19.8% (range, 0–99.93%) (p<0.04; non-significant for abstracts alone). AI scores differed between types of article: 14.9% for original articles versus 9.8% for reviews (p<0.01). The highest rates of probable AI generation were in articles from Japan, China, South America and English-speaking countries (p<0.0001). Plagiarism rates did not increase between the two study periods and were unrelated to AI rates. On the other hand, when articles were classified as "suspected" of AI generation (AI rate ≥20%) or "non-suspected" (AI rate <20%), the "similarity" score was higher in suspect articles: 25.7±13.23% (range, 10–69%) versus 16.28±10% (range, 0–79%) (p<0.001). In the body of the article, use of translation software was associated with higher AI rates than use of a human translator: 3.5±5% for human translators, versus 18±10% for Google Translate and 21.9±11% for DeepL (p<0.001).

Discussion

The present study revealed an increasing rate of AI use in articles published in OTSR. AI grades differed according to type of article and country of origin. Use of translation software increased the AI grade. In the long run, use of ChatGPT incurs a risk of plagiarism and scientific misconduct, and needs to be detected and signaled by a digital tag on any robot-generated text.

Level of evidence

III; case-control study.

The full text of this article is available in PDF.

Keywords: Artificial intelligence, Scientific misconduct, Plagiarism, Fraud, Intellectual property




© 2023 Published by Elsevier Masson SAS.

Vol 109 - N° 8

Article 103694 - December 2023
Previous article
  • Evaluation of the impact of large language learning models on articles submitted to Orthopaedics & Traumatology: Surgery & Research (OTSR): A significant increase in the use of artificial intelligence in 2023
  • Gaëlle Maroteau, Jae-Sung An, Jérome Murgier, Christophe Hulet, Matthieu Ollivier, Alexandre Ferreira
Next article
  • Detecting generative artificial intelligence in scientific articles: Evasion techniques and implications for scientific integrity
  • Guillaume-Anthony Odri, Diane Ji Yun Yoon


