The following is a summary of “Comparison of emergency medicine specialist, cardiologist, and chat-GPT in electrocardiography assessment,” published in the June 2024 issue of The American Journal of Emergency Medicine by Günay et al.
ChatGPT, developed by OpenAI, represents the forefront of artificial intelligence with its latest iteration, GPT-4. While research on GPT-4 spans various domains, including cardiovascular disease, studies evaluating its ability to diagnose conditions from electrocardiography (ECG) data are notably lacking. For the study, researchers assessed the diagnostic accuracy of GPT-4 in analyzing ECG data and compared its performance with that of emergency medicine specialists and cardiologists.
Approval for the study was obtained from the Clinical Research Ethics Committee of Hitit University Medical Faculty on August 21, 2023 (decision no: 2023–91). A total of 40 ECG cases were selected from the “150 ECG Cases” book, comprising 20 everyday and 20 more challenging ECG questions. The participant pool included 12 emergency medicine specialists and 12 cardiology specialists, and GPT-4 was evaluated in 12 separate sessions in which it answered the same questions. Responses from the three groups (emergency medicine specialists, cardiology specialists, and GPT-4) were then assessed independently.
On the everyday ECG questions, GPT-4 performed significantly better than both emergency medicine and cardiology specialists (P < 0.001 and P = 0.001, respectively). On the more challenging ECG questions, GPT-4 outperformed emergency medicine specialists (P < 0.001) and performed comparably to cardiology specialists (P = 0.190). Across all ECG questions, GPT-4 demonstrated superior accuracy compared with both emergency medicine specialists and cardiologists (P < 0.001 and P = 0.001, respectively).
The study highlighted that GPT-4 was more proficient than emergency medicine specialists in analyzing both everyday and challenging ECG questions. While it outperformed cardiologists on everyday questions, its performance converged with that of cardiologists as question difficulty increased. The findings suggest that GPT-4 holds promise as an effective tool for ECG interpretation, potentially offering valuable support to healthcare professionals in clinical settings.
Reference: sciencedirect.com/science/article/abs/pii/S073567572400127X
Source: Comparing Cardiologists, Emergency Medicine Specialists, and Chat-GPT in ECG Assessment