Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Acknowledgements
The authors thank Rohaid Ali, MD of Brown University and Ian Connolly, MD, MS of Massachusetts General Hospital for their contributions to this study’s design.
Funding
John Lin was awarded departmental funding from Brown University for expenses related to this study.
Author information
Contributions
All authors were responsible for conceptualization and research design; JCL, DNY, and SSK were involved in data acquisition and research execution; JCL, DNY, and OYT conducted the data analysis; all authors worked on data interpretation and manuscript preparation.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Lin, J.C., Younessi, D.N., Kurapati, S.S. et al. Comparison of GPT-3.5, GPT-4, and human user performance on a practice ophthalmology written examination. Eye 37, 3694–3695 (2023). https://doi.org/10.1038/s41433-023-02564-2
This article is cited by
- Recommendations for diabetic macular edema management by retina specialists and large language model-based artificial intelligence platforms. International Journal of Retina and Vitreous (2024)
- Recommendations for initial diabetic retinopathy screening of diabetic patients using large language model-based artificial intelligence in real-life case scenarios. International Journal of Retina and Vitreous (2024)
- Comment on: ‘Comparison of GPT-3.5, GPT-4, and human user performance on a practice ophthalmology written examination’ and ‘ChatGPT in ophthalmology: the dawn of a new era?’ Eye (2024)
- ChatGPT and the German specialist examination in ophthalmology: an evaluation. Die Ophthalmologie (2024)
- Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of the American Academy of Ophthalmology. Scientific Reports (2023)