
Artificial Intelligence: The Future for Eye Care

Dr. Amy Cohn | 2 April 2018
Artificial intelligence (AI) has received an enormous amount of publicity in the last year. From autonomous cars to robots in the work place, to face recognition apps on our phones, AI seems to play an increasingly prominent role in all aspects of our lives. As eye health care professionals, this is also becoming apparent. So, what does AI in ophthalmology mean and where are we in 2018?

Artificial intelligence is a term that refers to the ability of a machine to perform tasks usually ascribed only to a human. Machine learning is when a computer is able to take information delivered to it and learn from it – much like we as humans learn from performing repetitive tasks. This learning is independent of explicit human programming. Deep learning, in turn, is a subset of machine learning whereby a computer uses something known as an artificial neural network to complete a more complex brief. In the case of ophthalmology, this has centred initially on the interpretation of retinal images. An artificial neural network can be likened to a human brain – interconnected ‘neurons’ that allow for learning by experience.


It is worth briefly understanding how deep learning occurs. Artificial neural networks (ANNs) consist of anywhere from a few hundred to several million units, or processors, arranged in highly organised layers. Some are input units that receive information; others are output units that give a response to the information fed in.1 Connecting the two are hidden units – these form the bulk of the ANN. If an ANN is fully connected, every unit in one layer is connected to every unit in the adjacent layers, allowing information to flow between them. A unit can either excite or inhibit its neighbours, which determines the response to a particular stimulus – much like synapses in the human brain. During training or actual operation, information flows from the input units to the hidden units and then out via the output units, a process known as feedforward networking. If a unit receives enough excitatory input it will ‘fire’ and stimulate the next unit; if it receives a net inhibitory input, the adjacent unit will not be triggered. Learning then occurs via a mechanism known as backpropagation – the output units compare the derived result with the desired result.1

If there are any differences, the connections within the ANN are adjusted to reflect what should have occurred. This allows for constant, real-time evaluation as the outcome is compared with the intended result. The utility of deep learning lies in the hope that it will provide greater efficiency and reproducibility for the more ‘menial’ tasks that humans currently undertake.
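The feedforward and backpropagation cycle described above can be sketched in a few lines of code. The following is a minimal, illustrative example only – a tiny fully connected network learning a toy logic problem – and bears no resemblance in scale to the deep convolutional networks used for retinal image grading. The layer sizes, learning rate and task are all arbitrary choices for demonstration:

```python
import numpy as np

# Toy fully connected network: 2 input units -> 4 hidden units -> 1 output unit,
# trained by backpropagation on the XOR problem (illustrative only).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden connections
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    # Feedforward: information flows from the input units, through the
    # hidden units, and out via the output units.
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

for _ in range(20000):
    h, out = forward(X)
    # Backpropagation: compare the derived result with the desired result,
    # then adjust the connections to reduce the difference.
    delta_out = (out - y) * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ delta_out
    W1 -= 0.5 * X.T @ delta_hid

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
print(f"loss before training: {initial_loss:.3f}, after: {final_loss:.3f}")
```

After training, the error between the derived and desired results has fallen – the ‘adjusted connections’ of the paragraph above are simply the updated weight matrices.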


As we are all acutely aware, the incidence of diabetes is rapidly increasing worldwide. Current estimates suggest over one million Australians already have diabetes, and that number is set to double by 2025.2 Globally, diabetes affects approximately 382 million people, a figure projected to increase to 592 million by the year 2035.3 It is well known that retinopathy occurs in one in three Australian diabetics; worryingly, however, only half will be regularly screened for eye complications of their disease.4 Unfortunately in Australia – unlike the UK – there is no national screening service for diabetic retinopathy (DR). The hope is that AI can provide an avenue for automated screening of DR images.


This gap between service provision for DR screening and current worldwide standards was recognised by a Google employee during a trip to see family in India. This led to the development of an algorithm to screen fundus photographs for diabetic retinopathy and the eventual publication of the work in JAMA – it is well worth noting that while ophthalmologists were involved in the study, both the first and last authors on the paper were not doctors but Google employees!5 This highlights the importance of multiple disciplines working together to combat the diabetes problem.


When we as eye care providers look at a photograph of a diabetic fundus, we rely on feature recognition to determine the severity of disease. Instinctively we look for retinal haemorrhages, exudate or new vessels to classify the extent of pathology. By contrast, using the layered units described above, AI algorithms analyse every pixel of a photograph to determine what is ‘different’ in that image and therefore indicative of pathology – the greater the difference, potentially the more severe the disease. In the Google study, the algorithm was trained using over 128,000 retinal images from the Eye Picture Archive Communication System (EyePACS) telemedicine program and from three eye hospitals in India. Just under half of the images were non-mydriatic, and several different types of fundus camera were used – useful when considering the range of imaging modalities currently in use across different practices. All the images were initially graded between four and seven times by ophthalmologists or final year trainees. Finally, intra-grader reliability was assessed by randomly selecting 10 per cent of the images for regrading. Images were graded according to the International Clinical Diabetic Retinopathy scale – none, mild, moderate, severe or proliferative retinopathy. Exudates at the macula were used as a proxy for diabetic macula oedema.

Referable DR was classified as moderate DR or worse, or the presence of macula oedema. Once graded, the images were submitted to the Google AI algorithm in order to train it.
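The referral rule just described – moderate DR or worse, or macula oedema present – is a simple lookup over the grading scale. A small illustrative sketch (not the study's code; the function and grade strings are invented here for demonstration):

```python
# Grades of the International Clinical Diabetic Retinopathy scale, in order.
GRADES = ["none", "mild", "moderate", "severe", "proliferative"]

def is_referable(grade: str, macula_oedema: bool) -> bool:
    """Referable DR: moderate or worse, or macula oedema present."""
    if grade not in GRADES:
        raise ValueError(f"unknown grade: {grade!r}")
    return GRADES.index(grade) >= GRADES.index("moderate") or macula_oedema

print(is_referable("mild", False))    # False
print(is_referable("mild", True))     # True - oedema alone triggers referral
print(is_referable("severe", False))  # True
```

Note that macula oedema alone is sufficient for referral, regardless of the retinopathy grade.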



After the algorithm was trained, its reliability was tested against two new sets of images that had also been verified by ophthalmologists (EyePACS set: 9,963 images; Messidor database set: 1,748 images).

Current guidelines for DR screening programmes suggest a minimum of 80 per cent sensitivity and specificity. The algorithm was tested at two different operating points – one for high sensitivity and one for high specificity. Screening programs are usually set for high sensitivity – that is, the ability of a test to correctly identify those with a disease (the true positive rate). The Google algorithm achieved sensitivities of 97.5 per cent (EyePACS set) and 96.1 per cent (Messidor set), with specificities of 93.4 per cent (EyePACS set) and 93.9 per cent (Messidor set)5 – well above the minimum standard.
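Sensitivity and specificity follow directly from the four confusion-matrix counts. A minimal sketch – the counts below are invented for illustration and are not the study's data:

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: proportion of diseased eyes correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: proportion of healthy eyes correctly passed."""
    return tn / (tn + fp)

# Hypothetical screening run: 200 images with referable DR, 800 without.
tp, fn = 195, 5    # diseased eyes: flagged vs missed
tn, fp = 752, 48   # healthy eyes: passed vs falsely flagged

print(f"sensitivity: {sensitivity(tp, fn):.1%}")  # 97.5%
print(f"specificity: {specificity(tn, fp):.1%}")  # 94.0%
```

The trade-off between the two is set by the operating point: lowering the threshold for flagging disease raises sensitivity at the cost of specificity, which is why screening programs typically favour high sensitivity.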


Clearly, improved access to diabetic screening services is essential as we as health care providers look for ways to ensure earlier disease detection.6 The hope is that AI will form part of the solution. We know that currently approximately half of diabetic Australians are not screened for retinopathy. Although there is fear in some circles that the ‘machine’ will replace the clinician, in actual fact it is more likely to bring to our attention a greater number of patients requiring intervention – and hopefully at an earlier stage. By capturing those patients not currently screened, workload is likely to increase rather than decrease.7

However, there are challenges and limitations. Firstly, government needs to work with optometrists and ophthalmologists so that a national screening service can be implemented. The benefits of uniform screening using AI technology include potentially increased efficiency and reproducibility. Additionally, screening patients in remote parts of Australia (and indeed the world), where access to services is limited, may allow for earlier disease detection and prompt initiation of treatment, thus potentially reducing preventable blindness.

Secondly, the current technology for DR image interpretation does not incorporate optical coherence tomography (OCT) analysis – the data sets used hard exudates as a proxy for diabetic macula oedema (DME). Given the crucial role OCT plays in both the diagnosis and the monitoring of treatment of DME, this will clearly need to be incorporated into later platforms.


As illustrated by the Google study, technology companies are proceeding at pace to develop AI algorithms for all avenues of health care. DR is not the only area currently being explored for deep learning applications; similar work is being done for age related macular degeneration8 and glaucoma.9 Google DeepMind has partnered with Moorfields Eye Hospital in London to develop a protocol for AMD screening using OCT technology. It is up to optometrists and ophthalmologists to work with these organisations to determine best patient care and how these applications are used. We all need to be part of the conversation to advocate for our patients.

Dr. Amy Cohn is a Melbourne based ophthalmologist with a special interest in medical retina and cataract surgery. She trained at the Royal Victorian Eye and Ear Hospital before completing a cataract surgery senior registrar year at Southern Health. She then relocated to London, where she spent nearly three years as a Medical Retina Fellow at Moorfields Eye Hospital. Since her return she has taken up VMO posts at RVEEH and Southern Health. She is also a Senior Research Fellow at the Centre for Eye Research Australia. Dr. Cohn sees patients privately in East Melbourne, Footscray, Glen Waverley and Armadale. In August 2017 she was asked to appear on the ABC’s Lateline to comment on the emerging role of artificial intelligence in ophthalmology.

1. Woodford C. How neural networks work. Explain That Stuff, 2017. www.explainthatstuff.com/introduction-to-neural-networks.html
2. Magliano DJ, et al. Projecting the burden of diabetes in Australia: what is the size of the matter? Aust N Z J Public Health 2009; 540-543.
3. Forouhi N. Epidemiology of diabetes. Medicine 2014; 42(12): 698-702.
4. Foreman J, et al. Adherence to diabetic eye examination guidelines in Australia: the National Eye Health Survey. Med J Aust 2017; 402-406.
5. Gulshan V, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016; 316(22): 2402-2410.
6. Lu Y, et al. Disparities in diabetic retinopathy screening rates within minority populations: differences in reported screening rates among African American and Hispanic patients. Diabetes Care 2016; e31-32.
7. Karth PR. Is automated interpretation of DR images in our future? Retina Today, 3 March 2017. retinatoday.com/2017/03/is-automated-interpretation-of-dr-images-in-our-future
8. Burlina P, et al. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol 2017; 135(11): 1170-1176.
9. Kim SJ, et al. Development of machine learning models for diagnosis of glaucoma. PLoS One 2017. www.ncbi.nlm.nih.gov/pmc/articles/PMC5441603
