Artificial intelligence (AI) is the newest and perhaps the most dynamic segment in eye care at the moment. AI leverages a computer science approach known as machine learning to allow computers to teach themselves how best to perform a specific task. Machine learning also allows computers to learn to see (an ability known as computer vision). This enables computers to perform dynamic visual tasks, such as driving a car through a busy city, as well as narrowly defined tasks, such as recognizing faces or identifying lesions on retinal images. The level of activity and investment in AI has skyrocketed in recent years as the field saw rapid advancements in performance resulting from key breakthroughs in computer hardware, the availability of large amounts of digital data, and novel machine learning approaches.

“While many people are asking for the killer app of computer vision, I would say that computer vision is the killer app of AI.” Dr. Fei-Fei Li, VP at Google Cloud

This phenomenon has fueled an enthusiastic push to apply AI to virtually every industry and has turned the term “artificial intelligence” into the ultimate buzzword. While attending the HIMSS medical device and healthcare IT conference in Las Vegas earlier this year, I found myself hard-pressed to find a vendor that didn’t advertise using AI to enhance its technology offerings. My goal in attending the conference was to learn about the technology companies that intend to enter the eye care space. I came away with a deeper understanding of the eye care AI ecosystem, which includes a mix of established tech companies, startups, and universities.

Big Tech

Technology companies believe that AI has the power to revolutionize virtually every industry through a combination of task automation and optimization. Most of the established technology companies are working to make AI a core competency and are investing in it accordingly.

“We’ve been working hard, continuing our shift from a mobile-first to an AI-first world. We are re-thinking all our core products and working hard to solve user problems by applying machine learning and AI.” Sundar Pichai, Google CEO

It only makes sense, then, that these companies are beginning to bring AI products to the healthcare space. Healthcare is seen as an ideal environment for AI because it contains large amounts of data (in the form of EMRs) that can be used to train algorithms and has many narrow, high-value tasks (such as medical image analysis) that lend themselves well to automation and optimization via AI.


“Now we’re in the healthcare space and in healthcare, it’s wide open. The scale of impact for healthcare [is so large that it] is hard to describe.” Eric Schmidt, Chairman of Alphabet

Photo courtesy of Seeking Alpha

Google is currently pursuing several AI projects that focus on the eye care space. These projects are split among three organizations (DeepMind, Google Brain, and Verily) under Google’s parent company, Alphabet.

DeepMind

DeepMind is a UK-based AI research company recognized as having one of the world’s elite teams of AI researchers. The company was acquired by Google in 2014 for a sum estimated to be in excess of $660 million, besting Facebook, which had launched a similar effort to acquire the company. (1) In 2015, the company’s “DeepMind AlphaGo” algorithm made history by defeating the three-time reigning European champion of the Chinese board game Go, winning all five games that were played. In 2016, the algorithm defeated Lee Sedol, an 18-time world Go champion widely considered to be the best Go player of the past decade. AlphaGo beat Sedol in four out of five games, a feat that was witnessed by 200 million viewers.

Go Master Ke Jie defeated by AlphaGo. Photo courtesy of Breaking Defense

These were landmark events in artificial intelligence because Go is 10^100 times more complex than chess, and it’s estimated that there are more possible moves in the game (10^360) than there are atoms in the universe (thought to number 10^78 to 10^82). (2,3) AI experts had expected that such a feat would not be accomplished for another ten years. DeepMind achieved it by combining deep neural networks and reinforcement learning techniques, enabling AlphaGo to teach itself by studying thousands of games played by professional human players rather than following rules encoded by its programmers. In 2017, DeepMind launched AlphaGo Zero, an algorithm that improved on AlphaGo’s performance without using any data from previously played human games; instead, it started from scratch and learned the game solely by playing against itself.
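
The core idea behind this kind of self-play learning can be sketched at a vastly reduced scale. The toy below is illustrative only and has no relation to DeepMind's actual systems: it learns the simple take-away game Nim purely from the outcomes of games it plays against itself, propagating each finished game's result back through the moves that produced it, with the sign flipped at every ply.

```python
import random

def train_self_play(episodes=30000, alpha=0.1, eps=0.2, start_pile=10, seed=0):
    """Self-play on 1-pile Nim: players alternately take 1-3 stones;
    whoever takes the last stone wins. Both 'players' share one value
    table, updated from each game's final outcome."""
    random.seed(seed)
    Q = {}  # (pile, action) -> estimated value for the player to move
    for _ in range(episodes):
        pile, history = start_pile, []
        while pile > 0:
            actions = [a for a in (1, 2, 3) if a <= pile]
            if random.random() < eps:   # explore a random move
                a = random.choice(actions)
            else:                       # exploit current estimates
                a = max(actions, key=lambda x: Q.get((pile, x), 0.0))
            history.append((pile, a))
            pile -= a
        reward = 1.0  # the player who made the final move won
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward  # switch perspective at every ply
    return Q

Q = train_self_play()
# Optimal Nim play leaves the opponent a multiple of 4 stones,
# so from a pile of 5 the learned policy should take 1.
best = max((1, 2, 3), key=lambda a: Q.get((5, a), 0.0))
```

After enough games, the greedy policy recovers the well-known optimal strategy of always leaving the opponent a multiple of four stones, despite never being told the rules of good play.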

Moorfields Eye Hospital. Photo courtesy of DeepMind Technologies Ltd.

DeepMind placed a focus on the healthcare space by creating a division known as DeepMind Health, which aims to apply the insights gained from projects like AlphaGo and AlphaGo Zero to the healthcare system. To that end, the company partnered with the UK’s Moorfields Eye Hospital to conduct a retrospective, non-interventional exploratory study on the use of AI to diagnose diabetic retinopathy and AMD from fundus photographs and OCT scans. The study incorporated 14,884 OCT scans from 7,621 patients who had previously been seen at Moorfields after being referred for vision symptoms suggestive of macular pathology.

The results of the study were published in August of this year in Nature Medicine. As part of the study, the algorithm was tasked with analyzing OCT scans of patients and accordingly classifying cases as urgent, semi-urgent, routine or observation. Its performance was then measured against a gold standard which consisted of the real-world diagnoses and follow-up prescribed when the patient was initially seen at the hospital. The algorithm’s classifications were also compared to those issued by a team of eight eye care providers (four retina ophthalmologists and four optometrists) who were recruited for the study.

The algorithm demonstrated an excellent ability to correctly identify urgent referrals (error rate of 5.5%). When referral decisions were made on the basis of OCT data alone, it performed on par with the two best-performing clinicians (error rates of 6.7% and 6.8%) and significantly exceeded the performance of the remaining six. (4) The clinicians’ performance improved once they were given access to additional data, including fundus images and patient encounter notes, at which point the algorithm performed on par with the five best-performing clinicians and outperformed the remaining three. (4)

A weighted scale was created to clarify the impact of classification errors by more severely penalizing instances where high-risk cases were underdiagnosed. The penalty score for misdiagnosis was found to be lower for the algorithm than that of all expert clinicians. (4) DeepMind’s algorithm is unique because its capabilities are not limited to recognizing only one retinal disorder as is the case with most algorithms developed by other groups. In fact, the algorithm was able to recognize and properly triage cases with structural anomalies associated with up to 50 different diseases found in the macular region. The next steps for this project will involve clinical trials to test the efficacy of this AI system as well as pursuit of regulatory approval to clear the system for commercial release.
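
The exact penalty weights used in the study are not reproduced here, but the mechanics of such a weighted scale can be sketched with hypothetical numbers: a matrix assigns a cost to each (true class, predicted class) pair, with under-triaged urgent cases penalized most heavily.

```python
# Illustrative only: these weights are made up, not the study's actual values.
# penalty[true][predicted]; zero on the diagonal, large penalties for
# under-calling urgency, small penalties for over-calling it.
PENALTY = {
    "urgent":      {"urgent": 0, "semi-urgent": 4, "routine": 8, "observation": 16},
    "semi-urgent": {"urgent": 1, "semi-urgent": 0, "routine": 4, "observation": 8},
    "routine":     {"urgent": 1, "semi-urgent": 1, "routine": 0, "observation": 2},
    "observation": {"urgent": 1, "semi-urgent": 1, "routine": 1, "observation": 0},
}

def penalty_score(true_labels, predicted_labels):
    """Total weighted penalty over a set of referral decisions."""
    return sum(PENALTY[t][p] for t, p in zip(true_labels, predicted_labels))

# Missing one urgent case costs more than over-calling two routine cases:
miss = penalty_score(["urgent"], ["routine"])
overcall = penalty_score(["routine", "routine"], ["urgent", "urgent"])
```

Under a scale like this, a grader (human or algorithm) is rewarded for erring on the side of caution, which is exactly the behavior one wants in a triage setting.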

Google Brain

In a study published in JAMA in 2016, the Google Brain team showed that, with slight modifications to Google’s Inception V3 algorithm (the same algorithm that powers Google’s image search engine), they could autonomously recognize and grade diabetic retinopathy in fundus photos with accuracy comparable to that of human doctors. (5) The team later named this algorithm ARDA (short for Automated Retinal Disease Assessment) and continued validating its performance in additional clinical trials at Sankara Nethralaya Hospital and Aravind Eye Hospital in India. Google stated that the results of the Aravind Eye Hospital trial were similar to those of the 2016 JAMA study, and a source from Sankara Nethralaya reports that the hospital’s trial showed over 90% sensitivity for the detection of diabetic retinopathy, although actual figures have not been published to date. (6) Google is currently working with the FDA in pursuit of regulatory approval to make this AI system available in the United States. (7)


Dr. Lily Peng, MD, PhD, Product Manager at Google Brain. Video courtesy of Google.

Recently, the Google Brain team published the results of a study showing that they were able to train an algorithm to estimate refractive error from fundus images. The algorithm was validated on two datasets, estimating refractive error to within 0.56 diopters on the first dataset and within 0.91 diopters on the second. (8) Analysis of the algorithm’s attention maps showed that the foveal region was a large contributor to the overall prediction, though data from other regions played a role as well.

Additional impressive findings came from a Google Brain study titled “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning” (Nature 2018), in which researchers reported creating a machine learning algorithm able to identify five risk factors for cardiovascular disease from a retinal image. (9) These risk factors include age, gender, ethnicity, systolic blood pressure, and whether the patient is a smoker.

The accuracy of identifying each risk factor varied, but as a whole the algorithm was able to predict the risk of a major adverse cardiovascular event occurring within a five-year period 70% of the time, a performance similar to that of the composite European Systemic Coronary Risk Evaluation (SCORE) calculator, which correctly predicts the occurrence of such an event 72% of the time (area under the curve (AUC) of 0.70 (95% CI: 0.648 to 0.740) vs. AUC of 0.72 (95% CI: 0.67 to 0.76), respectively). This is noteworthy because Google’s algorithm attained this level of performance from a single data point, a fundus photograph, whereas the SCORE risk calculator requires gathering five data points (age, sex, smoking status, total cholesterol, and systolic blood pressure).
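
The AUC figure reported above has a concrete interpretation: it is the probability that a randomly chosen patient who went on to have an event is assigned a higher risk score than a randomly chosen patient who did not. A minimal sketch, using made-up scores:

```python
def auc(labels, scores):
    """Probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count half) -- the same
    quantity reported as AUC."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six patients (1 = had a cardiac event):
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
result = auc(labels, scores)
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, so the reported 0.70 vs. 0.72 means the two approaches rank patients' risk almost equally well.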

The team went further, developing algorithms that can determine many of these risk factors from a fundus image alone. The algorithm could correctly identify whether a fundus image belonged to a smoker or a non-smoker 71% of the time and could estimate systolic blood pressure to within 11 mmHg on average (for systolic blood pressures below 150 mmHg). The algorithms were also able to predict the patient’s age to within +/- 5 years 78% of the time. In the future, the Google Brain team will investigate the effects of interventions such as lifestyle changes or medications on risk predictions.

Verily

Formerly known as Google Life Sciences, Verily is a science research and engineering company that was spun off from Google in 2015 and now operates as a subsidiary of Alphabet. The company focuses exclusively on developing technology solutions for healthcare. In 2016, Verily established a strategic alliance with Nikon and its subsidiary, Optos, to develop AI-enabled devices capable of autonomously diagnosing diabetic retinopathy and diabetic macular edema. In a press release, Nikon stated the following about the partnership:

“Nikon (including its subsidiary Optos) and Verily will co-develop solutions for the earlier detection of diabetic retinopathy and diabetic macular edema. The partnership will combine Nikon’s leadership in optical engineering and precision manufacturing, its proprietary ultra-widefield technology, and strong commercial presence among eyecare specialists, and Verily’s machine learning technology.” (10)

Photo courtesy of EyePACS

The devices are designed to be deployed in the clinics of primary care physicians who manage diabetes and will screen patients for diabetic eye disease which, if found, would trigger a referral to an eye care provider. The first of these devices is expected to be a hardware platform from Optos paired with the ARDA algorithm developed by the Google Brain team and tested in clinical trials in India.

Microsoft

Microsoft’s efforts in this field have culminated in the formation of an international consortium called the Microsoft Intelligence Network for Eyecare (MINE). The members of the consortium include India’s L V Prasad Eye Institute (LVPEI), the University of Miami’s Bascom Palmer Eye Institute, the University of Rochester’s Flaum Eye Institute and Brazil’s Federal University of Sao Paulo. The initiative’s goals include developing AI tools that can predict the progression of refractive errors, provide predictive outcomes of refractive surgery, determine optimal refractive surgery parameters and automate detection of diabetic eye disease.

Photo courtesy of L V Prasad Eye Institute

Through this collaboration, the consortium was able to generate a uniquely large bank of data by aggregating datasets from its member institutions. This not only increased the amount of data available for training algorithms but also added geographic, socioeconomic, and genetic diversity, which improved the overall quality of the aggregate dataset. The group also benefits from Microsoft’s sophisticated AI and cloud computing infrastructure and expertise.

Among the most notable achievements of the consortium thus far is its development of a machine learning algorithm to predict the progression of refractive error in children and young adults over a two-year period. The approach is unique in combining anonymized medical records and therapy data to train machine learning models. The algorithm is currently integrated into EMR systems at 174 LVPEI centers in India, and further validation of its performance is ongoing.

IBM

IBM stands out among the large tech companies as the most well established in the healthcare field, with technology solutions for eye care, oncology, genomics, drug discovery, and patient care management. Like all the tech companies, IBM is looking for ways to leverage AI in these areas. Its AI strategy in eye care is a two-pronged approach: partnering with startups that have developed market-ready solutions while exploring and developing its own in-house solutions.

IBM has entered a strategic partnership with IDx, a startup covered in detail later in this article, to distribute IDx products in 31 European countries where the company’s technology has been granted regulatory approval. Following the FDA clearance of an IDx autonomous diagnostic system for use in the United States, IBM is working to implement a distribution model there as well, with plans to expand to Mexico and Canada once regulatory approval is granted in those countries.

AI optic nerve segmentation (yellow) vs. manual segmentation performed by clinicians (red). (12)

The company has also developed an algorithm that can autonomously diagnose diabetic retinopathy through a novel technique combining two machine learning methods: convolutional neural networks and dictionary-based learning. The algorithm can screen an image in 20 seconds and can classify diabetic retinopathy, if present, across five severity levels with 86% accuracy. (11) As part of an initiative to develop clinical tools for the diagnosis of glaucoma, the company created an algorithm to autonomously grade optic nerve cupping from fundus photographs. Analysis of the algorithm’s performance showed that the cup and disc boundaries it drew were a 95% match to those drawn by human experts. (12)
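
The cited paper's exact agreement metric is not detailed here, but a common way to quantify how closely two segmentations match is the Dice coefficient, sketched below on toy pixel masks (an assumption for illustration, not IBM's published method):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as sets of
    (row, col) pixel coordinates: 2*|A & B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy example: the algorithm's cup outline vs. a clinician's outline.
algo = {(r, c) for r in range(10) for c in range(10)}       # 100 pixels
human = {(r, c) for r in range(1, 10) for c in range(10)}   # 90 pixels
overlap = dice(algo, human)
```

A score of 1.0 means pixel-perfect agreement; here the two masks differ by one row, giving roughly 0.95.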

Intel

As a hardware company, Intel’s primary strategy has been to grow demand for the hardware products used to power AI applications. To this end, the company has been open to collaborating with startups interested in entering the field of clinical AI, hoping to encourage innovations that will grow the segment. Most notably, the company partnered with China’s Aier Eye Hospital and MedImaging Integrated Solutions (MiiS) to improve the performance of two artificial intelligence algorithms that screen for diabetic retinopathy and AMD in images acquired with MiiS’ handheld Horus fundus camera. Intel helped the group deploy a version of the Caffe deep learning framework optimized for Intel architecture, which allowed the AI algorithms to run on Intel’s cloud-based Xeon Bronze processors. This sped up model loading by a factor of 103, resulting in a seven-fold increase in the speed at which the algorithm could generate a diagnosis.

The MiiS Horus fundus camera. Photo courtesy of MiiS.

The Chinese national government has joined the effort to scale availability of the system across the country where it will be utilized by community health workers and general practitioners to supplement the efforts of the country’s overburdened eye care system. Aier Eye Hospital is planning to deploy this solution to 200 branch clinics in 2018 with a target of reaching 30,000 primary care clinics in China and screening 30 million Chinese patients for diabetic retinopathy. (13) The Chinese central government has set a goal of making the MiiS and Aier eye health screening system available to all primary care clinics in the country by 2020 and making it a standard component of those clinics. (13)

The Startups

IDx

IDx is the startup that made history by receiving the first-ever FDA approval for an autonomous disease diagnosis system in April of this year. The approval was issued for an algorithm called IDx-DR, which works with the Topcon TRC-NW400 retinal camera to detect lesions specific to diabetic retinopathy in fundus images. The system is indicated for use as a diabetic retinopathy screening tool for adults (22 years of age or older) who have previously been diagnosed with diabetes but have not previously been diagnosed with diabetic retinopathy. The algorithm demonstrated 87% sensitivity and 90% specificity at detecting more than mild diabetic retinopathy in fundus images in a 2017 U.S. clinical trial involving 10 sites and 900 subjects with diabetes. (14) The system was designed to be operated by technicians in primary care settings in order to facilitate referral of patients with more than mild diabetic retinopathy to an eye care professional.
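
Sensitivity and specificity, the two figures reported for the trial, fall directly out of a confusion matrix. A small sketch with made-up screening results (not the trial data):

```python
def sensitivity_specificity(truth, predicted):
    """truth/predicted are 0/1 lists (1 = more-than-mild retinopathy).
    Sensitivity = TP/(TP+FN): of the diseased, how many were caught.
    Specificity = TN/(TN+FP): of the healthy, how many were cleared."""
    pairs = list(zip(truth, predicted))
    tp = sum(t == 1 and p == 1 for t, p in pairs)
    fn = sum(t == 1 and p == 0 for t, p in pairs)
    tn = sum(t == 0 and p == 0 for t, p in pairs)
    fp = sum(t == 0 and p == 1 for t, p in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening of 10 diseased and 10 healthy subjects:
truth     = [1] * 10 + [0] * 10
predicted = [1] * 9 + [0] + [0] * 9 + [1]  # one miss, one false alarm
sens, spec = sensitivity_specificity(truth, predicted)
```

For a screening tool deployed in primary care, sensitivity is usually the headline number, since a missed case means a diseased patient is never referred.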

IDx team members. Photo courtesy of IDx Technologies, Inc.

The company is currently developing new diagnostic systems aimed at detecting additional eye diseases. These include IDx-AMD, a solution for automated diagnosis of AMD from retinal images, and IDx-G, a system that utilizes a set of algorithms to detect and track a range of glaucoma indicators. The company’s efforts incorporate a mix of currently recognized glaucoma indicators found in OCT images as well as novel indicators that the company is working to establish using new AI-based clinical tools. One such tool under exploration can create implied visual field measurements from OCT data. IDx plans to go beyond diagnostics for eye disease and is currently developing algorithms that can utilize retinal scans to derive diagnostic markers for Alzheimer’s disease, cardiovascular disease, and stroke risk. (15)

The story behind IDx is as fascinating as the technology the company is developing. The company’s founder and president is Dr. Michael Abramoff, an ophthalmologist and professor of electrical engineering, computer engineering, and biomedical engineering at the University of Iowa. He has worked for over 20 years as a clinical and computer science researcher to develop AI-based clinical tools for eye doctors. You can learn more about Dr. Abramoff’s journey and why he feels AI will make a positive impact in eye care by listening to the interview that I conducted with him earlier this year on the VisionTECH Podcast.

Advanced Ophthalmic Systems

Advanced Ophthalmic Systems (AOS) is the UK-based maker of the anterior segment grading software “AOS Anterior.” This is an FDA-approved system that can autonomously perform anterior segment grading, providing objective, quantitative grading of bulbar redness, lid redness, and fluorescein staining.

A comparison of SPK seen under cobalt blue light (left) vs. the AOS Digital Wratten Filter feature (middle) vs. the AOS automated SPK grading feature(right). (16)

The software also provides additional digital tools that the company calls “Vessel Enhancement,” “Redness Map,” and “Digital Wratten Filter,” which are assistive in nature and complement the autonomous features mentioned previously. (16) The vessel enhancement tool selectively enhances the visibility of conjunctival blood vessels, which can be used to better identify vessel compression when fitting scleral lenses. The “Digital Wratten Filter” tool reproduces the effect of a physical Wratten filter on images taken under cobalt blue illumination without one, or even on images taken under standard illumination.

Image enhancement and autonomous ocular surface grading with the AOS Redness Map (top left and bottom left), Vessel Enhancement (top right), and Digital Wratten Filter (bottom right) features. (16)

The system also features a “Digital Ruler,” which can be used to measure various features for specialty lens fitting, such as horizontal visible iris diameter (HVID), amount of edge lift, and scleral lens vault. The software does this by taking an image of the lens on the eye and using the lens’ known parameters, such as central thickness and diameter, as a reference point. (16)
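
The underlying calibration is straightforward: a feature of known physical size in the image fixes the millimetres-per-pixel scale, which then converts any other pixel measurement. A sketch with hypothetical numbers (not AOS's actual implementation):

```python
def calibrate_and_measure(known_mm, known_px, target_px):
    """Use a feature of known physical size (e.g., the scleral lens
    diameter) visible in the image to convert another pixel
    measurement into millimetres."""
    mm_per_px = known_mm / known_px
    return target_px * mm_per_px

# Hypothetical values: a 16.5 mm lens spans 660 px in the photo,
# and the visible iris spans 472 px.
hvid_mm = calibrate_and_measure(16.5, 660, 472)
```

Because the reference object sits in the same image plane as the features being measured, no camera-specific calibration is needed, which is consistent with the software's camera-agnostic design.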

Comparison of scleral lens vault via OCT (left) vs the AOS digital ruler feature. (16)

The main benefits of such a system include increased objectivity and consistency when grading anterior segment findings, which aids in monitoring patients over time and reduces doctor-to-doctor variability. The software also offers flexibility in image acquisition, as it does not require a particular camera and can even analyze images taken with smartphones. AOS has partnered with Keeler Instruments, Inc. to distribute the software in the US and Europe. The company is also working on bringing to market additional algorithms that can perform various posterior segment analyses, including the ability to autonomously measure retinal perfusion.

Vmax Vision

Vmax Vision is a company that has been in the eye care industry for a number of years and is best known for its electronic refractors, which incorporate wavefront correction and point spread function (PSF) refraction. Recently, the company incorporated artificial intelligence into its flagship refractor, the Voice Assisted Subjective Refractor (VASR™), enabling it to communicate with patients and guide them through the refraction. This functionality effectively makes refraction an automated, patient-driven process.

A refraction using the VASR™ refraction system which is being operated by a technician. Photo courtesy of Vmax Vision.

The process can be supervised by a technician with minimal training and has built-in checks that discard inconsistent results and can alert the doctor if it suspects that the findings are not sufficiently accurate. In a recent study involving 50 patients at the Southern College of Optometry (SCO), the performance of the VASR™, operated by an optometry student, was compared to findings obtained with a traditional phoropter operated by SCO faculty. The VASR™ system produced equal or better visual acuity compared to the manual phoropter in 97% of subjects. (17)

Visulytix

Visulytix is a startup from the United Kingdom that is developing clinical decision support algorithms that analyze fundus and OCT images and offer insights to the managing physician. Its Pegasus platform consists of several clinical tools that can detect signs of glaucoma, diabetic retinopathy, and wet or dry AMD from fundus photos. At the recent ARVO 2018 meeting, the company presented the results of a study comparing the ability of its Pegasus-disc tool to the consensus opinion of two specialists in detecting glaucoma from optic disc images. The study was conducted at a Harvard University teaching hospital and involved 186 patients; it is now being extended to include 400 subjects. (18)

Pegasus system highlighting retinal lesions on fundus photos. Photo courtesy of Visulytix.

Pegasus aims to increase clinician productivity by prioritizing OCT slices based on the highest confidence of pathology and by highlighting exudates, microaneurysms, and hemorrhages in fundus photos. The system will also have the ability to aggregate a series of OCT images to assess the 3D volume of retinal structures, with the goal of detecting signs of diseases such as AMD and DME. This approach is considered assistive AI because it is designed to work in conjunction with a physician to augment and enhance their decision making. This is unlike autonomous AI, which aims to replace the physician altogether with an algorithm that can make clinical decisions on its own.
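
The prioritization step itself is conceptually simple: score each scan slice for suspected pathology, then sort the review queue so the highest-confidence findings are seen first. A sketch with made-up names and scores (not Visulytix's actual pipeline):

```python
# Hypothetical per-slice pathology confidences from a classifier.
slice_scores = {
    "slice_01": 0.12,
    "slice_07": 0.91,
    "slice_13": 0.48,
    "slice_22": 0.05,
}

# Review queue: slices most likely to contain pathology come first,
# so the clinician's attention goes where it is most needed.
review_queue = sorted(slice_scores, key=slice_scores.get, reverse=True)
```

This is the essence of the assistive model: the algorithm re-orders the clinician's work rather than making the referral decision itself.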


The AI race in eye care will only intensify in the coming months and years as technology companies big and small seek to leverage the unique capabilities of AI to modernize the delivery of eye care. The artificial intelligence tools discussed in this article show that not only is AI here today, it is likely coming to every part of the eye exam in the near future. The conversation now shifts from whether AI has viable applications in eye care to how quickly, and in what ways, AI will be integrated into our field. We have an opportunity to become more productive, provide a higher quality of care, and increase access to care by implementing various AI tools, but to do so we need to make sure that these solutions are developed with input from all the stakeholders in eye care.

This means that we need to have optometrists, ophthalmologists, computer scientists, healthcare administrators, managed care organizations, public health professionals, and regulators at the table. We should all offer our respective expertise to help the companies developing these tools make informed decisions, and we need to partner with them to create an outcome that is in the best interest of our patients. We should also enlist the help of our computer and data science colleagues to expand clinicians’ understanding of machine learning so that eye care professionals can participate at every level of this movement.

Our professional organizations and teaching institutions have an important role to play as well. Professional organizations can help by facilitating profession-wide discussions around AI, creating interdisciplinary forums where the AI industry can learn from clinicians and vice versa, providing education about AI-based clinical tools at professional conferences, creating practice guidelines that define the standard of care for the use of AI tools, and engaging with regulators to offer input on the regulatory oversight measures that should be applied to this technology. Our teaching institutions will need to take on the important task of preparing tomorrow’s clinicians to be knowledgeable about the capabilities of AI tools and to be proficient in their use. They will also have an opportunity to contribute on the research front by helping validate the performance of newly introduced AI systems as well as investigating novel applications of AI in eye care.


  1. What DeepMind brings to Alphabet. The Economist. Published online December 17, 2016.
  2. Villanueva, John Carl. How Many Atoms Are There in the Universe? Universe Today. July 30, 2009.
  3. Koch, Christof. How the Computer Beat the Go Master. Scientific American. March 19, 2016.
  4. De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, Askham H, Glorot X, O’Donoghue B, Visentin D, van den Driessche G, Lakshminarayanan B, Meyer C, Mackinder F, Bouton S, Ayoub K, Chopra R, King D, Karthikesalingam A, Hughes CO, Raine R, Hughes J, Sim DA, Egan C, Tufail A, Montgomery H, Hassabis D, Rees G, Back T, Khaw PT, Suleyman M, Cornebise J, Keane PA, Ronneberger O. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018 Sep;24(9):1342-1350. doi: 10.1038/s41591-018-0107-6. Epub 2018 Aug 13.
  5. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R, Raman R, Nelson PC, Mega JL, Webster DR. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. 2016;316(22):2402–2410. doi:10.1001/jama.2016.17216
  6. Lakshman, Sriram. Diabetes induced blindness: AI detection shows clinical promise. The Hindu. Published online January 6, 2018.
  7. Peng, Lily. Doctors Working for Google. YouTube. Published online May 6, 2018.
  8. Varadarajan AV, Poplin R, Blumer K, Angermueller C, Ledsam J, Chopra R, Keane PA, Corrado GS, Peng L, Webster DR. Deep Learning for Predicting Refractive Error From Retinal Fundus Images. Invest Ophthalmol Vis Sci. 2018 Jun 1.
  9. Poplin, R., A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2:158–164, 2018.
  10. Nikon and Verily Establish Strategic Alliance to Develop Machine Learning-enabled Solutions for Diabetes-related Eye Disease. Published online December 27, 2016.
  11. IBM Machine Vision Technology Advances Early Detection of Diabetic Eye Disease Using Deep Learning. PR Newswire. April 20, 2017.
  12. Sedai, Suman; Roy, Pallab; Mahapatra, Dwarikanath; and Garnavi, Rahil. Segmentation of Optic Disc and Optic Cup in Retinal Fundus Images Using Coupled Shape Regression. In: Chen X, Garvin MK, Liu J, Trucco E, Xu Y editors. Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, OMIA 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 21, 2016. 1–8.
  13. Deep Learning Aids Blindness Prevention in China – Intel.
  14. IDx-DR product information.
  15. IDx product pipeline.
  16. AOS Product Catalog.
  17. Vmax Vision Press Release. May 1, 2018.
  18. Visulytix Press Release.