Every week, Sumit Khake visits the slums of Mumbai to hold screening camps for the private eye hospital in which he works.

His job is to identify and refer people who might have diabetic retinopathy for treatment.

But Khake is no doctor. He was hired in 2016 by the hospital as a clerk, and knows little about the condition that can cost diabetic patients their vision.

What enables him to do the highly specialised task of screening patients is artificial intelligence software downloaded onto a smartphone.

The phone is attached to a handheld camera, which takes high-resolution images of the retina. The software, called FOP, scans the image taken by the camera and looks for abnormal growth of blood vessels, a sign of diabetic retinopathy.

If the software detects such a symptom, the phone screen flashes a red alert – for Khake, an indicator that the patient might have retinopathy.
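The inner workings of the FOP software are proprietary, but the screening flow Khake follows can be illustrated with a minimal sketch. Everything below – the model stub, the threshold and the function names – is hypothetical and only shows, in broad strokes, how a retinal image might be scored and a red alert raised.

```python
# Hypothetical sketch of a retinopathy screening flow (not the actual FOP software).
# A trained classifier would normally supply the probability; here a stub stands in.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    probability: float   # model's estimate that retinopathy signs are present
    red_alert: bool      # shown on the phone screen for the operator

ALERT_THRESHOLD = 0.5    # assumed cut-off; a real system would tune this clinically

def score_retina_image(image_path: str) -> float:
    """Placeholder for a trained model that looks for abnormal blood-vessel growth."""
    # In practice this would run a neural network on the fundus image.
    return 0.72  # dummy probability for illustration

def screen_patient(image_path: str) -> ScreeningResult:
    p = score_retina_image(image_path)
    return ScreeningResult(probability=p, red_alert=p >= ALERT_THRESHOLD)

if __name__ == "__main__":
    result = screen_patient("retina_scan_001.jpg")
    if result.red_alert:
        print("RED ALERT: refer patient to an ophthalmologist for confirmation")
    else:
        print("No signs flagged; routine follow-up")
```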

Khake is among hundreds of unskilled workers in India who have been using artificial intelligence, or AI, for diagnostic work that otherwise only specialised doctors were equipped to do.

“India has only about 2,000-3,000 retina specialists to detect diabetic retinopathy,” said Dr S Natarajan, director of Aditya Jyot Eye hospital where Khake works. “This software has made our job easier.”

Workers like Khake, he said, screen people, leaving doctors to think only about treatment. “We can save more people from losing their vision,” he said.

Since 2018, the hospital has screened more than a lakh people, with the AI detecting retinopathy in 17% of them.

The software used by the hospital is one of numerous AI tools available in the Indian healthcare market that are being used by both private and government doctors.

However, Natarajan is cautious. To confirm the diagnosis made by AI, he has deputed an ophthalmologist to verify all cases the software flags as diabetic retinopathy.

The ophthalmologist checks the images taken by the camera, and examines the patient.

In a study carried out at the Aditya Jyot Eye hospital, Natarajan found that the software could detect confirmed diabetic retinopathy in 85.2% of cases, which leaves room for error if there is no human monitoring.

“AI can’t replace a doctor, not yet,” Natarajan said.

The warning

Natarajan’s wariness is echoed by the wider medical fraternity. As AI has evolved and spread rapidly in a short period of time, it has made its way into a range of sectors. Health is no exception. But as powerful a tool as it may be, doctors and experts say the healthcare sector must be cautious about deploying such technology given the range of concerns to which it gives rise.

On May 16, the World Health Organization called for caution while using AI or language model tools, such as ChatGPT or Bard, for healthcare diagnosis or treatment.

“Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” the World Health Organization said. It did not warn against any particular diagnostic tools.

The European Union has proposed a set of regulations to control AI, including a ban on its use in some sectors. The United States is studying how to implement rules to govern AI and has published five principles covering safety, data privacy and protection against algorithmic discrimination.

In India, the technology remains unregulated. In April, Union Information Technology minister Ashwini Vaishnaw told the Lok Sabha that the government had no plans to formulate a law on artificial intelligence. The G7 meeting in Japan in May – where leaders discussed drafting guidelines to regulate AI – may change India’s position. Vaishnaw has since hinted at a possible “framework” for AI tools.

So far, in the health sector in India, artificial intelligence is being used for diagnostics, and has begun to enter therapeutics.

A hospital employee operates a traditional testing machine to detect diabetic retinopathy.

A diagnostic tool

The entry point for AI in diagnostics was radiology, prompted by the shortage of radiologists – India has just over 11,000 of them. Sometime in the last decade, artificial intelligence began to be used to analyse X-ray and CT scans and generate reports.

For example, the 5C Network, a remote radiology service provider, is attached to 2,000 hospitals, laboratories and clinics across the country. It processes 1,500 X-rays and chest scans through artificial intelligence every day.

Its AI uses a neural network, a mathematical system loosely modelled on the human brain that learns by analysing thousands of medical data sets, texts or images sourced from research institutes in the United States and the United Kingdom. It is trained to spot anomalies in scans.

Anand Iyer, chief operating officer at 5C, said it was initially a struggle to convince hospitals and labs to trust AI-generated reports but the Covid-19 pandemic facilitated the shift.

In India, the law demands that a diagnostic report must be verified by a radiologist. At 5C, the level of a radiologist’s intervention depends on the performance of AI.

“We have another algorithm to detect the level of confidence the AI has in generating the report,” Iyer explained. This is measured by analysing the kind of errors the AI makes while dealing with complex cases.

“If the confidence score is high, we have 400 radiologists on board who just skim through the report and sign it,” Iyer said. “If the confidence score is low, the radiologist rechecks the scan.”
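Iyer did not describe the algorithm’s internals, but the routing he outlines amounts to a simple rule: high-confidence reports go to a radiologist for a quick sign-off, low-confidence ones for a full recheck. The threshold and field names in this sketch are assumptions for illustration, not 5C Network’s actual system.

```python
# Hypothetical sketch of confidence-based routing of AI-generated radiology reports.
# The threshold and data fields are illustrative only.
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off between "skim and sign" and "full recheck"

def route_report(report: dict) -> str:
    """Decide how much radiologist attention an AI-generated report needs."""
    if report["confidence"] >= CONFIDENCE_THRESHOLD:
        return "skim_and_sign"      # radiologist quickly reviews and signs the report
    return "full_recheck"           # radiologist re-reads the scan itself

reports = [
    {"scan_id": "CXR-001", "confidence": 0.97},
    {"scan_id": "CXR-002", "confidence": 0.62},
]

for r in reports:
    print(r["scan_id"], "->", route_report(r))
```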

AI has proved most useful in diagnosing tuberculosis.

For example, Mylab Discovery Solutions, a manufacturer of diagnostic kits, has tied up with Qure.ai, an AI provider, which helps it detect tuberculosis through X-ray scans.

The government-run Revised National Tuberculosis Control Programme is supporting a company that has created a mobile app to detect tuberculosis. The app, called Timbre, aims to detect pulmonary tuberculosis from the sound of a cough – currently, it is being run as a pilot project.
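Timbre’s method has not been published in detail, but cough-based screening of this kind typically extracts acoustic features from a recording and passes them to a trained classifier. The sketch below is purely illustrative: the features are a crude spectrum summary and the classifier is a stub, not the app’s actual pipeline.

```python
# Hypothetical sketch of cough-sound screening (not Timbre's actual pipeline).
import numpy as np

def extract_features(audio: np.ndarray) -> np.ndarray:
    """Summarise the recording's frequency content into a small feature vector."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, 8)          # 8 coarse frequency bands
    return np.array([band.mean() for band in bands])

def classify(features: np.ndarray) -> float:
    """Stub for a trained model returning a probability of pulmonary TB."""
    return float(features[0] / (features.sum() + 1e-9))  # dummy score for illustration

# A simulated one-second recording stands in for a real cough clip.
sample_rate = 16_000
audio = np.random.default_rng(0).normal(size=sample_rate)
print("Illustrative TB risk score:", round(classify(extract_features(audio)), 3))
```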

Using AI-powered software can cut costs by a third when compared to the standard tests required to detect tuberculosis, said Saurabh Gupta, head of special projects at Mylab Discovery Solutions. “For screening purposes, we find AI accurate and safe,” he said. “But for confirmatory diagnosis, there is room for improvement.” Mylab will soon launch an AI-powered tool to detect latent tuberculosis using a skin test.

Increasingly, hospitals are turning to AI technology. In March, the Apollo Hospitals chain launched Clinical Intelligence Engine, a software that can suggest basic diagnostic tests and prescribe medicines for patients at home.

The hospital chain has also tied up with Microsoft’s AI Network for Healthcare to develop a model to predict early heart attack risk. “Using clinical and lab data from over four lakh patients, the AI solution can identify new risk factors and provide a heart risk score to patients without a detailed health check-up, enabling early disease detection,” the hospital spokesperson told Scroll.
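Apollo has not disclosed how its model works, but a risk score of this kind is essentially a function that maps routine clinical and lab variables to a probability. The sketch below uses invented variables and hand-set coefficients purely to illustrate the idea; it is not the hospital’s model.

```python
# Hypothetical sketch of a heart-risk score built from routine clinical data.
# The variables and coefficients are invented for illustration.
import math

def heart_risk_score(age: int, systolic_bp: float, hba1c: float, smoker: bool) -> float:
    """Map a few routine measurements to a 0-1 risk score via a logistic function."""
    z = -8.0 + 0.06 * age + 0.02 * systolic_bp + 0.4 * hba1c + 0.7 * int(smoker)
    return 1.0 / (1.0 + math.exp(-z))

print(f"Illustrative risk score: {heart_risk_score(52, 138, 7.1, True):.2f}")
```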


Lack of local data

Mumbai’s Tata Memorial hospital, which records the largest number of cancer patients in the country, is cautiously optimistic about the use of AI.

“AI has a lot of promise in therapy but it is not validated through a clinical trial,” said hospital director Dr CS Pramesh. “When a new treatment is introduced, it has to be proven effective in a clinical study.”

The hospital receives thousands of requests for a second opinion on cancer treatment. To address this need, the hospital introduced AI in an online portal called Navya that provides a second opinion.

The patient has to upload medical reports, which the AI analyses before recommending multiple lines of treatment. “This is checked by our experts,” said Pramesh. “AI cannot be a standalone decision maker.”

The lack of trust in AI boils down to two reasons: it is new and its algorithms are not built on local Indian data.

India lacks a culture of data-sharing between government and private companies. Health-based technology companies in India procure data from institutes in the US and the UK to build algorithms, said Gupta of Mylab. Predictably, AI built on international data is not accurate when analysing Indian patients.

“For example, Indian lungs are smaller than the lungs of people in the US. There is a higher level of pollution here, so scans have a certain fogginess,” said Gupta. “The AI may get confused.”

The information technology ministry told Parliament in April that the Centre has developed an Open Government Data Platform with six lakh data resources from central and state departments, covering health and other sectors of research.

But none of the health technology companies or labs that Scroll spoke to had found the data sets useful for medical diagnostics or therapeutics so far.

Pramesh, too, said that in the absence of local data, AI models may not work accurately. With that in mind, Tata Hospital is working with the Indian Institute of Technology, Bombay, to archive the scans of cancer patients.

The idea is to create an AI-based model for cancer diagnosis that will save time for doctors and help respond to patients faster. “We have archived thousands of scans,” Pramesh said. “By next year, we should be able to build an algorithm. But we will first need to validate it.”

Many government medical colleges are also still wary of AI. “There are more questions than answers,” said Dr Vipin Kaushal, superintendent at the Post Graduate Institute of Medical Education and Research, Chandigarh, which has been designated as a centre of excellence for artificial intelligence.

“We are not sure how valid AI’s interpretations are,” he said. “There are many issues that need to be looked at.”

Hospital staff work at an emergency ward. Credit: Reuters.

Patient privacy, data concerns

AI’s advances in healthcare have uncovered many grey areas.

For instance, it is not clear who will be responsible if AI generates an erroneous medical report. If the data from which the AI learns is skewed, manipulated or incorrect, the medical reports it generates will be incorrect too.

Anushka Jain, policy counsel at the Internet Freedom Foundation, said hospitals must also think carefully about deploying AI. “How is the data for its algorithm collected?” she asked. “Is it assessed well? They have to ensure all this before they actually use it on a patient.”

Dr Kshitij Jadhav, assistant professor at the Koita Centre for Digital Health, Indian Institute of Technology-Bombay, said patients should have primary ownership of their data, which can be shared only after seeking consent and anonymising personal details. “However, the dilemma is that data sharing is necessary since only then population level interpretations can be drawn,” he said.
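Jadhav’s two conditions – consent and anonymisation before sharing – translate into a simple data-handling rule. The record fields and hashing choice below are assumptions made for illustration, not a description of any hospital’s actual system.

```python
# Hypothetical sketch of Jadhav's conditions: share a record only with consent,
# and only after stripping or hashing personal identifiers.
import hashlib
from typing import Optional

DIRECT_IDENTIFIERS = {"name", "phone", "address"}   # assumed fields to strip

def anonymise(record: dict) -> Optional[dict]:
    if not record.get("consent_given", False):
        return None                               # no consent, no sharing
    shared = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Replace the patient ID with a one-way hash so records can still be linked.
    shared["patient_id"] = hashlib.sha256(str(record["patient_id"]).encode()).hexdigest()[:12]
    return shared

record = {"patient_id": 1047, "name": "A. Patel", "phone": "98xxxxxxxx",
          "address": "Mumbai", "hba1c": 7.4, "consent_given": True}
print(anonymise(record))
```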

Doctors, too, have raised ethical concerns over patient privacy and data security. “There is a genuine concern about whether patients are informed that their data will be used to build algorithms for AI,” said Dr Samir Malhotra, head of the pharmacology and AI unit at PGIMER, Chandigarh. Malhotra said patients also need to be told if their medical report has been generated using AI. “That is not happening,” he said.

Legal loopholes, data breach

In the absence of any law, no doctor or lab is mandated to obtain informed consent from patients before using their data for AI. Malhotra said that AI is moving at such a fast pace that regulations will continue to fall behind.

But with so much data on digital platforms, security challenges are inevitable.

Rahul Sasi, co-founder and CEO of CloudSek, which predicts cyber threats, said that as AI evolves, digital safety concerns will also grow. “More and more hospitals are putting their data on a cloud, and some are already facing security threats,” said Sasi. The health sector is not investing enough to safeguard data in hospitals, he added.

Anita Gurumurthy, executive director of the non-profit IT for Change, said that it is quite likely that AI will come to be used not just for diagnostics and therapeutics but also in developing drugs, public policymaking and vaccine deployment. “We need rules and guidelines for that,” she said.

There are also concerns that AI might end up replacing doctors to some extent. But Jadhav of IIT-Bombay said that would be unwise. “… This should not even be the aim of AI researchers,” said Jadhav. Instead, he said, AI researchers should help healthcare professionals ease their processes and reduce repetitive tasks.

Jadhav said that in the future, AI will help screen radiological images, generate drafts of discharge summaries, and use speech-to-text to fill in electronic medical records.

For the medical community too, AI must remain only a tool, not a final decision maker, emphasised Malhotra of PGIMER.