
Dr Raju Sharma: ‘Checks and balances are needed’

‘AI should be used to help doctors, not replace them’

Civil Society News, New Delhi

Published: Jul. 29, 2024
Updated: Aug. 30, 2024

With the arrival of artificial intelligence (AI), the time is here to consider its implications for public healthcare. Currently, AI is an expensive and corporation-grown technology. Private hospitals will flaunt it. But for the tens of millions of people awaiting medical services, how AI is developed and deployed will determine how much they finally benefit from it.

With AI many wondrous capacities are on offer — far outdoing what human beings can achieve in terms of speed and numbers. Scans will get read faster. Pathology samples will be processed quicker. The physician making a diagnosis will get a helping hand. Beyond such simple uses, AI will also interpret data in ways that can't yet be imagined.

It is the inventiveness of start-ups that defines the uses of AI. But can it be left to them and the bigger tech companies that are taking it over to decide how it should be employed?

To find out more, we spoke at some length to Dr Raju Sharma, head of the department of radiology at the All India Institute of Medical Sciences (AIIMS) in New Delhi.

 

Q: Do you see AI as a boon? Particularly in diagnostics. Or do you see it as a huge challenge? 

Like anything else, it has its pros and cons and it will eventually depend on us how we leverage it to our advantage. It can certainly be a boon because it can come to the assistance of the physician, both to increase access and to decrease the time taken for various processes.

It can have multiple uses in the field of healthcare, especially in radiology where we rely a lot on images. AI can come to our rescue in interpreting these images. It can make healthcare more accessible in the sense that even in remote areas where radiologists are not available, at least the basic tasks can be done by AI algorithms.

Having said that, there are many challenges and there are many downsides. Remember, these are early days for AI. We don’t know the full potential as yet. No one can call himself an expert in the field right now because it is in a state of evolution.

 

Q: What do you think it could do for public healthcare in India? What are the possibilities that you see? It is early days, but is there any vision for it? 

There is immense potential. Because AI can handle large volumes of data, it can certainly come to our rescue in fulfilling the shortfall in physicians. In any country, physicians are in short supply. In India that is especially true because the distribution of physicians across the country is very uneven. The majority of physicians and experts are in the cities. There is a definite shortage in the rural areas. Currently AI is an expensive technology, but eventually, once it gets more accepted, the cost will come down. That will be the time to scale it up to the entire country, take it to the rural areas and, first, try to fill that shortage. If I talk of radiology as a field, there are no radiologists available in the rural settings of the country.

 

Q: How many radiologists do we produce in a year? 

I can’t give you a number, but the Indian Society of Radiology has about 25,000 members across the country. 

 

Q: That’s all?

That gives you a sense that it’s not a very large community. A country like ours needs many more radiologists than we currently have. 

 

Q: Technology is moving at a great pace, but there is a human resource deficit that is not improving. Is that worrying? 

Definitely, because it’s very hard to fill that deficit. Just training more physicians is not going to be a solution. Unless we can improve the infrastructure in the peripheries of the country, especially in the rural areas, physicians will not move there. 

 

Q: Is there a danger that we would use AI to avoid putting in better infrastructure and a healthcare system in place? Shouldn’t technology be a part of a larger infrastructure instead of infrastructure getting hooked to technology to make up for deficiencies?

I couldn’t agree more. I don’t think this is the only solution. This is perhaps one small solution for a much larger problem which has multiple dimensions and needs to be looked at from multiple angles. This alone will not be a solution to our lack of infrastructure.

It will come to the aid of the physician. Remember, the physician today is very challenged when he works in a hospital, whether a government set-up or a corporate set-up. The numbers are overwhelming and physicians are stressed.

AI can come to the rescue of the physician by taking over some of the mundane tasks we do. We can then spend our time on the more complex tasks which need the cognitive skills of a physician.

There have been articles published where people have called AI “assisted intelligence” rather than “artificial intelligence”. Used like this I think we won’t see a time when the physician gets replaced. I think that’s more of conjecture. But there are certainly a lot of tasks which AI could take over and leave the physician to spend that time in a more productive manner — thereby increasing efficiency and containing cost.

 

Q: In a typical working day, what are these mundane tasks that can be taken over so that physicians can be freed up to use their expertise in better ways? 

If I talk about radiology, when we look at CTs and MRIs, there are a lot of measurements that we make. This is especially true when we’re comparing studies. Cancer patients have many studies and they need to be compared to see how the patient is responding to therapy for which we have criteria laid down. We measure each lesion. We sit and see how they are progressing with time. This is something that can very easily be done by a computer.

Very often, when the patient is on follow-up there’s a lot of measurement which is done and computers are obviously far more efficient than humans at doing that in a much shorter span of time. We use AI in many ways, not just for making a diagnosis.

AI today is used in a hospital right from the time scheduling is being done. Hospitals in the developed world have algorithms which help them triage patients. This way, those who are more sick get priority appointments, get seen earlier, get reported earlier. Let’s say 40 scans come into the reporting station of a radiologist. The AI algorithm tells you which ones you should report first because those patients are critical and are waiting for your report, and which are the relatively older ones that can wait for some time.

So, right from the time of triaging your patients for the order you should be booking them in, to the order in which you should be reviewing them, to a date-wise arrangement of those scans — these are things we very often do physically because it is required but can easily be done by AI algorithms.

 

Q: AIIMS is typically overcrowded. How many scans do you do in a day?

In my department alone, we do about 200 CT scans in a day. This does not include the many centres which we have. We have a trauma centre, a geriatric centre, a mother and child centre. If I include those numbers, it would probably be 300 to 400 scans per day.

And today the complexity of scanning has gone so high that we generate scans with a slice thickness of less than 1 mm. So typically, when a patient scan is done, I have about 3,000 images in front of me to look at. This obviously takes time. The scanners have become much better, our resolution has improved, the slice thickness has reduced. All this finally contributes to a huge volume of data which we need to look at, and it is time-consuming. So, if some of these tasks could be done using AI, we could concentrate on the more complex ones.

For example, we do close to 700 X-rays in a day. We had reached a point where we could not report all those X-rays. We told our administration that they would have to do without a report: when the physician wants a report, he can come back to us, and only those X-rays will get reported. Today we have adopted an AI algorithm which triages these X-rays and sends the normal ones to the physician straightaway, and the ones it thinks are abnormal are then sent for our second opinion. That tells you how, in a busy hospital, AI can really be very useful.

 

Q: An algorithm is based on the data at hand. How do you create algorithms for the Indian body?

Very good data is needed to train the algorithm. AI algorithms typically learn based on the data we provide them. It needs to be high-quality labelled data. The physician first labels the data and says, ‘This is the abnormality and this is the nature of the abnormality.’ Those X-rays are fed to the computer algorithm. It learns what we are trying to train it for, and also some extra bit which we may not have trained it for. But the basic requirement for training a good algorithm is that you need to have good, clean, labelled data. And today data is everything.

As you very rightly point out, this data needs to be locally representative of the population we are going to apply it to. There’s no point in using data from the US and just adopting their algorithm. Before we adopt any algorithm, we typically validate it in our patient population. We give data to the tech companies which are developing these algorithms. We also have collaborations with the IITs in Delhi and Jodhpur — we provide them our data and they provide their domain expertise in developing computer algorithms. And this synergy of physicians and computer scientists within the country helps to develop algorithms which are more locally relevant.

But it’s absolutely correct that algorithms will need to be developed on data which is representative of our patient population. If they have been developed overseas, they need to be validated before they can be applied to the general population. 

 

Q: This must be a very laborious process. 

It is. When we are validating an algorithm, we have it read the imaging and then the radiologist reads the imaging and we see how well they correlate with each other and how reliable the algorithm is. It is a laborious process, but it’s something that needs to be done and is unavoidable. 

 

Q: How much does genetic diversity matter in a country as large as India? 

We are like many countries put together and the disease profile from the northern part of India to the southern part of India is very different. You know, cancer of the gall bladder is very common in the Gangetic belt. But if you fly down to Chennai and speak to them in meetings, they say we almost never see carcinoma of the gall bladder. So, yes, there is a very variable spectrum of disease in the country, which is going to make things more difficult because our patient populations are very different. The South Indian population is very different from the northeastern population, for example, and we will have to adapt and modify these algorithms to the requirement of individual states and hospitals also. 

 

Q: Commercial interests of one kind or the other drive AI right now. Do you think that is enough to make it truly useful in India? Or should AI be led in a more academic way, which perhaps we are not doing now, with proper oversight?

I wouldn’t say we are not doing it. We are taking steps towards that. But you are right that in the future this will have to have regulation. It will have to have legislation relating to data privacy, to patient safety. There are so many issues which come up and go beyond the domain of science. Patient confidentiality will have to be maintained. Informed consent of the patient will need to be taken before this data can be used for training algorithms. And you’re right that tech companies have a very different perspective. You know, their economics determines everything. 

 

Q: It would seem that AI is all about market expectations. There seems to be no attempt to integrate it into the scientific part of our lives, no scientific authority that is able to do this. 

No, I wouldn’t say that there is no authority. There is the Indian Council of Medical Research, which has taken cognizance of the field of AI. So has the Union Ministry of Health and so have the academic institutions across the country. I can name at least five or six leading academic institutions which are involved in both developing and validating algorithms, helping the government in deciding how they should be deployed. A lot needs to be done but, certainly, steps have been taken in the right direction. I would agree totally that this should not be only a tech-driven innovation. It should be driven both by the academic institutions and by the Government of India. 

 

Q: You know, when AI was first offered to private hospitals it was seen as a marketing edge. This tendency has continued to grow. Is there scope to now pause to think of the need for greater regulation? 

Absolutely. I think everywhere, even in the US, there is a lot of thinking going on both in academia and in the corporate world about whether we are really heading in the wrong direction as far as AI is concerned. Can we afford to let machines take over everything?

A lot needs to be done and, definitely, checks and balances need to be in place. I don’t foresee a time when physicians will get replaced by a machine. I think the physician offers much more to a patient than just a diagnosis. There is a whole component of a patient-physician relationship which is empathy with the patient. These are things which cannot be eliminated just by putting a machine in place. There is a lot of rethinking going on across the world about this. We need to step back and think where we are going and channelize it in the right direction as assisted intelligence, as I mentioned earlier. 

 

Q: But, as you well know, the patient-doctor equation has been changing because of unregulated privatization and several other factors. Patients are at the receiving end of a lot of stuff they can’t understand. In India this adverse equation is particularly problematic.

That’s true. I think the reason our issues are somewhat different is that our population is so large. We are always overwhelmed by numbers. Only a very small percentage of our society can actually go for corporate healthcare. The bulk of our population relies on healthcare provided by the government, especially in the peripheries.

So, certainly, I’m not saying that AI is going to solve all the problems or that it is a panacea for all our ills. A small fraction of our patient problems can be resolved by it.

We also cannot say we don’t want it. It’s here to stay. And, as you rightly pointed out, the economics of it will dictate that it will be adopted, so we cannot shy away from it. But it’s the right time to step back and think, so that we do not repeat the mistakes made with other technologies that were adopted without much thought. So, it certainly needs some thinking. But it does not need to be rejected. It needs to be adopted to the advantage of the physician and the patient.

There are, of course, many challenges relating to AI in a country like ours. There is the challenge of whether it will be equitably used across society. In the initial phases it will be very expensive. Therefore, we also run the risk of depriving a large chunk of our population of the benefits of AI.

Perhaps that can be taken care of by more indigenous efforts at developing AI algorithms, rather than relying only on giant technology firms like Google and Microsoft, whose technologies typically tend to be more expensive. It’s heartening to see that there are Indian vendors in the field. They are doing a good job, and not only are they developing algorithms for India, many of their algorithms have been FDA approved and are being adopted.

I think there need to be more indigenous efforts. There need to be more efforts from the public sector, from institutions like the IITs. They need to collaborate with physicians because eventually it will be a bipartite kind of collaboration — physicians on one side, and the computer scientists and data scientists on the other — who will come together to solve the problems which are specific to us.

 

Q: More socially driven. 

Definitely. For wider adoption so that the benefits reach the more deprived sections of the population.

 

Q: And to actually have an impact on healthcare in a country like ours, this technology needs to be more socially driven than profit driven. 

I would agree with that. 

 

Q: What is the margin of error that normally happens in a laboratory?

When a radiologist is seeing images, we would say that an accuracy of 80 to 90 percent is very good. Even with his best effort, a fully qualified, well-trained physician can make a mistake because the nature of the problem that he’s addressing is so complex. Algorithms are always tested against the accuracy of a physician. Having said that, there have been algorithms which have gone beyond the accuracy of physicians. There are publications, for example, in breast cancer, where they have shown that for very small lesions the computer had better accuracy. We hope that machines will decrease that margin of error in the future.

 

Q: Well, AI is generative. So it goes ahead and starts learning on its own. Is there the likelihood of a growing gulf between the technology and the physician?

There is certainly a risk that in the future the algorithm may actually understand medicine better than a physician. I’m unsure about this right now. But there is certainly a risk, you’re absolutely right: there are parts of the AI algorithm that are not transparent.

We don’t know how the computer learns, but it learns things from that data which we never intended to train it for. For example, AI algorithms looked at the retina of patients and could tell their gender. Now that was in no way the intention of the algorithm’s training, but the computer on its own learned to identify the gender of a patient by looking at the retina. Clearly, AI does things which are beyond the control of the person training the algorithm. So, 20 years from now, who knows what will happen? 
