Concerns about AI in healthcare

Rithesh Shetty
7 min read · Apr 20, 2021

Introduction

It’s amazing how the first thoughts that come to mind are “ROBOTS! WORLD DOMINATION! SENTIENCE!”
However, I don’t believe in any of these dystopian visions. You know why? Because if robots could feel, just like us, I’m sure most of them would be too lazy for this take-over-the-world schtick.
They’d have bills to pay, batteries to feed.
I also think they’d end up being too busy dealing with very human-like problems, “why hasn’t cyborg Vanessa-XLR8M replied to my DMs? Would teleporting to her room seem too thirsty?”
Now…
does that sound like a mass-destruction causing robot race?

Credits: blendspace.com

Anyway, we do have some serious concerns in the short-term that need to be addressed to ensure a more predictable and palatable future involving AI. Shaping AI policies will define a drastically different human lifestyle, a new beginning of sorts. Thus, it is paramount for us to get this transition right.
Where else to start than from the sector that deals with life and death?

AI in healthcare has its own distinct set of challenges to overcome, in addition to the challenges that will be encountered by the general adoption of AI. Let’s take a look:

Hurdles ahead…

Workforce displacement

The big fish, isn’t it?
“AI will take all our jobs!”
“Automation will kill livelihoods!”
We’ve heard this tune for a while now, and this is one of the significant concerns about AI. A two-year study from McKinsey Global Institute suggests that by 2030, intelligent agents and robots could replace as much as 30 percent of the world’s current human labor.

“That’s a yikes from me dawg” gif
Credits: giphy.com

Now, it’s a given that AI will displace quite a few jobs, simply because it will perform them better (goodbye, rude hospital receptionist). The resultant automation will potentially put low-skilled workers at risk, but it is important to note that this will most likely be a short-term effect, until the labor market readjusts and reskills itself.

“During the Industrial Revolution more and more tasks in the weaving process were automated, prompting workers to focus on the things machines could not do, such as operating a machine, and then tending multiple machines to keep them running smoothly. This caused output to grow explosively. In America during the 19th century the amount of coarse cloth a single weaver could produce in an hour increased by a factor of 50, and the amount of labour required per yard of cloth fell by 98%. This made cloth cheaper and increased demand for it, which in turn created more jobs for weavers: their numbers quadrupled between 1830 and 1900. In other words, technology gradually changed the nature of the weaver’s job, and the skills required to do it, rather than replacing it altogether.” — The Economist, Automation and Anxiety

However, it is absolutely critical for us to provide workers with opportunities to reskill and to emphasize lifelong learning, to make this transition as smooth as possible. The world is changing rapidly, and policymakers will be responsible for how we keep up.

Skill atrophy

Credits: dreamstime.com

It’s incredible to think that there was a point in time when humans took their ships out onto the open seas in the name of exploration, sometimes found themselves at the opposite end of the world, dealt with unanticipated storms, and yet found their way back to the mainland from where they started, all without the use of technology.
Yet here we are today, using Google Maps to get to a restaurant we’ve visited several times before, and then again on the way back, even though the route is just the same one in reverse.
Needless to say, some of our spatial sense and navigation skills have taken a hit over generations, because technology simply does the work for us. Our skills have atrophied.

AI is going to kill certain skills and correspondingly, jobs, but it may not be such a bad thing…

“Simply put, jobs that robots can replace are not good jobs in the first place. As humans, we climb up the rungs of drudgery — physically tasking or mind-numbing jobs — to jobs that use what got us to the top of the food chain, our brains.” — The Wall Street Journal, The Robots Are Coming. Welcome Them.

Of course, considerations have to be made about our people skills, our human interactions. Dealing with that rude hospital receptionist, instead of a mobile app, and getting your work done is a skill of its own.
Are we also willing to forego such skills, skills that are developed over the course of human experiences? How does that impact our society? How does that affect the nature of an individual in that society?

Algorithmic and data bias

Cardiovascular disease (CVD) is the leading cause of death in both men and women. We’d love for AI to help with this, wouldn’t we?
Well, the good news is that it definitely can.
The bad news? It might help some more than others.

“All animals are equal, but some animals are more equal than others” — George Orwell

As amazing as the human mind is, it is quite susceptible to biases, and these can be reflected in the algorithms we create. Moreover, our environments are shaped by social, economic and cultural factors, which can lead to biased data being generated from those environments. AI is brutally transparent in this sense: biased data will lead to biased solutions.

Research suggests that CVDs develop 7–10 years later in women than in men. The differences between men and women, in terms of health, are related not only to their biological characteristics and reproductive functions, but also to environmental, social, and cultural factors (for instance, differences in access to good healthcare). The risk of heart disease in women is underestimated, which leads to less accurate treatment decisions. Now, if our clinical data reflects this bias and under-represents a particular class, using that data for an AI solution will lead to distorted predictions and outcomes, disadvantaging women.
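To make the mechanism concrete, here is a toy sketch of how under-representation skews a model toward the majority group. Everything in it is assumed for illustration: the synthetic onset ages, the 4:1 sampling ratio, and the trivial single-threshold “model” — no real clinical data or method is implied.

```python
import random

random.seed(0)

# Hypothetical synthetic data: CVD onset ~8 years later for women,
# loosely echoing the 7-10 year gap mentioned above. Illustrative only.
def make_patient(sex):
    age = random.randint(40, 80)
    onset = 60 if sex == "M" else 68   # assumed onset ages, not real figures
    return {"sex": sex, "age": age, "cvd": age >= onset}

# Biased sample: women under-represented 4:1
data = [make_patient("M") for _ in range(400)] + \
       [make_patient("F") for _ in range(100)]

# "Train" a single global age threshold that maximises overall accuracy
def accuracy(threshold, patients):
    return sum((p["age"] >= threshold) == p["cvd"] for p in patients) / len(patients)

best = max(range(40, 81), key=lambda t: accuracy(t, data))

men   = [p for p in data if p["sex"] == "M"]
women = [p for p in data if p["sex"] == "F"]
print(f"learned threshold: {best}")
print(f"accuracy on men:   {accuracy(best, men):.2f}")
print(f"accuracy on women: {accuracy(best, women):.2f}")
```

Because men dominate the training data, the overall-best threshold settles near the male onset age, so the model is far more accurate for men than for women — the minority group quietly pays for the majority’s fit.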

Credits: blogs.gartner.com

Bias is a major issue in AI, and it is especially crucial in the healthcare context, because equity is one of the core principles of healthcare. The goal is to ensure the best possible care for everyone, regardless of their idiosyncrasies. Hence, research is being carried out to mitigate such biases, in our algorithms and in our data. It is an important component to consider when we assess our formulated solutions as well.

Patient privacy

An important challenge in healthcare is the necessity for patient data/records to be safeguarded with stringent protocols. In the wrong hands, this data could be absolutely detrimental.
While AI-based solutions can potentially have a massively positive impact, it is critical for us to be ethical about the process cycle, starting from the data collection activities. As I alluded to in my previous post, the ethics of medicine in the past have been questionable, to say the least. Therefore, we have to be able to find a balance between innovation and infringement going forward.

The personal health information (PHI) of a patient is currently protected by two methods:

  • De-identification
    The process by which a patient’s identifiers, such as name and IDs, are removed from the PHI. Note that re-identification may still be possible.
  • Anonymization
    Ideally, an irreversible process to ensure that information cannot be traced back to the patient, i.e., re-identification is impossible.

The reality today, though, is that complete anonymization cannot always be guaranteed. Many other data sources still contain identifying information, and sophisticated techniques can align that data with the de-identified records to potentially re-identify the patient.
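A minimal sketch of what de-identification looks like in practice — the record and field names are hypothetical, and the identifier list is only a small subset of the kinds of direct identifiers (e.g. those enumerated under HIPAA’s Safe Harbor rule) a real pipeline would strip:

```python
import copy

# Hypothetical patient record; field names are illustrative, not from any
# real EHR schema.
record = {
    "name": "Jane Doe",
    "patient_id": "P-10234",
    "phone": "555-0134",
    "zip": "12345",
    "age": 47,
    "diagnosis": "hypertension",
}

# Direct identifiers to strip; a real system would cover many more.
DIRECT_IDENTIFIERS = {"name", "patient_id", "phone"}

def de_identify(rec):
    """Return a copy of the record with direct identifiers removed.

    Caveat: quasi-identifiers like zip + age can still enable
    re-identification when linked with outside data sources.
    """
    out = copy.deepcopy(rec)
    for field in DIRECT_IDENTIFIERS:
        out.pop(field, None)
    return out

clean = de_identify(record)
print(clean)  # {'zip': '12345', 'age': 47, 'diagnosis': 'hypertension'}
```

Note what survives: zip code and age are quasi-identifiers, which is exactly why de-identified data can sometimes be linked back to a person — the re-identification risk described above.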

Credits: giphy.com

Regulation and oversight

The healthcare sector is heavily regulated, and rightfully so. The advent of new AI technologies introduces risks that are not addressed within the current portfolio of standards and guidance for software. The cost of an oversight could conceivably be devastating. Therefore, regulatory norms will have to evolve to ensure improved patient safety and appropriate health outcomes.

Today, while these AI solutions have reached amazing levels of accuracy and performance, they largely remain a “black box”, which means that their explainability and interpretability are still contentious.
Why did it give us the outcome that it did? What is the reasoning behind it? How do we know if the system has captured the right concept? What are the implications of an incorrect outcome?
These are all questions that we need answers to before we deploy a product into the mass market, and this is the responsibility of regulation committees worldwide.

Conclusion

Look, we might be a fair way off from world-conquering robots, but we surely are at a definitive point in human history. These concerns about AI in healthcare will have to be addressed to secure the outcomes we dream of today.
Or, you know? We could just let all of this be and deal with that rude hospital receptionist.

Thanks for reading.


Rithesh Shetty

24 and curious. The blog works at the intersection of philosophy, perspectives and healthcare.