Posted on Wednesday, November 30th, 2022 at 11:43 am
By Attorney Malia Tartt
We’ve all heard the adage, “To err is human.” But what about “to err is robot”? Artificial intelligence (AI)-powered medical technologies are rapidly evolving and finding a home in many clinical practices. The push is intended to automate more tasks so that doctors can spend more time addressing patient concerns. But can a computer safely replace a skilled doctor responding in real time?
A popular buzzword, AI refers to the theory and development of computer programs capable of performing tasks and solving problems that usually require human intelligence. AI differs from human intelligence in that humans have a far greater capacity to multitask, form memories, and navigate social interactions. While AI applications can run efficiently and may be more objective and accurate at narrow tasks, they simply cannot replicate human intelligence.
You may not realize it, but AI is already being used in hundreds of ways, from chatbots on your favorite shopping website to self-driving cars. In fact, AI already plays a major role in healthcare technology as a tool to diagnose disease, develop medicines, monitor patients, and more. The technology can learn and develop as it is used, learning more about the patient or the medicine and evolving over time.
AI imaging has already demonstrated the potential to improve patient safety, but researchers have also identified significant practical risks associated with its use. Diagnostic imaging, for example, is an area of rapid growth for AI in the medical field. AI algorithms analyze images to identify patterns, then use pattern recognition to flag apparent abnormalities or identify masses and fractures. But if the program misinterprets those images, the consequences could be severe. It is an unfortunate truth that doctors make mistakes, but those mistakes are generally limited to a single doctor-patient relationship. With AI, an untold number of patients could be at risk before the problem is detected, traced, and corrected, because the problem stems from a fundamental flaw in the program itself. Researchers have also found that AI can introduce new kinds of errors because AI systems may not be trained, as doctors are, to “err on the side of caution.” The doctors’ approach may produce more false positives, but that may be preferable when the alternative is a serious safety outcome for the patient.
In addition to the practical risks of AI-powered medical technologies, there are significant ethical concerns to unravel. The first is accountability: when an AI system injures someone, who should be held liable? The designer or manufacturer of the system? The doctor? The hospital? Another is data privacy. Lawsuits have already been filed over data-sharing between large health systems and AI developers, and some patients may worry that an AI system’s collection of their data violates their privacy. A final concern is bias and inequality. AI systems learn from data entered by programmers and can incorporate those programmers’ biases. For instance, an AI system developed in an academic medical center will know less about (and therefore make more errors with) a rural population that does not typically visit such centers. Even when AI systems learn from accurate, representative data, the inherent biases and inequalities of the American health system remain. African American patients, for example, receive on average less treatment for pain than white patients; trained on this data, an AI system might learn to suggest lower doses of painkillers to African American patients.
With proper oversight and training, medical technology powered by AI has the potential to transform and improve healthcare. The FDA is already considering regulation of AI technologies to ensure their safety and effectiveness, but is that sufficient? Even everyday products regulated by the FDA, such as CPAP machines and hip implants, can be defective and cause serious injury or even death. While the future is hopeful, don’t expect to see C-3PO in your doctor’s office anytime soon.