As physicians and practices weigh adding artificial intelligence tools and how AI fits into the medical field, there are new risks and liabilities to consider, including the possibility of AI-related malpractice lawsuits, according to a June 6 report from Medscape.
Many physicians already use AI every day, from electronic health records with integrated AI-powered clinical decision support tools to ChatGPT for drafting prior authorization requests.
However, physicians need to remember that they remain liable for poor decisions in medical care, even when the decision or advice came from AI.
Physicians can become overly reliant on AI-generated suggestions, which can lead to poor care. While tools like ChatGPT can be helpful for gathering information, physicians and surgeons who do not vet that information face an increased risk of lawsuits.
As more healthcare providers use ChatGPT, health systems are implementing best practices to keep themselves and their physicians safe from legal issues.
Best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, according to the report.
Physicians also risk misdiagnosis complaints if they rely too heavily on AI. If an AI algorithm predicts a patient is having a severe medical event and the physician orders tests without independently evaluating the patient, the patient may complain of unnecessary and expensive medical testing.
There have also been reports of bias in AI systems, with algorithms favoring certain genders, races or socioeconomic statuses.
To prevent AI-related lawsuits, physicians should know when and how AI is being used in their practices, be informed about who trained the AI, never blindly trust its output, use ChatGPT as a support tool rather than a diagnostic tool and implement strong data governance strategies.