Chatbots are conversational platforms driven by artificial intelligence (AI) that respond to queries using algorithms. They are considered ground-breaking technologies in customer relationships. Since healthcare chatbots can be on duty tirelessly, day and night, they are an invaluable addition to patient care.
Chatbots can provide a tireless, constant source of interaction for patients with the healthcare system. The anonymity associated with these chats is a source of confidence for patients sharing personal information, especially in the area of mental healthcare.
Other important areas that could safely be entrusted to chatbots include gathering routine data, scheduling appointments, administrative work around admissions and discharges, sending reminders, tracking symptoms, creating medical records, facilitating insurance and payment procedures, and telehealth.
Reduced costs, improved efficiency
Chatbots, sometimes called virtual assistants or virtual humans, can handle the initial contact with patients, asking and answering the routine questions that inevitably come up. During the coronavirus disease 2019 (COVID-19) pandemic in particular, screening for the infection by asking a predefined sequence of questions and using the answers to assess COVID-19 risk could save thousands of manual screenings.
This would save physical resources, manpower, money and effort while accomplishing screening efficiently. The chatbots can make recommendations for care options once the users enter their symptoms.
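The screening flow described above amounts to a simple rule-based protocol: a fixed sequence of yes/no questions, each contributing a weight to a risk score that maps to a care recommendation. A minimal sketch follows; the questions, weights, and thresholds are purely illustrative assumptions, not a clinical protocol.

```python
# Illustrative rule-based screening chatbot logic.
# Questions, weights, and thresholds are hypothetical examples,
# not clinical guidance.

SCREENING_QUESTIONS = [
    ("Do you have a fever?", 2),
    ("Do you have a new continuous cough?", 2),
    ("Have you lost your sense of taste or smell?", 3),
    ("Have you been in close contact with a confirmed case?", 3),
]

def assess_risk(answers):
    """Sum the weights of 'yes' answers and map the total to a risk band."""
    score = sum(
        weight
        for (_, weight), yes in zip(SCREENING_QUESTIONS, answers)
        if yes
    )
    if score >= 5:
        return "high"      # e.g. recommend testing and clinical contact
    if score >= 2:
        return "moderate"  # e.g. recommend self-isolation and monitoring
    return "low"           # e.g. routine advice only

# Fever plus contact with a confirmed case -> "high"
print(assess_risk([True, False, False, True]))
```

A real deployment would branch on earlier answers rather than score a flat list, but the core idea, triaging users to a care recommendation before any human is involved, is the same.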
Medical chatbots are especially useful because they can answer the questions that anxious patients or their caregivers ask, questions that should certainly not be ignored, but that do not require a highly trained medical professional to answer. Because such tools spare patients from coming in for an appointment just to have their questions answered, they save time for both patients and healthcare providers while delivering useful information promptly.
“The answers not only have to be correct, but they also need to adequately fulfill the users’ needs and expectations for a good answer.” More importantly, errors in answers from automated systems destroy trust more than errors by humans.
Improving diagnostic accuracy
One stream of healthcare chatbot development focuses on deriving new knowledge from large datasets, such as scans. This is different from the more traditional image of chatbots that interact with people in real-time, using probabilistic scenarios to give recommendations that improve over time.
Besides answering questions related to illness, medications and common occurrences during the course of a chronic condition, chatbots can help evaluate how a patient is doing during follow-up, and schedule an appointment with the physician where further care is required.
Chatbots as healthcare companions
Medical (social) chatbots can interact with patients who are prone to anxiety, depression and loneliness, allowing them to share their emotional issues without fear of being judged, and providing good advice as well as simple company.
Smoothing insurance issues
Chatbots are well equipped to help patients get their healthcare insurance claims approved speedily and without hassle since they have been with the patient throughout the illness. Not only can they recommend the most useful insurance policies for the patient’s medical condition, but they can save time and money by streamlining the process of claiming insurance and simplifying the payment process.
Improving patient satisfaction
By allowing patients to get instant answers at the first point of contact, providing simplified and timely appointments and following up on patients after the visit, healthcare chatbots not only provide satisfaction with the whole healthcare experience, but may allow physicians to spend more time with their patients.
Chatbot advocates say that the time and effort saved for healthcare professionals, the more accurate recording and handling of information, the reduced risk of mistakes and the ability to use past and present data to predict the outcome are bound to increase the efficiency of healthcare in public hospitals. Chatbots can be exploited to automate some aspects of clinical decision-making by developing protocols based on data analysis.
Routine diagnostic tasks, online consultations and other virtual assistance can be handled by chatbot algorithms, but these may omit factors that should be considered for a reliable outcome.
Despite the obvious pros of using healthcare chatbots, they also have major drawbacks.
Developing more reliable algorithms for healthcare chatbots requires skilled programmers, who must be paid. Moreover, backup systems must be designed for failsafe operation, which adds cost and may introduce unexpected problems.
Many healthcare experts feel that chatbots may help with the self-diagnosis of minor illnesses, but the technology is not advanced enough to replace visits with medical professionals. However, collaborative efforts on fitting these applications to more demanding scenarios are underway. Beginning with primary healthcare services, the chatbot industry could gain experience and help develop more reliable solutions.
Still, because chatbots cannot capture all the personal details associated with a patient, they may lead both themselves and the experts who rely on them into inaccuracies in medical practice, raising medical liability and the prospect of new ethical issues.
For all their apparent understanding of how a patient feels, they are machines and cannot show empathy. They also cannot assess how different people prefer to talk, whether seriously or lightly, keeping the same tone for all conversations.
Chatbots cannot read body language, which hampers the flow of information. And if there is a short gap in a conversation, the chatbot cannot pick up the thread where it left off, instead having to start all over again. This may not be possible or agreeable for all users, and may be counterproductive for patients with mental illness.
Also, if the chatbot has to answer a flood of questions, it may be confused and start to give garbled answers.
Negative impact on professional skills
The widespread use of chatbots can transform the relationship between healthcare professionals and customers, and may fail to take the process of diagnostic reasoning into account. This process is inherently uncertain, and the diagnosis may evolve over time as new findings present themselves. Hence the need for prudence in making clinical decisions.
“What doctors often need is wisdom rather than intelligence, and we are a long way away from a science of artificial wisdom.” Chatbots lack both wisdom and the flexibility to correct their errors and change their decisions.
As chatbots remove diagnostic opportunities from the physician’s field of work, training in diagnosis and patient communication may deteriorate in quality. It is important to note that good physicians are made by sharing knowledge about many different subjects, through discussions with those from other disciplines and by learning to glean data from other processes and fields of knowledge.
Physicians must also be kept in the loop about the possible uncertainties of the chatbot and its diagnoses, such that they can avoid worrying about potential inaccuracies in the outcomes and predictions of the algorithm. This reduces cognitive load and thus improves physician performance.
Failure of trust
Moreover, as patients grow to trust chatbots more, they may lose trust in healthcare professionals. Placing too much trust in chatbots may also expose the user to data hacking. And finally, patients may feel alienated from their primary care physician, or may self-diagnose once too often.
Such self-diagnosis may become so routine that it hinders patients from seeking medical care when it is truly necessary, or from believing medical professionals once it becomes clear that the self-diagnosis was inaccurate. The conversation and rapport-building then required for the medical professional to convince the patient could well outweigh the time and effort saved at the initial stages.
Business logic rules
Another frequently noted ethical issue is that the human dimension of care is overlooked, with technical considerations pushed to the fore over human interaction. The effects that digitalizing healthcare can have on medical practice are especially concerning, particularly for clinical decision-making in complex situations with moral overtones.
Over-reliance on chatbots may also encourage healthcare companies to follow market logic, making profit rather than patient benefit the primary outcome, and allowing such companies to dominate healthcare at the cost of ethical practice.
Moreover, training is essential for AI to succeed, which entails the collection of new information as new scenarios arise. However, this may involve the passing on of private data, medical or financial, to the chatbot, which stores it somewhere in the digital world. Such data could be hacked, and privacy breaches could occur. This is among the pressing concerns of today.
Lack of accountability
Despite the emergence of a principle-based approach to AI in medical care, it remains true that this lacks the trust-based foundation of a patient-physician relationship, the wisdom of past experience, and dependable mechanisms to ensure legal and medical accountability.
Despite the many difficulties in identifying the complexities of chatbot use in healthcare, efforts must be made to approach this area both ethically and professionally, rather than from the viewpoint of business. “Chatbots have the potential to be integrated into clinical practice by working alongside health practitioners to reduce costs, refine workflow efficiencies, and improve patient outcomes.”
Nonetheless, “insufficient consideration regarding the implementation of chatbots in health care can lead to poor professional practices, creating long-term side effects and harm for professionals and their patients. Whether [the benefits] outweigh the potential risks to both patients and physicians has yet to be seen.”
- Parviainen, J. et al. (2021). Chatbot Breakthrough in the 2020s? An Ethical Reflection on the Trend of Automated Consultations in Health Care. Medicine, Health Care and Philosophy. https://doi.org/10.1007/s11019-021-10049-w. https://link.springer.com/article/10.1007/s11019-021-10049-w
- Powell, J. (2019). Trust Me, I’m a Chatbot: How Artificial Intelligence in Health Care Fails the Turing Test. Journal of Medical Internet Research. https://doi.org/10.2196/16222. https://www.jmir.org/2019/10/e16222/
- Chew, H. S. J. et al. (2022). Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review. Journal of Medical Internet Research. https://doi.org/10.2196/32939. https://www.jmir.org/2022/1/e32939
- Xu, L. et al. (2021). Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning: Systematic Review. JMIR Cancer. https://doi.org/10.2196/27850. https://cancer.jmir.org/2021/4/e27850
- Mittelstadt, B. (2019). Principles Alone Cannot Guarantee Ethical AI. Nature Machine Intelligence. http://dx.doi.org/10.2139/ssrn.3391293. Available at SSRN: https://ssrn.com/abstract=3391293
- Palanica, A. et al. (2019). Physicians’ Perceptions of Chatbots in Health Care: Cross-Sectional Web-Based Survey. Journal of Medical Internet Research. https://doi.org/10.2196/12887. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6473203/
Last Updated: May 4, 2022
Dr. Liji Thomas
Dr. Liji Thomas is an OB-GYN who graduated from the Government Medical College, University of Calicut, Kerala, in 2001. Liji practiced as a full-time consultant in obstetrics/gynecology in a private hospital for a few years following her graduation. She has counseled hundreds of patients facing pregnancy-related problems and infertility, and has been in charge of over 2,000 deliveries, striving always to achieve a normal delivery rather than an operative one.