Chatbot for Palliative Care Training
Georgia Tech HCI Master’s Thesis, 2018. “Best Project Award” nominee. Examined the use of chatbot technology to train healthcare providers to better identify and respond to their patients’ spiritual distress.
Palliative care patients’ spiritual needs are not being met. 70% of patients want their health team to talk to them about their religious and spiritual beliefs, but only 30% report ever having that conversation. A literature review revealed that a patient’s spiritual beliefs affect their communication style, healthcare decisions, assessment of their own abilities, and perception of pain. Unfortunately, healthcare providers receive little training on how to talk to their patients about their beliefs, due to a combination of factors: the high cost of this training, the relative newness of the field of palliative care, and the changing religious landscape in the US.
How can technology be used to improve communication training for doctors and nurses about religious and spiritual issues?
I chose to explore an educational chatbot because bots are already being used successfully in education for other purposes. Additionally, there have been several successful spiritual guidance chatbots aimed at helping patients, including the Spiritual Palliative Care Agent and Emily at Lifefolder.
A text based chatbot patient simulation informed by social media data
I validated the project’s objective through initial interviews with nurses, medical students and chaplains.
Patient Communication: Because chatbots have successfully used social media data to imitate groups of individuals, I decided to inform my chatbot’s communication with social media posts from support groups for palliative care patients. Additionally, I conducted a survey to better understand how patients might want these topics discussed in an in-person setting, as opposed to an online support group.
Doctor and Nurse Communication: To create the best responses for users of this chatbot, I conducted interviews with palliative care doctors who specialize in spiritual health, along with a literature review of current best practices. I also interviewed 1 medical student, 3 nurses and 2 chaplains.
Social Media Data
I collected more than 5,000 posts from public Facebook support groups for terminal illnesses; over 1,000 of these posts directly addressed how the person’s religious or spiritual beliefs were affecting their lives. I used NVivo to code these posts by religious and spiritual affiliation. From there, I analyzed the posts for signs of spiritual distress, spiritual wellbeing, and spiritual practice identified through the literature review.
Based on the coded social media posts, I created 5 personas from the most common expressions of spiritual and religious distress.
Patient responses were generated from the social media posts.
Physician and Nurse responses (which the user could pick from) were sourced from the FICA, HOPE, and SPIRIT models of spiritual assessment as well as best and worst practices from the 8 expert interviews.
Iteration 1 - Chatfuel
The first iteration of the chatbot was built in Chatfuel, a Facebook Messenger bot creator. In user testing with 5 medical students, participants found this format too impersonal and noted that the bot did not suggest better communication practices when they failed to correctly identify spiritual distress in the patient bot.
Iteration 2 - Twine
I used Twine, an interactive storytelling platform, for the second iteration of the bot. This allowed me to put an image of the patient on screen and to give the user feedback when they chose good or bad responses to the patient’s concerns. While this iteration was more successful, participants felt the static patient illustration lacked the non-verbal cues they would rely on in a real patient interaction.
Iteration 3 - Twine with character expressions
The third and final iteration includes renderings of the 6 core facial expressions, as well as compound expressions, based on research into facial expressions for virtual agents. These expressions were paired with the emotional coding of the social media posts the chatbot was displaying.
Users work through a conversation with the virtual patient, selecting responses to the patient’s messages. The user receives feedback throughout the experience when they choose a particularly good or detrimental response. At the end of the conversation, users receive a scorecard based on their responses that shows where their communication could improve.
Testing was conducted with 5 medical students at Mercer University who were pursuing palliative care work. A survey was administered both before and after each participant interacted with the chatbot, to measure how their understanding of spiritual communication changed.
These questions measured the clarity and usefulness of the chatbot’s social-media-informed dialogue. They also measured the usefulness of the chatbot platform for reinforcing feedback and increasing preparedness for patient interactions.
Overall, users found the simulated patient’s communication clear and useful, validating the use of social media data for simulated conversation. Users felt moderately more prepared for in-person conversations after using the chatbot. 4 participants wanted longer interactions with the chatbot and the chance to engage with additional personas, believing that repeated use of the tool would increase its value.
Additionally, users understood more deeply that spiritual and religious beliefs were important to their patients and their patients’ care. They gained a greater appreciation for the importance of these conversations through the chatbot.