Scientists have developed a “chatbot” to advise patients about their end-of-life care, according to New Scientist.
— New Scientist (@newscientist) September 20, 2017
According to the report, Timothy Bickmore, a professor at Northeastern University in Boston, and his team developed a chatbot to “offer spiritual and emotional guidance” to people making decisions about the end of their lives.
Bickmore told New Scientist that his team developed the chatbot because sometimes patients don’t have an opportunity to discuss end-of-life care until it is “too late.”
“We see a need for technology to intervene at an earlier point,” Bickmore said.
How does the chatbot work?
The tablet-based chatbot asks patients about their funeral plans, their wills, and their religious beliefs.
According to the report, the chatbot doesn’t formalize these plans itself, but it notifies a family member or a designated caregiver to follow up with the patient about their plans in person.
Bickmore will soon test a “slightly more souped-up version” that can “take users through guided meditation sessions,” talk with patients about their health and medications, and discuss “a wide range of religious topics” with terminally ill patients.
Is it artificial intelligence?
According to the report, the chatbot “isn’t fully autonomous” and it “sticks to a fairly rigid script.” It asks users to choose from pre-written options. Bickmore acknowledged that an unscripted system might lead to “situations where the agent recommends things that are dangerous.”
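To make concrete what a “fairly rigid script” with pre-written options means in practice, here is a minimal, purely illustrative sketch of a scripted dialogue tree. The script nodes, prompts, and caregiver-notification flag are all hypothetical assumptions for illustration; this is not Bickmore’s actual system.

```python
# Illustrative sketch only: a minimal "rigid script" chatbot of the kind the
# article describes. The dialogue tree and notify step are hypothetical.

SCRIPT = {
    "start": {
        "prompt": "Have you thought about your end-of-life plans?",
        "options": {"Yes": "details", "Not yet": "encourage"},
    },
    "details": {
        "prompt": "Which topic would you like to discuss?",
        "options": {
            "Funeral plans": "notify",
            "My will": "notify",
            "Religious beliefs": "notify",
        },
    },
    "encourage": {
        "prompt": "Would you like a family member to follow up with you?",
        "options": {"Yes": "notify", "No": "end"},
    },
    "notify": {"prompt": "I'll let your designated caregiver know.", "options": {}},
    "end": {"prompt": "Thank you for talking with me.", "options": {}},
}

def run_script(choices):
    """Walk the scripted dialogue; users can only pick pre-written options."""
    node, transcript = "start", []
    for choice in choices:
        step = SCRIPT[node]
        transcript.append(step["prompt"])
        node = step["options"][choice]  # unscripted input simply isn't possible
    transcript.append(SCRIPT[node]["prompt"])
    # The bot never formalizes plans itself; it only flags a human follow-up.
    notify_caregiver = node == "notify"
    return transcript, notify_caregiver
```

Because every turn constrains the user to a fixed menu, the agent can never improvise a recommendation, which is how a scripted design avoids the “dangerous” suggestions Bickmore warns an unscripted system might produce.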
How have people reacted?
In the report, spokesmen for The Conversation Project, an organization that encourages people to discuss their end-of-life care wishes, praised the chatbot as a “non-judgmental” way to approach difficult discussions.
Twitter users expressed concern about the potential ramifications of such technology:
this is the bleakest sentence I've ever read https://t.co/7AHxTFgSPr
— Brandy Jensen (@BrandyLJensen) September 20, 2017
Prime the shuttle, I'm leaving https://t.co/tF7U4CsXCv
— McRib is back (@iainhaywood) September 20, 2017
this is not "having a conversation" at all https://t.co/zfzeXxSoDK
— Jonathan R Goodman (@JonathnRGoodman) September 20, 2017
Well this is depressing on several levels. https://t.co/AcVulW6eNS
— Erik Holm (@erikholmwsj) September 20, 2017
We need to train and support clinicians BEFORE we start creating chatbots to try to replace complex communication https://t.co/oMdFL4OV2R
— Lucy Selman (@Lucy_Selman) September 19, 2017