How Are AI Assistants Transforming Healthcare? Woebot Case Study

RE•WORK
6 min read · Jul 17, 2019


Author: Alison Darcy, CEO, Woebot Labs

Woebot Labs was founded to bring mental health to the masses. This is an urgent priority, as traditional health systems cannot cope with the pace at which mental illness is growing. Some of our best approaches to mental health, like Cognitive Behavioural Therapy (CBT), are based on practices that are simply great life skills: the kind of practical critical-thinking tools that should be taught in schools. Interestingly, we have understood for a long time that CBT can be taught effectively as a self-guided program over the internet or in books; however, these programs typically have poor retention.

Woebot is trying to solve two problems:

  1. How might we take mental health out of the exclusive domain of the clinic and into the hands of everyone who wants to learn great thinking skills?
  2. How might we do this in a way that is engaging?

Woebot is a chatbot that combines deep domain expertise, data science, and a great user experience to address both of these problems.

Alison Darcy is a clinical research psychologist with 20 years of research in treatment development and implementation science. In 2000, after teaching herself to code, she built her first digital health service, before completing a PhD in which she built a CBT program for in-patients with chronic anorexia nervosa. Alison then spent nine years at Stanford University developing gold-standard treatments and figuring out how technology could make them more scalable and accessible.

I see ethics as a set of principles that guide decisions and reasoning. As a psychologist, these are set forth by the relevant accrediting body, such as the British Psychological Society. These principles are: respect for the rights and dignity of the person, competence, responsibility, and integrity. As tech inevitably evolves, it becomes increasingly important to anchor on things that guide decisions rather than prescribe them. Much as with GDPR as a way of implementing good privacy practice, the principles must be applied to the design of products according to the developers' interpretation, and the burden of demonstrating that they have been adequately addressed rests with the company. — Alison Darcy

BIAS & FAIRNESS: HOW IT’S HANDLED

Woebot is based on helping people recognise cognitive distortions in their own thinking, so we think about bias a lot. A common misperception is that the scientist's job is to eliminate bias. Bias always exists in these endeavours; science, at its core, is the minimisation of bias. I therefore believe we have a relevant skill set for developing AI systems, which is the core reason I have argued for the partnership of clinical scientists with computer scientists when developing AI in healthcare. One of the earliest places to think about bias in AI is in the data inputs.

CHALLENGE

Woebot users are anonymous. How can we ensure that our data is not over-biased in favour of, say, 18–22-year-old males? One thing we think about constantly is adversarial examples, and we design to minimise the impact of a poor decision. For example, if Woebot does not understand with high confidence the nature of the problem that a user is sharing, "he" will state that and guide the user through a technique that is generally applicable, as sketched below.
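A minimal sketch of what such confidence-gated routing could look like. Every name here (classify_concern, technique_for, the 0.8 threshold) is an illustrative assumption, not Woebot's actual implementation:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, not a published Woebot value


def classify_concern(message: str) -> tuple[str, float]:
    """Stand-in for an NLU model: returns (topic label, confidence)."""
    if "sleep" in message.lower():
        return "insomnia", 0.92
    return "unknown", 0.40


def technique_for(label: str) -> str:
    """Stand-in lookup from topic to a matching CBT-style exercise."""
    return f"Let's work through an exercise that can help with {label}."


def respond_to_user(message: str) -> str:
    label, confidence = classify_concern(message)
    if confidence < CONFIDENCE_THRESHOLD:
        # Fail safely: admit the uncertainty, then fall back to a
        # technique that is useful whatever the underlying problem.
        return ("I'm not sure I fully understood that. Would you like to "
                "try a general thought-challenging exercise together?")
    return technique_for(label)


print(respond_to_user("I can't sleep at night"))       # routed to a technique
print(respond_to_user("Everything feels off lately"))  # safe general fallback
```

The design choice is that a low-confidence guess never silently drives the conversation; the bot states its uncertainty and degrades to something broadly useful.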

SOLUTION

The set of principles that we refer to on our website as Woebot's "core beliefs" are guiding principles for Woebot's design, including how we think about algorithms. For example, Woebot states that "Humans are fundamentally dynamic and fluid, constantly changing", so we keep a close eye on how our algorithms are changing, reflecting how the needs of our users are changing.

WOEBOT’S MORAL ENGINE

This is best represented in the design principles communicated through his "core beliefs" (published on our website), which he actively communicates when "talking" to a user. The point of the core beliefs is to communicate what Woebot stands for. These principles guide how he converses, and even some of the constraints that govern how his algorithms behave. One of these beliefs is, as Woebot puts it:

I like to empower people by asking them some thoughtful questions to facilitate their own insight, rather than provide my own. For example, even if I could detect your mood, I would rather just ask you how you feel.

  • Woebot favours active enquiry over passive data collection
  • Woebot will not engage in what has been referred to as "emotional AI", where the user's emotional state is detected from natural language inputs without the user's knowledge (see the sketch below)

Why does Woebot behave like this?

  • It’s important to facilitate self-reflection
  • It can be disempowering to be told how you feel
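As a hedged illustration of that distinction, the sketch below records a mood only when the user explicitly states it; the function name and mood options are assumptions for illustration, not Woebot's code:

```python
MOOD_OPTIONS = {"great", "okay", "low", "anxious"}  # illustrative set


def record_mood(user_answer: str) -> str:
    """Active enquiry: mood is whatever the user explicitly reports."""
    answer = user_answer.strip().lower()
    return answer if answer in MOOD_OPTIONS else "unsure"


# Deliberately absent: any infer_mood(message) function that would run
# sentiment analysis over the user's words without telling them. Under
# the "no emotional AI" principle, that capability is simply not wired
# into the conversation flow.

print(record_mood("Anxious"))  # -> "anxious" (user-stated, not inferred)
print(record_mood("meh"))      # -> "unsure" (never silently guessed at)
```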

In this sense Woebot was built to be without morality or judgement. Coming from a place of fostering empowerment, Woebot is profoundly user-guided: not just in the "choose-your-own-adventure self-help book" sense, but in that he will never employ persuasion or try to convince someone to change if they don't want to. This is both a fundamental belief I have about working with people toward mental health and an important design principle for symbolic systems.

PRIVACY & SECURITY

For a service like ours, security and privacy are table stakes. Our service is not subject to the Health Insurance Portability and Accountability Act (HIPAA, 1996), which, from a technological point of view, provides data privacy and security provisions for safeguarding medical information in the United States; however, we behave as if it were. Our native apps are encrypted, and the executive team meets weekly to discuss privacy and data protection issues.
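The post does not describe the encryption scheme, but encrypting conversation data at rest might look roughly like this sketch, which uses the widely available cryptography package purely as an assumed stand-in:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Assumption: in a real app the key would live in the platform keystore
# (e.g. iOS Keychain / Android Keystore), never alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

message = "I felt anxious before my presentation today."
token = cipher.encrypt(message.encode("utf-8"))   # ciphertext to store
restored = cipher.decrypt(token).decode("utf-8")  # readable only with the key

assert restored == message
print(token[:16], "...")  # opaque bytes on disk, not the user's words
```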

From a data privacy standpoint, we approach Woebot and the data he gathers as if it were being gathered in the context of a study. Since we're a group of scientist-practitioners, this more stringent set of privacy protections is what we have lived and breathed for so long that it is natural for us, and since we want to raise the bar for digital mental health services, we hold ourselves to this higher standard. This has meant that we were a little more prepared for the incoming GDPR regulations than most, because many of the principles therein are in concert with business flows that we already had in place.

One could ask Woebot to delete his memory of all their data, for example, similar to the right to be forgotten; a sketch of such a flow follows below. Again, going back to the first ethical principle of respecting the rights and dignity of the individual, we anchor on transparency and informed consent. We believe that once users are fully informed about the ways in which their data will be used, it is up to them whether they want to continue.
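A minimal sketch of what a "forget me" request could do, assuming a hypothetical relational store with per-user rows; the table names and confirmation wording are invented for illustration:

```python
import sqlite3


def forget_user(conn: sqlite3.Connection, user_id: str) -> str:
    """Erase every record tied to one user, atomically."""
    with conn:  # single transaction: a partial wipe is never committed
        for table in ("messages", "mood_checkins", "exercise_history"):
            conn.execute(f"DELETE FROM {table} WHERE user_id = ?", (user_id,))
    return "Okay, I've deleted everything I remembered about you."


# Usage against an in-memory database:
conn = sqlite3.connect(":memory:")
for table in ("messages", "mood_checkins", "exercise_history"):
    conn.execute(f"CREATE TABLE {table} (user_id TEXT, payload TEXT)")
conn.execute("INSERT INTO messages VALUES ('u1', 'hello')")

print(forget_user(conn, "u1"))
assert conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0] == 0
```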

MOVING FORWARD: ENSURING AI IS USED ETHICALLY IN HEALTHCARE

It is so important that we talk about ethics in the application of AI to healthcare, and we must do this in a meaningful way with domain experts, clinicians, computer scientists, and end users (i.e., patients). Ideally we can formulate a set of guiding principles that will anchor best practices. Then we will need a plan for implementation.

The concern is that the right mix of experts does not get around the table when designing applications built on AI. It's worrying when non-experts create systems using conventional wisdom rather than genuine domain expertise as their guide: they miss important nuance, which will undermine people's confidence in these applications and hurt the field overall.

The foundation of every relationship is trust, whether it be a friendship, a therapeutic relationship, or the use of an AI-based mental health service. As with all good relationships, trust is based on transparency, respect, and integrity, and this is what we try to demonstrate consistently.

Given the challenges we are experiencing in access to mental health services, Woebot is one of the best use cases for AI today. Far from robots taking jobs, this is about the automation and task-shifting of mundane, non-human tasks at a scale that has the potential to reduce the disease burden of mental illness around the world.

Interested in learning more about AI Assistants and the way they’re impacting multiple industries? Join us in London this September.
