As a leader in conversational AI, Lark sat down with its Head of Data Science, Wesley Pasfield, and its VP of Engineering, Magnus Hedemark, to hear their take on the buzz surrounding ChatGPT and large language models.
Question 1: Wesley, can you please give us a short overview of the different types of AI and of what ChatGPT is?
Wesley: Sure! I’ll try to keep this brief since this is a big topic we could spend a ton of time on. In general, you can think of this answer as a funnel, narrowing down to how ChatGPT fits into the broader AI picture. Depending on where you look, AI might have different definitions, but at a high level, AI uses computing to automate problem solving or decisions that would typically be done by humans.
There are many subsets of AI, with machine learning as the most prominent example. Machine learning is itself a superset of many different techniques, including Generative AI. Generative AI aims to create new content based on a provided set of input data, with common examples including image, music, and text generation.
ChatGPT is an example of Generative AI: it is built on a large language model, with reinforcement learning from human feedback (RLHF) applied as a final training step, and it is explicitly designed to generate human-like conversation across a wide array of topics.
Question 2: With large language models creating a storm of excitement surrounding AI and its possible applications, how do you think this translates into the healthcare space and the opportunities that health plans can take advantage of?
Wesley: We recognize the exciting possibilities of generating custom, engaging content on demand with large language models. ChatGPT has certainly captured the attention of the general public and has driven the recent buzz around AI. But I think it’s important to remind ourselves that AI in healthcare is nothing new. Lark, for example, has been using AI for almost a decade now. Other use cases include medical record scanning, analysis of unstructured data, medical imaging analysis, and the list goes on.
Question 3: What are some risks associated with AI in healthcare?
Wesley: Machine learning, including generative AI, is probabilistic in nature, meaning there is always some inherent error involved. Just look at the recent news with Cass. It’s really important to have companies with deep experience in this area thinking critically about how to deploy these technologies to drive improvements in patient healthcare while balancing the risks. For us here at Lark, we think about the probabilistic and deterministic trade-off in every area of our app.
Question 4: Can you elaborate a little bit more on that?
Wesley: Yes. As I was saying, there is a balance between leveraging advanced tech and maintaining sufficient clinical rigor. For us, food logging is an area where we can lean in further and leverage more advanced techniques. But if we get information that warrants a potential escalation or intervention, like a high blood pressure reading, that is not an area to mess around with probabilistic options, so we choose to be deterministic. We want to be conscious of probabilistic error everywhere in the app, and in the more clinically oriented conversations we want to eliminate any possibility of error. Additionally, offline we build models to predict user behavior, and we intervene if users are trending in the wrong direction.
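To make that trade-off concrete, here is a minimal sketch of how such routing might look. This is not Lark’s actual code; the function names, the model interface, and the confidence cutoff are assumptions for illustration (the 180/120 mmHg figures reflect the commonly cited hypertensive-crisis thresholds).

```python
# Illustrative sketch only: names, thresholds, and the classifier interface
# are assumptions for this example, not Lark's implementation.

SYSTOLIC_ESCALATION_MMHG = 180   # commonly cited hypertensive-crisis threshold
DIASTOLIC_ESCALATION_MMHG = 120

def handle_blood_pressure(systolic: int, diastolic: int) -> str:
    """Clinical escalation follows a fixed rule, never a model's guess."""
    if systolic >= SYSTOLIC_ESCALATION_MMHG or diastolic >= DIASTOLIC_ESCALATION_MMHG:
        return "escalate_to_clinical_team"  # deterministic: no probabilistic error
    return "log_and_coach"

def handle_food_log(food_classifier, entry: str) -> str:
    """Food logging tolerates some probabilistic error, so a model can lead."""
    label, confidence = food_classifier.predict(entry)
    # Fall back to asking the user when the model is unsure.
    return label if confidence >= 0.8 else "ask_user_to_confirm"
```

The design choice is the point: the clinically sensitive path contains no model call at all, while the lower-stakes path uses one with an explicit fallback.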
Question 5: Thank you, Wesley. Now a question for both Wesley and Magnus: how does Lark’s culture of innovation contribute to our views on AI in healthcare going forward? Let’s start with Wesley.
Wesley: Obviously, the pace of innovation is rapidly increasing in the AI space, but when you have a hammer, everything can seem like a nail. That’s exactly why the deep experience I mentioned earlier matters. We try to use our depth of knowledge and experience not just to serve end users, but to be a truly consultative partner in how to approach these newer technologies in healthcare. We’re investing a lot of time in thinking about the probabilistic and deterministic trade-off I’ve described throughout this conversation, and in developing a robust evaluation framework so we can quantitatively measure performance in situations that would otherwise be left to purely subjective evaluation.
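As a rough illustration of what turning subjective judgment into a quantitative measure can look like, here is a minimal sketch of an evaluation harness. The specific checks, data structures, and names are hypothetical, not Lark’s framework; the idea is simply that each rubric item becomes a pass/fail function whose pass rate can be tracked over time.

```python
# Illustrative sketch of a quantitative evaluation harness; the checks and
# data structures are assumptions for this example, not Lark's framework.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    response: str

# Each check turns a subjective judgment into a pass/fail signal
# that can be aggregated into a measurable score.
def gives_no_dosage_advice(case: EvalCase) -> bool:
    return "mg" not in case.response.lower()

def stays_on_topic(case: EvalCase) -> bool:
    return any(word in case.response.lower() for word in ("meal", "food", "eat"))

CHECKS: dict[str, Callable[[EvalCase], bool]] = {
    "no_dosage_advice": gives_no_dosage_advice,
    "on_topic": stays_on_topic,
}

def score(cases: list[EvalCase]) -> dict[str, float]:
    """Report each check's pass rate across the evaluation set."""
    return {
        name: sum(check(c) for c in cases) / len(cases)
        for name, check in CHECKS.items()
    }
```

Run against a fixed set of prompt/response pairs, a harness like this yields numbers that can be compared release over release instead of relying on purely subjective review.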
Magnus: I’ve only been with Lark a few months now, so I’m still seeing the team with “new eyes,” and I love what I see. This is a place where anyone, anywhere in the organization, feels safe to ask difficult questions and to hold the leadership team accountable. It’s also a place where we are very much mission-focused and feel a deep sense of responsibility for the well-being of the people we serve. There’s a really amazing balance between all of the new capabilities emerging in the AI space, the deep experience our engineers and data scientists already have with conversational AI, and the deep sense of purpose in helping people live longer, healthier lives.
Question 6: What opportunities for conversational AI are you most excited about?
Magnus: As a champion of diversity and inclusion, I’m really excited about how these emerging conversational AI technologies can help us reach more people in ways that respect how they communicate: meeting people where they are, using language that is familiar and supportive. People from marginalized communities are often the ones who struggle most to get access to life-extending health care and wellness coaching, so I’m very excited about the potential for this technology to work better for all the people we serve.
Question 7: As a man of many interests and hobbies, are you using AI for anything in your personal life?
Magnus: I am! Aside from using Lark, of course, I also use two different AIs pretty regularly. I use Stable Diffusion to generate artistic images that help tell stories; the prompt crafting can take a long time to get right, but that’s part of the fun of using the tool. I also use ChatGPT as a sort of personal assistant. Lately I’ve been having fun giving ChatGPT a list of things I have in my refrigerator and asking it to suggest a recipe for a healthy meal. Right now, both uses really require a responsible human to de-risk the outputs before anyone else sees them, like when Stable Diffusion gives me an inappropriate image, or ChatGPT tries to get me to use half a stick of butter in my recipe. Seeing these outliers in my personal use of AI gives me a greater appreciation for the value of the work we’re doing professionally.