Bob Dylan’s song “Like a Rolling Stone” contains the famous refrain: “How does it feel?”
This line heralded a generation of music, art, and science that tackled the messiness and complexity of human feeling with intense focus.
The emphasis on emotion eventually reached the tech world, too, giving rise to a focus on “user experience” and “emotional design.”
But increasingly, user experience is shaped not just by empathetic human designers, but by artificial intelligence with no inherent concern for our emotions. As AI begins to shape every aspect of our lives, can we ensure that it treats our emotional well-being as a fundamental, overriding objective? Can we optimize algorithms for our happiness and satisfaction rather than for the raw engagement metrics that drive many applications today?
Hume AI believes we can. The company aims to do for AI technology what Bob Dylan did for music: endow it with EQ and a concern for human well-being. Dr. Alan Cowen, who leads Hume AI’s fantastic team of engineers, AI scientists, and psychologists, developed a novel approach to emotion science called semantic space theory. This theory underpins the data-driven methods Hume AI uses to capture and understand the complex, subtle nuances of human expression and communication: tones of language, facial and bodily expressions, the tune, rhythm, and timbre of speech, and the “umms” and “ahhs” in between.
Hume AI has productized this research as an expressive communication toolkit that software developers can use to build applications guided by human emotional expression. The toolkit contains a comprehensive set of AI tools for understanding vocal and nonverbal communication: models that capture and integrate hundreds of expressive signals across the face, body, language, and voice. The company also provides transfer learning tools for adapting these models of expression to drive specific metrics in any application. Its technology is being explored for applications in healthcare, education, and robotics. Reflecting an early focus on healthcare, the company has partnerships with Mt. Sinai, Boston University Medical Center, and Harvard Medical School.
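To make the idea of adapting models of expression to “drive specific metrics” concrete, here is a minimal sketch of how an application might consume per-frame expression scores from such a model and aggregate them into a single well-being metric. This is not Hume AI’s actual SDK or output schema; the score names and 0-to-1 format below are illustrative assumptions.

```python
# Minimal sketch: aggregating per-frame expression scores into one
# application-level metric. The score names and 0-to-1 format here are
# illustrative assumptions, not Hume AI's actual SDK or output schema.
from statistics import mean

# Imagined model output: one dict of expression scores per audio frame.
frames = [
    {"calmness": 0.72, "joy": 0.55, "distress": 0.08},
    {"calmness": 0.64, "joy": 0.61, "distress": 0.12},
    {"calmness": 0.70, "joy": 0.58, "distress": 0.05},
]

def well_being_score(frames, positive=("calmness", "joy"), negative=("distress",)):
    """Mean positive expression minus mean negative expression: one
    number an application could optimize for instead of engagement."""
    pos = mean(f[k] for f in frames for k in positive)
    neg = mean(f[k] for f in frames for k in negative)
    return pos - neg

print(f"session well-being score: {well_being_score(frames):+.2f}")
```

An application that optimizes a score like this, rather than clicks or session length, is the kind of shift the company is proposing.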
Empathic AI such as this could also pose risks: it could interpret emotional behaviors in ways that are not conducive to well-being, surface and reinforce unhealthy temptations when we are most vulnerable to them, help create more convincing deepfakes, or exacerbate harmful stereotypes. To guard against these risks, Hume AI has established the Hume Initiative, with six guiding principles: beneficence, emotional primacy, scientific legitimacy, inclusivity, transparency, and consent. Guided by those principles, and by guidelines developed by a panel of experts (including AI ethicists, cyberlaw experts, and social scientists), Hume AI has identified specific use cases that it will never support.
This fits squarely within USV’s thesis of broadening access to well-being. We are leading Hume AI’s Series A fundraise, joined by Northwell Holdings, Comcast Ventures, LG Technology Ventures, Wisdom Ventures, and Evan Sharp. We’re psyched to have the opportunity to be involved in the development of empathic, emotionally intelligent AI.