At the recent NVIDIA GTC conference, healthcare entrepreneur Munjal Shah unveiled his company’s groundbreaking approach to deploying large language models (LLMs) for clinical use cases. Hippocratic AI has spent over a year developing “empathetic” artificial intelligence agents tailored to have nuanced voice conversations with patients.
However, unlike general-purpose models such as ChatGPT, Hippocratic AI’s models are narrowly specialized – trained exclusively on authoritative, evidence-based medical data sources. This targeted training approach is a crucial tenet of the company’s mission to ensure its AI assistants “do no harm.”
“The problem with older interactive voice response systems is that the comprehension could be much higher. If you don’t say it correctly, it doesn’t work,” noted one speaker affiliated with UCSF’s Rosenman Institute. Generative LLMs have ushered in a new paradigm by combining advanced natural language abilities with speech synthesis that has “gotten so good.”
As Shah describes it, “You have to think about the old chatbot, which was IQ 60, [and] this one, which is IQ 130. It’s a very different level of comprehension.”
But building a safe, high-performing medical LLM requires far more than just leveraging powerful language models. Hippocratic AI has implemented rigorous reinforcement learning protocols in which human clinicians and medical experts continually refine and validate its LLMs.
Only once the professionals confirm an LLM can converse accurately and safely about a particular clinical workflow is it approved for commercial use. This stringent vetting process is underway across more than 40 hospital systems and health payors piloting Hippocratic’s technology.
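Hippocratic has not published the mechanics of that vetting loop, but the description maps onto a familiar human-in-the-loop pattern: clinicians rate the model’s conversations for a given workflow, and the model is cleared for that workflow only once enough reviews meet a safety bar. The sketch below is purely illustrative; the class, the rating scale, and the threshold values are assumptions, not details of Hippocratic’s actual process.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical illustration of a clinician-in-the-loop approval gate.
# The names, rating scale, and thresholds below are assumptions used to
# sketch the idea of approving a model per clinical workflow only after
# human reviewers confirm it converses accurately and safely.

APPROVAL_THRESHOLD = 0.95   # assumed minimum share of "safe and accurate" ratings
MIN_REVIEWS = 200           # assumed minimum number of clinician-reviewed conversations


@dataclass
class WorkflowReview:
    workflow: str                                        # e.g. "medication onboarding"
    ratings: list[float] = field(default_factory=list)   # 1.0 = safe/accurate, 0.0 = not

    def record(self, clinician_rating: float) -> None:
        """Store one clinician's judgment of a model conversation."""
        self.ratings.append(clinician_rating)

    def approved_for_release(self) -> bool:
        """Clear the model for this workflow only after enough reviews
        meet the safety/accuracy threshold."""
        if len(self.ratings) < MIN_REVIEWS:
            return False
        return mean(self.ratings) >= APPROVAL_THRESHOLD


# Simulated ratings for demonstration only.
review = WorkflowReview("medication onboarding")
for rating in [1.0] * 195 + [0.0] * 5:
    review.record(rating)
print("Cleared for commercial use:", review.approved_for_release())
```

In practice the reviews would cover full transcripts and far richer criteria than a single score, but the gating idea, per-workflow approval rather than blanket release, is the point Shah emphasizes.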
The potential applications envisioned span the entire patient journey. AI assistants could provide preoperative guidance, postoperative monitoring, medication onboarding, and adherence coaching. By automating these routine interactions at scale, Hippocratic aims to improve access while allowing human clinicians to focus on the highest-risk aspects of treatment.
“With generative AI, patient interactions can be seamless, personalized, and conversational,” said Shah. “To have the desired impact, the inference speed has to be incredibly fast.”
This is where Hippocratic’s partnership with NVIDIA becomes invaluable. NVIDIA’s powerful AI chips enable the ultra-low-latency inference speeds critical for maintaining an emotionally resonant, human-like conversational flow.
Shah revealed that every half-second of reduced latency measurably increased patients’ sense of emotional connection. Hippocratic has termed this optimized solution its “empathy inference engine.”
The U.S. Bureau of Labor Statistics projects the need for 275,000 additional nurses by 2030 as care demands intensify with an aging population. Automating routine patient interactions could help mitigate impending staffing shortages while reducing burnout for overworked clinicians.
Hippocratic AI’s approach of creating specialized, clinically validated LLMs sets it apart from the general-purpose AI assistants being proposed for healthcare use cases. The company has also implemented additional safeguards by developing supplementary support models focused on critical areas like medication guidance.
These specialized assistant models act as an extra layer of oversight, double-checking the outputs from Hippocratic’s primary LLMs to ensure safety and accuracy. No response related to medications, procedures, or diagnoses gets delivered to patients without being cross-validated.
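The company has not detailed how that cross-check is wired together, but the pattern it describes, a primary conversational model whose drafts are reviewed by narrower specialist models before anything reaches the patient, can be sketched roughly as follows. The function names, the example checks, and the escalation fallback are illustrative assumptions rather than Hippocratic’s implementation.

```python
from typing import Callable

# Rough sketch of a cross-validation pipeline: a primary conversational
# model drafts a reply, and specialist "support" models review the draft
# before it is delivered. Every name and check here is an illustrative
# assumption, not Hippocratic AI's actual architecture.


def primary_model(patient_message: str) -> str:
    """Stand-in for the primary conversational LLM."""
    return f"Draft reply to: {patient_message}"


def medication_checker(draft: str) -> bool:
    """Stand-in specialist model: approves the draft only if its
    medication-related content looks safe."""
    return "double the dose" not in draft.lower()


def procedure_checker(draft: str) -> bool:
    """Stand-in specialist model for procedure and diagnosis content."""
    return True


SUPPORT_MODELS: list[Callable[[str], bool]] = [medication_checker, procedure_checker]


def respond(patient_message: str) -> str:
    """Deliver a reply only if every support model signs off;
    otherwise escalate to a human clinician."""
    draft = primary_model(patient_message)
    if all(check(draft) for check in SUPPORT_MODELS):
        return draft
    return "Let me connect you with a member of your care team."


print(respond("When should I take my second blood pressure pill?"))
```

The design choice worth noting is that the specialist checkers sit between the primary model and the patient, so a failed check ends in escalation to a human rather than an unvetted answer.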
With over $120 million in funding from top venture funds like General Catalyst, Andreessen Horowitz’s biotech investment arm, and Premji Invest, Hippocratic AI is well-capitalized to bring this multi-pronged LLM strategy to market.
Shah and his team are pioneering an unprecedented approach to engineering safe, specialized AI assistants for the highly regulated healthcare realm. If successful, their efforts could catalyze a new wave of clinical AI tools that augment human expertise rather than replace it.