What I Built
Sanjeevani is a multilingual, AI-powered virtual doctor that accepts voice, text, or image input and responds with a realistic, human-like diagnosis and remedy suggestions using Groq's LLMs and Murf AI.
It tackles language barriers and accessibility gaps in digital healthcare. Whether a patient speaks Hindi, French, Spanish, or Chinese, Sanjeevani listens, understands, and speaks back like a real doctor.
Demo
🎥 Watch Sanjeevani in action:
🔗 Code Repository:
https://github.com/this-is-rachit/Sanjeevani
🌐 Try it live:
https://sanjeevani-6vck.onrender.com/
🔊 How I Used Murf API
Murf AI powers the voice and translation layer in Sanjeevani:
- ✅ Text-to-Speech: Converts Groq-generated medical advice into lifelike speech using Murf’s voice models.
- 🌐 Multilingual Translation: Automatically translates diagnosis into the selected language before speech synthesis.
- 🎙️ Voice Mapping: Used Murf's voice IDs to customize the sound per language (e.g., Hindi, Japanese, German).
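The voice layer above can be sketched roughly as follows. This is a minimal sketch, not the app's actual code: the endpoint path, JSON field names, and voice IDs are assumptions standing in for Murf's real REST API, so check the official Murf docs before reusing it.

```python
import json
import os
import urllib.request

# Placeholder voice IDs per language code; the real IDs come from Murf's
# published voice list (these are hypothetical values for illustration).
VOICE_MAP = {
    "hi": "hi-IN-placeholder",
    "ja": "ja-JP-placeholder",
    "de": "de-DE-placeholder",
    "en": "en-US-placeholder",
}

def pick_voice(lang_code: str) -> str:
    """Return the Murf voice ID for a language, falling back to English."""
    return VOICE_MAP.get(lang_code, VOICE_MAP["en"])

def synthesize(text: str, lang_code: str) -> bytes:
    """POST the diagnosis text to Murf and return the audio bytes (sketch).

    The URL and payload shape are assumptions, not the verified Murf API.
    """
    payload = json.dumps({"text": text, "voiceId": pick_voice(lang_code)}).encode()
    req = urllib.request.Request(
        "https://api.murf.ai/v1/speech/generate",  # assumed endpoint
        data=payload,
        headers={
            "api-key": os.environ["MURF_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Keeping the language-to-voice mapping in one dictionary makes adding a new language a one-line change.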
This brings human warmth to AI conversations, which is vital for a healthcare app.
💡 Use Case & Impact
Real-World Applications
- 🏥 Rural/Remote Healthcare: For patients who can’t read or write, Sanjeevani offers voice-based, language-native assistance.
- 🌍 Global Accessibility: With 16+ language support, it’s usable from India to Italy.
- 🖼️ Image Support: Users can upload a photo of a rash or wound for visual assessment by the vision-capable LLM.
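The image-input path can be sketched like this: encode the uploaded photo as a base64 data URL and send it to Groq's OpenAI-compatible chat endpoint. This is an illustrative sketch, not the project's code, and the model ID is an assumption; substitute whichever vision-capable model the app actually uses.

```python
import base64
import json
import os
import urllib.request

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Inline the image as a base64 data URL for the chat payload."""
    return f"data:{mime};base64," + base64.b64encode(image_bytes).decode()

def diagnose_image(image_bytes: bytes, question: str) -> str:
    """Ask the vision model about an uploaded image (assumed model ID)."""
    payload = json.dumps({
        "model": "meta-llama/llama-4-scout-17b-16e-instruct",  # assumption
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": to_data_url(image_bytes)}},
            ],
        }],
    }).encode()
    req = urllib.request.Request(
        "https://api.groq.com/openai/v1/chat/completions",
        data=payload,
        headers={
            "Authorization": "Bearer " + os.environ["GROQ_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```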
Impact
Sanjeevani enhances digital healthcare accessibility, especially for non-English speaking and underserved populations. It’s a step toward inclusive AI in medicine.
🧪 Tech Stack
- Murf AI – Text-to-speech and multilingual translation
- Groq LLaMA 4 – Medical advice generation
- Groq Whisper – Voice-to-text transcription
- Gradio – Web-based interface
- Python, langdetect, PyDub, SpeechRecognition
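The pieces above compose into one pipeline: detect the language, transcribe the voice note with Whisper, generate advice with the LLM, and speak it back through Murf. The sketch below shows that wiring under stated assumptions; `transcribe`, `diagnose`, and `speak` are hypothetical callables standing in for the real API wrappers, and the language table is trimmed for illustration.

```python
# Subset of the supported languages, keyed by langdetect-style codes.
LANGUAGE_NAMES = {
    "hi": "Hindi",
    "fr": "French",
    "es": "Spanish",
    "zh-cn": "Chinese",
    "en": "English",
}

def build_system_prompt(lang_code: str) -> str:
    """Instruct the LLM to answer like a doctor in the patient's language."""
    language = LANGUAGE_NAMES.get(lang_code, "English")
    return (f"You are a caring virtual doctor. Reply in {language} with a "
            "plain-language diagnosis and simple home remedies.")

def respond(transcribe, diagnose, speak, audio: bytes, lang_code: str) -> bytes:
    """Wire the three stages together; each callable wraps one external API."""
    question = transcribe(audio)                                  # Groq Whisper
    advice = diagnose(build_system_prompt(lang_code), question)   # Groq LLaMA
    return speak(advice, lang_code)                               # Murf TTS
```

Passing the stage functions in as arguments keeps the orchestration testable without network access: each API wrapper can be swapped for a stub.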
🧠 Built solo by @rachit_bansal