What I Built
moodLog is a React Native app that tracks your mood using data from a simulated smart ring. The ring streams real-time readings like temperature and humidity from a DHT22 sensor attached to an ESP32. The data is sent to a WebSocket server, where we use the OpenAI API to predict a mood, and the app displays it in near real time. It also gives you song suggestions based on your mood patterns, pulled from Spotify.
The Idea Behind It
This was for my Wearable Computing class at VIT. Honestly, I was just curious about how those fitness trackers work and thought it would be cool to build something similar, but for emotions. And since it was a wearables project, packing all of these ideas into a ring sounded pretty cool.
The idea came to me when I noticed my peers and a few other people constantly complaining about their mood and how they felt at random points in the day. I thought: there might be a way to track this, right?
Disclaimer: you might assume that this project stores the data and gives you insights, but this time I'm not storing anything in a database. It's all on the fly: the data is generated, analyzed via the OpenAI API, and instantly sent to all clients on the WebSocket. It was a 3-credit course lol.
Tech Stack & Architecture
Technologies Used
- Frontend: React Native with Expo - I chose this because it's easy to test and I could share the app using just a QR code.
- Backend: Node.js with WebSocket support - needed this for real-time data streaming, pretty straightforward.
- IoT Simulation: Wokwi Simulator - we didn't have actual hardware, so I had to simulate the smart ring.
- OpenAI: Used the OpenAI API to analyze the data and give a mood prediction.
- Spotify API: Used the Spotify API to get song suggestions based on the mood.
System Architecture
The flow is pretty simple: the fake ring sends sensor data to a server, the server analyzes the data using the OpenAI API (this adds a bit of delay before outputs show up in the app), and the server then sends the prediction to all the clients connected to the WebSocket (which is our React Native app). The app displays the prediction in near real time.
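To make that concrete, here's roughly what a single update looks like by the time it reaches the app. The field names here are illustrative assumptions, not the exact schema I used:

```javascript
// Illustrative shape of one broadcast message; field names are assumptions.
const moodUpdate = {
  mood: "calm",                 // predicted by the OpenAI API from the reading
  genres: ["chill", "ambient"], // seed genres for Spotify, also from the model
  reading: {                    // the raw sensor data, echoed for display
    temperature: 36.8,          // °C from the simulated DHT22
    humidity: 52.1,             // % relative humidity
  },
};

console.log(JSON.stringify(moodUpdate));
```

One JSON blob per update keeps the app dumb: it just renders whatever arrives on the socket.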
Technical Decisions & Implementation
Getting real-time data working
Setting up the WebSocket connection was something I was really excited about. I wanted the app to feel alive, like it's actually connected to a real device sending data every second. There's something satisfying about seeing numbers change in real-time on your phone screen.
I built a simple WebSocket server that takes data from Wokwi and pushes it to the React Native app. But man, dealing with connection drops and data loss was way more annoying than I expected. I spent a lot of time debugging why the connection would just randomly fail.
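The heart of the server is just fan-out: every analyzed reading gets pushed to every connected client. Here's a stripped-down sketch of that broadcast step; the surrounding WebSocket-server wiring is omitted, and the function name is mine, not from the project:

```javascript
// Hedged sketch of the server's fan-out step. `clients` is whatever
// collection of sockets the WebSocket library hands you; readyState 1
// is the standard OPEN state.
function broadcast(clients, payload) {
  const msg = JSON.stringify(payload);
  let delivered = 0;
  for (const client of clients) {
    // Skip sockets that are still connecting or already closing;
    // otherwise one half-dead client can throw and kill the loop.
    if (client.readyState === 1) {
      client.send(msg);
      delivered += 1;
    }
  }
  return delivered; // handy for logging how many apps actually got the update
}
```

With the `ws` package, something like this would be called with `wss.clients` inside the handler for the ring's incoming messages; I'm only showing the pure part here.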
Then another headache was where to deploy the WebSocket server, since Vercel wasn't working with WebSockets, at least for me. I ended up using Koyeb. It was an amazing experience: both my hardware and the app could talk to an actual deployed backend, and I didn't have to run anything locally for my demo. Felt proud.
You can check out Koyeb, by the way; it's pretty neat. They only give you one instance for free, but it was enough for my use case.
Why I went with React Native and Expo
I picked React Native because I already had some experience with React from other projects. Plus, Expo made my life so much easier - I could test the app instantly.
The app connected to my deployed WebSocket backend as soon as it was opened, and I kept the connection alive after that. Whatever data was pushed to the socket kept updating in the app, so it felt like a real-time connection.
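Keeping the connection alive mostly meant reconnecting whenever it dropped. Here's a rough sketch of that logic with the WebSocket constructor injected, so it isn't tied to React Native; the function name and backoff numbers are my own assumptions, not what the project actually used:

```javascript
// Sketch: reconnect-with-exponential-backoff for the app's socket.
// makeSocket is injected (in the app it would be url => new WebSocket(url)).
function keepAliveSocket(url, makeSocket, onData, { base = 500, max = 10000 } = {}) {
  let attempt = 0;
  let stopped = false;
  let ws;

  function connect() {
    ws = makeSocket(url);
    ws.onopen = () => { attempt = 0; };                 // reset backoff on success
    ws.onmessage = (event) => onData(JSON.parse(event.data));
    ws.onclose = () => {
      if (stopped) return;                              // user closed on purpose
      const delay = Math.min(max, base * 2 ** attempt); // 500ms, 1s, 2s, ... capped
      attempt += 1;
      setTimeout(connect, delay);
    };
  }

  connect();
  return () => { stopped = true; if (ws) ws.close(); }; // cleanup function
}
```

In a React Native component, I'd call this from a `useEffect` on mount and return the cleanup function so the socket closes when the screen unmounts.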
Here I also used the Spotify API to fetch songs based on the predicted mood. Spotify has a nice API where you can specify seed genres and it returns matching tracks. I got the genres from the OpenAI API, along with the mood prediction, in the same WebSocket message. It was nice.
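The Spotify call boils down to one GET against the recommendations endpoint with those genres as seeds. A sketch of building that request; the endpoint is Spotify's real `/v1/recommendations` (which accepts up to five `seed_genres`), but the helper name and defaults are mine:

```javascript
// Build a Spotify recommendations URL from the genres OpenAI suggested.
// Spotify accepts at most five seed genres, so extras are dropped.
function recommendationsUrl(genres, limit = 5) {
  const seeds = genres.slice(0, 5).map(encodeURIComponent).join(",");
  return `https://api.spotify.com/v1/recommendations?seed_genres=${seeds}&limit=${limit}`;
}

// The actual request just adds an OAuth token from Spotify's auth flow, e.g.:
// fetch(recommendationsUrl(["chill", "ambient"]), {
//   headers: { Authorization: `Bearer ${token}` },
// });
```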
I somehow managed to make the app UI; if you know me, I'm not very good with frontend lol. But it was fun. I've attached some slides at the end where you can check out the app UI.
Simulating the Smart Ring
Since we didn't have the budget for actual hardware (and building a real smart ring would probably have taken the entire semester), I had to get creative with Wokwi.
If you don't know, Wokwi is like TinkerCAD, but it gave me access to an ESP32, which TinkerCAD doesn't have (at least it didn't at the time). The reason I used the ESP32 is that it has a built-in Wi-Fi module, and well, we wanted to communicate over the internet, so :)
I set up the temperature and humidity sensor in the simulator and programmed it to send data over Wi-Fi to the deployed backend. For the demo, I manually changed the values in the simulator and they showed up in the app perfectly. It felt like working with real hardware, which made the whole project more engaging.
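For what it's worth, without Wokwi the same readings could also have been faked on the server side. A tiny sketch of that alternative; the value ranges are my guesses at plausible skin temperature and humidity, not measured numbers:

```javascript
// Generate one fake ring reading. Ranges are assumptions for illustration:
// roughly skin temperature in °C and relative humidity in %.
function fakeReading() {
  return {
    temperature: +(33 + Math.random() * 4).toFixed(1), // 33.0-37.0 °C
    humidity: +(40 + Math.random() * 20).toFixed(1),   // 40.0-60.0 %
    ts: Date.now(),                                    // lets the server drop stale data
  };
}
```

The simulator route was more fun for a wearables class, though, since it exercised the actual ESP32 Wi-Fi path.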
Reflections
This project taught me a lot about working with real-time data. The WebSocket part was harder than I expected - handling disconnections and making sure data didn't get lost required a lot of debugging.
The AI part was the most challenging. Trying to connect body temperature to emotions? That's not easy. I realized I needed way more data and probably some machine learning knowledge to make it work properly.
If I were to do this again, I'd focus more on the data collection part. Maybe add heart rate or sleep data to make the predictions more meaningful. Also, the UI could be prettier - I was more focused on getting the tech to work.
Working with my teammates was fun. We split the work pretty well - I handled the app and server, while they worked on the presentation and business model stuff.
Check It Out
- Live Demo: You can try it with the Expo Go app (there's a QR code in our presentation, though I'm not sure it still works)
- Wokwi: Wokwi Project
- Slides: Slides
I also created a small landing page for it here.