Build a Chat App
Vimal



Publish Date: May 14

This is the second part in the series Fresher's guide to Generative-AI. We will learn how to build a chat app that can answer questions using locally available LLMs in VS Code. This is made possible by the AI Toolkit extension for VS Code, with no subscription required.

Step-by-Step Guide: Build a Chat App with AI Toolkit in VS Code

STEP 1: Install Prerequisites

  1. Install Visual Studio Code (VS Code) from: https://code.visualstudio.com
  2. Install the following extensions:
  • AI Toolkit: Search for "AI Toolkit" in the Extensions view and install it.
  • Python: For Python development and environment management.

Ensure Python 3.9 or newer is installed from: https://www.python.org/downloads

Verify installation:

python --version
pip --version
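If you prefer to check the version from Python itself, the `python --version` step can also be done programmatically; a small illustrative sketch (the function name is my own, not part of any library):

```python
import sys

# Programmatic version of the `python --version` check:
# this guide assumes Python 3.9 or newer.
def python_is_supported(min_version=(3, 9)):
    """Return True when the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version

print("Python OK" if python_is_supported() else "Please install Python 3.9 or newer")
```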

STEP 2: Set Up Your Project

Create a New Project Directory:

mkdir genai-chat-app
cd genai-chat-app

Initialize a Python Virtual Environment:

python -m venv venv

Activate the Virtual Environment:

Windows:

.\venv\Scripts\activate

macOS/Linux:

source venv/bin/activate

Install Required Python Libraries:

pip install streamlit

STEP 3: Install and Configure AI Toolkit

Launch VS Code.
Click on the AI Toolkit icon in the Activity Bar on the side.
Download a Local Model:
  • In the AI Toolkit view, go to MODELS > Catalog.
  • Use filters to find models that can run locally on your machine. For example, you can filter by "Model type" and select "Local run w/ GPU" or "Local run w/ CPU".
  • Choose a model suitable for your system's capabilities. For instance, models like Phi-3-mini-128k-cpu-int4-onnx are optimized for CPU usage.
  • Select the model and click on + Add button. The model will be downloaded and appear under My Models in the AI Toolkit.


Load the Model in the Playground:
  • Right-click on the downloaded model under MY MODELS.
  • Select Load in Playground to interact with the model and test its responses.
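Before wiring up the UI, it can help to confirm the loaded model answers over the local endpoint. Below is a minimal smoke-test sketch; the URL and model name match the ones used later in `chat_app.py`, but adjust both to your setup (the helper names are my own):

```python
import requests

# Local endpoint exposed by AI Toolkit (same URL used in chat_app.py below).
API_URL = "http://127.0.0.1:5272/v1/chat/completions"
MODEL_NAME = "Phi-3-mini-128k-cpu-int4-onnx"  # replace with your downloaded model

def build_payload(prompt, model=MODEL_NAME):
    """Build a minimal chat-completions request body for a single prompt."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt):
    """Send one prompt to the locally loaded model and return its reply text."""
    response = requests.post(API_URL, json=build_payload(prompt), timeout=120)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Usage (with the model loaded in the AI Toolkit Playground):
#   print(ask("Say hello in one sentence."))
```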

STEP 4: Build the Chat Application

Create a Python Script for the Chat App. In your project directory, create a new file named chat_app.py.

Write the chat application code:

import streamlit as st
import requests

# Define the local API endpoint provided by AI Toolkit
API_URL = "http://127.0.0.1:5272/v1/chat/completions"

# Set up the Streamlit interface

st.title("Local GenAI Chatbot")

if "messages" not in st.session_state:
    st.session_state.messages = []


# Input field for user messages


user_input = st.text_input("You:", key="input")

if user_input:
    st.session_state.messages.append({"role": "user", "content": user_input})

    # Prepare the payload for the API request
    payload = {
        "model": "Phi-3-mini-128k-cpu-int4-onnx",  # Replace with your model's name
        "messages": st.session_state.messages
    }

    # Send the request to the local API
    response = requests.post(API_URL, json=payload, timeout=120)
    if response.status_code == 200:
        bot_reply = response.json()['choices'][0]['message']['content']
        st.session_state.messages.append({"role": "assistant", "content": bot_reply})
    else:
        st.error(f"Error {response.status_code}: unable to get a response from the model.")


# Display the conversation

for msg in st.session_state.messages:
    st.markdown(f"**{msg['role'].capitalize()}:** {msg['content']}")
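For reference, the AI Toolkit endpoint returns OpenAI-style chat-completions JSON, which is what the script's `response.json()['choices'][0]['message']['content']` line unpacks. A small sketch of that shape, using a made-up sample response:

```python
# Helper showing the response shape chat_app.py relies on.
def extract_reply(response_json):
    """Pull the assistant's text out of a chat-completions response."""
    return response_json["choices"][0]["message"]["content"]

# Made-up sample response for illustration only.
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ]
}
print(extract_reply(sample))  # Hello! How can I help?
```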

Run the Chat Application:

streamlit run chat_app.py

This will start the Streamlit server and open the chat application in your default web browser.

STEP 5: Enhance and Explore

Here are a few additional enhancements that can be made to the basic chat app.

  1. Add Conversation Memory: Implement logic to maintain context over multiple interactions.
  2. Customize Prompts: Modify the prompt structure to guide the model's responses.
  3. Integrate Multi-Modal Inputs: Use AI Toolkit's support for attachments to handle images or files.
  4. Deploy the Application: Consider deploying your application using platforms like Streamlit Cloud or Heroku for broader access.
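Enhancements 1 and 2 can be sketched together: bound the history that is sent to the model and prepend a configurable system prompt. The function below is illustrative, not part of AI Toolkit; the model name is the same example used in `chat_app.py`:

```python
# Sketch of enhancements 1 and 2: keep only the most recent turns (bounded
# conversation memory) and lead with a customizable system prompt.
def build_payload(messages, system_prompt="You are a concise, helpful assistant.",
                  max_turns=10, model="Phi-3-mini-128k-cpu-int4-onnx"):
    """Trim older turns to bound the context and prepend a system message."""
    recent = messages[-max_turns:]
    return {"model": model,
            "messages": [{"role": "system", "content": system_prompt}] + recent}

# Example: 15 accumulated turns, but only the newest 4 are sent.
history = [{"role": "user", "content": f"question {i}"} for i in range(15)]
payload = build_payload(history, max_turns=4)
print(len(payload["messages"]))        # 5: one system message + the 4 newest turns
print(payload["messages"][0]["role"])  # system
```

In `chat_app.py`, this would replace the inline `payload = {...}` dictionary, so every request carries the system prompt and a bounded slice of the conversation.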

✅ Summary

What we learnt so far:

  1. Install VS Code and required extensions
  2. Set up a Python project with a virtual environment
  3. Install and configure AI Toolkit to download a local model
  4. Build and run a chat application using Streamlit and the local model
  5. Enhance the application with additional features and explore deployment options
