Imagine typing a single line describing the app you want — and moments later, having the complete, ready-to-run code in your hands. No endless Googling, no boilerplate hunting, no copy-pasting from half-working GitHub repos. That’s exactly what this Code Generator delivers.
In this guide, we’re going from zero to a fully functional AI-powered coding assistant — one that lives in your browser, lets you describe what you need, and instantly generates clean, runnable code. We’ll wire it up with Streamlit for a beautiful UI, connect it to OpenAI’s latest models for powerful code generation, and add smart features like project scaffolding, JSON-based multi-file outputs, and one-click ZIP downloads.
By the end, you won’t just have a coding tool — you’ll have a personal code factory that can spin up anything from a FastAPI backend to a React app in minutes.
Prerequisites
Before we dive in, make sure you have:
- Python 3.11+ installed (check with python3 --version)
- pip 24+ installed (check with pip --version)
- An OpenAI API key from the OpenAI dashboard
- Basic familiarity with running Python scripts
Step 1: Verify Python & Pip
We’re going to make sure you have a modern Python and the right pip before doing anything else.
python3 --version
pip --version
# (also useful)
python3 -m pip --version
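# On Windows, the "py" launcher plays the same role if python3 isn't found:
py --version
py -m pip --version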
What you want to see (or newer):
- Python 3.11.x ✅
- pip 24.x ✅
For example, output like this means you’re good to go:
- Python 3.11.9 → perfect
- pip 24.0 → perfect
Step 2: Create Project Folder & Virtual Environment
Now that Python and pip are verified, we’ll set up a clean workspace so dependencies stay isolated.
Make a new folder for the project
mkdir codegen && cd codegen
This:
- Creates a folder named codegen.
- Moves you into it so all files stay organized.
Create a virtual environment
python3 -m venv venv
- python3 -m venv venv creates a self-contained environment in a folder named venv.
- This ensures packages installed here won’t affect your global Python setup.
Activate the virtual environment
macOS / Linux
source venv/bin/activate
Windows (PowerShell)
.\venv\Scripts\Activate.ps1
When active, you’ll see (venv) at the start of your terminal prompt.
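To confirm you’re really inside the venv, you can ask Python where it’s running from; the printed path should end in your project’s venv folder:
python -c "import sys; print(sys.prefix)"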
Step 3: Install Required Dependencies
Run:
pip install streamlit openai python-dotenv
What these do:
- streamlit → For creating the interactive web UI
- openai → To access GPT-5 (or any other OpenAI model) for code generation
- python-dotenv → For securely loading your API keys from a .env file
After this step, your environment is ready to start building the code generator.
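For a quick sanity check that all three packages import cleanly (note that python-dotenv imports as dotenv), run:
python -c "import streamlit, openai, dotenv; print('ok')"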
Step 4: Upgrade the OpenAI SDK
# make sure your virtual env is active: (venv) in the prompt
pip install --upgrade openai
Verify the install:
pip show openai
You should see something like:
Name: openai
Version: 1.99.x
Location: .../venv/lib/python3.11/site-packages
Why this matters: we’re using the new 1.x SDK (from openai import OpenAI + client.chat.completions.create(...)).
Older code (openai.ChatCompletion.create) will break.
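For reference, here is the minimal 1.x call shape the app relies on. A quick sanity-check sketch, assuming OPENAI_API_KEY is already set in your environment:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
If this snippet prints a greeting, your SDK is ready for the app.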
Step 5: Add your API key
Create a file named .env in the project root, and add:
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxx
Replace the placeholder value with your actual API key from the OpenAI dashboard.
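To double-check that the key loads (without ever echoing the key itself), run this from the project root, where .env lives:
python -c "from dotenv import load_dotenv; import os; load_dotenv(); print('key loaded:', bool(os.getenv('OPENAI_API_KEY')))"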
Step 6: Write the Python Script
In your project root, create a file named app.py.
Add the following code:
import os
import io
import json
import time
import zipfile
from dotenv import load_dotenv
import streamlit as st
from openai import OpenAI
# -------------------- Setup --------------------
load_dotenv()
st.set_page_config(page_title="Code Generator", layout="wide")
st.title("🧠➡️💻 Code Generator")
st.caption(
"Type what you want built. Single prompt in → code out. "
"Optionally scaffold multi-file projects and export as a ZIP."
)
# ---- Sidebar: config ----
with st.sidebar:
st.subheader("Configuration")
default_key = os.getenv("OPENAI_API_KEY", "")
api_key = st.text_input(
"API Key (uses OPENAI_API_KEY if blank)",
value="",
type="password",
help="Leave empty to use environment variable."
)
base_url = st.text_input("Custom Base URL (optional)", placeholder="https://api.openai.com/v1")
st.caption("Tip: Point this to OpenRouter or a self-hosted vLLM that speaks the OpenAI API.")
st.divider()
st.subheader("Presets")
PRESETS = {
"FastAPI hello endpoint": 'Build a FastAPI endpoint /hello that returns JSON {"message":"Hello, <name>"} and accepts ?name= query param.',
"Flask minimal app": "Create a minimal Flask app with one route, plus a requirements.txt content.",
"React + Vite starter": "Create a Vite + React starter with a Hello component and an API client file. Include package.json and README.",
"Node Express API": "Create an Express server with /health, /users CRUD routes, and a Dockerfile + docker-compose.yml.",
"Python CLI tool": "Create a Python CLI that fetches a URL and prints title + HTTP status. Package with pyproject.toml.",
}
chosen_preset = st.selectbox("Quick prompt", ["—"] + list(PRESETS.keys()))
st.caption("Selecting a preset will replace the main prompt.")
# Build client
client_kwargs = {}
if base_url.strip():
client_kwargs["base_url"] = base_url.strip()
client = OpenAI(api_key=(api_key or default_key), **client_kwargs)
# -------------------- UI --------------------
default_prompt = PRESETS["FastAPI hello endpoint"]
if chosen_preset != "—":
default_prompt = PRESETS[chosen_preset]
prompt = st.text_area(
"Describe what code you want:",
value=default_prompt,
height=160,
)
col1, col2, col3, col4 = st.columns([1, 1, 1, 1])
with col1:
model = st.selectbox(
"Model",
options=["gpt-5-chat-latest", "gpt-4o"],
index=0,
help="Pick the model to generate code.",
)
with col2:
language = st.text_input("Target language (hint for the model)", value="python")
with col3:
temperature = st.slider("Creativity (temperature)", 0.0, 1.0, 0.2, 0.1)
with col4:
top_p = st.slider("Top-p", 0.0, 1.0, 1.0, 0.05)
mode = st.radio(
"Output mode",
["Single file (raw code)", "Project (multi-file JSON manifest)"],
horizontal=True,
help="Project mode expects STRICT JSON: {'files':[{'path':'...','content':'...'}]}",
)
streaming = st.checkbox("Stream tokens", value=True)
add_scaffolding = st.checkbox("Suggest README/requirements/Dockerfile/tests (project mode)", value=True)
seed = st.number_input(
"Seed (optional, for reproducibility where supported)",
value=0, min_value=0, step=1,
help="Set > 0 to request deterministic-ish output (if the model supports it)."
)
# History state
if "history" not in st.session_state:
st.session_state.history = []
# -------------------- Helpers --------------------
def ext_for_lang(lang: str) -> str:
if not lang:
return "txt"
lang = lang.lower()
return {
"python": "py", "javascript": "js", "typescript": "ts", "bash": "sh", "go": "go",
"java": "java", "c": "c", "cpp": "cpp", "csharp": "cs", "rust": "rs", "php": "php",
"ruby": "rb", "swift": "swift", "kotlin": "kt", "html": "html", "css": "css",
"sql": "sql", "markdown": "md",
}.get(lang, "txt")
def system_message(mode: str, add_scaf: bool) -> str:
if mode.startswith("Single"):
return (
"You are a senior software engineer.\n"
"Return ONLY runnable source code for the user's request. No explanations, no markdown fences.\n"
"Prefer minimal, dependency-light solutions."
)
extra = " Include README.md, dependency files, Dockerfile, and tests where reasonable." if add_scaf else ""
return (
"You are a senior software engineer.\n"
"Return STRICT JSON ONLY with this schema (no markdown, no extra text):\n"
'{\n "files": [\n {"path": "string (posix file path)", "content": "string file content"}\n ]\n}\n'
"Paths must be relative and safe (no absolute or parent traversal)." + extra
)
def render_manifest(manifest: dict, language_hint: str):
st.subheader("📂 Project Files")
for f in manifest.get("files", []):
st.markdown(f"**`{f['path']}`**")
st.code(f.get("content", ""), language=language_hint or "python")
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
for f in manifest.get("files", []):
z.writestr(f["path"], f.get("content", ""))
st.download_button(
"📦 Download project.zip",
data=buf.getvalue(),
file_name="project.zip",
mime="application/zip",
use_container_width=True,
)
# -------------------- Generate --------------------
generate = st.button("⚡ Generate Code", type="primary")
if generate:
if not prompt.strip():
st.warning("Please enter a prompt.")
elif not (api_key or default_key):
st.error("Missing API key. Provide one in sidebar or set OPENAI_API_KEY.")
else:
with st.spinner("Generating…"):
try:
sys_msg = system_message(mode, add_scaffolding)
user_msg = (
f"Language: {language}\n\nTask:\n{prompt.strip()}\n\n" +
("Output format: Return only raw code."
if mode.startswith("Single")
else 'Output format: Return strict JSON object exactly like {"files":[{"path":"...","content":"..."}]}')
)
t0 = time.time()
usage = None
output_text = ""
# ---- Streaming ----
if streaming:
placeholder = st.empty()
acc = []
try:
with client.chat.completions.stream(
model=model,
messages=[
{"role": "system", "content": sys_msg},
{"role": "user", "content": user_msg},
],
temperature=temperature,
top_p=top_p,
seed=(None if seed == 0 else seed),
) as stream:
for event in stream:
token_text = None
                                # The 1.x stream helper emits typed events; "content.delta" carries new text
                                if getattr(event, "type", None) == "content.delta":
                                    token_text = event.delta
elif hasattr(event, "choices") and event.choices:
delta = event.choices[0].delta
if hasattr(delta, "content") and delta.content:
token_text = delta.content
elif isinstance(delta, dict) and delta.get("content"):
token_text = delta["content"]
if token_text:
acc.append(token_text)
placeholder.code(
"".join(acc),
language="json" if mode.startswith("Project") else (language or "python")
)
try:
                                # Chat streams expose get_final_completion(); keep a fallback for older helpers
                                final_resp_fn = getattr(stream, "get_final_completion", None) or getattr(stream, "get_final_response", None)
if callable(final_resp_fn):
resp_obj = final_resp_fn()
usage = getattr(resp_obj, "usage", None)
except Exception:
pass
except Exception as stream_err:
st.info(f"Streaming failed, falling back to non-streaming. ({stream_err})")
acc = []
output_text = "".join(acc).strip()
elapsed = time.time() - t0
# Fallback if no output
if not output_text:
resp = client.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": sys_msg},
{"role": "user", "content": user_msg},
],
temperature=temperature,
top_p=top_p,
seed=(None if seed == 0 else seed),
)
output_text = resp.choices[0].message.content.strip()
usage = getattr(resp, "usage", None)
elapsed = time.time() - t0
# ---- Non-streaming ----
else:
resp = client.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": sys_msg},
{"role": "user", "content": user_msg},
],
temperature=temperature,
top_p=top_p,
seed=(None if seed == 0 else seed),
)
output_text = resp.choices[0].message.content.strip()
usage = getattr(resp, "usage", None)
elapsed = time.time() - t0
# ---- Render output ----
if not output_text:
st.error("The model returned an empty response. Try turning OFF streaming or upgrading the `openai` package.")
elif mode.startswith("Single"):
st.subheader("🧩 Generated Code")
st.code(output_text, language=language or "python")
st.download_button(
"⬇️ Download code",
data=output_text,
file_name=f"generated.{ext_for_lang(language)}",
mime="text/plain",
use_container_width=True,
)
else:
try:
manifest = json.loads(output_text)
if not isinstance(manifest, dict) or "files" not in manifest:
raise ValueError("Invalid manifest: top-level 'files' missing")
render_manifest(manifest, language)
except Exception as je:
st.error(f"Failed to parse JSON manifest. Showing raw output for debugging.\n\n{je}")
st.code(output_text, language="json")
if usage:
try:
st.caption(
f"Tokens — prompt: {usage.prompt_tokens}, completion: {usage.completion_tokens}, "
f"total: {usage.total_tokens} • Latency: {elapsed:.2f}s"
)
except Exception:
st.caption(f"Latency: {elapsed:.2f}s")
else:
st.caption(f"Latency: {elapsed:.2f}s")
st.session_state.history.append(
{"prompt": prompt, "model": model, "mode": mode, "language": language, "output": output_text}
)
except Exception as e:
st.error(f"Error: {e}")
st.divider()
# -------------------- History --------------------
with st.expander("History (last 10)"):
if not st.session_state.history:
st.write("No history yet.")
else:
for i, h in enumerate(reversed(st.session_state.history[-10:]), 1):
st.markdown(f"**{i}. {h['model']} • {h['mode']} • {h['language']}**")
st.text_area("Prompt", h["prompt"], height=80, key=f"hist_prompt_{i}", disabled=True)
code_lang = "json" if h["mode"].startswith("Project") else (h["language"] or "python")
st.code(h["output"][:2000], language=code_lang)
st.caption(
"Tip: In Project mode, the model returns a JSON manifest so you can scaffold full repos and download them as a ZIP."
)
This script:
- Launches a Streamlit web app called “Code Generator” where you type what you want built and get code back.
- Lets you plug in an API key + optional custom base URL (so you can hit OpenAI, OpenRouter, or your own vLLM).
- Includes quick-start presets (FastAPI/Flask/React/Express/CLI) that auto-fill the main prompt.
- Lets you pick the model (gpt-5-chat-latest or gpt-4o), target language, temperature, top-p, and an optional seed for repeatability.
Two output modes:
- Single file (raw code): Returns only runnable source code (no Markdown fences, no explanations).
- Project (multi-file): Returns a strict JSON manifest, {"files":[{"path":"...","content":"..."}]}, to scaffold an entire repo (see the sketch after this list).
- “Suggest scaffolding” toggle (project mode) nudges the model to also include README.md, requirements.txt, Dockerfile, tests, etc.
- Streaming support: Shows tokens live as they arrive; falls back to non-streaming if needed.
- Strict system prompts force the model to output exactly raw code (single-file) or strict JSON (project mode).
- Parses the JSON manifest (project mode), previews each file, and offers a Download ZIP of the generated project.
- Auto-detects file extensions from your chosen language for clean downloads.
- Shows usage info when available (prompt/completion/total tokens + latency).
- Keeps a small history of your last 10 generations with prompts and outputs for quick reference.
- Graceful error handling: empty outputs, bad JSON manifests, missing API key, streaming errors → all handled with helpful messages.
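To make the Project-mode contract concrete, here is a minimal sketch of that manifest shape and how you could unpack it to disk by hand (the paths and file contents are made-up examples; the app’s ZIP download does this for you):
import json
import os

# A tiny example of the strict manifest the model is instructed to return
raw = '{"files": [{"path": "app/main.py", "content": "print(1)\\n"}, {"path": "README.md", "content": "# Demo\\n"}]}'
manifest = json.loads(raw)

for f in manifest["files"]:
    # Mirror the app's safety rule: relative paths only, no parent traversal
    assert not f["path"].startswith("/") and ".." not in f["path"]
    os.makedirs(os.path.dirname(f["path"]) or ".", exist_ok=True)
    with open(f["path"], "w", encoding="utf-8") as fh:
        fh.write(f["content"])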
Step 7: Run it
streamlit run app.py
Once it starts, you’ll see something like this in your terminal:
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://<your-local-ip>:8501
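If port 8501 is already in use, Streamlit’s --server.port flag lets you pick another:
streamlit run app.py --server.port 8502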
Step 8: Check the App
Now open your browser and visit:
http://localhost:8501
Step 9: Configure and Generate Code in the Code Generator UI
Now that your environment is ready and the Code Generator UI is loaded, it’s time to set up your request and generate the code.
Enter API Key
In the left panel under Configuration, paste your OpenAI API key in the API Key field.
If you're running against a self-hosted or alternative endpoint, enter it in Custom Base URL (optional). Leave it blank to use the default OpenAI endpoint:
https://api.openai.com/v1
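Under the hood, that field is simply passed to the client constructor. A minimal sketch of the same idea outside the app (the OpenRouter URL is one example of a service that speaks the OpenAI wire format):
from openai import OpenAI

client = OpenAI(
    api_key="your-key-here",                  # key for whichever service you target
    base_url="https://openrouter.ai/api/v1",  # example alternative endpoint
)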
Describe Your Code
In the prompt box (middle section), clearly describe the code you want generated.
Example:
Build a FastAPI endpoint /hello that returns JSON {"message": "Hello, <name>"} and accepts ?name= query param.
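For that prompt, the generated code should come back looking something like this (your output may vary; this is just the shape to expect):
from fastapi import FastAPI

app = FastAPI()

@app.get("/hello")
def hello(name: str = "world"):
    # GET /hello?name=Alice -> {"message": "Hello, Alice"}
    return {"message": f"Hello, {name}"}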
Model Selection
From the Model dropdown, select:
gpt-5-chat-latest
In Target language, type:
python
Adjust Creativity & Sampling
- Set Creativity (temperature) to 0.20 for more deterministic output.
- Set Top-p to 1.00 for full probability sampling.
Choose Output Mode
- Select Single file (raw code) if you want just the Python script.
- Keep Stream tokens enabled for real-time output.
- Check Suggest README/requirements/Dockerfile/tests only if you want the AI to also generate project setup files (the toggle only applies in Project mode).
Set Optional Parameters
- Set Seed to a value greater than 0 for reproducible output where the model supports it; leaving it at 0 disables the seed.
Generate the Code
- Once all fields are set, click the ⚡ Generate Code button.
- The model will process your request and output the generated Python code directly in the UI.
Save the Code
- Copy the generated code (or use the download button) and save it in your project folder, e.g., as main.py. You can run it right away, as shown below.
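If you saved the FastAPI example as main.py, try it out (FastAPI and uvicorn aren’t installed by this guide, so add them first):
pip install fastapi uvicorn
uvicorn main:app --reload
Then visit http://localhost:8000/hello?name=Alice to see the JSON response.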
Conclusion
And that’s it — your very own AI-powered Code Generator is up and running! With just a few simple steps, you’ve created a tool that can turn plain-English prompts into complete, ready-to-run code.
The best part? This setup isn’t limited to just Python scripts or single-file outputs — you can generate full projects, complete with Dockerfiles, READMEs, and test suites, all zipped and ready to go.
Now it’s your turn to experiment:
- Try different prompts.
- Switch between models.
- Build APIs, dashboards, CLIs, or even multi-file web apps.
Your imagination is now the only limit — the Code Generator will take care of the rest.