How I Built a GPT-OSS 120B-Parameter Coding Beast That Reviews, Fixes, and Writes Code Like Magic
Ayush kumar

Publish Date: Aug 7

Ever wondered what it’s like to have a 120-billion-parameter AI developer at your fingertips? Today, you can run OpenAI’s new gpt-oss-120B coding monster!

In this post, I’ll show you how I set up a next-level “Code Wizard” app: an AI assistant that reviews, fixes, and writes code like magic—using nothing but open-source models, a GPU-powered VM, Ollama, Streamlit, and a few lines of Python.

No API keys, no hidden fees—just you, your GPU, and the most powerful open-weight coding brain on the planet. Ready to build your own dev superpower? Let’s get started!

Prerequisites

Before you dive in, make sure you have the following:

  • A GPU-powered virtual machine (recommended: NVIDIA H100, A100, or H200, 80GB VRAM for 120B; 24GB+ for 20B)
  • Ubuntu 22.04 or similar Linux OS (tested with NodeShift’s templates)
  • Python 3.11+
  • SSH access to your VM

Resources

Link 1: https://huggingface.co/openai/gpt-oss-120b

Link 2: https://github.com/openai/gpt-oss

Link 3: https://ollama.com/library/gpt-oss

Step-by-Step Process to Build

For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.

Follow the account setup process and provide the necessary details and information.

Step 2: Create a GPU Node (Virtual Machine)

GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.


Navigate to the menu on the left side, select the GPU Nodes option in the Dashboard, and click the Create GPU Node button to deploy your first Virtual Machine.

Step 3: Select a Model, Region, and Storage

In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.


We will use 1 x H200 SXM GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.

Step 4: Select Authentication Method

There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.

Step 5: Choose an Image

In our previous blogs, we used pre-built images from the Templates tab when creating a Virtual Machine. However, for running OpenAI GPT-OSS, we need a more customized environment with full CUDA development capabilities. That’s why, in this case, we switched to the Custom Image tab and selected a specific Docker image that meets all runtime and compatibility requirements.

We chose the following image:

nvidia/cuda:12.1.1-devel-ubuntu22.04


This image is essential because it includes:

  • Full CUDA toolkit (including nvcc)
  • Proper support for building and running GPU-based applications like OpenAI GPT-OSS
  • Compatibility with CUDA 12.1.1 required by certain model operations

Launch Mode

We selected:

Interactive shell server


This gives us SSH access and full control over terminal operations — perfect for installing dependencies, running benchmarks, and launching tools like OpenAI GPT-OSS.

Docker Repository Authentication

We left all fields empty here.

Since the Docker image is publicly available on Docker Hub, no login credentials are required.

Identification

Template Name:

nvidia/cuda:12.1.1-devel-ubuntu22.04


These are CUDA and cuDNN images from gitlab.com/nvidia/cuda; the devel variant contains the full CUDA toolkit, including nvcc.

This setup ensures that the OpenAI GPT-OSS runs in a GPU-enabled environment with proper CUDA access and high compute performance.

After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.

Step 6: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.

Step 7: Connect to GPUs using SSH

NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.

Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.

Now open your terminal and paste the proxy SSH IP or direct SSH IP.

Next, if you want to check the GPU details, run the command below:

nvidia-smi


Step 8: Check the Available Python Version and Install a New Version

Run the following command to check the available Python version:

python3 --version

The system has Python 3.8.1 available by default. To install a higher version of Python, you'll need to use the deadsnakes PPA.

Run the following commands to add the deadsnakes PPA:

sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update


Step 9: Install Python 3.11

Now, run the following command to install Python 3.11 or another desired version:

sudo apt install -y python3.11 python3.11-venv python3.11-dev


Step 10: Update the Default Python3 Version

Now, run the following command to link the new Python version as the default python3:

sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3


Then, run the following command to verify that the new Python version is active:

python3 --version


Step 11: Install and Update Pip

Run the following command to install and update the pip:

curl -O https://bootstrap.pypa.io/get-pip.py
python3.11 get-pip.py


Then, run the following command to check the version of pip:

pip --version


Step 12: Create and Activate a Python 3.11 Virtual Environment

Run the following commands to create and activate a Python 3.11 virtual environment:

apt update && apt install -y python3.11-venv git wget
python3.11 -m venv chatbot
source chatbot/bin/activate


Step 13: Install Ollama

Run the following command to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh


Step 14: Serve Ollama

Run the following command to start the Ollama server so the model can be accessed over its local API:

ollama serve

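With `ollama serve` running in one terminal (or as a background service), you can sanity-check the server from Python before going further. Ollama listens on port 11434 by default. This helper is just an illustrative sketch, and it uses only the standard library so it works even before we install `requests` in Step 17:

```python
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        # The Ollama root endpoint replies with a short status message.
        with urllib.request.urlopen(base_url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama reachable:", ollama_is_up())
```

If this prints `False`, confirm that `ollama serve` is still running and that nothing else has claimed port 11434.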

Step 15: Pull the GPT-OSS 120B Model

GPT-OSS comes in two main versions: 20B and 120B. We will pull the 120B version and then run it.
Run the following command to pull the GPT-OSS 120B model:

ollama pull gpt-oss:120b


Step 16: Run the 120B GPT-OSS Model

To start an interactive session with the 120B model, run:

ollama run gpt-oss:120b

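Besides the interactive CLI session, the same model is exposed over Ollama's OpenAI-compatible HTTP API, which is exactly what the Streamlit app later in this post calls. Here is a minimal standard-library sketch of a one-off request (the `build_payload` and `ask` helper names are ours, for illustration):

```python
import json
import urllib.request

OLLAMA_API_URL = "http://localhost:11434/v1/chat/completions"

def build_payload(prompt: str) -> dict:
    """Build a non-streaming chat request for the 120B model."""
    return {
        "model": "gpt-oss:120b",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask(prompt: str) -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

With the server and model running, calling `ask("Write a Python hello world.")` should return the model's reply as a plain string.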

Step 17: Install Streamlit and Requests

Run the following command to install Streamlit and Requests:

pip install streamlit requests


Step 18: Connect to Your GPU VM with a Code Editor

Before you start running Python and Streamlit scripts with the GPT-OSS models, it’s a good idea to connect your GPU virtual machine (VM) to a code editor of your choice. This makes writing, editing, and running code much easier.

  • You can use popular editors like VS Code, Cursor, or any other IDE that supports SSH remote connections.
  • In this example, we’re using the Cursor code editor.
  • Once connected, you’ll be able to browse files, edit scripts, and run commands directly on your remote server, just like working locally.

Why do this?
Connecting your VM to a code editor gives you a powerful, streamlined workflow for Python development, allowing you to easily manage your code, install dependencies, and experiment with large models.

Step 19: Build the Streamlit App

Create a file named app.py in your project folder, and add the following code:

import streamlit as st
import requests

OLLAMA_API_URL = "http://localhost:11434/v1/chat/completions"

st.set_page_config(
    page_title="🧑‍💻 Code Wizard: Your Personal AI Dev Sidekick (gpt-oss-120B + Ollama + Streamlit)",
    page_icon="💻",
    layout="wide"
)

st.title("🧑‍💻 Code Wizard: Your Personal AI Dev Sidekick (gpt-oss-120B + Ollama + Streamlit)")
st.caption("Review, Refactor, or Generate code using OpenAI's open gpt-oss-120B model—all on your own GPU (cloud or local). No API keys needed.")

mode = st.radio("Choose a mode:", ["Review", "Refactor", "Generate"], horizontal=True)
code_input = st.text_area("Paste your code here 👇", height=250, placeholder="Paste code or text here...")

gen_task = ""
if mode == "Generate":
    gen_task = st.text_input("Describe what you want to generate (e.g., 'Python script for web scraping Google News')")

def build_prompt(mode, code_input, gen_task):
    if mode == "Generate":
        return (
            f"You are a senior developer. Write code for: {gen_task}. "
            "Respond with code only and explanations in comments."
        )
    elif mode == "Review":
        return (
            "You are a code reviewer. Review the following code for bugs, improvements, and style. "
            "Respond with inline comments:\n\n" + code_input
        )
    else:  # Refactor
        return (
            "You are an expert developer. Refactor the following code to make it cleaner, more efficient, and easier to read. "
            "Respond with the improved code:\n\n" + code_input
        )

if st.button("Run"):
    if mode == "Generate" and not gen_task.strip():
        st.warning("Please describe what code you want to generate.")
    elif mode in ["Review", "Refactor"] and not code_input.strip():
        st.warning("Please paste your code first.")
    else:
        prompt = build_prompt(mode, code_input, gen_task)
        payload = {
            "model": "gpt-oss:120b",
            "messages": [{"role": "user", "content": prompt}],
            "stream": False
        }
        with st.spinner("Thinking... (120B can take 10–60+ sec, depending on GPU!)"):
            try:
                response = requests.post(OLLAMA_API_URL, json=payload, timeout=300)
                response.raise_for_status()
                data = response.json()
                output = data["choices"][0]["message"]["content"]
                language = "python" if "python" in output.lower() or mode == "Generate" else "text"
                st.code(output, language=language)
            except Exception as e:
                st.error(f"Failed to get response from Ollama: {e}")

st.info(
    "Just make sure Ollama is running the `gpt-oss:120b` model (`ollama run gpt-oss:120b`)."
)


Step 20: Run the Streamlit App

Start the app by running this command in your terminal:

streamlit run app.py


After a few seconds, you’ll see a message like this:

You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://172.17.0.2:8501
External URL: http://149.7.4.152:8501


Open your web browser and go to the “External URL” (e.g., http://149.7.4.152:8501) to access your Code Wizard app from anywhere!

Step 21: Start Using Your Code Wizard!

In your browser, choose a mode:

  • Review
  • Refactor
  • Generate

Paste your code (for Review/Refactor) or describe what you want to generate (for Generate mode).

Click the “Run” button.

View the results instantly in your browser!

Get AI-powered code reviews, refactored code, or fresh code generated just for you.

Conclusion

That’s it—you’ve now got your very own “GPT-OSS 120B Coding Beast” up and running, reviewing, fixing, and writing code with superhuman power. Whether you’re building tools, automating code reviews, or just experimenting with next-gen AI, this setup gives you full freedom and total privacy—no more cloud restrictions or API keys.
The best part? You can tweak, fine-tune, and expand your Code Wizard however you like, all powered by open weights and your own hardware.
Ready to take your dev workflow to the next level? Try it out, share your results, and let me know what wild AI-powered coding projects you build next!
