Building an Ollama-Powered GitHub Copilot Extension
Publish Date: Jan 6
A few months ago, I wrote about creating your first GitHub Copilot extension, and later discussed this topic on the GitHub Open Source Friday live stream.
Building off the Copilot extension template I made based on that initial blog post, I decided to take a crack at an Ollama-powered GitHub Copilot extension that brings local AI capabilities directly into your development workflow.
What is Ollama?
Before diving into the extension, let's briefly talk about Ollama. It's a fantastic tool that lets you run large language models locally on your machine. Think of it as having your own personal AI assistant that runs completely on your hardware – no cloud services required. This means better privacy, lower latency, and the ability to work offline.
And since everything runs on your own machine, there's nothing to pay for: Ollama itself is free to use.
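To give a feel for what "running locally" means in practice, here's a quick sketch of calling Ollama's HTTP API directly from TypeScript. It assumes Ollama is listening on its default local address (http://localhost:11434) and that you've already pulled the codellama model used later in this post:

// Quick sketch: ask a locally running Ollama instance a question.
// Assumes Ollama's default port (11434) and that codellama is pulled.
const response = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "codellama",
    prompt: "Write a function that reverses a string in TypeScript.",
    stream: false, // get a single JSON response instead of a stream
  }),
});

const { response: answer } = await response.json();
console.log(answer);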
Introducing the Ollama Copilot Extension
The Ollama Copilot extension demonstrates the potential of combining local AI processing with GitHub Copilot chat. While still under development, it showcases several powerful features:
Key Features:
Local AI Processing: All AI operations run on your local machine through Ollama (for the Copilot extension, not Copilot Chat overall)
CodeLlama Integration: Leverages the CodeLlama model, which is specifically trained for programming tasks
Low Latency: Direct communication with a local AI model means faster response times and no cost to you, though this only holds when running the Copilot extension in development mode.
While Ollama enhances privacy by running locally, it's important to note that GitHub Copilot still uses cloud-based models, so complete privacy isn't guaranteed in this context.
Core Extension Structure
The extension is built using Hono.js, a lightweight web framework. To get things running, you can configure a couple of environment variables or go with the defaults.
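The exact variable names live in the repo, but conceptually the config looks something like this. This is just a sketch; the OLLAMA_BASE_URL and OLLAMA_MODEL environment variable names here are illustrative, not necessarily what the template uses:

// A sketch of the kind of config the extension reads on startup.
// The actual environment variable names in the repo may differ.
export const config = {
  ollama: {
    // Ollama's default local address
    baseUrl: process.env.OLLAMA_BASE_URL ?? "http://localhost:11434",
    // The model the extension prompts; this post uses CodeLlama
    model: process.env.OLLAMA_MODEL ?? "codellama",
  },
};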
The main endpoint handles incoming requests from GitHub Copilot, verifies them, and streams responses from Ollama:
app.post("/", async (c) => {
  // validation logic
  // ...
  return stream(c, async (stream) => {
    try {
      stream.write(createAckEvent());

      // TODO: detect file selection in question and use it as context instead of the whole file
      const userPrompt = getUserMessageWithContext({ payload, type: "file" });

      const ollamaResponse = await fetch(`${config.ollama.baseUrl}/api/generate`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: config.ollama.model,
          prompt: userPrompt,
          stream: true,
        }),
      });

      if (!ollamaResponse.ok) {
        stream.write(
          createErrorsEvent([
            {
              type: "agent",
              message: `Ollama request failed: ${ollamaResponse.statusText}`,
              code: "OLLAMA_REQUEST_FAILED",
              identifier: "ollama_request_failed",
            },
          ])
        );
      }

      for await (const chunk of getOllamaResponse(ollamaResponse)) {
        stream.write(createTextEvent(chunk));
      }

      stream.write(createDoneEvent());
    } catch (error) {
      console.error("Error:", error);
      stream.write(
        createErrorsEvent([
          {
            type: "agent",
            message: error instanceof Error ? error.message : "Unknown error",
            code: "PROCESSING_ERROR",
            identifier: "processing_error",
          },
        ])
      );
    }
  });
});
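The getUserMessageWithContext helper referenced above builds the prompt that gets sent to Ollama. It takes the user's message from the Copilot payload and, when file references are attached, appends their contents as markdown context: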
export function getUserMessageWithContext({
  payload,
  type,
}: {
  payload: CopilotRequestPayload;
  type: FileContext;
}): string {
  const [firstMessage] = payload.messages;
  const relevantReferences = firstMessage?.copilot_references?.filter(
    (ref) => ref.type === `client.${type}`
  );

  // Format context into markdown for Ollama
  const contextMarkdown = relevantReferences
    .map((ref) => {
      return `File: ${ref.id}\n${ref.data.language}\`\`\`\n${ref.data.content}\n\`\`\``;
    })
    .join("\n\n");

  return `${firstMessage.content}\n\n${
    contextMarkdown ? `${FILES_PREAMBLE}\n\n${contextMarkdown}` : ""
  }`;
}
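The endpoint also relies on a getOllamaResponse helper that isn't shown above. Ollama streams its /api/generate output as newline-delimited JSON, so that helper essentially walks the response body and yields each chunk's text. A minimal sketch (not the exact code from the repo) could look like this:

// Minimal sketch of a streaming reader for Ollama's /api/generate.
// Each line of the response body is a JSON object with a `response` field.
export async function* getOllamaResponse(
  response: Response
): AsyncGenerator<string> {
  if (!response.body) return;

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffered = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffered += decoder.decode(value, { stream: true });

    // Split on newlines; keep any partial line in the buffer
    const lines = buffered.split("\n");
    buffered = lines.pop() ?? "";

    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.response) yield chunk.response;
    }
  }
}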
Setting Up Your Development Environment
Prerequisites
Install and run Ollama locally
Install the CodeLlama model:
ollama pull codellama
Set up the extension:
npm install
npm run dev
Exposing Your Extension
To test the extension, you'll need to make the web app's port publicly accessible, for example with a tunneling tool or your editor's port forwarding.
I've called my extension ollamacopilot, but you can call yours whatever you want when running in development mode. When using a GitHub Copilot extension in Copilot Chat, the prompt must always start with @ followed by the name of your extension and then your prompt, e.g. @ollamacopilot how would you improve this code? Otherwise, Copilot Chat will not call your extension.
Current Limitations
Works only in local development environment at the moment
Requires local Ollama installation
Needs public Ollama API access for deployment
Future Possibilities
Multiple AI model support
Context-aware coding suggestions
Specialized development commands via slash commands
Securely exposing Ollama on my local network so that I can use it anywhere
If I do end up exposing Ollama for remote access, I'll probably use Pomerium to secure it on my local network. While Pomerium is known for its enterprise features, it's also perfect for hobbyists and self-hosters who want to secure their personal projects. There are other options, but that's what I'm going with.
One thing this experiment got me thinking about: it would be great if local Copilot extensions were a thing, or if GitHub Copilot supported running local models. This wouldn't work on GitHub.com or Codespaces, but it would be viable for local development environments and still be valuable. I don't think it will ever happen, but you never know.
Contributing
Contributions are welcome! Feel free to open an issue to:
Suggest new features & enhancements
Improve documentation
Report bugs
Wrapping Up
I'm hoping to get this to a place where people can deploy the GitHub App to production, but even in its current state it's already super useful in development mode.
Get started by checking out the project on GitHub, and don't forget to star it and the template it's based on if you find it useful!
This is a Copilot extension that leverages the Ollama API. It's a work in progress and currently works only in a local development environment, so you must have Ollama running locally. It can work deployed, but that requires your Ollama API to be reachable at a public address.