With the rapid advancement of Large Language Models (LLMs), Deepseek-r1 has emerged as a top contender. Matching the performance of OpenAI's o1 model in reasoning and code generation, Deepseek-r1 pairs a transformer architecture with long chain-of-thought reasoning, setting a new benchmark in AI-powered development tools.
Having used tools like Cursor and other paid code helpers, I was curious to see how Deepseek-r1 would perform in my workflow. This sparked my journey to integrate it into my environment and evaluate its capabilities.
Let’s Get Started 🚀
Step 1: Install Ollama
To get started, you’ll need Ollama, a platform that allows you to run LLMs locally. If you don’t already have it installed, download it from Ollama's official site. Follow the setup instructions to get it running smoothly on your machine.
Step 2: Download Deepseek-r1
Once Ollama is up and running, it’s time to pull the Deepseek-r1 model to your local environment. Run the following command in your terminal:
$ ollama pull deepseek-r1
After downloading, you can test if the model is working correctly by running a simple curl command:
$ curl http://localhost:11434/api/generate -d '{
"model": "deepseek-r1:latest",
"prompt": "Why is the sky blue?"
}'
If the model is successfully set up, you’ll see a chunked output in your terminal—indicating that Deepseek-r1 is ready for action.
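That chunked output is a stream of newline-delimited JSON objects, each carrying a fragment of the answer in its `response` field, with a final object marked `"done": true`. Here is a minimal sketch of reassembling such a stream into one string — the sample chunks are illustrative, not real model output:

```python
import json

def join_stream(lines):
    """Concatenate the `response` fragments from Ollama's
    newline-delimited JSON stream into a single string."""
    text = []
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk of the stream
            break
    return "".join(text)

# Illustrative chunks in the shape Ollama streams back:
sample = [
    '{"model":"deepseek-r1:latest","response":"The sky ","done":false}',
    '{"model":"deepseek-r1:latest","response":"is blue because...","done":true}',
]
print(join_stream(sample))  # -> The sky is blue because...
```

If you'd rather get the whole answer in one response, the API also accepts `"stream": false` in the request body.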
Step 3: Install the Continue.dev Extension
You'll need Visual Studio Code for this step. Open the Extensions Marketplace and install the Continue.dev extension. This extension bridges the gap between VS Code and advanced AI models, enabling seamless integration for coding assistance.
Once installed, you're one step closer to unleashing the power of Deepseek-r1!
Step 4: Configure Deepseek-r1 in Continue.dev
Open Continue.dev by clicking on its icon in the Activity Bar of VS Code.
In the chat window, locate the model selection button at the bottom-left corner.
Click on the button to open a dropdown menu.
From the menu, select Ollama as the platform.
Once Ollama is selected, you’ll see all available models in your local environment. Select Deepseek-r1 from the list.
At this point, Continue.dev is configured to use Deepseek-r1 via Ollama.
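Under the hood, Continue.dev stores this choice in its config file (typically `~/.continue/config.json`). If you prefer to configure it by hand, an entry along these lines does the job — the exact schema can vary between Continue versions, so treat this as a sketch rather than the definitive format:

```json
{
  "models": [
    {
      "title": "Deepseek-r1 (local)",
      "provider": "ollama",
      "model": "deepseek-r1:latest"
    }
  ]
}
```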
Ready to Code!
With everything set up, you can now take full advantage of Deepseek-r1's advanced reasoning and code-generation capabilities directly within VS Code. Continue.dev enables additional context-aware features, enhancing the development experience:
Autocomplete: Get precise suggestions for completing lines or blocks of code.
Code Refactoring: Highlight sections of code and ask for optimizations or rewrites.
Code Explanations: Select any code snippet and let the AI break it down for you.
Why Deepseek-r1?
Reasoning Excellence: Long chain-of-thought reasoning yields stronger results on logic-heavy coding tasks.
Transformer Power: Delivers robust performance in code generation tasks.
Local Execution: Run models locally for better privacy and faster responses.
By integrating Deepseek-r1 with Continue.dev, you unlock a powerful development workflow tailored to your specific needs.
Cheers to smarter coding! 🥂
r1 comes with a lot of variants: the 1.5b variant can easily run on a weak or older device, and the 8b variant works fine on my device.
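The variants follow the usual Ollama tag naming (`deepseek-r1:1.5b`, `deepseek-r1:8b`, and so on), so picking one is mostly a question of how much RAM you can spare. The memory figures below are my own rough ballpark assumptions for 4-bit quantized weights, not official numbers, but they let us sketch the choice:

```python
# Rough, assumed RAM ballparks (GB) per 4-bit quantized variant;
# adjust these for your own machine and quantization level.
VARIANT_RAM_GB = {
    "deepseek-r1:1.5b": 2,
    "deepseek-r1:7b": 6,
    "deepseek-r1:8b": 7,
    "deepseek-r1:14b": 11,
}

def pick_variant(free_ram_gb: float) -> str:
    """Return the largest variant whose assumed footprint fits in RAM."""
    fitting = [(ram, tag) for tag, ram in VARIANT_RAM_GB.items()
               if ram <= free_ram_gb]
    if not fitting:
        return "deepseek-r1:1.5b"  # smallest variant as a fallback
    return max(fitting)[1]

print(pick_variant(8))  # under these assumptions: deepseek-r1:8b
```

Then `ollama pull` whatever tag the rule of thumb suggests.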