
How to self-host DeepSeek R1 locally

DeepSeek R1 is a free, open-source LLM that serves as an alternative to OpenAI’s ChatGPT and can be fully self-hosted. This guide walks you through deploying DeepSeek R1 locally on your own computer.

What you need

You need a Windows, macOS, or Linux machine with at least 4.7 GB of available storage (enough for the default model; the larger variants need far more). You also need Docker installed for the web UI in Step 3.
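
To confirm Docker is available before you start, you can run:

Bash
docker --version   # an error here means Docker isn't installed yet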

Step 1: Install Ollama to run DeepSeek from the terminal (macOS/Linux/Windows)

On Linux, copy and paste the following into your terminal. On macOS and Windows, download the installer from https://ollama.com/download instead.

Bash
curl -fsSL https://ollama.com/install.sh | sh
ollama -v # check the installed Ollama version
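
On Linux, the install script usually sets up Ollama as a background service that starts automatically. If the version check reports that the server is not running, you can start it manually:

Bash
ollama serve   # runs the Ollama server in the foreground; leave this terminal open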


Step 2: Install DeepSeek using Ollama

Select the model size that best suits your hardware. The deepseek-r1:671b tag is the full R1 model; the smaller tags are distilled variants that trade some capability for much lower hardware requirements.

Bash
# Default 7B parameter model (4.7 GB download; ideal for consumer GPUs)
ollama run deepseek-r1

# Larger 70B parameter model (24 GB+ VRAM required)
ollama run deepseek-r1:70b

# Full DeepSeek-R1 (336 GB+ VRAM required for the 4-bit quantization)
ollama run deepseek-r1:671b
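
The first run downloads the model and then opens an interactive chat in your terminal. Once a model has been pulled, you can also query it over Ollama's local REST API. A minimal sketch, assuming Ollama is running on its default port 11434:

Bash
# Send one prompt to the local model and receive the complete response as JSON
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Explain model quantization in one paragraph.",
  "stream": false
}'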


Step 3: Set up Open WebUI

With Docker installed (see “What you need” above), run the following command.

Bash
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
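
This runs Open WebUI in the background on port 3000 and stores its data in a named Docker volume; by default it looks for the Ollama server on your host via host.docker.internal. To verify the container started cleanly:

Bash
docker ps --filter name=open-webui   # the container should be listed as "Up"
docker logs open-webui               # inspect startup output if anything looks wrong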

You can now open http://localhost:3000 in your browser and select deepseek-r1:latest (or whichever tag you pulled). If you installed DeepSeek on a cloud server instead, it is reachable at http://<your-server-ip>:3000.
