AI/ML

Deploying DeepSeek-R1 8B in Docker with Ollama: A Complete Setup Guide


Problem

You want to run DeepSeek-R1 8B inside a Docker container for portability and isolation, but are unsure how to set up Ollama within the container.

Solution

  • Run a Docker container using Ollama's official image.
  • Pull and run the DeepSeek-R1 8B model inside the container.
  • Expose the container's API port so the host can interact with the model.

1. Run the Ollama Docker container

Run the Ollama container using the following command:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

This starts the container and exposes Ollama’s API on port 11434.
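To verify that the API is reachable from the host, you can query the server directly. This is a quick sanity check, assuming the default port mapping from the command above:

```shell
# The root endpoint returns a plain-text health message ("Ollama is running").
curl http://localhost:11434

# The API also exposes a small JSON version endpoint.
curl http://localhost:11434/api/version
```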

2. Log in to the Ollama container

Open a shell inside the Ollama container using the docker exec command:

docker exec -it ollama /bin/bash

This drops you into a shell inside the running container.

3. Pull the DeepSeek-R1 model

Inside the Ollama container, pull the required DeepSeek model:

ollama pull deepseek-r1:8b

This downloads the model weights and all of its dependencies.
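You can confirm the download completed by listing the models stored locally inside the container:

```shell
# deepseek-r1:8b should appear in the list along with its size.
MODEL="deepseek-r1:8b"
ollama list | grep "$MODEL"
```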

4. Run the model

Now run the pulled model using the following command:

ollama run deepseek-r1:8b

The model now runs in interactive mode, and you can test it with any prompt you want.

>>>Hello!
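Besides the interactive prompt, the model can also be queried from the host over Ollama's REST API. A minimal non-streaming request (assuming the server is listening on localhost:11434) looks like this:

```shell
# Send a single generation request to the Ollama API.
# "stream": false returns one complete JSON object instead of a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```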

5. Run the Open WebUI container

Run the Open WebUI container using the following command. Replace <YOUR-IP> with your local IP address.

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://<YOUR-IP>:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Now open your browser, go to "http://<YOUR-IP>:3000", and start interacting with the DeepSeek model.

Conclusion

Running DeepSeek-R1 8B in a Docker container provides portability, isolation, and easy deployment across multiple environments. You can now interact with the model through the interactive CLI, API requests, or the Open WebUI interface.
