Cost Efficiency (Open Source)
Lower Long-Term Costs
Customised Data Control
Pre-Trained Model
Get Your DeepSeek AI Model Running in a Day
You want to run DeepSeek-R1 8B inside a Docker container for portability and isolation, but you are unsure how to set up Ollama within the container.
Run the Ollama container using the following command:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
This starts the container and exposes Ollama’s API on port 11434.
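If you want to confirm the container is up and the API is reachable before going further, a quick check from the host looks like this (assuming you run it on the same machine that started the container):
# Confirm the Ollama container is running
docker ps --filter name=ollama
# Ollama's root endpoint replies "Ollama is running" when the API is up
curl http://localhost:11434/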
Log in to the Ollama container using the docker exec command:
docker exec -it ollama /bin/bash
This opens an interactive shell inside the Ollama container.
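If you prefer not to open a shell at all, the same Ollama commands can also be run non-interactively from the host. The one-liners below are a sketch of that alternative, assuming the container is still named ollama:
# Pull and run the model without entering the container
docker exec ollama ollama pull deepseek-r1:8b
docker exec -it ollama ollama run deepseek-r1:8b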
Inside the Ollama container, pull the required DeepSeek model:
ollama pull deepseek-r1:8b
This downloads the model weights and all of their dependencies.
Now run the pulled model using the following command:
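To double-check that the download completed, you can list the models known to Ollama. Run inside the container, something like this should show deepseek-r1:8b:
# List locally available models and their sizes
ollama list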
ollama run deepseek-r1:8b
The model now runs in interactive mode, so you can test it with any prompt you like, for example:
>>> Hello!
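When you are done experimenting, you can leave the interactive session and then the container shell. As a rough sketch:
# Exit the Ollama interactive prompt
/bye
# Leave the container shell and return to the host
exit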
Run the Open WebUI container for Ollama using the following command. Replace the <YOUR-IP> field with your local IP address (see the note after the command on finding it):
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://<YOUR-IP>:11434 \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
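How you find your local IP depends on your operating system; the commands below are common options rather than the only ones (the interface name en0 is just an example on macOS):
# Linux: print the host's IP addresses
hostname -I
# macOS: print the IP of a given interface (en0 here)
ipconfig getifaddr en0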
Now open your browser, go to the URL “http://<YOUR-IP>:3000”, and start interacting with the DeepSeek model.
Running DeepSeek-R1 8B in a Docker container provides portability, isolation, and easy deployment across multiple environments. You can now interact with the AI model using local commands or API requests.
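As an example of the API route, the sketch below sends a single prompt to Ollama's generate endpoint with curl. The prompt text is just a placeholder, and <YOUR-IP> is the same address used above (localhost also works from the host itself):
# One-off, non-streaming request to the DeepSeek model via Ollama's HTTP API
curl http://<YOUR-IP>:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Explain Docker volumes in one sentence.",
  "stream": false
}'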
Ready to transform your business with our technology solutions? Contact us today to leverage our AI/ML expertise.