How to Install and Run DeepSeek R1 and DeepSeek Coder Models Using Ollama

DeepSeek R1 is a powerful open reasoning model that excels at tasks such as data analysis and insight extraction. Leveraging the simplicity and flexibility of Ollama, you can get DeepSeek R1 up and running in no time. This guide walks you through the installation process and demonstrates how to test the model effectively.


Prerequisites

Before diving into the setup, ensure you have the following:

  1. Ollama Installed: Ollama is a platform for running AI models locally. Download Ollama and follow the installation instructions for your operating system.
  2. System Requirements:
    • Minimum 16GB RAM
    • GPU (for optimal performance)
    • At least 10GB of free disk space

Step 1: Install Ollama

If you haven’t installed Ollama yet, follow these steps:

  1. Download Ollama:

    • Visit the official Ollama website.
    • Choose the version suitable for your operating system (Windows, macOS, or Linux).
  2. Install the Application:

    • Run the downloaded installer and follow the on-screen instructions.
    • Once installed, open Ollama to verify the setup.
  3. Command Line Interface (CLI):

    • Ollama includes a CLI for managing and running models. Open your terminal and type ollama to check if it’s installed correctly.
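Beyond eyeballing the terminal output, the check can be scripted. Below is a minimal sketch (my own helper functions, not part of Ollama) that confirms the `ollama` binary is on your PATH and, if the server is already running, reads its version from the local API on the default port:

```python
import json
import shutil
import urllib.error
import urllib.request
from typing import Optional

def ollama_cli_installed() -> bool:
    """True if the `ollama` executable is found on PATH."""
    return shutil.which("ollama") is not None

def ollama_server_version(base_url: str = "http://localhost:11434") -> Optional[str]:
    """Return the running server's version string, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/version", timeout=2) as resp:
            return json.loads(resp.read()).get("version")
    except (urllib.error.URLError, OSError):
        return None
```

If `ollama_server_version()` returns `None`, the CLI may be installed but the background server is not running yet (start it with `ollama serve` or by opening the desktop app).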

Step 2: Download and Install DeepSeek R1

To use DeepSeek R1 with Ollama:

  1. Install DeepSeek R1:

    • Run the command:

```shell
ollama pull deepseek-r1
```
    • The system will download the necessary files. This may take a few minutes depending on your internet speed.

  2. Verify Installation:

    • After installation, confirm by running:

```shell
ollama list
```

      Ensure deepseek-r1 is listed among the installed models.
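If you want to verify this from a script rather than by eye, the listing can be parsed. A small sketch, assuming the current `ollama list` column layout (a header row, then one model per line with the name in the first column) — the helper names here are my own:

```python
import subprocess  # used in the commented-out usage below
from typing import List

def installed_models(listing: str) -> List[str]:
    """Extract model names from `ollama list` output (first column, header skipped)."""
    lines = listing.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.split()]

def has_model(listing: str, name: str) -> bool:
    # Tags like ":latest" are ignored, so "deepseek-r1" matches "deepseek-r1:latest".
    return any(m.split(":")[0] == name for m in installed_models(listing))

# Usage (requires Ollama installed):
# output = subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout
# print(has_model(output, "deepseek-r1"))
```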


Step 3: Run DeepSeek R1 Model

Once installed, you can run DeepSeek R1 using the Ollama CLI or the web-based user interface (WebUI):

Using the CLI

  1. Run the Model:

    • Use the command:

```shell
ollama run deepseek-r1
```
    • This will launch an interactive session where you can input queries.

  2. Example Queries:

    • For data analysis:

```
Analyze this dataset: [paste dataset here]
```
    • For insights:

```
What are the key trends in this data?
```
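The interactive session can also be driven programmatically: Ollama serves a local REST API on port 11434. Here is a minimal sketch, assuming `ollama serve` (or the desktop app) is running and deepseek-r1 has been pulled:

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> bytes:
    """JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete reply instead of a token stream
    }).encode("utf-8")

def ask(prompt: str, model: str = "deepseek-r1") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (server must be running):
# print(ask("What are the key trends in this data? ..."))
```

Setting `"stream": False` keeps the example simple; by default the endpoint streams one JSON object per token.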

Using the WebUI

Benefits of the WebUI:

- User-friendly and intuitive interface.
- Supports visualizing outputs and exploring data interactively.

The WebUI used in this guide is Open WebUI; see its GitHub repository for full documentation.

Using Docker Compose for the WebUI

If you prefer to use Docker Compose to run the WebUI:

  1. Install Docker and Docker Compose:

    • Ensure Docker is installed on your system. Download Docker.
    • Docker Compose is included with Docker Desktop. For Linux, install it separately if needed.
  2. Create a docker-compose.yml File:

    • Create a new file named docker-compose.yml in your desired directory with the following content:

```yaml
services:
  ollama:
    volumes:
      - ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest

  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 3000:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}
```
    • This configuration pulls the latest Ollama image, maps port 3000 for the WebUI, and mounts a data directory for persistence.

  3. Run Docker Compose:

    • Navigate to the directory containing your docker-compose.yml file and run:

```shell
docker compose up
```
    • This starts Ollama and Open WebUI; the interface is accessible at http://localhost:3000.

  4. Interact with DeepSeek R1:

    • Once the WebUI is running, access it through your browser (typically at http://localhost:3000 or a similar local address).
    • Select DeepSeek R1 from the available models.
    • Input queries directly into the interface and view results in real-time.
  5. Stop the WebUI:

    • To stop the service, press Ctrl+C or run:

```shell
docker compose down
```
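The containers can take a little while to come up the first time. Here is a small polling helper (my own sketch, not part of Open WebUI or Docker) that waits until the WebUI answers on port 3000 before you open the browser:

```python
import time
import urllib.error
import urllib.request

def wait_for(url: str, deadline_s: float = 60.0, interval_s: float = 2.0) -> bool:
    """Return True once `url` responds with any HTTP status, False on timeout."""
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except urllib.error.HTTPError:
            return True   # server answered, even if with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(interval_s)  # not up yet; retry after a short pause
    return False

# Usage:
# if wait_for("http://localhost:3000"):
#     print("Open WebUI is ready")
```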

Step 4: DeepSeek Coder Model V2

DeepSeek Coder is another powerful model in the DeepSeek series, specifically designed to assist with coding tasks, including code generation, debugging, and explaining complex logic. You can install and use it similarly to DeepSeek R1.

Download and Run DeepSeek Coder

  1. Install DeepSeek Coder:

    • Run the command:

```shell
ollama pull deepseek-coder-v2
```
    • This will download and set up DeepSeek Coder. The process may take a few minutes.

  2. Verify Installation:

    • Confirm that the model is installed:

```shell
ollama list
```

Ensure deepseek-coder-v2 is listed.

Run DeepSeek Coder

  1. Run the Model:

    • Start DeepSeek Coder with:

```shell
ollama run deepseek-coder-v2
```
    • Input queries related to coding tasks.

  2. Example Queries:

    • Code generation:

```
Write a Python script to scrape data from a website.
```
    • Debugging:

```
Find the error in this code snippet: [paste code here]
```
    • Code explanation:

```
Explain what this function does: [paste function here]
```
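Replies from coding models usually arrive as markdown, with the generated code wrapped in triple-backtick fences. If you drive the model from a script (for example via the REST API on port 11434), a small helper — an assumption about the typical reply format, not a documented API — can pull the code out for saving to a file:

```python
import re
from typing import Optional

def extract_code(reply: str) -> Optional[str]:
    """Return the contents of the first triple-backtick fenced block, or None."""
    match = re.search(r"```[\w+-]*\n(.*?)```", reply, re.DOTALL)
    return match.group(1).rstrip("\n") if match else None

# Usage:
# code = extract_code(model_reply)
# if code is not None:
#     with open("script.py", "w") as f:
#         f.write(code)
```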

Using DeepSeek Coder in the WebUI

  1. Select DeepSeek Coder:

    • Open the WebUI and choose DeepSeek Coder from the model list.
  2. Interact with the Model:

    • Input your programming-related queries and get detailed responses directly in the WebUI.

Conclusion

With Ollama, DeepSeek R1, and DeepSeek Coder, you have a robust local setup for both data analysis and coding assistance. By following this guide, you've installed and tested both models using the CLI and the WebUI, whether natively or through Docker Compose. Start exploring their capabilities today!
