Run DeepSeek Models Locally in 5 Minutes

DeepSeek provides various models optimized for different tasks, including coding, general queries, advanced reasoning, and vision tasks. To run the DeepSeek R1 model locally, users must install Ollama, pull the model, and execute it via the command line. Additionally, instructions for running the DeepSeek-Janus-Pro-1B model on Google Colab are provided, emphasizing the need for GPU optimization and necessary dependencies.

Uploaded by Ramez Maher

Overview of DeepSeek Models
DeepSeek offers a diverse range of models, each
optimized for different tasks. Below is a breakdown of
which model suits your needs best:

- For Developers & Programmers: The DeepSeek-Coder and DeepSeek-Coder-V2 models are designed for coding tasks such as writing and debugging code.

- For General Users: The DeepSeek-V3 model is a versatile option capable of handling a wide range of queries, from casual conversation to complex content generation.

- For Researchers & Advanced Users: The DeepSeek-R1 model specializes in advanced reasoning and logical analysis, making it ideal for problem-solving and research applications.

- For Vision Tasks: The DeepSeek-Janus family and DeepSeek-VL models are tailored for multimodal tasks, including image generation and processing.
Running DeepSeek R1 on Ollama
Step 1: Install Ollama

To run DeepSeek models on your local machine, you need to install Ollama:

Download Ollama: Click here to download*

For Linux users: run the following command in your terminal:
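Ollama's documented Linux installer is a single shell command:

```
curl -fsSL https://ollama.com/install.sh | sh
```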

Step 2: Pull the DeepSeek R1 Model

Once Ollama is installed, open your Command Line Interface (CLI) and pull the model:
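A minimal pull command, assuming the `deepseek-r1` tag from Ollama's model library (append a size tag such as `:7b` or `:14b` to choose a specific distilled variant):

```
ollama pull deepseek-r1
```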

*If you have downloaded the PDF, you will be able to click the link.
You can explore other DeepSeek models available on
Ollama here: Ollama Model Search.

This step may take some time, so wait for the download
to complete.

Step 3: Run the Model Locally

Once the model is downloaded, you can run it using the command:
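Assuming the same `deepseek-r1` tag as above:

```
ollama run deepseek-r1
```

This drops you into an interactive chat session in the terminal; type `/bye` to exit.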
The model is now available on the local machine and answers my questions without any hiccups.

Running DeepSeek-Janus-Pro-1B
on Google Colab

In this section, we’ll try out DeepSeek-Janus-Pro-1B using Google Colab. Before starting, make sure to set the runtime to T4 GPU for optimal performance.

Step 1: Clone the DeepSeek-Janus Repository

Run the following command in a Colab notebook:
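The Janus repository lives under the deepseek-ai organization on GitHub; in Colab, shell commands are prefixed with `!`:

```
!git clone https://github.com/deepseek-ai/Janus.git
```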


Step 2: Install Dependencies

Navigate to the cloned directory and install the required packages:
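A sketch of the install step, assuming the editable install described in the Janus repository's README:

```
%cd Janus
!pip install -e .
```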

Step 3: Load the Model and Move It to GPU

Now, we’ll import the necessary libraries and load the model onto CUDA (GPU):
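A sketch of the loading code, based on the Janus repository's README (`VLChatProcessor` comes from the `janus` package installed in the previous step; `deepseek-ai/Janus-Pro-1B` is the model ID on Hugging Face):

```python
import torch
from transformers import AutoModelForCausalLM
from janus.models import VLChatProcessor

model_path = "deepseek-ai/Janus-Pro-1B"

# The processor handles both text tokenization and image preprocessing
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

# Load the multimodal model, cast to bfloat16, and move it to the T4 GPU
vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
```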
Step 4: Pass an Image for Processing

Now, let’s pass an image to the model and generate a response.

📷 Input Image
Initializing the Prompt and System Role
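The conversation template below follows the chat format shown in the Janus README; the image path and question are placeholders to replace with your own:

```python
# Hypothetical image path and question -- substitute your own
image_path = "input_image.jpg"
question = "Describe this image in detail."

# Janus uses role tags like <|User|> / <|Assistant|>, and an
# <image_placeholder> token marks where the image is inserted
conversation = [
    {
        "role": "<|User|>",
        "content": f"<image_placeholder>\n{question}",
        "images": [image_path],
    },
    {"role": "<|Assistant|>", "content": ""},
]
```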

Processing the Input
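A sketch of the processing and generation step, again following the pattern in the Janus README (it reuses the `vl_gpt`, `vl_chat_processor`, `tokenizer`, and `conversation` objects defined above):

```python
from janus.utils.io import load_pil_images

# Load the image(s) referenced in the conversation and batch the inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)

# Embed the image and text tokens, then generate the answer
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
)
answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(answer)
```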


