Hands-On Guide Running DeepSeek LLMs Locally

DeepSeek provides various models tailored for different user needs, including coding, general queries, advanced reasoning, and multimodal tasks. Users can run the DeepSeek R1 model locally using Ollama or utilize the DeepSeek-Janus-Pro-1B model on Google Colab with GPU support. The document outlines installation steps and usage instructions for both methods.


Run Affordable DeepSeek LLMs & Multimodal LLMs Locally in 5 Minutes

Running DeepSeek-R1 on Ollama
Running DeepSeek-Janus-Pro-1B on Google Colab
Overview of DeepSeek Models
DeepSeek offers a diverse range of models, each optimized for different tasks. Below is a breakdown of which model suits your needs best:

For Developers & Programmers: The DeepSeek-Coder and DeepSeek-Coder-V2 models are designed for coding tasks such as writing and debugging code.

For General Users: The DeepSeek-V3 model is a versatile option capable of handling a wide range of queries, from casual conversations to complex content generation.

For Researchers & Advanced Users: The DeepSeek-R1 model specializes in advanced reasoning and logical analysis, making it ideal for problem-solving and research applications.

For Vision Tasks: The DeepSeek-Janus family and DeepSeek-VL models are tailored for multimodal tasks, including image generation and processing.
Running DeepSeek R1 on Ollama
Step 1: Install Ollama
To run DeepSeek models on your local machine, you
need to install Ollama:

Download Ollama: download the installer from the official website (ollama.com)*

For Linux users: run the following command in your terminal:
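The install command itself appeared as an image in the original; the standard one-line installer script published on the Ollama website is:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

This downloads and runs Ollama's official install script, so review it first if you prefer, and note that it may ask for sudo privileges.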

Step 2: Pull the DeepSeek R1 Model (Distilled Variant)
Once Ollama is installed, open your Command Line Interface (CLI) and pull the model:
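The pull command was also an image in the original. Assuming the 7B distilled variant (other sizes are listed on the Ollama model page), the command looks like:

```shell
ollama pull deepseek-r1:7b
```

Replace the `7b` tag with another size (e.g. `1.5b`, `14b`) if your hardware calls for a smaller or larger variant.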

*If you have downloaded the PDF, you will be able to click the link.
You can explore other DeepSeek models available on
Ollama here: Ollama Model Search.

This step may take some time, so wait for the download
to complete.

Step 3: Run the Model Locally
Once the model is downloaded, you can run it using the command:
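Assuming the same 7B tag pulled in the previous step, the run command would be:

```shell
ollama run deepseek-r1:7b
```

This opens an interactive chat session in the terminal; type `/bye` to exit.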
The model is now available on the local machine and answers my questions without any hiccups.

Running DeepSeek-Janus-Pro-1B
on Google Colab
In this section, we’ll try out DeepSeek-Janus-Pro-1B
using Google Colab. Before starting, make sure to set the
runtime to T4 GPU for optimal performance.
Step 1: Clone the DeepSeek-Janus Repository
Run the following command in a Colab notebook:
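The clone command was shown as an image; assuming the official DeepSeek-Janus repository on GitHub, it would be:

```shell
git clone https://github.com/deepseek-ai/Janus.git
```

In a Colab cell, prefix shell commands with `!` (e.g. `!git clone ...`).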
Step 2: Install Dependencies
Navigate to the cloned directory and install the required
packages:
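The dependency-install commands were images in the original; assuming the repository's editable-install setup, they would be:

```shell
cd Janus
pip install -e .
```

In Colab, use `%cd Janus` so the directory change persists across cells, then `!pip install -e .`.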

Step 3: Load the Model and Move It to GPU
Now, we'll import necessary libraries and load the model onto CUDA (GPU):
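The loading code was an image in the original. Below is a sketch based on the Janus repository's published example; the names `MultiModalityCausalLM` and `VLChatProcessor` come from that repository, not from this document:

```python
import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor

model_path = "deepseek-ai/Janus-Pro-1B"

# Load the chat processor (tokenizer + image preprocessing) and the weights.
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

# Move the model to the T4 GPU in bfloat16 to fit comfortably in memory.
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
```

The first run downloads the weights from Hugging Face, which can take a few minutes on Colab.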
Step 4: Pass an Image for Processing

Now, let's pass an image to the model and generate a response.
📷 Input Image
Initializing the Prompt and System Role

Processing the Input
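The prompt-setup and input-processing code appeared as images in the original. The sketch below follows the Janus repository's example, assuming the model and processor from the previous step are bound to `vl_gpt`, `vl_chat_processor`, and `tokenizer`; the question text and image path are hypothetical placeholders:

```python
from janus.utils.io import load_pil_images

# Initializing the prompt and system role: Janus uses <|User|>/<|Assistant|>
# role tags and an <image_placeholder> token marking where the image goes.
question = "Describe this image."   # hypothetical question
image_path = "input.jpg"            # hypothetical path to the input image

conversation = [
    {
        "role": "<|User|>",
        "content": f"<image_placeholder>\n{question}",
        "images": [image_path],
    },
    {"role": "<|Assistant|>", "content": ""},
]

# Processing the input: load the image and build model-ready tensors.
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)

# Generate and decode the model's answer.
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
)
answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(answer)
```

Upload your image to the Colab session (or mount Drive) and point `image_path` at it before running the cell.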

