

Disaster Risk Monitoring Using Satellite Imagery

00 - Introduction
Welcome to the NVIDIA DLI Disaster Risk Monitoring Using Satellite Imagery lab. In
this lab, you will learn how to build and deploy a flood detection segmentation model
using data collected from Sentinel-1 satellites. In order to achieve this, we will use
NVIDIA's DALI, TAO Toolkit, TensorRT, and Triton Inference Server.
Learning Objectives
In this course, you will learn important concepts about using satellite imagery to
develop deep learning-based disaster risk monitoring systems, including:
The Application of Computer Vision for Disaster Risk Monitoring
Manipulation of Data Collected by Earth Observation Satellites
Ways to Efficiently Process Large Imagery Data
Deep Learning Model Development Challenges
End-to-End Machine Learning Workflow
Table of Contents
This lab is broken down into four parts. In the first part, our focus is primarily on
learning how to process large satellite images. In the second part of the lab, we will
learn to efficiently train a segmentation model. Next, we will deploy the model in the
third part of the lab. In the last part of the lab, we will utilize the deep learning model
to perform an impact analysis. This hands-on lab will provide opportunities to get
familiar with each step in the development workflow and to enable customizations
that are most appropriate for your own use cases.
1. Disaster Risk Monitoring Systems and Data Pre-processing
2. Efficient Model Training
3. Model Deployment for Inference
4. UNOSAT Flood Event Case Study
JupyterLab
For this hands-on lab, we use JupyterLab to manage our environment. The
JupyterLab Interface is a dashboard that provides access to interactive IPython
notebooks, as well as the folder structure of our environment and a terminal window
into the Ubuntu operating system. The first view you will see includes a menu bar at
the top, a file browser in the left sidebar, and a main work area that is initially open
to this "introduction" notebook.


The file browser can be navigated just like any other file explorer. A double click
on any of the items will open a new tab with its content.
The main work area includes tabbed views of open files that can be closed,
moved, and edited as needed.
The notebooks, including this one, consist of a series of content and code cells.
To execute code in a code cell, press Shift+Enter or click the Run button in the
menu bar above while the cell is highlighted. Sometimes a content cell will
switch to editing mode; executing it with Shift+Enter or the Run button will
switch it back to a readable form.
To interrupt cell execution, click the Stop button in the menu bar or navigate to
the Kernel menu and select Interrupt Kernel.
We can run terminal commands in notebook cells by prepending an
exclamation point, or bang (!), to the command.

Exercise #1 - Practice
Try executing the simple print statement in the cell below.
In [ ]: # DO NOT CHANGE THIS CELL
# activate this cell by selecting it with the mouse or arrow keys, then use Shift+Enter or the Run button to execute it
print('This is just a simple print statement')

Now try executing the terminal command in the cell below.


In [ ]: # DO NOT CHANGE THIS CELL
# activate this cell by selecting it with the mouse or arrow keys, then use Shift+Enter or the Run button to execute it
!echo 'This is another simple print statement'
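
As a side note (our addition, not one of the lab's cells), IPython also lets you capture a shell command's output into a Python variable by assigning from a ! command. A minimal sketch:

# capture the output of 'ls' into a Python variable (an IPython SList)
files = !ls
print(files)       # behaves like a list of strings, one entry per output line
print(files[0])    # individual lines can be indexed like a normal list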

NVIDIA GPU Cloud


This lab requires that the user have an NVIDIA GPU Cloud (NGC) account and API
key. If you have not done so already, please:
1. Register and activate a free NGC account
2. Generate your NGC API key and save it in a safe location
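
If you prefer to keep the key out of the notebook itself, one option (a sketch of our own, not required by the lab) is to export it as an environment variable in a terminal and read it from Python:

# assumption: the key was exported beforehand, e.g. with `export NGC_API_KEY=...`
import os
apikey = os.environ.get('NGC_API_KEY', '<MY_NGC_API_KEY>')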

Exercise #2 - NGC CLI and Docker Registry


Instructions:
Edit the cell below by replacing <MY_NGC_API_KEY> with your key and execute
the cell to set the variable.
Execute the cell below to create a configuration file for NGC.
Execute the cell below to move the newly created config file to ~/.ngc to
configure the NGC CLI.
Execute the cell below to log in to NGC's Docker registry.

Execute the cell below to pull the TAO container images.


In [ ]: apikey='<<<<MY_NGC_API_KEY>>>>'

In [ ]: # DO NOT CHANGE THIS CELL

# write the config file
config_dict = {'apikey': apikey, 'format_type': 'json', 'org': 'nvidia'}
with open('config', 'w') as f:
    f.write(';WARNING - This is a machine generated file. Do not edit manually.\n')
    f.write(';WARNING - To update local config settings, see "ngc config set -h"\n')
    f.write('\n[CURRENT]\n')
    for k, v in config_dict.items():
        f.write(k + '=' + v + '\n')

# preview the config file
!cat config

In [ ]: # DO NOT CHANGE THIS CELL

# move the config file to ~/.ngc (create the directory first, then move)
!mkdir -p ~/.ngc && mv config ~/.ngc/
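
If you would like to confirm the move worked (an optional check of our own, not one of the lab's cells), the NGC CLI reads its settings from ~/.ngc/config, which you can inspect directly from a notebook cell:

# optional sanity check: list the NGC config directory and print the config file
!ls ~/.ngc
!cat ~/.ngc/config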

In [ ]: # DO NOT CHANGE THIS CELL


# login to NGC's docker registry
!docker login -u '$oauthtoken' -p $apikey nvcr.io
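
A note on the command above (our clarification): NGC's Docker registry expects the literal string $oauthtoken as the username, which is why it is wrapped in single quotes, while $apikey is expanded by IPython to the value of the Python variable set earlier before the command reaches the shell.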

In [ ]: # DO NOT CHANGE THIS CELL


# pull TAO docker container images
!docker pull nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
!docker pull nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy
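
As an optional sanity check (our addition, not part of the lab's cells), you can list the local Docker images for the TAO Toolkit repository to confirm both pulls completed:

# optional: confirm the TAO Toolkit images are available locally
!docker images nvcr.io/nvidia/tao/tao-toolkit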

Well done! If the output includes "Login Succeeded", then you are ready to begin. Let's
move to the next notebook.