CV SVD L02 P1 IntroImageProcColor

The document outlines the course plan for CE632: Computer Vision, detailing prerequisites, objectives, outcomes, and a comprehensive syllabus covering image processing techniques, feature extraction, and applications in computer vision. It includes a list of experiments and recommended textbooks, emphasizing practical programming experience and theoretical understanding. Key topics include image acquisition, processing stages, and various applications in fields like biometrics and autonomous vehicles.


1

Computer Vision (CE611)
L#01
Digital Image Processing Fundamentals
By
Dr Sunita Dhavale

2
Computer Vision

3
CE632: Course Plan
Department: Computer Science and Engineering
Course Type: Professional Core
Course Title: Computer Vision
Course Code: CE632
L-T-P: 3-0-2
Credits: 4
Semester: I
Specialization: AI
Total Lectures: 48L + 2P (per week)
EndSem Marks: 50
Internal Marks: 30 (Quiz) + 20 (Lab)

4
Course objectives
• Prerequisites: statistical techniques, linear algebra, and computer programming knowledge are required.
• Course objectives: to introduce students to the fundamentals of image formation; to introduce students to the major ideas, methods, and techniques of computer vision and pattern recognition; to develop an appreciation for various issues in the design of computer vision and object recognition systems; and to provide the student with programming experience from implementing computer vision and object recognition applications.

5
Course Outcome
(Each outcome lists the Bloom's Taxonomy level targeted, the number of contact hours, and the marks.)

CO1 (Level 2: Remembering, Understanding; 14 contact hours; 10 marks):
Students will be able to understand and apply image processing techniques including filtering operations, thresholding techniques, edge detection techniques, etc. (PO1, PO2, PO3, PSO2)

CO2 (Level 3: Remembering, Understanding, Analysing; 19 contact hours; 10 marks):
Students will be able to understand and extract image features using techniques like corner and interest point detection, shape analysis, Fourier descriptors, RANSAC, GHT, etc. (PO1, PO2, PO3, PSO2)

CO3 (Level 3: Remembering, Understanding, Analysing; 13 contact hours; 10 marks):
Students will be able to understand and learn how the extracted features can be used to solve problems in various computer vision related applications. (PO1, PO2, PO3, PSO2)

CO4 (Level 4: Applying, Analysing; 2 hrs/week (Lab); 20 marks):
Students will be capable of applying their knowledge and skills to solve engineering problems in the computer vision related domain. (PO1, PO2, PO3, PSO2)

CO1-CO3 are assessed in the EndSem exam (50 marks). Total: 100 marks.
6
Syllabus
Unit 1 Image processing foundations: Review of image processing techniques: classical filtering operations,
thresholding techniques, edge detection techniques, mathematical morphology, texture analysis. Shapes
and regions: Binary shape analysis – connectedness – object labeling and counting – size filtering – distance
functions – skeletons and thinning
Unit 2 corner and interest point detection, deformable shape analysis – boundary tracking procedures – active
contours – shape models and shape recognition – centroidal profiles – handling occlusion – boundary
length measures – boundary descriptors – chain codes, Fourier descriptors – region descriptors – moments,
Hough transform: Line detection – Hough Transform (HT) for line detection – foot-of-normal method – line
localization – line fitting
Unit 3 Case study: spatial matched filtering – GHT for ellipse detection – object location – GHT for feature
collation, RANSAC for straight line detection – HT based circular object detection – accurate center location
– speed problem – ellipse detection
Unit 4 Case Study: Image based spam detection, Case Study: CV Applications - Face detection – Face recognition –
Eigen faces, Case Study: CV Applications - human gait analysis, Case Study: CV based Surveillance
Applications

7
List of Experiments
Sr No. Experiment Name
1 Introduction to Digital Image Processing using python
2 Study and Implement Image Transformation Techniques
3 Study and Implement Image Transformation Techniques
4 Study and Implement Edge Detection Techniques
5 Study and Implement Image Thresholding Transform
6 Study and Implement Morphological Operations
7 Study and Implement Harris Corner Point Detection
8 Study and Implement SIFT
9 Mini assignment: apply CV techniques to solve any real-world problem / presentations
10 CV Practice Test-1 / quiz / presentations

8
Text/Reference Books
• E. R. Davies, “Computer & Machine Vision”, Fourth Edition, Academic
Press, 2012.
• R. Szeliski, “Computer Vision: Algorithms and Applications”, Springer 2011.
• Simon J. D. Prince, “Computer Vision: Models, Learning, and Inference”,
Cambridge University Press, 2012.
• Mark Nixon and Alberto S. Aquado, “Feature Extraction & Image
Processing for Computer Vision”, Third Edition, Academic Press, 2012.
• D. L. Baggio et al., “Mastering OpenCV with Practical Computer Vision
Projects”, Packt Publishing, 2012.
• Jan Erik Solem, “Programming Computer Vision with Python: Tools and
algorithms for analyzing images”, O'Reilly Media, 2012.
• Sunita Vikrant Dhavale, “Advanced Image-based Spam Detection and
Filtering Techniques”, IGI Global, 2017
• Research papers for study (if any): white papers on multimedia from
IEEE/ACM/Elsevier/Springer/NVidia sources.

9
Image Acquisition Process
Introduction
• What is Digital Image Processing?

Digital Image
— a two-dimensional function f(x, y), where x and y are spatial coordinates.
— the amplitude of f at any point (x, y) is called the intensity or gray level at that point.

Digital Image Processing
— processing digital images by means of a computer; it covers low-, mid-, and high-level processes:
  low-level: inputs and outputs are images
  mid-level: outputs are attributes extracted from input images
  high-level: making sense of an ensemble of recognized objects

Pixel
— the elements of a digital image.

A Simple Image Formation Model
A Simple Image Formation Model
f(x, y) = i(x, y) · r(x, y)

f(x, y): intensity at the point (x, y)
i(x, y): illumination at the point (x, y)
  (the amount of source illumination incident on the scene)
r(x, y): reflectance/transmissivity at the point (x, y)
  (the amount of illumination reflected/transmitted by the object)

where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1
Some Typical Ranges of Reflectance
• Reflectance

– 0.01 for black velvet

– 0.65 for stainless steel

– 0.80 for flat-white wall paint

– 0.90 for silver-plated metal

– 0.93 for snow
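The formation model f(x, y) = i(x, y) · r(x, y) can be sketched numerically. The illumination value and the function name below are illustrative assumptions; the reflectances are the typical values listed above.

```python
# Illustrative sketch of the image formation model f(x, y) = i(x, y) * r(x, y).
def formed_intensity(illumination, reflectance):
    """Return f = i * r at a single point, checking the model's ranges."""
    assert illumination > 0        # 0 < i(x, y) < infinity
    assert 0 < reflectance < 1     # 0 < r(x, y) < 1
    return illumination * reflectance

i = 100.0  # hypothetical incident illumination at one point
print(formed_intensity(i, 0.01))  # black velvet reflects very little
print(formed_intensity(i, 0.93))  # snow reflects most of the illumination
```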


Image Sampling and Quantization

Sampling: digitizing the coordinate values.
Quantization: digitizing the amplitude values.
Representing Digital Images

• The representation of an M × N numerical array:

  f(x, y) = [ f(0, 0)      f(0, 1)      ...   f(0, N-1)
              f(1, 0)      f(1, 1)      ...   f(1, N-1)
              ...          ...          ...   ...
              f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1) ]
Representing Digital Images

• The same M × N array in element notation:

  A = [ a_{0,0}      a_{0,1}      ...   a_{0,N-1}
        a_{1,0}      a_{1,1}      ...   a_{1,N-1}
        ...          ...          ...   ...
        a_{M-1,0}    a_{M-1,1}    ...   a_{M-1,N-1} ]
Representing Digital Images

• The representation of an M × N numerical array in MATLAB (1-based indexing):

  f(x, y) = [ f(1, 1)    f(1, 2)    ...   f(1, N)
              f(2, 1)    f(2, 2)    ...   f(2, N)
              ...        ...        ...   ...
              f(M, 1)    f(M, 2)    ...   f(M, N) ]
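The two conventions differ only in index origin, which can be checked directly in Python, whose indexing is 0-based like the first convention (the array name and sizes below are illustrative):

```python
# A small M x N image as a nested list; element f[x][y] in the 0-based
# convention corresponds to MATLAB's f(x+1, y+1) in the 1-based convention.
M, N = 2, 3
f = [[10 * x + y for y in range(N)] for x in range(M)]  # sample values

print(f[0][0])          # first pixel: 0-based f(0, 0), MATLAB's f(1, 1)
print(f[M - 1][N - 1])  # last pixel: 0-based f(M-1, N-1), MATLAB's f(M, N)
```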
Representing Digital Images
• Discrete intensity interval [0, L-1], where L = 2^k.

• The number b of bits required to store an M × N digitized image:

  b = M × N × k
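A quick sketch of the storage formula b = M × N × k (the function name is our own):

```python
# Storage in bits for an M x N image with k bits per pixel (L = 2**k levels).
def storage_bits(M, N, k):
    return M * N * k

# e.g. a 1024 x 1024 image with 256 gray levels (k = 8):
b = storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")  # 8388608 bits = 1048576 bytes
```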
Representing Digital Images
Digital Image?
•Common image formats include:
– 1 sample per point (B&W or Grayscale)
– 3 samples per point (Red, Green, and Blue)
– 4 samples per point (Red, Green, Blue, and “Alpha”, a.k.a. Opacity)

•For most of this course we will focus on grey-scale images


Image processing
• An image processing operation typically defines a new image g in terms of an existing image f.
• We can transform either the range of f (its intensity values) or the domain of f (its spatial coordinates).
• What kinds of operations can each perform?

What is Computer Vision? (cont…)
•The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes:

Low-level process:  input: image;      output: image.          Examples: noise removal, image sharpening.
Mid-level process:  input: image;      output: attributes.     Examples: object recognition, segmentation.
High-level process: input: attributes; output: understanding.  Examples: scene understanding, autonomous navigation.
Key Stages in Digital Image Processing

Starting from the problem domain, the key stages are: image acquisition, image enhancement, image restoration, colour image processing, image compression, morphological processing, segmentation, representation & description, and object recognition.
Key Stages in Digital Image Processing: Image Acquisition

Key Stages in Digital Image Processing: Image Enhancement

Key Stages in Digital Image Processing: Image Restoration

Key Stages in Digital Image Processing: Morphological Processing

Key Stages in Digital Image Processing: Segmentation

Key Stages in Digital Image Processing: Object Recognition

Key Stages in Digital Image Processing: Representation & Description

Key Stages in Digital Image Processing: Image Compression

Key Stages in Digital Image Processing: Colour Image Processing
Applications & Research Topics
Document Handling
Signature Verification
Biometrics
Fingerprint Verification / Identification
Fingerprint Identification Research

Minutiae Matching

Delaunay Triangulation
Object Recognition
Object Recognition Research

(Figure: two reference views; a novel view is recognized.)


Indexing into Databases

• Shape content
Indexing into Databases (cont’d)

• Color, texture
Target Recognition

• Department of Defense (Army, Airforce, Navy)


Interpretation of Aerial Photography

Interpretation of aerial photography is a problem domain in both


computer vision and registration.
Autonomous Vehicles

• Land, Underwater, Space


Traffic Monitoring
Face Detection
Face Recognition
Face Detection/Recognition Research
Facial Expression Recognition
Hand Gesture Recognition

• Smart Human-Computer User Interfaces


• Sign Language Recognition
Human Activity Recognition
Medical Applications

• skin cancer, breast cancer


Morphing
Inserting Artificial Objects into a Scene
Companies In this Field In India
• Sarnoff Corporation
• Kritikal Solutions
• National Instruments
• GE Laboratories
• Ittiam, Bangalore
• Interra Systems, Noida
• Yahoo India (Multimedia Searching)
• nVidia Graphics, Pune (have high requirements)
• Microsoft research
• DRDO labs
• ISRO labs
• …
Image Processing applications

RADAR (Radio Detection And Ranging): extremely short bursts of radio energy (traveling at the speed of
light) are transmitted, reflected off a target, and returned as an echo.
Particle/high-energy physics is the study of the fundamental particles and forces that constitute matter
and radiation.
Radiology is a branch of medicine that deals with radiant energy in the diagnosis and treatment of
diseases.
Seismology is the study of earthquakes and seismic waves (acoustic energy that travels through and
around the Earth or another planetary body).
Electromagnetic Spectrum

Non-ionizing radiation is composed of electric and magnetic fields (EMFs) that do not have sufficient
energy to remove electrons from atoms or molecules, e.g., ultraviolet (UV), visible light, infrared (IR),
microwave, radio (and television), and extremely low frequency (ELF) radiation. It is produced by lasers,
power lines, household appliances, cellular phones, and radios.
TYPES OF IMAGES
• Our eyes record very little of the information that is available at any given moment.
• The human eye has a limited bandwidth: the band of electromagnetic (EM) radiation that we are able
to see, or "visible light".
• A diversity of image types arises from nearly every type of radiation:
– medical imaging
– new sensors that record image data
– PET (positron emission tomography)
– MRI (magnetic resonance imaging)
– CAT (computer-aided tomography)
– X-ray data
• Non-EM radiation is also useful for imaging.
• Ultrasound: acoustic waves that propagate through a medium by means of vibrations of the molecules
that make up the medium.
• E.g., high-frequency sound waves (ultrasound) are used to create images of the human body, and
low-frequency sound waves are used to create images of the earth's subsurface.
63
Recording the various types of
interaction of radiation with matter

Opaque objects cannot transmit light; they reflect, scatter, or absorb all of it, e.g., carbon black and
mirrors.
Luminous objects can emit light energy by themselves, e.g., a light bulb.
Radiation sources include radioactive substances.
TYPES OF IMAGES
• Reflection images sense radiation that has been reflected from the surfaces of objects.
• The radiation itself may be ambient or artificial, and may come from a localized source or from
multiple or extended sources.
• Optical imaging through the eye is based on reflection images.
• Non-visible-light examples: radar images, sonar images, laser images, and some types of electron
microscope images.
• Emission images - objects being imaged are self-luminous. E.g.
thermal or infrared images in medical, astronomical, and military
applications; self-luminous visible light objects - light bulbs and
stars; and MRI images, which sense particle emissions.
TYPES OF IMAGES
• An image may reveal how the object creates radiation, or the internal structure of the object being
imaged.
• A thermal camera, for example, captures warm objects such as people even in low-light situations.
• Absorption images yield information about the internal structure of objects: radiation passes through
objects and is partially absorbed or attenuated by the material composing them.
• The degree of absorption dictates the level of the sensed radiation in the recorded image, e.g., X-ray
images, microscopic images, and certain types of sonic images.
Reflection images - (a), (b) - visible light band (bas-relief patterns cast onto the coins) and microwave band
(synthetic aperture radar image of DFW airport)
Emission images - (c), (d) - forward-looking infrared (FLIR- used on military aircraft/thermographic camera
that senses infrared radiation) image and a visible light image of the globular star cluster Omega Centauri-
consists of over a million stars
Absorption - (e), (f) - digital (radiographic-low energy xrays) mammogram and a conventional light
micrograph (using microscope)
Scale of images: it is possible to image objects extending over 10^30 m and as small as 10^-10 m.
Dimensionality of images and video
Sampling is the process of converting a continuous-space (or continuous-space/time)
signal into a discrete-space (or discrete-space/time) signal i.e. vector of numbers
A sampled image is stored as a matrix; in a color image, there is one matrix per color channel.

An indexed array element is called a picture element, or pixel.

An aspect ratio of 4:3 is common.
visual effect of different image sampling
densities
Under-sampling effect: insufficient sampling causes aliasing, where image frequencies appear that have
no physical meaning, creating a false pattern and a potential loss of information relative to the original
analog image.
Nyquist theorem: a continuous signal can be perfectly reconstructed from its samples if the spatial
sampling frequency is greater than or equal to twice the maximum frequency present in the image.
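The Nyquist condition can be written as a one-line check (the function name is our own; fs and fmax are the sampling and maximum signal frequencies):

```python
# Nyquist condition: sampling at fs captures content up to fmax without
# aliasing only if fs >= 2 * fmax.
def satisfies_nyquist(fs, fmax):
    return fs >= 2 * fmax

print(satisfies_nyquist(300, 100))  # True: no aliasing
print(satisfies_nyquist(150, 100))  # False: under-sampling causes aliasing
```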
Quantization is the process of converting a continuous-valued image that has a continuous range into a
discrete-valued image that has a discrete range. It is a nonlinear, irreversible operation.
Example: 4, 2, and 1 bit(s) per pixel.
Gray-level quantization gives different gray-level resolution; coarse quantization produces the false
contouring effect.
Example: 8, 4, 2, and 1 bit(s) per pixel.
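A minimal uniform re-quantization sketch matching the bit-depth examples above (the function name is our own):

```python
# Re-quantize an 8-bit gray level (0-255) down to k bits: 2**k uniform levels.
# Small k produces the false contouring effect mentioned above.
def quantize(level, k):
    step = 256 // (2 ** k)          # width of each quantization bin
    return (level // step) * step   # representative level of the bin

print([quantize(v, 1) for v in (10, 100, 200)])  # [0, 0, 128]
print([quantize(v, 4) for v in (10, 100, 200)])  # [0, 96, 192]
```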
Three types of digital images
• Color: three layers of grayscale images, normally RGB; 24 bits per pixel of color.
• Grayscale: intensity levels 0-255 (8-bit images).
• Black and white: two intensity levels (binary image), 0 and 1.
• f(x, y) ∈ [Lmin, Lmax] or [0, L-1]; 0 is black, L-1 is white.
• Spatial resolution: a measure of the smallest discernible detail in an image (pixels per unit distance,
dots per inch (dpi)); resize an image to see its effects. A 20-megapixel camera has a higher capability
to resolve detail than an 8-megapixel camera.
• Intensity resolution: a measure of the smallest discernible change in intensity level; the number of
bits used to quantize intensity (8-bit image, 16-bit image, etc.).
Image as function
Color Representation
• Colors correspond to electromagnetic waves described by their wavelength.
• The visible spectrum, i.e., the portion of the electromagnetic spectrum that can be detected by the
human eye, ranges from 390 nm (violet) to 750 nm (red).
• Colors are considered to be formed from different combinations of the primary colors red, green,
and blue.
• These three colors can be added to create the secondary colors magenta (red + blue), cyan
(green + blue), and yellow (green + red).
• White can be formed if the three primary colors are mixed, or if a secondary color is mixed with its
opposite primary color (all in the right intensities).
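The additive mixing described above can be sketched with full-intensity 8-bit primaries (the helper names are our own):

```python
# Channel-wise additive mixing of full-intensity 8-bit primaries.
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def add_colors(c1, c2):
    """Additive mix, clipped to the 8-bit range."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

print(add_colors(RED, BLUE))    # magenta: (255, 0, 255)
print(add_colors(GREEN, BLUE))  # cyan:    (0, 255, 255)
print(add_colors(RED, GREEN))   # yellow:  (255, 255, 0)
print(add_colors(add_colors(RED, GREEN), BLUE))  # white: (255, 255, 255)
```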
RGB Color Model
• Used for representing colors in electronic devices such as TVs and computer monitors, scanners, and
digital cameras.
• RGB is an additive model: the RGB colors are combined in different quantities or proportions to
reproduce other colors.
• Pixels have 8 bits of depth per channel, a range of [0, 255] for each color.
• When all the channels have the minimum value, the resulting color is black.
• When all the channels have the maximum value, the resulting color is white.
RGB Color Model

The RGB color cube is based on the Cartesian coordinate system.
The primary and secondary colors are at the corners of the cube.
Black is at the origin and white is at the opposite corner.
The diagonal between black and white is the gray scale.
RGB space is device-dependent: the same RGB triplet will be displayed slightly differently on different
monitors.
It does not correlate well with human perception and is sensitive to changes in lighting conditions.
HSV Color Model
• Created by Alvy Ray Smith.
• Composed of three components: hue, saturation, and value; also known as HSB (hue, saturation,
and brightness).
• The possible values for the hue attribute range from 0 to 360, and the values for the other two
attributes range from 0 to 100.
• Based on cylindrical coordinates; it is actually a nonlinear transformation of the RGB system.
• This color system allows the separation of the three components of a specific color (hue, saturation,
and intensity).
• Well suited to characterizing colors in practical terms for human interpretation.
HSV Color Model
• The hue attribute carries the information concerning the main wavelength in the color, i.e., colors
in their purest form.
• Saturation refers to how strong or weak a color is (high saturation being strong), i.e., the dominance
of hue in the color.
• Brightness/value refers to how light or dark a color is (light having a high value); it is the dimension
of lightness/darkness.
• A fully saturated color does not contain white light; S = 1 gives a pure color. If a pure color is mixed
with black, intensity decreases.
• Chromaticity is a description that combines hue and saturation.
• HSV lacks perceptual uniformity: it does not ensure that equal distances in color space correspond
to similar perceptual differences.
RGB->HSV->RGB
CMYK Color Model
• Composed of the cyan, magenta, yellow, and black colors.
• The basis of this model is light absorption, as the visible colors come from the non-absorbed light.
• This space is usually used by printers and photocopiers to reproduce the majority of the colors in
the visible spectrum.
• It is a subtractive color system.
• Cyan is the opposite color of red, i.e., it acts as a filter that absorbs red. The same occurs with
magenta and green, and with yellow and blue.
CMYK Color Model
Actually, the original subtractive model is CMY. Although equal
amounts of cyan, magenta, and yellow produce the black color in
theory, this combination in practice (printing on a paper) does not
produce a true black. In order to overcome this problem, the fourth
color (black) is added to the model (CMYK).
First normalize the RGB values: R' = R/255, G' = G/255, B' = B/255.
The black key (K) color is calculated from the red (R'), green (G'), and blue (B') colors:
  K = 1 - max(R', G', B')
The cyan (C), magenta (M), and yellow (Y) colors are calculated from the normalized colors and K:
  C = (1 - R' - K) / (1 - K)
  M = (1 - G' - K) / (1 - K)
  Y = (1 - B' - K) / (1 - K)
The inverse conversion back to RGB is:
  R = 255 × (1 - C) × (1 - K)
  G = 255 × (1 - M) × (1 - K)
  B = 255 × (1 - Y) × (1 - K)
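The formulas above translate directly into code; the division-by-zero guard for pure black is our addition:

```python
# RGB -> CMYK using the formulas above; the pure-black guard avoids
# dividing by zero when R = G = B = 0.
def rgb_to_cmyk(R, G, B):
    r, g, b = R / 255, G / 255, B / 255   # normalize to [0, 1]
    K = 1 - max(r, g, b)                  # black key
    if K == 1:
        return 0.0, 0.0, 0.0, 1.0         # pure black
    C = (1 - r - K) / (1 - K)
    M = (1 - g - K) / (1 - K)
    Y = (1 - b - K) / (1 - K)
    return C, M, Y, K

print(rgb_to_cmyk(255, 0, 0))  # red   -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0, 0, 0))    # black -> (0.0, 0.0, 0.0, 1.0)
```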
YIQ Color Model
• The colour model most widely used in television broadcasting.
• Y stands for the luminance part and IQ stands for the chrominance part.
• In black-and-white television, only the luminance part (Y) was broadcast.
• The Y value is similar to the grayscale part.
• The colour information is represented by the IQ part. The YIQ model is used in the conversion of
grayscale images to RGB colour images.
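The slide does not give the RGB-to-YIQ conversion, so the sketch below uses the commonly quoted NTSC coefficients (an assumption; exact values vary slightly between references). Python's standard colorsys.rgb_to_yiq implements a similar matrix.

```python
# RGB -> YIQ with commonly quoted NTSC coefficients (assumed: the slide
# itself does not give the matrix).
def rgb_to_yiq(r, g, b):
    """r, g, b in [0, 1]; returns luminance Y and chrominance I, Q."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

# A pure gray input carries no chrominance: I and Q are (almost) zero.
print(rgb_to_yiq(0.5, 0.5, 0.5))
```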
LAB Color Model/Space
• encapsulates Lightness (L) and two color-
opponent dimensions: Green-Red (A) and Blue-
Yellow (B).
• designed to approximate human vision more
closely.
• L (Lightness): Represents the perceived brightness
from black to white. (0–100)
• A and B channels: Represent color information,
with A axis ranging from green to red, and B axis
from blue to yellow. (-128–127)
• LAB’s separation of color from brightness makes it
more suitable for tasks requiring precise color
analysis and manipulation.
• LAB color space is device-independent, i.e., colors are represented consistently across different
devices and lighting conditions, making it ideal for applications requiring uniform color
representation.
(See: https://medium.com/@weichenpai/understanding-rgb-ycbcr-and-lab-color-spaces-f9c4a5fe485a)
Color Representation
• The color depth measures the amount of color information available
to display or print each pixel of a digital image.
• A high color depth leads to more available colors, and consequently to a more accurate color
representation. For example, a pixel with one bit of depth has only two possible colors: black
and white.
• A pixel with 8 bits of depth has 256 possible values, and a pixel with 24 bits of depth has more
than 16 million possible values.
• Usually, color depths vary between 1 and 64 bits per pixel in digital images.
• The color models are used to specify colors as points in a coordinate
system, creating a specific standard.
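The relationship between color depth and the number of available colors is simply 2 raised to the number of bits (the function name is our own):

```python
# Number of representable colors for a given color depth in bits per pixel.
def num_colors(depth_bits):
    return 2 ** depth_bits

print(num_colors(1))   # 2 (black and white)
print(num_colors(8))   # 256
print(num_colors(24))  # 16777216 (more than 16 million)
```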
Image Formats
• The most usual raster formats are GIF, JPEG, PNG, TIFF, and BMP.
• TIFF (Tagged Image File Format) is a flexible format that usually stores 8 or 16 bits per color (red,
green, blue) for a total of 24 or 48 bits, respectively. The extensions used are TIFF or TIF. Data
inside TIFF files can be losslessly or lossily compressed.
• JPEG (Joint Photographic Experts Group) files store data in a lossy format (in most cases). Almost
all digital cameras can save images in JPEG format, which supports 8 bits per color for a total of
24 bits, usually producing small files.
• The PNG (Portable Network Graphics) format was created as a free and open-source alternative
to GIF. It supports true color (16 million colors) while GIF only supports 256 colors. PNG stands
out when an image is formed by large, uniformly colored areas. It is lossless.
• BMP (Windows Bitmap) supports graphic files inside the Microsoft Windows operating system.
Typically, BMP file data are not compressed, which results in large files.
Python
# Taking user inputs for R, G and B
R = int(input("Enter R value: "))
G = int(input("Enter G value: "))
B = int(input("Enter B value: "))

# Constraining the values to the range 0 to 1
R_dash = R / 255
G_dash = G / 255
B_dash = B / 255

Cmax = max(R_dash, G_dash, B_dash)
Cmin = min(R_dash, G_dash, B_dash)
delta = Cmax - Cmin

# hue calculation
if delta == 0:
    H = 0
elif Cmax == R_dash:
    H = 60 * (((G_dash - B_dash) / delta) % 6)
elif Cmax == G_dash:
    H = 60 * (((B_dash - R_dash) / delta) + 2)
elif Cmax == B_dash:
    H = 60 * (((R_dash - G_dash) / delta) + 4)

# saturation calculation
if Cmax == 0:
    S = 0
else:
    S = delta / Cmax

# value calculation
V = Cmax

# print output. H in degrees; S and V in percentage.
# (these values may also be represented from 0 to 255)
print("H = {:.1f}°".format(H))
print("S = {:.1f}%".format(S * 100))
print("V = {:.1f}%".format(V * 100))
Thank you

Any Questions???
