Lect#3 Basic Concepts Part 2
A Real Example Case Study:
Fish Classification: Sea Bass / Salmon
The process of sorting incoming fish on a conveyor belt according to their type.
Three basic steps separate sea bass from salmon.
Example: "Sea Bass" vs. "Salmon" (Cont.)
Step I: Preprocessing
Noise introduced by the sensors must be removed (the effect of noise can reduce the reliability of the measured feature values).
Goal: Preprocess the image captured by the camera so that subsequent operations are simplified, reducing the noise without losing relevant information.
• Preprocessing steps:
  Image processing:
  • Adjust the level of illumination
  • Remove noise
  • Enhance the level of contrast
  Segmentation:
  • Isolate different fishes from one another
  • Isolate fishes from the background
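The preprocessing steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the lecture's actual pipeline: it assumes a grayscale image stored as a 2-D array, and the `background_level` threshold is a made-up value.

```python
import numpy as np

def preprocess(image, background_level=40):
    """Sketch of the preprocessing step on a grayscale image (2-D array).

    `background_level` is an assumed threshold for illustration only.
    """
    # Remove noise with a simple 3x3 mean filter
    h, w = image.shape
    padded = np.pad(image.astype(float), 1, mode="edge")
    denoised = sum(padded[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    # Enhance contrast by stretching intensities to the full 0-255 range
    lo, hi = denoised.min(), denoised.max()
    stretched = (denoised - lo) / max(hi - lo, 1e-9) * 255.0
    # Isolate the fish from the (dark) background with a fixed threshold
    mask = stretched > background_level
    return stretched, mask
```

A real system would use proper filtering and segmentation routines (e.g. from OpenCV); the point here is only the order of the operations: denoise, enhance contrast, then segment.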
Example: "Sea Bass" vs. "Salmon" (Cont.)
Step II: Feature Extraction
This is one of the most critical steps in pattern recognition system design.
This step is carried out by measuring and selecting certain "features" or "properties" of the fish to be classified.
Goal: Extract features (with good distinguishing ability) from the preprocessed image to be used for subsequent classification.
• Examples of how to choose features:
  • A sea bass is usually longer than a salmon, so "length" could be a good candidate feature.
  • A sea bass is usually brighter than a salmon, so the "lightness" of the fish scales could be another good candidate feature.
Example: "Sea Bass" vs. "Salmon" (Cont.)
Step III: Classification
Features are passed to a classifier (model) that evaluates the feature measurements and makes a decision.
Goal: To distinguish different types of objects (in this case, sea bass vs. salmon) based on the extracted features.
Problem Analysis
Some steps must be taken to design a model of the system that automates the process of sorting incoming fish on a conveyor belt according to type (salmon or sea bass).
• Set up a camera.
• Take some sample images to extract features.
• Note the physical differences between the two types of fish:
  Length
  Lightness
  Width
  Number & shape of fins
  Position of the mouth
• This is the set of all suggested features to be used in our classifier!
Problem Analysis
• Learning:
  Obtain training samples
  Make measurements
  Inspect the results (histogram)
[Histogram for length. h-axis: length of fish; v-axis: number of fishes with a certain length]
On average, a sea bass is somewhat longer than a salmon, but the two histograms overlap too much: the length feature alone gives poor separation.
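The length histograms can be mimicked with synthetic data. The means, spreads, and sample sizes below are invented for illustration; they only reproduce the qualitative picture that the two length distributions overlap heavily.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical length measurements (arbitrary units) for each class
salmon_lengths = rng.normal(30, 4, 200)    # salmon: somewhat shorter
seabass_lengths = rng.normal(38, 4, 200)   # sea bass: somewhat longer

# Build the two histograms over a shared set of bins, as in the slide
bins = np.linspace(15, 55, 21)
h_salmon, _ = np.histogram(salmon_lengths, bins)
h_seabass, _ = np.histogram(seabass_lengths, bins)

# Count fish that fall in bins occupied by BOTH classes; a large count
# means the length feature separates the classes poorly
overlap = int(np.minimum(h_salmon, h_seabass).sum())
print(f"{overlap} of {len(salmon_lengths)} fish lie in overlapping bins")
```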
Problem Analysis
• Length by itself is not reliable.
• Try another feature: lightness.
[Histogram for lightness. h-axis: lightness of fish scales; v-axis: number of fishes with a certain lightness]
On average, a sea bass is much brighter than a salmon. There is less overlap, so the lightness feature gives better separation, but it is still a bit unsatisfactory.
• Cost of misclassification: depends on the application.
  Is it better to misclassify salmon as bass or vice versa?
  • Put salmon in a can of bass: lose profit.
  • Put bass in a can of salmon: lose the customer.
Problem Analysis
There is a cost associated with our decision.
Our task is to make a decision rule that minimizes a given cost.
• We can use more than one feature.
• By adopting the lightness and adding the width of the fish, the image of each fish is reduced to a point, or feature vector, in a 2-D feature space:
Fish: x^T = [x1, x2], where x1 = lightness and x2 = width.
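As a toy illustration of decisions in this 2-D feature space, consider the sketch below. The feature values and the linear rule are all made up; the point is only the mechanics of classifying a feature vector x^T = [x1, x2].

```python
import numpy as np

# Invented feature vectors x^T = [x1, x2] = [lightness, width]
salmon = np.array([[3.0, 4.5], [3.5, 4.0]])
seabass = np.array([[7.0, 5.5], [6.5, 6.0]])

# A hand-picked linear decision rule g(x) = w.x + b:
# g(x) > 0 -> sea bass, otherwise salmon
w = np.array([1.0, 0.5])
b = -7.5

def classify(x):
    return "sea bass" if w @ x + b > 0 else "salmon"

for x in np.vstack([salmon, seabass]):
    print(x, "->", classify(x))
```

In practice the weights would be learned from training samples rather than chosen by hand, which is exactly what the learning step above is for.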
Problem Analysis
[Scatter plot: each fish as a point in the 2-D lightness-width feature space]
Problem Analysis
Complex models:
e.g., it is useless to get 100% accuracy when answering homework questions while getting low accuracy when answering exam questions.
There is a tradeoff between performance on the training set and the simplicity of the classifier.
[Scatter plots of the feature vectors]
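The homework-vs-exam tradeoff can be demonstrated numerically with a deliberately over-complex model. This sketch uses a polynomial fit as a stand-in for a classifier; all of the data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Noisy "homework" (training) and "exam" (test) samples from the same line
x_train = rng.uniform(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, 10)
x_test = rng.uniform(0, 1, 50)
y_test = 2 * x_test + rng.normal(0, 0.2, 50)

def train_and_test_error(degree):
    # Higher polynomial degree = more complex model
    coef = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((np.polyval(coef, x) - y) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

simple = train_and_test_error(1)    # simple model
complex_ = train_and_test_error(9)  # nearly memorizes the training set
# The complex model looks better on the training set; the simple one
# usually generalizes better to unseen test data
print("simple  (train, test):", simple)
print("complex (train, test):", complex_)
```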
Problem Analysis
Try and try again to reach the "optimal" model.
import cv2
import numpy as np

def extract_fish_length(image):
    # Segment the fish from the background (a simplified stand-in for
    # detecting the head and tail points with OpenCV)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    xs = np.where(mask.any(axis=0))[0]
    # Approximate the head-to-tail distance by the horizontal extent
    return int(xs.max() - xs.min()) if xs.size else 0

def extract_lightness(image):
    # Convert the image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Average pixel intensity (lightness); illumination normalization omitted
    return float(gray.mean())
Problem Analysis
def classify_fish(image):
    fish_length = extract_fish_length(image)
    lightness = extract_lightness(image)
    # Set critical values (l* and x*)
    l_critical = 10    # example threshold for fish length
    x_critical = 150   # example threshold for lightness
    if fish_length > l_critical and lightness > x_critical:
        return "Sea Bass"
    else:
        return "Salmon"

# Load a test fish image and classify it
test_image = cv2.imread("path/to/your/test/image.jpg")
print(classify_fish(test_image))
Face Recognition Technology
The face is an important part of who you are and how people identify you.
Facial recognition software has the ability to first recognize faces based on facial features, which is a technological feat in itself.
• If you look in the mirror, you can see that your face has certain distinguishable landmarks.
• These are the peaks that are detected to extract the different facial features.
Face Recognition Technology
HOW FACE RECOGNITION SYSTEMS WORK
Face Recognition Technology
HOW FACE RECOGNITION SYSTEMS WORK (Cont.)
There are about 80 nodal points on a human face.
Here are some of the nodal points that are measured by the software:
• Distance between the eyes
• Width of the nose
• Depth of the eye sockets
• Cheekbones
• Jawline
• Chin
These nodal points are measured to create a numerical code, a string of numbers that represents a face in the database; this code is called a faceprint.
Only 14 to 22 nodal points are needed for facial recognition software to complete the recognition process.
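A faceprint and its lookup can be sketched as a nearest-neighbor search over these numerical codes. The enrolled names, the measurement values, and the rejection threshold below are all invented for illustration.

```python
import numpy as np

# A "faceprint": a vector of hypothetical nodal-point measurements
# (eye distance, nose width, eye-socket depth, ... in made-up units)
database = {
    "alice": np.array([62.0, 35.0, 27.0, 41.0, 58.0, 22.0]),
    "bob":   np.array([66.0, 38.0, 25.0, 44.0, 61.0, 24.0]),
}

def match_faceprint(probe, threshold=5.0):
    # Compare the probe against every enrolled faceprint, keep the
    # closest one, and reject as "unknown" if it is still too far away
    name, dist = min(
        ((n, float(np.linalg.norm(probe - fp))) for n, fp in database.items()),
        key=lambda pair: pair[1],
    )
    return name if dist < threshold else "unknown"
```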
Face Recognition Technology
A face recognition system generally consists of four modules, as depicted in the figure:
• Detection
• Alignment
• Feature extraction
• Matching
• Face detection
  segments the face areas from the background.
• Face alignment
  normalizes faces so that features can be localized more accurately.
• Feature extraction
  provides effective information (a template / feature vector) that is useful for distinguishing between the faces of different persons.
• Face matching
  matches the extracted feature vector of the input face against those of enrolled faces in the database; it then produces the identity of the face when a match is found, or indicates an unknown face (no match).
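The four modules can be wired together as a simple pipeline. Every function below is a stub with invented behavior (the real modules are far more involved); the sketch only shows how the stages feed into each other.

```python
def detect_face(image):
    # Face detection: segment the face area from the background
    # (stub: pretend the whole image is the face)
    return image

def align_face(face):
    # Face alignment: normalize pose and scale (stub: identity)
    return face

def extract_features(face):
    # Feature extraction: reduce the face to a template / feature vector
    # (stub: mean intensity of the first row)
    return [sum(face[0]) / len(face[0])]

def match_face(features, database, threshold=1.0):
    # Face matching: compare against enrolled templates; report
    # "unknown" when even the best match is too far away
    best = min(database, key=lambda name: abs(database[name][0] - features[0]))
    if abs(database[best][0] - features[0]) < threshold:
        return best
    return "unknown"

def recognize(image, database):
    face = align_face(detect_face(image))
    return match_face(extract_features(face), database)
```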
Optical Character Recognition Technology
• It is the electronic conversion of images of text into machine-encoded characters.
• It has the ability to scan text from images of handwritten and printed texts.
• It helps to edit and search the words in scanned documents such as PDF files.
Optical Character Recognition Technology
Let's explore the hardware and software components that make up OCR systems:
• Hardware Components:
  – Optical Scanner: reads physical, printed documents and converts them into digital data.
• Software Components:
  – Image Analysis: The OCR software begins by analyzing the scanned image. It classifies light areas as the background and dark areas as the text. This step is crucial for identifying characters.
  – Character Recognition Process: Once the image is pre-processed, the OCR software identifies individual characters. It matches these characters against known patterns.
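Matching characters against known patterns can be illustrated with tiny binary templates. The two 3x3 glyphs below are invented examples; real OCR engines use much richer patterns and features.

```python
import numpy as np

# Tiny 3x3 binary templates for two known character patterns
# (1 = dark/text pixel, 0 = light/background pixel)
templates = {
    "I": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
}

def recognize_char(glyph):
    # Match the glyph against every known pattern and keep the one
    # with the fewest mismatching pixels
    return min(templates, key=lambda c: int(np.sum(templates[c] != glyph)))
```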
Implementation of an OCR system
A typical OCR system consists of several processing steps.
• The first step in the process is to digitize the analog document into regions using an optical scanner.
• When the regions containing text are located, each symbol is extracted through a segmentation process.
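The segmentation step can be sketched for a single line of text: classify dark pixels as text, then cut the line at empty columns, one run of non-empty columns per symbol. The threshold value is an assumption.

```python
import numpy as np

def segment_characters(image, threshold=128):
    # Dark pixels are classified as text, light pixels as background
    text = image < threshold
    # A column with no text pixels separates two symbols; group the
    # remaining columns into runs, one run per extracted symbol
    cols = text.any(axis=0)
    segments, start = [], None
    for j, has_text in enumerate(list(cols) + [False]):
        if has_text and start is None:
            start = j
        elif not has_text and start is not None:
            segments.append((start, j))
            start = None
    return segments
```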
Implementation of an OCR system (Cont.)
• The extracted symbols may then be preprocessed (eliminating noise) to facilitate the extraction of features in the next step.
• The identity of each symbol is found by comparing the extracted features with descriptions of the symbol classes obtained through a previous learning phase.
• Finally, appropriate information is used to reconstruct the words and numbers of the original text.
Implementation of an OCR system (Cont.)
For example:
We separate the characters into segments and use their geometric properties to identify them (feature extraction), and then use a recognition model trained on a prior database (recognition).
Implementation of an OCR system (Cont.)
Here's a simple example using Keras and TensorFlow:
Python
import cv2
import numpy as np
from keras.models import load_model