Chapter-1: The "Best" Biometric Characteristic
Chapter-1: The "Best" Biometric Characteristic
INTRODUCTION
Biometric authentication can be defined as the automated identification, or identity verification, of a living person based on behavioral and physiological characteristics. There are two key words in this definition: “automated” and “person”. The word “automated”
differentiates biometrics from the larger field of human identification science. Biometric authentication
techniques are done completely by machine, generally (but not always) a digital computer. Forensic
laboratory techniques, such as latent fingerprint, DNA, hair and fiber analysis, are not considered part of
this field. Although automated identification techniques can be used on animals, fruits and vegetables,
manufactured goods and the deceased, the subjects of biometric authentication are living humans. For this
reason, the field should perhaps be more accurately called “anthropometric authentication”. The second
key word is “person”. Statistical techniques, particularly using fingerprint patterns, have been used to
differentiate or connect groups of people or to probabilistically link persons to groups, but biometrics is
interested only in recognizing people as individuals. All of the measures used contain both physiological
and behavioral components, both of which can vary widely or be quite similar across a population of
individuals. No technology is purely one or the other, although some measures seem to be more
behaviorally influenced and some more physiologically influenced. The behavioral component of all
biometric measures introduces a “human factors” or “psychological” aspect to biometric authentication as
well. In practice, we often abbreviate the term “biometric authentication” as “biometrics”, although the
latter term has been historically used to mean the branch of biology that deals with its data statistically
and by quantitative analysis. So “biometrics”, in this context, is the use of computers to recognize people,
despite all of the across-individual similarities and within-individual variations. Determining “true”
identity is beyond the scope of any biometric technology. Rather, biometric technology can only link a
person to a biometric pattern and any identity data (common name) and personal attributes (age, gender,
profession, residence, nationality) presented at the time of enrollment in the system. Biometric systems
inherently require no identity data, thus allowing anonymous recognition. Ultimately, the performance of
a biometric authentication system, and its suitability for any particular task, will depend upon the
interaction of individuals with the automated mechanism. It is this interaction of technology with human
physiology and psychology that makes “biometrics” such a fascinating subject.
Examples of physiological and behavioral characteristics currently used for automatic identification
include fingerprints, voice, iris, retina, hand, face, handwriting, keystroke, and finger shape. But this is
only a partial list as new measures (such as gait, ear shape, head resonance, optical skin reflectance and
body odor) are being developed all of the time. Because of the broad range of characteristics used, the
imaging requirements for the technology vary greatly. Systems might measure a single one-dimensional
signal (voice); several simultaneous one-dimensional signals (handwriting); a single two-dimensional
image (fingerprint); multiple two-dimensional measures (hand geometry); a time series of two-dimensional images (face and iris); or a three-dimensional image (some facial recognition systems).
Which biometric characteristic is best? The ideal biometric characteristic has five qualities: robustness,
distinctiveness, availability, accessibility and acceptability. By “robust”, we mean unchanging on an
individual over time. By “distinctive”, we mean showing great variation over the population. By
“available”, we mean that the entire population should ideally have this measure in multiples. By
“accessible”, we mean easy to image using electronic sensors. By “acceptable”, we mean that people do
not object to having this measurement taken from them.
The Applications
The operational goals of biometric applications are just as variable as the technologies: some systems
search for known individuals; some search for unknown individuals; some verify a claimed identity; some
verify an unclaimed identity; and some verify that the individual has no identity in the system at all. Some
systems search one or multiple submitted samples against a large database of millions of previously
stored “templates” – the biometric data given at the time of enrollment. Some systems search one or
multiple samples against a database of a few “models” – mathematical representations of the signal
generation process created at the time of enrollment. Some systems compare submitted samples against
models of both the claimed identity and impostor identities. Some systems search one or multiple samples
against only one “template” or “model”.
Although these devices rely on widely different technologies, much can be said about them in general.
Figure 1.1 shows a generic biometric authentication system divided into five subsystems: data collection,
transmission, signal processing, decision and data storage. We will consider these subsystems one at a
time.
Data Collection
Biometric systems begin with the measurement of a behavioral/physiological characteristic. Key to all
systems is the underlying assumption that the measured biometric characteristic is both distinctive
between individuals and repeatable over time for the same individual. The problems in measuring and
controlling these variations begin in the data collection subsystem.
Transmission
Some, but not all, biometric systems collect data at one location but store and/or process it at another.
Such systems require data transmission. If a great amount of data is involved, compression may be
required before transmission or storage to conserve bandwidth and storage space. Figure 1.1 shows
compression and transmission occurring before the signal processing and image storage. In such cases,
the transmitted or stored compressed data must be expanded before further use. The process of
compression and expansion generally causes quality loss in the restored signal, with loss increasing with
increasing compression ratio. The compression technique used will depend upon the biometric signal. An
interesting area of research is in finding, for a given biometric technique, compression methods with
minimum impact on the signal-processing subsystem.
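As a rough illustration of this trade-off, the following MATLAB sketch (assuming the Image Processing Toolbox and its sample image pout.tif) writes an image at decreasing JPEG quality settings and reports the reconstruction error, which grows as the compression ratio increases:
% Write a grayscale image at decreasing JPEG quality and measure the loss.
I = imread('pout.tif');
for q = [90 50 10]                 % lower quality = higher compression ratio
    imwrite(I, 'compressed.jpg', 'Quality', q);
    J = imread('compressed.jpg');
    rmse = sqrt(mean((double(I(:)) - double(J(:))).^2));
    fprintf('Quality %3d: RMS reconstruction error = %.2f\n', q, rmse);
end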
Signal Processing
Having acquired and possibly transmitted a biometric characteristic, we must prepare it for matching with
other like measures. Figure 1.1 divides the signal-processing subsystem into four tasks: segmentation,
feature extraction, quality control, and pattern matching. Segmentation is the process of finding the
biometric pattern within the transmitted signal. For example, a facial recognition system must first find
the boundaries of the face or faces in the transmitted image. Once the raw biometric pattern of interest has
been found and extracted from the larger signal, the pattern is sent to the feature extraction process. In
general, feature extraction is a form of non-reversible compression, meaning that the original biometric
image cannot be reconstructed from the extracted features. After feature extraction, or maybe even before,
we will want to check to see if the signal received from the data collection subsystem is of good quality.
If the features “don’t make sense” or are insufficient in some way, we can conclude quickly that the
received signal was defective and request a new sample from the data collection subsystem while the user
is still at the sensor. The development of this “quality control” process has greatly improved the
performance of biometric systems in the last few years. The feature sample, now of very small size compared to the
original signal, will be sent to the pattern matching process for comparison with one or more previously
identified and stored feature templates or models. The purpose of the pattern matching process is to
compare a presented feature sample to the stored data, and to send to the decision subsystem a quantitative
measure of the comparison.
Storage
The remaining subsystem to be considered is that of storage. There will be one or more forms of storage
used, depending upon the biometric system. Templates or models from enrolled users will be stored in a
database for comparison by the pattern matcher to incoming feature samples. For systems only
performing “one-to-one” matching, the database may be distributed on smart cards, optically read cards or
magnetic stripe cards carried by each enrolled user. Depending upon system policy, no central database
need exist, although in this application a centralized database can be used to detect counterfeit cards or to
reissue lost cards without re-collecting the biometric pattern.
Decision
The decision subsystem implements system policy by directing the database search, determines
“matches” or “non-matches” based on the distance or similarity measures received from the pattern
matcher, and ultimately makes an “accept/reject” decision based on the system policy. Such a decision
policy could be to reject the identity claim (either positive or negative) of any user whose pattern could
not be acquired. For an acquired pattern, the policy might declare a match for any distance lower than a
fixed threshold and “accept” a user identity claim on the basis of this single match, or the policy could be
to declare a match for any distance lower than a user-dependent, time-variant, or environmentally linked
threshold and require matches from multiple measures for an “accept” decision.
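As a minimal sketch of the first policy above, with purely illustrative numbers, a fixed-threshold decision in MATLAB might look like this:
% Hypothetical matcher distance and fixed system threshold (illustrative).
distance = 0.42;   % distance score received from the pattern matcher
tau = 0.5;         % fixed decision threshold set by system policy
if distance < tau
    disp('match: accept identity claim');
else
    disp('non-match: reject identity claim');
end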
Background
An automated face recognition system includes several related face processing tasks, such as detection of
a pattern as a face, face tracking in a video sequence, face verification, and face recognition. Face
detection generally learns the statistical models of the face and non-face images, and then applies a two-
class classification rule to discriminate between face and non-face patterns. Face tracking predicts the
motion of faces in a sequence of images based on their previous trajectories and estimates the current and
future positions of those faces. While face verification is mainly concerned with authenticating a claimed
identity posed by a person (“Is she the person who she claims to be?”), face recognition focuses on
recognizing the identity of a person from a database of known individuals. The overall face recognition
system can be summarized as a block diagram of the stages described below.
When an input image is presented to the face recognition system, the system first performs face detection
and facial landmark detection, such as the detection of the centers of the eyes. The system then
implements the normalization and cropping procedures, which perform the following three tasks:
(1) Spatial normalization, which aligns the centers of the eyes to predefined locations and fixes the
number of pixels between the eyes (intraocular distance) via rotation and scaling transformations;
(2) Facial region extraction, which crops the facial region that contains only the face, so that the
performance of face recognition is not affected by the factors not related to the face itself, such as hair
styles;
(3) Intensity normalization, which converts the facial region to a vector by concatenating its rows (or
columns), and then normalizes the pixels in the vector to zero mean and unit variance. Finally, the system
extracts features with high discriminating power for face recognition.
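Step (3) amounts to a standard statistical normalization. A minimal MATLAB sketch, using a stand-in matrix in place of a real cropped facial region, is:
% Intensity normalization of a (stand-in) cropped facial region.
faceRegion = magic(8);            % hypothetical stand-in for the cropped face
x = double(faceRegion(:));        % concatenate columns into a single vector
x = (x - mean(x)) / std(x);       % normalize to zero mean and unit variance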
Performance evaluation is an important factor for a face recognition system. The strength and weakness
of an automated face recognition system are evaluated using standard databases and objective
performance statistics.
Statistical methods usually start with the estimation of the distributions of the face and non-face patterns,
and then apply a pattern classifier or a face detector to search over a range of scales and locations for
possible human faces. Neural network-based methods, however, learn to discriminate the implicit
distributions of the face and non-face patterns by means of training samples and the network structure,
without involving an explicit estimation procedure.
Rowley et al. developed a neural network-based upright, frontal face detection system, which applies a
retinally connected neural network to examine small windows of an image and decide whether each
window contains a face. The face detector, which was trained using a large number of face and non-face
examples, contains a set of neural network-based filters and an arbitrator which merges detections from
individual filters and eliminates overlapping detections. In order to detect faces at any degree of rotation
in the image plane, the system was extended to incorporate a separate router network, which determines
the orientation of the face pattern. The pattern is then derotated back to the upright position so that it can be
processed by the earlier-developed upright detection system.
Hsu et al. presented a color-based face detection method under variable illumination and complex
background. First, the method applies a lighting compensation technique and a nonlinear color
transformation to detect skin regions in a color image. Then it generates face candidates based on the
spatial arrangement of the skin patches. Finally, the method constructs eye, mouth and boundary maps to
verify those candidates. Experiments show that the method is capable of detecting faces over a wide range
of facial variations in color, position, scale, orientation, pose and expression.
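Hsu et al.’s lighting compensation and nonlinear color transformation are more elaborate than can be shown here, but the general idea of detecting skin patches in a luma-independent color space can be sketched in MATLAB as follows; the Cb/Cr bounds are commonly cited illustrative values, not those of Hsu et al.:
% Crude skin-region detection in YCbCr space (illustrative thresholds only).
RGB = imread('peppers.png');       % any RGB test image shipped with MATLAB
ycbcr = rgb2ycbcr(RGB);
Cb = ycbcr(:,:,2);
Cr = ycbcr(:,:,3);
skin = (Cb >= 77 & Cb <= 127) & (Cr >= 133 & Cr <= 173);
imshow(skin)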
Robust face recognition schemes require both low-dimensional feature representation for data
compression purposes and enhanced discrimination abilities for subsequent image classification.
The representation methods usually start with a dimensionality reduction procedure, since the high
dimensionality of the original space makes the statistical estimation very difficult, if not impossible,
due to the fact that the high-dimensional space is mostly empty. The discrimination methods often
try to achieve high separability between different patterns. Table 1.4 shows some popular
representation and classification techniques and some methods that apply these techniques for face
recognition.
Principal Component Analysis is commonly used for deriving low-dimensional representations of input
images. Specifically, PCA derives an orthogonal projection basis that directly leads to dimensionality
reduction and possibly to feature selection. Applying the PCA technique to face recognition, Turk and
Pentland developed the well-known “Eigenface” method, where the Eigenfaces correspond to the
eigenvectors associated with the largest eigenvalues of the face covariance matrix. The Eigenfaces thus
define a feature space, or “face space”, which drastically reduces the dimensionality of the original space,
and face recognition is then carried out in the reduced space.
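The Eigenface computation itself is compact. The following MATLAB sketch uses synthetic data in place of real face images (the rows of X would normally be vectorized training faces):
% Eigenface sketch on synthetic data: 20 'faces' of 100 pixels each.
n = 20; d = 100;
X = randn(n, d);                   % stand-in for vectorized face images
X = X - repmat(mean(X, 1), n, 1);  % subtract the mean face
[U, S, V] = svd(X, 'econ');        % columns of V are the Eigenfaces
k = 5;                             % keep the k largest-eigenvalue directions
W = X * V(:, 1:k);                 % project faces into the reduced face space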
The Gabor wavelets, whose kernels are similar to the 2D receptive field profiles of the mammalian
cortical simple cells, exhibit desirable characteristics of spatial locality and orientation selectivity. The
biological relevance and computational properties of Gabor wavelets for image analysis have been
described. Lades et al. applied the Gabor wavelets for face recognition using dynamic link architecture
(DLA). The DLA starts by computing the Gabor jets, and then performs a flexible template comparison
between the resulting image decompositions using graph matching. Based on the 2D Gabor wavelet
representation and labeled elastic graph matching, Lyons et al. proposed an algorithm for two-class
categorization of gender, race and facial expression. The algorithm includes two steps: registration of a
grid with the face using either labeled elastic graph matching or manual annotation of 34 points on every
face image, and categorization based on the features extracted at grid points using Linear Discriminant
Analysis (LDA). Donato et al. recently compared a method based on Gabor representation with other
techniques and found that the former gave better performance. Liu and Wechsler presented a Gabor–
Fisher Classifier (GFC) method for face recognition. The GFC method, which is robust to illumination
and facial expression variability, applies the enhanced Fisher linear discriminant model or EFM to an
augmented Gabor feature vector derived from the Gabor wavelet transformation of face images. To
encompass all the features produced by the different Gabor kernels one concatenates the resulting Gabor
wavelet features to derive an augmented Gabor feature vector. The dimensionality of the Gabor vector
space is then reduced under the eigenvalue selectivity constraint of the EFM method to derive a low-
dimensional feature representation with enhanced discrimination power. Liu and Wechsler recently
developed an Independent Gabor Features (IGF) method for face recognition. The IGF method derives
independent Gabor features, whose independence property facilitates the application of the PRM method
for classification.
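A single Gabor kernel, and its response for one image, can be sketched in MATLAB as below; the scale, wavelength and orientation values are illustrative, not those used in the cited methods:
% One (real-valued) Gabor kernel and its convolution with an image.
[x, y] = meshgrid(-15:15, -15:15);
sigma = 4; lambda = 8; theta = pi/4;           % illustrative parameters
xr =  x*cos(theta) + y*sin(theta);
yr = -x*sin(theta) + y*cos(theta);
g = exp(-(xr.^2 + yr.^2)/(2*sigma^2)) .* cos(2*pi*xr/lambda);
I = double(imread('pout.tif'));
response = conv2(I, g, 'same');                % one component of a Gabor jet
imshow(abs(response), [])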
This chapter has surveyed recent research in biometric technology, its application in face detection and
recognition, discussed performance of the current face recognition systems, and presented promising
research directions. In particular, face detection methods reviewed include statistical, neural network-
based and color-based approaches. Face recognition methods surveyed include PCA-based, shape- and
texture-based, and Gabor wavelet-based approaches. These methods point to potential solutions for facial
recognition under conditions of pose and illumination variation, which recent vendor tests show are
challenging issues for face recognition.
CHAPTER 2: INTRODUCTION TO MATLAB
MATLAB is a high-level technical computing language and interactive environment for algorithm
development, data visualization, data analysis, and numeric computation. Using the MATLAB product,
we can solve technical computing problems faster than with traditional programming languages, such as
C, C++, and FORTRAN. We can use MATLAB in a wide range of applications, including signal and
image processing, communications, control design, test and measurement, financial modeling and
analysis, and computational biology. Add-on toolboxes (collections of special-purpose MATLAB
functions, available separately) extend the MATLAB environment to solve particular classes of problems
in these application areas. MATLAB provides a number of features for documenting and sharing your
work. We can integrate MATLAB code with other languages and applications, and distribute MATLAB
algorithms and applications. Its features include the following.
MATLAB is an interactive system whose basic data element is an array that does not require
dimensioning. This allows you to solve many technical computing problems, especially those with matrix
and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or FORTRAN.
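For example, the following statements create and operate on whole matrices and vectors without any dimensioning or explicit loops:
A = magic(4);         % a 4-by-4 matrix, created without declaring its size
b = sum(A, 2);        % all row sums in one vectorized statement
y = sin(0:0.1:1);     % an elementwise function applied to an entire vector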
The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy
access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB uses
software developed by the LAPACK and ARPACK projects, which together represent the state-of-the-art
in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In university environments, it
is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and
science. In industry, MATLAB is the tool of choice for high-productivity research, development, and
analysis.
2.1.1 Simulink
Simulink, a companion program to MATLAB, is an interactive system for simulating nonlinear dynamic
systems. It is a graphical mouse-driven program that allows you to model a system by drawing a block
diagram on the screen and manipulating it dynamically. It can work with linear, nonlinear, continuous-
time, discrete-time, multirate, and hybrid systems.
Blocksets are add-ons to Simulink that provide additional libraries of blocks for specialized applications
like communications, signal processing, and power systems.
Real-Time Workshop® is a program that allows you to generate C code from your block diagrams and to
run it on a variety of real-time systems.
2.1.2 Stateflow
Stateflow is an interactive design tool for modeling and simulating complex reactive systems. Tightly
integrated with Simulink and MATLAB, Stateflow provides Simulink users with an elegant solution for
designing embedded systems by giving them an efficient way to incorporate complex control and
supervisory logic within their Simulink models.
With Stateflow, you can quickly develop graphical models of event-driven systems using finite state
machine theory, statechart formalisms, and flow diagram notation. Together, Stateflow and Simulink
serve as an executable specification and virtual prototype of your system design.
2.2 The MATLAB System
The MATLAB Development Environment. This is the set of tools and facilities that help you use MATLAB functions
and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and
Command Window, a command history, and browsers for viewing help, the workspace, files, and the
search path.
The MATLAB Mathematical Function Library. This is a vast collection of computational algorithms
ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated
functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
The MATLAB language. This is a high-level matrix/array language with control flow statements,
functions, data structures, input/output, and object-oriented programming features. It allows both
“programming in the small” to rapidly create quick and dirty throw-away programs, and “programming in
the large” to create complete large and complex application programs.
Handle Graphics®. This is the MATLAB graphics system. It includes high-level commands for two-
dimensional and three-dimensional data visualization, image processing, animation, and presentation
graphics. It also includes low-level commands that allow you to fully customize the appearance of
graphics as well as to build complete graphical user interfaces on your MATLAB applications.
The MATLAB Application Program Interface (API). This is a library that allows you to write C and
FORTRAN programs that interact with MATLAB. It includes facilities for calling routines from
MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing
MAT-files.
MATLAB Desktop
When we start MATLAB, the MATLAB desktop appears, containing tools (graphical user interfaces) for
managing files, variables, and applications associated with MATLAB.
Desktop tools
This section provides an introduction to MATLAB’s desktop tools. You can also use MATLAB functions
to perform most of the tasks that the desktop tools provide. The tools are:
• Command Window
• Command History
• Launch Pad
• Help Browser
• Current Directory Browser
• Workspace Browser
• Array Editor
• Editor/Debugger
Figure 2.2 (i) – MATLAB Desktop
Command Window
Use the Command Window to enter variables and run functions and M-files.
Command History
Lines you enter in the Command Window are logged in the Command History window. In the
Command History, we can view previously used functions, and copy and execute selected lines.
Launch Pad
Use the Launch Pad for quick access to tools, demos, and documentation for the MathWorks products installed on your system.
Help Browser
Use the Help browser to search and view documentation for all MathWorks products. The Help browser
is a Web browser integrated into the MATLAB desktop that displays HTML documents. To open the
Help browser, click the help button in the toolbar, or type helpbrowser in the Command Window.
The Help browser consists of two panes, the Help Navigator, which you use to find information, and the
display pane, where you view the information.
Help Navigator
•Product filter – Set the filter to show documentation only for the products you specify.
•Contents tab – View the titles and tables of contents of documentation for your products.
•Index tab – Find specific index entries (selected keywords) in the MathWorks documentation for your
products.
•Search tab – Look for a specific phrase in the documentation. To get help for a specific function, set the
Search type to Function Name.
•Favorites tab – View a list of documents you previously designated as favorites.
Display Pane
After finding documentation using the Help Navigator, view it in the display pane. While viewing the
documentation, we can:
•Browse to other pages – Use the arrows at the tops and bottoms of the pages, or use the back and
forward buttons in the toolbar.
•Bookmark pages – Click the Add to Favorites button in the toolbar.
•Print pages – Click the print button in the toolbar.
•Find a term in the page – Type a term in the Find in page field in the toolbar and click Go.
Current Directory Browser
MATLAB file operations use the current directory and the search path as reference points. Any file you want to run
must either be in the current directory or on the search path.
A quick way to view or change the current directory is by using the Current Directory field in the desktop toolbar as
shown below.
To search for, view, open, and make changes to MATLAB-related directories and files, use the MATLAB Current
Directory browser. Alternatively, you can use the functions dir, cd, and delete.
Array Editor
Double-click on a variable in the Workspace browser to see it in the Array Editor. Use the Array Editor to
view and edit a visual representation of one- or two-dimensional numeric arrays, strings, and cell arrays
of strings that are in the workspace.
Editor/Debugger
Use the Editor/Debugger to create and debug M-files, which are programs you write to run MATLAB
functions. The Editor/Debugger provides a graphical user interface for basic text editing, as well as for M-
file debugging.
•Importing and Exporting Data – Techniques for bringing data created by other applications into the
MATLAB workspace, including the Import Wizard, and packaging MATLAB workspace variables for
use by other applications.
•Improving M-File Performance – The Profiler is a tool that measures where an M-file is spending its
time. Use it to help you make speed improvements.
•Interfacing with Source Control Systems – Access your source control system from within MATLAB,
Simulink, and Stateflow.
•Using Notebook – Access MATLAB’s numeric computation and visualization software from within a
word processing environment (Microsoft Word).
2.2.3 MATLAB Toolbox
Toolboxes are collections of special-purpose MATLAB functions, available separately, that extend the MATLAB environment to solve particular classes of problems. The Image Processing Toolbox, described in the following chapter, is one such collection.
CHAPTER 3: THE IMAGE PROCESSING TOOLBOX
3.1 Introduction
The Image Processing Toolbox software is a collection of functions that extend the capability of the
MATLAB numeric computing environment. The toolbox supports a wide range of image processing
operations, including spatial image transformations, morphological operations, neighborhood and block
operations, linear filtering and filter design, transforms, image analysis and enhancement, image
registration, deblurring, and region-of-interest operations.
Many of the toolbox functions are MATLAB M-files, a series of MATLAB statements that implement
specialized image processing algorithms. We can view the MATLAB code for these functions using the
statement
type function_name
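For example, assuming the Image Processing Toolbox is installed, the following prints the source of its imadjust function:
type imadjust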
We can extend the capabilities of the toolbox by writing our own M-files, or by using the toolbox in
combination with other toolboxes, such as the Signal Processing Toolbox™ software and the Wavelet
Toolbox™ software.
3.2 Images
The first step in MATLAB image processing is to understand that a digital image is composed of a two-
or three-dimensional matrix of pixels. Individual pixels contain a number or numbers representing the
grayscale or color value assigned to them. Color pictures generally contain three times as much data as
grayscale pictures, depending on what color representation scheme is used. Therefore, color pictures take
three times as much computational power to process. In this tutorial the method for conversion from color
to grayscale will be demonstrated and all processing will be done on grayscale images. However, in order
to understand how image processing works, we will begin by analyzing simple two-dimensional 8-bit
matrices.
3.2.1 Read and Display an Image
First, clear the MATLAB workspace of any variables and close open figure windows.
clear
close all
To read an image, use the imread command. The example reads one of the sample images included with
the toolbox, pout.tif, and stores it in an array named I.
I = imread('pout.tif');
imread infers from the file that the graphics file format is Tagged Image File Format (TIFF). For the list
of supported graphics file formats, see the imread function reference page.
Now display the image. The toolbox includes two image display functions: imshow and imtool. imshow
is the toolbox's fundamental image display function. imtool starts the Image Tool which presents an
integrated environment for displaying images and performing some common image processing tasks. The
Image Tool provides all the image display capabilities of imshow but also provides access to several other
tools for navigating and exploring images, such as scroll bars, the Pixel Region tool, Image Information
tool, and the Contrast Adjustment tool. This example uses imshow.
imshow(I)
To see how imread stores the image data in the workspace, use the whos command.
whos
Next, create a histogram of the image to examine the distribution of its pixel intensities.
figure, imhist(I)
Notice how the intensity range is rather narrow. It does not cover the potential range of [0, 255], and is
missing the high and low values that would result in good contrast.
The toolbox provides several ways to improve the contrast in an image. One way is to call the histeq
function to spread the intensity values over the full range of the image, a process called histogram
equalization.
I2 = histeq(I);
Call imhist again to create a histogram of the equalized image I2. If we compare the two histograms, the
histogram of I2 is more spread out than the histogram of I.
figure, imhist(I2)
Write the adjusted image to a disk file in PNG format, and then check the contents of the newly written file:
imwrite(I2, 'pout2.png');
imfinfo('pout2.png')
The imfinfo function returns information about the image in the file, such as its format, size, width, and
height.
ans =
Filename: 'pout2.png'
FileModDate: '29-Dec-2005 09:34:39'
FileSize: 36938
Format: 'png'
FormatVersion: []
Width: 240
Height: 291
BitDepth: 8
ColorType: 'grayscale'
FormatSignature: [137 80 78 71 13 10 26 10]
Colormap: []
Histogram: []
InterlaceType: 'none'
Transparency: 'none'
SimpleTransparencyData: []
BackgroundColor: []
RenderingIntent: []
Chromaticities: []
Gamma: []
XResolution: []
YResolution: []
ResolutionUnit: []
XOffset: []
YOffset: []
OffsetUnit: []
SignificantBits: []
ImageModTime: '29 Dec 2005 14:34:39 +0000'
Title: []
Author: []
Description: []
Copyright: []
CreationTime: []
Software: []
Disclaimer: []
Warning: []
Source: []
Comment: []
OtherText: []
The next example reads another image shipped with the toolbox, rice.png, approximates its uneven background, and then identifies and measures the individual rice grains.
I = imread('rice.png');
imshow(I)
background = imopen(I,strel('disk',15));
The example calls the imopen function to perform the morphological opening operation. Note how the
example calls the strel function to create a disk-shaped structuring element with a radius of 15. To remove
the rice grains from the image, the structuring element must be sized so that it cannot fit entirely inside a
single grain of rice.
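The surface display discussed next can be produced with commands along the following lines (following the standard toolbox example):
% View the background approximation as a surface, one of every 8 pixels.
figure, surf(double(background(1:8:end, 1:8:end))), zlim([0 255]);
set(gca, 'ydir', 'reverse');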
The example uses MATLAB indexing syntax to view only 1 out of 8 pixels in each direction; otherwise,
the surface plot would be too dense. The example also sets the scale of the plot to better match the range
of the uint8 data and reverses the y-axis of the display to provide a better view of the data. (The pixels at
the bottom of the image appear at the front of the surface plot.)
In the surface display, [0, 0] represents the origin, or upper-left corner of the image. The highest part of
the curve indicates that the highest pixel values of background (and consequently rice.png) occur near the
middle rows of the image. The lowest pixel values occur at the bottom of the image and are represented in
the surface plot by the lowest part of the curve.
Subtract the background image from the original image to create an image with a more uniform background:
I2 = I - background;
imshow(I2)
The following example adjusts the contrast in the image created in the previous step and displays it:
I3 = imadjust(I2);
imshow(I3);
Create a binary version of the image by thresholding so that the grains can be analyzed; graythresh computes a suitable global threshold automatically, and bwareaopen removes background noise objects of fewer than 50 pixels:
level = graythresh(I3);
bw = im2bw(I3,level);
bw = bwareaopen(bw, 50);
imshow(bw)
To count the rice grains, find the connected components in the binary image:
cc = bwconncomp(bw, 4)
cc =
Connectivity: 4
ImageSize: [256 256]
NumObjects: 95
PixelIdxList: {1x95 cell}
cc.NumObjects
ans =
95
To view the rice grain that is the 50th connected component, use its stored pixel index list:
grain = false(size(bw));
grain(cc.PixelIdxList{50}) = true;
imshow(grain);
labeled = labelmatrix(cc);
whos labeled
Name Size Bytes Class Attributes
labeled 256x256 65536 uint8
In the pseudo-color image, the label identifying each object in the label matrix maps to a different color in
the associated colormap matrix. Use label2rgb to choose the colormap, the background color, and how
objects in the label matrix map to colors in the colormap:
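One possible call, following the standard toolbox example, uses the spring colormap, a cyan background, and a shuffled label-to-color assignment:
RGB_label = label2rgb(labeled, @spring, 'c', 'shuffle');
imshow(RGB_label)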
Compute the area of each object in the image with the regionprops function:
graindata = regionprops(cc, 'basic')
graindata =
95x1 struct array with fields:
Area
Centroid
BoundingBox
To find the area of the 50th component, use dot notation to access the Area field in the 50th element of
the graindata structure array:
graindata(50).Area
ans =
194
grain_areas = [graindata.Area];
To find the grain with the smallest area, use the min function, which returns both the minimum value and its index:
[min_area, idx] = min(grain_areas)
min_area =
61
idx =
16
grain = false(size(bw));
grain(cc.PixelIdxList{idx}) = true;
imshow(grain);
Figure 3.3 (viii) – Smallest Grain
nbins = 20;
figure, hist(grain_areas, nbins)
title('Histogram of Rice Grain Area');