Machine Vision
Toolbox
for MATLAB
Release 4
Peter Corke
Release 4.1
Release date July 2017
Licence LGPL
Toolbox home page http://www.petercorke.com/robot
Discussion group http://groups.google.com.au/group/robotics-tool-box
Copyright © 2017 Peter Corke
[email protected]
http://www.petercorke.com
Contents
Preface
Functions by category

1 Introduction
1.1 Changes in MVTB 4
1.1.1 Incompatible changes
1.1.2 New features
1.1.3 Enhancements
1.2 How to obtain the Toolbox
1.2.1 From .mltbx file
1.2.2 From .zip file
1.2.3 MATLAB Online™
1.2.4 Simulink®
1.2.5 Documentation
1.3 Compatible MATLAB versions
1.4 Use in teaching
1.5 Use in research
1.6 Support
1.7 Related software
1.7.1 Image Processing Toolbox
1.7.2 Computer Vision System Toolbox
1.7.3 Octave
1.7.4 Robotics Toolbox
1.8 Acknowledgements
CentralCamera
chi2inv_rtb
cie_primaries
closest
cmfrgb
cmfxyz
col2im
colnorm
colordistance
colorize
colorkmeans
colorname
colorseg
distance
dtransform
e2h
EarthView
edgelist
epidist
epiline
FeatureMatch
filt1d
FishEyeCamera
fmatrix
h2e
hist2d
hitormiss
homline
homography
homtrans
homwarp
Hough
humoments
ianimate
ibbox
iblobs
icanny
iclose
icolor
iconcat
iconvolve
icorner
icp
idecimate
idilate
idisp
idisplabel
idouble
iendpoint
ierode
igamm
igraphseg
ihist
iint
iisum
ilabel
iline
im2col
ImageSource
imatch
imeshgrid
imoments
imono
imorph
imser
inormhist
intgimage
invcamcal
iopen
ipad
ipaste
ipixswitch
iprofile
ipyramid
irank
iread
irectify
ireplicate
iroi
irotate
isamesize
iscale
iscalemax
iscalespace
iscolor
ishomog
ishomog2
isift
isimilarity
isize
ismooth
isobel
isrot
istereo
istretch
isurf
isvec
ithin
ithresh
itrim
itriplepoint
ivar
iwindow
kcircle
kdgauss
kdog
kgauss
klaplace
klog
kmeans
ksobel
ktriangle
lambda2rg
lambda2xy
LineFeature
loadspectrum
luminos
mkcube
mkgrid
morphdemo
Movie
mpq
mpq_poly
ncc
niblack
npq
npq_poly
numcols
numrows
OrientedScalePointFeature
otsu
peak
peak2
pickregion
plot_arrow
plot_box
plot_circle
plot_ellipse
plot_homline
plot_point
plot_poly
plot_sphere
Plucker
pnmfilt
PointFeature
polydiff
radgrad
ransac
Ray3D
RegionFeature
rg_addticks
rgb2xyz
rluminos
sad
ScalePointFeature
showcolorspace
showpixels
SiftPointFeature
SphericalCamera
ssd
stdisp
SurfPointFeature
tb_optparse
testpattern
Tracker
tristim2cc
upq
upq_poly
usefig
VideoCamera
VideoCamera_fg
VideoCamera_IAT
xaxis
xyzlabel
yaxis
YUV
yuv2rgb
yuv2rgb2
zcross
zncc
zsad
zssd
Functions by category

Color
blackbody, ccdresponse, ccxyz, cie_primaries, cmfrgb, cmfxyz, colordistance, colorname, lambda2rg, lambda2xy, loadspectrum, luminos, rg_addticks, rgb2xyz, rluminos, showcolorspace, tristim2cc, yuv2rgb, yuv2rgb2

Devices
AxisWebCamera, EarthView, ImageSource, Movie, VideoCamera, VideoCamera_fg, VideoCamera_IAT, YUV

Test patterns
mkcube, mkgrid, testpattern

Monadic operators
colorize, icolor, igamm, imono, inormhist, istretch

Camera models
CentralCamera, FishEyeCamera, SphericalCamera

Spatial operators
icanny, iconvolve, ismooth, isobel, radgrad

Kernels
kcircle, kdgauss, kdog, kgauss, klaplace, klog, ksobel, ktriangle

Features

Region features
RegionFeature, colorkmeans, colorseg, ibbox, iblobs, igraphseg, ilabel, imoments, imser, ithresh, niblack, otsu

Other features
apriltags, hist2d, ihist, iprofile, peak, peak2

Similarity
imatch, isimilarity, ncc, sad, ssd

Image sequence
BagOfWords, Tracker, ianimate

Shape changing
homwarp, idecimate, ipad, ipyramid, ireplicate, iroi, irotate, isamesize, iscale, itrim

Plotting
plot_arrow, plot_box, plot_circle, plot_ellipse, plot_homline, plot_point, plot_poly, plot_sphere

Homogeneous coordinates
e2h, h2e, homline, homtrans

Homogeneous coordinates in 2D
ishomog2

Homogeneous coordinates in 3D
ishomog, isrot

Utility

Image utility
idisp, idisplabel, iread, pnmfilt, showpixels

3D geometry
Plucker, Ray3D, icp

Integral image
iisum, intgimage

Edges and lines
bresenham, edgelist

General
about, chi2inv_rtb, closest, col2im, colnorm, distance, filt1d, im2col, imeshgrid, iscolor, isize, isvec, kmeans, numcols, numrows, pickregion, polydiff, ransac, tb_optparse, usefig, xaxis, xyzlabel, yaxis, zcross
Introduction
• solar.dat has the units changed from W/m²/nm to W/m²/m.
• Data files that were previously in the folder private are now in data.
• Options of the form 'Tcam', 'Tobj', 'T0' or 'Tf' are now 'pose', 'objpose', 'pose0' or 'posef' respectively.
• For a vector of RegionFeature objects all the properties can now be extracted
as vectors.
• The folder symbolic contains Live Scripts that demonstrate use of the MATLAB Symbolic Math Toolbox™ for deriving Jacobians related to bundle adjustment, the image Jacobian for visual servoing, and Gaussian kernels.
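As a flavour of what those Live Scripts do, here is a minimal sketch of deriving a Gaussian kernel and its derivative symbolically. It assumes the Symbolic Math Toolbox is installed; the variable names are illustrative, not taken from the Toolbox sources.

```matlab
% Define a 2-D Gaussian symbolically and differentiate it.
% The derivative of Gaussian is the basis of kernels like kdgauss.
syms u v sigma real
G = 1/(2*pi*sigma^2) * exp(-(u^2 + v^2)/(2*sigma^2));   % 2-D Gaussian
dGdu = simplify(diff(G, u))                              % horizontal derivative
```

Substituting a numeric value for sigma and evaluating over a grid of u, v would yield the corresponding numeric kernel.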
1.1.3 Enhancements
1.2 How to obtain the Toolbox

The Machine Vision Toolbox is freely available from the Toolbox home page at
http://www.petercorke.com
The file is available in MATLAB toolbox format (.mltbx) or zip format (.zip).
1.2.1 From .mltbx file

Since MATLAB R2014b, toolboxes can be packaged as, and installed from, files with
the extension .mltbx. Download the most recent version of robot.mltbx or
vision.mltbx to your computer. Using MATLAB, navigate to the folder where
you downloaded the file and double-click it (or right-click then select Install). The
Toolbox will be installed within the local MATLAB file structure, and the paths will be
appropriately configured for this and future MATLAB sessions.
1.2.2 From .zip file

Download the most recent version of robot.zip or vision.zip to your computer. Use
your favourite unarchiving tool to unzip the files that you downloaded. To add the
Toolboxes to your MATLAB path execute the command
>> addpath RVCDIR ;
>> startup_rvc
where RVCDIR is the full pathname of the folder where the folder rvctools was
created when you unzipped the Toolbox files. The script startup_rvc adds various
subfolders to your path and displays the version of the Toolboxes. After installation
the files for both Toolboxes reside in a top-level folder called rvctools and beneath
this are a number of folders:
robot The Robotics Toolbox
vision The Machine Vision Toolbox
common Utility functions common to the Robotics and Machine Vision Toolboxes
simulink Simulink blocks for robotics and vision, as well as examples
contrib Code written by third-parties
A menu-driven demonstration can be invoked by
>> mvtbdemo
The MVTB distribution includes the code and example images necessary to do almost
all the examples in the Robotics, Vision & Control book. Additional files are available:
• contrib2.zip Additional third-party code for the functions isift and
isurf. Note that the code here is a slightly modified version of the open-source
packages.
If you already have the Robotics Toolbox installed then download the zip file(s) to the
directory above the existing rvctools directory and then unzip them. The files from
these zip archives will properly interleave with the Robotics Toolbox files.
Ensure that the folder rvctools is on your MATLAB search path. You can do this
by issuing the addpath command at the MATLAB prompt. Then issue the command
startup_rvc and it will add a number of paths to your MATLAB search path. You
need to set up the path every time you start MATLAB, but you can automate this by
setting up environment variables, editing your startup.m script, or pressing the
"Update Toolbox Path Cache" button under MATLAB General preferences.
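For example, a minimal startup.m along the following lines would automate this. The pathname below is a placeholder assumption; substitute the folder where you actually unzipped the Toolbox files.

```matlab
% startup.m — executed automatically at the start of each MATLAB session.
% RVCDIR is a placeholder: point it at the folder that contains rvctools.
RVCDIR = fullfile(getenv('HOME'), 'matlab');
addpath(RVCDIR);
startup_rvc    % adds the Toolbox subfolders to the search path
```

Place this file somewhere on the default MATLAB path (for example your user MATLAB folder) so it runs at every startup.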
1.2.3 MATLAB Online™

The Toolbox works well with MATLAB Online™, which lets you access a MATLAB
session from a web browser, tablet or even a phone. The key is to get the MVTB
files into the filesystem associated with your Online account. The easiest way to do
this is to install MATLAB Drive™ from MATLAB File Exchange or using the Get
Add-Ons option from the MATLAB GUI. This functions just like Google Drive or
Dropbox: a local filesystem on your computer is synchronized with your MATLAB
Online account. Copy the MVTB files into the local MATLAB Drive cache and they
will soon be synchronized; invoke startup_rvc to set up the paths and you are ready
to do machine vision on your mobile device or in a web browser.
1.2.4 Simulink®

Simulink® is the block diagram simulation environment for MATLAB. The following
Simulink models are included with the Toolbox, but rely on having RTB installed.
General

sl_ibvs            Classical IBVS
sl_partitioned     XY/Z partitioned IBVS

Robot manipulator arm

sl_arm_ibvs        Servo a 6DOF robot arm

Mobile ground robot

sl_drivepose_vs    Drive a nonholonomic robot to a pose
sl_mobile_vs       Drive a holonomic vehicle to a pose

Flying robot

sl_quadrotor_vs    Visual servoing of a quadrotor to a target
1.2.5 Documentation
1.3 Compatible MATLAB versions

The Toolbox has been tested under R2016b and R2017aPRE. Compatibility problems
are increasingly likely the older your version of MATLAB is.
1.4 Use in teaching

This is definitely encouraged! You are free to put the PDF manual (robot.pdf) or
the web-based documentation (html/*.html) on a server for class use. If you plan to
distribute paper copies of the PDF manual then every copy must include the first two
pages (cover and licence).
Links to other resources such as MOOCs or the Robot Academy can be found at
www.petercorke.com/moocs.
1.5 Use in research

If the Toolbox helps you in your endeavours then I'd appreciate you citing the Toolbox
when you publish. The details are:
@book{Corke17a,
Author = {Peter I. Corke},
Note = {ISBN 978-3-319-54413-7},
Edition = {Second},
Publisher = {Springer},
Title = {Robotics, Vision \& Control: Fundamental Algorithms in {MATLAB}},
Year = {2017}}
or
P.I. Corke, Robotics, Vision & Control: Fundamental Algorithms in MATLAB.
Second edition. Springer, 2017. ISBN 978-3-319-54413-7.
which is also given in electronic form in the CITATION file.
1.6 Support
There is no support! This software is made freely available in the hope that you find it
useful in solving whatever problems you have to hand. I am happy to correspond with
people who have found genuine bugs or deficiencies but my response time can be long
and I can’t guarantee that I respond to your email.
I can guarantee that I will not respond to any requests for help with assignments
or homework, no matter how urgent or important they might be to you. That’s
what your teachers, tutors, lecturers and professors are paid to do.
You might instead like to communicate with other users via the Google Group called
“Robotics and Machine Vision Toolbox”
http://tiny.cc/rvcforum
which is a forum for discussion. You need to sign up in order to post, and the signup
process is moderated by me, so allow a few days for this to happen. I need you to write a
few words about why you want to join the list so I can distinguish you from a spammer
or a web-bot.
1.7.1 Image Processing Toolbox
The Image Processing Toolbox™ (IPT) from MathWorks is an official and supported
product. This toolbox includes a comprehensive set of image processing operations.
1.7.2 Computer Vision System Toolbox
The Computer Vision System Toolbox™ (CVST) from MathWorks is an official and
supported product. System toolboxes are aimed at developers of systems. This toolbox
includes a comprehensive set of feature detectors and descriptors, and can be used with
Simulink to conveniently build machine vision pipelines that can run on target hardware.
1.7.3 Octave
1.7.4 Robotics Toolbox
The Robotics Toolbox (RTB) for MATLAB provides a very wide range of useful robotics
functions and is used to illustrate principles in the Robotics, Vision & Control book.
You can obtain this from http://www.petercorke.com/robot.
1.8 Acknowledgements
This release includes functions for computing image plane homographies and the fundamental
matrix, contributed by Nuno Alexandre Cid Martins of I.S.R., Coimbra.
RANSAC code by Peter Kovesi; pose estimation by Francesco Moreno-Noguer, Vincent
Lepetit and Pascal Fua at the CVLab-EPFL; color space conversions by Pascal Getreuer;
numerical routines for geometric vision by various members of the Visual Geometry
Group at Oxford (from the web site of the Hartley and Zisserman book1); the k-means
and MSER algorithms by Andrea Vedaldi and Brian Fulkerson; the graph-based
image segmentation software by Pedro Felzenszwalb; and the SURF feature detector
by Dirk-Jan Kroon at U. Twente. The Camera Calibration Toolbox by Jean-Yves
Bouguet is used unmodified. Some of the MEX files use some really neat macros
that were part of the package VISTA, Copyright 1993, 1994 University of
British Columbia. See the file CONTRIB for details.
1 http://www.robots.ox.ac.uk/~vgg/hzbook
about
Compact display of variable type
about(x) displays a compact line that describes the class and dimensions of x.
about x as above but in command rather than functional form
Examples
>> a=1;
>> about a
a [double] : 1x1 (8 bytes)
>> a = rand(5,7);
>> about a
a [double] : 5x7 (280 bytes)
See also
whos
anaglyph
Convert stereo images to an anaglyph image
a = anaglyph(left, right) is an anaglyph image where the two images of a stereo pair
are combined into a single image by coding them in two different colors. By default
the left image is coded in red and the right image in cyan. The color codes are:
‘r’ red
‘g’ green
‘b’ blue
‘c’ cyan
‘m’ magenta
a = anaglyph(left, right, color, disp) as above but allows for disparity correction. If
disp is positive the disparity is increased, if negative it is reduced. These adjustments
are achieved by trimming the images. Use this option to make the images more natural/comfortable
to view, useful if the images were captured with a stereo baseline
significantly different from the human eye separation (typically 65 mm).
Example
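A minimal sketch, assuming a rectified stereo pair is on the path (the filenames are placeholders):

```matlab
% filenames are placeholders for a rectified stereo pair
L = iread('left.png', 'double');
R = iread('right.png', 'double');
a = anaglyph(L, R);    % by default left is coded red, right cyan
idisp(a)               % view with red-cyan glasses
```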
References
See also
stdisp
apriltags
Read April tags from image
tags = apriltags(im) is a vector of structures that describe each of the April tags found
within the image im.
Notes
Author
AxisWebCamera
Image from Axis webcam
A concrete subclass of ImageSource that acquires images from a web camera built by
Axis Communications (www.axis.com).
Methods
See also
ImageSource, Video
AxisWebCamera.AxisWebCamera
Axis web camera constructor
Options
Notes:
• The specified ‘resolution’ must match one that the camera is capable of, otherwise
the result is not predictable.
AxisWebCamera.char
Convert to string
A.char() is a string representing the state of the camera object in human readable form.
See also
AxisWebCamera.display
AxisWebCamera.close
Close the image source
AxisWebCamera.grab
Acquire image from the camera
Notes
• Some web cameras have a fixed picture taking interval, and this function will
return the most recently captured image held in the camera.
BagOfWords
Bag of words class
The BagOfWords class holds sets of features for a number of images and supports
image retrieval by comparing new images with those in the ‘bag’.
Methods
Properties
Reference
J.Sivic and A.Zisserman, “Video Google: a text retrieval approach to object matching
in videos”, in Proc. Ninth IEEE Int. Conf. on Computer Vision, pp.1470-1477, Oct.
2003.
See also
PointFeature
BagOfWords.BagOfWords
Create a BagOfWords object
b = BagOfWords(f, k) is a new bag of words created from the feature vector f and with
k words. f can also be a cell array, as produced by ISURF() for an image sequence.
The features are sorted into k clusters and each cluster is termed a visual word.
b = BagOfWords(f, b2) is a new bag of words created from the feature vector f but
clustered to the words (and stop words) from the existing bag b2.
Notes
See also
PointFeature, isurf
BagOfWords.char
Convert to string
BagOfWords.contains
Find images containing word
k = B.contains(w) is a vector of the indices of images in the sequence that contain one
or more instances of the word w.
BagOfWords.display
Display value
B.display() displays the parameters of the bag in a compact human readable form.
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a BagOfWords object and the command has no trailing semicolon.
See also
BagOfWords.char
BagOfWords.exemplars
display exemplars of words
Options
BagOfWords.isword
Features from words
f = B.isword(w) is a vector of feature objects that are assigned to the word w. If
w is a vector of words the result is a vector of features assigned to all the words in w.
BagOfWords.occurrence
Word occurrence
BagOfWords.remove_stop
Remove stop words
B.remove_stop(n) removes the n most frequent words (the stop words) from the bag.
All remaining words are renumbered so that the word labels are consecutive.
BagOfWords.wordfreq
Word frequency statistics
BagOfWords.wordvector
Word frequency vector
wf = B.wordvector(J) is the word frequency vector for the Jth image in the bag. The
vector is K × 1 and the angle between any two WFVs is an indication of image similarity.
Notes
blackbody
Compute blackbody emission spectrum
Example
l = [380:10:700]’*1e-9; % visible spectrum
e = blackbody(l, 6500); % emission of sun
plot(l, e)
References
bresenham
Generate a line
p = bresenham(x1, y1, x2, y2) is a list of integer coordinates (2 × N) for points lying
on the line segment joining the integer coordinates (x1,y1) and (x2,y2).
p = bresenham(p1, p2) as above but p1=[x1; y1] and p2=[x2; y2].
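For instance, rasterizing a short segment (a sketch; the endpoint coordinates are arbitrary):

```matlab
p = bresenham(1, 1, 5, 3);   % 2 x N list of integer points on the segment
% the first column is [1;1] and the last is [5;3]
```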
Notes
Author
See also
icanvas
camcald
Camera calibration from data points
See also
CentralCamera
Camera
Camera superclass
Methods
Properties (read/write)
Notes
See also
Camera.Camera
Create camera object
Options
Notes
• Normally the class plots points and lines into a set of axes that represent the
image plane. The ‘image’ option paints the specified image onto the image plane
and allows points and lines to be overlaid.
See also
Camera.centre
Get camera position
Camera.char
Convert to string
Camera.clf
Clear the image plane
Camera.delete
Camera object destructor
C.delete() destroys all figures associated with the Camera object and removes the
object.
Camera.display
Display value
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a Camera object and the command has no trailing semicolon.
See also
Camera.char
Camera.figure
Return figure handle
H = C.figure() is the handle of the figure that contains the camera’s image plane graphics.
Camera.hold
Control hold on image plane graphics
Camera.homline
Plot homogeneous lines on image plane
C.homline(L) plots lines on the camera image plane which are defined by columns of
L (3 × N) considered as lines in homogeneous form: a.u + b.v + c = 0.
Camera.ishold
Return image plane hold status
H = C.ishold() returns true (1) if the camera’s image plane is in hold mode, otherwise
false (0).
Camera.lineseg
handle for this camera image plane
Camera.mesh
Plot mesh object on image plane
Options
‘objpose’, T Transform all points by the homogeneous transformation T before projecting them to
the camera image plane.
‘pose’, T Set the camera pose to the homogeneous transformation T before projecting points to
the camera image plane. Temporarily overrides the current camera pose C.T.
See also
Camera.move
Instantiate displaced camera
C2 = C.move(T) is a new camera object that is a clone of C but its pose is displaced
by the homogeneous transformation T with respect to the current pose of C.
Camera.plot
Plot points on image plane
C.plot(p, options) projects world points p (3 × N) to the image plane and plots them. If
p is 2×N the points are assumed to be image plane coordinates and are plotted directly.
uv = C.plot(p) as above but returns the image plane coordinates uv (2 × N).
• If p has 3 dimensions (3 × N × S) then it is considered a sequence of point sets
and is displayed as an animation.
C.plot(L, options) projects the world lines represented by the array of Plucker objects
(1 × N) to the image plane and plots them.
li = C.plot(L, options) as above but returns an array (3 × N) of image plane lines in
homogeneous form.
Options
‘pose’, T Set the camera pose to the homogeneous transformation T before projecting points to
the camera image plane. Overrides the current camera pose C.T.
‘fps’, N Number of frames per second for point sequence display
Additional options are considered MATLAB linestyle parameters and are passed di-
rectly to plot.
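As a sketch, projecting a planar grid of world points with a default perspective camera (mkgrid is the Toolbox's test-pattern helper; RTB's SE3 class provides the pose):

```matlab
cam = CentralCamera('default');               % default 1024 x 1024 camera
P = mkgrid(3, 0.2, 'pose', SE3(0, 0, 1.0));   % 3x3 grid, 0.2 m side, 1 m away
uv = cam.plot(P, 'o');                        % project, plot, return pixel coords
```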
See also
Camera.plot_camera
Display camera icon in world view
Options
Notes
Camera.point
Plot homogeneous points on image plane
C.point(p) plots points on the camera image plane which are defined by columns of p
(3 × N) considered as points in homogeneous form.
Camera.rpy
Set camera attitude
CatadioptricCamera
Catadioptric camera class
Methods
Properties (read/write)
Notes
See also
CentralCamera, Camera
CatadioptricCamera.CatadioptricCamera
Create central projection camera object
Options
Notes
• The elevation angle range is from -pi/2 (below the mirror) to maxangle above the
horizontal plane.
See also
CatadioptricCamera.project
Project world points to image plane
uv = C.project(p, options) are the image plane coordinates for the world points p.
The columns of p (3 × N) are the world points and the columns of uv (2 × N) are the
corresponding image plane points.
Options
‘pose’, T Set the camera pose to the pose T (homogeneous transformation (4×4) or SE3) before
projecting points to the camera image plane. Temporarily overrides the current camera
pose C.T.
‘objpose’, T Transform all points by the pose T (homogeneous transformation (4 × 4) or SE3)
before projecting them to the camera image plane.
See also
FishEyeCamera.plot, Camera.plot
ccdresponse
CCD spectral response
Notes
References
See also
rluminos
ccxyz
XYZ chromaticity coordinates
References
See also
cmfxyz
CentralCamera
Perspective camera class
This camera model assumes central projection, that is, the focal point is at z=0 and the
image plane is at z=f. The image is not inverted.
Methods
Properties (read/write)
Notes
See also
Camera
CentralCamera.CentralCamera
Create central projection camera object
Options
See also
CentralCamera.C
Camera matrix
C = C.C() is the 3 × 4 camera matrix, also known as the camera calibration or projection
matrix.
CentralCamera.centre
Projective centre
p = C.centre() returns the 3D world coordinate of the projective centre of the camera.
Reference
See also
Ray3D
CentralCamera.derivs
Compute bundle adjustment Jacobians
[p,ja,jb] = cam.derivs(T, qv, x) computes the image plane projection p (2 × 1), Jacobian
dP/dV (2 × 6) and Jacobian dP/dX (2 × 3) given the world point x (3 × 1) and the
camera position T (3 × 1) and orientation qv (3 × 1).
Orientation is expressed as the vector part of a unit-quaternion.
Notes
• The Jacobians are used to compute the approximate Hessian for bundle adjustment
problems based on camera observations of landmarks.
• This is optimized, automatically generated code.
See also
UnitQuaternion, UnitQuaternion.tovec
CentralCamera.distort
Compute distorted coordinate
CentralCamera.E
Essential matrix
E = C.E(T) is the essential matrix relating two camera views. The first view is from
the current camera pose C.T and the second is a relative motion represented by the
homogeneous transformation T.
E = C.E(C2) is the essential matrix relating two camera views described by camera
objects C (first view) and C2 (second view).
E = C.E(f) is the essential matrix based on the fundamental matrix f (3 × 3) and the
intrinsic parameters of camera C.
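For example, the essential matrix between a camera and a displaced clone of itself might be computed as follows (RTB's SE3 class describes the relative pose):

```matlab
cam1 = CentralCamera('default');
cam2 = cam1.move(SE3(0.1, 0, 0));   % clone displaced 0.1 m along the x-axis
E = cam1.E(cam2);                   % 3 x 3 essential matrix
```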
Reference
Y.Ma, J.Kosecka, S.Soatto, S.Sastry, “An invitation to 3D”, Springer, 2003. p.177
See also
CentralCamera.F, CentralCamera.invE
CentralCamera.estpose
Estimate pose from object model and camera view
Reference
CentralCamera.F
Fundamental matrix
F = C.F(T) is the fundamental matrix relating two camera views. The first view is
from the current camera pose C.T and the second is a relative motion represented by
the homogeneous transformation T.
F = C.F(C2) is the fundamental matrix relating two camera views described by camera
objects C (first view) and C2 (second view).
Reference
Y.Ma, J.Kosecka, S.Soatto, S.Sastry, “An invitation to 3D”, Springer, 2003. p.177
See also
CentralCamera.E
CentralCamera.flowfield
Optical flow
C.flowfield(v) displays the optical flow pattern for a sparse grid of points when the
camera has a spatial velocity v (6 × 1).
See also
quiver
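A quick sketch of the flow induced by rotation about the optical axis:

```matlab
cam = CentralCamera('default');
cam.flowfield([0 0 0 0 0 1]);   % spatial velocity (vx vy vz wx wy wz), wz = 1
```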
CentralCamera.fov
Camera field-of-view angles.
a = C.fov() are the field of view angles (2 × 1) in radians for the camera x and y (horizontal
and vertical) directions.
CentralCamera.H
Homography matrix
See also
CentralCamera.invH
CentralCamera.invE
Decompose essential matrix
Reference
Notes
See also
CentralCamera.E
CentralCamera.invH
Decompose homography matrix
s = C.invH(H) decomposes the homography H (3 × 3) into the camera motion and the
normal to the plane.
In practice there are multiple solutions and s is a vector of structures with elements:
• T, camera motion as a homogeneous transform matrix (4 × 4), translation not to
scale
• n, normal vector to the plane (3 × 1)
Notes
Reference
Y.Ma, J.Kosecka, s.Soatto, s.Sastry, “An invitation to 3D”, Springer, 2003. section 5.3
See also
CentralCamera.H
CentralCamera.K
Intrinsic parameter matrix
CentralCamera.normalized
Convert to normalized coordinate
See also
CentralCamera.project
CentralCamera.plot_epiline
Plot epipolar line
C.plot_epiline(f, p) plots the epipolar lines due to the fundamental matrix f and the
image points p.
C.plot_epiline(f, p, ls) as above but draw lines using the line style arguments ls.
H = C.plot_epiline(f, p) as above but return a vector of graphic handles, one per line.
CentralCamera.plot_line_tr
Plot line in theta-rho format
CentralCamera.plot_line_tr(L) plots lines on the camera’s image plane that are described
by columns of L with rows theta and rho respectively.
See also
Hough
CentralCamera.project
Project world points to image plane
Options
‘pose’, T Set the camera pose to the homogeneous transformation T before projecting points to
the camera image plane. Temporarily overrides the current camera pose C.T.
‘objpose’, T Transform all points by the homogeneous transformation T before projecting them to
the camera image plane.
Notes
See also
CentralCamera.ray
3D ray for image point
R = C.ray(p) returns a vector of Ray3D objects, one for each point defined by the
columns of p.
Reference
See also
Ray3D
CentralCamera.visjac_e
Visual motion Jacobian for point feature
Reference
See also
CentralCamera.visjac_l
Visual motion Jacobian for line feature
J = C.visjac_l(L, pl) is the image Jacobian (2N × 6) for the image plane lines L (2 ×
N). Each column of L is a line in theta-rho format, and the rows are theta and rho
respectively.
The lines all lie in the plane pl = (a,b,c,d) such that aX + bY + cZ + d = 0.
The Jacobian gives the rates of change of the line parameters in terms of camera spatial
velocity.
Reference
See also
CentralCamera.visjac_p
Visual motion Jacobian for point feature
J = C.visjac_p(uv, z) is the image Jacobian (2N × 6) for the image plane points uv
(2 × N). The depth of the points from the camera is given by z which is a scalar for all
points, or a vector (N × 1) of depth for each point.
The Jacobian gives the image-plane point velocity in terms of camera spatial velocity.
Reference
“A tutorial on Visual Servo Control”, Hutchinson, Hager & Corke, IEEE Trans. R&A,
Vol 12(5), Oct, 1996, pp 651-670.
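For example, a sketch of the Jacobian at a single image point (the principal point of the default camera is assumed to be at (512,512)):

```matlab
cam = CentralCamera('default');
uv = [512; 512];            % image point (assumed principal point)
J = cam.visjac_p(uv, 5);    % 2 x 6 image Jacobian for a depth of 5 m
```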
See also
CentralCamera.visjac_p_polar
Visual motion Jacobian for point feature
J = C.visjac_p_polar(rt, z) is the image Jacobian (2N × 6) for the image plane points
rt (2 × N) described in polar form, radius and theta. The depth of the points from the
camera is given by z which is a scalar for all points, or a vector (N × 1) of depths for
each point.
The Jacobian gives the image-plane polar point coordinate velocity in terms of camera
spatial velocity.
Reference
See also
chi2inv_rtb
Inverse chi-squared function
Notes
See also
chi2inv
cie_primaries
Define CIE primary colors
p = cie_primaries() is a 3-vector with the wavelengths [m] of the CIE 1976 red, green
and blue primaries respectively.
closest
Find closest points in N-dimensional space.
k = closest(a, b) is the correspondence for N-dimensional point sets a and b: element
J = k(I) indicates that the I’th column of a is closest to the J’th column of b.
[k,d1] = closest(a, b) as above and d1(I)=|a(I)-b(J)| is the distance of the closest point.
[k,d1,d2] = closest(a, b) as above but also returns the distance to the second closest
point.
Notes
• Is a MEX file.
See also
distance
cmfrgb
RGB color matching function
The color matching function is the RGB tristimulus required to match a particular
spectral excitation.
Notes
• From Table I(5.5.3) of Wyszecki & Stiles (1982). (Table 1(5.5.3) of Wyszecki &
Stiles (1982) gives the Stiles & Burch functions in 250 cm-1 steps, while Table
I(5.5.3) of Wyszecki & Stiles (1982) gives them in interpolated 1 nm steps.)
• The Stiles & Burch 2-deg CMFs are based on measurements made on 10 ob-
servers. The data are referred to as pilot data, but probably represent the best
estimate of the 2 deg CMFs, since, unlike the CIE 2 deg functions (which were
reconstructed from chromaticity data), they were measured directly.
• These CMFs differ slightly from those of Stiles & Burch (1955). As noted in
footnote a on p. 335 of Table 1(5.5.3) of Wyszecki & Stiles (1982), the CMFs
have been "corrected in accordance with instructions given by Stiles & Burch
(1959)" and renormalized to primaries at 15500 (645.16), 19000 (526.32), and
22500 (444.44) cm-1
References
See also
cmfxyz, ccxyz
cmfxyz
Color matching function
The color matching function is the XYZ tristimulus required to match a particular
wavelength excitation.
xyz = cmfxyz(lambda) is the CIE xyz color matching function (N × 3) for illumination
at wavelength lambda (N × 1) [m]. If lambda is a vector then each row of xyz is the
color matching function of the corresponding element of lambda.
xyz = cmfxyz(lambda, E) is the CIE xyz color matching (1 × 3) function for an illumination
spectrum E (N × 1) defined at corresponding wavelengths lambda (N × 1).
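For instance, plotting the color matching functions over the visible range:

```matlab
lambda = [400:5:700]'*1e-9;   % visible wavelengths [m]
xyz = cmfxyz(lambda);         % N x 3 tristimulus values
plot(lambda*1e9, xyz)         % plotted against wavelength in nm
```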
Note
References
See also
cmfrgb, ccxyz
col2im
Convert pixel vector to image
Notes
• The number of rows in pix must match the product of the elements of imsize.
See also
im2col
colnorm
Column-wise norm of a matrix
See also
norm
colordistance
Colorspace distance
Notes
See also
colorspace
colorize
Colorize a greyscale image
out = colorize(im, mask, color) is a color image where each pixel in out is set to
the corresponding element of the greyscale image im or a specified color according
to whether the corresponding value of mask is true or false respectively. The color is
specified as a 3-vector (R,G,B).
out = colorize(im, func, color) as above but the mask is the return value of the
function handle func applied to the image im; func returns a per-pixel logical result, e.g.
@isnan.
Examples
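A minimal sketch using a synthetic image with a patch of NaN values:

```matlab
im = rand(100, 100);
im(40:60, 40:60) = NaN;                % synthetic greyscale image with a NaN patch
out = colorize(im, @isnan, [1 0 0]);   % NaN pixels shown as red
idisp(out)
```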
Notes
See also
colorkmeans
Color image segmentation by clustering
[L,C,R] = colorkmeans(im, k) as above but also returns the residual R, the root mean
square error of all pixel chromaticities with respect to their cluster centre.
L = colorkmeans(im, C) is a segmentation of the color image im into k classes which
are defined by the cluster centres C (k × 2) in chromaticity space. Pixels are assigned
to the closest (Euclidean) centre. Since cluster centres are provided the k-means
segmentation step is not required.
Options
Various options are possible to choose the initial cluster centres for k-means:
Notes
• The k-means clustering algorithm used in the first three forms is computationally
expensive and time consuming.
• Clustering is performed in xy-chromaticity space.
• The residual is an indication of quality of fit, low is good.
See also
rgb2xyz, kmeans
colorname
Map between color names and RGB values
name = colorname(rgb) is a string giving the name of the color that is closest (Euclidean)
to the given rgb-tristimulus value (1 × 3). If rgb is a matrix (N × 3) then
return a cell-array (1 × N) of color names.
name = colorname(XYZ, ‘xyz’) as above but the color is the closest (Euclidean) to
the given XYZ-tristimulus value.
name = colorname(XYZ, ‘xy’) as above but the color is the closest (Euclidean) to the
given xy-chromaticity value with assumed Y=1.
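For instance, a sketch of looking up the nearest named color:

```matlab
name = colorname([0.9 0.1 0.1]);   % name of the color closest to this RGB value
```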
Notes
colorseg
Color image segmentation using k-means
Notes
See also
colorkmeans
distance
Euclidean distances between sets of points
Example
A = rand(400,100); B = rand(400,200);
d = distance(A,B);
Notes
Author
See also
closest
dtransform
Distance transform
dt = dtransform(im, options) is the distance transform of the binary image im. The
value of each output pixel is the distance (pixels) to the closest set pixel.
Options
See also
e2h
Euclidean to homogeneous
See also
h2e
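The pair of conversions is a round trip, up to the homogeneous scale factor:

```matlab
p  = [1; 2];       % Euclidean point
ph = e2h(p);       % homogeneous form [1; 2; 1]
p2 = h2e(2*ph);    % h2e normalizes by the last element, recovering [1; 2]
```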
EarthView
Image from Google maps
Methods
Examples
Notes
• Google limits the number of map queries to 1000 unique (different) image
requests per viewer per day. A 403 error is returned if the daily quota is exceeded.
• Maximum size is 640 × 640 for free access, business users can get more.
• There are lots of conditions on what you can do with the images, particularly
with respect to publication. See the Google web site for details.
Author
Peter Corke, with some lines of code from get_google_map by Val Schmidt.
See also
ImageSource
EarthView.EarthView
Create EarthView object
ev = EarthView(options)
Options
Notes
• A key is required before you can use the Google Static Maps API. The key is
a long string that can be passed to the constructor or saved as an environment
variable GOOGLE_KEY. You need a Google account before you can register for
a key.
Notes
• Scale is 1 for the whole world, 20 is about as high a resolution as you can get.
See also
ImageSource, EarthView.grab
EarthView.char
Convert to string
EV.char() is a string representing the state of the EarthView object in human readable
form.
See also
EarthView.display
EarthView.grab
Grab an aerial image
Options
Examples
Notes
edgelist
Return list of edge pixels for region
[eg,d] = edgelist(im, seed, direction) as above but also returns a vector of edge segment
directions which have values 1 to 8 representing W SW S SE E NE N NW
respectively.
Notes
• Coordinates are given assuming the matrix is an image, so the indices are always
in the form (x,y) or (column,row).
• The seed point is always the first element of the returned edgelist.
• 8-direction chain coding can give incorrect results when used with blobs found
using 4-way connectivity.
Reference
See also
ilabel
epidist
Distance of point from epipolar line
d = epidist(f, p1, p2) is the distance of the points p2 (2 × M) from the epipolar lines
due to points p1 (2 × N) where f (3 × 3) is a fundamental matrix relating the views
containing image points p1 and p2.
d (N × M) is the distance matrix where element d(i,j) is the distance from the point
p2(j) to the epipolar line due to point p1(i).
Author
Based on fmatrix code by, Nuno Alexandre Cid Martins, Coimbra, Oct 27, 1998, I.S.R.
See also
epiline, fmatrix
epiline
Draw epipolar lines
epiline(f, p) draws epipolar lines in current figure based on points p (2 × N) and the
fundamental matrix f (3 × 3). Points are specified by the columns of p.
epiline(f, p, ls) as above but draw lines using the line style arguments ls.
H = epiline(f, p, ls) as above but return a vector of graphic handles, one per line drawn.
See also
fmatrix, epidist
FeatureMatch
Feature correspondence object
This class represents the correspondence between two PointFeature objects. A vector
of FeatureMatch objects can represent the correspondence between sets of points.
Methods
remove
Properties
Note
See also
FeatureMatch.FeatureMatch
Create a new FeatureMatch object
Notes
See also
FeatureMatch.char
Convert to string
FeatureMatch.display
Display value
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a FeatureMatch object and the command has no trailing semicolon.
See also
FeatureMatch.char
FeatureMatch.inlier
Inlier features
Notes
See also
FeatureMatch.outlier, FeatureMatch.ransac
FeatureMatch.outlier
Outlier features
Notes
See also
FeatureMatch.inlier, FeatureMatch.ransac
FeatureMatch.p
Feature point coordinate pairs
See also
FeatureMatch.p1, FeatureMatch.p2
FeatureMatch.p1
Feature point coordinates from view 1
See also
FeatureMatch.p2
Feature point coordinates from view 2
See also
FeatureMatch.plot
Show corresponding points
M.plot(ls) as above but the optional line style arguments ls are passed to plot.
Notes
• Using IDISP as above adds UserData to the figure, and an error is created if this
UserData is not found.
See also
idisp
FeatureMatch.ransac
Apply RANSAC
M.ransac(func, options) applies the RANSAC algorithm to fit the point correspondences
to the model described by the function func. The options are passed to the
RANSAC() function. Elements of the FeatureMatch vector have their status updated
in place to indicate whether they are inliers or outliers.
Example
f1 = isurf(im1);
f2 = isurf(im2);
m = f1.match(f2);
m.ransac( @fmatrix, 1e-4);
See also
FeatureMatch.show
Display summary statistics of the FeatureMatch vector
M.show() is a compact summary of the FeatureMatch vector M that gives the number
of matches, inliers and outliers (and their percentages).
FeatureMatch.subset
Subset of matches
filt1d
1-dimensional rank filter
Options
Notes
FishEyeCamera
Fish eye camera class
This camera model assumes central projection, that is, the focal point is at z=0 and the
image plane is at z=f. The image is not inverted.
Methods
Properties (read/write)
Notes
See also
Camera
FishEyeCamera.FishEyeCamera
Create fisheyecamera object
Options
Notes
• If K is not specified it is computed such that the circular imaging region maximally
fills the square image plane.
See also
FishEyeCamera.project
Project world points to image plane
uv = C.project(p, options) are the image plane coordinates for the world points p.
The columns of p (3 × N) are the world points and the columns of uv (2 × N) are the
corresponding image plane points.
Options
‘pose’, T Set the camera pose to the pose T (homogeneous transformation (4×4) or SE3) before
projecting points to the camera image plane. Temporarily overrides the current camera
pose C.T.
‘objpose’, T Transform all points by the pose T (homogeneous transformation (4 × 4) or SE3)
before projecting them to the camera image plane.
See also
CatadioptricCamera.plot, Camera.plot
fmatrix
Estimate fundamental matrix
f = fmatrix(p1, p2, options) is the fundamental matrix (3 × 3) that relates two sets of
corresponding points p1 (2 × N) and p2 (2 × N) from two different camera views.
Notes
Reference
Hartley and Zisserman, ‘Multiple View Geometry in Computer Vision’, page 270.
Author
Based on fundamental matrix code by Peter Kovesi, School of Computer Science &
Software Engineering, The University of Western Australia, http://www.csse.uwa.edu.au/,
See also
h2e
Homogeneous to Euclidean
See also
e2h
hist2d
MEX file to compute 2-D histogram.
[h,vx,vy] = hist2d(x,y)
or
Inputs:
x,y data points. {x(i),y(i)} is a single data point.
x0 lowest x bin’s lower edge
dx x bin width
nx number of x bins
y0 lowest y bin’s lower edge
dy y bin width
ny number of y bins
[x0,dx,nx] and [y0,dy,ny] default = [0,1,256]
Outputs:
h histogram matrix. h(i,j) = number of data points
satisfying vx(j) <= x < vx(j+1) and vy(i) <= y < vy(i+1).
Notes
Author
hitormiss
Hit or miss transform
H = hitormiss(im, se) is the hit-or-miss transform of the binary image im with the
structuring element se. Unlike standard morphological operations, se has three possible
values: 0, 1 and don’t care (represented by NaN).
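As a sketch, an isolated-point detector built with don't-care diagonal elements:

```matlab
im = zeros(5); im(3,3) = 1;            % binary image with one isolated pixel
se = [NaN 0 NaN; 0 1 0; NaN 0 NaN];    % require clear 4-neighbours, NaN = don't care
H  = hitormiss(im, se);                % only the isolated pixel is detected
```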
References
See also
homline
Homogeneous line from two points
See also
plot_homline
homography
Estimate homography
Notes
Author
Based on homography code by Peter Kovesi, School of Computer Science & Software
Engineering, The University of Western Australia, http://www.csse.uwa.edu.au/,
See also
homtrans
Apply a homogeneous transformation
Notes
See also
homwarp
Warp image by an homography
out = homwarp(H, im, options) is a warp of the image im obtained by applying the
homography H to the coordinates of every input pixel.
[out,offs] = homwarp(H, im, options) as above but offs is the offset of the warped tile
out with respect to the origin of im.
Options
‘full’ output image contains all the warped pixels, but its position with respect to the input
image is given by the second return value offs.
‘extrapval’, V set unmapped pixels to this value (default NaN)
‘roi’, R output image contains the specified ROI in the input image
‘scale’, S scale the output by this factor
‘dimension’, D ensure output image is D × D
‘size’, S size of output image S=[W,H]
‘coords’, {U,V} coordinate matrices for im, each same size as im.
Notes
• The edges of the resulting output image will in general not be vertical and
horizontal lines.
See also
Hough
Hough transform class
The Hough transform is a technique for finding lines in an image using a voting scheme.
For every edge pixel in the input image a set of cells in the Hough accumulator (voting
array) are incremented.
In this version of the Hough transform lines are described by:
d = y cos(theta) + x sin(theta)
where theta is the angle the line makes to the horizontal axis, and d is the perpendicular
distance between (0,0) and the line. A horizontal line has theta = 0, a vertical line has
theta = pi/2 or -pi/2.
The voting array is 2-dimensional, with columns corresponding to theta and rows corresponding
to offset (d). Theta spans the range -pi/2 to pi/2 in Ntheta steps. Offset is
in the range -rho_max to rho_max where rho_max=max(W,H).
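A minimal sketch of the typical pipeline, using a synthetic image containing a single horizontal line:

```matlab
im = zeros(256); im(100, :) = 1;   % synthetic image containing one line
h = Hough(icanny(im));             % edge detection then Hough voting
L = h.lines();                     % LineFeature objects, strongest first
h.show();                          % display the vote accumulator
```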
Methods
Properties
Notes
See also
LineFeature
Hough.Hough
Create Hough transform object
Options
‘equal’ All edge pixels have equal weight, otherwise the edge pixel value is the vote strength
‘points’ Pass set of points rather than an edge image, in this case E (2 × N) is a set of N points,
or E (3 × N) is a set of N points with corresponding vote strengths as the third row
‘interpwidth’, W Interpolation width (default 3)
‘houghthresh’, T Set ht.houghThresh (default 0.5)
‘edgethresh’, T Set ht.edgeThresh (default 0.1);
‘suppress’, W Set ht.suppress (default 0)
‘nbins’, N Set number of bins, if N is scalar set Nrho=Ntheta=N, else N = [Ntheta, Nrho]. Default
400 × 401.
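The constructor is typically fed an edge image; a sketch of the workflow, assuming im is already loaded:

```matlab
E = icanny(im);               % edge image
h = Hough(E, 'suppress', 5);  % build the vote accumulator
h.show();                     % display the accumulator
L = h.lines();                % strongest lines as LineFeature objects
h.plot();                     % overlay the detected lines
```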
Hough.char
Convert to string
Hough.display
Display value
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a Hough object and the command has no trailing semicolon.
See also
Hough.char
Hough.lines
Find lines
See also
Hough.plot, LineFeature
Hough.plot
Plot line features
See also
Hough.lines
Hough.show
Display the Hough accumulator as image
s = HT.show() displays the Hough vote accumulator as an image using the hot col-
ormap, where ‘heat’ is proportional to the number of votes.
See also
colormap, hot
humoments
Hu moments
Notes
Reference
M-K. Hu, Visual pattern recognition by moment invariants. IRE Trans. on Information
Theory, IT-8:pp. 179-187, 1962.
See also
npq
ianimate
Display an image sequence
Examples
Options
Notes
See also
ibbox
Find bounding box
box = ibbox(p) is the minimal bounding box that contains the points described by the
columns of p (2 × N).
box = ibbox(im) as above but the box minimally contains the non-zero pixels in the
image im.
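For example, for a small set of points given one per column:

```matlab
p = [2 5 9;
     3 1 7];       % three points, one per column
box = ibbox(p);    % minimal bounding box containing all the points
```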
Notes
iblobs
Blob features
Options
‘touch’, T accept only blobs that touch (1) or do not touch (0) the edge (default accept all)
‘class’, C accept only blobs of pixel value C (default all)
References
Notes
• The RegionFeature objects are ordered by the raster order of the topmost point
(smallest v coordinate) in each blob.
• Circularity is computed using the raw perimeter length scaled down by Kulpa’s
correction factor.
See also
icanny
Canny edge detection
E = icanny(im, options) is an edge image obtained using the Canny edge detector
algorithm. Hysteresis filtering is applied to the gradient image: edge pixels > th1 are
connected to adjacent pixels > th0, those below th0 are set to zero.
Options
Reference
Notes
• Produces a zero image with single pixel wide edges having non-zero values.
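A typical invocation (the image file name is a placeholder):

```matlab
im = iread('church.png', 'grey', 'double');   % placeholder file name
E = icanny(im);                               % thin non-zero edge pixels
idisp(E)
```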
See also
isobel, kdgauss
iclose
closing
out = iclose(im, se, options) is the image im after morphological closing with the
structuring element se. This is a morphological dilation followed by an erosion.
out = iclose(im, se, n, options) as above but the structuring element se is applied n
times, that is n dilations followed by n erosions.
Notes
• For binary image a closing operation can be used to eliminate small black holes
in white regions.
• It is cheaper to apply a smaller structuring element multiple times than one
large one; the effective structuring element is the Minkowski sum of the
structuring element with itself n times.
• Windowing options of IMORPH can be passed. By default output image is same
size as input image.
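A sketch with a 3 × 3 square structuring element, assuming im is a binary image:

```matlab
se = ones(3,3);          % square structuring element
out = iclose(im, se);    % fills small black holes in white regions
```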
See also
icolor
Colorize a greyscale image
Examples
Create a color image that looks the same as the greyscale image
c = icolor(im);
c = icolor(im, colorname(’pink’));
Notes
See also
iconcat
Concatenate images
Options
Examples
Notes
• The images do not have to be of the same size, and smaller images are surrounded
by background pixels which can be specified.
• Works for color or greyscale images.
• Direction can be abbreviated to first character, ‘h’ or ‘v’.
• In vertical mode all images are right justified.
• In horizontal mode all images are top justified.
See also
idisp
iconvolve
Image convolution
Options
Notes
• If the image is color (has multiple planes) the kernel is applied to each plane,
resulting in an output image with the same number of planes.
• If the kernel has multiple planes, the image is convolved with each plane of the
kernel, resulting in an output image with the same number of planes.
• This function is a convenience wrapper for the MATLAB function CONV2.
• Works for double, uint8 or uint16 images. Image and kernel must be of the same
type and the result is of the same type.
• This function replaces iconv().
See also
conv2
icorner
Corner detector
u horizontal coordinate
v vertical coordinate
strength corner strength
descriptor corner descriptor (vector)
Options
‘detector’, D choose the detector where D is one of ‘harris’ (default), ‘noble’ or ‘klt’
‘sigma’, S kernel width for smoothing (default 2)
‘deriv’, D kernel for gradient (default kdgauss(2))
‘cmin’, CM minimum corner strength
‘cminthresh’, CT minimum corner strength as a fraction of maximum corner strength
‘edgegap’, E don’t return features closer than E pixels to the edge of image (default 2)
‘suppress’, R don’t return a feature closer than R pixels to an earlier feature (default 0)
‘nfeat’, N return the N strongest corners (default Inf)
‘k’, K set the value of k for the Harris detector
‘patch’, P use a P × P patch of surrounding pixel values as the feature vector. The vector has
zero mean and unit norm.
‘color’ specify that im is a color image not a sequence
Example
Notes
References
• “A combined corner and edge detector”, C.G. Harris and M.J. Stephens, Proc.
Fourth Alvey Vision Conf., Manchester, pp 147-151, 1988.
• “Finding corners”, J.Noble, Image and Vision Computing, vol.6, pp.121-128,
May 1988.
• “Good features to track”, J. Shi and C. Tomasi, Proc. Computer Vision and
Pattern Recognition, pp. 593-593, IEEE Computer Society, 1994.
• Robotics, Vision & Control, Section 13.3, P. Corke, Springer 2011.
See also
PointFeature, isurf
icp
Point cloud alignment
T = icp(p1, p2, options) is the homogeneous transformation that best transforms the
set of points p1 to p2 using the iterative closest point algorithm.
[T,d] = icp(p1, p2, options) as above but also returns the norm of the error between
the transformed point set p2 and p1.
Options
‘dplot’, d show the points p1 and p2 at each iteration, with a delay of d [sec].
‘plot’ show the points p1 and p2 at each iteration, with a delay of 0.5 [sec].
‘maxtheta’, T limit the change in rotation at each step to T
‘maxiter’, N stop after N iterations (default 100)
‘mindelta’, T stop when the relative change in error norm is less than T (default 0.001)
‘distthresh’, T eliminate correspondences more than T x the median distance at each iteration.
Example
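A sketch using synthetic data; transl, trotz and homtrans are functions from the companion Robotics Toolbox:

```matlab
p1 = rand(3,20);                        % random 3D model points
T = transl(0.1, 0.2, 0) * trotz(0.1);   % known transform
p2 = homtrans(T, p1);                   % transformed copy of the points
[Te,d] = icp(p1, p2, 'maxiter', 200);   % Te should approximate T
```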
Notes
• For noisy data setting distthresh and maxtheta can help to prevent the solution
from diverging.
Reference
idecimate
Decimate an image
Notes
See also
idilate
Morphological dilation
out = idilate(im, se, options) is the image im after morphological dilation with the
structuring element se.
out = idilate(im, se, n, options) as above but the structuring element se is applied n
times, that is n dilations.
Options
Notes
• It is cheaper to apply a smaller structuring element multiple times than one
large one; the effective structuring element is the Minkowski sum of the
structuring element with itself n times.
• Windowing options of IMORPH can be passed.
Reference
See also
idisp
image display tool
idisp(im, options) displays an image and allows interactive investigation of pixel val-
ues, linear profiles, histograms and zooming. The image is displayed in a figure with
a toolbar across the top. If im is a cell array of images, they are first concatenated
(horizontally).
User interface
• Left clicking on a pixel will display its value in a box at the top.
• The “line” button allows two points to be specified and a new figure displays
intensity along a line between those points.
• The “histo” button displays a histogram of the pixel values in a new figure. If the
image is zoomed, the histogram is computed over only those pixels in view.
• The “zoom” button requires a left-click and drag to specify a box which defines
the zoomed view.
• The “colormap” button is displayed only for greyscale images, and is a popup
button that allows different color maps to be selected.
Options
Notes
• Is a wrapper around the MATLAB builtin function IMAGE. See the MATLAB
help on “Display Bit-Mapped Images” for details of color mapping.
• Color images are displayed in MATLAB true color mode: pixel triples map to
display RGB values. (0,0,0) is black, (1,1,1) is white.
• Greyscale images are displayed in indexed mode: the image pixel value is mapped
through the color map to determine the display pixel value.
• For grey scale images the minimum and maximum image values are mapped to
the first and last element of the color map, which by default (’greyscale’) is the
range black to white. To set your own scaling between displayed grey level and
pixel value use the ‘cscale’ option.
• The title of the figure window by default is the name of the variable passed in as
the image; this is not possible if the first argument is an expression.
Examples
Display an image which contains a map of a region, perhaps an obstacle grid, that spans
real world dimensions x, y in the range -10 to 10.
idisp(map, ’xyscale’, {[-10 10], [-10 10]});
See also
idisplabel
Display an image with mask
idisplabel(im, labelimage, labels) displays only those image pixels which belong to a
specific class. im is a greyscale (H ×W ) or color (H ×W × 3) image, and labelimage
(H × W ) contains integer pixel class labels for the corresponding pixels in im. The
pixel classes to be displayed are given by labels which is either a scalar or a vector of
class labels. Non-selected pixels are displayed as white by default.
idisplabel(im, labelimage, labels, bg) as above but the grey level of the non-selected
pixels is specified by bg in the range 0 to 1 for a float image or 0 to 255 for a uint8
image.
Example
where the matrix cls is the same size as flowers and the elements are the corresponding
pixel class, a value in the range 1 to 7. To display pixels of class 5 we use
idisplabel(flowers, cls, 5)
See also
idouble
Convert integer image to double
imd = idouble(im, options) is an image with double precision elements in the range 0
to 1 corresponding to the elements of im. The integer pixels im are assumed to span
the range 0 to the maximum value of their integer class.
Options
Notes
• Works for an image with arbitrary number of dimensions, eg. a color image or
image sequence.
• There is a linear mapping (scaling) of the values of imd to im.
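The round trip with iint illustrates the scaling:

```matlab
im8 = iint(0.5*ones(3,3));   % uint8 image with mid-grey pixels
imd = idouble(im8);          % double image with values near 0.5
```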
See also
iint, cast
iendpoint
Find end points in a binary skeleton image
out = iendpoint(im) is a binary image where pixels are set if the corresponding pixel
in the binary image im is the end point of a single-pixel wide line such as found in an
image skeleton. Computed using the hit-or-miss morphological operator.
References
See also
ierode
Morphological erosion
out = ierode(im, se, options) is the image im after morphological erosion with the
structuring element se.
out = ierode(im, se, n, options) as above but the structuring element se is applied n
times, that is n erosions.
Options
Notes
• It is cheaper to apply a smaller structuring element multiple times than one large one;
the effective structuring element is the Minkowski sum of the structuring element
with itself n times.
• Windowing options of IMORPH can be passed.
Reference
See also
igamm
Gamma correction
out = igamm(im, gamma) is a gamma corrected version of the image im. All pixels
are raised to the power gamma. Gamma encoding can be performed with gamma > 1
and decoding with gamma < 1.
out = igamm(im, ‘sRGB’) is a gamma decoded version of im using the sRGB decoding
function (JPEG images sRGB encoded).
Notes
• This function was once called igamma(), but that name is taken by a MATLAB
method for double class objects.
• Gamma decoding should be applied to any color image prior to colorimetric oper-
ations.
• For images with multiple planes the gamma correction is applied to all planes.
• For images of type double the pixels are assumed to be in the range 0 to 1.
• For images of type int the pixels are assumed in the range 0 to the maximum
value of their class. Pixels are converted first to double, processed, then con-
verted back to the integer class.
See also
iread, colorspace
igraphseg
Graph-based image segmentation
Example
im = iread(’58060.jpg’);
[labels,maxval] = igraphseg(im, 1500, 100, 0.5);
idisp(labels)
Reference
Notes
Author
See also
ithresh, imser
ihist
Image histogram
ihist(im, options) displays the image histogram. For an image with multiple planes
the histogram of each plane is given in a separate subplot.
H = ihist(im, options) is the image histogram as a column vector. For an image with
multiple planes H is a matrix with one column per image plane.
[H,x] = ihist(im, options) as above but also returns the bin coordinates as a column
vector x.
Options
Example
[h,x] = ihist(im);
bar(x,h);
Notes
• For a uint8 image the MEX function FHIST is used (if available)
– The histogram always contains 256 bins
– The bins span the greylevel range 0-255.
• For a floating point image the histogram spans the greylevel range 0-1.
• For floating point images all NaN and Inf values are first removed.
See also
hist
iint
Convert image to integer class
out = iint(im) is an image with unsigned 8-bit integer elements in the range 0 to 255
corresponding to the elements of the image im.
out = iint(im, class) as above but the output pixels belong to the integer class class.
Examples
Notes
• Works for an image with arbitrary number of dimensions, eg. a color image or
image sequence.
• If the input image is floating point (single or double) the pixel values are scaled
from an input range of [0,1] to a range spanning zero to the maximum positive
value of the output integer class.
• If the input image is an integer class then the pixels are cast to change type but
not their value.
See also
idouble
iisum
Sum of integral image
s = iisum(ii, u1, v1, u2, v2) is the sum of pixels in the rectangular image region defined
by its top-left (u1,v1) and bottom-right (u2,v2). ii is a precomputed integral image.
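For example, summing a rectangular block of an all-ones image:

```matlab
im = ones(20,20);
ii = intgimage(im);             % precompute the integral image
s  = iisum(ii, 1, 1, 10, 10);   % sum over the top-left 10 x 10 block
```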
See also
intgimage
ilabel
Label an image
L = ilabel(im) is a label image that indicates connected components within the image
im (H ×W ). Each pixel in L (H ×W ) is an integer label that indicates which connected
region the corresponding pixel in im belongs to. Region labels are in the range 1 to M.
[L,m] = ilabel(im) as above but also returns the maximum label value.
[L,m,parents] = ilabel(im) as above but also returns region hierarchy information. The
value of parents(I) is the label of the parent, or enclosing, region of region I. A value
of 0 indicates that the region has no single enclosing region, for a binary image this
means the region touches the edge of the image, for a multilevel image it means that
the region touches more than one other region.
[L,maxlabel,parents,class] = ilabel(im) as above but also returns the class of pixels
within each region. The value of class(I) is the value of the pixels that comprise region
I.
[L,maxlabel,parents,class,edge] = ilabel(im) as above but also returns the edge-touch
status of each region. If edge(I) is 1 then region I touches edge of the image, otherwise
it does not.
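A sketch with two separate blobs in a binary image:

```matlab
im = zeros(6,6);
im(1:2,1:2) = 1;        % first blob
im(5:6,5:6) = 1;        % second blob
[L,m] = ilabel(im);     % m is the number of regions found
```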
Notes
See also
iblobs, imoments
iline
Draw a line in an image
out = iline(im, p1, p2) is a copy of the image im with a single-pixel thick line drawn
between the points p1 and p2, each a 2-vector [U,V]. The pixels on the line are set to
1.
out = iline(im, p1, p2, v) as above but the pixels on the line are set to v.
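For example:

```matlab
im = zeros(100,100);
out = iline(im, [10 20], [80 60], 1);   % line from (10,20) to (80,60)
idisp(out)
```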
Notes
See also
im2col
Convert an image to pixel per row format
out = im2col(im) is a matrix (N × P) where each row represents a single pixel of the
image im (H × W × P). The pixels are in image column order (ie. column 1, column 2
etc) and there are N=W × H rows.
out = im2col(im, mask) as above but only includes pixels if:
• the corresponding element of mask (H ×W ) is non-zero
• the corresponding element of mask (N) is non-zero where N=H ×W
• the pixel index is included in the vector mask
See also
col2im
ImageSource
Abstract class for image sources
Methods
See also
ImageSource.ImageSource
Image source constructor
Options
ImageSource.display
Display value
I.display() displays the state of the image source object in human readable form.
Notes
• This method is invoked implicitly at the command line when the result of an
expression is an ImageSource object and the command has no trailing semicolon.
imatch
Template matching
The template is searched for within im2 inside a rectangular region, centred at (u,v)
and whose size is a function of s. If s is a scalar the search region is [-s, s, -s, s] relative
to (u,v). More generally s is a 4-vector s=[umin, umax, vmin, vmax] relative to (u,v).
The return value is xm=[DU,DV,CC] where (DU,DV) are the u- and v-offsets relative
to (u,v) and CC is the similarity score for the best match in the search region.
[xm,score] = imatch(im1, im2, u, v, H, s) as above but also returns a matrix of match-
ing score values for each template position tested. The rows correspond to horizontal
positions of the template, and columns the vertical position. The centre element corre-
sponds to (u,v).
Example
Consider a sequence of images im(:,:,N) and we find corner points in the kth image
corners = icorner(im(:,:,k), ’nfeat’, 20);
Now, for each corner we look for the 11 × 11 patch of surrounding pixels in the next
image, by searching within a 21 × 21 region
for corner=corners
    xm = imatch(im(:,:,k), im(:,:,k+1), corner.u, corner.v, 5, 10);
end
Notes
• Useful for tracking a template in an image sequence where im1 and im2 are
consecutive images in a sequence and (u,v) is the coordinate of a corner point in
im1.
• Is a MEX file.
• im1 and im2 must be the same size.
• ZNCC (zero-mean normalized cross correlation) matching is used as the simi-
larity measure. A perfect match score is 1.0 but anything above 0.8 is typically
considered to be a good match.
See also
isimilarity
imeshgrid
Domain matrices for image
[u,v] = imeshgrid(im) are matrices that describe the domain of image im and can be
used for the evaluation of functions over the image. u and v are the same size as im.
The element u(v,u) = u and v(v,u) = v.
[u,v] = imeshgrid(im, n) as above but...
[u,v] = imeshgrid(w, H) as above but the domain is w × H.
[u,v] = imeshgrid(size) as above but the domain is described by size which is a scalar
size × size or a 2-vector [w H].
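The coordinate matrices are convenient for evaluating functions over the image domain, for example a radial distance image:

```matlab
[u,v] = imeshgrid(zeros(100,100));
r = sqrt((u-50).^2 + (v-50).^2);   % distance of every pixel from (50,50)
idisp(r)
```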
See also
meshgrid
imoments
Image moments
Properties
moments a structure containing moments of order 0 to 2, the elements are m00, m10, m01, m20,
m02, m11.
Notes
• For a binary image the zeroth moment is the number of non-zero pixels, or its
area.
• This function does not perform connectivity analysis; it considers all non-zero pixels in
the image. If connected regions are required then use IBLOBS instead.
See also
RegionFeature, iblobs
imono
Convert color image to monochrome
Options
Notes
See also
imorph
Morphological neighbourhood processing
out = imorph(im, se, op) is the image im after morphological processing with the
operator op and structuring element se.
The structuring element se is a small matrix with binary values that indicate which
elements of the template window are used in the operation.
The operation op is:
out = imorph(im, se, op, edge) as above but performance of edge pixels can be con-
trolled. The value of edge is:
Notes
• Is a MEX file.
• Performs greyscale morphology.
• The structuring element should have an odd side length.
• For binary image ‘min’ = EROSION, ‘max’ = DILATION.
• The ‘plusmin’ operation can be used to compute the distance transform.
• The input can be logical, uint8, uint16, float or double; the output is always
double.
See also
imser
Maximally stable extremal regions
Options
Example
im = iread(’castle_sign2.png’, ’grey’, ’double’);
[label,n] = imser(im, ’light’);
idisp(label)
Notes
Reference
See also
ithresh, igraphseg
inormhist
Histogram normalization
Notes
See also
ihist
intgimage
Compute integral image
Examples
Create integral images for sum of pixel squared values over rectangular regions
i = intgimage(im.^2);
See also
iisum
invcamcal
Inverse camera calibration
c = invcamcal(C)
Decompose, or invert, a 3 × 4 camera calibration matrix C.
The result is a camera object with the following parameters set:
f
sx, sy (with sx=1)
(u0, v0) principal point
iopen
Morphological opening
out = iopen(im, se, options) is the image im after morphological opening with the
structuring element se. This is a morphological erosion followed by dilation.
out = iopen(im, se, n, options) as above but the structuring element se is applied n
times, that is n erosions followed by n dilations.
Notes
• For binary image an opening operation can be used to eliminate small white
noise regions.
• It is cheaper to apply a smaller structuring element multiple times than one large
one, the effective structuring element is the Minkowski sum of the structuring
element with itself n times.
• Windowing options of IMORPH can be passed. By default output image is same
size as input image.
See also
ipad
Pad an image with constants
out = ipad(im, sides, n) is a padded version of the image im with a block of NaN
values n pixels wide on the sides of im as specified by sides.
out = ipad(im, sides, n, v) as above but pads with pixels of value v.
sides is a string containing one or more of the characters:
‘t’ top
‘b’ bottom
‘l’ left
‘r’ right
Examples
Add a band of zero pixels 20 pixels high across the top of the image:
ipad(im, ’t’, 20, 0)
Add a band of white pixels 10 pixels wide on all sides of the image:
ipad(im, ’tblr’, 10, 255)
Notes
ipaste
Paste an image into an image
out = ipaste(im, im2, p, options) is the image im with the subimage im2 pasted in at
the position p=[U,V].
Options
‘centre’ The pasted image is centred at p, otherwise p is the top-left corner of the subimage in
im (default)
‘zero’ the coordinates of p start at zero, by default 1 is assumed
‘set’ im2 overwrites the pixels in im (default)
‘add’ im2 is added to the pixels in im
‘mean’ im2 is set to the mean of pixel values in im2 and im
Notes
See also
iline
ipixswitch
Pixelwise image merge
out = ipixswitch(mask, im1, im2) is an image where each pixel is selected from the
corresponding pixel in im1 or im2 according to the corresponding pixel values in mask.
If the element of mask is zero im1 is selected, otherwise im2 is selected.
im1 or im2 can contain a color descriptor which is one of:
• A scalar value corresponding to a greyscale
• A 3-vector corresponding to a color value
• A string containing the name of a color which is found using COLORNAME.
ipixswitch(mask, im1, im2) as above but the result is displayed.
Example
a = ipixswitch(im>120, im, [1 0 0]);
The result is a double precision image since the color specification is a double. Had
both arguments been uint8 images the result would be a uint8 image, and had the color
been specified via colorname the result would also be double precision, since colorname
returns a double precision 3-vector.
Notes
• im1, im2 and mask must all have the same number of rows and columns.
• If im1 and im2 are both greyscale then out is greyscale.
• If either of im1 and im2 are color then out is color.
• If one image is double and the other is integer, the integer image is first
converted to a double image.
See also
colorize, colorname
iprofile
Extract pixels along a line
v = iprofile(im, p1, p2) is a vector of pixel values extracted from the image im (H ×
W × P) between the points p1 (2 × 1) and p2 (2 × 1). v (N × P) has one row for each
point along the line and the row is the pixel value, which will be a vector for a
multi-plane image.
[p,uv] = iprofile(im, p1, p2) as above but also returns the coordinates of the pixels
for each point along the line. Each row of uv is the pixel coordinate (u,v) for the
corresponding row of p.
Notes
See also
bresenham, iline
ipyramid
Pyramidal image decomposition
Notes
See also
irank
Rank filter
out = irank(im, order, se) is a rank filtered version of im. Only pixels corresponding
to non-zero elements of the structuring element se are ranked and the orderth value in
rank becomes the corresponding output pixel value. The highest rank, the maximum,
is order=1.
out = irank(im, order, se, nbins) as above but the number of histogram bins can be
specified.
out = irank(im, order, se, nbins, edge) as above but the processing of edge pixels can
be controlled. The value of edge is:
Examples
5 × 5 median filter, 25 elements in the window, the median is the 12th in rank
irank(im, 12, ones(5,5));
3 × 3 non-local maximum, find where a pixel is greater than its eight neighbours
se = ones(3,3); se(2,2) = 0;
im > irank(im, 1, se);
Notes
See also
iread
Read image from file
im = iread() presents a file selection GUI from which the user can select an image file
which is returned as a matrix. On subsequent calls the initial folder is as set on the last
call.
im = iread([], OPTIONS) as above but allows options to be specified.
im = iread(path, options) as above but the GUI is set to the folder specified by path.
If the path is not absolute it is searched for on the MATLAB search path.
im = iread(file, options) reads the specified image file and returns a matrix. If the path
is not absolute it is searched for on MATLAB search path.
The image can be greyscale or color in any of a wide range of formats supported by the
MATLAB IMREAD function.
Wildcards are allowed in file names. If multiple files match a 3D or 4D image is
returned where the last dimension is the number of images in the sequence.
Options
‘uint8’ return an image with 8-bit unsigned integer pixels in the range 0 to 255
‘single’ return an image with single precision floating point pixels in the range 0 to 1.
‘double’ return an image with double precision floating point pixels in the range 0 to 1.
‘grey’ convert image to greyscale, if it’s color, using ITU rec 601
‘grey_709’ convert image to greyscale, if it’s color, using ITU rec 709
‘gamma’, G apply this gamma correction, either numeric or ‘sRGB’
‘reduce’, R decimate image by R in both dimensions
‘roi’, R apply the region of interest R to each image, where R=[umin umax; vmin vmax].
Examples
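For instance (file names are placeholders):

```matlab
im  = iread('flowers1.png');                    % color image as stored
img = iread('flowers1.png', 'grey', 'double');  % greyscale, double pixels
seq = iread('seq/*.png', 'grey');               % sequence, images stacked along last dimension
```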
Notes
See also
irectify
Rectify stereo image pair
Notes
• The resulting image pair are epipolar aligned, equivalent to the view if the two
original camera axes were parallel.
• Rectified images are required for dense stereo matching.
• The effect of lens distortion is not removed; use the camera calibration toolbox
to unwarp each image prior to rectification.
• The resulting images may have negative disparity.
• Some output pixels may have no corresponding input pixels and will be set to
NaN.
See also
ireplicate
Expand image
See also
idecimate, iscale
iroi
Extract region of interest
Notes
See also
idisp
irotate
Rotate image
out = irotate(im, angle, options) is a version of the image im that has been rotated
about its centre.
Options
Notes
See also
iscale
isamesize
Automatic image trimming
out = isamesize(im1, im2) is an image derived from im1 that has the same dimensions
as im2. This is achieved by cropping and scaling.
out = isamesize(im1, im2, bias) as above but bias controls which part of the image is
cropped. bias=0.5 is symmetric cropping, bias<0.5 moves the crop window up or to
the left, while bias>0.5 moves the crop window down or to the right.
See also
iscale
Scale an image
Options
See also
iscalemax
Scale space maxima
Notes
See also
iscalespace, ScalePointFeature
iscalespace
Scale-space image sequence
Examples
Notes
See also
iscolor
Test for color image
iscolor(im) is true (1) if im is a color image, that is, its third dimension is equal to
three.
ishomog
Test if SE(3) homogeneous transformation matrix
ishomog(T, ‘valid’) as above, but also checks the validity of the rotation sub-matrix.
Notes
• The first form is a fast, but incomplete, test that a transform is in SE(3).
See also
ishomog2
Test if SE(2) homogeneous transformation matrix
Notes
• The first form is a fast, but incomplete, test that a transform is in SE(2).
See also
isift
SIFT feature extractor
Options
u horizontal coordinate
v vertical coordinate
strength feature strength
descriptor feature descriptor (128 × 1)
sigma feature scale
theta feature orientation [rad]
image_id a value passed as an option to isift
Notes
– at least 5x faster
– does not return feature strength
– does not sort features by strength
– ‘nfeat’ option cannot be used, adjust ‘PeakThresh’ to control the number
of features
• Default MEX implementation by Andrea Vedaldi (2006).
– Features are returned in descending strength order.
• The SIFT algorithm is covered by US Patent 6,711,293 (March 23, 2004) held
by the University of British Columbia.
• ISURF is a functional equivalent.
Reference
See also
isimilarity
Locate template in image
s = isimilarity(T, im) is an image where each pixel is the ZNCC similarity of the
template T (M × M) to the M × M neighbourhood surrounding the corresponding input
pixel in im. s is same size as im.
s = isimilarity(T, im, metric) as above but the similarity metric is specified by the
function metric which can be any of @sad, @ssd, @ncc, @zsad, @zssd.
Example
The magnitude at each pixel indicates how well the template centred on that point
matches the surrounding pixels. The locations of the maxima are
[~,p] = peak2(S, 1, ’npeaks’, 5);
References
Notes
• For NCC and ZNCC the maximum in s corresponds to the most likely template
location. For SAD, SSD, ZSAD and ZSSD the minimum value corresponds to
the most likely location.
• Similarity is not computed for those pixels where the template crosses the image
boundary, and these output pixels are set to NaN.
• User provided similarity metrics can be used, the function accepts two regions
and returns a scalar similarity score.
See also
isize
Size of image
Notes
See also
size
ismooth
Gaussian smoothing
out = ismooth(im, sigma) is the image im after convolution with a Gaussian kernel of
standard deviation sigma.
out = ismooth(im, sigma, options) as above but the options are passed to CONV2.
Options
Notes
• By default (option ‘full’) the returned image is larger than the passed image.
• Smooths all planes of the input image.
See also
iconv, kgauss
isobel
Sobel edge detector
out = isobel(im) is an edge image computed using the Sobel edge operator convolved
with the image im. This is the norm of the vertical and horizontal gradients at each
pixel. The Sobel horizontal gradient kernel is:
1/8 × |1 0 -1|
      |2 0 -2|
      |1 0 -1|
and the vertical gradient kernel is the transpose.
[gx,gy] = isobel(im) as above but returns the gradient images rather than the gradient
magnitude.
out = isobel(im,dx) as above but applies the kernel dx and dx’ to compute the hori-
zontal and vertical gradients respectively.
[gx,gy] = isobel(im,dx) as above but returns the gradient images rather than the gradi-
ent magnitude.
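The gradient images can be combined into magnitude and direction:

```matlab
[gx,gy] = isobel(im);
mag = sqrt(gx.^2 + gy.^2);   % gradient magnitude, as returned by isobel(im)
th  = atan2(gy, gx);         % gradient direction
```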
Notes
See also
isrot
Test if SO(3) rotation matrix
Notes
See also
istereo
Stereo matching
relative to the integer disparity at which s is maximum. p.A and p.B are matrices the
same size as d whose elements are the per pixel values of the interpolation polynomial
coefficients. p.dx is the peak of the polynomial with respect to the integer disparity at
which s is maximum (in the range -0.5 to +0.5).
Options
‘metric’, M string that specifies the similarity metric to use which is one of ‘zncc’ (default), ‘ncc’,
‘ssd’ or ‘sad’.
‘interp’ enable subpixel interpolation and d contains non-integer values (default false)
‘vshift’, V move the right image V pixels vertically with respect to left.
Example
References
Notes
See also
irectify, stdisp
istretch
Image normalization
out = istretch(im, options) is a normalized image in which all pixel values lie in the
range 0 to 1. That is, a linear mapping where the minimum value of im is mapped to 0
and the maximum value of im is mapped to 1.
Options
Notes
• For an integer image the result is a double image in the range 0 to max value.
See also
inormhist
isurf
SURF feature extractor
u horizontal coordinate
v vertical coordinate
strength feature strength
descriptor feature descriptor (64 × 1 or 128 × 1)
sigma feature scale
theta feature orientation [rad]
Options
Example
Notes
Reference
“SURF: Speeded Up Robust Features”, Herbert Bay, Andreas Ess, Tinne Tuytelaars,
Luc Van Gool, Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3,
pp. 346–359, 2008
See also
isvec
Test if vector
Notes
• Differs from the MATLAB builtin function ISVECTOR: the latter returns true for
a scalar, isvec does not.
• Gives the same result for a row or column vector, i.e. 3 × 1 or 1 × 3 both give true.
See also
ishomog, isrot
ithin
Morphological skeletonization
out = ithin(im) is the binary skeleton of the binary image im. Any non-zero region is
replaced by a network of single-pixel wide lines.
out = ithin(im,delay) as above but graphically displays each iteration of the skele-
tonization algorithm with a pause of delay seconds between each iteration.
References
See also
ithresh
Interactive image threshold
ithresh(im) displays the image im in a window with a slider which adjusts the binary
threshold.
ithresh(im, T) as above but the initial threshold is set to T.
im2 = ithresh(im) as above but returns the thresholded image after the “done” button
in the GUI is pressed.
[im2,T] = ithresh(im) as above but also returns the threshold value.
Notes
See also
idisp
itrim
Trim images
‘t’ top
‘b’ bottom
‘l’ left
‘r’ right
[out1,out2] = itrim(im1,im2) returns the central parts of images im1 and im2 as out1
and out2 respectively. When images are rectified or warped the shapes can become
quite distorted and are embedded in rectangular images surrounded by black or NaN
values. This function crops out the central rectangular region of each. It assumes that
the undefined pixels in im1 and im2 have values of NaN. The same cropping is applied
to each input image.
[out1,out2] = itrim(im1,im2,T) as above but the threshold T in the range 0 to 1 is
used to adjust the level of cropping. The default is 0.5; a higher value will include
fewer NaN values in the result (smaller region), a lower value will include more (larger
region). A value of 0 will ensure that there are no NaN values in the returned region.
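Example
A typical use after stereo rectification (assuming irectify's fundamental-matrix calling form; variable names are illustrative):

```matlab
[im1r, im2r] = irectify(f, m, imL, imR);  % rectified images, padded with NaN
[out1, out2] = itrim(im1r, im2r);         % crop both to the central valid region
[out1, out2] = itrim(im1r, im2r, 0);      % stricter: no NaN values remain
```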
See also
homwarp, irectify
itriplepoint
Find triple points
out = itriplepoint(im) is a binary image where pixels are set if the corresponding
pixel in the binary image im is a triple point, that is, where three single-pixel wide
lines intersect. These are the Voronoi points in an image skeleton. Computed using the
hit-or-miss morphological operator.
References
See also
ivar
Pixel window statistics
out = ivar(im, se, op) is an image where each output pixel is the specified statistic over
the pixel neighbourhood indicated by the structuring element se which should have odd
side lengths. The elements in the neighbourhood corresponding to non-zero elements
in se are packed into a vector on which the required statistic is computed.
The operation op is one of:
‘var’ variance
‘kurt’ Kurtosis or peakiness of the distribution
‘skew’ skew or asymmetry of the distribution
out = ivar(im, se, op, edge) as above but performance at edge pixels can be controlled.
The value of edge is:
Notes
• Is a MEX file.
• The input can be logical, uint8, uint16, float or double, the output is always
double
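Example
A sketch computing local variance as a simple texture measure (the image variable is illustrative):

```matlab
se = ones(5,5);              % 5x5 neighbourhood (odd side lengths)
v = ivar(im, se, 'var');     % per-pixel variance over the neighbourhood
idisp(v)                     % bright where the image is strongly textured
```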
See also
irank, iwindow
iwindow
Generalized spatial operator
out = iwindow(im, se, func) is an image where each pixel is the result of applying the
function func to a neighbourhood centred on the corresponding pixel in im. The neigh-
bourhood is defined by the size of the structuring element se which should have odd
side lengths. The elements in the neighbourhood corresponding to non-zero elements
in se are packed into a vector (in column order from top left) and passed to the specified
function handle func. The return value becomes the corresponding pixel value in out.
out = iwindow(image, se, func, edge) as above but performance of edge pixels can be
controlled. The value of edge is:
Example
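A sketch using iwindow as a 3 × 3 median filter, slow but illustrating the general mechanism:

```matlab
se = ones(3,3);                  % all 9 neighbourhood pixels are passed to func
out = iwindow(im, se, @median);  % median of each packed neighbourhood vector
```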
Notes
• Is a MEX file.
• The structuring element should have an odd side length.
• Is slow since the function func must be invoked once for every output pixel.
• The input can be logical, uint8, uint16, float or double, the output is always
double
See also
ivar, irank
kcircle
Circular structuring element
Notes
• If R is a 2-element vector the result is an annulus of ones, and the two numbers
are interpreted as inner and outer radii.
See also
kdgauss
Derivative of Gaussian kernel
Notes
See also
kdog
Difference of Gaussian kernel
Notes
• This kernel is similar to the Laplacian of Gaussian and is often used as an effi-
cient approximation.
See also
kgauss
Gaussian kernel
Notes
See also
klaplace
Laplacian kernel
Notes
See also
ilaplace, iconv
klog
Laplacian of Gaussian kernel
See also
kmeans
K-means clustering
organized into k clusters based on Euclidean distance from cluster centres C (D × k). L
is a vector (N × 1) whose elements indicate which cluster the corresponding element
of x belongs to.
[L,C] = kmeans(x, k, options) as above but the initial clusters C0 (D × k) are given and
column I is the initial estimate of the centre of cluster I.
L = kmeans(x, C) is similar to above but the clustering step is not performed, it is
assumed to have been completed previously. C (D × k) contains the cluster centroids
and L (N × 1) indicates which cluster the corresponding element of x is closest to.
Options
‘random’ initial cluster centres are chosen randomly from the set of data points x (default)
‘spread’ initial cluster centres are chosen randomly from within the hypercube spanned by x.
‘initial’, C0 Provide initial cluster centers
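Example
A sketch clustering synthetic 2D points (the data is illustrative):

```matlab
x = [randn(2,100), randn(2,100)+4];  % two clouds of 2D points, one per column
[L, C] = kmeans(x, 2, 'spread');     % L(i) is the cluster of point x(:,i)
L2 = kmeans([0; 0], C);              % assign a new point to the nearest centre
```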
Reference
ksobel
Sobel edge detector
|1 0 -1|
|2 0 -2|
|1 0 -1|
Notes
See also
isobel
ktriangle
Triangular kernel
Examples
>> ktriangle(3)
ans =
|0 1 0|
|0 1 0|
|1 1 1|
See also
kcircle
lambda2rg
RGB chromaticity coordinates
References
See also
cmfrgb, lambda2xy
lambda2xy
xy = lambda2xy(lambda) is the xy-chromaticity coordinate (1 × 2) for illumination at
wavelength lambda.
References
See also
cmfxyz, lambda2rg
LineFeature
Line feature class
Methods
Properties
Note
See also
LineFeature.LineFeature
Create a line feature object
LineFeature.char
Convert to string
LineFeature.display
Display value
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a LineFeature object and the command has no trailing semicolon.
See also
LineFeature.char
LineFeature.plot
Plot line
Notes
LineFeature.points
Return points on line segments
p = L.points(edge) is the set of points that lie along the line in the edge image edge.
See also
icanny
LineFeature.seglength
Compute length of line segments
The Hough transform identifies lines but cannot determine their length. This method
examines the edge pixels in the original image and determines the longest stretch of
non-zero pixels along the line.
l2 = L.seglength(edge, gap) is a copy of the line feature object with the property length
updated to the length of the line (pixels). Small gaps, less than gap pixels, are tolerated.
l2 = L.seglength(edge) as above but the maximum allowable gap is 5 pixels.
See also
icanny
loadspectrum
Load spectrum data
Notes
• The file is assumed to have its first column as wavelength in metres; the remaining
columns are linearly interpolated and returned as columns of s.
• The files are kept in the private folder inside the MVTB folder.
References
luminos
Photopic luminosity function
Luminosity has units of lumens which are the intensity with which wavelengths are
perceived by the light-adapted human eye.
References
See also
rluminos
mkcube
Create cube
Options
‘facepoint’ Add an extra point in the middle of each face, in this case the returned value is 3 × 14
(8 vertices + 6 face centres).
‘centre’, C The cube is centred at C (3 × 1) not the origin
‘pose’, T The pose of the cube coordinate frame is defined by the homogeneous transform T,
allowing all points in the cube to be translated or rotated.
‘edge’ Return a set of cube edges in MATLAB mesh format rather than points.
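Example
A usage sketch, with sizes as described above (the three-output 'edge' form follows the mesh-format option description):

```matlab
p = mkcube(1);                  % 3x8 vertices of a unit cube at the origin
p = mkcube(1, 'facepoint');     % 3x14: 8 vertices plus 6 face centres
[x,y,z] = mkcube(1, 'edge');    % edges in mesh format, e.g. for mesh(x,y,z)
```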
See also
cylinder, sphere
mkgrid
Create grid of points
Options
‘pose’, T The pose of the grid coordinate frame is defined by the homogeneous transform T,
allowing all points in the plane to be translated or rotated.
morphdemo
Demonstrate morphology using animation
morphdemo(im, se, options) displays an animation to show the principles of the math-
ematical morphology operations dilation or erosion. Two windows are displayed side
by side, input binary image on the left and output image on the right. The structuring
element moves over the input image and is colored red if the result is zero, else blue.
Pixels in the output image are initially all grey but change to black or white as the
structuring element moves.
out = morphdemo(im, se, options) as above but returns the output image.
Options
Notes
See also
Movie
Class to read movie file
A concrete subclass of ImageSource that acquires images from a movie file.
Methods
Properties
See also
ImageSource, Video
Movie.Movie
Image source constructor
m = Movie(file, options) is a Movie object that returns frames from the movie file
file.
Options
Movie.char
Convert to string
M.char() is a string representing the state of the movie object in human readable form.
Movie.close
Close the image source
Movie.grab
Acquire next frame from movie
Options
Notes
mpq
Image moments
m = mpq(im, p, q) is the PQth moment of the image im. That is, the sum of I(x,y).x^p.y^q.
See also
mpq_poly
Polygon moments
Notes
• The points must be sorted such that they follow the perimeter in sequence (counter-
clockwise).
• If the points are clockwise the moments will all be negated, so centroids will
still be correct.
• If the first and last point in the list are the same, they are considered to be a single
vertex.
See also
ncc
Normalized cross correlation
m = ncc(i1, i2) is the normalized cross-correlation between the two equally sized image
patches i1 and i2. The result m is a scalar in the interval -1 (no match) to 1 (perfect match).
Notes
See also
niblack
Adaptive thresholding
T = niblack(im, k, w2) is the per-pixel (local) threshold to apply to image im. T has
the same dimensions as im. The threshold at each pixel is a function of the mean and
standard deviation computed over a W ×W window, where W=2*w2+1.
[T,m,s] = niblack(im, k, w2) as above but returns the per-pixel mean m and standard
deviation s.
Example
t = niblack(im, -0.2, 20);
idisp(im >= t);
Notes
Reference
See also
otsu, ithresh
npq
Normalized central image moments
m = npq(im, p, q) is the PQth normalized central moment of the image im. That is
UPQ(im,p,q)/MPQ(im,0,0).
Notes
See also
npq_poly
Normalized central polygon moments
Notes
• The points must be sorted such that they follow the perimeter in sequence (counter-
clockwise).
• If the points are clockwise the moments will all be negated, so centroids will
still be correct.
• If the first and last point in the list are the same, they are considered as a single
vertex.
• The normalized central moments are invariant to translation and scale.
See also
numcols
Number of columns in matrix
Notes
See also
numrows, size
numrows
Number of rows in matrix
Notes
See also
numcols, size
OrientedScalePointFeature
ScalePointCorner feature object
Methods
Properties
u horizontal coordinate
v vertical coordinate
strength feature strength
scale feature scale
descriptor feature descriptor (vector)
See also
OrientedScalePointFeature.OrientedScalePointFeature
Create a scale point feature object
OrientedScalePointFeature.plot
Plot feature
Options
Examples
Mark each feature with a green circle with a radial line to indicate orientation and with
exaggerated scale
f.plot(’clock’, ’color’, ’g’, ’scale’, 2)
See also
otsu
Threshold selection
Example
t = otsu(im);
idisp(im >= t);
Options
Notes
Reference
“A Threshold Selection Method from Gray-Level Histograms”, N. Otsu, IEEE Trans.
Systems, Man and Cybernetics, Vol. SMC-9(1), Jan 1979, pp. 62-66
“An improved method for image thresholding on the valley-emphasis method”, H-F. Ng,
D. Jargalsaikhan et al., Signal and Info Proc. Assocn. Annual Summit and Conf. (AP-
SIPA), 2013, pp. 1-4
See also
niblack, ithresh
peak
Find peaks in vector
[yp,i] = peak(y, options) as above but also returns the indices of the maxima in the
vector y.
[yp,xp] = peak(y, x, options) as above but also returns the corresponding x-coordinates
of the maxima in the vector y. x is the same length as y and contains the corresponding
x-coordinates.
Options
Notes
• A maximum is defined as an element that is larger than its two neighbours. The first
and last elements will never be returned as maxima.
• To find minima, use peak(-V).
• The ‘interp’ option fits points in the neighbourhood about the peak with an Mth
order polynomial and its peak position is returned. Typically choose M to be
even. In this case xp will be non-integer.
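Example
A minimal sketch (the data is illustrative):

```matlab
y = [1 3 2 6 4];
[yp, i] = peak(y);   % local maxima are 3 (index 2) and 6 (index 4);
                     % the endpoints 1 and 4 are never returned
```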
See also
peak2
peak2
Find peaks in a matrix
Options
‘scale’, S Only consider as peaks the largest value in the horizontal and vertical range +/- S
points.
‘interp’ Interpolate peak (default no interpolation)
‘plot’ Display the interpolation polynomial overlaid on the point data
Notes
• A maximum is defined as an element that is larger than its eight neighbours. Edge
elements will never be returned as maxima.
• To find minima, use peak2(-V).
• The ‘interp’ option fits points in the neighbourhood about the peak with a paraboloid
and its peak position is returned. In this case ij will be non-integer.
See also
peak, sub2ind
pickregion
Pick a rectangular region of a figure using mouse
[p1,p2] = pickregion() initiates a rubberband box at the current click point and ani-
mates it so long as the mouse button remains down. Returns the first and last coordi-
nates in axis units.
Options
Notes
• Effectively a replacement for the builtin rbbox function which draws the box in
the wrong location on my Mac’s external monitor.
Author
plot_arrow
Draw an arrow in 2D or 3D
Options
See also
arrow3
plot_box
Draw a box
plot_box(b, options) draws a box defined by b=[XL XR; YL YR] on the current plot
with optional MATLAB linestyle options LS.
plot_box(x1,y1, x2,y2, options) draws a box with corners at (x1,y1) and (x2,y2), and
optional MATLAB linestyle options LS.
plot_box(’centre’, P, ‘size’, W, options) draws a box with center at P=[X,Y] and with
dimensions W=[WIDTH HEIGHT].
plot_box(’topleft’, P, ‘size’, W, options) draws a box with top-left at P=[X,Y] and with
dimensions W=[WIDTH HEIGHT].
plot_box(’matlab’, BOX, LS) draws box(es) as defined using the MATLAB convention
of specifying a region in terms of top-left coordinate, width and height. One box is
drawn for each row of BOX which is [xleft ytop width height].
Options
• For an unfilled box any standard MATLAB LineStyle such as ‘r’ or ‘b--’.
• For an unfilled box any MATLAB LineProperty options can be given such as
‘LineWidth’, 2.
Notes
• Additional options LS are MATLAB LineSpec options and are passed to PLOT.
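Example
A sketch of the main calling forms described above:

```matlab
plot_box(10, 20, 60, 50, 'r');                        % corners (10,20) and (60,50)
plot_box('centre', [35 35], 'size', [50 30], 'b--');  % centred, dashed blue
plot_box('matlab', [10 20 50 30], 'g');               % [xleft ytop width height]
```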
See also
plot_circle
Draw a circle
plot_circle(C, R, options) draws a circle on the current plot with centre C=[X,Y] and
radius R. If C=[X,Y,Z] the circle is drawn in the XY-plane at height Z.
Animation
First draw the circle and keep its graphic handle, then alter it, eg.
H = PLOT_CIRCLE(C, R)
PLOT_CIRCLE(C, R, ’alter’, H);
Options
• For an unfilled circle any standard MATLAB LineStyle such as ‘r’ or ‘b--’.
• For an unfilled circle any MATLAB LineProperty options can be given such as
‘LineWidth’, 2.
Notes
See also
plot_ellipse
Draw an ellipse or ellipsoid
Animation
First draw the ellipse and keep its graphic handle, then alter it, eg.
H = PLOT_ELLIPSE(E, C, ’r’)
PLOT_ELLIPSE(E, C, ’alter’, H);
Options
• For an unfilled ellipse any standard MATLAB LineStyle such as ‘r’ or ‘b--’.
• For an unfilled ellipse any MATLAB LineProperty options can be given such as
‘LineWidth’, 2.
• For a filled ellipse any MATLAB PatchProperty options can be given.
Notes
See also
plot_homline
Draw a line in homogeneous form
plot_homline(L, ls) draws a line in the current plot defined by L.X = 0 where L (3×1).
The current axis limits are used to determine the endpoints of the line. MATLAB line
specification ls can be set. If L (3 × N) then N lines are drawn, one per column.
H = plot_homline(L, ls) as above but returns a vector of graphics handles for the lines.
Notes
See also
plot_point
Draw a point
plot_point(p, options) adds point markers to the current plot, where p (2 × N) and
each column is the point coordinate.
Options
Examples
Notes
See also
plot, text
plot_poly
Draw a polygon
Animation
plot_poly(H, T) sets the pose of the polygon with handle H to the pose given by T
(3 × 3 or 4 × 4).
Create a polygon that can be animated, then alter it, eg.
H = PLOT_POLY(P, ’animate’, ’r’)
PLOT_POLY(H, transl(2,1,0) );
options
• For an unfilled polygon any standard MATLAB LineStyle such as ‘r’ or ‘b--’.
• For an unfilled polygon any MATLAB LineProperty options can be given such
as ‘LineWidth’, 2.
• For a filled polygon any MATLAB PatchProperty options can be given.
Notes
See also
plot_sphere
Draw sphere
plot_sphere(C, R, ls) draws spheres in the current plot. C is the centre of the sphere
(3 × 1), R is the radius and ls is an optional MATLAB ColorSpec, either a letter or a
3-vector.
H = plot_sphere(C, R, color) as above but returns the handle(s) for the spheres.
H = plot_sphere(C, R, color, alpha) as above but alpha specifies the opacity of the
sphere where 0 is transparent and 1 is opaque. The default is 1.
Example
Notes
Plucker
Plucker coordinate class
Methods
Operators
Notes
References
Plucker.Plucker
Create Plucker object
p = Plucker(p1, p2) create a Plucker object that represents the line joining the 3D
points p1 (3 × 1) and p2 (3 × 1).
p = Plucker(’points’, p1, p2) as above.
p = Plucker(’planes’, PL1, PL2) create a Plucker object that represents the line formed
by the intersection of two planes PL1, PL2 (4 × 1).
p = Plucker(’wv’, W, V) create a Plucker object from its direction W (3 × 1) and
moment vectors V (3 × 1).
p = Plucker(’Pw’, p, W) create a Plucker object from a point p (3 × 1) and direction
vector W (3 × 1).
Plucker.char
Convert to string
See also
Plucker.display
Plucker.closest
Point on line closest to given point
p = PL.closest(x) is the coordinate of a point on the line that is closest to the point x
(3 × 1).
[p,d] = PL.closest(x) as above but also returns the closest distance.
See also
Plucker.origin_closest
Plucker.display
Display parameters
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a Plucker object and the command has no trailing semicolon.
See also
Plucker.char
Plucker.double
Convert Plucker coordinates to real vector
Plucker.intersect
Line intersection
Plucker.intersect_plane
Line intersection with plane
x = PL.intersect_plane(p) is the point where the line intersects the plane p. Planes are
structures with a normal p.n (3 × 1) and an offset p.p (1 × 1) such that p.n x + p.p = 0.
x=[] if no intersection.
See also
Plucker.point
Plucker.intersect_volume
Line intersects plot volume
[p,T] = PL.intersect_volume(bounds, line) as above but also returns the line parame-
ters (1 × N) at the intersection points.
See also
Plucker.point
Plucker.L
Skew matrix form of the line
Notes
• For two homogeneous points P and Q on the line, PQ’-QP’ is also skew symmet-
ric.
Plucker.line
Plucker line coordinates
See also
Plucker.v, Plucker.w
Plucker.mindist
Minimum distance between two lines
d = PL1.mindist(pl2) is the minimum distance between two Plucker lines PL1 and
pl2.
Plucker.mtimes
Plucker composition
Plucker.or
Operator form of side operator
P1 | P2 is the side operator which is zero whenever the lines P1 and P2 intersect or are
parallel.
See also
Plucker.side
Plucker.origin_closest
Point on line closest to the origin
See also
Plucker.origin_distance
Plucker.origin_distance
Smallest distance from line to the origin
See also
Plucker.origin_closest
Plucker.plot
Plot a line
PL.plot(options) plots the Plucker line within the current plot volume.
PL.plot(b, options) as above but plots within the plot bounds b = [XMIN XMAX
YMIN YMAX ZMIN ZMAX].
Options
See also
plot3
Plucker.point
Point on line
p = PL.point(L) is a point on the line, where L is the parametric distance along the
line from the principal point of the line.
See also
Plucker.pp
Plucker.pp
Principal point of the line
Notes
• Same as Plucker.point(0)
See also
Plucker.point
Plucker.side
Plucker side operator
x = SIDE(p1, p2) is the side operator which is zero whenever the lines p1 and p2
intersect or are parallel.
See also
Plucker.or
pnmfilt
Pipe image through PNM utility
out = pnmfilt(cmd) runs the external program given by the string cmd and the output
(assumed to be PNM format) is returned as out.
out = pnmfilt(cmd, im) pipes the image im through the external program given by the
string cmd and the output is returned as out. The external program must accept and
return images in PNM format.
Examples
im = pnmfilt(’ppmforge -cloud’);
im = pnmfilt(’pnmrotate 30’, lena);
Notes
• Provides access to a large number of Unix command line utilities such as Im-
ageMagick and netpbm.
• The input image is passed as stdin, the output image is assumed to come from
stdout.
• MATLAB doesn’t support i/o to pipes so the image is written to a temporary file,
the command’s output is captured in another temporary file, and that is read into MATLAB.
See also
pgmfilt, iread
PointFeature
PointCorner feature object
Methods
Properties
u horizontal coordinate
v vertical coordinate
strength feature strength
descriptor feature descriptor (vector)
See also
PointFeature.PointFeature
Create a point feature object
PointFeature.char
Convert to string
PointFeature.display
Display value
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a PointFeature object and the command has no trailing semicolon.
See also
PointFeature.char
PointFeature.distance
Distance between feature descriptors
d = F.distance(f1) is the distance between feature descriptors, the norm of the Eu-
clidean distance.
If F is a vector then d is a vector whose elements are the distance between the corre-
sponding element of F and f1.
PointFeature.match
Match point features
Options
See also
FeatureMatch
PointFeature.ncc
Feature descriptor similarity
s = F.ncc(f1) is the similarity between feature descriptors which is a scalar in the interval
-1 to 1, where 1 indicates a perfect match.
If F is a vector then s is a vector whose elements are the similarity between the corre-
sponding element of F and f1.
PointFeature.pick
Graphically select a feature
v = F.pick() is the id of the feature closest to the point clicked by the user on a plot of
the image.
PointFeature.plot
Plot feature
polydiff
Differentiate a polynomial
pd = polydiff(p) is a vector of coefficients of the polynomial that is the derivative of
the polynomial p.
See also
polyval
radgrad
Radial gradient
[gr,gt] = radgrad(im) is the radial and tangential gradient of the image im. At each
pixel the image gradient vector is resolved into the radial and tangential directions.
[gr,gt] = radgrad(im, centre) as above but the centre of the image is specified as
centre=[X,Y] rather than the centre pixel of im.
radgrad(im) as above but the result is displayed graphically.
See also
isobel
ransac
Random sample and consensus
Options
Model function
out = func(R) is the function passed to RANSAC and it must accept a single argument
R which is a structure:
‘size’ out.s is the minimum number of points required to compute an estimate
‘condition’ out.x = CONDITION(R.x) condition the point data
‘decondition’ out.theta = DECONDITION(R.theta) decondition the estimated model data
‘valid’ out.valid is true if a set of points is not degenerate, that is they will produce a model.
This is used to discard random samples that do not result in useful models.
‘estimate’ [out.theta,out.resid] = EST(R.x) returns the best fit model and residual for the subset
of points R.x. If this function cannot fit a model then out.theta = []. If multiple models
are found out.theta is a cell array.
‘error’ [out.inliers,out.theta] = ERR(R.theta,R.x,T) evaluates the distance from the model(s)
R.theta to the points R.x and returns the best model out.theta and the subset of R.x
that best supports (most inliers) that model.
Notes
References
• M.A. Fischler and R.C. Bolles. "Random sample consensus: A paradigm for
model fitting with applications to image analysis and automated cartography".
Comm. ACM, Vol 24, No 6, pp 381-395, 1981
• Richard Hartley and Andrew Zisserman. "Multiple View Geometry in Computer
Vision". pp 101-113. Cambridge University Press, 2001
Author
Peter Kovesi, School of Computer Science & Software Engineering, The University of
Western Australia, pk at csse uwa edu au, http://www.csse.uwa.edu.au/~pk
See also
fmatrix, homography
Ray3D
Ray in 3D space
This object represents a ray in 3D space, defined by a point on the ray and a direction
unit-vector.
Methods
Properties
Notes
Ray3D.Ray3D
Ray constructor
Ray3D.char
Convert to string
Ray3D.closest
Closest distance between point and ray
Ray3D.display
Display value
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a Ray3D object and the command has no trailing semicolon.
See also
Ray3D.char
Ray3D.intersect
Intersection of ray with line or plane
x = R.intersect(r2) is the point on R that is closest to the ray r2. If R is a vector
then x has multiple columns, corresponding to the intersection of R(i) with r2.
[x,E] = R.intersect(r2) as above but also returns the closest distance between the rays.
x = R.intersect(p) returns the point of intersection between the ray R and the plane
p=(a,b,c,d) where aX + bY + cZ + d = 0. If R is a vector then x has multiple columns,
corresponding to the intersection of R(i) with p.
RegionFeature
Region feature class
Methods
Properties
Note
See also
iblobs, imoments
RegionFeature.RegionFeature
Create a region feature object
RegionFeature.boundary
Boundary in polar form
RegionFeature.box
Return bounding box
b = R.box() is the bounding box in standard Toolbox form [xmin,xmax; ymin, ymax].
RegionFeature.char
Convert to string
RegionFeature.contains
Test if coordinate is contained within region bounding box
R.contains(coord) is true if the coordinate coord lies within the bounding box of the
region feature R. If R is a vector, a vector of logical values is returned, one per input region.
RegionFeature.display
Display value
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a RegionFeature object and the command has no trailing semicolon.
See also
RegionFeature.char
RegionFeature.pick
Select blob from mouse click
i = R.pick() is the index of the region feature within the vector of RegionFeatures R to
which the clicked point corresponds. Since regions can overlap or be contained in other
regions, the region with the smallest bounding-box area that contains the selected
point is returned.
See also
ginput, RegionFeature.inbox
RegionFeature.plot
Plot centroid
R.plot() overlays the centroid on the current plot. It is indicated with overlaid o- and x-
markers.
R.plot(ls) as above but the optional line style arguments ls are passed to plot.
If R is a vector then each element is plotted.
RegionFeature.plot_boundary
plot boundary
R.plot_boundary(ls) as above but the optional line style arguments ls are passed to
plot.
Notes
See also
boundmatch
RegionFeature.plot_box
Plot bounding box
R.plot_box() overlays the bounding box of the region on the current plot.
R.plot_box(ls) as above but the optional line style arguments ls are passed to plot.
RegionFeature.plot_ellipse
Plot equivalent ellipse
R.plot_ellipse() overlays the equivalent ellipse of the region on the current plot.
R.plot_ellipse(ls) as above but the optional line style arguments ls are passed to plot.
rg_addticks
Label spectral locus
See also
xycolourspace
rgb2xyz
RGB to XYZ color space
rluminos
Relative photopic luminosity function
References
See also
luminos
sad
Sum of absolute differences
m = sad(i1, i2) is the sum of absolute differences between the two equally sized image
patches i1 and i2. The result m is a scalar that indicates image similarity, a value of
0 indicates identical pixel patterns and is increasingly positive as image dissimilarity
increases.
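Example
A quick check of the metric's behaviour (the patches are illustrative):

```matlab
p1 = magic(4);
sad(p1, p1)       % identical patches give 0
sad(p1, p1 + 1)   % any dissimilarity gives a positive value
```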
See also
ScalePointFeature
ScalePointCorner feature object
Methods
Properties
u horizontal coordinate
v vertical coordinate
See also
ScalePointFeature.ScalePointFeature
Create a scale point feature object
ScalePointFeature.plot
Plot feature
F.plot(options) overlay a marker at the feature position. The default is a point marker.
F.plot(options, ls) as above but the optional line style arguments ls are passed to plot.
If F is a vector then each element is plotted.
Options
Examples
Mark each feature with a green circle and with exaggerated scale
f.plot(’circle’, ’color’, ’g’, ’scale’, 2)
See also
PointFeature.plot, plot
showcolorspace
Display spectral locus
Notes
• The colors shown within the locus only approximate the true colors, due to the
gamut of the display device.
See also
rg_addticks
showpixels
Show low resolution image
Displays a low resolution image in detail as a grid with colored lines between pixels
and numeric display of pixel values at each pixel. Useful for illustrating principles in
teaching.
Options
Notes
SiftPointFeature
SIFT point corner feature object
Methods
Properties
u horizontal coordinate
v vertical coordinate
strength feature strength
theta feature orientation [rad]
scale feature scale
descriptor feature descriptor (vector)
image_id index of image containing feature
Notes
References
See also
SiftPointFeature.SiftPointFeature
Create a SIFT point feature object
See also
isift
SiftPointFeature.match
Match SIFT point features
SiftPointFeature.support
Support region of feature
See also
SiftPointFeature
SphericalCamera
Spherical camera class
Methods
Properties (read/write)
Note
See also
Camera
SphericalCamera.SphericalCamera
Create spherical projection camera object
Options
See also
SphericalCamera.plot_camera
Display camera icon in world view
C.plot_camera(T) draws the spherical image plane (unit sphere) at pose given by the
SE3 object T.
C.plot_camera(T, p) as above but also display world points, given by the columns of
p (3 × N), as small spheres.
Reference
See also
SphericalCamera.project
Project world points to image plane
pt = C.project(p, options) are the image plane coordinates for the world points p.
The columns of p (3 × N) are the world points and the columns of pt (2 × N) are the
corresponding spherical projection points, each column is phi (longitude) and theta
(colatitude).
Options
‘pose’, T Set the camera pose to the pose T (homogeneous transformation (4×4) or SE3) before
projecting points to the camera image plane. Temporarily overrides the current camera
pose C.T.
‘objpose’, T Transform all points by the pose T (homogeneous transformation (4 × 4) or SE3)
before projecting them to the camera image plane.
See also
SphericalCamera.plot
SphericalCamera.sph
Implement spherical IBVS for point features
1. The camera view, showing the desired view (*) and the
current view (o)
2. The external view, showing the target points and the camera
The results structure contains time-history information about the image plane, cam-
era pose, error, Jacobian condition number, error norm, image plane size and desired
feature locations.
The params structure can be used to override simulation defaults by providing ele-
ments, defaults in parentheses:
SphericalCamera.sph2
Implement spherical IBVS for point features
2. The external view, showing the target points and the camera
The results structure contains time-history information about the image plane, cam-
era pose, error, Jacobian condition number, error norm, image plane size and desired
feature locations.
The params structure can be used to override simulation defaults by providing ele-
ments, defaults in parentheses:
SphericalCamera.visjac_p
Visual motion Jacobian for point feature
J = C.visjac_p(pt, z) is the image Jacobian (2N × 6) for the image plane points pt
(2 × N) described by phi (longitude) and theta (colatitude). The depth of the points
from the camera is given by z which is a scalar, for all points, or a vector (N × 1) for
each point.
The Jacobian gives the image-plane velocity in terms of camera spatial velocity.
Reference
See also
ssd
Sum of squared differences
m = ssd(i1, i2) is the sum of squared differences between the two equally sized image
patches i1 and i2. The result m is a scalar that indicates image similarity: a value of
0 indicates identical pixel patterns, and the value is increasingly positive as image
dissimilarity increases.
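As a hedged illustration of the computation (a sketch in Python, not the Toolbox source), SSD over two equally sized patches given as flat pixel lists is:

```python
def ssd(p1, p2):
    """Sum of squared differences between two equally sized patches,
    given as flat lists of pixel values. 0 means identical."""
    assert len(p1) == len(p2)
    return sum((a - b) ** 2 for a, b in zip(p1, p2))

print(ssd([1, 2, 3], [1, 2, 3]))  # -> 0
print(ssd([1, 2, 3], [1, 2, 5]))  # -> 4
```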
See also
stdisp
Display stereo pair
See also
idisp, istereo
SurfPointFeature
SURF point corner feature object
Methods
Properties
u horizontal coordinate
v vertical coordinate
strength feature strength
scale feature scale
theta feature orientation [rad]
descriptor feature descriptor (vector)
image_id index of image containing feature
Notes
Reference
“SURF: Speeded Up Robust Features”, Herbert Bay, Andreas Ess, Tinne Tuytelaars,
Luc Van Gool, Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3,
pp. 346–359, 2008
See also
SurfPointFeature.SurfPointFeature
Create a SURF point feature object
See also
isurf, OrientedScalePointFeature
SurfPointFeature.match
Match SURF point features
Options
Notes
See also
FeatureMatch
SurfPointFeature.support
Support region of feature
See also
SurfPointFeature
tb_optparse
Standard option parser for Toolbox functions
optout = tb_optparse(opt, arglist) is a generalized option parser for Toolbox func-
tions. opt is a structure that contains the names and default values for the options,
and arglist is a cell array containing option parameters, typically from varargin.
The allowable options are specified by the names of the fields in the structure opt. By
default, if an option is given that is not a field of opt an error is declared.
[optout,args] = tb_optparse(opt, arglist) as above but returns all the unassigned op-
tions, those that don’t match anything in opt, as a cell array in the order given in
arglist.
[optout,args,ls] = tb_optparse(opt, arglist) as above but if any unmatched option
looks like a MATLAB LineSpec (eg. ‘r:’) it is placed in ls rather than in args.
[objout,args,ls] = tb_optparse(opt, arglist, obj) as above but properties of obj with
matching names in opt are set.
The return structure is automatically populated with the fields verbose and debug.
The following options are automatically parsed:
Notes
• Only one value can be assigned to a field; if multiple values are required they
must be placed in a cell array.
• To match an option that starts with a digit, prefix it with ‘d_’, so the field ‘d_3d’
matches the option ‘3d’.
• opt can be an object rather than a structure, in which case the passed options are
assigned to properties.
testpattern
Create test images
‘rampx’ intensity ramp from 0 to 1 in the x-direction. args is the number of cycles.
‘rampy’ intensity ramp from 0 to 1 in the y-direction. args is the number of cycles.
‘sinx’ sinusoidal intensity pattern (from -1 to 1) in the x-direction. args is the number of
cycles.
‘siny’ sinusoidal intensity pattern (from -1 to 1) in the y-direction. args is the number of
cycles.
‘dots’ binary dot pattern. args are dot pitch (distance between centres); dot diameter.
‘squares’ binary square pattern. args are pitch (distance between centres); square side length.
‘line’ a line. args are theta (rad), intercept.
Examples
A 256 × 256 image with a grid of dots on 50 pixel centres and 20 pixels in diameter:
testpattern(’dots’, 256, 50, 20);
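To illustrate what the ramp patterns above contain (a Python sketch, not the Toolbox implementation), one row of a ‘rampx’ pattern consists of the requested number of linear ramps from 0 to 1 across the image width:

```python
def rampx_row(width, cycles=1):
    """One row of a 'rampx' test pattern: `cycles` linear ramps
    from 0 to 1 across `width` pixels."""
    period = width / cycles
    return [(i % period) / period for i in range(width)]

print(rampx_row(8, cycles=2))
# -> [0.0, 0.25, 0.5, 0.75, 0.0, 0.25, 0.5, 0.75]
```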
Notes
See also
idisp
Tracker
Track points in image sequence
This class assigns each new feature a unique identifier and tracks it from frame to frame
until it is lost. A complete history of all tracks is maintained.
Methods
Properties
See also
PointFeature
Tracker.Tracker
Create new Tracker object
Options
Notes
See also
PointFeature
Tracker.char
Convert to string
Tracker.display
Display value
Notes
• This method is invoked implicitly at the command line when the result of an
expression is a Tracker object and the command has no trailing semicolon.
See also
Tracker.char
Tracker.plot
Show feature trajectories
Tracker.tracklengths
Length of all tracks
tristim2cc
Tristimulus to chromaticity coordinates
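The conversion normalizes the tristimulus values so that overall intensity divides out. Assuming the standard chromaticity definition x = X/(X+Y+Z), y = Y/(X+Y+Z) (a sketch of the usual formula, not the Toolbox source):

```python
def tristim2cc(X, Y, Z):
    """Convert tristimulus values (X, Y, Z) to chromaticity
    coordinates (x, y) by normalizing out overall intensity."""
    s = X + Y + Z
    return X / s, Y / s

print(tristim2cc(1.0, 1.0, 2.0))  # -> (0.25, 0.25)
```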
upq
Central image moments
m = upq(im, p, q) is the (p, q)th central moment of the image im, that is, the sum of
I(x,y) . (x-x0)^p . (y-y0)^q over all pixels, where (x0, y0) is the centroid.
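A hedged pure-Python sketch of this computation (not the Toolbox implementation) for a small greyscale image given as a list of rows:

```python
def upq(im, p, q):
    """(p, q)th central moment of image `im` (list of rows).
    The centroid (x0, y0) comes from the raw moments m00, m10, m01."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(im):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    x0, y0 = m10 / m00, m01 / m00
    return sum(v * (x - x0) ** p * (y - y0) ** q
               for y, row in enumerate(im)
               for x, v in enumerate(row))

# u00 is the total "mass"; u10 and u01 are zero by construction
im = [[0, 1], [1, 0]]
print(upq(im, 0, 0), upq(im, 1, 0))  # -> 2.0 0.0
```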
Notes
See also
upq_poly
Central polygon moments
m = upq_poly(v, p, q) is the (p, q)th central moment of the polygon with vertices
described by the columns of v.
Notes
• The points must be sorted such that they follow the perimeter in sequence (counter-
clockwise).
• If the points are clockwise the moments will all be negated, so centroids will
still be correct.
• If the first and last point in the list are the same, they are considered as a single
vertex.
• The central moments are invariant to translation.
See also
usefig
Named figure windows
usefig(’Foo’) makes figure ‘Foo’ the current figure, creating it if it does not exist.
h = usefig(’Foo’) as above, but also returns the figure handle.
VideoCamera
Abstract class to read from local video camera
A concrete subclass of ImageSource that acquires images from a local camera using the
MATLAB Image Acquisition Toolbox (imaq). This Toolbox provides a multiplatform
interface to a range of cameras, and this class provides a simple wrapper.
This class is not intended to be used directly; instead use the factory method Video-
Camera, which will return an instance of this class if the Image Acquisition Toolbox is installed,
for example
vid = VideoCamera();
Methods
See also
VideoCamera_fg
Class to read from local video camera
A concrete subclass of ImageSource that acquires images from a local camera using a
simple open-source frame grabber interface.
This class is not intended to be used directly; instead use the factory method Video-
Camera, which will return an instance of this class if the interface is supported on your
platform (Mac or Linux), for example
vid = VideoCamera();
Methods
See also
VideoCamera_fg.VideoCamera_fg
Video camera constructor
Options
Notes:
• The specified ‘resolution’ must match one that the camera is capable of, other-
wise the result is not predictable.
VideoCamera_fg.char
Convert to string
V.char() is a string representing the state of the camera object in human readable form.
VideoCamera_fg.close
Close the image source
VideoCamera_fg.grab
Acquire image from the camera
Notes
VideoCamera_IAT
Class to read from local video camera
A concrete subclass of ImageSource that acquires images from a local camera using the
MATLAB Image Acquisition Toolbox (imaq). This Toolbox provides a multiplatform
interface to a range of cameras, and this class provides a simple wrapper.
This class is not intended to be used directly; instead use the factory method Video-
Camera, which will return an instance of this class if the Image Acquisition Toolbox is installed,
for example
vid = VideoCamera();
Methods
See also
VideoCamera_IAT.VideoCamera_IAT
Video camera constructor
v = VideoCamera_IAT(camera, options) is a VideoCamera_IAT object that acquires
images from the local video camera specified by the string camera.
Options
Notes:
• The specified ‘resolution’ must match one that the camera is capable of, other-
wise the result is not predictable.
VideoCamera_IAT.char
Convert to string
V.char() is a string representing the state of the camera object in human readable form.
VideoCamera_IAT.close
Close the image source
VideoCamera_IAT.grab
Acquire image from the camera
Notes
VideoCamera_IAT.list
List available adaptors and cameras
VideoCamera_IAT.preview
Control image preview
xaxis
Set X-axis scaling
See also
yaxis
xyzlabel
Label X, Y and Z axes
XYZLABEL labels the x-, y- and z-axes with ‘X’, ‘Y’ and ‘Z’ respectively.
yaxis
Set Y-axis scaling
See also
xaxis
YUV
Class to read YUV4MPEG file
Methods
Properties
See also
ImageSource, Video
YUV.YUV
YUV4MPEG sequence constructor
y = YUV(file, options) is a YUV4MPEG object that returns frames from the YUV4MPEG-
format file file. This file contains uncompressed color images in 4:2:0 format, with a
full-resolution luminance (Y) plane followed by U and V planes at half resolution in
both directions.
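Because the U and V planes are half resolution in both directions, an uncompressed 4:2:0 frame occupies 1.5 bytes per pixel (assuming 8-bit samples). A quick check of that arithmetic:

```python
def yuv420_frame_bytes(width, height):
    """Bytes per uncompressed 4:2:0 frame with 8-bit samples:
    a full-resolution Y plane plus quarter-size U and V planes."""
    y = width * height
    uv = (width // 2) * (height // 2)
    return y + 2 * uv

print(yuv420_frame_bytes(640, 480))  # -> 460800, i.e. 1.5 bytes/pixel
```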
Options
YUV.char
Convert to string
M.char() is a string representing the state of the movie object in human readable form.
YUV.close
Close the image source
YUV.grab
Acquire next frame from movie
Options
Notes
yuv2rgb
YUV format to RGB
yuv2rgb2
YUV format to RGB
zcross
Zero-crossing detector
iz = zcross(im) is a binary image with pixels set where the corresponding pixels in the
signed image im have a zero crossing, that is, a positive pixel adjacent to a negative pixel.
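A minimal sketch of the idea for a 1-D signal (the Toolbox operates on 2-D images; this is only illustrative): mark a sample wherever it sits next to a sample of opposite sign.

```python
def zcross_1d(sig):
    """Binary mask marking samples adjacent to a sign change
    in a 1-D signed signal."""
    out = [0] * len(sig)
    for i in range(len(sig) - 1):
        if sig[i] * sig[i + 1] < 0:   # opposite signs -> zero crossing
            out[i] = out[i + 1] = 1
    return out

print(zcross_1d([3, 1, -2, -4, 5]))  # -> [0, 1, 1, 1, 1]
```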
Notes
See also
ilog
zncc
Normalized cross correlation
m = zncc(i1, i2) is the zero-mean normalized cross-correlation between the two equally
sized image patches i1 and i2. The result m is a scalar in the interval -1 to 1 that
indicates similarity. A value of 1 indicates identical pixel patterns.
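A hedged sketch of the zero-mean NCC formula (in Python, not the Toolbox source): subtract each patch's mean, then correlate and normalize.

```python
import math

def zncc(p1, p2):
    """Zero-mean normalized cross-correlation of two equally sized
    patches (flat lists). Result lies in [-1, 1]; 1 means identical
    up to an offset in brightness."""
    m1 = sum(p1) / len(p1)
    m2 = sum(p2) / len(p2)
    d1 = [a - m1 for a in p1]
    d2 = [b - m2 for b in p2]
    num = sum(a * b for a, b in zip(d1, d2))
    den = math.sqrt(sum(a * a for a in d1) * sum(b * b for b in d2))
    return num / den

# Identical up to a brightness offset -> perfect score
print(zncc([1, 2, 3, 4], [11, 12, 13, 14]))  # -> 1.0
```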
Notes
See also
zsad
Sum of absolute differences
m = zsad(i1, i2) is the zero-mean sum of absolute differences between the two equally
sized image patches i1 and i2. The result m is a scalar that indicates image similarity:
a value of 0 indicates identical pixel patterns, and the value is increasingly positive as
image dissimilarity increases.
Notes
See also
zssd
Sum of squared differences
m = zssd(i1, i2) is the zero-mean sum of squared differences between the two equally
sized image patches i1 and i2. The result m is a scalar that indicates image similarity:
a value of 0 indicates identical pixel patterns, and the value is increasingly positive as
image dissimilarity increases.
Notes
See also