Index

A
Action recognition, 38, 41
Activations, 9, 13, 29, 31, 33, 45, 59, 226, 253, 308, 327
Active shape model (ASM), 56, 63, 216, 219
ADNI (Alzheimer’s Disease Neuroimaging Initiative), 260, 360, 372, 377
ADNI dataset, 248, 259, 262
Agent, 65–70, 75, 77
AlexNet, 26, 30, 32, 33, 36, 41, 96, 126, 330
AlexNet model, 27, 32
Algorithmic strategies, 274, 286, 290, 292, 293
Alzheimer’s disease (AD), 342, 360, 368, 370, 372
Anatomical structures, 57, 64, 199, 224, 249, 414
Anatomies, 57, 71, 83, 223, 233, 237, 382
Anatomy detection, 56, 71, 100
Answer, 43
APOE (Apolipoprotein E), 360, 362, 367, 372, 374
Architecture of randomized deep network, 356
Artificial agent, 57, 64–66, 78
Atlas images, 199, 206
Atlas patches, 208
Attributes, 43
Auto-encoder (AE), 12, 201, 250, 348
    basic, 202
    single, 252
    stacked, 12, 200, 247, 250, 252
Automated system, 301, 315
Automatic segmentation, 238
Auxiliary tasks, 43, 140

B
Background, 92, 139, 143, 147, 164, 197, 199, 205, 206, 281, 330, 344, 350
    voxels, 205
Backpropagation algorithm, 6, 11, 14, 30, 161
Bag-of-visual-words (BoVW), 302, 305, 310
Bag-of-words (BoW), 305
Baseline, 259, 335, 344, 359, 360, 362, 364, 370, 373
Baseline markers, 361, 362, 370
Batch normalization (BN), 21, 22, 31
Binary masks, 187, 189
Biomedical image analysis tasks, 157, 165
Blocks, 354–359
Body sections, 85, 91
Body-part recognition, 84, 86, 87, 91, 92, 95
Boundaries, 64, 115, 187, 232
BoVW model, 305
Brain images, 371
Brain MR images, 252, 255, 265
Brain MRI images, 238, 240
Brain regions, 230, 354
Breast cancer, 139, 322
    histology images, 157, 166

C
C++, 22, 45, 169, 285
Cancer, 301, 322
Cardiac histopathology, 180, 182, 184, 191
Cardiac histopathology images, 180, 182, 186, 191
Cardiovascular disease (CVD), 106
Carotid artery, 107, 112, 114, 125
    common, 108, 109, 113, 124
Carotid bulb, 108, 112–115, 121, 124
Carotid intima–media thickness (CIMT), 106, 124
Cell detection, 166
Cells
    complex, 28, 110
    simple, 28
Central processing units (CPU), 11, 72, 109
Centroid distances, 228, 230, 233, 239
Centroids, 112, 115, 140, 229, 230, 287
Cerebral microbleed detection, 143
Cerebral microbleeds (CMBs), 134, 135, 143–147, 149
Cerebral-spinal fluid (CSF), 259, 342, 343
Chest radiograph, 300, 302
CIMT measurements, 106, 123, 128
CIMT video interpretation, 106, 127
CIMT videos, 108–110, 117, 127
Class membership, 160, 187, 188, 343
Classes
    body section, 92
    non-informative, 91
Classification accuracies, 33, 93, 331, 343, 370
Classification of breast lesions
    benign, 323, 324, 326, 330–332, 334
    malignant, 323, 324, 326, 330–332, 334
Classification performance, 33, 205, 317
Classifier, 56, 58, 61, 64, 86, 87, 90, 91, 97, 148, 158, 166, 185, 188, 191, 310, 315
    main, 63, 71, 73
Clinical dementia rating sum of boxes (CDR-SB), 361, 364, 366, 367, 372, 373
Clinical trials, 344, 368, 373
CMBs
    detection, 135, 143, 146
    true, 144, 148
CNN
    2D, 143, 148, 150
    3D, 135, 144, 226
CNN flavors, 34
CNN model, 38, 165, 280, 293, 307, 327
CNN regression model, 286, 292
CNN structure, 88, 92, 157
CNN-based methods, 234, 238–240
CNNs
    local patch-based, 94
    standard, 86, 93, 99
    trained, 113
Co-occurrence of local anisotropic gradient orientations (CoLlAGe), 186
Coarse retrieval model, 136
Comparison of deep learning, 186
Computational complexity, 110, 226, 305
Computational limitations, 78
Computed tomography (CT), 84, 106, 272, 382, 409
Computer aided diagnosis (CAD), 83, 134, 272, 322
Computer vision, 26, 37, 86, 106, 109, 180, 200, 226, 248, 253, 273, 302, 324, 343, 352
Computer vision problems, 26, 230, 239
Computer vision tasks, 30, 191, 273
Computer-aided diagnosis, 134, 149, 322
Concatenation deep network (CDN), 389
Conditional random field (CRF), 37, 325
Confidence maps, 113, 116
Connection weights, 5, 8, 11, 14, 21, 203
Constrained ROI localization, 113, 118
Convolution kernel, 74, 137, 144, 158, 226, 227
Convolution layer, 8, 27, 109
Convolutional filters, 28, 87, 100
Convolutional layers, 8, 29, 33, 86, 109, 110, 117, 126, 137, 158, 163, 167, 226, 227, 307, 326
Convolutional networks, 33, 248
Convolutional neural network architecture, 226
Convolutional neural network (CNN), 8, 26, 27, 34, 35, 40, 85, 109, 137, 157, 166, 273, 280, 411
Convolutional SAE, 247, 256, 257, 259, 263, 264
Convolutional SAE network (CSAE), 253, 263
Convolutions, 8, 27, 28, 36, 37, 138, 226, 254, 281
Coupled sparse representation (CSR), 382, 389, 396
Cranio-caudal (CC), 322
Cross-correlation (CC), 289, 290, 329, 331, 332
Cross-modal medical image synthesis, 393, 401
Cross-modal nearest neighbor search, 382, 384, 392, 396

D
Data augmentation, 31, 113, 125, 331
Dataset, 71, 74, 95, 140, 146, 219, 228, 286, 309, 326, 328–330, 388
    DDSM, 324, 329–332, 334
    InBreast, 324, 329–334
    large, 146, 353, 407
Decliners
    strong, 370, 371
    weak, 342, 363, 367, 372
Deconvolution, 37
Deep architecture, 12, 74, 126, 166, 205, 250, 343, 344, 353
Deep belief network (DBN), 15, 17
Deep Boltzmann machine (DBM), 15, 18
Deep cascaded networks, 136
Deep convolutional neural networks, 69, 135, 180, 181
Deep learning, 8, 57, 65, 85, 86, 134, 157, 180–182, 184, 191, 200, 248, 250, 262, 324, 325, 346, 347
    approach, 182, 185, 191, 224, 248, 302, 315
    architecture, 109, 209, 265
    for medical image, 87, 157, 223, 239, 240
    for segmentation, 188, 191, 225
    methods, 22, 84, 200, 207, 223, 240, 303, 325, 353, 406
    models, 26, 180, 182, 185, 186, 188, 250, 252, 324, 334, 350, 419
    network, 187, 253, 255
    software for, 45
    tools for, 22
    unsupervised, 247, 250
Deep learning features, 200, 207, 211, 248, 324
Deep models, 11, 20, 22, 247, 250
Deep networks, 11, 14, 20, 27, 29, 35, 73, 88, 203, 227, 306, 344, 352, 358
    location-sensitive, 382, 383, 385, 401
    very, 30, 31, 33
Deep neural networks, 11, 12, 18, 20, 67, 70, 135, 224–226, 303
Deep Q network (DQN), 69
Deep voting, 156, 163, 164
Deep voting model, 157, 159, 163, 165
Deep-learned features, 199, 206, 212
Deformable model, 209, 210, 216
Descriptor, 273, 304
Detection, 33, 34, 57, 58, 84, 86, 100, 300
    accurate, 134, 143
    computer-aided, 106
    lymph node, 325
    microbleed, 134, 240
    negation, 409
Detection accuracy, 34, 76, 117, 142, 165
Detection network, 86
Detection of emphysema, 302, 417
Detection time, 136, 142
Dice ratios, 212, 260
Dice scores, 230, 233
Diffeomorphic demons, 257, 259, 264, 265
Digitally reconstructed radiograph (DRR), 272
Disease, 181, 310, 311, 345, 351–353, 359, 361, 368, 410, 415, 417, 419
Disease markers, 342, 358, 362, 363, 370, 373
Disease progression, 345, 358, 368, 370
Dropout, 20, 31, 347, 358
Dropout networks, 349, 353, 354
DSC (direct splatting correlation), 293
DV-1 (deep voting with no stride), 163–165
DV-3 (deep voting with stride 3), 163–165
DxConv, 361, 364, 365, 367, 370, 372, 373

E
Edge-hypersampling, 183, 187
Edges, 28, 185–187, 189, 190, 279, 286, 300, 305, 312
Effect size, 345, 361, 367
Effectiveness, 121, 135, 143, 199–201, 207, 212, 216, 398
Efficacy, 141, 142, 147, 150, 343, 344, 372, 373
End-diastolic ultrasonographic frames (EUFs), 106, 108, 110–113, 117, 118, 123–125, 127
Enrichment, 343, 359, 364, 373
Ensemble learning, 224, 239, 351, 352
Errors, 75, 228, 231, 233, 237
    boundary, 232, 237, 239
    labeling, 237
    localization, 12, 120, 202, 348
    segmentation, 74, 231, 233, 237
Evaluation, 71, 74, 75, 77, 212, 216
Experience replay, 70
Experimental results, 134, 135, 150, 164, 169, 259, 260, 263, 310, 330
Experiments, 71, 73, 117, 163, 186–188, 211, 228, 258, 260, 283, 285, 287, 309, 360, 388
Experts, 122
Extracting the image information, 409

F
False negative (FN), 141, 142, 170, 315
False positive (FP), 93, 135, 137, 140–142, 147, 150, 163, 170, 237, 315, 325
Fast scanning, 164
Feature extraction, 56, 277
Feature maps, 9, 227
Feature representations, 200, 214, 225, 248, 262, 305, 411
    abstract, 224
    intrinsic, 248–250, 255
    latent, 251, 253, 255, 260, 265
    low-dimensional, 249, 256, 259
Feature selection, 246, 247, 258, 265, 309, 313, 317
Feed-forward neural networks, 4, 6, 412
Feldman, 191
FH (family history), 360, 361, 370, 372, 374
Fine discrimination model, 134, 136, 139
Fine-tuning, 14, 32, 331
Fine-tuning process, 328, 331
Frame selection, 108, 110, 118, 126, 128
Fully connected hidden layers, 109
Fully connected neural networks, 9, 59
Fully convolutional network (FCN), 35–37, 44, 127, 136, 137
    2D, 145
    3D, 144, 145, 147, 148, 150
Fully-connected layers, 29, 33, 168, 229, 280, 286
Function
    activation, 4, 10, 59, 158, 168, 227, 346
    network response, 59
    optimal action-value, 66, 69
Fundamentals of natural language processing, 407
Fusion process, 169

G
Gaussian mixture model (GMM), 248, 249
Gaussian smoothing, 119, 125
Generative models
    deep, 14
GIST, 310, 311
GLCM (gray-level co-occurrence matrix), 303, 304, 310
Gradient correlation (GC), 284, 289
Graphics processing units (GPUs), 11, 22, 27, 106, 110, 230, 285, 324
Gray matter (GM), 259
Ground truth, 59, 77, 117, 140, 141, 146, 159, 211, 234, 246, 247, 274, 284, 287, 397
Ground-truth regions, 163

H
HAMMER, 259, 263–265
Handcrafted features, 137, 139, 140, 143, 157, 182, 191, 199–201, 207, 209, 214, 246, 248, 256, 273, 323, 324
Heart failure, 181, 191, 300
Hidden layers, 5, 6, 12, 28, 92, 94, 202, 227, 250, 286, 385
    dimension of, 203
    first, 14, 207, 385
    second, 14, 205, 387
Hidden nodes, 87, 96, 203, 251, 259, 264
High-power fields (HPFs), 140, 142
Hippocampal volume, 361, 362, 364, 368, 370, 373
Hippocampus, 224, 260, 263, 265
Histogram of oriented gradients (HOG), 26, 33, 85, 199–201, 214, 219, 303
Histology images, 139, 141
Hyperparameters, 358

I
ICPR MITOSIS dataset, 140
Image analysis, 106, 180, 227
Image classification, 22, 26, 32, 34, 87, 93, 95, 135, 334, 419
Image classification tasks, 32, 35, 85, 90, 93, 99, 100
Image patches, 34, 35, 45, 93, 113, 115, 156, 166, 169, 186, 202, 227, 246, 247
Image registration, 239, 246, 249, 256, 259, 265
    methods, 246, 257
Image representation, 42, 157, 303, 305, 306
Image representation, schemes, 303
Image segmentation, 180
Image-based tool for counting nuclei (ITCN), 171
ImageNet, 27, 303, 307, 331, 332, 334, 406
ImageNet classification, 26, 307
ImageNet data, 303, 307
Images
    fluoroscopic, 272, 275, 283
    hematoxylin or eosin grayscale, 186
    original, 88, 138, 188
    radiology, 406, 407, 412, 414, 419
Implementation, 73, 93, 163, 182, 183, 305, 324, 368, 374
Improvement, 34, 72, 94, 100, 140, 260, 312, 313, 332, 334, 344, 373
Inclusion criteria, 359
Information
    contextual, 143, 184, 249, 323
    topological, 166, 172
Input channels, 280
Input data, 13, 84, 202, 249, 251, 324, 348, 352
Input feature maps, 8, 158
Input features, 13, 19, 229, 277
Input image, 9, 36, 42, 84, 109, 137, 158, 182, 226, 304, 307, 330, 334
Input layer, 5, 11, 158, 202, 229, 250, 358
Input patches, 10, 118, 228, 251, 386
Input training patches, 202, 253
Input vector, 13, 19, 158, 202
Intelligence, 65
Intensity, 143, 144, 147, 148, 185, 207, 209, 214, 216, 219, 225, 272, 279, 391
Intensity features, 186, 216, 384
Intensity patch, 199, 208, 219
Intensity transformation, 382
Intensity values, 94, 125, 246, 383, 389, 392, 393, 397
Intensity-based methods, 272, 285, 288
Intervention, 342, 344
Invariant, 85, 203, 254, 346
Iterations, 8, 258, 286, 292, 396
Iterative radial voting (IRV), 171

K
K-nearest neighbor for pose estimation, 273
Krizhevsky network, 307

L
Label fusion, 93
Labels
    assigned, 415
    correct, 88, 315, 415
    true, 59, 230, 239
Landmark detection, 57, 67, 71, 74, 78
    accurate, 43, 74
    anatomical, 83
    robust, 75
Landmarks, 68, 74, 77, 206, 288, 289, 386
Language, 42, 183, 411, 413
Latent Dirichlet allocation (LDA), 410
Layer-wise learning, 14, 252
Layers, 5, 6, 8, 13, 15, 17, 18, 27, 31–34, 36, 37, 45, 117, 118, 158, 160, 161, 202, 253, 312, 313
    connections between, 13
    final, 42, 227, 306, 327
    first, 17, 28, 158
    last, 158, 166, 170, 347
    neighboring, 5, 18, 202
    penultimate, 158, 312
    second, 17, 229
    single, 4, 38, 41, 247, 313
    sub-sampling, 96, 326
Learned feature representations, 86, 203, 205, 248, 256, 257, 259, 262–264
Learned features, 36, 38, 148, 214, 219, 247, 255
Learning models, 26, 71, 249, 250, 343, 354, 355
Learning problems, 38, 56, 344, 352, 354
Leave-1-patient-out cross-validation, 118–122
Left consolidation (LCN), 310
Left cuneus, 234, 237
Left pleural effusion (LPE), 310, 315
Lesions, 143, 321–325, 332, 334
    classification, 305, 322, 325
    detection, 322, 323, 325
    segmentation, 322, 323, 325
Likelihood map, 209, 210
Likelihood ratios, 311
Local binary patterns (LBP), 186, 199–201, 214, 219, 302
Local image residual (LIRs), 286–288, 290, 292, 293
Local information, 86, 88, 100
Local maxima, 112, 272, 290, 293
Local patches, 86, 88, 95, 207
    discriminative, 88, 94
    extracted, 95
Local regions, 29, 85, 100
    discriminative, 85, 86, 95, 100
    non-informative, 87
Localization, 34, 44, 106, 125, 157, 302, 317
Locations, 34, 56, 58, 111–115, 125, 145, 239, 277, 323, 417
Logistic regression (LR), 85, 87, 225
Logistic sigmoid function, 4, 6, 20, 251
Longitudinal change, 359
LONI dataset, 248, 262
Loss function, 30, 87–90, 110, 157, 161, 168, 389
LSDN (location-sensitive deep network), 382–385, 400
LSDN-1, 389
LSDN-2, 389
LSDN-small, 389
LSTM (long short term memory), 41–43
Lumen–intima and media–adventitia interfaces, 107, 115–117, 123
Lung diseases, 301, 302

M
Machine learning, 4, 56, 58, 64, 200, 248, 273, 324, 343, 406
Machine learning methods, 180, 342, 343, 409
Madabhushi, 191
Mammograms, 226, 321, 323, 325, 328, 330, 334
Mammography view, 322, 324, 330
Manual ground truth annotations, 183
Marginal space, 62, 63, 276
Marginal space deep learning (MSDL), 56, 61, 77
Marginal space learning (MSL), 56, 61, 67
Marginal space regression (MSR), 277, 286–288, 292
Markov decision process (MDP), 65, 67
Mass, 325, 330–332, 334
Matlab, 165, 169
Max pooling, 182, 229, 254, 259, 264, 330, 331
Maximally stable extremal region (MSER), 156
Media–adventitia interface, 115, 116, 122
Medical image analysis (MIA), 16, 74, 83, 85, 106, 157, 180, 181, 200, 226, 239, 315, 324, 325, 419
Medical image applications, 87, 246, 248
Medical images, 83, 84, 86, 87, 134, 144, 149, 224, 239, 246, 250, 253, 299, 335, 387, 406, 419
Medio-lateral oblique (MLO), 322, 331
Methodology, 58, 87, 158, 166, 324, 326
MHD (modified Hausdorff distance), 188
Micro-calcifications, 322, 324–326, 330–332, 334
Microscopy images, 156, 166, 172
Mild cognitively impaired subjects (MCIs), 342, 360–362, 364, 370–372
    late, 359–362, 370, 371
Mimics, 137, 139, 140, 143–145
Mini mental state examination (MMSE), 359, 361, 364, 367, 372, 373
Mini-batches, 21, 30, 69, 110, 230
Minimum variance unbiased (MVUB), 351, 355, 358, 359
Mitoses, 134, 139, 157
Mitosis detection, 134–137, 139, 140, 166, 182, 225, 325
    automated, 141
MKL (multi-kernel support vector machine), 361
MKLm (MKL markers), 361, 362, 368, 370, 373
Modalities, 38, 43, 272, 284, 323, 342, 355, 360, 362, 371, 382, 388, 392, 393, 403
Modality propagation (MP), 383, 389, 400
Model selection and training parameters, 71
Model’s outputs, 161, 162, 169, 358
Montreal cognitive assessment (MOCA), 361, 362, 364, 373
Morphological signature, 249, 250, 256, 263
MR brain images, 248, 255
MR images, 74, 197, 198, 211, 214, 247, 259, 263
MR (magnetic resonance), 84, 135, 197, 409
MR prostate images, 200, 207, 219
MR volumes, 135, 143, 144
MRI images, 305, 360, 370
MRI (magnetic resonance images), 224, 228, 272, 353, 355, 360, 371, 382, 409
MRI scans, 388, 397
MSDL framework, 71
MSER (maximally stable extremal region), 156
MTREproj (mean target registration error in the projection direction), 285, 290, 292
Multi-atlas, 199–201, 206, 207, 224, 228
Multi-instance learning (MIL), 86, 93–95, 97–99
Multi-layer perceptron (MLP), 5, 33, 227, 273, 385
Multi-modal baseline rDAm, 361, 363–365
Multi-task learning, 43
Mutual information maximization, 384, 393, 401
Mutual information (MI), 231, 272, 289, 382, 384, 392, 393, 397
Myocytes, 180–183, 185–187

N
Natural language processing (NLP), 84, 343, 352, 406, 407, 411, 417, 419
Neighbors, 93, 166, 169, 391, 392
NERS (non-overlapping extremal regions selection), 163, 164, 166, 171
Network, 9, 13, 20, 26, 28, 32, 35, 43, 44, 57, 59, 84, 140, 144, 286, 354, 355
    cascaded, 144
    decoder, 247, 248, 250
    deep belief, 26
    simplified, 388
    smaller, 349, 350, 390
Network architecture, 21, 76, 138, 146, 148, 163, 183, 230, 307
    learning, 6
Network parameters learning, 6
Network representation, 308
Network structure, 253, 280, 347
Neural language models, 411
Neural network model, 45
    deep convolutional, 415
Neural networks, 4, 12, 22, 29, 30, 34, 59, 67, 70, 76, 148, 160, 225–227, 285, 345, 384
    deep max-pooling convolutional, 137
    feed-forward, 6, 411
    multi-layer, 5, 8, 14, 87, 346
    single-layer, 4, 346
    sparse adaptive deep, 59
    two-layer, 11
Neurons, 4, 11, 30, 139, 182, 227
Non-informative patches, 92
Nonlinear transformation, 12, 158, 224, 225, 250
Nonlinearities, 29
Number of hidden units, 5, 11–13

O
Object detection, 26, 34, 35, 38, 39, 43, 71, 73, 134, 155, 157, 200
Object recognition, 84, 140, 302, 353, 371
Optimal enrichment criterion, 345, 350
Optimization, 93, 230, 290, 395
Optimization problem, 161, 167, 387, 394, 396
Optimizer, 272, 289, 290
Orientations, 56, 58, 61, 72, 124, 277, 278, 283, 304, 312
Outcome measure, 344, 361
Output layer, 5, 12, 14, 91, 144, 160, 202, 204, 205, 227, 229, 250, 280, 286, 346, 386, 387
Outputs, 9, 13, 30, 36, 41, 44, 87, 92, 123, 136, 158, 161, 167, 168, 226, 357, 358
Overfitting, 9, 20, 21, 59, 61, 94, 97, 230

P
Paired t-test, 122, 214, 219, 262
Parameter space, 56, 61, 276, 286, 347
Parameter space partitioning (PSP), 276, 286–288, 292
Parameters, 140, 161, 211, 231, 277
    large number of, 29, 31, 33, 148
    learned, 349, 350
    model’s, 30, 159, 161, 162, 168
    out-of-plane rotation, 277
    out-of-plane translation, 277
    tuned, 118
Patch binarization, 125, 128
Patch representation, 182, 187
Patches, 35, 58, 69, 91, 97, 107, 112, 113, 115, 121, 122, 125, 140, 144, 156, 187, 229
    local image, 35, 166
    sampled image, 255
    selected image, 164, 255
    training image, 159, 252
Pathologies, 182, 301–303, 310, 313–315, 317, 342
    digital, 180, 187
    examined, 310
Pattern matching, 407, 409
PCNN, 94, 97
Perceptron, 4
Performance, 31–33, 43, 44, 70–74, 77, 78, 107, 125, 126, 134, 135, 140–143, 147, 148, 156, 157, 165, 166, 171, 172, 233, 238, 239, 286, 287
Performance speedup, 127
Perturbations, 284
PHOG (pyramid histogram of oriented gradients), 303, 310
Picture archiving and communication systems (PACS), 406, 408
Pixel-wise classification (PWC), 137, 170
Placebos, 343, 344
Pneumonia, 300, 301, 415
Pooling layers, 8, 9, 37, 110, 158, 226, 227, 306
Population, 344, 345, 350, 359, 362, 372
Pose estimation via hierarchical learning (PEHL), 274, 285–290, 292, 293
Positive predictive value (PPV), 188
Pre-trained CNN, 88, 307
    model, 95, 306, 307
Pre-trained models, 22, 26, 32, 45, 332–335
Precision, 93, 147, 165, 170, 171, 212, 214, 287
Predictive power, 181, 362, 364, 369, 370
Preprocessing, 84, 95, 106, 140, 146, 159, 211, 315, 360
Pretraining, 12, 14
    layer-wise, 14, 347
Principal component analysis (PCA), 203, 247–250, 259, 260, 305
Probability, 5, 112, 114, 116, 126, 136, 138, 144, 160, 187, 255, 304, 310, 311, 343, 416
Probability signals, 112, 118, 125, 128
Problem formulation, 58, 67
Prostate, 199, 206, 211, 219
Prostate boundary, 197, 198, 200, 201, 210, 211, 214, 219
Prostate likelihood map, 206, 207, 209, 210, 219
Prostate region, 198, 206, 209
Prostate segmentation, 199, 216, 217
    MR, 199, 219
Proximity mask, 166, 169
Proximity patch, 166
PsyEF (summary score for executive function), 361, 362, 364, 372
PsyMEM (neuropsychological summary score for memory), 361, 372
PWC (pixel-wise classification), 137, 170
Python, 45, 93
    Theano, 22, 45, 149, 230

Q
Question, 43

R
Radiology text, 410, 412, 417
RadLex, 414
Random forest (RF), 148, 156, 182, 185, 186, 188, 189, 191, 382
Randomized deep networks, 344, 350, 352, 353, 356, 360
Randomized denoising autoencoder marker (rDAm), 358, 359, 361–364, 367, 368, 370–373
Randomized dropout network marker (rDrm), 358–368, 370–373
RAVLT (Rey auditory verbal learning test), 361, 372, 373
RBM (restricted Boltzmann machines), 15, 26, 347
RCasNN (randomly initialized model), 141
RCN (right consolidation), 310
RDA (randomized denoising autoencoders), 355, 356, 359, 362, 364, 370, 371, 373
Recall, 93, 141, 147, 163, 165, 170, 171, 262
Receptive fields, 9
Recognition, 57, 371
Reconstructions, 203, 205, 252, 349
Recover, 10, 248, 250, 273, 274, 290
Rectified linear unit (ReLu), 20, 29, 40, 59, 93, 144, 158, 160, 227, 280
Recurrent neural network (RNN), 37, 40–42, 413
Registration, 206, 224, 231, 238, 255, 258, 324, 334, 386, 388
    2-D/3-D, 272–275, 283, 284, 288, 289, 292, 293
    real-time, 287, 290, 292, 293
Registration accuracy, 259, 265, 272, 285
Registration methods, 263, 287
    2-D/3-D, 272, 288, 293
    baseline HAMMER, 260, 262, 263
    conventional, 248
Registration problems, 273–275, 292, 293
Registration-based methods, 223, 224, 228, 231, 233, 234, 238, 239
Regressors, 157, 275, 282, 289, 384
Reinforcement learning, 65, 67, 69
Representations, 12, 14, 17, 26, 35, 42, 55, 59, 84, 148, 184, 225, 226, 254, 275, 311–313
Responses, 59, 88, 91, 203
Restored wavelets, 111, 112, 125
Reward, 68
Right pleural effusion, 301, 310
Right pleural effusion (RPE), 301, 310, 315
RMSDproj (root mean squared distance in the projection direction), 287, 288
Robust approach, 156
Robust cell detection, 155, 165, 171
Robust cell detection using convolutional neural network, 165
ROI localization, 108, 115, 121, 126, 128
ROI (region of interest), 106, 112–115, 121, 125, 136, 183, 259, 260, 278, 285
Root mean squared error (RMSE), 289

S
Sample enrichment, 344, 370
Sample sizes, 361, 365, 367, 373
Scales, 21, 56, 58, 143, 226, 304, 360
Screening stage, 144
SDA (stacked DA), 348–350, 354, 356, 357, 371
Segmentation, 22, 58, 71, 72, 84, 86, 100, 180, 181, 199, 200, 212, 224, 225, 227, 228, 231–234, 237–240, 302, 315
    ground-truth, 212, 216
    registration-based, 228, 231
    semantic, 35, 135
    stroma, 180, 182, 185
Segmentation accuracy, 107, 128, 214, 219
Segmentation maps, 328, 331, 334
Shallow models, 247, 250
Shapes, 64, 107, 125, 140, 156, 170, 210, 304
ShrinkConnect, 388–390, 401
SIFT, 26, 33, 85, 199, 248, 256, 303, 304
Signal-to-noise ratio (SNR), 246, 263, 322, 388
Similarity maps, 201
Similarity measures, 231, 272, 273, 285, 289, 292, 392
Small sample regime, 344, 350, 353
Sonographer, 106, 109, 117, 127
Source and target modalities, 382, 384, 390, 396
Sparse adaptive deep neural networks (SADNN), 57, 59, 61, 62, 64, 74
Sparse auto-encoder (SAE), 13, 14, 17, 203, 204, 247, 248, 252–254, 265
Sparse histogramming MI (SHMI), 293
Sparse patch matching, 206, 214
Sparse representation, 207, 210, 248, 396
    coupled, 382, 389, 396, 400
Sparsely distributed objects, 134, 150
Spatial information, 143, 144, 149, 227, 239, 305, 382, 383, 401
Spatial locations, 28, 225, 385, 386, 389
Spatial resolution, 36, 37, 71
Stacked sparse auto-encoder (SSAE), 201, 203–206, 208, 211, 213, 214, 216, 219
    networks, 205
Stages
    boosting, 88, 91, 96
Standard deviation, 16, 117, 122, 163, 171, 186, 260, 265, 284, 292, 304, 310, 331, 345, 350
State-of-the-art image classification method, 305
States, 67, 68
Stochastic gradient descent (SGD), 8, 30, 32, 37, 69, 76, 283
Stride, 28, 36, 138
Stroma, 180–183, 186, 190, 191
Stromal tissue, 180, 187
Structured regression model, 166–168, 171
Success rate, 274, 285
Superior performance, 128, 164, 165, 172, 214, 219, 265
Supervised SSAE, 201, 205, 206, 213, 214, 216, 217, 219
Synthetic data, 93, 294

T
Target image, 88, 201, 206, 210, 219
Target information, 159
Target modalities, 384
Target modality images, 389, 391
Target objects, 138, 199, 272, 277, 284, 285
Template image, 255, 258, 264
Tensorflow, 22, 45
Test patients, 117, 118, 120–123, 125
Test set, 71, 117, 230, 331, 363
Testing images, 162
Texture, 140, 185, 224, 325
Tissue, 180, 259
Tissue segmentation, 182
Topic modeling, 410
Total knee arthroplasty (TKA), 283–286, 290
Toy example, 40, 89, 90
Training, 31, 45, 60, 64, 74, 185, 230, 253, 326, 382, 386, 388, 389
    two-stage, 331
Training annotations, 182, 187, 190, 191
Training data, 14, 27, 31, 32, 84, 140, 141, 156, 159, 166, 168, 183, 184, 187, 188, 203, 225, 239, 240, 250
    paired, 384, 390, 391, 397, 401
    synthetic, 289
Training dataset, 139–141, 230, 238
Training images, 32, 87, 110, 141, 239, 246, 255, 259, 260, 264, 331, 389
Training patches, 113, 115, 118, 121, 160, 183, 184, 186, 202, 203
Training patients, 117–119, 121
Training PEHL, 286, 289, 294
Training phase, 145, 229
Training regressors, 274
Training samples, 8, 11, 13, 59, 93, 137, 139–141, 161, 167, 168, 247, 283, 347, 387
    artificial, 332
Training set, 13, 63, 64, 71, 88, 93, 112, 115, 117, 145, 147, 182, 185, 203, 228, 230
    stratified, 113, 115
Training time, 20–22, 74, 332
Transfer learning, 140, 299, 307, 324
Transformation parameters, 56, 61, 273, 278, 289, 290, 293, 349
Transformations, 56, 139, 210, 274, 276, 294, 345, 357, 383, 386
Translation, 9, 10, 35, 36, 42, 61, 62, 90, 93, 95, 112, 113, 115, 137, 140, 145, 226, 227, 254, 276–278
Treatment, 143, 182, 197, 342–345, 350
Trial, 200, 342, 350, 368, 371
Tricks, 31, 33
True negative rate (TNR), 188
True negative (TN), 315
True positive rate (TPR), 188
True positive (TP), 141, 142, 315, 323
True targets, 136
Tuberculosis, 301

U
Ultrasound images, 56, 71, 124
UMLS Metathesaurus, 414
Unified medical language system (UMLS), 414
Unsupervised SSAE, 201, 205, 213, 214, 216, 217

V
Validation set, 76, 230
Vanilla deep network (VDN), 389
Vanishing gradient problem, 11, 20, 41
Ventricle, 210, 234, 237, 255, 259
VGG network, 313
VGG-L4, 311
VGG-L5, 308, 313
Videos, 38, 106, 108, 112, 117, 123
    fluoroscopic, 283
Virtual implant planning system (VIPS), 283, 285, 286, 290
Visible layer, 4, 15, 17, 18
Volumes, 39, 145, 147, 224
Volumetric data, 143
Voting confidence, 159
Voting offsets, 159
Voting units, 160
Voxels, 8, 57, 201, 208, 212, 227, 229, 234, 255, 272, 353, 354, 371, 384, 391, 392, 397
    center, 385, 386, 392

W
Weak learners, 352
Weight units, 160
White matter (WM), 259
Whole-slide imaging (WSI), 180, 187
Word embedding, 411
Word-to-vector models, 411

X
X-ray attenuation map, 272
X-ray echo fusion (XEF), 284, 289
X-ray images, 272, 273, 275, 277, 279, 283, 284, 287, 289
    real, 281, 294
    synthetic, 275, 279, 281
X-ray imaging model, 274
XEF dataset, 287

Z
Zone, 277
