A Deep Learning Approach For Efficient Decision Making in Healthcare Informatics
"Deep learning"[3]
• Deep learning allows computational models that are composed of
multiple processing layers to learn representations of data with
multiple levels of abstraction. These methods have dramatically
improved the state-of-the-art in speech recognition, visual object
recognition, object detection and many other domains such as drug
discovery and genomics.
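As a simple illustration of such a multi-layer model, the sketch below stacks several processing layers so that each learns a progressively more abstract representation of its input. It assumes PyTorch is available; the input dimension, layer sizes, and two-class output are placeholders for illustration, not values taken from this work.

```python
# Minimal sketch (illustrative only): a model composed of multiple processing
# layers, each learning a more abstract representation of its input.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(256, 128),  # first level of abstraction
    nn.ReLU(),
    nn.Linear(128, 64),   # second, more abstract representation
    nn.ReLU(),
    nn.Linear(64, 2),     # task-specific output (e.g., two diagnostic classes)
)

x = torch.randn(8, 256)   # a batch of 8 hypothetical feature vectors
logits = model(x)         # forward pass through all processing layers
print(logits.shape)       # torch.Size([8, 2])
```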
LITERATURE SURVEY
• Misclassification Issues
Despite some recent work on visualizing high-level features through the weight filters of a CNN [141], [142], the deep learning model as a whole is often not interpretable. Consequently, most researchers use deep learning approaches as a black box, with no way to explain why a model produces good results and no principled way to modify it when misclassification issues arise.
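To make the filter-visualization idea cited above ([141], [142]) concrete, the sketch below inspects the learned first-layer convolution kernels of a CNN. The toy network, its single input channel, and the 5x5 kernel size are illustrative assumptions, not the models used in the cited works.

```python
# Rough sketch: plot each learned first-layer convolution filter as an image.
import torch.nn as nn
import matplotlib.pyplot as plt

cnn = nn.Sequential(nn.Conv2d(1, 8, kernel_size=5), nn.ReLU())  # toy CNN

weights = cnn[0].weight.detach()            # shape: (8, 1, 5, 5)
fig, axes = plt.subplots(1, 8, figsize=(12, 2))
for i, ax in enumerate(axes):
    ax.imshow(weights[i, 0].numpy(), cmap="gray")  # one 5x5 filter per panel
    ax.axis("off")
plt.show()
```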
OBJECTIVES
• Overfitting Problem
A common problem that can arise during the training of a DNN (especially on small datasets) is overfitting, which may occur when the number of parameters in the network is comparable to, or larger than, the total number of samples in the training set. In this case, the network is able to memorize the training examples but cannot generalize to new samples it has not yet observed. Therefore, although the error on the training set is driven to a very small value, the error on new data will be high. To avoid overfitting and improve generalization, regularization methods such as dropout [143] are usually exploited during training.
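A minimal sketch of dropout regularization [143] in this spirit is shown below. PyTorch is assumed, and the layer sizes and dropout probability of 0.5 are arbitrary illustrative choices, not settings from this work.

```python
# Illustrative sketch: dropout layers interleaved with a small feed-forward network.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes activations during training,
    nn.Linear(64, 32),   # discouraging co-adaptation and memorization
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 2),
)

model.train()  # dropout active: units are dropped at random each forward pass
model.eval()   # dropout disabled at inference: the full network is used
```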
CONCLUSION
• We should not consider deep learning a silver bullet for every challenge posed by health informatics. In practice, it is still questionable whether the large amount of training data and computational resources needed to run deep learning at full performance are worthwhile, given that other fast learning algorithms may achieve comparable performance with fewer resources, less parameterization and tuning, and higher interpretability. We therefore conclude that deep learning has provided a positive revival of NNs and connectionism, enabled by the integration of the latest advances in parallel processing on coprocessors. Nevertheless, a sustained concentration of health informatics research exclusively around deep learning could slow the development of new machine learning algorithms that make more conscious use of computational resources and offer greater interpretability.
REFERENCES
• [1] R. Fakoor, F. Ladhak, A. Nazi and M. Huber, "Using deep learning to enhance
cancer diagnosis and classification", Proc. Int. Conf. Mach. Learn., pp. 1-7, 2013.
• [2] B. Alipanahi, A. Delong, M. T. Weirauch and B. J. Frey, "Predicting the
sequence specificities of DNA- and RNA-binding proteins by deep
learning", Nature Biotechnol., vol. 33, pp. 831-838, 2015.
• [3] Y. LeCun, Y. Bengio and G. Hinton, "Deep learning", Nature, vol. 521, no.
7553, pp. 436-444, 2015.
• [4] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data
with neural networks", Science, vol. 313, no. 5786, pp. 504-507, 2006.
• [5] H. R. Roth et al., "Improving computer-aided detection using convolutional
neural networks and random view aggregation", IEEE Trans. Med. Imag., vol. 35,
no. 5, pp. 1170-1181, May 2016.
• [6] P. Vincent, H. Larochelle, Y. Bengio and P.-A. Manzagol, "Extracting and
composing robust features with denoising autoencoders", Proc. Int. Conf. Mach.
Learn., pp. 1096-1103, 2008.
• [7] S. Rifai, P. Vincent, X. Muller, X. Glorot and Y. Bengio, "Contractive auto-
encoders: Explicit invariance during feature extraction", Proc. Int. Conf. Mach.
Learn., pp. 833-840, 2011.
• [8] J. Masci, U. Meier, D. Cireşan and J. Schmidhuber, "Stacked convolutional
auto-encoders for hierarchical feature extraction", Proc. Int. Conf. Artif. Neural
Netw., pp. 52-59, 2011.
• [9] G. E. Hinton, S. Osindero and Y.-W. Teh, "A fast learning algorithm for deep
belief nets", Neural Comput., vol. 18, no. 7, pp. 1527-1554, 2006.
• [10] C. Poultney et al., "Efficient learning of sparse representations with an
energy-based model", Proc. Adv. Neural Inf. Process. Syst., pp. 1137-1144,
2006.