
REFERENCES

[1] D. Stowell, D. Giannoulis, E. Benetos, M. Lagrange, and M. D. Plumbley, "Detection
and classification of acoustic scenes and events," IEEE Transactions on Multimedia,
vol. 17, no. 10, pp. 1733–1746, Oct. 2015.

[2] A. Jamal and S. Al-Azani, "A Machine-Learning Approach for Children's Pain
Assessments Using Prosodic and Spectral Acoustic Features," 2023 3rd International
Conference on Electrical, Computer, Communications and Mechatronics Engineering
(ICECCME), Tenerife, Canary Islands, Spain, 2023, pp. 1–6, doi:
10.1109/ICECCME57830.2023.10252478.

[3] L. L. LaGasse, A. R. Neal, and B. M. Lester, "Assessment of infant cry: Acoustic cry
analysis and parental perception," Mental Retardation and Developmental
Disabilities Research Reviews, vol. 11, no. 1, pp. 83–93, 2005.

[4] R. Cohen and Y. Lavner, “Infant cry analysis and detection,” in Proc. IEEE 27th Conv.
Electr. Electron. Eng. Isr., Nov. 2012, pp. 1–5.

[5] J. Saraswathy, M. Hariharan, S. Yaacob, and W. Khairunizam, "Automatic
classification of infant cry: A review," in 2012 International Conference on Biomedical
Engineering (ICoBE), Feb. 2012, pp. 543–548.

[6] G. Varallyay, "The melody of crying," International Journal of Pediatric
Otorhinolaryngology, vol. 71, no. 11, pp. 1699–1708, Nov. 2007.

[7] R. Cohen and Y. Lavner, "Infant cry analysis and detection," in 2012 IEEE 27th
Convention of Electrical and Electronics Engineers in Israel (IEEEI), 2012, pp. 2–6.

[8] M. H. C. Sudul et al., "Automatic Classification of Infant's Cry Using Data Balancing
and Hierarchical Classification Techniques," 2023 30th International Conference on
Systems, Signals and Image Processing (IWSSIP), Ohrid, North Macedonia, 2023,
pp. 1–5, doi: 10.1109/IWSSIP58668.2023.10180257.

[9] A. Gorin, C. Subakan, S. Abdoli, J. Wang, S. Latremouille, and C. Onu, "Self-Supervised
Learning for Infant Cry Analysis," 2023 IEEE International Conference on Acoustics,
Speech, and Signal Processing Workshops (ICASSPW), Rhodes Island, Greece, 2023,
pp. 1–5, doi: 10.1109/ICASSPW59220.2023.10193421.

[10] N. Meephiw and P. Leesutthipornchai, "MFCC Feature Selection for Infant Cry
Classification," 2022 26th International Computer Science and Engineering
Conference (ICSEC), Sakon Nakhon, Thailand, 2022, pp. 123–127, doi:
10.1109/ICSEC56337.2022.10049328.

[11] Z. Firas, A. A. Nashaat, and G. Ahmad, "Optimizing Infant Cry Recognition: A Fusion of
LPC and MFCC Features in Deep Learning Models," 2023 Seventh International
Conference on Advances in Biomedical Engineering (ICABME), Beirut, Lebanon, 2023,
pp. 1–6, doi: 10.1109/ICABME59496.2023.10293083.

[12] K. Alam, M. H. Bhuiyan, and M. F. Monir, "Bangla Speaker Accent Variation
Classification from Audio Using Deep Neural Networks: A Distinct Approach,"
TENCON 2023 - 2023 IEEE Region 10 Conference (TENCON), Chiang Mai, Thailand,
2023, pp. 134–139, doi: 10.1109/TENCON58879.2023.10322411.

[13] K. Alam, N. Nigar, H. Erler, and A. Banerjee, "Speech Emotion Recognition from Audio
Files Using Feedforward Neural Network," 2023 International Conference on
Electrical, Computer and Communication Engineering (ECCE), Chittagong,
Bangladesh, 2023, pp. 1–6, doi: 10.1109/ECCE57851.2023.10101492.

[14] D. Haba, Data Augmentation with Python: Enhance Accuracy in Deep Learning with
Practical Data Augmentation... for Image, Text, Audio & Tabular Data. S.l.: Packt
Publishing Limited, 2023.

[15] M. Muthumari, C. A. Bhuvaneswari, J. E. N. S. Kumar Babu, and S. P. Raju, "Data
Augmentation Model for Audio Signal Extraction," 2022 3rd International Conference
on Electronics and Sustainable Communication Systems (ICESC), Aug. 2022,
doi: 10.1109/icesc54411.2022.9885539.

[16] J. Pons, T. Lidy, and X. Serra, "Experimenting with musically motivated convolutional
neural networks," in Proc. 14th Int. Workshop Content-Based Multimedia Indexing
(CBMI), Bucharest, Romania, Jun. 2016, pp. 1–6.

[17] J. O. Garcia and C. R. Garcia, "Mel-frequency cepstrum coefficients extraction from
infant cry for classification of normal and pathological cry with feed-forward neural
networks," in Proceedings of the International Joint Conference on Neural Networks,
2003, vol. 4, pp. 3140–3145.

[18] C.-Y. Chang, Y.-C. Hsiao, and S.-T. Chen, "Application of incremental SVM learning for
infant cries recognition," in 2015 18th International Conference on Network-Based
Information Systems, 2015, pp. 607–610.

[19] J. Saraswathy et al., "Time-frequency analysis in infant cry classification using
quadratic time frequency distributions," Biocybernetics and Biomedical Engineering,
vol. 38, no. 3, pp. 634–645, 2018.

[20] H. B. Sailor and H. A. Patil, "Auditory Filterbank Learning Using ConvRBM for Infant
Cry Classification," in INTERSPEECH, 2018, pp. 706–710.

[21] M. Hariharan et al., "Improved binary dragonfly optimization algorithm and wavelet
packet-based non-linear features for infant cry classification," Computer Methods
and Programs in Biomedicine, vol. 155, pp. 39–51, 2018.

[22] M. Moharir, M. U. Sachin, R. Nagaraj, M. Samiksha, and S. Rao, "Identification of
asphyxia in newborns using GPU for deep learning," in 2017 2nd International
Conference for Convergence in Technology (I2CT), 2017, pp. 236–239.

[23] L. Liu, W. Li, X. Wu, and B. X. Zhou, "Infant cry language analysis and recognition: an
experimental approach," IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 3, pp.
778–788, 2019.

[24] I.-A. Bănică, H. Cucu, A. Buzo, D. Burileanu, and C. Burileanu, "Baby cry recognition in
real-world conditions," in 2016 39th International Conference on
Telecommunications and Signal Processing (TSP), 2016, pp. 315–318.
