Embryo Development Stage Onset Detection by Time Lapse Monitoring Based On Deep Learning
Wided Souid Miled1,2, Sana Chtourou3, Nozha Chakroun3 and Khadija Kacem Berjeb3
1 LIMTIC Laboratory, Higher Institute of Computer Science, University of Tunis El-Manar, Ariana, Tunisia.
2 National Institute of Applied Science and Technology, University of Carthage, Centre Urbain Nord, Tunisia
3 University of Medicine of Tunis, Laboratory of Reproductive Biology and Cytogenetic, Aziza Othmana Hospital, Tunisia
[email protected], sana [email protected], [email protected], [email protected]
Keywords: IVF, Pronuclei Detection, Embryo Selection, Computer Vision, Classification, Deep Learning, Sequential
Models.
Abstract: In Vitro Fertilisation (IVF) is a procedure used to overcome a range of fertility issues, giving many couples the
chance of having a baby. Accurate selection of embryos with the highest implantation potentials is a necessary
step toward enhancing the effectiveness of IVF. The detection and determination of pronuclei number during
the early stages of embryo development in IVF treatments help embryologists with decision-making regarding
valuable embryo selection for implantation. Current manual visual assessment is prone to observer subjectivity
and is a long and difficult process. In this study, we build a CNN-LSTM deep learning model to automatically
detect the pronuclear stage in IVF embryos, based on Time-Lapse Images (TLI) of their early development stages.
The experimental results show that pronuclei determination can be automated, as the proposed deep
learning based method achieved an accuracy of 85% in detecting pronuclear-stage embryos.
For the CNN backbone, in the field of medical image analysis, it is common to use a deep learning model pre-trained on a large and challenging image classification task, such as the ImageNet classification competition. The research organizations that develop models for these competitions often release their final models under a permissive license for reuse. These models can take days or weeks to train on modern hardware, but they can be reused directly, through transfer learning, for a specific target task. In this work, we opted for a VGG16 model pre-trained on the ImageNet competition dataset.

The input images are first converted into RGB. Furthermore, as each pixel value can vary from 0 to 255, representing the color intensity, feeding an image directly to the neural network will result in complex computations and a slow training process. To address this problem, we normalize these values to the range from 0 to 1 by dividing all pixel values by 255. Then, we labeled the dataset, marking images in the tPB2 phase as class 1, those attaining the tPN stage as class 2, and the remaining images, where no event occurs, as class 0. Finally, we split the dataset, conventionally, into 80% training data and 20% test data.
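To make these preprocessing steps concrete, the sketch below normalizes the pixel values, maps the annotated phases to the three classes, and performs the 80/20 split. It assumes the frames and their phase annotations are already loaded as in-memory arrays; the helper names and the scikit-learn split are illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def preprocess_frames(frames, phases):
    """Normalize pixel intensities and map annotated phases to class labels.

    frames : np.ndarray of shape (n_images, height, width, 3), uint8 in [0, 255]
    phases : sequence of strings such as "tPB2", "tPN" or "none"
    """
    # Scale intensities from [0, 255] to [0, 1]
    x = frames.astype("float32") / 255.0
    # Class 1: second polar body appearance (tPB2),
    # class 2: pronuclei appearance (tPN), class 0: no event
    label_map = {"tPB2": 1, "tPN": 2}
    y = np.array([label_map.get(p, 0) for p in phases], dtype="int64")
    return x, y

# Conventional 80/20 split of the labeled frames
# x, y = preprocess_frames(frames, phases)
# x_train, x_test, y_train, y_test = train_test_split(
#     x, y, test_size=0.2, random_state=42)
```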
4.2 Models Implementation
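As a minimal sketch of the model described in this paper, a pre-trained, frozen VGG16 backbone followed by an LSTM that predicts one of three classes for each frame, the following code illustrates one possible implementation. The sequence length, LSTM width, learning rate, and the choice of Keras itself are assumptions rather than the authors' reported configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

SEQ_LEN, IMG_SIZE, NUM_CLASSES = 30, 224, 3   # assumed sequence length and input size

# VGG16 feature extractor pre-trained on ImageNet, frozen for transfer learning
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(IMG_SIZE, IMG_SIZE, 3))
backbone.trainable = False

model = models.Sequential([
    # Apply the CNN backbone to every frame of the time-lapse sequence
    layers.TimeDistributed(backbone,
                           input_shape=(SEQ_LEN, IMG_SIZE, IMG_SIZE, 3)),
    # Model the temporal dependencies between consecutive frames
    layers.LSTM(128, return_sequences=True),
    # Per-frame prediction over the three classes (no event / tPB2 / tPN)
    layers.TimeDistributed(layers.Dense(NUM_CLASSES, activation="softmax")),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```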
Figure 6: Heatmaps generated by the Grad-CAM method on the tPN stage.

dependency with a deep learning technique. Their model's sensitivity reached 82%, but with only a 40% accuracy rate, which makes our method more accurate, with an 85% accuracy score and a 96% sensitivity score.

Regarding Gomez et al. (Gomez et al., 2022), the dataset used is composed of 337 thousand images from 873 annotated videos. This large ground truth helped apply three approaches, ResNet, LSTM, and ResNet-3D architectures, and demonstrate that they

5 CONCLUSION

Continuous embryo monitoring with time-lapse imaging enables time-based development metrics, alongside visual features, to assess an embryo's quality before transfer and provides valuable information about its likelihood of leading to a pregnancy. In this work, we developed a deep learning based model to classify a sequence of time-lapse human embryo images with the aim of helping embryologists with embryo selection for IVF implantations. The classification task aims to detect the tPB2 and tPN key instants from an input sequence of images by predicting the class of each image among three classes, denoting the appearance of the second polar body (tPB2), the appearance of the pronuclei (tPN), or none of the two events. The proposed model is a combination of a pre-trained VGG16 backbone and an LSTM network. It has proven powerful enough to fit the data, as it achieved a high training accuracy. In future work, our model can be enhanced by being incorporated into a pipeline in which a second part detects the number of pronuclei as 0PN, 1PN, 2PN or more. This pipeline can then be part of a whole automatic embryo assessment deep learning framework, integrating the work on blastocyst segmentation and cell counting.
REFERENCES

Adolfsson, E. and Andershed, A. (2018). Morphology vs morphokinetics: A retrospective comparison of inter-observer and intra-observer agreement between embryologists on blastocysts with known implantation outcome. JBRA Assisted Reproduction, 22(3):228–237.

Ciray, H., Campbell, A., Agerholm, I., Aguilar, J., Chamayou, S., Esbert, M., and Sayed, S. (2014). Proposed guidelines on the nomenclature and annotation of dynamic human embryo monitoring by a time-lapse user group. Human Reproduction, 29(12):2650–2660.

Dolinko, A. V., Farland, L. V., Kaser, D. J., et al. (2017). National survey on use of time-lapse imaging systems in IVF laboratories. Journal of Assisted Reproduction and Genetics, 34(9):1167–1172.

Fukunaga, N., Sanami, S., Kitasaka, H., et al. (2020). Development of an automated two pronuclei detection system on time-lapse embryo images using deep learning techniques. Reproductive Medicine and Biology, 19(3):286–294.

G. Vaidya, S. Chandrasekhar, R. G. N. G. D. P., and Banker, M. (2021). Time series prediction of viable embryo and automatic grading in IVF using deep learning. volume 15, pages 190–203.

Gardner, D. and Schoolcraft, W. (1999). In vitro culture of human blastocyst. Towards Reproductive Certainty: Fertility and Genetics Beyond 1999, pages 378–388.

Gomez, T., Feyeux, M., et al. (2022). Towards deep learning-powered IVF: A large public benchmark for morphokinetic parameter prediction. https://arxiv.org/abs/2203.00531.

I. Dimitriadis, N. Zaninovic, A. C. B., and Bormann, C. L. (2022). Artificial intelligence in the embryology laboratory: a review. volume 44, pages 435–448.

Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations.

L. Lockhart, P. Saeedi, J. A., and Havelock, J. (2019). Multi-label classification for automatic human blastocyst grading with severely imbalanced data. pages 1–6.

Leahy, B., Jang, W., Yang, H., et al. (2020). Automated measurements of key morphological features of human embryos for IVF. CoRR, abs/2006.00067.

Lockhart, L. (2018). Automating assessment of human embryo images and time-lapse sequences for IVF treatment.

Louis, C., Erwin, A., Handayani, N., et al. (2021). Review of computer vision application in in vitro fertilization: the application of deep learning-based computer vision technology in the world of IVF. Journal of Assisted Reproduction and Genetics, 38(3):1627–1639.

M. F. Kragh, J. Rimestad, J. B., and Karstoft, H. (2019). Automatic grading of human blastocysts from time-lapse imaging. volume 115, page 103494.

Rad, P. Saeedi, J. A., and Havelock, J. (2019). Cell-Net: Embryonic cell counting and centroid localization via residual incremental atrous pyramid and progressive upsampling convolution. volume 7, pages 81945–81955.

V. Raudonis, A. Paulauskaite-Taraseviciene, et al. (2019). Towards the automation of early-stage human embryo development detection. volume 18.

Yadav, S. and Sawale, M. D. (2023). A review on image classification using deep learning. volume 17.