Motivation: Quantitative large-scale cell microscopy is widely used in biological and medical research. Such experiments produce huge amounts of image data and thus require automated analysis. However, automated detection of cell outlines (cell segmentation) is typically challenging due to, for example, high cell densities, cell-to-cell variability and low signal-to-noise ratios.
Results: Here, we evaluate the accuracy and speed of various state-of-the-art approaches for cell segmentation in light microscopy images using challenging real and synthetic image data. The results vary between datasets and show that the tested tools are either not robust enough or computationally expensive, thus limiting their application to large-scale experiments. We therefore developed fastER, a trainable tool that is orders of magnitude faster while producing state-of-the-art segmentation quality. It supports various cell types and image acquisition modalities, but is easy to use even for non-experts: it has no parameters and can be adapted to specific image sets by interactively labelling cells for training. As a proof of concept, we segment and count cells in over 200 000 brightfield images (1388 × 1040 pixels each) from a six-day time-lapse microscopy experiment; identification of over 46 000 000 single cells requires only about two and a half hours on a desktop computer.
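To put the reported runtime in perspective, the short C++ sketch below converts the figures quoted above (200 000 images of 1388 × 1040 pixels, over 46 000 000 cells, about 2.5 hours) into approximate throughput rates. The derived per-second numbers are our own back-of-envelope estimates based on these abstract figures, not values reported by the authors in this form.

```cpp
// Back-of-envelope throughput check using only the numbers quoted in the
// abstract; the derived rates are illustrative estimates, not reported values.
#include <cstdio>

int main() {
    const double images = 200000.0;           // brightfield frames processed
    const double cells  = 46000000.0;         // single cells identified
    const double hours  = 2.5;                // approximate runtime on a desktop computer
    const double pixels = 1388.0 * 1040.0;    // pixels per frame

    const double seconds = hours * 3600.0;
    std::printf("images/s : %.1f\n", images / seconds);                  // ~22 frames per second
    std::printf("cells/s  : %.0f\n", cells / seconds);                   // ~5100 cells per second
    std::printf("Mpixel/s : %.1f\n", images * pixels / seconds / 1e6);   // ~32 megapixels per second
    return 0;
}
```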
Availability and implementation: C++ code, binaries and data at https://www.bsse.ethz.ch/csd/software/faster.html.
Contact: [email protected] or [email protected].
Supplementary information: Supplementary data are available at Bioinformatics online.