

The multimodality cell segmentation challenge: towards universal solutions

Ma, J., Xie, R., Ayyadhury, S., Ge, C., Gupta, A., Gupta, R., Gu, S., Zhang, Y., Lee, G., Kim, J., Lou, W., Li, H., Upschulte, E., Dickscheid, T., de Almeida, J., Wang, Y., Han, L., Yang, X., Labagnara, M., Gligorovski, V., Scheder, M., Rahi, S., Kempster, C., Pollitt, A., Espinosa, L., Mignot, Y., Middeke, J., Eckardt, J.-N., Li, W., Li, Z., Cai, X., Bai, B., Greenwald, N., Van Valen, D., Weisbart, E., Cimini, B., Cheung, T., Bruck, O., Bader, G. and Wang, B. (2024) The multimodality cell segmentation challenge: towards universal solutions. Nature Methods. ISSN 1548-7105

Text (Accepted Version), 12MB. Restricted to Repository staff only until 26 September 2024.

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

To link to this item, use the DOI: 10.1038/s41592-024-02233-6

Abstract/Summary

Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual intervention to specify hyperparameters across different experimental settings. Here, we present a multimodality cell segmentation benchmark comprising over 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only exceeds existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
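The record does not include the challenge's evaluation code, but instance-segmentation benchmarks of this kind are commonly scored with an F1 measure at a fixed intersection-over-union (IoU) threshold. The following is a minimal, self-contained sketch of such a metric in Python/NumPy; the function name f1_at_iou, the greedy matching strategy and the 0.5 threshold are illustrative assumptions, not the challenge's official protocol.

    # Hypothetical sketch of an instance-segmentation F1 metric at a fixed
    # IoU threshold; NOT the challenge's official evaluation code.
    import numpy as np

    def f1_at_iou(gt_labels: np.ndarray, pred_labels: np.ndarray, thr: float = 0.5) -> float:
        """F1 score between two integer label maps (0 = background)."""
        gt_ids = [i for i in np.unique(gt_labels) if i != 0]
        pred_ids = [i for i in np.unique(pred_labels) if i != 0]
        if not gt_ids and not pred_ids:
            return 1.0
        if not gt_ids or not pred_ids:
            return 0.0

        # Pairwise IoU between every ground-truth and predicted instance.
        iou = np.zeros((len(gt_ids), len(pred_ids)))
        for gi, g in enumerate(gt_ids):
            g_mask = gt_labels == g
            for pi, p in enumerate(pred_ids):
                p_mask = pred_labels == p
                inter = np.logical_and(g_mask, p_mask).sum()
                union = np.logical_or(g_mask, p_mask).sum()
                iou[gi, pi] = inter / union if union else 0.0

        # Greedy one-to-one matching above the IoU threshold.
        tp = 0
        used = set()
        for gi in range(len(gt_ids)):
            pi = int(np.argmax(iou[gi]))
            if iou[gi, pi] >= thr and pi not in used:
                tp += 1
                used.add(pi)
        fp = len(pred_ids) - tp
        fn = len(gt_ids) - tp
        return 2 * tp / (2 * tp + fp + fn)

For example, f1_at_iou(gt, pred) on two integer-labeled masks of the same shape returns 1.0 only when every ground-truth cell is matched one-to-one by a predicted cell with IoU of at least 0.5.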

Item Type: Article
Refereed: Yes
Divisions: Life Sciences > School of Biological Sciences > Biomedical Sciences
ID Code: 115930
Publisher: Nature

