The fourth monocular depth estimation challenge

Text - Accepted Version


It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.


Obukhov, A., Poggi, M., Tosi, F., Arora, R. S., Spencer, J., Russell, C., Hadfield, S., Bowden, R., Wang, S., Ma, Z., Chen, W., Xu, B., Sun, F., Xie, D., Zhu, J., Lavreniuk, M., Guan, H., Wu, Q., Zeng, Y., Lu, C., Wang, H., Zhou, G., Zhang, H., Wang, J., Rao, Q., Wang, C., Liu, X., Lou, Z., Jiang, H., Chen, Y., Xu, R., Tan, M., Qin, Z., Mao, Y., Liu, J., Xu, J., Yang, Y., Zhao, W., Jiang, J., Liu, X., Zhao, M., Ming, A., Chen, W., Xue, F., Yu, M., Gao, S., Wang, X., Omotara, G., Farag, R., Demby’s, J., Tousi, S. M. A., DeSouza, G. N., Yang, T.-A., Nguyen, M.-Q., Tran, T.-P., Luginov, A. and Shahzad, M. ORCID: https://orcid.org/0009-0002-9394-343X (2025) The fourth monocular depth estimation challenge. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 13-15 Jun 2025, Nashville, USA. doi: 10.1109/CVPRW67362.2025.00615

Abstract/Summary

This paper presents the results of the fourth edition of the Monocular Depth Estimation Challenge (MDEC), which focuses on zero-shot generalization to the SYNS-Patches benchmark, a dataset featuring challenging environments in both natural and indoor settings. In this edition, we revised the evaluation protocol to use least-squares alignment with two degrees of freedom to support disparity and affine-invariant predictions. We also revised the baselines and included popular off-the-shelf methods: Depth Anything v2 and Marigold. The challenge received a total of 24 submissions that outperformed the baselines on the test set; 10 of these included a report describing their approach, with most leading methods relying on affine-invariant predictions. The challenge winners improved the 3D F-Score over the previous edition's best result, raising it from 22.58% to 23.05%.
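As a rough illustration of the alignment step mentioned in the abstract (least-squares alignment with two degrees of freedom, i.e. a per-image scale and shift), the sketch below fits s and t so that s * pred + t best matches the ground truth over valid pixels before computing an error metric. This is a generic sketch of the technique, not the challenge's official evaluation code; the function name, variable names, and the synthetic example data are illustrative assumptions.

```python
import numpy as np

def align_scale_shift(pred, gt, mask):
    """Least-squares alignment with two degrees of freedom (scale s, shift t),
    so that s * pred + t best matches gt over valid pixels.

    Generic sketch of scale/shift alignment for affine-invariant depth or
    disparity predictions; not the official MDEC evaluation code.
    """
    p = pred[mask].astype(np.float64)
    g = gt[mask].astype(np.float64)
    # Solve min_{s,t} || [p, 1] @ [s, t]^T - g ||_2 with a linear least-squares fit.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t

# Example usage with synthetic data standing in for a depth (or disparity) map.
rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 80.0, size=(128, 128))            # "ground-truth" depth
pred = 0.5 * gt + 3.0 + rng.normal(0, 0.1, gt.shape)    # affine-related prediction
mask = gt > 0                                            # valid-pixel mask
aligned = align_scale_shift(pred, gt, mask)
abs_rel = np.mean(np.abs(aligned[mask] - gt[mask]) / gt[mask])
print(f"AbsRel after scale/shift alignment: {abs_rel:.4f}")
```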


Item Type: Conference or Workshop Item (Paper)
URI: https://centaur.reading.ac.uk/id/eprint/124878
Identification Number/DOI: 10.1109/CVPRW67362.2025.00615
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science

