Inter-rater reliability of functional MRI data quality control assessments: a standardised protocol and practical guide using pyfMRIqc

Williams, B. (ORCID: https://orcid.org/0000-0003-3844-3117), Hedger, N. (ORCID: https://orcid.org/0000-0002-2733-1913), McNabb, C. B. (ORCID: https://orcid.org/0000-0002-6434-5177), Rossetti, G. M. K. (ORCID: https://orcid.org/0000-0002-9610-6066) and Christakou, A. (ORCID: https://orcid.org/0000-0002-4267-3436) (2023) Inter-rater reliability of functional MRI data quality control assessments: a standardised protocol and practical guide using pyfMRIqc. Frontiers in Neuroscience, 17 (1070413). ISSN 1662-453X
DOI: 10.3389/fnins.2023.1070413

Abstract

Quality control is a critical step in the processing and analysis of functional magnetic resonance imaging data. Its purpose is to remove problematic data that could otherwise lead to downstream errors in the analysis and reporting of results. Manual inspection of data can be laborious and susceptible to human error. The development of automated tools aims to mitigate these issues. One such tool is pyfMRIqc, which we previously developed as a user-friendly method for assessing data quality. Yet, these methods still generate output that requires subjective interpretation about whether the quality of a given dataset meets an acceptable standard for further analysis. Here we present a quality control protocol using pyfMRIqc and assess the inter-rater reliability of four independent raters using this protocol for data from the fMRI Open QC project (https://osf.io/qaesm/). Data were classified by raters as either “include,” “uncertain,” or “exclude.” There was moderate to substantial agreement between raters for “include” and “exclude,” but little to no agreement for “uncertain.” In most cases only a single rater used the “uncertain” classification for a given participant’s data, with the remaining raters showing agreement on “include”/“exclude” decisions in all but one case. We suggest several approaches to increase rater agreement and reduce disagreement for “uncertain” cases, aiding classification consistency.
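Agreement among several raters assigning data to the three categories above is commonly quantified with Fleiss' kappa, which compares observed per-subject agreement with the agreement expected by chance. The sketch below is a minimal, self-contained illustration of that statistic; the rating counts are invented for demonstration and are not data from the study, and the function name `fleiss_kappa` is our own.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for multi-rater categorical agreement.

    counts[i][j] = number of raters who assigned subject i to category j.
    Assumes every subject was rated by the same number of raters.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_categories = len(counts[0])
    # Overall proportion of assignments falling in each category
    totals = [sum(row[j] for row in counts) for j in range(n_categories)]
    p_j = [t / (n_subjects * n_raters) for t in totals]
    # Observed agreement for each subject
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    P_bar = sum(P_i) / n_subjects          # mean observed agreement
    P_e = sum(p * p for p in p_j)          # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Illustrative counts: four raters, columns = include / uncertain / exclude
ratings = [
    [4, 0, 0],
    [3, 1, 0],
    [0, 0, 4],
    [4, 0, 0],
    [0, 1, 3],
]
print(round(fleiss_kappa(ratings), 3))  # prints 0.646
```

By the usual Landis–Koch benchmarks, values around 0.6–0.8 would be read as "substantial" agreement, which is the range of interpretation the abstract refers to for the "include" and "exclude" categories.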