Inter-rater reliability of functional MRI data quality control assessments: a standardised protocol and practical guide using pyfMRIqc

Williams, B. ORCID: https://orcid.org/0000-0003-3844-3117, Hedger, N. ORCID: https://orcid.org/0000-0002-2733-1913, McNabb, C. B. ORCID: https://orcid.org/0000-0002-6434-5177, Rossetti, G. M. K. ORCID: https://orcid.org/0000-0002-9610-6066 and Christakou, A. ORCID: https://orcid.org/0000-0002-4267-3436 (2023) Inter-rater reliability of functional MRI data quality control assessments: a standardised protocol and practical guide using pyfMRIqc. Frontiers in Neuroscience, 17 (1070413). ISSN 1662-453X

Text (Open access), Published Version, 6MB. Available under License Creative Commons Attribution. Please see our End User Agreement before downloading.

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

To link to this item, use the DOI: 10.3389/fnins.2023.1070413

Abstract/Summary

Quality control is a critical step in the processing and analysis of functional magnetic resonance imaging data. Its purpose is to remove problematic data that could otherwise lead to downstream errors in the analysis and reporting of results. Manual inspection of data is laborious and susceptible to human error. The development of automated tools aims to mitigate these issues. One such tool is pyfMRIqc, which we previously developed as a user-friendly method for assessing data quality. Yet such tools still generate output that requires subjective interpretation of whether the quality of a given dataset meets an acceptable standard for further analysis. Here we present a quality control protocol using pyfMRIqc and assess the inter-rater reliability of four independent raters using this protocol for data from the fMRI Open QC project (https://osf.io/qaesm/). Raters classified each dataset as “include,” “uncertain,” or “exclude.” There was moderate to substantial agreement between raters for “include” and “exclude,” but little to no agreement for “uncertain.” In most cases only a single rater used the “uncertain” classification for a given participant’s data, with the remaining raters agreeing on an “include”/“exclude” decision in all but one case. We suggest several approaches to increase rater agreement and reduce disagreement for “uncertain” cases, aiding classification consistency.
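The agreement statistic is not named in this abstract; for multiple raters assigning categorical labels such as “include”/“uncertain”/“exclude,” Fleiss’ kappa is a common choice. The following is a minimal sketch, using hypothetical ratings (not the study’s data) and the statsmodels library, of how such agreement could be computed:

    # Minimal sketch: Fleiss' kappa for categorical QC decisions.
    # The ratings below are hypothetical, not the study's data.
    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Rows = participants, columns = the four raters.
    # Categories: 0 = "include", 1 = "uncertain", 2 = "exclude".
    ratings = np.array([
        [0, 0, 0, 0],  # unanimous "include"
        [0, 0, 1, 0],  # one rater "uncertain", the rest "include"
        [2, 2, 2, 2],  # unanimous "exclude"
        [0, 2, 2, 2],  # one dissenting "include"
        [0, 0, 0, 0],
    ])

    # Convert per-rater labels into a participants x categories count table.
    table, categories = aggregate_raters(ratings)

    kappa = fleiss_kappa(table, method="fleiss")
    print(f"Fleiss' kappa = {kappa:.2f}")

By the widely used Landis and Koch benchmarks, kappa values of 0.41–0.60 indicate “moderate” and 0.61–0.80 “substantial” agreement, which is the vocabulary the abstract uses.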

Item Type: Article
Refereed: Yes
Divisions: Interdisciplinary Research Centres (IDRCs) > Centre for Integrative Neuroscience and Neurodynamics (CINN)
Life Sciences > School of Psychology and Clinical Language Sciences > Ageing
Life Sciences > School of Psychology and Clinical Language Sciences > Department of Psychology
Life Sciences > School of Psychology and Clinical Language Sciences > Development
Life Sciences > School of Psychology and Clinical Language Sciences > Neuroscience
Life Sciences > School of Psychology and Clinical Language Sciences > Psychopathology and Affective Neuroscience
Life Sciences > School of Psychology and Clinical Language Sciences > Perception and Action
ID Code: 110300
Publisher: Frontiers
