Utilizing temporal information in fMRI decoding: Classifier using kernel regression methods.
pp. 560–571.
This paper describes a general kernel regression approach to predicting experimental conditions from activity patterns acquired with functional magnetic resonance imaging (fMRI). The standard approach is to use classifiers that predict conditions from activity patterns. Our approach involves training a separate regression machine for each experimental condition, so that a predicted temporal profile is computed for each condition. A decision function then classifies the responses from the test volumes into the corresponding category by comparing the predicted temporal profile elicited by each event against a canonical hemodynamic response function. This approach exploits the temporal information in the fMRI signal and retains more training samples, improving classification accuracy over an existing strategy. This paper also introduces efficient temporal compaction techniques, which operate directly on kernel matrices for kernel classification algorithms such as the support vector machine (SVM). Temporal compaction can convert the kernel computed from individual fMRI volumes directly into the kernel computed from beta-maps, from averages of volumes, or into a spatio-temporal kernel. The proposed method was applied to three datasets. The first is a block-design experiment with three conditions of image stimuli. The method outperformed SVM classifiers using three different types of temporal compaction in single-subject leave-one-block-out cross-validation. Our method achieved 100% classification accuracy for six of the subjects and an average of 94% accuracy across all 16 subjects, exceeding the best SVM result of 83% accuracy (p = 0.008). The second dataset is also a block-design experiment, with two conditions of visual attention (left or right). Our method yielded 96% accuracy versus 92% for SVM (p = 0.005). The third dataset is from a fast event-related experiment with two categories of visual objects.
Our method achieved 77% accuracy, compared with 72% using SVM (p = 0.0006). Published by Elsevier Inc.
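The per-condition regression and HRF-matching decision described in the abstract can be sketched roughly as follows. Everything in this toy example (the simplified HRF vector, the simulated voxel patterns, the linear kernel, the ridge regularization value, and the block sizes) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 50
# Simplified stand-in for the canonical hemodynamic response function (assumed shape).
hrf = np.array([0.0, 0.3, 0.8, 1.0, 0.7, 0.4, 0.2, 0.1])

# Hypothetical spatial activation patterns for two experimental conditions.
patterns = rng.standard_normal((2, n_vox))

def simulate_event(cond, noise=0.2):
    """Volumes following one event: the condition's pattern scaled by the HRF, plus noise."""
    return np.outer(hrf, patterns[cond]) + noise * rng.standard_normal((len(hrf), n_vox))

# Training set: 20 events per condition. Each condition's regression machine is
# trained to predict that condition's temporal profile (the HRF after its own
# events, zero after the other condition's events).
X_train, targets = [], {0: [], 1: []}
for cond in (0, 1):
    for _ in range(20):
        X_train.append(simulate_event(cond))
        for c in (0, 1):
            targets[c].append(hrf if c == cond else np.zeros_like(hrf))
X_train = np.vstack(X_train)                          # (320 volumes, n_vox)
y = {c: np.concatenate(targets[c]) for c in (0, 1)}

# Kernel ridge regression, one machine per condition (linear kernel for brevity).
lam = 1.0
K = X_train @ X_train.T
alpha = {c: np.linalg.solve(K + lam * np.eye(len(K)), y[c]) for c in (0, 1)}

def classify(X_test):
    """Decision function: the condition whose predicted temporal profile
    correlates best with the canonical HRF wins."""
    scores = [np.corrcoef((X_test @ X_train.T) @ alpha[c], hrf)[0, 1] for c in (0, 1)]
    return int(np.argmax(scores))

# Temporal compaction for a linear kernel: block-wise averaging of volumes can be
# applied directly to the kernel matrix as K_avg = A @ K @ A.T, where each row of
# A averages one block's volumes; K_avg equals the kernel of the averaged data.
n_blocks, block_len = 40, len(hrf)
A = np.kron(np.eye(n_blocks), np.full((1, block_len), 1.0 / block_len))
K_avg = A @ K @ A.T
```

In practice the linear kernel and toy HRF would be replaced by whatever kernel and canonical HRF the experiment uses; note that the compaction identity K_avg = A K Aᵀ coincides with averaging the raw volumes only for the linear kernel shown here.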