Abstract
Cardiac motion atlases provide a reference space in which the motions of a cohort of subjects can be directly compared. Motion atlases can be used to learn descriptors that are linked to different pathologies and that can subsequently be used for diagnosis. To date, all such atlases have been formed and applied using data from a single modality. In this work we propose a framework to build a multimodal cardiac motion atlas from 3D magnetic resonance (MR) and 3D ultrasound (US) data. Such an atlas benefits from the complementary motion features derived from the two modalities and, furthermore, could be applied in the clinic to detect cardiovascular disease using US data alone. The processing pipeline for forming the multimodal motion atlas begins with spatial and temporal normalisation of each subject's cardiac geometry and motion, following a pipeline similar to that proposed for single-modality atlas formation. The main novelty of this paper lies in the use of a multi-view algorithm to simultaneously reduce the dimensionality of the MR- and US-derived motion data, in order to find a common space between the two modalities in which to model their variability. Three dimensionality reduction algorithms were investigated: principal component analysis (PCA), canonical correlation analysis (CCA) and partial least squares regression (PLS). Leave-one-out cross-validation on a multimodal data set of 50 volunteers was used to quantify the accuracy of the three algorithms. PLS achieved the lowest errors, with a reconstruction error below 2.3 mm for MR-derived motion data and below 2.5 mm for US-derived motion data. In addition, 1000 subjects from the UK Biobank database were used to build a large-scale monomodal data set for a systematic validation of the proposed algorithms. Our results demonstrate the feasibility of using US data alone to analyse cardiac function based on a multimodal motion atlas.
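As an illustration of the multi-view dimensionality reduction and leave-one-out evaluation described above, the sketch below fits a two-view PLS model to synthetic stand-ins for the normalised MR- and US-derived motion matrices and scores held-out reconstruction error per view. The array shapes, the choice of scikit-learn's PLSCanonical, and the Frobenius-norm error metric are illustrative assumptions, not the paper's actual feature extraction or evaluation protocol.

```python
# Minimal sketch of multi-view dimensionality reduction on two motion views,
# assuming subjects' motion fields have already been spatially and temporally
# normalised and flattened into one feature vector per subject. Feature sizes
# and the synthetic data below are hypothetical.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical  # CCA is a drop-in swap
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_subjects, n_mr_feats, n_us_feats, n_modes = 50, 300, 300, 10

# Synthetic stand-ins for the MR- and US-derived motion matrices:
# one row per subject, one column per motion descriptor component.
X_mr = rng.standard_normal((n_subjects, n_mr_feats))
X_us = 0.5 * X_mr[:, :n_us_feats] + 0.1 * rng.standard_normal((n_subjects, n_us_feats))

errors = []
for train_idx, test_idx in LeaveOneOut().split(X_mr):
    # Learn a shared low-dimensional space from the two training views.
    pls = PLSCanonical(n_components=n_modes, scale=False)
    pls.fit(X_mr[train_idx], X_us[train_idx])

    # Project the held-out subject into the common space, reconstruct each
    # view from its scores, and measure the residual in centred coordinates.
    t_mr, t_us = pls.transform(X_mr[test_idx], X_us[test_idx])
    mr_hat = t_mr @ pls.x_loadings_.T
    us_hat = t_us @ pls.y_loadings_.T
    err_mr = np.linalg.norm((X_mr[test_idx] - X_mr[train_idx].mean(0)) - mr_hat)
    err_us = np.linalg.norm((X_us[test_idx] - X_us[train_idx].mean(0)) - us_hat)
    errors.append((err_mr, err_us))

print("mean LOO reconstruction error (MR, US):", np.mean(errors, axis=0))
```

Projecting both views through a single set of latent scores is what allows a new subject with US data alone to be placed in the common space: the US scores from `transform` can stand in for the joint representation learned from both modalities.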