
Longitudinal Analysis Using Personalised 3D Cardiac Models with Population-Based Priors: Application to Paediatric Cardiomyopathies

  • Conference paper

Abstract

Personalised 3D modelling of the heart is of increasing interest in order to better characterise pathologies and predict their evolution. The personalisation consists of estimating the parameter values of an electromechanical model so that the simulation reproduces the observed cardiac motion. However, the number of parameters in these models can be high and their estimation may not be unique. This variability can be an obstacle to further analysis of the estimated parameters and to their clinical interpretation. In this paper we present a method to perform consistent estimations of electromechanical parameters with prior probabilities on the estimated values, which we apply to a large database of 84 different heartbeats. We show that the use of priors considerably reduces the variance of the estimated parameters, enabling better conditioning of the parameters for further analysis of the cardiac function. This is demonstrated by an application to longitudinal data of paediatric cardiomyopathies, where the estimated parameters provide additional information on the pathology and its evolution.




1 Introduction

Tridimensional Personalised Cardiac Models are of increasing interest for clinical applications. They compute the myocardial motion under the influence of simulated electromechanics and haemodynamics in order to simulate a heartbeat. Their equations usually depend on a large number of parameters, so after extracting a patient's heart geometry from clinical imaging, the first step in building a personalised simulation is to estimate parameter values for which the simulation matches the measured heartbeat.

Recent works have shown that the personalised parameter values can capture intrinsic properties of the heart [1]. In particular, the simulations can help predict the possible response of the heart to changes associated with specific conditions (such as exercise or drug treatment), leading to applications in therapy planning [2]. Although these studies are very promising, progress faces three main obstacles:

  • the difficulty to find large homogeneous cardiac databases where the same information is available for all the cases

  • the non-uniqueness of the parameter values due to the sparsity of clinical measurements compared to the high number of parameters of the models

  • the computational time required to run models (from a few minutes to several days for the most complex ones). This is a burden for parameter estimation, which requires many simulations, and even more so for large databases

Here we present a cardiac modelling study overcoming these difficulties by first building a homogeneous cohort of 61 subjects: 22 controls and 39 children with various cardiomyopathies. For each case, MRIs were acquired, together with pressure and heart rate measurements, resulting in 84 sets of heart mesh geometries and haemodynamic conditions. On the modelling side, we estimated 5 parameters to reproduce the stroke volume and pressure measurements. This was performed with prior probabilities on the parameter values, in order to overcome the non-uniqueness of the parameters. Finally, the personalisation for the full cohort was completed in a relatively short time (around 2 days), thanks to a “multi-fidelity” optimization scheme which predicts changes in simulations of the 3D model with a much faster and simpler 0D model.

This led to more consistent parameter values across the 84 cases, for which we studied the relationship to the clinical condition and its evolution. In particular, using the follow-up data of patients with cardiomyopathy, we show that the evolution of the parameters suggests an improvement of the heart condition under therapy. Finally, we demonstrate that these estimated parameters can be complementary to the clinical measurements in order to better characterise the difference between healthy and cardiomyopathy cases.

2 Clinical Data

We used two different cohorts (C1 and C2) in this study; both protocols were approved by the local Research Ethics Committees. The first cohort (C1) consists of 22 volunteers who participated in a clinical study assessing the cardiovascular response to the ingestion of a high-energy (1635 kcal), high-fat (142 g) meal after fasting for 12 h, closely following the protocol in [3]. In this study, short-axis cardiac cine MRI sequences were acquired before the ingestion and at one or more time points within 1 h of the ingestion of the meal, in order to study the evolution of the blood flow in the arteries. After the meal ingestion, both the heart rate and the cardiac output increased by around 15%. However, no substantial changes were observed in the mean, diastolic and systolic pressures during digestion (compared to the intra-patient variability of the measurement).

The second cohort (C2) consists of 39 children with various cardiomyopathies, ranging from class I to IV on the Ross and NYHA classifications [4] for heart failure symptoms, from two different clinical centres. The cine MRI was acquired at enrollment and, for 4 of them, at follow-up (after a few months). The most common symptom in this cohort is a dilation of the left ventricle (Dilated CardioMyopathy, DCM) with low ejection fraction.

Fig. 1. (a) Typical mesh geometry of a heart in cohort (C1) and (b) mesh geometry of a paediatric heart with Dilated Cardiomyopathy (DCM) in cohort (C2)

We performed the cardiac modelling of a total of 41 different instants across the 22 volunteers of the first cohort, and at the 39 enrollment times and 4 available follow-up times of the second cohort. This led to a total of 84 complete sets of cine MRI, cuff pressure measurements and heart rates. See Fig. 1a and b for typical heart geometries from each cohort.

3 Personalised Cardiac Modelling

3.1 3D Electromechanical Cardiac Model

From each MRI, a high-resolution biventricular tetrahedral mesh of the patient's heart morphology (around 15 000 nodes) is generated with a method similar to the one in [5]. On this mesh, a myocardial fibre direction is defined at each node by varying the elevation angle of the fibre across the myocardial wall, from \(\alpha _1 = -80^\circ \) on the epicardium to \(\alpha _2 = 80^\circ \) on the endocardium.

The depolarization times in the myocardium were computed with the Eikonal model using default conductivity values, and the APD was computed from the heart rate with classical values of the restitution curve. Myocardial forces are computed with the Bestel-Clement-Sorine model as detailed in [6]. It models the forces as the combination of an active contraction force in the direction of the fibre, in parallel with a passive anisotropic hyperelasticity driven by the Mooney-Rivlin strain energy. In this paper, we only consider two main parameters of this model: the Maximal Contractility \(\sigma \) and the Passive Stiffness \(c_1\).

Finally, the mechanical equations are coupled with a haemodynamic model which implements the 4 phases of the cardiac cycle and describes the pressure in the cardiac chambers with global values (see [6] for implementation details). In particular, the aortic pressure is modelled with a 4-parameter Windkessel model [7], whose main parameters are the blood inertia L, the arterial compliance C, and the proximal and distal (peripheral) resistances \(Z_\text {C}\) and R. A mean venous pressure \(P_\text {ve}\) has to be set as well. In the following, \(Z_\text {C}\) and L are set to default values, while C, R and \(P_\text {ve}\) are estimated parameters.
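As a rough illustration of how a Windkessel afterload shapes the aortic pressure trace, the following sketch integrates a simplified 2-element (R, C) reduction with forward Euler. The flow profile, resistance and initial pressure are illustrative assumptions, not values from the study; only the compliance matches the prior mean used later in Sect. 3.2.

```python
import numpy as np

# 2-element (R, C) Windkessel reduction, not the paper's full 4-parameter model.
R = 1.0e8     # peripheral resistance, Pa.s/m^3 (illustrative)
C = 1.8e-8    # arterial compliance, m^3/Pa
P_ve = 0.0    # outflow reference pressure, Pa (illustrative)

dt = 1e-3                          # time step, s
t = np.arange(0.0, 0.8, dt)        # one cardiac cycle of 0.8 s
systole = t < 0.3                  # ejection lasts 0.3 s
# Half-sine ejection flow, peak 300 ml/s (illustrative), in m^3/s
Q = np.where(systole, 3e-4 * np.sin(np.pi * t / 0.3), 0.0)

# Forward-Euler integration of C dP/dt = Q - (P - P_ve) / R
P = np.empty_like(t)
P[0] = 10_000.0                    # initial aortic pressure (~75 mmHg), Pa
for i in range(1, len(t)):
    dP = (Q[i - 1] - (P[i - 1] - P_ve) / R) / C
    P[i] = P[i - 1] + dt * dP
```

Pressure rises during ejection and then decays with time constant RC during diastole, which is the qualitative behaviour that the estimated C and R control in the simulations.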

3.2 Parameter Estimation with Priors

A typical parameter estimation problem is composed of simulated quantities called the “outputs” O (such as the simulated Stroke Volume and Mean Pressure), and a set of model parameters P. The estimation consists in finding adequate values x of the parameters such that the output values O(x) of the 3D model simulation fit the “observed values” \(\widehat{O}\) from the clinical measurements of interest.

This is done by minimizing a cost function (or score) \(S(x,\widehat{O})\) between the simulated values O(x) and the target values \(\widehat{O}\):

$$\begin{aligned} S(x,\widehat{O})=||O(x)-\widehat{O}||^2+\lambda R(x)\end{aligned}$$

where R(x) is a penalty (or regularisation) term, weighted by \(\lambda \), that can be formulated as a quadratic form with mean value \(\mu _R\) and covariance matrix \(\varDelta \):

$$\begin{aligned} R(x)= (x-\mu _R)^T \varDelta ^{-1} (x-\mu _R) \end{aligned}$$

In Bayesian Inference, this is equivalent to finding themaximum a posteriori with a Gaussian prior and a Gaussian likelihood.

Finally, since the parameters are positive, it is more meaningful to assume that the logarithms of the parameter values follow a Gaussian distribution rather than the parameters themselves, so the optimisation is performed over the logarithms of the parameter values.
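As an illustration, this regularised estimation over log-parameters can be sketched as follows. The linear stand-in "model", the target outputs and the weight \(\lambda \) are invented for the example; the prior log-means and variances are the values for \(c_1\) and C reported later in this section.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the 3D electromechanical model: a toy linear map
# from the log-parameters to the outputs (SV, DP, MP). The real model is a
# full simulation; this only illustrates the shape of the optimisation.
def simulate_outputs(log_x):
    A = np.array([[1.0, -0.5], [0.3, 0.8], [0.5, 0.4]])
    return A @ log_x

observed = np.array([22.0, -10.0, -0.5])  # illustrative target outputs
mu_R = np.array([13.12, -17.83])          # prior log-means for (c1, C)
delta_inv = np.diag([1 / 0.5, 1 / 1.0])   # inverse diagonal prior covariance
lam = 0.1                                 # regularisation weight lambda

# S(x) = ||O(x) - O_hat||^2 + lambda * (x - mu)^T Delta^{-1} (x - mu),
# optimised over the logarithms of the parameters.
def score(log_x):
    residual = simulate_outputs(log_x) - observed
    prior = (log_x - mu_R) @ delta_inv @ (log_x - mu_R)
    return residual @ residual + lam * prior

result = minimize(score, x0=mu_R, method="Nelder-Mead")
estimated = np.exp(result.x)  # back to the positive parameter scale
```

Starting the optimisation at the prior mean, as here, is a natural choice since the prior pulls the estimate back towards it anyway.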

In this scope, we focus on the set of 5 parameters P: \(c_1\), \(\sigma \), R, C and \(P_\mathbf{ve }\), in order to fit 3 target outputs to their clinical measurements: the Stroke Volume SV, the aortic Diastolic Pressure DP and the Mean Pressure MP.

We then performed two different personalisations. The first (P1) used no priors on the parameter values during the optimization (\(\lambda =0\)). This allowed us to make a first assessment of the variability of the parameters and of the values which lead to the best simulations. It was followed by a second personalisation (P2), with priors on the values of both \(c_1\) and C, equipped with a diagonal covariance matrix:

$$\begin{aligned} \mu _R=\begin{bmatrix} \mu _{c_1} \\ \mu _{C} \end{bmatrix}, \quad \varDelta =\begin{bmatrix} \delta _{c_1}&0\\0&\delta _{C} \end{bmatrix} \end{aligned}$$
(1)

We observed in P1 that simulations with \(c_1\) around \(500\,000\) Pa lead to good behaviour compared to the dynamics observed in echocardiographic images. For the arterial compliance C, we used a prior based on the mean value of \(1.8\times 10^{-8}\) (in SI units) as reported in [8].

Therefore \(\mu _{c_1}=\ln (500\,000)=13.12\), \(\delta _{c_1}=0.5\), and \(\mu _{C}=\ln (1.8\times 10^{-8})=-17.83\), \(\delta _{C}=1\).
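These log-mean values are easy to check directly:

```python
import numpy as np

mu_c1 = np.log(500_000)  # prior mean of the passive stiffness c1, in Pa
mu_C = np.log(1.8e-8)    # prior mean of the arterial compliance C, SI units

print(round(mu_c1, 2))   # 13.12
print(round(mu_C, 2))    # -17.83
```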

3.3 Efficient Multi-fidelity Optimization

The optimization in each personalisation problem was performed with a “multi-fidelity” approach [9] based on our recent work in cardiac model personalisation [10], in which the outputs of the 3D model are approximated during the optimization by simulations from a very fast low-fidelity “0D model”, made of only around twenty equations.

As explained in [10], a small number of 3D “sigma-simulations” can indeed be used to approximate many 3D simulations in a large parameter space. This is done by first finding similar 0D simulations with the same outputs of interest (such as the pressure and stroke volume), then building a mapping between the parameters of the corresponding 3D and 0D simulations.

This multi-fidelity method was adapted here to the two personalisation problems, in which the 0D and the 3D models share the same haemodynamic variables R, C and \(P_\mathbf{ve }\) from the Windkessel model. In this specific case, the variations of pressure and stroke volume with respect to these variables are very similar in both models, and we found better approximation results by computing a mapping between only \(c_1\) and \(\sigma \) (as opposed to all the personalised parameters in our original approach) and the 0D model parameters.

As a result, our multi-fidelity optimization method performs the optimization of the 3D parameters based on 0D model approximations which are very fast to compute. It only requires the computation of successive sets of 5 simulations of the 3D model (the sigma-simulations). In particular, the simultaneous personalisation (P1) of the 84 hearts was completed in around 36 h, and (P2) in around 48 h.
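The multi-fidelity idea can be illustrated on a toy problem. Both "models" below are invented one-parameter functions, not the actual cardiac models: a few expensive sigma-simulations are used to fit an affine correction of the cheap model's output, and the parameter search then runs entirely against the corrected surrogate.

```python
import numpy as np

def model_3d(x):
    """Expensive high-fidelity model (toy stand-in)."""
    return x**2 + 0.3 * x

def model_0d(x):
    """Fast low-fidelity model (toy stand-in, systematically biased)."""
    return x**2

# Fit an affine map from 0D to 3D outputs on 5 "sigma-simulations",
# the only points where the expensive model is actually evaluated.
sigma_points = np.linspace(0.5, 2.0, 5)
a, b = np.polyfit(model_0d(sigma_points), model_3d(sigma_points), 1)
surrogate = lambda x: a * model_0d(x) + b

# Optimise against the cheap corrected surrogate over a dense grid.
target = 1.5                                  # measured output to match
grid = np.linspace(0.5, 2.0, 1000)
best = grid[np.argmin((surrogate(grid) - target) ** 2)]
```

The parameter chosen on the surrogate lands close to the true high-fidelity optimum while the expensive model is only ever evaluated at the 5 sigma-points, which is the source of the speed-up.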

4 Results

In Table 1 we report the mean of the estimated values (Mean) and of the logarithms of the estimated values (Log-Mean), as well as the standard deviation of the logarithms of the estimated values (Log-Std).

Table 1. Statistics of the estimated parameters in the estimations (P1) and (P2)

As expected, the standard deviation of all the parameters in the population is reduced between (P1) and (P2). Interestingly, the goodness of fit was not impacted by the use of prior probabilities on \(c_1\) and C in (P2). Most cases are fitted to within 1.2 ml for the stroke volume and 0.5 mmHg for the pressure measurements, with a few outliers. This means that the prior could be made stronger in order to further reduce the variability while maintaining simulations which match the clinical measurements.

4.1 Application to Longitudinal Analysis of the Cardiac Function

From a clinical point of view, an interesting application of the modelling is to characterise the state of the heart function beyond the information given by the clinical measurements and the imaging. The underlying idea is that some of the estimated parameter values can capture properties of the heart which cannot be directly measured from the imaging (such as the myocardial contractility). This additional information on the heart could contribute to the diagnosis, by comparing the estimated parameters with the parameters of other known cases.

To analyse the relationship between the parameter values and the clinical condition, we performed a linear discriminant analysis (LDA) over the parameter values and the heart rate, in order to classify between the two cohorts. This leads to the computation of a vector w and a scalar b such that, given a vector X of parameters x, the predicted cohort is C2 if \(w^\text {T}X+b > 0\), and C1 otherwise.

The vector w corresponds to the most discriminative direction in the population between healthy and cardiomyopathy cases. In this context, this axis could be a candidate to characterise whether the cardiac function at a given time is closer to that of a healthy heart or of a heart with cardiomyopathy, based on the parameter values observed in the two cohorts.
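A minimal sketch of such an LDA classifier with scikit-learn. The synthetic features below stand in for the estimated parameters and the heart rate; the class means are invented, and only the cohort sizes (41 + 43 cases) follow the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-ins: 41 healthy (C1) and 43 cardiomyopathy (C2) cases,
# each described by 6 features (5 parameters + heart rate).
X = np.vstack([rng.normal(0.0, 1.0, (41, 6)),
               rng.normal(1.0, 1.0, (43, 6))])
y = np.array([0] * 41 + [1] * 43)   # 0 = C1, 1 = C2

lda = LinearDiscriminantAnalysis().fit(X, y)
w, b = lda.coef_[0], lda.intercept_[0]

# Decision rule from the text: predict C2 when w^T X + b > 0.
pred = (X @ w + b > 0).astype(int)
accuracy = (pred == y).mean()
```

The projection \(X w\) gives the coordinate along the most discriminative axis, which is the quantity plotted on the x-axis of Fig. 2.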

For example, we display in Fig. 2 the projection of the parameters on this vector (x-axis) and on a direction orthogonal to w (y-axis). Most healthy cases (dark blue dots) are on the left side of the black line (\(w^\text {T}X+b<0\)) and most cardiomyopathy cases (red dots) are on the right (\(w^\text {T}X+b>0\)).

Fig. 2. Projection of the parameters on the main direction w of an LDA classifier between the healthy cases (dark blue dots) and cardiomyopathy cases (other dots) (x-axis), and on a principal direction orthogonal to this vector (y-axis). The dots in light blue, brown, orange and green correspond to 4 patients for which the data was available both at baseline (small dot) and at follow-up (larger dot).

Interestingly, this could also help to quantify the evolution of the patient's heart condition under the influence of the pathology and the therapy. Indeed, for all the cardiomyopathy cases for which we have follow-up data, we notice a decrease in the coordinate along the horizontal axis (see the pairs of brown, light blue, green and orange dots; the larger dot is the follow-up). This could be interpreted as an improvement of the cardiac function with the therapy, which is at least becoming closer to the condition of a healthy heart. One of the cases (in light blue) is on the “healthy” side of the classification at follow-up.

Finally, the predictive power of such a classifier can be assessed through leave-one-out cross-validation. This is done by training the classifier on all the cases but one, and predicting the diagnosis of the remaining case. If we perform the LDA over the 5 estimated parameters and the heart rate, the number of prediction errors is 11. The same classifier trained on the 3 outputs (stroke volume and pressures) and the heart rate makes 9 prediction errors. However, if we train the classifier with both the 5 estimated parameters and the clinical measurements and the heart rate, it only makes 6 classification errors.
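The leave-one-out procedure can be sketched with scikit-learn; the synthetic features here merely stand in for the 5 estimated parameters, the 3 measurements and the heart rate, so the error count will not match the paper's.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
# 9 synthetic features per case, standing in for the 5 estimated parameters,
# the 3 clinical measurements and the heart rate (class means are invented).
X = np.vstack([rng.normal(0.0, 1.0, (41, 9)),
               rng.normal(0.8, 1.0, (43, 9))])
y = np.array([0] * 41 + [1] * 43)

# Each of the 84 cases is predicted by a model trained on the 83 others.
pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
errors = int((pred != y).sum())
```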

In this context, this suggests that the estimated parameters were able to capture richer information on the cardiac function than the clinical measurements of volume and pressure alone, both through the 3D personalisation and through the comparison with the other values estimated in the population. This information was then used by the simple linear classifier to improve its accuracy in the diagnosis of a patient.

5 Conclusion and Discussion

In this manuscript, we presented a cardiac modelling study based on the estimation of 5 model parameters from 3 clinical measurements of stroke volume and pressures, on a large cohort of 61 subjects. We used recent developments in “multi-fidelity” personalisation to drive a very fast and computationally efficient estimation of these parameters with priors. The personalisations without and with priors were each performed simultaneously for all the patients on our cluster, and converged in less than 36 and 48 h respectively. We showed that the use of priors during optimization reduces the variability of the estimated values in the population, leading to more consistency for further applications.

We then analysed the estimated parameter values with respect to the clinical conditions of the patients. A linear discriminant analysis (LDA) was used to characterise the cardiac function of the cases along the most discriminative axis between the two cohorts. For cardiomyopathy patients, we showed that the evolution in time along this axis suggests that their cardiac function is improving under therapy. Finally, we also demonstrated how the estimated parameter values could be complementary to clinical measurements in the context of diagnosis.

A direct extension of this study is to estimate values for more model parameters, from a larger set of measurements such as the flow or the myocardial strain. From the cardiomyopathy point of view, this could help to further discriminate between the various types of cardiomyopathy, with applications in the risk stratification of heart failure. Another interesting direction is the further analysis of longitudinal data in order to better understand both the short-term and long-term variability in cardiac function, with applications in the prediction of disease evolution and therapy planning.

References

  1. Chabiniok, R., et al.: Estimation of tissue contractility from cardiac cine-MRI using a biomechanical heart model. Biomech. Model. Mechanobiol. 11(5), 609–630 (2012)

  2. Crozier, W.A.: Personalised electromechanical modelling of cardiac resynchronisation therapy. Ph.D. thesis, King's College London (2015)

  3. Hauser, J.A., et al.: Comprehensive assessment of the global and regional vascular responses to food ingestion in humans using novel rapid MRI. Am. J. Physiol. Regul. Integr. Comp. Physiol. 310(6), R541–R545 (2016)

  4. Hsu, D.T., et al.: Heart failure in children. Circ. Heart Fail. 2(1), 63–70 (2009)

  5. Molléro, R., et al.: Propagation of myocardial fibre architecture uncertainty on electromechanical model parameter estimation: a case study. In: van Assen, H., Bovendeerd, P., Delhaas, T. (eds.) FIMH 2015. LNCS, vol. 9126, pp. 448–456. Springer, Cham (2015). doi:10.1007/978-3-319-20309-6_51

  6. Marchesseau, S., et al.: Fast parameter calibration of a cardiac electromechanical model from medical images based on the unscented transform. Biomech. Model. Mechanobiol. 12(4), 815–831 (2013)

  7. Westerhof, N., et al.: The arterial windkessel. Med. Biol. Eng. Comput. 47(2), 131–141 (2009)

  8. Laskey, W.K., et al.: Estimation of total systemic arterial compliance in humans. J. Appl. Physiol. 69(1), 112–119 (1990)

  9. Peherstorfer, B., et al.: Survey of multifidelity methods in uncertainty propagation, inference, and optimization. ACDL Technical report TR-16-1 (2016)

  10. Mollero, R., Pennec, X., Delingette, H., Ayache, N., Sermesant, M.: A multiscale cardiac model for fast personalisation and exploitation. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9902, pp. 174–182. Springer, Cham (2016). doi:10.1007/978-3-319-46726-9_21


Acknowledgements

This work has been partially funded by the EU FP7-funded project MD-Paedigree (Grant Agreement 600932) and contributes to the objectives of the ERC advanced grant MedYMA (2011-291080).

Author information

Authors and Affiliations

  1. Université Côte d’Azur, Inria, Asclepios Team, Sophia Antipolis, France

    Roch Mollero, Hervé Delingette, Nicholas Ayache, Xavier Pennec & Maxime Sermesant

  2. UCL Centre for Cardiovascular Imaging, London, UK

    Jakob A. Hauser, Dilveer Panesar & Andrew Taylor

  3. Imaging and Computer Vision, Siemens Corporate Technology, Erlangen, Germany

    Manasi Datar & Tobias Heimann

  4. Deutsches Herzzentrum Berlin, Berlin, Germany

    Marcus Kelm & Titus Kuehne

  5. DMCCP Cardiology, Ospedale Pediatrico Bambino Gesù, Rome, Italy

    Marcello Chinali & Gabriele Rinelli

  6. Department of Paediatrics, University of Oxford, Oxford, UK

    Alexander Jones


Corresponding author

Correspondence to Roch Mollero.

Editor information

Editors and Affiliations

  1. Université de Sherbrooke, Sherbrooke, QC, Canada

    Maxime Descoteaux

  2. DKFZ, Heidelberg, Germany

    Lena Maier-Hein

  3. Ulm University of Applied Sciences, Ulm, Germany

    Alfred Franz

  4. Université de Rennes 1, Rennes, France

    Pierre Jannin

  5. McGill University, Montreal, QC, Canada

    D. Louis Collins

  6. Université Laval, Québec, QC, Canada

    Simon Duchesne


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Mollero, R. et al. (2017). Longitudinal Analysis Using Personalised 3D Cardiac Models with Population-Based Priors: Application to Paediatric Cardiomyopathies. In: Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D., Duchesne, S. (eds) Medical Image Computing and Computer-Assisted Intervention − MICCAI 2017. Lecture Notes in Computer Science, vol. 10434. Springer, Cham. https://doi.org/10.1007/978-3-319-66185-8_40
