
Modeling of artificial intelligence-based respiratory motion prediction in MRI-guided radiotherapy: a review

Abstract

The advancement of precision radiotherapy techniques, such as volumetric modulated arc therapy (VMAT), stereotactic body radiotherapy (SBRT), and particle therapy, highlights the importance of radiotherapy in the treatment of cancer, while also posing challenges for respiratory motion management in thoracic and abdominal tumors. MRI-guided radiotherapy (MRIgRT) stands out as a state-of-the-art real-time respiratory motion management approach owing to the non-ionizing nature and superior soft-tissue contrast of MR imaging. In clinical practice, MR imaging often operates at a frequency of 4 Hz, resulting in a system latency of approximately 300 ms for MRIgRT. This system latency decreases the accuracy of respiratory motion management in MRIgRT. Artificial intelligence (AI)-based respiratory motion prediction has recently emerged as a promising solution to this latency issue, particularly for advanced contour prediction and volumetric prediction. However, implementing AI-based respiratory motion prediction faces several challenges, including the collection of training datasets, the selection of prediction methods, and the formulation of complex contour and volumetric prediction problems. This review presents modeling approaches for AI-based respiratory motion prediction in MRIgRT and provides recommendations for achieving consistent and generalizable results in this field.

Background

The advancement of precision radiotherapy techniques, such as volumetric modulated arc therapy (VMAT), stereotactic body radiotherapy (SBRT), and particle therapy, allows for the delivery of highly conformal doses to targets. However, the delivery of highly conformal doses to targets in the abdomen and thorax is challenging due to the respiratory motion affecting the treatment [1, 2]. To guide the implementation of respiratory motion management, the American Association of Physicists in Medicine (AAPM) has released several relevant guidelines, including the TG-76 report [3] for photon therapy, the TG-101 report [4] for SBRT, and the TG-290 report [5] for particle therapy. These guidelines provide comprehensive summaries of intra-fractional respiratory motion management approaches, such as abdominal compression [6], breath hold [7], respiratory beam gating [8], and tumor tracking, which comprises robotic, gimbaled, and multi-leaf-collimator (MLC)-based tracking [9, 10]. Among these respiratory motion management approaches, real-time tumor tracking has attracted the most widespread attention for its precision and improved efficiency compared to, for instance, gating [11].

MRI-guided radiotherapy (MRIgRT), integrating MR imaging with a medical linear accelerator, stands out as a state-of-the-art real-time motion management approach owing to the non-ionizing nature and superior soft-tissue contrast of MR imaging [12, 13]. While clinical MRIgRT currently relies on gating [14, 15], several research studies have focused on the implementation of MLC-tracking [16,17,18]. However, real-time tumor tracking in MRIgRT is associated with a system latency of approximately 300–350 ms [17, 19], compared to reported latencies of 115 ms for a robotic linac system [20] and 48 ms for a gimbaled linac system [21]. Therefore, to avoid a decrease in dosimetric accuracy, it is important to implement respiratory motion prediction to compensate for the system latency in MRIgRT. Furthermore, the superior soft-tissue contrast provided by MR imaging facilitates target localization, thereby allowing respiratory motion prediction to be extended from rigid shifts to two-dimensional (2D) contours or three-dimensional (3D) volumes [19]. Notably, the implementation of contour and volumetric prediction mainly relies on artificial intelligence (AI)-based methods, which highlights the potential of AI-based respiratory motion prediction for MRIgRT [22].

However, previous studies of AI-based respiratory motion prediction for MRIgRT exhibited significant inconsistencies in the collection of training datasets, the selection of prediction methods, and the formulation of complex contour and volumetric prediction problems. For example, the difficulty of data collection limits the size of patient samples, resulting in a lack of diversity in the respiratory motion patterns within the training datasets, which detrimentally affects the generalizability of prediction methods [23]. Theoretically, compared to linear predictors, complex recurrent neural networks (RNNs) require larger training datasets to capture the temporal dependencies they contain [24]. In other words, the performance of prediction methods is associated with the characteristics of the training datasets, which might be a potential explanation for the inconsistent results observed across different studies. This indicates the importance of data homogenization in AI-based respiratory motion prediction for MRIgRT. Therefore, this paper aims to provide a comprehensive review of the modeling approaches of AI-based respiratory motion prediction for MRIgRT, and to discuss potential solutions for achieving consistent results in this domain.

Literature search

Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [25], this study was conducted as a systematic review focusing on AI-based respiratory motion prediction (Fig. 1). The search terms used in the PubMed database were (magnetic resonance imaging) AND (motion tracking) AND ((radiotherapy) OR (radiation oncology)), resulting in a total of 203 articles retrieved through June 2024. Relevant studies were selected by screening the titles and abstracts of these articles according to the following criteria: (1) designed for MRIgRT; (2) utilization of AI algorithms; and (3) involvement of respiratory motion prediction. Subsequently, we searched the references and citations of these relevant studies in the Google Scholar search engine, ultimately identifying a total of 12 studies within the scope of AI-based respiratory motion prediction.

Fig. 1

Flowchart of article identification, screening, and inclusion criteria for studies on AI-based respiratory motion prediction in MRIgRT

Problem definition

The aim of respiratory motion prediction is to obtain future information on respiratory motion from current data, thereby compensating for the system latency in real-time. Formally, respiratory motion prediction tasks can be formulated as

$${\hat y_{t + \Delta t}} = f({x_t})$$
(1)

where \(f\) is the prediction method, \(x_t\) is the current respiratory motion data, \({\hat y_{t + \Delta t}}\) is the predicted future motion, and \(\Delta t\) is the prediction window of the motion prediction task, typically corresponding to the system latency of the real-time beam adaptation system.

In MRIgRT, respiratory motion prediction can be classified into shift (i.e. centroid position) prediction, contour prediction, and volumetric prediction. The aims of shift prediction, contour prediction, and volumetric prediction are to obtain the future centroid position, 2D contour, and 3D volume of the tumor, respectively. For contour and volumetric prediction, the model can output either deformation vector fields (DVFs) or the contours/volumes directly. Notably, the ultimate goal of MRIgRT is to achieve real-time 3D motion management, requiring 3D motion prediction [19].

Data characteristics

Input and output data

The input data \(x_t\) for AI-based respiratory motion prediction in MRIgRT is the current tumor centroid shift/2D contour extracted from 2D cine MR images [26, 27], or the 2D cine MR images themselves. The AAPM TG-264 report [10] recommends a minimum imaging frequency of 3 Hz for real-time tumor tracking systems to uphold end-to-end system latencies below 500 ms. In clinical practice, 2D cine MR imaging often operates at a frequency of 4–8 Hz, typically in a sagittal plane [28] or across interleaved orthogonal planes [29]. For the purposes of training and prediction, the input data \(X=\left\{x_1, x_2, \dots, x_t, \dots, x_T\right\}\) is reconstructed into subsequences of length \(w\) at each timestep; for example, the subsequence at timestep \(t\) is \(S=\left\{x_{t-w+1}, x_{t-w+2}, \dots, x_t\right\}\). Notably, the subsequence length \(w\) often serves as a hyper-parameter during model optimization.
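As a concrete illustration, the sliding-window reconstruction described above can be sketched in a few lines (an illustrative example with a toy sinusoidal trace and an arbitrary window length, not code from any cited study):

```python
import numpy as np

def make_subsequences(x, w):
    """Slice a 1D motion trace x (length T) into overlapping
    subsequences of length w, one ending at each timestep."""
    return np.stack([x[t - w:t] for t in range(w, len(x) + 1)])

# toy 1D centroid trace sampled at 4 Hz with a 4 s breathing period
trace = np.sin(2 * np.pi * 0.25 * np.arange(40) / 4.0)
S = make_subsequences(trace, w=8)
print(S.shape)  # (33, 8)
```

Each row of `S` is one model input; the corresponding training target is the sample \(\Delta t\) ahead of that window's last element.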

The output data \({\hat y_{t + \Delta t}}\) for respiratory motion prediction in MRIgRT is the future centroid position (from which the shift relative to the current centroid can be derived) or the future 2D tumor contour/cine MR image. The prediction window \(\Delta t\) is chosen to match the system latency of the real-time tumor tracking system. As shown in Table 1, reported end-to-end system latencies for MRIgRT were on average about 300 ms for 4 Hz imaging [16,17,18, 30, 31] and 200 ms for 8 Hz imaging [16, 17, 32]. Furthermore, Liu et al. [33] developed a real-time method for correcting geometric distortions in MR images, and reported an end-to-end system latency of 319 ± 12 ms without distortion correction and 335 ± 34 ms with distortion correction. Therefore, the prediction window \(\Delta t\) for AI-based respiratory motion prediction in MRIgRT is often set to 250 ms or 500 ms for a typical 4 Hz imaging frequency, and can be adapted to the specific latency of the system via linear interpolation [28, 34].
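The latency adaptation mentioned above can be illustrated with a minimal sketch: given predictions at two native horizons (assumed here to be 250 ms and 500 ms, matching one and two frames at 4 Hz), a prediction for an intermediate measured latency is obtained by linear interpolation. The function name and values are hypothetical:

```python
def adapt_horizon(y_250, y_500, latency_ms):
    """Linearly interpolate between predictions made at the 250 ms and
    500 ms horizons to match a measured system latency (sketch; assumes
    the latency lies between the two native horizons)."""
    alpha = (latency_ms - 250.0) / (500.0 - 250.0)
    return (1 - alpha) * y_250 + alpha * y_500

# e.g. predicted SI positions (mm) at the two native horizons,
# adapted to a measured 335 ms latency
print(round(adapt_horizon(2.0, 4.0, 335.0), 2))  # 2.68
```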

Table 1 The reported tracking or beam gating end-to-end system latency for different MRIgRT systems. (*) asterisk denotes research prototypes

Diversity in respiratory motion patterns

The generalizability of AI-based systems heavily relies on the diversity of their training datasets [35]. However, this diversity is typically achieved by collecting a large amount of data, which is challenging in MRIgRT as most radiotherapy patients receive treatment on conventional linacs. Therefore, a quantitative description of respiratory motion patterns is needed in AI-based respiratory motion prediction for MRIgRT. For shift prediction, variability in the training data can be inferred by calculating the mean amplitude, period, and speed of the respiratory motions [36,37,38,39,40,41]. For contour prediction, in addition to these values, data from multiple tumor sites, such as lung, pancreas, heart, liver, and mediastinum, should be included [28], reflecting the fact that respiratory motion varies among tumor sites and can lead to different deformations/rotations of the irradiation target [42].

The regularity of respiratory motion also has a significant impact on the performance of AI-based respiratory motion prediction [43]. For traditional real-time tumor tracking systems, Ernst et al. [44] analyzed 304 respiratory motion traces, extracted a total of 21 features per trace to represent the regularity of motion, and confirmed correlations between these features and the prediction performance of 6 prediction methods. In MRIgRT, the performance of AI-based respiratory motion prediction during irregular motion has been investigated through case analyses with limited patient sample sizes [45].

Evaluation metrics

For shift prediction, the evaluation metrics mainly comprise mean absolute error (MAE) and root mean square error (RMSE). For contour prediction, the evaluation metrics include the Dice Similarity Coefficient (DSC) [46] and the Hausdorff distance [47]. For volumetric prediction, the reported evaluation metric is target registration error (TRE) [48].

The MAE quantifies the mean absolute difference between predicted and ground truth values:

$$MAE = \frac{1}{N}\sum\limits_{i = 1}^N {\left| {{y_i} - {{\hat y}_i}} \right|}$$
(2)

where \(\hat y_i\) and \(y_i\) are the predicted and ground truth centroid data, respectively, and \(N\) is the total number of points in the motion trace.

The RMSE is a metric used to quantify the difference between predicted and ground truth values:

$$RMSE = \sqrt {\frac{1}{N}\sum\limits_{i = 1}^N {({y_i} - {{\hat y}_i})^2} }$$
(3)
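Both shift-prediction metrics, Eqs. (2) and (3), translate directly into code; a minimal sketch with toy predicted and ground truth values:

```python
import numpy as np

def mae(y, y_hat):
    # Eq. (2): mean absolute error over the motion trace
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    # Eq. (3): root mean square error over the motion trace
    return np.sqrt(np.mean((y - y_hat) ** 2))

y = np.array([1.0, 2.0, 3.0])       # ground truth positions (mm)
y_hat = np.array([1.5, 2.0, 2.0])   # predicted positions (mm)
print(mae(y, y_hat))                # 0.5
print(round(rmse(y, y_hat), 4))     # 0.6455
```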

The DSC is a metric used to measure the spatial overlap between the predicted contour and ground truth contour of tumors:

$$DSC(A,B) = \frac{{2\left| {A \cap B} \right|}}{{\left| A \right| + \left| B \right|}}$$
(4)

where A and B refer to the predicted contour and ground truth contour of tumors, respectively. The DSC ranges from 0 to 1, with 1 indicating perfect overlap and 0 indicating no overlap between the two sets.
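In practice the DSC is typically evaluated on rasterized binary masks, with pixel counts playing the role of the set cardinalities in Eq. (4); a minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """DSC between two binary masks (Eq. 4, pixel counts as |.|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True    # 36 px
b = np.zeros((10, 10), bool); b[4:10, 4:10] = True  # 36 px, 16 px overlap
print(round(dice(a, b), 3))  # 2*16/72 -> 0.444
```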

The Hausdorff distance measures the similarity between the points in the predicted contour and ground truth contour:

$$H(A,B) = \max (h(A,B),h(B,A))$$
(5)
$$h(A,B) = \mathop {\max }\limits_{a \in A} \,\mathop {\min }\limits_{b \in B} \left\| {a - b} \right\|$$
(6)

where \(A=\{a_1,\dots,a_p\}\), \(B=\{b_1,\dots,b_q\}\), and \(\left\| \cdot \right\|\) is some underlying norm on the points of \(A\) and \(B\) (e.g., the L2 or Euclidean norm) [47]. The Hausdorff distance ranges from 0 to positive infinity, with 0 indicating perfect overlap between the two sets of points.
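For the modest point counts of a single tumor contour, the directed distances of Eqs. (5) and (6) can be computed by brute force over all point pairs; a minimal sketch using the Euclidean norm and toy 2D points:

```python
import numpy as np

def directed_hausdorff(A, B):
    # h(A,B) in Eq. (6): for each point of A, the distance to its
    # nearest point in B, then the worst case over A
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff(A, B):
    # symmetric Hausdorff distance, Eq. (5)
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(A, B))  # 2.0
```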

The TRE is the Euclidean distance between corresponding points in the predicted DVFs and ground truth (the motion estimated by a pre-trained registration network):

$$TRE = \frac{1}{n}\sum\limits_{i = 1}^n {\left\| {{p_i} - {g_i}} \right\|}$$
(7)

where \(P=\{p_1, p_2, \dots, p_n\}\) and \(GT=\{g_1, g_2, \dots, g_n\}\) represent corresponding points in the predicted DVFs and the ground truth, respectively.
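Given corresponding point arrays, Eq. (7) reduces to a one-liner; a sketch with toy 3D points:

```python
import numpy as np

def tre(P, GT):
    """Eq. (7): mean Euclidean distance between corresponding points
    in the predicted and ground truth deformation fields."""
    return np.mean(np.linalg.norm(P - GT, axis=-1))

P  = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])  # predicted (mm)
GT = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])  # ground truth (mm)
print(tre(P, GT))  # 0.5
```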

Prediction methods

Shift prediction

The shift prediction methods for MRIgRT are similar to those used in traditional real-time tumor tracking systems. As shown in Table 2, Yun et al. [49] utilized artificial neural networks (ANN) for 1D superior-inferior (SI) shift prediction among 29 lung cancer patients, and obtained mean RMSEs ranging from 0.5 to 0.9 mm across system latencies of 120 to 520 ms. Seregni et al. [29] compared linear extrapolation, autoregressive linear prediction (AR), and support vector machines (SVM) for 3D shift prediction in 6 lung cancer patients, and found that the linear prediction methods outperformed the non-linear one. Bourque et al. [50, 51] proposed a particle filter combined with an autoregressive model for 3D shift prediction among 5 healthy volunteers and 8 cancer patients, and verified that their prediction method was highly accurate and robust against varying imaging quality. Lombardo et al. [28, 34] compared ridge regression and long short-term memory (LSTM) networks for 1D SI shift prediction in-silico, and experimentally confirmed the superiority of the LSTM. Their results also demonstrated that continuous online re-optimization can enhance the performance of prediction methods. Conversely, Li et al. [24] recently reported that linear prediction methods outperformed recurrent neural networks (RNNs), including LSTMs, bidirectional LSTMs (Bi-LSTMs), and gated recurrent unit networks (GRUs), among 21 liver cancer patients and 10 lung cancer patients.

Table 2 Methods for future shift prediction in MRIgRT

Recent literature [24, 28, 29] therefore reported inconsistencies in the superiority between linear and non-linear prediction methods for MRIgRT. This phenomenon can also be observed for motion prediction methods in the context of traditional real-time tumor tracking systems. Sharp et al. [37] reported that the linear prediction methods outperformed more complex prediction methods such as ANN and Kalman filter (KF). Jöhl et al. [41] compared 18 prediction methods for shift prediction among 93 respiratory motions, concluding that linear prediction methods were sufficient and non-linear prediction methods were not necessarily needed. On the contrary, Murphy et al. [43] demonstrated that non-linear prediction methods were more robust than linear prediction methods for irregular respiratory motions. Wang et al. [52] found that Bi-LSTM outperformed linear prediction methods using 103 respiratory motions, especially in cases with relatively long system latency. Wang et al. [53] also reported that an LSTM outperformed a SVM using a publicly available dataset. A possible explanation of these contradictory results might be that non-linear prediction methods have been relying more and more on RNNs compared to the ANNs in early studies. For example, Lin et al. [54] trained RNNs for shift prediction among 1703 respiratory motions, and showed the obtained LSTM to outperform an ANN. Furthermore, the performance of complex RNNs is more dependent on the optimization of model hyperparameters. As reported in Samadi et al. [23], tuning the hyperparameters of RNNs resulted in a 25–30% improvement for all models compared to previous studies.
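To make the linear baseline in these comparisons concrete, the following is a minimal sketch of a sliding-window ridge-regression shift predictor on a toy sinusoidal trace. This is illustrative only: the window length, horizon, regularization, and trace are arbitrary choices, not the implementation of any cited study.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression mapping the last w samples of a
    trace to the position one frame ahead."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def predict(coef, window):
    return np.append(window, 1.0) @ coef

# toy SI trace at 4 Hz (4 s breathing period); horizon h = 1 frame = 250 ms
t = np.arange(200) / 4.0
trace = np.sin(2 * np.pi * t / 4.0)
w, h = 8, 1
X = np.stack([trace[i:i + w] for i in range(len(trace) - w - h + 1)])
y = trace[w + h - 1:]  # target: the sample h frames after each window
coef = fit_ridge(X, y)
next_val = predict(coef, trace[-w:])  # forecast of the yet-unseen sample
print(round(next_val, 3))
```

A pure sinusoid satisfies a short linear recurrence, so the linear model forecasts it almost exactly; the interesting comparisons in the cited studies arise on irregular patient traces, where this no longer holds.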

Contour prediction and volumetric prediction

Contour prediction and volumetric prediction are relatively new research interests in motion management for radiotherapy. Theoretically, they can provide more accurate real-time tumor tracking than shift prediction. Contour prediction can be achieved either by directly predicting the 2D contour or by 2D image prediction. Direct contour prediction predicts future tumor contours from a sequence of observed contours, while image prediction predicts, from a series of past images, future images or DVFs that can be used to warp the last observed tumor contour. For real-time tumor tracking, contour prediction is the clinically more relevant method. However, image prediction eliminates the need for manual target delineation, allowing unsupervised training on larger datasets. Table 3 summarizes the methods for contour prediction and volumetric prediction in MRIgRT.

Table 3 Methods for future contour and volumetric prediction in MRIgRT

For contour prediction, Noorda et al. [45] combined a subject-specific motion model with respiratory motion surrogate prediction on 4 healthy volunteers. However, this method could not extrapolate respiratory motion states not included in the constructed subject-specific motion model. Ginn et al. [22] proposed an image regression (IR) algorithm for image prediction using 8 healthy volunteers and 13 cancer patients. This method outperformed linear prediction methods in predicting the centroid position of 2D contour by utilizing a weighted combination of previously observed respiratory motion states, with weights determined via sum of squared differences (SSD) between current and past images. Nevertheless, the calculation of SSD is susceptible to image noise and IR may not provide accurate predictions for irregular respiratory motion not captured in the selected most similar images. Romaguera et al. [55] utilized a convolutional LSTM combined with spatial transformer layers (ConvLSTM-STL) for image prediction on 12 healthy volunteers, achieving median vessel misalignments of 0.45 mm and 0.57 mm for prediction windows of 320 ms and 640 ms, respectively. In a follow-up study, Romaguera et al. [48] demonstrated the superiority of a transformer network compared to ConvLSTMs and ConvGRUs for contour prediction. Interestingly, Lombardo et al. [56] found that an LSTM-shift prediction method, in which the last available contour is shifted by the difference between the predicted and last centroid position, outperformed both ConvLSTM contour and image prediction methods when looking at the accuracy of the predicted contours. More specifically, patients with larger respiratory motion are more likely to benefit from using the LSTM-shift prediction method.

In addition to spatiotemporal prediction, the implementation of volumetric prediction involves inferring 3D volumetric information from 2D cine MR images. This can be achieved through 2D-3D deformable image registration, which deforms a 3D pre-treatment reference volumetric image to align with orthogonal 2D cine MR images [57, 58]. However, 2D-3D deformable image registration is time-consuming and infeasible for real-time applications. Alternatively, several studies constructed subject-specific or population-based motion models to overcome this limitation. Subject-specific motion models [59,60,61] parameterize the motion information within pre-treatment 4D datasets using principal component analysis (PCA), and can thereby accelerate the inference of DVFs. For example, Liu et al. [62] combined a subject-specific motion model with PCA coefficient prediction to achieve volumetric prediction using diagnostic MRI. Nevertheless, inferring DVFs that are not represented in the constructed subject-specific motion model might be problematic, and pre-treatment 4D MRI is not always available in clinical scenarios. To address these issues, various studies have proposed population-based motion models [63,64,65] that utilize large datasets to capture a broader range of respiratory motion patterns. Recently, Romaguera et al. [48] combined a population-based motion model with a transformer network for volumetric motion prediction using 25 healthy volunteers, and achieved prediction of future 3D DVFs with a mean TRE of 1.2 ± 0.7 mm.
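The PCA parameterization underlying such motion models can be sketched as follows: flattened DVFs are compressed to a handful of principal-component coefficients, so prediction can operate on a low-dimensional coefficient trace instead of the full field. The data below are synthetic with two planted latent modes; a real model would be built from DVFs extracted from a pre-treatment 4D dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy training set: 20 flattened DVFs driven by 2 latent respiratory modes
modes = rng.normal(size=(2, 300))
coeffs = rng.normal(size=(20, 2))
dvfs = coeffs @ modes  # shape (20, 300)

# PCA via SVD of the mean-centred DVF matrix
mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
basis = Vt[:2]  # first two principal components

# a new DVF is compressed to 2 coefficients; motion prediction can then
# run on this coefficient trace, and the field is rebuilt on demand
new_dvf = 1.5 * modes[0] - 0.5 * modes[1]
c = (new_dvf - mean) @ basis.T
recon = mean + c @ basis
print(np.allclose(recon, new_dvf, atol=1e-6))  # True
```

Because the toy DVFs lie exactly in a two-dimensional subspace, two components reconstruct them perfectly; real respiratory DVFs retain a residual beyond the leading components.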

Inference of 3D volumetric information in real-time can also be achieved via deep learning-based fast image reconstruction [66, 67]. Terpstra et al. [68] trained a multi-resolution convolutional neural network for inferring 3D DVFs, achieving a TRE of 1.87 ± 1.65 mm. Similarly, Shao et al. [69] proposed a deep learning-based deformable registration network for downsampled 4D-MRI image reconstruction with sub-second latency. Xiao et al. [70] proposed a downsampling-invariant deformable registration model for inferring 3D DVFs, obtaining a reconstruction time of less than 500 ms. Liu et al. [71, 72] developed a geometry-informed deep learning framework for inferring 3D volumetric information with sub-second acquisition time, and incorporated implicit neural representation learning with prior information to enable fast volumetric image reconstruction from orthogonal cine MR images. These studies offer alternative approaches for inferring 3D volumetric information and facilitating further volumetric prediction in MRIgRT.

Discussion

Several clinical studies [73,74,75,76,77] have demonstrated that the superior treatment accuracy provided by MRIgRT is associated with improved clinical outcomes. Notably, Neylon et al. [78] found that large intrafraction motion in patients correlates with increased toxicity, which highlights the importance of real-time motion management in MRIgRT. Furthermore, real-time beam adaptation with either gating or MLC-tracking can provide a reduction of the CTV-PTV margin, boosting the potential of dose escalation [27, 79,80,81]. To maintain the accuracy of MLC-tracking, respiratory motion prediction becomes imperative to alleviate the system latency inherent to MRIgRT systems. In contrast to traditional real-time tracking systems, MRIgRT has the potential to achieve more advanced contour prediction and volumetric prediction, mainly relying on AI-based methods.

However, the implementation of AI in health and medicine faces challenges such as data limitations and modeling divergence [82]. For AI-based respiratory motion prediction in MRIgRT, complex clinical workflows and therefore reduced patient numbers limit the collection of large datasets, making it challenging to gather the amount of data required for robust model training, validation, and testing. Additionally, many studies utilized private datasets for the AI modeling [83, 84], and the lack of data transparency and code availability may further reduce the replicability of the reported results [85]. For example, Lombardo et al. [28, 34] reported that an LSTM outperformed a linear method, whereas Li et al. [24] recently obtained the opposite result. Therefore, in future work, the authors aim to establish a publicly available benchmark dataset for AI model comparison within AI-based respiratory motion prediction in MRIgRT.

Inspired by the University of California, Riverside (UCR) time series classification archive [86,87,88,89,90], this publicly available benchmark dataset should aim to standardize the modeling approaches, thereby promoting fair comparisons and accelerating advancements for AI-based respiratory motion prediction in MRIgRT. To ensure diversity in respiratory motion patterns, the data collection for this publicly available benchmark dataset should follow these criteria: (1) including irregular respiratory motions; (2) comprising multiple tumor sites to represent varying moving anatomies; and (3) incorporating multi-institutional data with varying image quality to assess the robustness of the prediction methods. To foster collaboration and reproducibility, this publicly available benchmark dataset will include detailed documentation, such as guidelines for data usage, model training, and evaluation metrics. Researchers will be encouraged to share their code and results through an open repository, enabling the community to build upon each other’s work and validate findings independently.

The reported average or median tracking accuracies of shift prediction methods (Tables 2 and 3) achieved in-silico were consistently within 3 mm. However, these in-silico studies did not evaluate and report the uncertainties associated with rapid target localization algorithms [69, 91,92,93]. This indicates that the accuracy in clinical scenarios might be inferior to the in-silico one, highlighting the need for developing more advanced prediction methods. As reported in Ginn et al. [22], their proposed contour prediction method outperformed common shift prediction methods. In contrast, Lombardo et al. [56] concluded that a shift prediction method overall outperformed both contour and image prediction methods. Their results also indicated that patients with smaller respiratory motion are more likely to benefit from the contour prediction methods. The execution times of their LSTM-shift model, ConvLSTM, and ConvLSTM-STL were 17 ± 3 ms, 14 ± 1 ms, and 45 ± 1 ms, respectively, which is clinically acceptable. Additionally, contour prediction and volumetric prediction can also provide information on surrounding organs at risk (OARs), which may enhance the sparing of these critical OARs.

Model optimization strategies can also influence the final performance of AI-based respiratory motion prediction methods [54]. An interesting model optimization strategy is adaptive learning (also called continuous or online learning), which continuously updates the model weights to enable adaptation to recent respiratory motion patterns [94, 95]. Sun et al. [96] have confirmed that adaptive learning improves the performance of shift prediction models among 202 respiratory motions obtained from a real-time position management (RPM) device. Lombardo et al. [28, 56] also found that LSTMs with adaptive learning outperformed standard LSTM for shift prediction and contour prediction in MRIgRT. However, unlike RPM which directly provides respiratory motion signals, 2D cine MR images require additional preprocessing to extract these signals. For this reason, in their experimental study on a prototype MR-linac, Lombardo et al. [34] utilized a template matching algorithm for target centroid position extraction, and further confirmed the efficacy of adaptive learning in boosting LSTM performance for shift prediction during MRI-guided MLC tracking.
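The adaptive learning idea can be illustrated with a deliberately simple stand-in: a small linear predictor refit on a sliding buffer of recent samples at every step, so the model continuously tracks the latest motion pattern. The cited studies instead update LSTM weights online; the trace, window, and buffer sizes here are arbitrary illustrative choices.

```python
import numpy as np

def fit_lin(X, y, lam=1e-3):
    # closed-form regularized least squares with a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def online_predict(trace, w=8, buffer=60):
    """One-step-ahead prediction, refitting the linear model on the
    `buffer` most recent windows before every prediction."""
    preds = []
    for t in range(buffer + w, len(trace)):
        recent = trace[t - buffer - w:t]
        X = np.stack([recent[i:i + w] for i in range(buffer)])
        y = recent[w:]                      # one-step-ahead targets
        coef = fit_lin(X, y)
        preds.append(np.append(trace[t - w:t], 1.0) @ coef)
    return np.array(preds)

# amplitude-modulated breathing surrogate: the slow modulation mimics a
# drifting motion pattern that the predictor must adapt to
n = np.arange(400)
trace = (1 + 0.3 * np.sin(2 * np.pi * n / 200)) * np.sin(2 * np.pi * n / 16)
preds = online_predict(trace)
err = np.mean(np.abs(preds - trace[68:]))  # mean absolute one-step error
print(round(err, 4))
```

Refitting at every step is affordable here because the model is tiny; adapting a full LSTM online, as in the cited MRIgRT studies, requires a careful trade-off between update cost and the imaging frame interval.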

A straightforward approach to evaluating the end-to-end uncertainties in real-time motion management systems is to conduct experimental studies. Uijtewaal et al. [16, 31] experimentally demonstrated the combination of IMRT/VMAT with MLC-tracking on a prototype of a commercial MR-linac, obtaining 2%/1 mm pseudo-3D gamma passing rates of 22–77% without MLC-tracking and 92–100% with MLC-tracking. However, this 3D gamma analysis might not be applicable for evaluating contour prediction and volumetric prediction. Lombardo et al. [34] proposed an alternative EPID-based performance analysis to evaluate the end-to-end uncertainties of MLC-tracking on a prototype research MR-linac. The 2D cine MR images offered by MR-linacs allow for precise target localization, yet are constrained by low temporal resolution. Integrating MR-linacs with the high temporal resolution of optical surface imaging systems might enhance the tracking accuracy of MRIgRT [97].

Conclusions

In conclusion, AI-based methods have extended respiratory motion prediction from shift prediction to contour/image and volumetric prediction for MRIgRT. However, the inconsistent results observed in the literature underscore the need to establish a benchmark dataset for comparing traditional and AI-based respiratory motion prediction methods in MRIgRT. Additionally, investigating the impact of the uncertainties associated with the real-time target localization step, which precedes the application of AI-based respiratory motion prediction, might represent another research direction in MRIgRT.

Data availability

No datasets were generated or analysed during the current study.

Abbreviations

VMAT:

Volumetric modulated arc therapy

SBRT:

Stereotactic body radiotherapy

MRIgRT:

MRI-guided radiotherapy

AI:

Artificial intelligence

MLC:

Multi-leaf-collimator

RNN:

Recurrent neural network

MAE:

Mean absolute error

RMSE:

Root mean square error

DSC:

Dice similarity coefficient

TRE:

Target registration error

DVFs:

Deformation vector fields

SI:

Superior-inferior

AP:

Anterior-posterior

LR:

Left-right

ANN:

Artificial neural networks

SVM:

Support vector machines

LSTM:

Long short-term memory

GRUs:

Gated recurrent unit networks

References

  1. Buchele C, Renkamp CK, Regnery S, Behnisch R, Rippke C, Schlüter F, et al. Intrafraction organ movement in adaptive MR-guided radiotherapy of abdominal lesions - dosimetric impact and how to detect its extent in advance. Radiat Oncol. 2024;19:80.


  2. van Ommen F, Le Quellenec GAT, Willemsen-Bosman ME, van Noesel MM, van den Heuvel-Eibrink MM, Seravalli E, et al. MRI-based inter- and intrafraction motion analysis of the pancreatic tail and spleen as preparation for adaptive MRI-guided radiotherapy in neuroblastoma. Radiat Oncol. 2023;18:160.


  3. Keall PJ, Mageras GS, Balter JM, Emery RS, Forster KM, Jiang SB, et al. The management of respiratory motion in radiation oncology report of AAPM Task Group 76. Med Phys. 2006;33:3874–900.


  4. Benedict SH, Yenice KM, Followill D, Galvin JM, Hinson W, Kavanagh B, et al. Stereotactic body radiation therapy: the report of AAPM Task Group 101. Med Phys. 2010;37:4078–101.


  5. Li H, Dong L, Bert C, Chang J, Flampouri S, Jee K-W, et al. AAPM Task Group Report 290: respiratory motion management for particle therapy. Med Phys. 2022;5:223.


  6. Mannerberg A, Nilsson MP, Edvardsson A, Karlsson K, Ceberg S. Abdominal compression as motion management for stereotactic radiotherapy of ventricular tachycardia. Phys Imaging Radiat Oncol. 2023;28:100499.


  7. Dekker J, Essers M, Verheij M, Kusters M, de Kruijf W. Dose coverage and breath-hold analysis of breast cancer patients treated with surface-guided radiotherapy. Radiat Oncol. 2023;18:72.


  8. Høgsbjerg KW, Maae E, Nielsen MH, Stenbygaard L, Pedersen AN, Yates E, et al. Benefit of respiratory gating in the Danish breast Cancer Group partial breast irradiation trial. Radiother Oncol. 2024;194:110195.


  9. Zhang X, Liu W, Xu F, He W, Song Y, Li G et al. Neural signals-based respiratory motion tracking: a proof-of-concept study. Phys Med Biol 2023.

  10. Keall PJ, Sawant A, Berbeco RI, Booth JT, Cho B, Cerviño LI, et al. AAPM Task Group 264: the safe clinical implementation of MLC tracking in radiotherapy. Med Phys. 2021;48:e44–64.

  11. Wang C-Y, Ho L-T, Lin L-Y, Chan H-M, Chen H-Y, Yu T-L, et al. Noninvasive cardiac radioablation for ventricular tachycardia: dosimetric comparison between linear accelerator- and robotic CyberKnife-based radiosurgery systems. Radiat Oncol. 2023;18:187.

  12. Keall PJ, Brighi C, Glide-Hurst C, Liney G, Liu PZY, Lydiard S, et al. Integrated MRI-guided radiotherapy - opportunities and challenges. Nat Rev Clin Oncol. 2022;19:458–70.

  13. Hall WA, Paulson E, Li XA, Erickson B, Schultz C, Tree A, et al. Magnetic resonance linear accelerator technology and adaptive radiation therapy: an overview for clinicians. CA Cancer J Clin. 2022;72:34–56.

  14. Green OL, Rankine LJ, Cai B, Curcuru A, Kashani R, Rodriguez V, et al. First clinical implementation of real-time, real anatomy tracking and radiation beam control. Med Phys. 2018;45:3728–40.

  15. Grimbergen G, Hackett SL, van Ommen F, van Lier ALHMW, Borman PTS, Meijers LTC, et al. Gating and intrafraction drift correction on a 1.5 T MR-Linac: clinical dosimetric benefits for upper abdominal tumors. Radiother Oncol. 2023;189:109932.

  16. Uijtewaal P, Borman PTS, Woodhead PL, Hackett SL, Raaymakers BW, Fast MF. Dosimetric evaluation of MRI-guided multi-leaf collimator tracking and trailing for lung stereotactic body radiation therapy. Med Phys. 2021;48:1520–32.

  17. Glitzner M, Woodhead PL, Borman PTS, Lagendijk JJW, Raaymakers BW. Technical note: MLC-tracking performance on the Elekta unity MRI-linac. Phys Med Biol. 2019;64:15NT02.

  18. Liu PZY, Dong B, Nguyen DT, Ge Y, Hewson EA, Waddington DEJ, et al. First experimental investigation of simultaneously tracking two independently moving targets on an MRI-linac using real-time MRI and MLC tracking. Med Phys. 2020;47:6440–9.

  19. Lombardo E, Dhont J, Page D, Garibaldi C, Künzel LA, Hurkmans C, et al. Real-time motion management in MRI-guided radiotherapy: current status and AI-enabled prospects. Radiother Oncol. 2023;190:109970.

  20. Seppenwoolde Y, Berbeco RI, Nishioka S, Shirato H, Heijmen B. Accuracy of tumor motion compensation algorithm from a robotic respiratory tracking system: a simulation study. Med Phys. 2007;34:2774–84.

  21. Hiraoka M, Mizowaki T, Matsuo Y, Nakamura M, Verellen D. The gimbaled-head radiotherapy system: rise and downfall of a dedicated system for dynamic tumor tracking with real-time monitoring and dynamic WaveArc. Radiother Oncol. 2020;153:311–8.

  22. Ginn JS, Ruan D, Low DA, Lamb JM. An image regression motion prediction technique for MRI-guided radiotherapy evaluated in single-plane cine imaging. Med Phys. 2020;47:404–13.

  23. Samadi Miandoab P, Saramad S, Setayeshi S. Respiratory motion prediction based on deep artificial neural networks in CyberKnife system: a comparative study. J Appl Clin Med Phys. 2023;24:e13854.

  24. Li Y, Li Z, Zhu J, Li B, Shu H, Di Ge. Online prediction for respiratory movement compensation: a patient-specific gating control for MRI-guided radiotherapy. Radiat Oncol. 2023;18:149.

  25. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

  26. van Sörnsen de Koste JR, Palacios MA, Bruynzeel AME, Slotman BJ, Senan S, Lagerwaard FJ. MR-guided gated stereotactic radiation therapy delivery for lung, adrenal, and pancreatic tumors: a geometric analysis. Int J Radiat Oncol Biol Phys. 2018;102:858–66.

  27. Galetto M, Nardini M, Capotosti A, Meffe G, Cusumano D, Boldrini L, et al. Motion and dosimetric criteria for selecting gating technique for apical lung lesions in magnetic resonance guided radiotherapy. Front Oncol. 2023;13:1280845.

  28. Lombardo E, Rabe M, Xiong Y, Nierer L, Cusumano D, Placidi L et al. Offline and online LSTM networks for respiratory motion prediction in MR-guided radiotherapy. Phys Med Biol 2022.

  29. Seregni M, Paganelli C, Lee D, Greer PB, Baroni G, Keall PJ, Riboldi M. Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI. Phys Med Biol. 2016;61:872–87.

  30. Yun J, Wachowicz K, Mackenzie M, Rathee S, Robinson D, Fallone BG. First demonstration of intrafractional tumor-tracked irradiation using 2D phantom MR images on a prototype linac-MR. Med Phys. 2013;40:51718.

  31. Uijtewaal P, Borman PTS, Woodhead PL, Kontaxis C, Hackett SL, Verhoeff J, et al. First experimental demonstration of VMAT combined with MLC tracking for single and multi fraction lung SBRT on an MR-linac. Radiother Oncol. 2022;174:149–57.

  32. Crijns SPM, Raaymakers BW, Lagendijk JJW. Proof of concept of MRI-guided tracked radiation delivery: tracking one-dimensional motion. Phys Med Biol. 2012;57:7863–72.

  33. Liu PZY, Shan S, Waddington D, Whelan B, Dong B, Liney G, Keall P. Rapid distortion correction enables accurate magnetic resonance imaging-guided real-time adaptive radiotherapy. Phys Imaging Radiat Oncol. 2023;25:100414.

  34. Lombardo E, Liu PZY, Waddington DEJ, Grover J, Whelan B, Wong E, et al. Experimental comparison of linear regression and LSTM motion prediction models for MLC-tracking on an MRI-linac. Med Phys. 2023;50:7083–92.

  35. Zha D, Bhat ZP, Lai K-H, Yang F, Hu X. Data-centric AI: perspectives and challenges; 2023.

  36. Krauss A, Nill S, Oelfke U. The comparative performance of four respiratory motion predictors for real-time tumour tracking. Phys Med Biol. 2011;56:5303–17.

  37. Sharp GC, Jiang SB, Shimizu S, Shirato H. Prediction of respiratory tumour motion for real-time image-guided radiotherapy. Phys Med Biol. 2004;49:425–40.

  38. Mueller M, Poulsen P, Hansen R, Verbakel W, Berbeco R, Ferguson D, et al. The markerless lung target tracking AAPM Grand Challenge (MATCH) results. Med Phys. 2022;49:1161–80.

  39. Schmitt D, Nill S, Roeder F, Gompelmann D, Herth F, Oelfke U. Motion monitoring during a course of lung radiotherapy with anchored electromagnetic transponders: quantification of inter- and intrafraction motion and variability of relative transponder positions. Strahlenther Onkol. 2017;193:840–7.

  40. Suh Y, Dieterich S, Cho B, Keall PJ. An analysis of thoracic and abdominal tumour motion for stereotactic body radiotherapy patients. Phys Med Biol. 2008;53:3623–40.

  41. Jöhl A, Ehrbar S, Guckenberger M, Klöck S, Meboldt M, Zeilinger M, et al. Performance comparison of prediction filters for respiratory motion tracking in radiotherapy. Med Phys. 2020;47:643–50.

  42. McClelland JR, Hawkes DJ, Schaeffter T, King AP. Respiratory motion models: a review. Med Image Anal. 2013;17:19–42.

  43. Murphy MJ, Dieterich S. Comparative performance of linear and nonlinear neural networks to predict irregular breathing. Phys Med Biol. 2006;51:5903–14.

  44. Ernst F, Schlaefer A, Schweikard A. Predicting the outcome of respiratory motion prediction. Med Phys. 2011;38:5569–81.

  45. Noorda YH, Bartels LW, Viergever MA, Pluim JPW. Subject-specific liver motion modeling in MRI: a feasibility study on spatiotemporal prediction. Phys Med Biol. 2017;62:2581–97.

  46. Zou KH, Warfield SK, Bharatha A, Tempany CMC, Kaus MR, Haker SJ, et al. Statistical validation of image segmentation quality based on a spatial overlap index. Acad Radiol. 2004;11:178–89.

  47. Huttenlocher DP, Klanderman GA, Rucklidge WJ. Comparing images using the Hausdorff distance. IEEE Trans Pattern Anal Mach Intell. 1993;15:850–63.

  48. Romaguera LV, Alley S, Carrier J-F, Kadoury S. Conditional-based transformer network with learnable queries for 4D deformation forecasting and tracking. IEEE Trans Med Imaging. 2023;42:1603–18.

  49. Yun J, Mackenzie M, Rathee S, Robinson D, Fallone BG. An artificial neural network (ANN)-based lung-tumor motion predictor for intrafractional MR tumor tracking. Med Phys. 2012;39:4423–33.

  50. Bourque AE, Carrier J-F, Filion É, Bedwani S. A particle filter motion prediction algorithm based on an autoregressive model for real-time MRI-guided radiotherapy of lung cancer. Biomed Phys Eng Express. 2017;3:35001.

  51. Bourque AE, Bedwani S, Carrier J-F, Ménard C, Borman P, Bos C, et al. Particle filter-based target tracking algorithm for magnetic resonance-guided respiratory compensation: robustness and accuracy assessment. Int J Radiat Oncol Biol Phys. 2018;100:325–34.

  52. Wang R, Liang X, Zhu X, Xie Y. A feasibility of respiration prediction based on deep Bi-LSTM for real-time tumor tracking. IEEE Access. 2018;6:51262–8.

  53. Wang G, Li Z, Li G, Dai G, Xiao Q, Bai L, et al. Real-time liver tracking algorithm based on LSTM and SVR networks for use in surface-guided radiation therapy. Radiat Oncol. 2021;16:13.

  54. Lin H, Shi C, Wang B, Chan MF, Tang X, Ji W. Towards real-time respiratory motion prediction based on long short-term memory neural networks. Phys Med Biol. 2019;64:85010.

  55. Romaguera LV, Plantefève R, Romero FP, Hébert F, Carrier J-F, Kadoury S. Prediction of in-plane organ deformation during free-breathing radiotherapy via discriminative spatial transformer networks. Med Image Anal. 2020;64:101754.

  56. Lombardo E, Rabe M, Xiong Y, Nierer L, Cusumano D, Placidi L, et al. Evaluation of real-time tumor contour prediction using LSTM networks for MR-guided radiotherapy. Radiother Oncol. 2023;182:109555.

  57. Paganelli C, Lee D, Kipritidis J, Whelan B, Greer PB, Baroni G, et al. Feasibility study on 3D image reconstruction from 2D orthogonal cine-MRI for MRI-guided radiotherapy. J Med Imaging Radiat Oncol. 2018;62:389–400.

  58. Paganelli C, Portoso S, Garau N, Meschini G, Via R, Buizza G, et al. Time-resolved volumetric MRI in MRI-guided radiotherapy: an in silico comparative analysis. Phys Med Biol. 2019;64:185013.

  59. Harris W, Yin F-F, Cai J, Ren L. Volumetric cine magnetic resonance imaging (VC-MRI) using motion modeling, free-form deformation and multi-slice undersampled 2D cine MRI reconstructed with spatio-temporal low-rank decomposition. Quant Imaging Med Surg. 2020;10:432–50.

  60. Pham J, Harris W, Sun W, Yang Z, Yin F-F, Ren L. Predicting real-time 3D deformation field maps (DFM) based on volumetric cine MRI (VC-MRI) and artificial neural networks for on-board 4D target tracking: a feasibility study. Phys Med Biol. 2019;64:165016.

  61. Harris W, Ren L, Cai J, Zhang Y, Chang Z, Yin F-F. A technique for generating volumetric cine-magnetic resonance imaging. Int J Radiat Oncol Biol Phys. 2016;95:844–53.

  62. Liu L, Johansson A, Cao Y, Lawrence TS, Balter JM. Volumetric prediction of breathing and slow drifting motion in the abdomen using radial MRI and multi-temporal resolution modeling. Phys Med Biol. 2021;66:175028.

  63. Romaguera LV, Mezheritsky T, Mansour R, Tanguay W, Kadoury S. Predictive online 3D target tracking with population-based generative networks for image-guided radiotherapy. Int J Comput Assist Radiol Surg. 2021;16:1213–25.

  64. Romaguera LV, Mezheritsky T, Mansour R, Carrier J-F, Kadoury S. Probabilistic 4D predictive model from in-room surrogates using conditional generative networks for image-guided radiotherapy. Med Image Anal. 2021;74:102250.

  65. Wilms M, Werner R, Yamamoto T, Handels H, Ehrhardt J. Subpopulation-based correspondence modelling for improved respiratory motion estimation in the presence of inter-fraction motion variations. Phys Med Biol. 2017;62:5823–39.

  66. Jin KH, McCann MT, Froustey E, Unser M. Deep convolutional neural network for inverse problems in imaging. IEEE Trans Image Process. 2017;26:4509–22.

  67. Zhou H, Zhu Y, Zhang H, Zhao X, Zhang P. Multi-scale dilated dense reconstruction network for limited-angle computed tomography. Phys Med Biol 2023.

  68. Terpstra ML, Maspero M, Bruijnen T, Verhoeff JJC, Lagendijk JJW, van den Berg CAT. Real-time 3D motion estimation from undersampled MRI using multi-resolution neural networks. Med Phys. 2021;48:6597–613.

  69. Shao H-C, Li T, Dohopolski MJ, Wang J, Cai J, Tan J et al. Real-time MRI motion estimation through an unsupervised k-space-driven deformable registration network (KS-RegNet). Phys Med Biol 2022.

  70. Xiao H, Han X, Zhi S, Wong Y-L, Liu C, Li W, et al. Ultra-fast multi-parametric 4D-MRI image reconstruction for real-time applications using a downsampling-invariant deformable registration (D2R) model. Radiother Oncol. 2023;189:109948.

  71. Liu L, Shen L, Johansson A, Balter JM, Cao Y, Chang D, Xing L. Real time volumetric MRI for 3D motion tracking via geometry-informed deep learning. Med Phys. 2022;49:6110–9.

  72. Liu L, Shen L, Johansson A, Balter JM, Cao Y, Vitzthum L, Xing L. Volumetric MRI with sparse sampling for MR-guided 3D motion tracking via sparse prior‐augmented implicit neural representation learning. Med Phys. 2024;51:2526–37.

  73. Yim K, Hsu S-H, Nolazco JI, Cagney D, Mak RH, D'Andrea V, et al. Stereotactic magnetic resonance-guided adaptive radiation therapy for localized kidney cancer: early outcomes from a prospective phase 1 trial and supplemental cohort. Eur Urol Oncol. 2024;7:147–50.

  74. Weisz Ejlsmark M, Bahij R, Schytte T, Rønn Hansen C, Bertelsen A, Mahmood F, et al. Adaptive MRI-guided stereotactic body radiation therapy for locally advanced pancreatic cancer – a phase II study. Radiother Oncol. 2024;197:110347.

  75. Chin R-I, Schiff JP, Bommireddy A, Kang KH, Andruska N, Price AT, et al. Clinical outcomes of patients with unresectable primary liver cancer treated with MR-guided stereotactic body radiation therapy: a six-year experience. Clin Transl Radiat Oncol. 2023;41:100627.

  76. Chiloiro G, Panza G, Boldrini L, Romano A, Placidi L, Nardini M, et al. REPeated mAgnetic resonance image-guided stereotactic body Radiotherapy (MRIg-reSBRT) for oligometastatic patients: REPAIR, a mono-institutional retrospective study. Radiat Oncol. 2024;19:52.

  77. Poiset SJ, Shah S, Cappelli L, Anné P, Mooney KE, Werner-Wasik M, et al. Early outcomes of MR-guided SBRT for patients with recurrent pancreatic adenocarcinoma. Radiat Oncol. 2024;19:65.

  78. Neylon J, Ma TM, Savjani R, Low DA, Steinberg ML, Lamb JM, et al. Quantifying intrafraction motion and the impact of gating for magnetic resonance imaging-guided stereotactic radiation therapy for prostate cancer: analysis of the magnetic resonance imaging arm from the MIRAGE phase 3 randomized trial. Int J Radiat Oncol Biol Phys. 2024;118:1181–91.

  79. Chuong MD, Lee P, Low DA, Kim J, Mittauer KE, Bassetti MF, et al. Stereotactic MR-guided on-table adaptive radiation therapy (SMART) for borderline resectable and locally advanced pancreatic cancer: a multi-center, open-label phase 2 study. Radiother Oncol. 2024;191:110064.

  80. Rimner A, Gelblum DY, Wu AJ, Shepherd AF, Mueller B, Zhang S, et al. Stereotactic body radiation therapy for stage IIA to IIIA inoperable non-small cell lung cancer: a phase 1 dose-escalation trial. Int J Radiat Oncol Biol Phys. 2024;119:869–77.

  81. Reyngold M, Karam SD, Hajj C, Wu AJ, Cuaron J, Lobaugh S, et al. Phase 1 dose escalation study of SBRT using 3 fractions for locally advanced pancreatic cancer. Int J Radiat Oncol Biol Phys. 2023;117:53–63.

  82. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28:31–8.

  83. Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol. 2021;65:596–611.

  84. Salari E, Wang J, Wynne J, Chang C-W, Yang X. Artificial intelligence-based motion tracking in cancer radiotherapy: a review.

  85. Fehr J, Citro B, Malpani R, Lippert C, Madai VI. A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health. 2024;6:1267290.

  86. Middlehurst M, Schäfer P, Bagnall A. Bake off redux: a review and experimental evaluation of recent time series classification algorithms; 2023.

  87. Foumani NM, Miller L, Tan CW, Webb GI, Forestier G, Salehi M. Deep learning for time series classification and extrinsic regression: a current survey; 2023.

  88. Ismail Fawaz H, Forestier G, Weber J, Idoumghar L, Muller P-A. Deep learning for time series classification: a review. Data Min Knowl Disc. 2019;33:917–63.

  89. Tan CW, Bergmeir C, Petitjean F, Webb GI. Time series extrinsic regression: Predicting numeric values from time series data. Data Min Knowl Discov. 2021;35:1032–60.

  90. Dau HA, Bagnall A, Kamgar K, Yeh C-CM, Zhu Y, Gharghabi S, et al. The UCR time series archive. IEEE/CAA J Autom Sinica. 2019;6:1293–305.

  91. Hunt B, Gill GS, Alexander DA, Streeter SS, Gladstone DJ, Russo GA, et al. Fast deformable image registration for real-time target tracking during radiation therapy using cine MRI and deep learning. Int J Radiat Oncol Biol Phys. 2023;115:983–93.

  92. Wei R, Chen J, Liang B, Chen X, Men K, Dai J. Real-time 3D MRI reconstruction from cine-MRI using unsupervised network in MRI-guided radiotherapy for liver cancer. Med Phys. 2023;50:3584–96.

  93. Frueh M, Kuestner T, Nachbar M, Thorwarth D, Schilling A, Gatidis S. Self-supervised learning for automated anatomical tracking in medical image data with minimal human labeling effort. Comput Methods Programs Biomed. 2022;225:107085.

  94. Pohl M, Uesaka M, Takahashi H, Demachi K, Chhatkuli RB. Respiratory motion forecasting with online learning of recurrent neural networks for safety enhancement in externally guided radiotherapy; 2024.

  95. Murphy MJ, Pokhrel D. Optimization of an adaptive neural network to predict breathing. Med Phys. 2009;36:40–7.

  96. Sun W, Wei Q, Ren L, Dang J, Yin F-F. Adaptive respiratory signal prediction using dual multi-layer perceptron neural networks. Phys Med Biol. 2020;65:185005.

  97. Shao H-C, Li Y, Wang J, Jiang S, Zhang Y. Real-time liver tumor localization via combined surface imaging and a single x-ray projection. Phys Med Biol 2023.

  98. Lamb JM, Ginn JS, O’Connell DP, Agazaryan N, Cao M, Thomas DH, et al. Dosimetric validation of a magnetic resonance image gated radiotherapy system using a motion phantom and radiochromic film. J Appl Clin Med Phys. 2017;18:163–9.

  99. Green OL, Rankine LJ, Cai B, Curcuru A, Kashani R, Rodriguez V, et al. First clinical implementation of real-time, real anatomy tracking and radiation beam control. Med Phys. 2018;45:3728–40.

  100. Kim T, Lewis B, Lotey R, Barberi E, Green O. Clinical experience of MRI4D QUASAR motion phantom for latency measurements in 0.35T MR-LINAC. J Appl Clin Med Phys. 2021;22:128–36.

Acknowledgements

Not applicable.

Funding

This work is supported by the National Natural Science Foundation of China (No. 12405390) and the Science and Technology Department of Sichuan Province, China (24ZDYF1028).

Author information

Contributions

Xiangbin Zhang designed the study, collected and analyzed the data, and drafted the manuscript. Di Yan and Haonan Xiao reviewed and edited the manuscript. Renming Zhong designed the study, revised the manuscript, and gave final approval. All authors read and approved the manuscript.

Corresponding author

Correspondence to Renming Zhong.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Zhang, X., Yan, D., Xiao, H. et al. Modeling of artificial intelligence-based respiratory motion prediction in MRI-guided radiotherapy: a review. Radiat Oncol 19, 140 (2024). https://doi.org/10.1186/s13014-024-02532-4
