The use of dose-escalated radiation for locally advanced non-small cell lung cancer in the U.S., 2004–2013

The clinical effects of radiation dose-intensification in locally advanced non-small cell lung cancer (NSCLCa) and other cancers are challenging to predict and are ideally studied in randomized trials. The purpose of this study was to assess the use of dose-escalated radiation for locally advanced NSCLCa in the U.S., 2004–2013, a period in which there were no published level 1 studies on dose-escalation. We performed analyses on two cancer registry databases with complementary strengths and weaknesses: the National Oncology Data Alliance (NODA) 2004–2013 and the National Cancer Database (NCDB) 2004–2012. We classified locally advanced patients according to the use of dose-escalation (>70 Gy). We used adjusted logistic regression to assess the association of year of treatment with dose-escalated radiation use in two periods representing time before and after the closure of a cooperative group trial (RTOG 0617) on dose-escalation: 2004–2010 and 2010–2013. To determine the year in which a significant change in dose could have been detected had dose been prospectively monitored within the NODA network, we compared the average annual radiation dose per year with the forecasted dose (average of the prior 3 years) adjusted for patient age and comorbidities. Within both the NODA and NCDB, use of dose-escalation increased from 2004 to 2010 (p < 0.0001) and decreased from 2010 to 2013 (p = 0.0018), even after controlling for potential confounders. Had the NODA network been monitoring radiation dose in this cohort, significant changes in average annual dose would have been detected at the end of 2008 and 2012. Patterns of radiation dosing in locally advanced NSCLCa changed in the U.S. in the absence of level 1 evidence. Monitoring radiation dose is feasible using an existing national cancer registry data collection infrastructure.


Introduction
Incremental technical advances in linear accelerators have steadily improved the delivery of radiotherapy to a patient's tumor while sparing the adjacent normal tissue. Leveraging advances in technology to dose-intensify is particularly compelling in diseases for which local control outcomes are poor, such as inoperable locally advanced non-small cell lung cancer (NSCLCa), which has an estimated 2-year local failure rate of 30% when treated with concurrent chemotherapy and standard dose radiation [1]. However, the clinical effects of dose-intensification in NSCLCa and other cancers have been challenging to predict, and several clinical studies have failed to validate presumed benefits [1][2][3]. A recent example is RTOG 0617, which randomized inoperable locally advanced NSCLCa patients to 60 Gy (standard arm) or 74 Gy (dose-escalation arm) with concurrent chemotherapy [4]. Unexpectedly, patients treated on the dose-escalation arm had substantially worse median overall survival (20.3 months versus 28.7 months), despite having acceptable radiation treatment plans based on established normal tissue constraints.
We hypothesized that because the theoretical rationale for dose-intensified treatments is compelling, they are slowly adopted even in the absence of high-level clinical evidence, and that this "radiation dose creep" may unexpectedly be causing harm. To assess the first of these hypotheses, that radiation dose creep occurs, we evaluated the patterns of radiation dosing in locally advanced NSCLCa between 2004 and 2013, a period in which there were no published level 1 studies on dose-escalation. We hypothesized that there would be an increasing trend in the use of dose-escalated radiation in the years preceding the closure of RTOG 0617, and a decreasing trend from immediately before the study's closure to the end of the study period.
Finally, we sought to determine the year in which a significant change in radiation dosing practice patterns could have been detected in this study period had dose been prospectively monitored using commonly available cancer registry data.

Data sources
The primary analyses of this study were performed on data extracted from two cancer registry databases with complementary strengths and weaknesses: the National Oncology Data Alliance® (NODA) (Elekta Inc., Sunnyvale, CA), years 2004-2013, and the National Cancer Database (NCDB) (American College of Surgeons, Chicago, IL), years 2004-2012. The NODA captures newly diagnosed cancer cases at more than 150 hospitals in the U.S. and includes all of the data fields sent to state and federal cancer registries. The strength of the NODA is that it includes radiation dose fields that are assessed for internal validity by reviewers with specialized radiation oncology training. The NODA, however, may not be broadly representative of U.S. practice. To assess the generalizability of our observations, we performed the same analysis using data from the NCDB, a more representative database that captures information from approximately 70% of all newly diagnosed cancers in the U.S. [5]. A weakness of the NCDB is that radiation dose is not secondarily validated. Both the NODA and the NCDB require only a minimum set of database fields to be completed by registrars to meet submission requirements, so not every case of locally advanced NSCLCa treated at participating institutions is necessarily captured within these databases.

Cohort identification
To minimize confounding, we sought to restrict our patterns of care analysis to patients likely meeting eligibility criteria for RTOG 0617 since this represents a cohort deemed potentially suitable for dose-escalated radiation by subject matter experts in the era of interest. Using manual chart review as the gold standard, we iteratively developed a cancer registry-based algorithm that classifies patients based on RTOG 0617 eligibility and whether they were treated with definitive-intent concurrent chemotherapy and radiation. To minimize misclassification errors, the algorithm was more restrictive than RTOG 0617 eligibility criteria with respect to tumor staging and prior allowable malignancies. The study cohorts included only patients with AJCC versions 6 and 7 clinical T3, N1 or T0-3, N2, M0, but excluded patients with derived AJCC v6 stage IIB, which included clinical T2, N1, and M0. The algorithm excluded patients treated to total doses of <59 Gy (to omit patients who received pre-operative radiation or palliative radiation or who discontinued radiation early) and patients with total doses of >80 Gy (to omit outliers likely related to reporting errors). We assessed the algorithm's ability to predict RTOG 0617 eligibility (positive predictive value) in validation cohorts at cancer registry programs in two regionally distinct hospitals.
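The dose-based inclusion and classification rules described above (and the >70 Gy dose-escalation threshold and ≥59 to <64 Gy 'low-standard' range defined later in the Methods) can be sketched as a simple classifier. This is a minimal illustration only; the actual registry algorithm also applied staging, prior-malignancy, and treatment-intent criteria not shown here.

```python
def classify_dose(total_dose_gy):
    """Classify a total delivered radiation dose per the study's rules.

    Doses <59 Gy (pre-operative, palliative, or discontinued courses) and
    >80 Gy (likely reporting errors) are excluded. >70 Gy is considered
    dose-escalated; >=59 to <64 Gy is the 'low-standard' range.
    """
    if total_dose_gy < 59 or total_dose_gy > 80:
        return "excluded"
    if total_dose_gy > 70:
        return "dose-escalated"
    if total_dose_gy < 64:
        return "low-standard"
    return "standard"
```

For example, a 74 Gy course (the RTOG 0617 experimental arm) is classified as dose-escalated, while a 60 Gy course (the standard arm) falls in the low-standard range.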

Internal and external validity
To assess internal validity (e.g., confounding and bias) within the NODA cohort, we used chi-square tests to compare the distribution of covariates between patients treated in the years before (2004-2010) and after (2011-2013) the early closure of RTOG 0617 (on June 17, 2011). We evaluated the external validity (e.g., generalizability) of our NODA findings in two ways. First, we used chi-square tests to compare the distribution of patient covariates in the NODA cohort (excluding 2013 cases) and the NCDB 2004-2012 cohort. In order to maximize the representativeness of the NCDB, we did not filter the NCDB cohort on radiation dose (unfiltered NCDB 2004-2012) for this first generalizability analysis. Second, after our a priori hypotheses were confirmed in the NODA cohort, we determined whether these patterns were also present in the NCDB, 2004-2012, which was similarly filtered on radiation dose (filtered NCDB 2004-2012).
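A period-by-covariate comparison of this kind is a standard chi-square test of independence on a contingency table. The sketch below uses made-up counts for a single binary covariate (the study compared many covariates, and the counts here are purely illustrative); SciPy is assumed.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = treatment period (2004-2010 vs 2011-2013),
# columns = a binary covariate (e.g., academic vs non-academic hospital).
counts = [[820, 410],
          [230, 140]]

chi2, p, dof, expected = chi2_contingency(counts)
# The covariate's distribution is considered to differ between periods
# if p falls below the study's pre-specified 0.01 threshold.
```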

Primary outcome and control variables
The primary outcome was the use of dose-escalated radiation over two a priori-defined periods. Period 1 was defined from 1/1/2004 to 12/31/2010, the period before the early closure of the high dose arms of RTOG 0617 was announced to participating institutions. Period 2 was defined from 1/1/2010 to 12/31/2013, the period from immediately before RTOG 0617 closure through the end of data collection. We defined dose-escalated radiation as a total dose of >70 Gy, consistent with guidelines and on-going national trials [6,7]. Both NODA and NCDB capture delivered dose, not prescribed dose. We a priori selected patient and disease control variables that might affect a physician's perception of the tolerability and effectiveness of dose-escalated radiation, and characteristics of the diagnosing hospital that might affect patterns of care (Table 1). Characteristics of the diagnosing hospital were defined as previously described [8,9]. We calculated confidence intervals around the estimated annual percentage of patients treated with dose-escalated radiation using the Clopper-Pearson method. We used logistic regression to assess the covariate-adjusted association of the year of treatment (continuous variable) with dose-escalated radiation use (categorical variable) in each period of interest. In a post hoc analysis, we characterized the use of total doses in the 'low-standard dose' range (≥59 Gy and <64 Gy), the range most consistent with the superior arm of RTOG 0617. We used linear regression to determine the year in which a significant change in radiation dosing could have been detected had total radiation dose been prospectively monitored within the NODA network. We compared the actual average radiation dose (continuous variable) used for a given year across the network with a forecasted average dose based on the average of the prior three years, with adjustments for age and comorbidities.
For 2005 and 2006, where three years of prior data were not available, we forecasted based on the prior year and two years, respectively.
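Stripped of the age and comorbidity adjustment, the forecasting rule amounts to comparing each year's observed network-wide mean dose with the average of up to three prior years. The sketch below uses hypothetical annual means and omits the covariate adjustment described above.

```python
def forecast_mean_dose(annual_mean, year, first_year=2004):
    """Forecast a year's mean dose as the average of up to 3 prior years.

    For years near the start of the series (2005 and 2006), fewer than
    three prior years are available, so the average uses one or two years.
    """
    prior_years = range(max(first_year, year - 3), year)
    prior = [annual_mean[y] for y in prior_years]
    return sum(prior) / len(prior)

# Hypothetical network-wide annual mean doses (Gy), for illustration only
annual_mean = {2004: 64.1, 2005: 64.4, 2006: 64.9, 2007: 65.6, 2008: 66.8}

# The 2008 forecast averages 2005-2007; an observed 2008 mean well above
# this forecast would flag a change in dosing practice.
forecast_2008 = forecast_mean_dose(annual_mean, 2008)
```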
Statistical significance was set a priori at 0.01 because multiple hypotheses were being tested. Missing values were uncommon and were excluded from statistical analyses.
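The Clopper-Pearson intervals around the annual utilization estimates can be computed from quantiles of the beta distribution. The sketch below assumes SciPy and, purely as an example, applies the method to the overall NODA proportion reported in the Results (499 of 1733 patients); the study computed intervals per year.

```python
from scipy.stats import beta

def clopper_pearson(successes, trials, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi

# e.g., 499 of 1733 NODA patients received dose-escalated radiation
lo, hi = clopper_pearson(499, 1733)
```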

Results
The positive predictive values for the RTOG 0617 eligibility algorithm were 90.2% (37/41 patients) and 91.2% (31/34) based on manual chart review at the first and second cancer registry programs, respectively. The most common reason for false positives within both validation sets was failure to meet RTOG 0617 performance status and pulmonary function test criteria. The positive predictive values of the algorithm with respect to total radiation dose were 97.5% (40/41) and 97.1% (33/34) at the first and second cancer registry programs, respectively.
When applied to the NODA dataset, the algorithm identified 1733 patients treated between 2004 and 2013, of whom 499 (29%) were treated with dose-escalated radiation (Fig. 1). Table 1 summarizes the characteristics of this cohort. Table 2 shows a comparison of the distribution of patient, disease and contextual variables of the NODA primary analytic cohort (excluding 2013 data) and the NCDB 2004-2012 NSCLCa cohort without radiation dose exclusions (Fig. 1). The NODA cohort had a greater proportion of metropolitan and non-academic hospitals and borderline differences with respect to age, race, T-stage and comorbidity scores.

Discussion
Our analyses show that the use of dose-escalated radiation (>70 Gy) in locally advanced NSCLCa increased in the U.S. between 2004 and 2010. This finding is consistent with our a priori hypothesis that "radiation dose creep" occurred in the absence of level 1 evidence, since no randomized comparisons of standard versus dose-escalated radiation were formally published over this period. Single-arm studies published in the 2000s suggested that dose-escalation of up to 74 Gy in 2 Gy fractions was safe and effective [10][11][12][13]. Questionnaire-based surveys show that the evidence was compelling enough to have already changed practice by the mid-2000s, particularly for physicians who considered themselves thoracic specialists [14]. The use of escalating radiation doses in this cohort likely reflects in part disappointing local control outcomes demonstrated with the standard of care. When outcomes are poor, providers may be more open to deviating from standards of care. Even after the results of RTOG 0617 were communicated, it is clear that the radiation oncology community did not fully re-embrace the 60 Gy treatment paradigm. While utilization of doses >70 Gy declined to its lowest levels in 2012, our data suggest that less than half of patients were treated using low-standard doses (≥59 Gy to <64 Gy) in 2012 and 2013. The perception that "more is better" appears to have persisted. Indeed, there is a growing interest in whether intermediate doses (64 Gy < dose < 74 Gy) might achieve the benefits of dose escalation without the excess costs [15]. Of course, the complex interplay of hypoxic [16], immune [17] and other in vivo responses to radiation and our limited understanding of the dose-response of normal tissues make the clinical effects of dose-escalation hard to predict and possibly counter-intuitive. We also observed that the use of dose-escalation decreased between 2010 and 2013.
The preliminary findings of RTOG 0617 were first presented at the annual meeting of ASTRO in 2011 [18]. The first peer-reviewed manuscript was published in 2015 [4]. It is remarkable that the rate of dose-escalation declined by 2012 to levels observed in 2004, based on abstracts alone. In some ways, the rapidity of response is reassuring, although it also indicates that practice shifted on the basis of preliminary data alone.
Finally, we demonstrated that an increasing average radiation dose in this cohort could have been detected by 2008 within the smaller NODA network. This result suggests that a national infrastructure for monitoring cohort-specific radiation dosing patterns already exists. Most centers in the U.S. now participate in the Commission on Cancer (CoC) credentialing program and so are already abstracting information on radiation dose. Thus, the CoC, SEER and many other programs that aggregate cancer registry data internationally could feasibly monitor dose to identify cohorts for which dose creep is occurring. In addition, our demonstration used a simple forecasting approach. More sophisticated analytical approaches could provide greater and timelier information [19].
This study has important limitations. First, neither the NODA nor the NCDB captures a statistically representative portion of the U.S. cancer population. While the NCDB captures approximately 70% of cancer cases, diagnoses from small and rural facilities may be underrepresented. Second, the absolute estimates for the utilization of dose-escalated therapy varied between the NODA and the NCDB analyses. In contrast, the NODA and the NCDB generated similar estimates for the utilization of low-standard dose over the same time period. This discrepancy may reflect differences in the characteristics of the institutions represented within each dataset and challenges that registrars have in interpreting radiation summaries of patients who have both a regional and a boost treatment, which is more common in patients receiving dose-escalated therapy. The NODA likely contains more accurate dose data, but it lacks the representativeness of the NCDB. Given that the NODA and NCDB have complementary strengths and weaknesses, the true absolute utilization rate of dose-escalation therapy over this period probably falls somewhere between the NODA and NCDB estimates. Third, our analyses do not adjust for radiation technique (e.g., 3D conformal vs IMRT). While the NCDB does have a field identifying "treatment modality", the options currently available to cancer registrars are not mutually exclusive. For example, a registrar can identify a treatment as 6 MV photons or IMRT, though often a treatment is both. Because of this issue, we chose not to include this variable in our analyses.

Conclusion
Patterns of radiation dosing in locally advanced NSCLCa changed in the U.S. from 2004 to 2013 in the absence of level 1 evidence. These changes could have been identified using a simple radiation dose monitoring approach that uses data already aggregated from most cancer centers in the U.S.