Introduction
Federal and Department of Defense (DoD) policies require DoD hospitals to follow the Centers for Disease Control and Prevention’s (CDC’s) Core Elements (CEs) of hospital antibiotic stewardship programs (ASPs). 1–3 The Core Elements include three structural (Hospital Leadership Commitment, Accountability, Pharmacy Expertise) and four procedural (Action, Tracking, Reporting, Education) elements. 1 Priority Core Elements released in 2022 highlight key focus areas. 4 Hospitals reporting into the National Healthcare Safety Network (NHSN) Patient Safety Component complete a required annual survey that includes an antibiotic stewardship practices section. 5,6 Each required question/variable in this section is mapped to at least one CE. 7 The survey undergoes regular revisions, notably in 2018 and in 2021 due to the 2019 CE guide update. 7 CDC methodology for measuring adherence dictates that any positive response within a CE indicates that the CE is met. 6 This methodology limits sensitivity, as illustrated by 97% of national hospitals meeting all seven CEs in 2022. 8 Current CDC methodology for measuring CE adherence is likely inadequate for guiding ASPs to improve patient outcomes, which was part of the rationale for this study.
Several clinical outcome measures can be assessed to approximate ASP performance. The Standardized Antimicrobial Administration Ratio (SAAR) measures antibiotic use through data reported into NHSN’s Antimicrobial Use (AU) Option of the AU and Resistance Module. The SAAR is risk-adjusted on facility and location characteristics, allowing comparisons between facilities and locations within a facility, but it does not measure appropriate use. 9–11 Incidence of resistant pathogens may be related to ASP performance; several studies, including a global systematic review and meta-analysis, have found significantly reduced incidence of antibiotic-resistant bacteria and Clostridioides difficile infections (CDI) after ASP implementation. 1,12,13
The aim of this study was to evaluate DoD ASPs as associated with key clinical outcome metrics: (1) AU based on the SAAR and (2) priority healthcare-associated infection/antibiotic resistance incidence based on selected resistant pathogens and CDI, and to determine how clinical outcome metrics relate to CE implementation, based on a novel adherence scoring approach. The hypothesis was that prioritized clinical outcome metrics would be associated with CE adherence across the Military Health System (MHS).
Methods
Data set building and Core Elements scoring
This was a retrospective, cross-sectional study of DoD hospitals in 2018 and 2021. These years were chosen to provide a snapshot of early adherence and of current status: 2018 followed soon after ASP requirements took effect, and 2021 was the most recent year with complete data. Additionally, 2019–2020 were not representative years since DoD CE adherence temporarily dropped. 14 Based on discussions with ASP leaders, this drop was likely related to COVID-19, since these data were reported while the military was providing substantial support to civilian COVID-19 efforts. Each year’s data set was built using NHSN annual hospital survey responses, SAAR data, and incidence data from the MHS Composite Health Care System (which, prior to the new electronic healthcare records system, was the primary record for documenting health information and history and a repository of laboratory, prescription, and test results). 15 The SAAR used was the by-facility, annual, adult SAAR for all locations from the Defense Health Agency’s (DHA’s) Center of Data Integration. Most incidence data were pulled from microbiology results using existing, NHSN-based algorithms (except difficult-to-treat resistant infections) from the DHA’s EpiData Center (Appendix 1).
Pathogen incidence outcomes examined included hospital-onset (1) CDI; (2) resistant “ESKAPEE” pathogens: Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, Enterobacter species, and Escherichia coli; (3) resistant extended-spectrum beta-lactamase-producing Escherichia coli (hereafter ESBLEc); and (4) difficult-to-treat resistant infections (hereafter DTR), based on modifications to the 2018 Kadri et al. definition. 16 Each numerator counted one case/person/organism/year. The denominator was the number of beneficiaries enrolled at each facility on 30 June of the respective year, since this is the standard denominator reported elsewhere by DoD for resistant pathogen incidence. Incidence was defined as cases per 10,000 enrolled beneficiaries.
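The incidence calculation described above reduces to simple arithmetic; a minimal sketch follows (the function name and figures are illustrative, not taken from the study data):

```python
def incidence_per_10000(cases: int, enrolled_beneficiaries: int) -> float:
    """Incidence rate: hospital-onset cases per 10,000 beneficiaries
    enrolled at the facility on 30 June of the analysis year."""
    return cases * 10_000 / enrolled_beneficiaries

# Hypothetical example: 6 hospital-onset cases at a facility with
# 24,000 enrolled beneficiaries gives 2.5 cases per 10,000.
rate = incidence_per_10000(6, 24_000)
```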
CE adherence is monitored using NHSN annual survey data. The corresponding CDC mapping guide was used to develop a novel CE scoring approach to create scores for each CE and overall. Only questions/variables used in the CDC’s methodology for dichotomous (yes/no) adherence were used in scoring to enable more direct comparison. If a variable appeared multiple times, points were assigned only for the CE with which the variable was best aligned, to avoid overweighting single variables. Each CE was scored from 0 to 1: points were divided proportionately among the variables used to measure adherence for that CE so that the maximum score was 1 (ie, the Reporting CE has four variables, but one is duplicated in Action and is scored there; the three scored variables translate to 0.3333 points possible for each variable and a maximum total of 1 point). An overall CE score was created by adding the individual CE scores and could range from 0 to 7 (Appendix 2). Due to revisions in the survey and variation in facility participation, results for each year are not entirely comparable. Priority CE adherence variables were added to the 2021 data set to dichotomously measure whether a facility met each Priority CE and how many Priority CEs it met. Several facility characteristics variables were included: categorical bed size (≤ 50 beds, 51–200 beds, and > 200 beds), whether the hospital was located in the United States, and the military service the hospital was affiliated with (Joint [more than one service], Army, Navy, or Air Force).
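A sketch of this scoring approach: each CE score is the proportion of its deduplicated survey variables answered “yes,” and the overall score is the sum across CEs. The CE-to-variable mapping and variable names below are illustrative placeholders, not the actual NHSN survey variables or counts.

```python
# Each CE maps to its deduplicated set of survey variables; a variable that
# appears under more than one CE is assigned only to the best-aligned CE.
# (Placeholder variable names; not the real NHSN survey items.)
CE_VARIABLES = {
    "Leadership": ["lead_q1", "lead_q2", "lead_q3"],
    "Accountability": ["acct_q1"],
    "Pharmacy Expertise": ["pharm_q1"],
    "Action": ["act_q1", "act_q2", "act_q3", "act_q4"],
    "Tracking": ["trk_q1", "trk_q2"],
    "Reporting": ["rpt_q1", "rpt_q2", "rpt_q3"],  # 4th variable scored under Action
    "Education": ["edu_q1", "edu_q2"],
}

def score_facility(responses: dict) -> dict:
    """Score each CE from 0 to 1 (proportion of 'yes' variables),
    then sum the seven CE scores into an overall 0-7 score."""
    scores = {}
    for ce, variables in CE_VARIABLES.items():
        yes = sum(1 for v in variables if responses.get(v, False))
        scores[ce] = yes / len(variables)  # each variable worth 1/n points
    scores["Overall"] = sum(scores[ce] for ce in CE_VARIABLES)
    return scores
```

Under this sketch, a facility answering “yes” to one of three scored Reporting variables receives a Reporting score of 0.3333, matching the proportional weighting described in the text.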
Facilities that transitioned to the new electronic health records system (EHR) in the year being analyzed or prior were removed due to a post-transition data transfer issue. If a SAAR was not reported, it was treated as missing, and the facility was dropped from that analysis. If a facility did not report an infection count for an incidence outcome, it was coded as zero cases and treated as such for the analysis.
Core Elements relationships with ASP outcomes
For both years, bivariate correlations between each outcome measure (SAAR; incidence of CDI, ESKAPEE, ESBLEc, and DTR) and each CE score were assessed using Spearman’s correlation (rho), due to the nonnormal distribution of CE scores, and scatter plots of the relationships were visualized. Bivariate linear regressions were then run for the SAAR and individual CE scores; robust variance estimates were used since visual relationships were not clear but showed some linear indications, and there was nonconstant variance. To address concerns about having a ratio outcome in the SAAR models, these 2021 linear models were re-run post hoc using antimicrobial days, the SAAR numerator, as the outcome. For incidence outcomes, negative binomial regression was used since Poisson distributional assumptions were not met due to overdispersion, and fit indices (AIC and BIC) showed no clear preference for zero-inflated negative binomial regression. Incidence rate ratios (IRRs) were the measure of effect, and robust variance estimates were used. Each regression model included a single CE score independent variable and a single outcome (SAAR or pathogen incidence) dependent variable. To assess Priority CE adherence, linear regression was conducted for the SAAR outcome and negative binomial regression for the incidence outcomes. ANOVA was used to analyze outcome differences based on how many Priority CEs were met.
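For reference, Spearman’s rho as used in the bivariate analyses is the Pearson correlation computed on rank-transformed data, which is why it tolerates the nonnormal CE score distributions. A minimal pure-Python sketch with average-rank tie handling is below; the study itself used Stata, so this is an illustration of the statistic, not the study code.

```python
def rank(values):
    """Assign 1-based ranks; tied values receive the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank across the tie block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```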
Adjusted negative binomial regression models were run for 2021 for CEs/Priority CEs to determine how each CE score or Priority CE adherence related to the incidence outcome while controlling for categorical bed size. Each model included two independent variables (CE score or Priority CE adherence and categorical bed size) and pathogen incidence as the outcome/dependent variable. The SAAR outcome was not included since it is already adjusted on variables related to facility size.
All α levels were set at 0.05, and Stata was used for analyses (Stata/IC 16.1 for Windows, StataCorp LLC, College Station, TX). This study was reviewed by the Uniformed Services University Human Research Protection Program and determined to be exempt.
Results
Descriptive statistics
Over half of the DoD’s hospitals had ≤ 50 beds, and most of the remaining facilities were 51–200 beds (Table 1). On average, the DoD’s hospitals are smaller than nationally-reporting NHSN hospitals, 17 comprising approximately 1.0% of the nationally-reporting hospitals. Nationally in 2022, 35.9% of facilities reporting to NHSN had an adult all antibacterial SAAR that was statistically significantly > 1.0; 18 2022 data were not available for DoD, but in 2021 the corresponding figure was 34.4% of facilities. Of facilities with data in 2021, 12 hospitals (36.4%) reported zero CDI cases and 18 (54.5%) reported zero ESBLEc cases. In 2018, 34 hospitals (73.9%) reported zero ESBLEc cases.
Table 1. Descriptive statistics for DoD hospitals and outcome data, 2018 and 2021

Abbreviations: CDI, Clostridioides difficile (C. difficile) infections; DoD, Department of Defense; DTR, difficult to treat resistant infections; ESBLEc, Resistant Extended-spectrum Beta-lactamase (ESBL)-producing Escherichia coli (E. coli) infections; ESKAPEE= Resistant infections caused by ESKAPEE pathogens: Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, Enterobacter species, and Escherichia coli; IQR, interquartile range (Q3-Q1); SAAR, Standardized Antimicrobial Administration Ratio; Q1, first quartile; Q2, second quartile; Q3, third quartile.
a Mean incidence rate is the mean number of cases per 10,000 enrolled beneficiaries for all hospitals included for that year and incidence outcome.
In 2018, five facilities were missing a SAAR, and in 2021 one facility was, due to lack of enrollment in the NHSN AU Option (all were ≤ 50 beds). Due to the EHR transition, in 2018, two of 48 hospitals were dropped from incidence and SAAR outcome models; in 2021, 14 of 47 hospitals were dropped. For 2018, all hospitals had the same CE scores for Accountability and Pharmacy Expertise, so these were not included in the analyses. In 2021, all facilities met the Tracking Priority Element, so this was not analyzed.
Core Elements scoring
In 2018, based on CDC methodology, 100% of DoD hospitals met the Leadership (hospital leadership commitment), Accountability, and Pharmacy Expertise CEs (Table 2). Slightly fewer, 97.9%, met the Action CE, followed by 95.8% for Tracking, 93.8% for Reporting, and 89.6% for Education; 85.4% met all CEs. In 2021, 100% met Action, Tracking, Reporting, and Education, while 97.9% met Leadership and Pharmacy Expertise, 95.7% met Accountability, and 95.7% met all CEs.
Table 2. Core Elements CDC adherence methodology and scoring methodology distributions in DoD hospitals

a See Appendix 2. Core Elements (CE) Survey Questions Scoring Guide (2018 & 2021) for scoring methodology details.
b Due to the scoring methodology not assigning points twice to a variable that appeared more than once (across more than one Core Element), some facilities could have a score of 0 but still be considered to have met the Core Element by CDC’s methodology.
c 2018: A scoring methodology score list is not available for Overall Core Elements score since there were 44 unique scores ranging from 2.76 to 6.52.
d 2021: A scoring methodology score list is not available for Overall Core Elements score since there were 45 unique scores ranging from 1.52 to 6.03.
CE scoring displays how broadly CDC-defined adherence is distributed (Figure 1). Leadership CE scores are spread out in both years and follow a mostly normal distribution, as do Action and Tracking. The Reporting and Education CE scores have somewhat left-skewed distributions for both years with more facilities having higher scores. Accountability and Pharmacy Expertise CE “scores” are dichotomous and effectively not different from CDC methodology.

Figure 1. Core Elements adherence and scoring distributions, 2018 (N = 48) & 2021 (N = 47).
Charts are not available for overall Core Elements score since there were 44 unique scores ranging from 2.76 to 6.52 in 2018 and 45 unique scores ranging from 1.52 to 6.03 in 2021.
Core Elements relationships with ASP outcomes
Spearman correlations and bivariate regressions with each CE score and outcome variable did not show many or consistent statistically significant relationships, and relationships were not visually obvious on scatter plots. However, both correlations and regressions often showed a positive relationship, with a higher CE score associated with a higher SAAR or incidence rate (Appendix 3). Post hoc analyses re-running the SAAR models with the antimicrobial days outcome maintained the direction of relationships and whether associations were statistically significant, with the exception of Accountability and Pharmacy Expertise changing significance. There was no statistical difference between facilities that met or did not meet the Priority CEs in 2021, but again a positive relationship was seen, with the group meeting Priority CEs having higher SAARs and incidence rates (Table 3). Reporting Priority Element adherence was an exception, being inversely related to the SAAR and to ESKAPEE and DTR incidences. For the negative binomial regressions, most relationships were again not statistically significant, but a positive relationship persisted, with an exception in most cases for the Reporting Priority CE. The ESBLEc relationships are not reported in the text due to the proportion of facilities with no cases.
Table 3. Priority Core Elements adherence bivariate relationships and regressions with outcome metrics, 2021 DoD hospitals (N = 47)

Abbreviations: ANOVA, analysis of variance; CDI, Clostridioides difficile (C. difficile) infections; DoD, Department of Defense; CI, confidence interval; DTR, difficult to treat resistant infections; ESBLEc, Resistant Extended-spectrum Beta-lactamase (ESBL)-producing Escherichia coli (E. coli) infections; ESKAPEE, Resistant infections caused by ESKAPEE pathogens: Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, Enterobacter species, and Escherichia coli; IRR, incidence rate ratio; SAAR, Standardized Antimicrobial Administration Ratio.
* Indicates statistical significance for that measure of effect (p < 0.05).
a The Tracking Priority Element was not included because all DoD facilities met this Priority Element.
b Each Priority/Core Element is modeled individually with each outcome (ie, for the Core Elements scores there are 8 models for each incidence outcome).
Adjusted regression
Adjusted regression models included CE scores or Priority CE adherence and categorical bed size as independent variables. These models showed an expected, always statistically significant positive relationship in which resistant pathogen incidence increased as bed size increased. The positive relationship between some CEs/Priority CEs and incidence rates remained even when adjusting for categorical bed size (Table 4). An important caveat for the structural CE results in the 2021 adjusted analyses is that only two of 47 hospitals did not meet Accountability and only one did not meet Pharmacy Expertise; these CE scores are also not true scores. Although largely not statistically significant, the adjusted models showed a consistent negative relationship between several procedural CEs and incidence rates. Action, Reporting, Education, and Overall CE scores were associated with lower CDI, ESKAPEE, and DTR incidence when adjusting for bed size. This was seen somewhat for adherence to the Action Priority CE and for the number of Priority CEs met with ESKAPEE and DTR incidence, and strongly for the Reporting Priority CE, where a statistically significant negative relationship was observed for CDI, ESKAPEE, and DTR incidence with IRRs of 0.199 (P = 0.002), 0.255 (P = 0.001), and 0.420 (P = 0.024), respectively. Thus, a one-point increase (adhering to all variables under the Reporting Priority CE compared to none) was associated with an 80.1%, 74.5%, and 58.0% decrease in CDI, ESKAPEE, and DTR cases, respectively.
Table 4. Core Elements scores and Priority Core Elements adherence models adjusted for categorical bed size, 2021 DoD hospitals (N = 47)

Abbreviations: CDI, Clostridioides difficile (C. difficile) infections; DoD, Department of Defense; CI, confidence interval; DTR, difficult to treat resistant infections; ESBLEc, Resistant Extended-spectrum Beta-lactamase (ESBL)-producing Escherichia coli (E. coli) infections; ESKAPEE, Resistant infections caused by ESKAPEE pathogens: Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, Enterobacter species, and Escherichia coli; IRR, incidence rate ratio; ND, not determined.
* Indicates statistical significance for that measure of effect (p < 0.05).
a Each Priority/Core Element is modeled with each outcome (ie, for the Core Elements scores there are 8 models for each incidence outcome).
b Each negative binomial regression model in this table is adjusted for categorical bed size. The categorical bed size independent variable is not shown but an increase in bed size was always associated with a statistically significant increase in the pathogen incidence outcome.
c The Tracking Priority Element was not included because all DoD facilities met this Priority Element.
d Unable to run model; convergence was not achieved.
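The conversion from an IRR to the percent change reported in the text is a one-line calculation; the sketch below uses the Reporting Priority CE estimates quoted above.

```python
def irr_to_percent_change(irr: float) -> float:
    """Percent change in expected case counts for a one-unit increase in the
    predictor, given an incidence rate ratio from a count model;
    an IRR below 1 corresponds to a decrease (negative percent change)."""
    return (irr - 1.0) * 100.0

# Reporting Priority CE IRRs for CDI, ESKAPEE, and DTR incidence, per Table 4:
changes = [irr_to_percent_change(irr) for irr in (0.199, 0.255, 0.420)]
# -80.1%, -74.5%, and -58.0%, ie, decreases of 80.1%, 74.5%, and 58.0%
```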
Discussion
This study provides an initial assessment of how CEs could be quantitatively assessed, which showed wide distributions of scores for CDC-defined adherent facilities. Scoring allowed for more granular assessment of adherence to the CDC’s recommended CEs while providing insight into the range of responses and where assistance may be most beneficial, such as Leadership, Action, and Tracking for the MHS, where the score distribution was more normal compared to other left-skewed CEs.
In adjusted analyses, the pattern of higher CE scores being associated with worse performance on some outcomes largely did not hold for the procedural CEs and sometimes did not hold for Leadership; these findings warrant further examination, particularly of the structural CEs and of other confounders. One study by Vaughn et al. found that organizational context influenced ASP performance and referenced other studies showing that context affected hospital quality improvement initiatives. 19 Residual confounding likely remains since relationships between AU/pathogen incidence and CEs are complex, affected by factors within the community and hospital, especially considering CDI and diagnostic stewardship, infection prevention and control programs, if/when there were interventions related to the studied outcomes, and patient volume and complexity differences between hospitals.
Adherence to the Reporting Priority CE in the bivariate regression was related to a lower SAAR, and in adjusted models it was statistically significantly associated with lower incidence of CDI, ESKAPEE, and DTR pathogens. This adds evidence for the MHS to more strongly recommend priority implementation of these variables: (1) at a minimum, annual prescriber-/unit-level reports that direct specific feedback to prescribers and describe how they can improve prescribing; and (2) ASP monitoring of adherence to facility “treatment recommendations for antibiotic selection for common clinical conditions.” 7 Overall, results also suggest a benefit to placing greater emphasis on procedural CEs.
A nationwide study showed a significant association between higher statewide adoption of CEs and lower CDI rates but found no association with hospital-associated methicillin-resistant Staphylococcus aureus (MRSA) bacteremia rates. 20 Another study found that an ASP meeting all CEs was more likely to have statistically significantly higher AU for two studied infection types. 21 In Nebraska and Iowa, CE adherence in critical access hospitals was graded as fully met, partially met, or deficient, and researchers found inconsistencies between NHSN-reported CE adherence and their own adherence evaluation. 22 Similar themes emerge in this study: there is a lack of consistent evidence relating implementation of the CEs to desired outcomes, raising questions about how well the CEs, as measured, capture ASP status. This emphasizes the need to continually assess CE adherence measures to ensure evidence-based recommendations are used to prioritize structures and processes in ASPs.
This study was limited by the relatively small number of DoD hospitals and the smaller subset with available data, along with nonnormal distributions and high variance, which may have limited study power. There is potential for misclassification where incidence rates were zero; there may truly have been no cases, or there could have been a data capture issue. However, most facilities with zero cases were in the smallest bed size category and therefore more likely to truly have no cases. Also, in the MHS, more complex patients are sometimes transferred to civilian care. Missing SAAR data were from a few hospitals in the smallest bed size category, echoing national trends of smaller facilities being slower on AU Option uptake, likely related to limited resources. 17 Results therefore may not fully represent the range of MHS ASPs in the smallest size category, and this missingness is most likely to skew CE scores high. Facilities dropped due to the randomized EHR transition were spread across size categories and were unlikely to be a source of bias. An additional limitation was that time was not robustly explored. Perhaps hospitals with worse outcomes recognized this and in response bolstered ASP efforts, which could account for some positive associations. Results may not be generalizable to other healthcare systems due to the unique nature of the MHS, particularly its smaller-than-national mean bed size. However, these methods could prove useful to other healthcare systems or at state and local levels to more quantitatively assess CE adherence and prioritize CE variables associated with better outcomes.
In the DoD, future research should investigate ASP performance in specific location types and by services offered within hospitals; analyses should be repeated with 2023 data now that the EHR transition data issue has been resolved. Other work should refine and validate the CE scoring methodology and analyze specific aspects of the CEs and common deficiencies to develop resources that facilitate uptake and inform policies. Methodology updates could include testing more variables to assess Accountability and Pharmacy Expertise or exploring score weighting and whether nonlinear models are more appropriate. Careful consideration is needed to select future outcome metrics. The SAAR should remain a priority since it is standardized, risk-adjusted, comparable between facilities, and likely to have less annual variation and to relate more directly to ASPs than incidence outcomes. DTR may be a useful ASP metric and could benefit from being better defined and standardized in national/high-level surveillance platforms. Other potential outcomes include the NHSN Standardized Infection Ratio (SIR) for MRSA bloodstream infections, the CDI SIR, the overall multidrug-resistant organism infection/colonization incidence density rate, and the new pathogen-specific SIRs and Standardized Resistant Infection Ratios (also risk-adjusted and comparable). 9,23 ASPs must systematically collect patient outcome data as important indicators of ASP performance. Metrics related to patient outcomes, such as mortality and readmissions, would be a beneficial category of ASP outcomes to include. A 2017 systematic review showed high-certainty evidence that ASP interventions reduce duration of AU and hospital stay without increasing risk of mortality. 24 Finally, these results should spur discussion about refining CEs and compliance measures to ensure ASPs are guided toward better public health and patient outcomes.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/ice.2025.33
Acknowledgments
The authors are grateful to Dr. Stephanie Giancola and the Defense Health Agency Antimicrobial Stewardship Program Committee for their feedback and support. The authors appreciate the contributions of Michelle LaCour, Suji Xie, and Nicholas Seliga in gathering DoD NHSN data, providing them to the principal investigator, and answering data inquiries. Thank you to Leigh Carson and Dr. Nicole Martin for administrative assistance.
Authorship and Manuscript Preparation
LeeAnne Lynch: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Visualization, Writing—original draft, Writing—review and editing.
Katrin Mende: Funding acquisition, Project administration, Supervision, Validation, Writing—review and editing.
Rana Hamdy: Conceptualization, Writing—review and editing.
Cara Olsen: Conceptualization, Methodology, Writing—review and editing.
Paige Waterman: Conceptualization, Writing—review and editing.
John Young: Conceptualization.
David Tribble: Conceptualization, Methodology, Project administration, Supervision, Writing—review and editing.
Financial support
Support for this work (IDCRP-139) was provided by the Infectious Disease Clinical Research Program (IDCRP), a Department of Defense program executed through the Uniformed Services University of the Health Sciences, Department of Preventive Medicine and Biostatistics through a cooperative agreement with The Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc. (HJF). This project has been funded by the National Institute of Allergy and Infectious Diseases, National Institutes of Health, under Inter-Agency Agreement Y1-AI-5072, the Defense Health Program, and US DoD, under award HU0001190002. Support in the form of salaries was provided by HJF for LeeAnne Lynch and Katrin Mende; HJF did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.
Competing interests
All authors report no conflicts of interest relevant to this article.
Disclaimer
The views expressed are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences, Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., National Institutes of Health and Department of Health and Human Services, the Defense Health Agency, the Departments of the Army, Navy, or Air Force, the Department of Defense, or the US Government. Mention of trade names, commercial products, or organizations does not imply endorsement by the US Government.
Research Transparency and Reproducibility
Joint analysis, following data-sharing agreements, can be undertaken to assure reproducibility by request.