
Translating implementation science principles and methods to front-line clinicians: The Implementation Science Scholars Program

Published online by Cambridge University Press: 26 June 2025

Geoffrey M. Curran*
Affiliation:
Center for Implementation Research, Department of Pharmacy Practice, University of Arkansas for Medical Sciences, Little Rock, AR, USA
Sara J. Landes
Affiliation:
Department of Psychiatry, University of Arkansas for Medical Sciences, Little Rock, AR, USA Central Arkansas Veterans Healthcare System, Little Rock, AR, USA
Taren Massey-Swindle
Affiliation:
Department of Pediatrics, University of Arkansas for Medical Sciences, Little Rock, AR, USA Arkansas Children’s Nutrition Center, Little Rock, AR, USA Arkansas Children’s Research Institute, Little Rock, AR, USA
Benjamin S. Teeter
Affiliation:
Center for Implementation Research, Department of Pharmacy Practice, University of Arkansas for Medical Sciences, Little Rock, AR, USA
Cynthia L. Mosley
Affiliation:
Center for Implementation Research, Department of Pharmacy Practice, University of Arkansas for Medical Sciences, Little Rock, AR, USA
Jennifer Naylor
Affiliation:
Center for Implementation Research, Department of Pharmacy Practice, University of Arkansas for Medical Sciences, Little Rock, AR, USA
Laura P. James
Affiliation:
Department of Pediatrics, University of Arkansas for Medical Sciences, Little Rock, AR, USA Translational Research Institute, University of Arkansas for Medical Sciences, Little Rock, AR, USA
Corresponding author: G.M. Curran; Email: currangeoffreym@uams.edu

Abstract

This article describes the Implementation Science (IS) Scholars Program at the University of Arkansas for Medical Sciences (UAMS). The program’s goal is to translate knowledge, approaches, and methods from IS to front-line clinicians in an academic medical center, thereby supporting its goals as a learning health system and promoting a dynamic workforce of IS-informed change leaders. Initiated in 2020, the program is unusual in that it translates concepts and knowledge from IS to clinicians to improve their skills as implementers and change agents. The program is supported by the Translational Research Institute, UAMS’s awardee of the Clinical and Translational Science Award Program. The two-year program provides 20% salary coverage, bespoke didactics, and close mentoring on a Scholar-initiated project to improve care in the Scholar’s clinical context. The program has trained four cohorts of Scholars over its initial five years. We describe the program, our evaluation of it thus far, and future plans. The program has contributed to numerous healthcare improvements and has served as a gateway to future implementation and other research activities among some Scholars.

Information

Type
Special Communication
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited and no modifications or adaptations are made.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the Association for Clinical and Translational Science

Introduction

Implementation science (IS) is “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practice into routine practice and, hence, to improve the quality and effectiveness of health services” [Reference Eccles and Mittman1]. As such, and largely by definition [Reference Leppin, Mahoney and Stevens2], IS exemplifies the goals of both translation [Reference Austin3] (turning research observations into interventions that improve health) and translational science [Reference Faupel-Badger, Vogel, Austin and Rutter4] (creating generalizable solutions for barriers to translation). In the past decade, funders of research in the US and elsewhere have placed increased emphasis on implementation research. Most institutes at the National Institutes of Health (NIH) support enduring Program Announcements focused on implementation research, and many periodically release IS-themed Requests for Applications and Notices of Special Emphasis. Likewise, other major US research funders, such as the Patient-Centered Outcomes Research Institute and the Department of Veterans Affairs (VA), have standing calls for implementation research. Internationally, a similar pattern has emerged among government-funded research agencies in Canada, Australia, the United Kingdom, and many European countries.

With the growing demand for research focused on implementation of evidence-based practices came a parallel increase in demand for training in IS [Reference Davis and D’Lima5,Reference Huebschmann, Johnston and Davis6]. The IS “field” is fairly new and has been built on the foundations of many academic disciplines and applied fields of study, for example, sociology, psychology, education, public health, health services, marketing, and social work [Reference Dearing, Kee and Peng7]. Only recently have academic degree programs with an IS emphasis been developed. Hence, most capacity building programs have been developed by academic and healthcare institutions “from scratch,” with little coordination between them and relatively few guiding principles, theories of action, or competencies. Many were largely created to meet “local” needs, whether that was supporting implementation of clinical guidelines in a specific healthcare system (e.g., in the VA) or supporting research capacity in specific emphasis areas and/or funded by individual NIH institutes (e.g., the Implementation Research Institute [IRI] [Reference Proctor, Landsverk and Baumann8] funded by the National Institute of Mental Health). As we detail below, our program, too, was directed at the local needs of a growing healthcare system.

The last five years have seen tremendous growth in reviews and evaluations of capacity building programs and in published guidance on capacity building efforts. Existing capacity building initiatives train implementation researchers, implementation practitioners, program evaluators, quality improvement (QI) personnel, educators, or a combination of these [Reference Davis and D’Lima5,Reference Huebschmann, Johnston and Davis6]. Recent literature reviews on these programs indicate that the main deliverers are academic institutions [Reference Viglione, Stadnick and Birenbaum9], and the main recipients (learners) are researchers and/or those in graduate research training programs [Reference Davis and D’Lima5,Reference Chambers, Proctor, Brownson and Straus10]. Many authors in this space recognize a need to increase capacity building among practitioners/implementers, health care officials, and policy makers [Reference Davis and D’Lima5,Reference Huebschmann, Johnston and Davis6,Reference Chambers and Emmons11,Reference Leppin, Baumann and Fernandez12]. Chambers et al. [Reference Chambers, Proctor, Brownson and Straus10] offered a typology capturing the diverse range of training opportunities – degree programs, short courses (one or multiple days), training institutes (months or years), workshops/conferences, panel sessions, webinars/seminars/lectures, self-directed learning (online courses, videos), and publications.

A recent systematic review of published articles on capacity building programs [Reference Davis and D’Lima5] identified 41 distinct IS capacity building initiatives (2006–2019). Programs ranged from short courses to training institutes (often with mentoring) to components of academic programs (certificates, degrees). A more recent systematic review [Reference Viglione, Stadnick and Birenbaum9] expanded the scope to include as many programs as could be identified through an internet presence. Their search spanned 2020–2022 and found 165 programs meeting their inclusion criteria – those offering at least one capacity building activity beyond educational coursework or training alone. A significant majority (68%) were located in the US, and over half were embedded within a Clinical and Translational Science Award (CTSA) Program [Reference Leshner, Terry and Schultz13]. Based on surveys of program representatives (55% of identified programs), most use multiple capacity building activities, the most popular being training/education (79%), mentoring (67%), consultation (67%), provision of IS resources/tools (66%), networking (62%), technical assistance (52%), and grant development support (52%).

An important issue at the heart of IS capacity building is competencies, that is, what learners are expected to learn and why. The first published set of competencies for implementation research training programs came from Padek et al. [Reference Padek, Colditz and Dobbins14] in 2015, based on a deliberative expert consensus process. They settled on 43 competencies in four categories: background and rationale (e.g., identifying implementation gaps), theory and approaches (e.g., identifying and applying frameworks), design and analysis (e.g., common designs in applied IS), and practice-based considerations (e.g., considering multiple perspectives). They also classified each competency as beginner, intermediate, or advanced. More attention in recent years has been paid to IS practice competencies [Reference Davis, Sevdalis and Baumann15], and at least one set of implementation practice competencies has been published [Reference Metz, Louison, Burke, Albers and Ward16]. A study by Schultes et al. [Reference Schultes, Aijaz, Klug and Fixsen25] generated a “competence profile” for implementation practice and implementation research. Based on interviews with 82 international implementation experts, the profile contained highly overlapping “knowledge & skills” areas for implementation research and practice competency – for example, the setting, collaboration, communication, program evaluation, and research methodology.

To assist those designing and studying IS capacity building, it is important for individual programs to disseminate information on their goals, structure, and performance. This article describes the University of Arkansas for Medical Sciences’ (UAMS) IS Scholars Program. The program’s goal is to translate knowledge, approaches, and methods from IS to front-line clinicians in an academic medical center, thereby supporting its goals as a learning health system and promoting a dynamic workforce of IS-informed change agents. We have trained four cohorts of Scholars over the program’s initial five years. Herein, we describe the program, evaluation of it thus far, and plans for its future.

Program context: UAMS and the center for implementation research

UAMS is Arkansas’ only academic medical center. UAMS’s clinical affiliates include Arkansas Children’s Hospital and the Central Arkansas Veterans Healthcare System [17]. The UAMS Health system includes regional campuses across the state that were developed to address the state’s shortage and uneven distribution of primary care providers and provide medical training in underserved communities [18]. UAMS’s Translational Research Institute (TRI) was established in 2009 with CTSA funding. Consistent with the mission of CTSA awards [19], the TRI provides a variety of resources and services to researchers (e.g., consultations, training, networking opportunities, study subject recruitment, assistance with administrative and compliance processes, and funding opportunities) [20].

In 2014, with seed funding from the Colleges of Pharmacy and Medicine, UAMS established the Center for Implementation Research (CIR) with two complementary aims: 1) build IS capacity to grow a portfolio of implementation research, and 2) support implementation of evidence-based practices within UAMS’s statewide healthcare system and other systems within the state of Arkansas. In support of its first aim, the CIR helped establish a small cadre of IS mentors who supported grant writing and project development for trainees coming from a variety of graduate programs, postdoctoral fellowships, and junior faculty training programs, as well as a growing pool of health services-minded researchers with interests in IS. In support of its second aim, CIR faculty partnered with UAMS, VA, and other health systems’ clinical/quality improvement initiatives to consult on practice transformation and participate in QI efforts and clinical program implementation (and evaluation). In 2018, the CIR partnered with UAMS’s TRI and expanded its IS capacity building efforts to include a Graduate Certificate Program in IS, pilot awards in IS, support of 2-year KL2/K12 awardees with a focus in IS, and a Visiting Scholars Program.

CIR’s partnership with the TRI provided an opportunity to improve healthcare for Arkansans (Aim 2) by training clinical faculty in principles and methods of IS to support them in improving care in their clinical settings. To do so, the CIR developed the UAMS IS Scholars Program, initiated in 2020.

Program description: UAMS Implementation Science Scholars Program

Overview

The program trains clinical faculty with a professional degree (M.D., Ph.D., Pharm.D., D.N.P., D.O., etc.) who are interested in learning how to implement evidence-based practices (and/or de-implement non-evidence-based practices or low-value care). The program is open to clinical faculty who provide care at UAMS, Arkansas Children’s Hospital, and/or the Central Arkansas Veterans Healthcare System. The two-year program includes didactics, group and individual mentoring, and completion of an IS-informed project. Our pedagogy is primarily guided by experiential learning [Reference Morris21], emphasizing learning through direct (mentored) experience, reflection, and iterative “re-doing.” We chose this pedagogy for three complementary reasons: 1) it is positively associated with student motivation and engagement [Reference Kong22], 2) it is familiar to clinicians (given its widespread use in medical training) [Reference Maudsley and Strivens23], and 3) it allowed our capacity building program to directly contribute to improved implementation of evidence-based practices within our own healthcare system. The program is co-led by a senior (GMC) and a mid-career (SJL) implementation researcher. Scholars are required to dedicate 20% effort to the program, with the TRI providing 20% salary support for Scholars (up to the NIH annual salary cap). The TRI also provides salary support (5–10%) for three mentors (GMC, SJL, TMS), a local program evaluator (BST), and a program administrator (JN, CLM).

Application and selection processes

The application process includes submission of a project proposal, a curriculum vitae, and at least one letter from a supervisor (e.g., Department Chair, Division Lead) detailing support for the candidate and the proposed project and affirming the 20% effort commitment. Candidates are asked to describe: 1) a quality or implementation gap to be addressed, 2) potential implementation strategies to be deployed, and 3) a summary of their background, interests, previous experience with practice change, and plans for applying the knowledge gained after the program. In the “request for applications” (RFA) and application information sessions, candidates are encouraged to report any preliminary data they have indicating their clinical unit’s current performance on the practice(s) of interest as well as potential barriers to improved implementation. If no such data exist yet, they are encouraged to discuss their plans/needs for using or developing outcome measures to assess current performance. They are not asked to articulate a research question, study design, or evaluation approach (these topics are covered in program didactics and mentoring sessions). Candidates are asked to describe how their proposed projects will assess/address rural and/or other underserved populations. A detailed RFA is provided, and two information sessions are held in the 2–3 months before applications are due.

Candidates are selected by a panel of two implementation scientists (GMC, SJL) and two quality leaders from the UAMS Health system and Arkansas Children’s Hospital. The review follows NIH grant review procedures. Each application is reviewed by two committee members (one implementation researcher and one quality leader), who assign an overall impact score and rate four criteria using the NIH grant scoring scale from 1 (exceptional) to 9 (poor) [24]. The four criteria are significance, priority focus on rural and underserved populations, investigator, and approach. Given that addressing the health issues of rural and other underserved populations is an explicit goal of the TRI, priority is given to candidates whose projects have an explicit focus in this area.

Program didactics

The didactic component of the program includes 10 sessions in the first year and 5 in the second year, with each year’s didactics spanning 12 hours of “classroom” time. See Table 1 for topics. While we drew from existing graduate coursework offered by CIR faculty, we focused on topics we felt would be important to support the clinicians’ change efforts, for example, understanding their clinical contexts, working with a range of partners to co-create an implementation plan, developing and deploying implementation strategies, evaluating their performance, and sharing results. In addition, we explicitly linked IS topics to QI (with which many of our learners were already familiar) and focused on using our electronic medical record (EMR) to support both their intervention and evaluation approaches. As we were not explicitly training the learners to become implementation scientists, we chose a “short course” format that delivered the training they needed in short “bursts,” allowing them to spend the majority of their time on their mentored project experience.

Table 1. Didactic topics by year

Program competencies

Initially, we grounded the coursework and overall program in selected IS competencies from Padek et al. [Reference Padek, Colditz and Dobbins14]. Since the program launch in 2020, additional competencies covering both implementation researchers and practitioners have been published, and these have influenced course and program revisions (in addition to our own evaluations, described below). Table 2 denotes the relevant competencies addressed by our program.

Table 2. Implementation science (IS) competencies addressed by IS Scholars Program grouped by paper specifying those competencies

Mentoring

Mentor matching and initial discussions occur before the program begins. One senior (GMC) and two mid-career (SJL, TMS) implementation scientists serve as mentors. Together, they designate mentor-Scholar pairings based on relevant expertise, shared interests, and capacity. Mentoring begins immediately after the initial short course (at the end of the 2nd month of the program). Group mentoring occurs monthly, with all Scholars within a cohort meeting together with all mentors. During the first year of the program, additional experts join topic-driven mentoring sessions, for example, the chief quality and informatics officers when discussing strategy development. Each group mentoring session begins with project updates from Scholars, followed by collective problem-solving and brainstorming. Scholars are encouraged to engage with each other during and outside these sessions for support and sharing of expertise. Individual mentoring occurs twice per month and focuses on the Scholar’s project, including refining the project plan, conducting formative evaluation, linking to needed expertise or authority (e.g., EMR, medical media, statistical analysis, IRB), selecting and designing implementation strategies with partners, analyzing outcomes, and publishing results.

Scholar projects

Scholar projects address a quality or implementation gap in the Scholar’s clinical area. Frequently, projects address under-implementation of evidence-based practices, either “stand-alone” practices or components of a clinical practice guideline. Numerous de-implementation projects have been conducted as well. All projects develop and deploy implementation strategies to improve practice. Table 3 depicts characteristics of all Scholar projects initiated to date. By design, the first year of each project focuses on understanding implementation determinants; identifying (and, if needed, creating) outcome measures; identifying and engaging with constituents, collaborators, and supporters; and selecting and developing implementation strategies. The second year focuses on completing any remaining year 1 tasks, deploying strategies, refining them based on outcomes and feedback, assessing outcomes, and preparing dissemination products (e.g., slides for presentations, abstracts for submission, outlines and/or sections of manuscripts). At the close of each program year, Scholars participate in an annual IS Scholars Symposium, where they provide interim (year 1) or final (year 2) reports on their projects.

Table 3. Details of each scholar’s project

Program evaluation

Thus far in the program, we have used a combination of internal and external evaluation approaches to measure outcomes and support improvements to the program. Our programmatic outcomes were selected based on the program’s objectives: 1) impart selected IS competencies from Padek et al. [Reference Padek, Colditz and Dobbins14] through didactics and a mentored project experience, 2) support the Scholars’ capacity to complete IS-informed projects, 3) support the Scholars’ capacity to disseminate findings from IS-informed projects, and 4) improve clinical care within the UAMS Health system. We were not guided by a specific evaluation framework or theory per se; however, we knew we wanted to measure a range of outcomes over time:

  • Short term (during the program): competency attainment and Scholar feedback on didactics, mentoring, and barriers/facilitators to participation;

  • Intermediate term (within 2–3 years post program completion): project completion, barriers/facilitators to completing the project, academic products, and clinical impacts;

  • Longer term (3–5 years and beyond): continued use of the training on projects and/or other QI efforts, academic products, and any more formal research activities.

In addition, we wanted to closely and continually evaluate program feasibility and Scholar satisfaction, and to solicit recommendations to improve the program.

Our internal evaluation has used surveys and qualitative interviews. Our external evaluation has involved an annually invited evaluator who reviews program materials, attends the symposium, reviews internal evaluation results, and provides a narrative evaluation in the form of a letter to the program director. Below, we summarize the processes and findings of the evaluation to date.

Internal evaluation process

At the end of year 1, Scholars participated in a survey and a qualitative interview. Survey topics focused on their perspectives on year 1 didactics and competencies gained. The main objective of the qualitative interview was to support improvement of the program; it covered a wide range of topics, e.g., overall program structure and content, didactics, mentoring, how their time was spent, expectations when entering the program, barriers to participation and/or conducting their projects, and recommendations to improve the program. Some of the topics served as triangulation and/or extension of survey items (e.g., content and structure of the didactics), but the majority of the interview covered program feasibility and acceptability. At the end of year 2, Scholars participated in a second qualitative interview covering similar topics and some new ones: their intentions for future QI and/or implementation research, their perceived support from leadership for future efforts, program sustainment, and focused questions on informatics and IT support for their projects (a recurring issue). Also at the end of year 2, Scholars began to complete a yearly tracking survey on their activities, e.g., continued work on their project, new projects initiated, academic products completed, trainings initiated/completed, and any research undertaken. In years 3 and 4 of the program, a group mentoring session was used to collect qualitative data on common barriers to engaging with the program and to explore potential solutions.

Participation in the surveys and qualitative interviews was voluntary, and no incentives were given. Surveys were completed via REDCap. Our program evaluator (BST), an implementation researcher not directly involved with program delivery, conducted the qualitative interviews in person or by phone/televideo. Descriptive analyses of survey responses were conducted using SPSS (version 28.0, IBM Corp, Armonk, NY). Qualitative interviews were coded by the program evaluator using a topic-driven template based on the interview topics and then summarized into “key themes” (e.g., barriers/facilitators to program engagement/completion) and “recommendations for program improvement.”
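To make the survey summarization step concrete, a minimal sketch follows in Python with pandas (an illustration only; the analyses reported here used SPSS, and the item names and responses below are hypothetical). The one substantive detail it demonstrates is recoding 0 (“Not Applicable,” per the note under Table 4) as missing before computing descriptive statistics, so that NA responses do not distort item means.

```python
import pandas as pd

# Hypothetical responses on a 1 (Strongly Disagree) to 5 (Strongly Agree) scale;
# 0 = Not Applicable. Item names are illustrative, not the actual survey items.
responses = pd.DataFrame({
    "didactics_met_needs":      [5, 4, 4, 5, 0, 3],
    "competent_assess_context": [4, 4, 5, 3, 4, 4],
    "mentoring_helpful":        [5, 5, 4, 0, 5, 4],
})

# Recode 0 ("Not Applicable") as missing so it is excluded from means/SDs.
likert = responses.replace(0, pd.NA).astype("Float64")

# Per-item descriptive statistics; missing values are skipped by default.
summary = likert.agg(["count", "mean", "std"]).T
print(summary.round(2))
```

Any statistics package handles this the same way; the key step is the explicit missing-data recode before summarizing.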

External evaluation process

In each of the first four years, program leaders (GMC, SJL) invited a nationally recognized clinician-scholar in IS to serve as an external evaluator. Candidates were invited from among the listed faculty of prominent national IS training programs and/or the leadership of CTSA-supported IS programs. Evaluators were given an orientation to the program (a 1-hour call with GMC) and the following materials to review: the RFA, a slide deck from an information session, five funded applications, course syllabi, and a summary of survey data collected by the internal evaluator. The evaluator virtually attended the IS Scholars Symposium, during which they provided comments on each project and an overall reaction to the projects as a whole. Finally, the evaluator provided a narrative summary letter with their overall review. Each year, program leadership, including the principal investigator of the TRI (LJ), assessed the internal and external evaluations and revised the program (as described below).

Program evaluation results

In the first four years of the program, 39 clinical faculty applied and 20 were accepted as IS Scholars. We designated 5 “slots” per year based on availability of funding and an estimate of our mentoring capacity. In year 1, we had 14 applicants; in subsequent years, we averaged seven. Most applicants and accepted Scholars were physicians (90%). One IS Scholar from the 3rd cohort left the program after 6 months to pursue a 2-year K12 Scholars Program, so 19 IS Scholars have completed the program across the first 4 cohorts (the last cohort completing in December 2024). Forty percent of the selected Scholars came from the Department of Pediatrics, the largest department at UAMS and one with a strong culture of QI and scholarship. See Table 3 for details of each funded Scholar’s project, including their clinical setting, the gap in care they addressed, the proposed intervention (i.e., to implement or de-implement), and the implementation strategies used.

Table 4 summarizes responses from the end-of-year-1 survey, which 17 of 19 Scholars completed. In general, the Scholars rated the year 1 didactics highly, and the “dose” and duration of the sessions largely matched their preferences. Further, Scholars indicated that they felt knowledgeable and competent to perform key tasks associated with their projects, i.e., assessing implementation context, connecting/partnering with relevant colleagues, and considering and evaluating implementation strategies. They were mixed on whether attending IS activities outside their own program was helpful; however, they generally reported that being part of the larger IS community at UAMS was important to them.

Table 4. Descriptive statistics for each survey item

* Items were answered on a 1 (Strongly Disagree) to 5 (Strongly Agree) scale; 0 = Not Applicable.

Seventeen (of 19) Scholars also completed the qualitative interview after their first year, and 12 (of 14) completed the end-of-year-2 interview. Interviews lasted 30–60 minutes. Here, we summarize common emergent themes under the categories of barriers and facilitators to program engagement/completion and recommendations for program improvement. Below, we indicate program revisions that reflect these themes and recommendations.

Barriers to engagement/completion

  • The 20% protected time was not experienced consistently. Periodic staffing shortages and other clinical challenges (e.g., COVID-19) impacted most Scholars’ schedules at some point during their projects. Most reported that “clinical work comes first,” leading many Scholars to resort to evening and weekend hours to complete their project work.

  • Clinical schedules made it difficult to arrange didactic and mentoring sessions. Many sessions were held in late afternoons or evenings, which many Scholars found less than ideal. Scholars working in inpatient settings missed more scheduled sessions due to unpredictable clinical needs.

  • Two years was not enough time to complete most projects. In addition to problematic clinical schedules, timelines for building EMR-based tools were frequently lengthy, delaying their deployment. Some Scholars reported reducing the scope of their projects to save time.

  • Unforeseen contextual barriers (e.g., change in leadership, revised/new UAMS guidelines) caused shifts in plans/goals of projects and/or caused delays.

Facilitators to engagement/completion

  • Scholars reported that project mentors were a key strength of the program, noting that the mentors’ knowledge, flexibility, and commitment were critical to “staying on course” and completing projects. Many reported that the mentors could “open doors” to influential leadership support and help address numerous barriers.

  • TRI-provided resources were helpful, for example, access to informatics consultation and tool-building services and to statistical consultation and analysis assistance. The 20% protected time was a “draw” for candidates and necessary for them to be able to participate in the program.

  • Many Scholars reported that learning to focus on contextual determinants and integrating stakeholder perspectives were invaluable competencies that often “made the difference” in creating implementations that worked.

  • Many Scholars with successful outcomes noted that vocal and active support from clinical leaders was critical. Some leaders established expectations to meet the goals of the projects and promoted use of implementation strategies.

Recommendations to improve the program

  • Many recommendations dealt with the didactic sessions: make them 1 hour long and more frequent (as opposed to fewer but longer blocks), release all required readings at the program start, remove the didactic session on QI approaches, cover more about implementation strategies (especially EMR tools and other decision aids), provide more didactics on qualitative interviewing and analysis, and start the second course at the beginning of year 2 (it had been given later in the year).

  • Start conversations with clinical informatics personnel sooner, have them present in year 1 didactics, and do early consults.

  • Spend more time with mentors on paper-writing, and provide more published examples on which to model manuscripts.

  • Allow 6+ months from selection to program initiation to adjust clinical schedules and increase planning time. Changing the start date for salary coverage to July 1 each year (aligning with the UAMS fiscal year) will assist with budget and clinical scheduling changes.

External evaluator findings

Common themes emerged over the years with respect to both program strengths and opportunities for improvement. The curriculum, didactic learning, and selected readings were universally identified as strengths of our program. Methodologically, the evaluators reported that the program excelled at the use of rapid qualitative methods and of appropriate theories, models, and frameworks; perhaps most importantly, the success of projects appeared to be due in part to preimplementation assessments that allowed for adequate preparation and planning. Additional common themes among evaluators included the noticeable sense of community of practice and the enthusiasm among the Scholars. Through the program, Scholars gained an understanding of the relevance of IS to their practices, as evidenced in their presentations. All external evaluators commented on the transferable nature of the program and how it might be used by other institutions interested in IS capacity building and in supporting learning health system goals.

Evaluators were also asked to suggest opportunities for improvement in the program. Common themes included: 1) some projects were too ambitious and should be “dialed back” during the first year if needed, 2) Scholars should put more emphasis in their presentations on IS, not just the clinical aspects of their projects, 3) all projects should formulate plans for small tests of change (Plan-Do-Study-Act [PDSA] cycles), and 4) projects should also develop strategies for sustaining project outcomes after the program. External evaluators further identified the need for a more structured approach to project timelines that might allow more Scholars to complete their projects during the funding period.

Discussion

When we started the program, we had little guidance on competencies for clinician-scholars who want to increase their implementation knowledge and skills but are not explicitly en route to becoming implementation researchers. We reviewed the available competencies and selected those we felt applied to our learner population – that is, understanding implementation determinants, applying a determinant framework, creating partnerships, co-designing implementation strategies, building and deploying strategies, and evaluating implementation progress. We settled on these areas and under-emphasized others more tailored to research (e.g., complex designs, mechanisms of action).

In addition, we knew we would need to continually iterate the program based on yearly, multi-method evaluations. Revisions thus far have been directed at three common “targets” – increasing feasibility, increasing structure, and improving skills building/competency attainment. We note here the most substantial changes made:

  • Revised year 1 course topic list, structure, and guest speakers (multiple times); revised year 2 course topic list, structure, and timing (multiple times)

  • Added Associate Chief Clinical Informaticist to paid faculty (2.5% effort) to support didactics, provide consultation, and facilitate tool building; increased frequency of individual mentoring sessions

  • Initiated and then ended a peer mentoring element (it was recommended but not feasible)

  • Increased the focus on equitable implementation (in strategy development and evaluation) and addressing rural/underserved populations (added a scored element on this during application review)

  • Changed start date of program and increased amount of time from being selected to starting the program

  • Created multiple preparation sessions for Scholars (preaward) to assist with scheduling, discuss potential informatics needs, begin partnering conversations, and prepare for context assessment

  • Created a range of new “guidance documents” (e.g., sample timelines, expectations guide) to add structure and consistency.

The last three revisions were added recently and are being used for the first time with cohort 5 (starting the program in July 2025). Program faculty and staff devoted a portion of their time in 2024–25 to processing the evaluation data and devising these structural changes, and feedback on the changes was solicited from cohort 4 Scholars.

While we have improved the program’s feasibility, structure, and competency attainment over time, we continue to experience challenges in meeting program goals and maximizing engagement. As indicated in our qualitative analysis above, we have encountered a number of barriers around project completion and manuscript generation. A majority of our Scholars needed additional time after the 2 years to complete data collection and analysis. As our external reviews in particular noted, some of our Scholars’ projects were ambitious from the start and unlikely to be completed in the allotted time. Others experienced delays in implementation strategy development (most common) and/or outcome measure creation. Still others experienced variable leadership support and/or endured local clinical/policy changes that impacted their project plans. We expect that our recent revisions to program structure and timing will help address some of these challenges – mostly by starting earlier and engaging leaders and other partners earlier and more often, but also by being more mindful in mentoring to promote better project focus.

Perhaps our largest challenge has been paper productivity. While the majority of our Scholars have presented their work at conferences either during or after the two-year program, a minority have published from their projects (33% of those through cohort 3, with an additional 27% having papers currently under review or in development). Part of the problem is that projects finish after the two-year program period (when the protected time expires), but another part is the amount of training and mentoring support needed to produce the papers. Our Scholars report that these manuscripts are unlike others they have produced; hence, they need substantial support. The project mentors have continued to support Scholars for 1–2 years after program completion to help with manuscripts, but this has strained their capacity. In addition, a number of Scholars have simultaneously held one or more clinical leadership roles, which they reported were barriers to their overall program engagement, especially after program completion when trying to disseminate their projects. When the next cohort commences (July 2025), we hope that the recent revisions developed to better coordinate and focus the program will improve this situation.

Importantly, most of the Scholars were able to demonstrably improve care and reduce the implementation gaps their projects targeted. For example, unnecessary blood draws were reduced by 20% in the pediatric NICU; opioid prescribing was significantly reduced in two clinical contexts, with one project (adult ICU) eliminating the use of high-dose opioids; antimicrobial stewardship guideline concordance was substantially improved at Arkansas Children’s Hospital; the ICU Liberation bundle (to reduce time spent on a ventilator) was more fully implemented in the ICU; and when statewide newborn screening identified babies with Spinal Muscular Atrophy, curative medication was administered within days [Reference Robinson26]. Further, our evaluation of graduated Scholars finds that a majority have extended their projects beyond their initial stated goals and/or initiated a new improvement effort after program support ended. And while not a goal of the program, a sizable minority of Scholars (40%) through cohort 3 were funded to conduct additional implementation research (two K12 awards and two pilot awards from our CTSA, one VA Merit Award, and one nursing foundation award). In addition, two Scholars subsequently entered the Master’s in Clinical and Translational Sciences Program at UAMS.

Presenting the challenge of paper productivity alongside the outcomes of practice change highlights the balance of practice change versus research output in our program. By design, our program emphasizes practice change first (via applying IS principles and methods) and academic output second. This is perhaps unusual for a program supported primarily by a research infrastructure program (the CTSA). However, CTSAs have pursued an explicit goal of promoting practice gains via translation of knowledge and findings to the clinical enterprises affiliated with their institutions, and indeed, this is the primary goal of our program. Our goal is to train clinicians in IS to support practice change, not to turn them into researchers. We feel this is aligned with the ultimate goal of IS: to change practice. Of note, given that this goal differs from those of other training programs (and we are also researchers), it has been challenging to communicate this nuance to local leaders. We still feel that dissemination of this work is important – especially for supporting other health systems in making these types of changes. Therefore, we are considering how to better support Scholars in writing papers and producing other outputs.

The future of the program was ensured in 2024 with the renewal of the TRI’s CTSA award for seven more years. While we had to reduce the number of CTSA-supported slots per year due to budget constraints (from five to two), we are confident that we can fund additional slots through other avenues (e.g., the Department of Pediatrics is funding an additional slot in cohort 5). In addition, working with fewer Scholars per year could produce benefits in terms of the quality of applications funded, mentoring and informatics resource allocation, and the ability of the program to focus more on rural/underserved populations (an overall goal of the TRI). At the same time, we wish to increase knowledge of and interest in the program and increase the number of applicants. Starting in the last quarter of 2024, program leaders launched new outreach efforts to health system leaders to describe the program and build support. Over the years, we have heard from clinicians who wanted to apply but felt they would not be supported in doing so. We hope to increase knowledge and buy-in among local health system leaders.

Moving into the future, we will also expand our evaluation activities. We will assess self-ratings of IS competencies [Reference Alonge, Rao and Kalbarczyk27] at the start of the program and at regular intervals thereafter. We will add a parallel survey at the end of year 2 focusing on the year 2 coursework and competencies. We will conduct one Scholar-wide focus group at the mid-point of each program year to allow Scholars to provide feedback and identify unmet needs in their projects and the program as a whole. Further, in 2025 we will conduct a system-wide survey to assess unmet need and potential demand for the program. We will continue to collect longer-term outcomes of the program (e.g., manuscripts, additional projects initiated and completed, new research initiated, and clinical impacts) and submit additional evaluative manuscripts in the future.

Acknowledgements

We are very grateful to the following individuals who currently or formerly served as reviewers of IS Scholars’ applications: Jared Caputa, MD; Troy Schmit, MS; Jessica Coker, MD; Bridget Norton, MD. We are also grateful to the following individuals who served as invited external evaluators for the IS Scholars Program: Elvin Geng, MD; Jane Mahoney, MD; Meghan Lane-Fall, MD; Michele Heisler, MD. Above all, we are grateful to the IS Scholars themselves, whose names we do not list here. Their hard work and dedication have been inspiring, and we very much appreciate their feedback and coaching on how to improve the program.

Author contributions

Geoffrey Curran: Conceptualization, Data curation, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Writing-original draft, Writing-review & editing; Sara Landes: Conceptualization, Data curation, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing-original draft, Writing-review & editing; Taren Massey-Swindle: Conceptualization, Formal analysis, Investigation, Supervision, Writing-original draft, Writing-review & editing; Benjamin Teeter: Conceptualization, Data curation, Project administration, Resources, Writing-original draft, Writing-review & editing; Cynthia Mosley: Conceptualization, Data curation, Project administration, Resources, Writing-original draft, Writing-review & editing; Jennifer Naylor: Data curation, Project administration, Resources, Writing-original draft, Writing-review & editing; Laura James: Conceptualization, Funding acquisition, Project administration, Resources, Writing-original draft, Writing-review & editing.

Funding statement

This work was supported by the National Institutes of Health, National Center for Advancing Translational Sciences, Clinical and Translational Science Awards at the University of Arkansas for Medical Sciences: U54TR001629, UL1TR003107, and UM1TR004909. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Competing interests

The authors declare none.

References

Eccles, MP, Mittman, BS. Welcome to implementation science. Implement Sci. 2006;1(1):1–3. doi: 10.1186/1748-5908-1-1.
Leppin, AL, Mahoney, JE, Stevens, KR, et al. Situating dissemination and implementation sciences within and across the translational research spectrum. J Clin Transl Sci. 2019;4(3):152–158. doi: 10.1017/cts.2019.392.
Austin, CP. Opportunities and challenges in translational science. Clin Transl Sci. 2021;14(5):1629–1647. doi: 10.1111/cts.13055.
Faupel-Badger, JM, Vogel, AL, Austin, CP, Rutter, JL. Advancing translational science education. Clin Transl Sci. 2022;15(11):2555–2566. doi: 10.1111/cts.13390.
Davis, R, D’Lima, D. Building capacity in dissemination and implementation science: a systematic review of the academic literature on teaching and training initiatives. Implement Sci. 2020;15(1):97. doi: 10.1186/s13012-020-01051-6.
Huebschmann, AG, Johnston, S, Davis, R, et al. Promoting rigor and sustainment in implementation science capacity building programs: a multi-method study. Implement Res Pract. 2022;3:26334895221146261. doi: 10.1177/26334895221146261.
Dearing, JW, Kee, K, Peng, T. Historical roots of dissemination and implementation science. In: Dissemination and Implementation Research in Health: Translating Science to Practice. 3rd ed. Oxford University Press, 2023:69–85.
Proctor, EK, Landsverk, J, Baumann, AA, et al. The implementation research institute: training mental health implementation researchers in the United States. Implement Sci. 2013;8(1):105.
Viglione, C, Stadnick, NA, Birenbaum, B, et al. A systematic review of dissemination and implementation science capacity building programs around the globe. Implement Sci Commun. 2023;4(1):34. doi: 10.1186/s43058-023-00405-7.
Chambers, DA, Proctor, EK, Brownson, RC, Straus, SE. Mapping training needs for dissemination and implementation research: lessons from a synthesis of existing D&I research training programs. Behav Med Pract Policy Res. 2017;7(3):593–601. doi: 10.1007/s13142-016-0399-3.
Chambers, DA, Emmons, KM. Navigating the field of implementation science towards maturity: challenges and opportunities. Implement Sci. 2024;19(1):26. doi: 10.1186/s13012-024-01352-0.
Leppin, AL, Baumann, AA, Fernandez, ME, et al. Teaching for implementation: a framework for building implementation research and practice capacity within the translational science workforce. J Clin Transl Sci. 2021;5(1):e147. doi: 10.1017/cts.2021.809.
Leshner, AI, Terry, SF, Schultz, AM, et al. Introduction. In: The CTSA Program at NIH: Opportunities for Advancing Clinical and Translational Research. National Academies Press (US), 2013. (https://www.ncbi.nlm.nih.gov/books/NBK169203/) Accessed December 30, 2024.
Padek, M, Colditz, G, Dobbins, M, et al. Developing educational competencies for dissemination and implementation research training programs: an exploratory analysis using card sorts. Implement Sci. 2015;10(1):114. doi: 10.1186/s13012-015-0304-3.
Davis, R, Sevdalis, N, Baumann, AA. Training and capacity building in dissemination and implementation science. In: Dissemination and Implementation Research in Health: Translating Science to Practice. 3rd ed. Oxford University Press, 2023:644–662.
Metz, A, Louison, L, Burke, K, Albers, B, Ward, C. Implementation Support Practitioner Profile: Guiding Principles and Core Competencies for Implementation Practice. Chapel Hill, NC: National Implementation Research Network, University of North Carolina at Chapel Hill; 2020.
University of Arkansas for Medical Sciences. About UAMS. 2023. (https://web.uams.edu/about/) Accessed January 17, 2023.
University of Arkansas for Medical Sciences. About regional campuses. 2023. (https://regionalcampuses.uams.edu/about/) Accessed January 17, 2023.
Committee to Review the Clinical and Translational Science Awards Program at the National Center for Advancing Translational Sciences; Board on Health Sciences Policy; Institute of Medicine; Leshner, AI, Terry, SF, Schultz, AM, et al., eds. The CTSA Program at NIH: Opportunities for Advancing Clinical and Translational Research. Washington (DC): National Academies Press (US); 2013. Summary. (https://www.ncbi.nlm.nih.gov/books/NBK169198) Accessed July 3, 2025.
University of Arkansas for Medical Sciences. About TRI. 2023. (https://tri.uams.edu/about-tri/) Accessed January 17, 2023.
Morris, TH. Experiential learning – a systematic review and revision of Kolb’s model. Interact Learn Envir. 2020;28(8):1064–1077. doi: 10.1080/10494820.2019.1570279.
Kong, Y. The role of experiential learning on students’ motivation and classroom engagement. Front Psychol. 2021;12:771272. doi: 10.3389/fpsyg.2021.771272.
Maudsley, G, Strivens, J. Promoting professional knowledge, experiential learning and critical thinking for medical students. Med Educ. 2000;34(7):535–544. doi: 10.1046/j.1365-2923.2000.00632.x.
National Institutes of Health. Scoring guidance. 2023. (https://grants.nih.gov/grants/policy/review/rev_prep/scoring.htm) Accessed January 27, 2023.
Schultes, MT, Aijaz, M, Klug, J, Fixsen, DL. Competences for implementation science: what trainees need to learn and where they learn it. Adv Health Sci Educ. 2021;26(1):19–35. doi: 10.1007/s10459-020-09969-8.
Robinson, D. UAMS Physician’s New Skills and Lucky Timing Save Vilonia Baby from Deadly, Disabling Disease. 2021. (https://news.uams.edu/2021/05/07/uams-physicians-new-skills-and-lucky-timing-save-vilonia-baby-from-deadly-disabling-disease/)
Alonge, O, Rao, A, Kalbarczyk, A, et al. Multimethods study to develop tools for competency-based assessments of implementation research training programmes in low and middle-income countries. BMJ Open. 2024;14(7):e082250. doi: 10.1136/bmjopen-2023-082250.