
Approaches and tools to measure individual-level research experience, activities, and outcomes: A narrative review

Published online by Cambridge University Press:  11 August 2025

Brenda M. Joly*
Affiliation:
Public Health Program, Muskie School of Public Service, University of Southern Maine, Portland, USA
Carolyn Gray
Affiliation:
Cutler Institute for Health and Social Policy, Muskie School of Public Service, University of Southern Maine, Portland, USA
Julia Rand
Affiliation:
Public Health Program, Muskie School of Public Service, University of Southern Maine, Portland, USA
Katy Bizier
Affiliation:
Public Health Program, Muskie School of Public Service, University of Southern Maine, Portland, USA
Karen Pearson
Affiliation:
Cutler Institute for Health and Social Policy, Muskie School of Public Service, University of Southern Maine, Portland, USA
*Corresponding author: B.M. Joly; Email: brenda.joly@maine.edu

Abstract

Strengthening the research workforce is essential for meeting the evolving needs and challenges in the health and biomedical fields. Doing so effectively requires an understanding of how the experiences of a researcher shift over time and how one's research career evolves, particularly as supports are put in place to foster research. This narrative review provides a summary of published individual-level assessment measures and survey tools from 2000–2024. All measures were abstracted, classified, and coded during analyses to describe the areas of focus, and they were organized into one of six research categories. The review identified a range of measures and methods across all categories. However, the measures were often narrow, focused on outputs, and not ideal for assessing the full range of experiences a researcher may have throughout their career. The most common metrics were related to research productivity and bibliometric measures. Our review of survey tools revealed a gap in comprehensive approaches available to assess an individual's research experience, efforts, supports, and impact. As efforts expand to evaluate and study the research workforce, tools that focus on a broad range of individual-level measures, tied to specific underlying constructs and drawn from the literature, may prove useful.

Information

Type
Review Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

The need to provide research support to early career scientists and clinicians is well documented in the literature [1–9]. In the United States, the demand for skilled researchers in the health and biomedical field is expected to increase as the gap between supply and demand continues to widen [10–12]. Studies suggest that the path to research independence takes longer now than in previous decades, resulting in what is known as a "holding zone" [6]. Efforts have been underway to promote diversity in the workforce [13] and to attract and build a pipeline of researchers to address contemporary health issues and challenges [12]. Several federal agencies, including the Department of Veterans Affairs, the National Institutes of Health (NIH), and the Agency for Healthcare Research and Quality, provide career support, training, and mentoring [4]. One example is the support NIH provides for clinical and translational researchers through two national programs. These programs, known as the Clinical and Translational Science Awards (CTSAs) and the Institutional Development Award Networks for Clinical and Translational Research (IDeA-CTR), seek to improve health by fostering new research and accelerating its use. A key strategy of these programs relies on building, training, and strengthening the research workforce, along with ongoing efforts to evaluate the success of these initiatives based on published guidelines [14].

While efforts to measure individual-level changes in the health research workforce remain a priority, there are few tools and systematic approaches for doing so. Bilardi and colleagues [15] found a lack of comprehensive tools that are widely applicable and able to provide standardized and consistent measures. This narrative review describes the characteristics and focus areas of individual-level metrics designed to assess research experience, activities, and outcomes. It uses an organizational framework to categorize the measures into one of six areas. The review also summarizes the use of common bibliometric indicators and survey tools, identified and flagged during the review process, to provide further details on their focus areas and use.

Methods

Review and abstraction process

An initial review was conducted in 2021 to scan the published literature for tools and frameworks associated with return on investment (ROI), particularly related to federally funded programs such as the CTSAs. Results indicated that relatively few studies report implementing an evaluative framework to capture the individual-level productivity and impact of research across the career path. That search provided the foundation for the current study, whose purpose was to identify individual-level assessment tools and measures and to determine gaps and opportunities for the development of future measures.

A literature search was conducted in 2022 by an experienced librarian to identify publications that included indicators or measures related to an individual's research-related experience and activities, as well as the outcomes or impact of their research. A second professional librarian provided input in December 2022 on search strategies and additional databases in the disciplines of psychology, education, and engineering, to identify articles, including review articles, that focused on assessing or measuring research productivity and/or research capacity, especially for junior faculty on a research career trajectory. The aim was to identify as many tools and individual metrics as possible that were relevant to clinical and translational research in order to create a new, more comprehensive tool. The following questions framed the search: What are the key factors that enable successful research productivity, and how is that measured in this discipline? Are there specific assessment tools (e.g., surveys) pertinent to each discipline? The databases searched included: PubMed, Academic Search Complete, Business Source Complete, Cochrane Database of Systematic Reviews, ERIC, Ei Compendex, MMYB with Tests in Print, APA PsycInfo, Web of Science, and Google Scholar. All databases were searched for publications from 2010 through 2022. Search terms included combinations of the following key terms: research capacity, research investment, researcher productivity, translational research, research assessment, assessment tool, and the wild card term research method*. As seen in Figure 1, 124 publications were identified through an initial search of PubMed and Google Scholar. The database search of non-health disciplines yielded an additional 69 articles. A total of 44 publications were pulled from the reference lists of relevant articles scanned by the lead author. Fourteen articles retrieved from a previous internal study (unpublished) focusing on clinical and translational research ROI were included in this review. All publications were exported into EndNote, and an annotated subject bibliography was produced for team review. Based on our analysis and coding approach (see below), we ran our search again in 2023–2024 in PubMed and Google Scholar, yielding an additional 30 articles to fill in gaps identified in the preliminary search. A final peer review yielded five additional articles for inclusion.

Figure 1. Search Process and Results.

The publications were reviewed and abstracted using a standardized approach (described below) and three separate spreadsheets. The first spreadsheet recorded all individual-level measures (e.g., serving as a peer reviewer). The second spreadsheet recorded all bibliometric indicators identified during the review. The third spreadsheet captured survey tools that were also identified and flagged during the review. All survey tools (named or unnamed) that included a structured questionnaire with a set of items (scale) or set of scales measuring constructs were abstracted. The tools were reviewed regardless of whether they reported psychometric testing. However, any tools that were strictly qualitative, inadequately described, or omitted the item wording and response options were excluded, as were post-program surveys, evaluations, or tracking systems not designed to measure underlying constructs.

Individual-level measures

Figure 2 depicts 17 focus areas identified by the lead author from the literature for coding purposes. The codes were classified into six overarching research categories. All but four codes were determined a priori based on areas deemed relevant to the evaluation of NIH-funded clinical and translational research initiatives. Each measure was recorded under one of the 17 focus areas. The individual-level measures, citation, and the question wording and response options (when available, and as applicable) were catalogued in the spreadsheet.

Figure 2. Assigned Research Categories and Focus Areas Used in Coding. (Note: * Codes developed a priori).

Bibliometric measures and survey tools

The data abstraction for all flagged bibliometric indicators included the measure name, definition, focus, and type (e.g., individual or organizational), as well as the publication date, author, and complete citation. The following information was abstracted for each survey tool: the year of publication, the authors and citation, the study location, the name of the tool (if applicable), the total number of items, the item wording and response options, the respondents, a description, and any reported validation efforts.
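For readers who wish to replicate this kind of abstraction, the survey-tool fields above translate naturally into a structured record. The sketch below is purely illustrative; the field names are ours and do not correspond to a published schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurveyToolRecord:
    """One abstracted survey tool; field names are illustrative only."""
    year_published: int
    authors_and_citation: str
    study_location: str
    tool_name: Optional[str]          # None when the tool was unnamed
    total_items: int
    item_wording_and_responses: str
    respondents: str                  # e.g., faculty, clinicians, researchers
    description: str
    validation_reported: bool         # any reported psychometric testing
```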

Results

Individual-level measures by research category

The results are discussed within the framework of the six research categories depicted in Figure 2.

Research category #1: research productivity

The most frequently used measures focused on the concept of research productivity, which was typically assessed by exploring publications and research grants or funding. As noted below, most approaches relied on quantifying publication efforts (e.g., total publications) and external grants (e.g., total number of awards). However, in a few instances, algorithms were used to measure the productivity of researchers. For example, propensity score matching was used to compare funded and non-funded researchers with impact scores [16]. Additionally, Wootton and colleagues [17] developed a research output score based on the sum of three measures: 1) peer-reviewed publications, 2) research grant income, and 3) PhD student supervision. In this case, peer-reviewed papers were assigned publication "points" based on publication year, journal impact factor (JIF), and author position. Grants were included if they were awarded competitively and within a given timeframe; they were weighted by role and took the grant "income" into account. Points were given for supervising PhD students when a thesis was aligned with the supervisor's research and occurred during the year in review [17]. A sketch of this kind of composite score appears below. Finally, one study explored faculty research productivity by calculating the number of publications during the first few years of an academic appointment, prior to an initial promotion, and following promotion. This study also explored research productivity to determine associations based on rank and length of time at a given rank [18].
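To make the structure of such composite scores concrete, the sketch below assembles the three components Wootton describes. The numeric weights (author-position multiplier, recency discount, dollars-per-point scaling) are hypothetical placeholders, not the point values published in Wootton's method.

```python
# Minimal sketch of a Wootton-style composite research output score.
# All numeric weights are illustrative assumptions, not the point
# values defined in the original method [17].

def publication_points(jif: float, author_position: int,
                       years_since_publication: int) -> float:
    """Points for one peer-reviewed paper, scaled by JIF,
    author position, and recency (hypothetical weights)."""
    position_weight = 1.0 if author_position == 1 else 0.5
    recency_weight = max(0.0, 1.0 - 0.1 * years_since_publication)
    return jif * position_weight * recency_weight

def grant_points(income: float, role_weight: float) -> float:
    """Points for one competitive grant, weighted by role
    (e.g., 1.0 for PI, 0.5 for co-investigator)."""
    return role_weight * (income / 100_000)  # 1 point per $100k, assumed

def output_score(pubs: list[tuple], grants: list[tuple],
                 phd_students: int) -> float:
    """Sum of the three components: publications, grants, supervision."""
    return (sum(publication_points(*p) for p in pubs)
            + sum(grant_points(*g) for g in grants)
            + 1.0 * phd_students)

# Example: two papers, one PI grant, one supervised thesis.
score = output_score([(4.2, 1, 0), (2.8, 3, 2)], [(250_000, 1.0)], 1)
```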

Publications

Overall, the most frequently published productivity measures were related to self-reported publications [1,3,17–23] and the use of bibliometric indicators based on existing databases (described below) [16,24–45]. Publications were measured by computing the total number overall and by type (e.g., peer-reviewed) [4,21,26,29,34–36,46], tallying articles published during a set time period [20,21,27,45], focusing on articles linked to funded projects or training programs [5,21,47], analyzing the publication date and sometimes comparing it to when specific funds were received [20], assessing the impact factor of top-tier journals, and weighing authorship order and authorship collaboration [4,28,31,34,41,48]. Self-reported publication measures were typically compiled through a review of curricula vitae or survey questions [3,18,23] or via a publication list [21].

The JIF is another common approach that has been used to explore publications. Publishing in journals with high impact factors has been used as a vehicle to secure grants and tenure and to raise awareness of translational research [49]. Using the JIF along with the h-index (described below) is useful in assessing an individual researcher's impact across their career trajectory. Of note is the current movement away from the JIF toward a more comprehensive assessment of research quality, one that goes beyond reliance on the influence of the journal in which studies are published and considers the merits of the individual studies. To that end, the San Francisco Declaration on Research Assessment provides a set of recommendations for researchers, institutions, and publishers on the measurement and dissemination of research publications [50].
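For readers unfamiliar with the metric, the standard two-year JIF is a simple ratio. For a journal in year $Y$:

$$\mathrm{JIF}_Y = \frac{C_Y\!\left(P_{Y-1} + P_{Y-2}\right)}{N_{Y-1} + N_{Y-2}},$$

where $C_Y(P_{Y-1}+P_{Y-2})$ is the number of citations received in year $Y$ by items the journal published in the two preceding years, and $N_{Y-1}+N_{Y-2}$ is the number of citable items published in those years. Because the JIF is a property of the journal rather than of any one article or author, it is at best a coarse proxy for individual impact, which helps motivate the movement away from it described above.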

Bibliometric measures

In the last two decades, bibliometric indicators have been used to assess metrics focused on lifetime publications, citations, authorship order, publication rates, and scholarship quality [17,30,34,36]. A number of indices have been created to compute scores or values based on publication and citation data. In 2005, the h-index was published as a new approach to characterize research output that could be used across scientific disciplines [32,51]. This index is among the most commonly reported, and it relies on both the publication and citation record of an individual [52]. As seen in Table 1, several additional approaches using bibliometric data now exist, including the: g-index [24], integrated researcher productivity index [31], relative citation ratio [53], scientific quality index (SQI) [40], scholarly productivity index [44], category normalized citation impact (CNCI) [43], Ab-index [54], fractional scientific strength index [24,55], and the future h-index [25]. These measures largely focus on productivity by exploring an individual's cumulative publication and citation record using different assumptions and algorithms.
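As a concrete illustration of how such indices are computed, the sketch below implements two of them directly from a list of per-paper citation counts: Hirsch's h-index (the largest h such that h papers each have at least h citations) and Egghe's g-index (the largest g such that the g most-cited papers together have at least g² citations).

```python
def h_index(citations: list[int]) -> int:
    """Hirsch's h-index: largest h such that h papers have >= h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations: list[int]) -> int:
    """Egghe's g-index: largest g such that the top g papers
    have at least g**2 citations in total."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        total += count
        if total >= rank * rank:
            g = rank
    return g

# Example: a citation record of [10, 8, 5, 4, 3] yields h = 4, g = 5.
```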

Table 1. Bibliometric measures

Research funding

There was a lack of consistency in the ways in which funding data were obtained, ranging from surveys with yes/no responses, Likert-type scale options, or open-ended questions to reviews of existing documents, administrative records, or curricula vitae. Overall, the number, amount (e.g., total research awards in dollars), type (e.g., competitive, extramural, foundation), and source of awards received (e.g., federal, sponsor name) were the most frequent measures used to assess research funding [1,3,5,20,27,29,36,56–61]. These measures represented the volume of funding (e.g., how many research grants, the total award amounts) [4,19,56,57], the first time a particular type of award was received [58], the consistency of funding over time [5,62], the nature of the award [5,16,21], and the prestige of the funding source [3,8,20,57,59]. Several studies also investigated the researcher's role (e.g., Principal Investigator), proposal submissions, and award decisions, including those receiving and not receiving funding [4,5,8,20,22,26,27,57,60,63,64]. A few studies explored potential factors impacting subsequent funding, such as career development training [5,59,64,65]. More recently, studies have focused on the number and type of funding received among early-stage researchers [3,8,59,60,64]. For example, Chou and colleagues [3] surveyed new and early-stage faculty who were recipients of pilot funding through the Oklahoma IDeA Network of Biomedical Research Excellence Research Project Investigator award program.

Research category #2: research activities and skills

This group of measures focused on a number of activities in which researchers are involved throughout their careers, and they were often aligned with the productivity measures noted above. For instance, the most common activities were related to publishing and securing funding through grant writing [19,29,66–68]. Additional activities were linked to presenting research, including early career experience delivering oral presentations [69], and submitting unsuccessful grant proposals and articles that have not been published [29]. The literature also identified items focused on participation in external NIH advisory groups or other committees [5], serving as a peer reviewer for papers or grants [29], serving in an editorial role [4], teaching [29,66], training or mentoring students or postgraduates [8,63,70], participating in scientific or professional societies [5], submitting ethics and regulatory applications [42], receiving prizes or awards tied to research [7,24], and contributing to new guidelines [39].

Several research activity measures were also used to assess individual skills, or self-efficacy in selected research skills, related to: the conceptualization of a research project [62,64], proposal or grant writing [60,62], regulatory compliance [60,65], management and oversight [64–66], the collection, recording, and analysis of data [65], and the dissemination of findings [64], including report development and presentations [65].

Research category #3: factors influencing researcher efforts

Facilitating factors

Several studies focused on motivators to do research [63,67,68] and the characteristics, beliefs, and roles of those involved in research, including individual attributes [63,69], research self-efficacy [63], administrative roles [70], those in top positions [71], and those who face greater challenges competing for research funding [2]. For example, Ommering and colleagues [69] studied the association between academic success and measures of motivation and research self-efficacy among first-year medical students based on their grades in a research-related course. They included nine items measuring intrinsic and extrinsic motivation for research (e.g., doing research is interesting, fun, challenging, and useful for my resume), and three items related to self-efficacy (e.g., I feel I am competent enough to do research).

A range of factors linked to one's research skills, quality, success, attrition, and overall productivity, volume, and scholarly impact were also explored [2,52,70,72,73]. For example, one study examined the role of age, academic rank, self-confidence, years of research experience, and teaching load [70]. Common facilitating factors promoting research included: compensation [51,72], start-up support [74], career satisfaction [72], percent of time dedicated to research (known as protected, uninterrupted, or quarantined time) [57,68,74], prior research socialization and opportunities [15,51], laboratory experience [74], skills training [68], and early career research engagement [75]. Our review also revealed that work considered personally meaningful, with a flexible schedule and a collaborative team environment, promoted satisfaction. For instance, Kalet and associates [71] surveyed women who participated in the Clinical Scholars Program sponsored by the Robert Wood Johnson Foundation. They found that flexibility was critical in participants' decisions to remain in an academic career, given the greater perceived freedom to care for family and attend school events or appointments. Dzirasa and colleagues [72] published a case study of a pilot training program for a single MD/PhD graduate to explore research engagement, the unyoking of clinical and research milestones, and a path to independence based on an integrated training model; dedicated research time, space, resources, and salary support; and mentorship. The program proved successful, resulting in NIH funding and research independence within 3.5 years, versus the nine-year average typical for MD/PhDs [72].

Research barriers

Impediments included uncompensated research costs as well as a lack of resources or supports [51]. Increased competition for funding, administrative burdens, student debt, and issues related to regulatory compliance, work-life expectations, and reduced supports were also cited as obstacles to research for clinical scientists [73]. Goldstein and colleagues [73] reported that a major impediment for surgeon-scientists is the long hours required, which make work-life balance difficult to achieve; their paper underscored the need for strong social support. One study of medical school faculty found that the strongest predictor of intent to leave academic medicine was problems balancing the demands of family and career [74]. The lack of timely and constructive support from departmental leadership was also linked to attrition [74].

Early career and professional development supports

A number of articles explored early career plans and development opportunities. For example, one study asked medical students how extensively they planned to be involved in research during their career [58]. Other studies focused on the characteristics of training programs, including their duration [63], timing (e.g., during schooling), components (e.g., laboratory experience), and participants (e.g., research staff and Principal Investigators) [75]. A few studies focused on the quality and culture of graduate training [76], the academic rank of participants [77], and personal factors linked to training participation [78]. More recently, several studies have begun to focus on individual groups of researchers who have participated in tailored career development programs or opportunities, including: respiratory disease young investigators [22], Perinatal Research Society scholars [64], clinical investigator training programs [71,79], medical students [58], MD/PhD programs [80], and new or early-stage faculty or scientists [3,5,8,81–84].

Research supports and environment

The literature in this area is largely descriptive and lacks specificity in terms of how items are measured and collected. A number of research supports were identified, including assistance provided by grant programs and staff [1], leadership support [85], financial and social supports [73], organizational supports [86], and structures that facilitate interdisciplinary work and encourage work with external partners [85]. Additional institutional practices were also identified, including defining clear roles, ensuring fair decision-making, adopting agreed-upon processes for reaching consensus, and offering rewards [22,77,85,87–90]. Added supports were characterized by a focus on the research systems in place at a given organization [87], managerial actions related to resource allocation [73], the research culture of specific departments [69], and the overall research culture of the organization, including the mentoring climate [15,70,91,92]. For example, the literature on translational research revealed the importance of organizational cultures that support collaborative work and entrepreneurial science skills [88]. Fudge and colleagues [93] found a reported shift in attitudes among scientists who perceived an advantage in securing funding for translational research by becoming more "entrepreneurial" or working in an entrepreneurial organization.

Research category #4: research mentorship

The literature on research mentorship largely focuses on the characteristics, components, value, or benefits of mentoring programs, as well as the effectiveness of research mentors. Common areas of research inquiry include the quality of the mentor/mentee relationship [94], mentee satisfaction and retention [95], mentorship hours received and the types of research projects in which mentees participated [92], as well as the mentoring skills of those providing research mentorship [94,96–98]. A few studies have underscored the importance of mentoring programs for women, medical school faculty, and underrepresented groups [74,91,99], and the value of creating a career map or academic development plan [99,100]. One study conducted by Bice and associates [94] examined faculty productivity by calculating composite variables to assess graduate mentorship, with a focus on hours spent per week with the advisor, the number of projects with the advisor, the frequency of communication with the advisor (e.g., never, once a month), and the perceived supportiveness of the advisor (e.g., extremely to not supportive). The findings suggested the number of hours spent was less important than involvement in meaningful research projects where students can participate in multiple aspects of the research.

Research category #5: research collaboration

Research collaboration is commonly measured by operationalizing various constructs based on participatory approaches and models. The measures tend to focus on research projects, including the processes and outcomes of the work [101]. The metrics also tend to explore the relationships, climate, or expectations of partners [102], as well as their attributes [103], and many of the measures focus on the actual partnerships (e.g., community-academic) or the participating organizations (versus individuals). For example, Greenwald and colleagues [102] identified metrics that included self-reported measures to assess organizational activities, communication among partners, information exchange, resource allocation and values, and policy or advocacy efforts. Another study measured the stages of community engagement to track the progress of research projects over time [104]. Proposed metrics to assess community-engaged clinical and translational science research have been organized into process measures focusing on research activities, and outcome measures linked to the contributions of community-engaged research [105]. Common measures included the synergy between research and community interests, priorities, or concerns, as well as partnership dynamics and outcomes based on research projects [100–102,105–107]. Additionally, efforts to include the community in research [105,106,108] and the dissemination of findings were also cited in the literature [66,109,110]. In a review of community engagement measures conducted by Eder and colleagues [105], CTSA organizations reported collecting a number of process measures such as the number and type of community members engaged, the number of projects that receive support from a community engagement core, the number of projects that seek input from the community, and the number of community-academic interactions during a given project, to name a few.

Efforts to assess research partners or collaboration at the individual level are less commonly reported, and the literature tends to focus on tracking engagement, enhancing partnerships, the benefits to researchers, and perceptions about the collaboration. Trochim and colleagues [111] created a Researcher Form to collect self-reported data in four areas: 1) satisfaction with collaboration, 2) impact of collaboration, 3) trust and respect among partners, and 4) attitudes about transdisciplinary research. Other studies have explored the benefits of collaboration on scientists' productivity and early career measures by assessing team composition [112–115]. Several measures to track collaborative efforts or team science focused on publications [108–110], including indicators that assessed a researcher's contributions and the number of distinct institutions with which a researcher's co-authors are affiliated [41]. Partner contributions to grantsmanship and project implementation, as well as collaborative service, teaching, and leadership efforts, were also reported [115]. Metrics for enhancing community-academic partnerships included measures related to role clarity, inter-professional research teams, and use of quality improvement and practical trials, as well as skill-building, partnership development, and outreach to public health agencies [113,116].

Research category #6: research impact

Literature on the impact of health research investments is largely based on conceptual frameworks that reference individual-level, community-level, organizational-level, or societal-level results [117–122]. For decades, the importance of using data to improve decision-making around funding investments in research has been acknowledged. For example, a 1986 Technical Memorandum authored by the Office of Technology Assessment and commissioned by Congress provided an assessment of the use of quantitative approaches to help guide research funding decisions [123]. The authors emphasized that, despite the subjective nature of quantitative measures, they are invaluable for analytical comparisons and for describing trends. In their narrative review on measuring research impact, Greenhalgh and colleagues [124] conclude that there is no single standard method for measuring research impact; different purposes require different approaches. One common approach for assessing clinical and translational research impacts is the Translational Science Benefits Model, with over 50 publications in PubMed citing this work [122].

Our review of the literature found that, in general, the benefits or impacts of health research are often captured as efforts that: influenced clinical procedures, guidelines, or testing [122], led to career advancement [48], contributed new knowledge to the field [117,119,121], led to future research and development [119], resulted in new methodology, innovative discoveries, drugs, software, medical devices, or diagnostics [118,122,125], led to economic benefits (e.g., reduced social costs of chronic disease) [122], and influenced policymakers, decision-makers, and leaders [117,118,121,122,126,127].

Indicators specifically linked to the benefits and impact of health research included: improved health services, clinical practice, and patient outcomes [118,119,122,127], the development of health and social service guidelines, informed public health policies, the creation of new health education material, enhanced advocacy [118,122], decreased medical errors and adverse drug effects, improved adoption of and adherence to new guidelines and best practices, improved health of patient panels, strengthened patient-clinic relationships, improved continuity of care and overall safety, improved service delivery, and improved distribution and equity in health care access and quality [118,120,122]. Economic benefits and public health outcomes were also identified, including enhanced quality of life and an increase in life expectancy [118,120,122].

Finally, ROI was a common theme used to describe the impact of research [118,128]. A review of several impact frameworks and approaches, including economic studies, has been reported elsewhere [129], and applications of various approaches used to assess the paybacks or returns tied to research investments have been well documented [121,128–133]. However, the literature on individual-level measures to assess research impact is limited. For instance, in 2009, the Canadian Academy of Health Sciences published a report describing a framework with a list of 54 indicators and corresponding metrics to assess ROI in health research, including aspirational measures [118]. In this report, they generated a list of five impact categories: 1) advancing knowledge, 2) capacity building, 3) informing decision-making, 4) health impacts, and 5) broad economic and social impacts. Each indicator was aligned with recommendations for application at the individual, group, institutional, regional, or national level. Of the 54 indicators, 21 were relevant at the individual level, and only one "impact" indicator (self-reported continuity of care based on patient surveys) was appropriate at the individual level. The remaining indicators proposed to assess health impacts and broad economic and social impacts were applicable at the provider, organization, regional, or national level.

Survey tools

A range of survey tools has been designed to assess one or more aspects of a researcher's experience, activities, and outcomes, yet relatively few publications provide the item wording and response options used to operationalize the constructs. During our review, we identified and flagged 11 survey tools published from 2001 to 2022 with specific individual-level measures that met our criteria. As seen in Table 2, seven of the 11 tools were validated [68,87,112,136–139]. The surveys ranged in length from seven to 150 items. In general, respondents were faculty, clinicians, health professionals, or researchers. In one case, policymakers were surveyed. While the scope varied, most of the tools were narrowly focused on research skills, self-efficacy, or attributes of the researcher, and nearly all included one or more measures assessing research activities. Four of the tools were developed in the United States [20,87,112,138], and three of the four were administered to medical school faculty or clinical researchers [20,87,138]. The remaining tools were based on international efforts [29,68,119,136,137,139,140].

Table 2. Survey tools

Discussion

Our review of the literature revealed a range of individual-level measures designed to assess one or more aspects of a researcher’s experience, activities, and outcomes. However, the existing approaches fall short in capturing metrics across all six research categories included in our review. They lack comprehensiveness, with most measures concentrating on a single area. We found limited integration across multiple research categories that collectively represent the depth and breadth of a researcher’s experience and efforts as well as their supports and impact. By developing measures that span our research categories and focus areas, we can capture a more complete picture of a researcher’s activities, personal experiences, and outcomes, rather than focusing narrowly on traditional metrics.

A key observation of our review is the reliance on traditional research productivity metrics such as publications, research grants, and bibliometric indicators. While these are commonly used, they may not fully encompass the complexities of a researcher's career trajectory or success. Publications, often measured by total counts and citations, may be foundational to many assessments of academic achievement, but they fail to account for the quality of the research, individual contributions, engagement of the community, or the broader influences or impact of the work. In addition, reliance on bibliometric data may result in valuing quantity over quality. As efforts move away from narrowly measuring research productivity, more holistic approaches to understanding the experiences of researchers, the merits of their findings, and their impact may offer a more accurate reflection of a researcher's efforts and contributions throughout their research career.

Measures related to research funding were common, but the approaches varied with little consistency. The literature largely focused on grant numbers, dollar amounts, and the nature of awards. A few studies considered the broader context of funding, such as the role of the researcher in the grant process. Understanding how financial support, such as the type and timing of funding, influences a researcher's ability to attain research independence may be important to measure over time.

This review also revealed a range of output measures designed to capture the variety of activities and skills researchers engage in. These measures often focused on common tasks such as grant writing, presenting research, mentoring students, teaching, and engaging in routine service activities such as peer review. A clear theme across the output measures was the absence of a unified approach to track these diverse efforts, making it more difficult to assess an individual’s progress toward advancing their research career or their broader contributions to their field.

Metrics related to facilitating factors and research barriers were well documented in the literature, particularly as they related to new and emerging researchers. However, despite the varied measures, there is no uniform approach for capturing the supports needed to foster research and mitigate barriers that hinder career progression.

In terms of research mentorship and collaboration, the literature suggests both areas are essential for research success, suggesting a need to include these areas in future efforts focused on individual-level researchers. The measures we reviewed related to mentorship tend to focus on the quality of the mentor-mentee relationship, involvement in meaningful research projects, and the support received, including the effectiveness of the mentor. Similarly, the measures of collaboration we identified tend to focus on the research activities, outputs, contributions of the partners, and the composition of the research team.

Measures related to research impacts continue to evolve, and a number of useful frameworks exist. Approaches for assessing ROI have also been widely applied. Yet more work is needed to create methods to evaluate the individual-level impact of research throughout one's career trajectory, versus the impact of specific research projects. Ongoing efforts to assess a researcher's outcomes or impact may benefit from the use of additional resources such as the Overton Index policy database (https://www.overton.io/overton-index), a subscription-based resource that provides a robust repository of policy and gray literature, linking individual publications to policy-relevant documents to highlight the impact and translational effects of research [141]. Although this database was outside the budgeted resources available for this review, it may be useful to explore for ongoing evaluation efforts of clinical and translational research.

Our review of survey tools also revealed the lack of comprehensive and standardized approaches for assessing individual-level research experience, activities, and outcomes. There are few validated instruments with consistent and standardized methodologies, and many existing tools focus on narrow aspects of a researcher’s career, such as self-reported skills. This complicates the field’s ability to adopt standardized measurement practices.

More comprehensive measurement tools could provide important insights on how funding, career support, and other resources help fortify both the new investigator and the bench-to-bedside/bedside-to-practice pathways in their communities. Future measurement efforts should focus on developing and psychometrically testing new tools that capture the full range of factors that influence a researcher’s experience, activities, and outcomes. Broader methodological approaches could potentially be used to predict research career progression or research success. Future efforts using structural equation modeling or other advanced analytic techniques may help to determine which areas contribute to the experiences of researchers and the outcomes of their research careers.

Limitations

This review contributes to the evaluation of clinical and translational research by highlighting current approaches and measures used to assess individual-level efforts. Although an attempt was made to widely scan the literature, a systematic review with expanded search terms that focus on "evaluation" and "measurement" may yield additional published measures and tools that were not captured in our narrative review. Our inclusion criteria focused on summarizing published survey tools that were readily available with complete descriptions, including their corresponding item wording and response options, thus limiting our scope. We identified several additional tools (e.g., post-program surveys, evaluation surveys) that were excluded because they did not meet these criteria. Finally, we acknowledge previous efforts of CTSAs to document and study a range of bibliometric approaches, including their relationship to programmatic outcomes and collaborative impact. For example, Llewellyn and colleagues [142] explored CTSA grant-cited publications based on several bibliometrics, such as the CNCI along with the JIF, Relative Citation Ratios, and Approximate Potential to Translate. Yu and colleagues [92] outlined a bibliometric approach to evaluating CTSA research outcomes and collaborative impact. We attempted to document the range of bibliometric measures available, rather than the ways in which they have been used to contribute to the evaluation science of clinical and translational research.

Conclusion

Assessing changes related to the health research workforce is a priority, yet no comprehensive tools currently exist to measure individual-level attributes, supports, or outcomes among researchers. A range of measures and approaches exists, but more effort is needed to create a comprehensive measurement tool that can be widely applied. Such a tool could help standardize the ways in which a researcher's experience and institutional supports are studied over time to determine the best avenues for supporting new and early-stage investigators.

Acknowledgements

We would like to thank the members of the NNE-CTR Tracking and Evaluation Core for their efforts to support this work.

Author contributions

Brenda Joly conceptualized and led the study and all authors contributed to data curation, writing, review, and editing.

Funding statement

This work was supported by the National Institutes of Health General Medical Sciences IDeA-CTR grant (NNE-CTR) [grant number U54GM115516].

Competing interests

There are no conflicts.

References

1. Blanchard, RD, Kleppel, R, Bianchi, DW. The impact of an institutional grant program on the economic, social, and cultural capital of women researchers. J Womens Health. 2019;28(12):1698–1704. doi: 10.1089/jwh.2018.7642.
2. Choo, E, Mathis, S, Harrod, T, et al. Contributors to independent research funding success from the perspective of K12 BIRCWH program directors. Am J Med Sci. 2020;360(5):596–603. doi: 10.1016/j.amjms.2020.09.006.
3. Chou, AF, Hammon, D, Akins, DR. Impact of the Oklahoma IDeA network of biomedical research excellence research support and mentoring program for early-stage faculty. Adv Physiol Educ. 2022;46(3):443–452. doi: 10.1152/advan.00075.2021.
4. Finney, JW, Amundson, EO, Bi, X, et al. Evaluating the productivity of VA, NIH, and AHRQ health services research career development awardees. Acad Med. 2016;91(4):563–569. doi: 10.1097/acm.0000000000000982.
5. Mason, JL, Lei, M, Faupel-Badger, JM, et al. Outcome evaluation of the national cancer institute career development awards program. J Cancer Educ. 2013;28(1):9–17. doi: 10.1007/s13187-012-0444-y.
6. Milewicz, DM, Lorenz, RG, Dermody, TS, Brass, LF. Rescuing the physician-scientist workforce: the time for action is now. J Clin Invest. 2015;125(10):3742–3747. doi: 10.1172/jci84170.
7. National Institutes of Health. Physician-scientist workforce working group report. NIH. 2014. (https://acd.od.nih.gov/documents/reports/PSW_Report_ACD_06042014.pdf) Accessed November 17, 2022.
8. Nikaj, S, Lund, PK. The impact of individual mentored career development (K) awards on the research trajectories of early-career scientists. Acad Med. 2019;94(5):708–714. doi: 10.1097/acm.0000000000002543.
9. Skinnider, MA, Twa, DDW, Squair, JW, Rosenblum, ND, Lukac, CD. Predictors of sustained research involvement among MD/PhD programme graduates. Med Educ. 2018;52(5):536–545. doi: 10.1111/medu.13513.
10. Freel, SA, Snyder, DC, Bastarache, K, et al. Now is the time to fix the clinical research workforce crisis. Clin Trials. 2023;20(5):457–462. doi: 10.1177/17407745231177885.
11. Rich, E, Collins, A. Current and future demand for health services researchers: perspectives from diverse research organizations. Health Serv Res. 2018;53(Suppl 2):3927–3944. doi: 10.1111/1475-6773.12999.
12. Salata, RA, Geraci, MW, Rockey, DC, et al. Physician-scientist workforce in the 21st century: recommendations to attract and sustain the pipeline. Acad Med. 2018;93(4):565–573. doi: 10.1097/acm.0000000000001950.
13. Estape, ES, Quarshie, A, Segarra, B, et al. Promoting diversity in the clinical and translational research workforce. J Natl Med Assoc. 2018;110(6):598–605. doi: 10.1016/j.jnma.2018.03.010.
14. Trochim, WM, Rubio, DM, Thomas, VG. Evaluation guidelines for the clinical and translational science awards (CTSAs). Clin Transl Sci. 2013;6(4):303–309. doi: 10.1111/cts.12036.
15. Bilardi, D, Rapa, E, Bernays, S, Lang, T. Measuring research capacity development in healthcare workers: a systematic review. BMJ Open. 2021;11(7):e046796. doi: 10.1136/bmjopen-2020-046796.
16. Tesauro, GM, Seger, YR, Dijoseph, L, Schnell, JD, Klein, WM. Assessing the value of a small grants program for behavioral research in cancer control. Transl Behav Med. 2014;4(1):79–85. doi: 10.1007/s13142-013-0236-x.
17. Wootton, R. A simple, generalizable method for measuring individual research productivity and its use in the long-term analysis of departmental performance, including between-country comparisons. Health Res Policy Syst. 2013;11:2. doi: 10.1186/1478-4505-11-2.
18. Tien, FF, Blackburn, RT. Faculty rank system, research motivation, and faculty research productivity: measure refinement and theory testing. J Higher Educ. 1996;67(1):2–22. doi: 10.2307/2943901.
19. Bawden, J, Manouchehri, N, Villa-Roel, C, Grafstein, E, Rowe, BH. Important returns on investment: an evaluation of a national research grants competition in emergency medicine. CJEM. 2010;12(1):33–38. doi: 10.1017/s1481803500011994.
20. Brocato, JJ, Mavis, B. The research productivity of faculty in family medicine departments at U.S. medical schools: a national study. Acad Med. 2005;80(3):244–252. doi: 10.1097/00001888-200503000-00008.
21. Mahoney, MC, Verma, P, Morantz, S. Research productivity among recipients of AAFP foundation grants. Ann Fam Med. 2007;5(2):143–145. doi: 10.1370/afm.628.
22. Panettieri, RA, Kolls, JK, Lazarus, S, et al. Impact of a respiratory disease young investigators’ forum on the career development of physician-scientists. ATS Sch. 2020;1(3):243–259. doi: 10.34197/ats-scholar.2019-0018OC.
23. Scott Van Epps, J, Younger, JG. Early career academic productivity among emergency physicians with R01 grant funding. Acad Emerg Med. 2011;18(7):759–762. doi: 10.1111/j.1553-2712.2011.01118.x.
24. Abramo, G, D’Angelo, CA, Viel, F. Assessing the accuracy of the h- and g-indexes for measuring researchers’ productivity. J Assoc Inf Sci Technol. 2013;64(6):1224–1234. doi: 10.1002/asi.22828.
25. Acuna, DE, Allesina, S, Kording, KP. Predicting scientific success. Nature. 2012;489(7415):201–202. doi: 10.1038/489201a.
26. Akl, EA, Meerpohl, JJ, Raad, D, et al. Effects of assessing the productivity of faculty in academic medical centres: a systematic review. CMAJ. 2012;184(11):E602–E612. doi: 10.1503/cmaj.111123.
27. Barreto, EF, McCoy, RG, Larson, JJ, et al. Evaluation of the academic achievements of clinician health services research scientists involved in “pre-k” career development award programs. J Clin Transl Sci. 2021;5(1):e122. doi: 10.1017/cts.2021.780.
28. Bautista-Puig, N, Lorente, LM, Sanz-Casado, E. Proposed methodology for measuring the effectiveness of policies designed to further research. Res Eval. 2021;30(2):215–229. doi: 10.1093/reseval/rvaa021.
29. Caminiti, C, Iezzi, E, Ghetti, C, De’ Angelis, G, Ferrari, C. A method for measuring individual research productivity in hospitals: development and feasibility. BMC Health Serv Res. 2015;15(1):1–8. doi: 10.1186/s12913-015-1130-7.
30. Dakik, HA, Kaidbey, H, Sabra, R. Research productivity of the medical faculty at the American University of Beirut. Postgrad Med J. 2006;82(969):462–464. doi: 10.1136/pgmj.2005.042713.
31. Duffy, RD, Martin, HM, Bryan, NA, Raque-Bogdan, TL. Measuring individual research productivity: a review and development of the integrated research productivity index. J Couns Psychol. 2008;55(4):518–527. doi: 10.1037/a0013618.
32. Hirsch, JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–16572. doi: 10.1073/pnas.0507655102.
33. Huettner, DA, Clark, W. Comparative research productivity measures for economics departments. J Econ Educ. 1997;28(3):272–278. doi: 10.2307/1183203.
34. Lai, KA, Saxena, G, Allen, PJ. Research performance of academic psychologists in the United Kingdom. Scientometrics. 2022;127(7):4139–4166. doi: 10.1007/s11192-022-04424-4.
35. Lewison, G, Devey, ME. Bibliometric methods for the evaluation of arthritis research. Rheumatology (Oxford). 1999;38(1):13–20. doi: 10.1093/rheumatology/38.1.13.
36. Mavis, B, Katz, M. Evaluation of a program supporting scholarly productivity for new investigators. Acad Med. 2003;78(7):757–765. doi: 10.1097/00001888-200307000-00020.
37. Noble, A, Kecojevic, V. Analysis of research scholarship for academic staff at US ABET accredited mining engineering schools by publications, citations and h-index. Min Technol. 2015;124(4):222–230. doi: 10.1179/1743286315Y.0000000010.
38. Patel, PA, Patel, KK, Gopali, R, Reddy, A, Bogorad, D, Bollinger, K. The relative citation ratio: examining a novel measure of research productivity among southern academic ophthalmologists. Semin Ophthalmol. 2022;37(2):195–202. doi: 10.1080/08820538.2021.1915341.
39. Paul, P, Mukhopadhyay, K. Measuring research productivity of marketing scholars and marketing departments. Marketing Educ Rev. 2022;32(4):357–367. doi: 10.1080/10528008.2021.2024077.
40. Pluskiewicz, W, Drozdzowska, B, Adamczyk, P, Noga, K. Scientific quality index: a composite size-independent metric compared with h-index for 480 medical researchers. Scientometrics. 2019;119(2):1009–1016. doi: 10.1007/s11192-019-03078-z.
41. Ponomariov, BL, Boardman, PC. Influencing scientists’ collaboration and productivity patterns through new institutions: university research centers and scientific and technical human capital. Res Policy. 2010;39(5):613–624. doi: 10.1016/j.respol.2010.02.013.
42. Schreiber, WE, Giustini, DM. Measuring scientific impact with the h-index: a primer for pathologists. Am J Clin Pathol. 2019;151(3):286–291. doi: 10.1093/ajcp/aqy137.
43. Szomszor, M, Adams, J, Fry, R, et al. Interpreting bibliometric data. Front Res Metr Anal. 2020;5:628703. doi: 10.3389/frma.2020.628703.
44. Walters, G. Adding authorship order to the quantity and quality dimensions of scholarly productivity: evidence from group- and individual-level analyses. Scientometrics. 2016;106(2):769–785. doi: 10.1007/s11192-015-1803-3.
45. Zink, HR, Curran, JD. Measuring the startup journey and academic productivity of new research faculty through systems engagement, project efficiency, and scientific publication. J Res Adm. 2020;51(1):32–45.
46. Creswell, JW. Faculty research performance: lessons from the sciences and the social sciences. Association for the Study of Higher Education, ERIC Clearinghouse. 1985. ASHE-ERIC Higher Education Report No. 4, ED 267677. Accessed February 12, 2023.
47. Tsai, AC, Ordóñez, AE, Reus, VI, Mathews, CA. Eleven-year outcomes from an integrated residency program to train research psychiatrists. Acad Med. 2013;88(7):983–988. doi: 10.1097/ACM.0b013e318294f95d.
48. Farrokhyar, F, Bianco, D, Dao, D, et al. Impact of research investment on scientific productivity of junior researchers. Transl Behav Med. 2016;6(4):659–668. doi: 10.1007/s13142-015-0361-9.
49. Nair, A. High impact journals 2024 – key to deciding the right journal. Enago Academy. 2024. (https://www.enago.com/academy/top-high-impact-factor-journals/) Accessed October 28, 2024.
50. DORA. San Francisco Declaration on Research Assessment (DORA). 2024. (https://sfdora.org/about-dora/) Accessed November 26, 2024.
51. Hirsch, JE. Does the h index have predictive power? Proc Natl Acad Sci U S A. 2007;104(49):19193–19198. doi: 10.1073/pnas.0707962104.
52. Hirsch, JE, Buela-Casal, G. The meaning of the h-index. Int J Clin Health Psychol. 2014;14(2):161–164. doi: 10.1016/s1697-2600(14)70050-x.
53. Hutchins, BI, Yuan, X, Anderson, JM, Santangelo, GM. Relative citation ratio (RCR): a new metric that uses citation rates to measure influence at the article level. PLoS Biol. 2016;14(9):e1002541. doi: 10.1371/journal.pbio.1002541.
54. Biswal, AK. An absolute index (Ab-index) to measure a researcher’s useful contributions and productivity. PLoS One. 2013;8(12):e84334. doi: 10.1371/journal.pone.0084334.
55. Abramo, G, D’Angelo, CA. National-scale research performance assessment at the individual level. Scientometrics. 2011;86(2):347–364. doi: 10.1007/s11192-010-0297-2.
56. Akabas, M, Brass, L, Tartakovsky, I. National MD-PhD program outcomes study. Association of American Medical Colleges. 2018. (https://www.aamc.org/data-reports/workforce/report/national-md-phd-program-outcomes-study) Accessed April 6, 2023.
57. Dev, AT, Kauf, TL, Zekry, A, et al. Factors influencing the participation of gastroenterologists and hepatologists in clinical research. BMC Health Serv Res. 2008;8:208. doi: 10.1186/1472-6963-8-208.
58. Garrison, HH, Deschamps, AM. NIH research funding and early career physician scientists: continuing challenges in the 21st century. FASEB J. 2014;28(3):1049–1058. doi: 10.1096/fj.13-241687.
59. Halvorson, MA, Finlay, AK, Cronkite, RC, et al. Ten-year publication trajectories of health services research career development award recipients: collaboration, awardee characteristics, and productivity correlates. Eval Health Prof. 2016;39(1):49–64. doi: 10.1177/0163278714542848.
60. Libby, AM, Hosokawa, PW, Fairclough, DL, Prochazka, AV, Jones, PJ, Ginde, AA. Grant success for early-career faculty in patient-oriented research: difference-in-differences evaluation of an interdisciplinary mentored research training program. Acad Med. 2016;91(12):1666–1675. doi: 10.1097/acm.0000000000001263.
61. Prasad, V, Goldstein, JA. US news and world report cancer hospital rankings: do they reflect measures of research productivity? PLoS One. 2014;9(9):1–6. doi: 10.1371/journal.pone.0107803.
62. Sweeney, C, Schwartz, LS, Toto, R, Merchant, C, Fair, AS, Gabrilove, JL. Transition to independence: characteristics and outcomes of mentored career development (KL2) scholars at clinical and translational science award institutions. Acad Med. 2017;92(4):556–562. doi: 10.1097/acm.0000000000001473.
63. Brass, LF, Akabas, MH. The national MD-PhD program outcomes study: relationships between medical specialty, training duration, research effort, and career paths. JCI Insight. 2019;4(19):1–10. doi: 10.1172/jci.insight.133009.
64. Joss-Moore, LA, Lane, RH, Rozance, PJ, Bird, I, Albertine, KH. Perinatal research society’s young investigator workshop prepares the next generation of investigators. Reprod Sci. 2022;29(4):1271–1277. doi: 10.1007/s43032-021-00836-4.
65. Robinson, GF, Schwartz, LS, DiMeglio, LA, Ahluwalia, JS, Gabrilove, JL. Understanding career success and its contributing factors for clinical and translational investigators. Acad Med. 2016;91(4):570–582. doi: 10.1097/acm.0000000000000979.
66. Alison, JA, Zafiropoulos, B, Heard, R. Key factors influencing allied health research capacity in a large Australian metropolitan health district. J Multidiscip Healthc. 2017;10:277–291. doi: 10.2147/JMDH.S142009.
67. Sarli, CC, Dubinsky, EK, Holmes, KL. Beyond citation analysis: a model for assessment of research impact. J Med Libr Assoc. 2010;98(1):17–23. doi: 10.3163/1536-5050.98.1.008.
68. Smith, H, Wright, D, Morgan, S, Dunleavey, J, Moore, M. The research spider: a simple method of assessing research experience. Prim Health Care Res Dev. 2002;3(3):139–140. doi: 10.1191/1463423602pc102xx.
69. Ommering, BWC, van Blankenstein, FM, van Diepen, M, Dekker, FW. Academic success experiences: promoting research motivation and self-efficacy beliefs among medical students. Teach Learn Med. 2021;33(4):423–433. doi: 10.1080/10401334.2021.1877713.
70. Yan, P, Lao, Y, Lu, Z, et al. Health research capacity of professional and technical personnel in a first-class tertiary hospital in northwest China: multilevel repeated measurement, 2013–2017, a pilot study. Health Res Policy Syst. 2020;18(1):103. doi: 10.1186/s12961-020-00616-7.
71. Kalet, A, Lusk, P, Rockfeld, J, et al. The challenges, joys, and career satisfaction of women graduates of the Robert Wood Johnson clinical scholars program 1973–2011. J Gen Intern Med. 2020;35(8):2258–2265. doi: 10.1007/s11606-020-05715-3.
72. Dzirasa, K, Krishnan, RR, Williams, RS. Incubating the research independence of a medical scientist training program graduate: a case study. Acad Med. 2015;90(2):176–179. doi: 10.1097/acm.0000000000000568.
73. Goldstein, AM, Blair, AB, Keswani, SG, et al. A roadmap for aspiring surgeon-scientists in today’s healthcare environment. Ann Surg. 2019;269(1):66–72. doi: 10.1097/sla.0000000000002840.
74. Lowenstein, SR, Fernandez, G, Crane, LA. Medical school faculty discontent: prevalence and predictors of intent to leave academic careers. BMC Med Educ. 2007;7(1):37. doi: 10.1186/1472-6920-7-37.
75. Feldon, DF, Litson, K, Jeong, S, et al. Postdocs’ lab engagement predicts trajectories of PhD students’ skill development. Proc Natl Acad Sci U S A. 2019;116(42):20910–20916. doi: 10.1073/pnas.1912488116.
76. Dundar, H, Lewis, DR. Determinants of research productivity in higher education. Res High Educ. 1998;39(6):607–631. doi: 10.1023/a:1018705823763.
77. Monsura, MP, Dizon, RL, Tan, CG Jr, Gapasin, ARP. Why research matter?: an evaluative study of research productivity performance of the faculty members of the Polytechnic University of the Philippines. J Pharm Negat Results. 2022;13:680–694. doi: 10.47750/pnr.2022.13.S06.097.
78. Rubio, DM, Robinson, G, Gabrilove, J, Meagher, EA. Creating effective career development programs. J Clin Transl Sci. 2017;1(2):83–87. doi: 10.1017/cts.2016.30.
79. Saleh, M, Naik, G, Jester, P, et al. Clinical investigator training program (CITP) – a practical and pragmatic approach to conveying clinical investigator competencies and training to busy clinicians. Contemp Clin Trials Commun. 2020;19:100589. doi: 10.1016/j.conctc.2020.100589.
80. Skinnider, MA, Squair, JW, Twa, DDW, et al. Characteristics and outcomes of Canadian MD/PhD program graduates: a cross-sectional survey. CMAJ Open. 2017;5(2):E308–E314. doi: 10.9778/cmajo.20160152.
81. Comeau, DL, Escoffery, C, Freedman, A, Ziegler, TR, Blumberg, HM. Improving clinical and translational research training: a qualitative evaluation of the Atlanta clinical and translational science institute KL2-mentored research scholars program. J Investig Med. 2017;65(1):23–31. doi: 10.1136/jim-2016-000143.
82. Guillet, R, Holloway, RG, Gross, RA, Libby, K, Shapiro, JR. Junior faculty core curriculum to enhance faculty development. J Clin Transl Sci. 2017;1(2):77–82. doi: 10.1017/cts.2016.29.
83. Robb, SL, Kelly, TH, King, VL, Blackard, JT, McGuire, PC. Visiting scholars program to enhance career development among early-career KL2 investigators in clinical and translational science: implications from a quality improvement assessment. J Clin Transl Sci. 2020;5(1):e67. doi: 10.1017/cts.2020.564.
84. Smyth, SS, Coller, BS, Jackson, RD, et al. KL2 scholars’ perceptions of factors contributing to sustained translational science career success. J Clin Transl Sci. 2022;6(1):e124. doi: 10.1017/cts.2021.886.
85. Connors, MC, Pacchiano, DM, Stein, AG, Swartz, MI. Building capacity for research and practice: a partnership approach. Future Child. 2021;31(1):119–135. doi: 10.1353/foc.2021.0003.
86. Ari, MD, Iskander, J, Araujo, J, et al. A science impact framework to measure impact beyond journal metrics. PLoS One. 2020;15(12):e0244407. doi: 10.1371/journal.pone.0244407.
87. Bland, CJ, Center, BA, Finstad, DA, Risbey, KR, Staples, JG. A theoretical, practical, predictive model of faculty and department research productivity. Acad Med. 2005;80(3):225–237. doi: 10.1097/00001888-200503000-00006.
88. Heslin, PA. Conceptualizing and evaluating career success. J Organ Behav. 2005;26(2):113–136. doi: 10.1002/job.270.
89. Ommering, BWC, Dekker, FW. Medical students’ intrinsic versus extrinsic motivation to engage in research as preparation for residency. Perspect Med Educ. 2017;6(6):366–368. doi: 10.1007/s40037-017-0388-3.
90. Pager, S, Holden, L, Golenko, X. Motivators, enablers, and barriers to building allied health research capacity. J Multidiscip Healthc. 2012;5:53–59. doi: 10.2147/jmdh.S27638.
91. Tigges, BB, Sood, A, Dominguez, N, Kurka, JM, Myers, OB, Helitzer, D. Measuring organizational mentoring climate: importance and availability scales. J Clin Transl Sci. 2020;5(1):e53. doi: 10.1017/cts.2020.547.
92. Yu, F, Van, AA, Patel, T, et al. Bibliometrics approach to evaluating the research impact of CTSAs: a pilot study. J Clin Transl Sci. 2020;4(4):336–344. doi: 10.1017/cts.2020.29.
93. Fudge, N, Sadler, E, Fisher, HR, Maher, J, Wolfe, CD, McKevitt, C. Optimising translational research opportunities: a systematic review and narrative synthesis of basic and clinician scientists’ perspectives of factors which enable or hinder translational research. PLoS One. 2016;11(8):e0160475. doi: 10.1371/journal.pone.0160475.
94. Bice, MR, Hollman, A, Ball, J, Hollman, T. Mentorship: an assessment of faculty scholarly production, mode of doctoral work, and mentorship. Am J Distance Educ. 2022;36(3):208–226. doi: 10.1080/08923647.2021.1941724.
95. McRae, M, Zimmerman, KM. Identifying components of success within health sciences-focused mentoring programs through a review of the literature. Am J Pharm Educ. 2019;83(1):6976. doi: 10.5688/ajpe6976.
96. Fleming, M, House, S, Hanson, VS, et al. The mentoring competency assessment: validation of a new instrument to evaluate skills of research mentors. Acad Med. 2013;88(7):1002–1008. doi: 10.1097/ACM.0b013e318295e298.
97. Hyun, SH, Rogers, JG, House, SC, Sorkness, CA, Pfund, C. Revalidation of the mentoring competency assessment to evaluate skills of research mentors: the MCA-21. J Clin Transl Sci. 2022;6(1):e46. doi: 10.1017/cts.2022.381.
98. Hyun, SH, Rogers, JG, House, SC, Sorkness, CA, Pfund, C. Erratum: re-validation of the mentoring competency assessment to evaluate skills of research mentors: the MCA-21 – corrigendum. J Clin Transl Sci. 2024;8(1):e188. doi: 10.1017/cts.2024.637.
99. Pololi, LH, Knight, SM, Dennis, K, Frankel, RM. Helping medical school faculty realize their dreams: an innovative, collaborative mentoring program. Acad Med. 2002;77(5):377–384. doi: 10.1097/00001888-200205000-00005.
100. Cordrey, T, King, E, Pilkington, E, Gore, K, Gustafson, O. Exploring research capacity and culture of allied health professionals: a mixed methods evaluation. BMC Health Serv Res. 2022;22(1):85. doi: 10.1186/s12913-022-07480-x.
101. Oetzel, JG, Zhou, C, Duran, B, et al. Establishing the psychometric properties of constructs in a community-based participatory research conceptual model. Am J Health Promot. 2015;29(5):e188–e202. doi: 10.4278/ajhp.130731-QUAN-398.
102. Greenwald, HP, Zukoski, AP. Assessing collaboration: alternative measures and issues for evaluation. Am J Eval. 2018;39(3):322–335. doi: 10.1177/1098214017743813.
103. Walters, SJ, Stern, C, Robertson-Malt, S. The measurement of collaboration within healthcare settings: a systematic review of measurement properties of instruments. JBI Database System Rev Implement Rep. 2015;14(4):138–197. doi: 10.11124/jbisrir-2016-2159.
104. Pelfrey, CM, Cain, KD, Lawless, ME, Pike, E, Sehgal, AR. A consult service to support and promote community-based research: tracking and evaluating a community-based research consult service. J Clin Transl Sci. 2017;1(1):33–39. doi: 10.1017/cts.2016.5.
105. Eder, MM, Evans, E, Funes, M, et al. Defining and measuring community engagement and community-engaged research: clinical and translational science institutional practices. Prog Community Health Partnersh. 2018;12(2):145–156. doi: 10.1353/cpr.2018.0034.
106. Eder, MM, Carter-Edwards, L, Hurd, TC, Rumala, BB, Wallerstein, N. A logic model for community engagement within the clinical and translational science awards consortium: can we measure what we model? Acad Med. 2013;88(10):1430–1436. doi: 10.1097/ACM.0b013e31829b54ae.
107. Patten, CA, Albertie, ML, Chamie, CA, et al. Addressing community health needs through community engagement research advisory boards. J Clin Transl Sci. 2019;3(2–3):125–128. doi: 10.1017/cts.2019.366.
108. Holzer, J, Kass, N. Understanding the supports of and challenges to community engagement in the CTSAs. Clin Transl Sci. 2015;8(2):116–122. doi: 10.1111/cts.12205.
109. Buxton, M, Hanney, S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1(1):35–43. doi: 10.1177/135581969600100107.
110. Ziegahn, L, Joosten, Y, Nevarez, L, et al. Collaboration and context in the design of community-engaged research training. Health Promot Pract. 2021;22(3):358–366. doi: 10.1177/1524839919894948.
111. Trochim, WM, Marcus, SE, Mâsse, LC, Moser, RP, Weld, PC. The evaluation of large research initiatives. Am J Eval. 2008;29(1):8–28. doi: 10.1177/1098214007309280.
112. Hall, KL, Stokols, D, Moser, RP, et al. The collaboration readiness of transdisciplinary research teams and centers: findings from the national cancer institute’s TREC year-one evaluation study. Am J Prev Med. 2008;35(2 Suppl):S161–S172. doi: 10.1016/j.amepre.2008.03.035.
113. Hawk, LW Jr., Murphy, TF, Hartmann, KE, Burnett, A, Maguin, E. A randomized controlled trial of a team science intervention to enhance collaboration readiness and behavior among early career scholars in the clinical and translational science award network. J Clin Transl Sci. 2024;8(1):e6. doi: 10.1017/cts.2023.692.
114. Lee, S, Bozeman, B. The impact of research collaboration on scientific productivity. Soc Stud Sci. 2005;35(5):673–702. doi: 10.1177/0306312705052359.
115. Mazumdar, M, Messinger, S, Finkelstein, DM, et al. Evaluating academic scientists collaborating in team-based research: a proposed framework. Acad Med. 2015;90(10):1302–1308. doi: 10.1097/acm.0000000000000759.
116. Inkelas, M, Brown, AF, Vassar, SD, et al. Enhancing dissemination, implementation, and improvement science in CTSAs through regional partnerships. Clin Transl Sci. 2015;8(6):800–806. doi: 10.1111/cts.12348.
117. Banzi, R, Moja, L, Pistotti, V, Facchini, A, Liberati, A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst. 2011;9(1):26. doi: 10.1186/1478-4505-9-26.
118. Canadian Academy of Health Sciences, Panel on Return on Investment in Health Research. Making an impact: a preferred framework and indicators to measure returns on investment in health research. Canadian Academy of Health Sciences. 2009. (https://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf) Accessed February 9, 2024.
119. Croxson, B, Hanney, S, Buxton, M. Routine monitoring of performance: what makes health research and development different? J Health Serv Res Policy. 2001;6(4):226–232. doi: 10.1258/1355819011927530.
120. Dembe, AE, Lynch, MS, Gugiu, PC, Jackson, RD. The translational research impact scale: development, construct validity, and reliability testing. Eval Health Prof. 2014;37(1):50–70. doi: 10.1177/0163278713506112.
121. Donovan, C, Butler, L, Butt, AJ, Jones, TH, Hanney, SR. Evaluation of the impact of national breast cancer foundation-funded research. Med J Aust. 2014;200(4):214–218. doi: 10.5694/mja13.10798.
122. Luke, DA, Sarli, CC, Suiter, AM, et al. The translational science benefits model: a new framework for assessing the health and societal benefits of clinical and translational sciences. Clin Transl Sci. 2018;11(1):77–84. doi: 10.1111/cts.12495.
123. U.S. Congress, Office of Technology Assessment. Research funding as an investment: can we measure the returns? A technical memorandum. U.S. Congress, OTA. 1986. (https://repository.library.georgetown.edu/bitstream/handle/10822/708346/8622.PDF) NTIS order #PB86-218278. Accessed March 17, 2021.
124. Greenhalgh, T, Raftery, J, Hanney, S, Glover, M. Research impact: a narrative review. BMC Med. 2016;14(1):78. doi: 10.1186/s12916-016-0620-8.
125. Center for Leading Innovation and Collaboration. Measures of impact working group: final report. 2020. (https://clic-ctsa.org/node/6481) Accessed March 23, 2021.
126. Smith, R. Measuring the social impact of research. BMJ. 2001;323(7312):528. doi: 10.1136/bmj.323.7312.528.
127. Van Eerd, D, Moser, C, Saunders, R. A research impact model for work and health. Am J Ind Med. 2021;64(1):3–12. doi: 10.1002/ajim.23201.
128. Grazier, KL, Trochim, WM, Dilts, DM, Kirk, R. Estimating return on investment in translational research: methods and protocols. Eval Health Prof. 2013;36(4):478–491. doi: 10.1177/0163278713499587.
129. Cruz Rivera, S, Kyte, DG, Aiyegbusi, OL, Keeley, TJ, Calvert, MJ. Assessing the impact of healthcare research: a systematic review of methodological frameworks. PLoS Med. 2017;14(8):e1002370. doi: 10.1371/journal.pmed.1002370.
130. Bowden, JA, Sargent, N, Wesselingh, S, Size, L, Donovan, C, Miller, CL. Measuring research impact: a large cancer research funding programme in Australia. Health Res Policy Syst. 2018;16(1):39. doi: 10.1186/s12961-018-0311-3.
131. Nason, E, Curran, B, Hanney, S, et al. Evaluating health research funding in Ireland: assessing the impacts of the health research board of Ireland’s funding activities. Res Eval. 2011;20(3):193–200. doi: 10.3152/095820211X12941371876823.
132. Scott, JE, Blasinsky, M, Dufour, M, Mandal, RJ, Philogene, GS. An evaluation of the mind-body interactions and health program: assessing the impact of an NIH program using the payback framework. Res Eval. 2011;20(3):185–192. doi: 10.3152/095820211X12941371876661.
133. Wooding, S, Hanney, S, Buxton, M, Grant, J. Payback arising from research funding: evaluation of the arthritis research campaign. Rheumatology (Oxford). 2005;44(9):1145–1156. doi: 10.1093/rheumatology/keh708.
134. Blanchard, M, Burton, MC, Geraci, MW, et al. Best practices for physician-scientist training programs: recommendations from the alliance for academic internal medicine. Am J Med. 2018;131(5):578–584. doi: 10.1016/j.amjmed.2018.01.015.
135. Sorkness, CA, Scholl, L, Fair, AM, Umans, JG. KL2 mentored career development programs at clinical and translational science award hubs: practices and outcomes. J Clin Transl Sci. 2020;4(1):43–52. doi: 10.1017/cts.2019.424.
136. Brennan, SE, McKenzie, JE, Turner, T, et al. Development and validation of SEER (Seeking, Engaging with and Evaluating Research): a measure of policymakers’ capacity to engage with and use research. Health Res Policy Syst. 2017;15(1):1. doi: 10.1186/s12961-016-0162-8.
137. Holden, L, Pager, S, Golenko, X, Ware, RS. Validation of the research capacity and culture (RCC) tool: measuring RCC at individual, team and organisation levels. Aust J Prim Health. 2012;18(1):62–67. doi: 10.1071/PY10081.
138. Mills, BA, Caetano, R, Rhea, AE. Factor structure of the clinical research appraisal inventory (CRAI). Eval Health Prof. 2014;37(1):71–82. doi: 10.1177/0163278713500303.
139. The Global Health Network. Using the TDR global competency framework for clinical research: a set of tools to help develop clinical researchers. World Health Organization. 2016. (https://media.tghn.org/articles/TDR_Framework_Full_Competency_Tools_20161101_compressed.pdf) Accessed January 26, 2024.
140. Paiva, CE, Araujo, RL, Paiva, BS, et al. What are the personal and professional characteristics that distinguish the researchers who publish in high- and low-impact journals? A multi-national web-based survey. Ecancermedicalscience. 2017;11:718. doi: 10.3332/ecancer.2017.718.
141. Szomszor, M, Adie, E. Overton – a bibliometric database of policy document citations. Quant Sci Stud. 2022;3:127. doi: 10.1162/qss_a_00204.
142. Llewellyn, N, Carter, DR, DiazGranados, D, Pelfrey, C, Rollins, L, Nehl, EJ. Scope, influence, and interdisciplinary collaboration: the publication portfolio of the NIH clinical and translational science awards (CTSA) program from 2006 through 2017. Eval Health Prof. 2020;43(3):169–179. doi: 10.1177/0163278719839435.
Figure 1. Search Process and Results.

Figure 2. Assigned Research Categories and Focus Areas Used in Coding. (Note: *Codes developed a priori.)

Table 1. Bibliometric measures.

Table 2. Survey tools.