
A systematic review of health economic evaluation quality assessment instruments for medical devices

Published online by Cambridge University Press:  10 July 2025

Ilke Akpinar*
Affiliation:
College of Health Sciences, School of Public Health, University of Alberta, Edmonton, AB, Canada
Ali Unsal
Affiliation:
Institute of Health Economics, Industry Partnership, Edmonton, AB, Canada
Mike Paulden
Affiliation:
College of Health Sciences, School of Public Health, University of Alberta, Edmonton, AB, Canada
Jeff Round
Affiliation:
Faculty of Medicine and Dentistry, Pediatrics Department, University of Alberta, Edmonton, AB, Canada
Corresponding author: Ilke Akpinar; Email: ilke@ualberta.ca

Abstract

Objectives

Health economic evaluations are important for healthcare resource allocation. Reviews of health economic evaluations for medical devices have highlighted concerns about the quality of these studies. The complexity of medical devices, including learning curve effects, organizational impact, dynamic pricing, insufficient evidence, and incremental innovation, presents unique challenges compared with pharmaceuticals. To support the development of a methodological quality assessment instrument for medical device economic evaluations, we conducted a systematic review to identify and evaluate existing economic evaluation quality assessment instruments for their suitability in medical device evaluations.

Methods

A comprehensive search of databases (MEDLINE, EMBASE, EconLit, CINAHL, and Web of Science) and grey literature was conducted. Two reviewers screened titles and abstracts. Full-text, peer-reviewed primary studies introducing original instruments were included. Only methodological quality assessment instruments were considered for data extraction. Each item was assessed for its suitability in evaluating medical device economic evaluations and inclusion of medical device-specific features.

Results

The search identified 4,203 citations and 77 grey literature sources. Fifteen records underwent full-text assessment, and five relevant instruments were identified. A previous systematic review identified 10 additional instruments, which we also considered. Of these 25 articles, 13 were included in the review. These instruments lack specificity for medical devices, particularly in addressing features such as learning curve effects, organizational impact, and incremental innovation. A suitable instrument should include items specific to these unique characteristics.

Conclusions

Existing instruments contain general items related to health economic evaluation studies, highlighting the need for an instrument specifically tailored to evaluate the methodological quality of medical device economic evaluation studies.

Information

Type
Assessment
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Background

Health economic evaluations are valuable tools for guiding policymakers in allocating scarce healthcare resources. Quality assessment is important for maintaining methodological standards to obtain valid and reliable results (Reference Ben, van Dongen and El Alili1). It enhances study transparency and reproducibility and facilitates appropriate resource allocation within healthcare systems (Reference Husereau, Drummond and Augustovski2). Reviews conducted since 2015 on health economic evaluations for medical devices have highlighted concerns about the quality of these studies (Reference Martelli, Devaux and van den Brink3;Reference Fontenay, Catarino and Snoussi4) indicating that they are often insufficient to address the important features of medical devices (Reference Craig, Carr and Hutton5;6). The value, accessibility, and affordability of new medical devices are critical considerations for patients, healthcare providers, and health systems, alongside their effectiveness and safety. The cost-effectiveness of these technologies and the most appropriate ways to evaluate them are of increasing importance (Reference Craig, Carr and Hutton5).

The economic evaluation of medical devices differs from pharmaceuticals in several important ways (Reference Drummond, Griffin and Tarricone7–Reference Tarricone, Torbica and Drummond10). Key considerations include limited clinical and economic evidence (Reference Craig, Carr and Hutton5;Reference Sorenson, Tarricone, Siebert and Drummond11;Reference Zwanziger, Hall and Dick12), learning curve effects (Reference Craig, Carr and Hutton5;Reference Drummond, Griffin and Tarricone7), organizational impact (Reference Craig, Carr and Hutton5;Reference Drummond, Griffin and Tarricone7;Reference Sorenson, Tarricone, Siebert and Drummond11), incremental innovation (Reference Craig, Carr and Hutton5;Reference Drummond, Griffin and Tarricone7;Reference Tarricone, Torbica and Drummond10;Reference Basu, Eggington, Hallas and Strachan13;Reference Kirisits and Redekop14), dynamic pricing (Reference Daubner-Bendes, Kovacs and Niewada15;Reference Akpinar16), diversity in device types and applications (Reference Kirisits and Redekop14), and challenges with transferability of results (Reference Kirisits and Redekop14).

‘Insufficient evidence’ in the evaluation of medical devices refers to the limitations of the available clinical studies, including lack of randomization, small sample sizes, and short follow-up periods. These limitations make it difficult to draw definitive conclusions about the effectiveness and cost-effectiveness of devices in real-world settings (Reference Craig, Carr and Hutton5;Reference Sorenson, Tarricone, Siebert and Drummond11;Reference Zwanziger, Hall and Dick12). The ‘learning curve’ describes the improvement in user proficiency over time (Reference Healey and Samanta17). The ‘organizational impact’ of a medical device includes various factors affecting its adoption, use, and integration within the healthcare system, with user education and organizational adjustments being essential for maximizing its benefits (Reference Craig, Carr and Hutton5). ‘Incremental innovation’ in medical devices refers to the continuous improvements and modifications made over the device’s lifecycle (Reference Drummond, Griffin and Tarricone7;Reference Tarricone, Torbica and Drummond10;Reference Basu, Eggington, Hallas and Strachan13;Reference Kirisits and Redekop14). ‘Dynamic pricing’ in the context of medical devices refers to the fluctuating costs associated with new devices and their consumables, influenced by factors such as market monopolies, manufacturer pricing strategies, and ongoing incremental innovations (Reference Drummond, Griffin and Tarricone7;Reference Sorenson, Tarricone, Siebert and Drummond11;Reference Kirisits and Redekop14). ‘Diversity’ in medical devices refers to the range of differences in complexity, features, usability, technological specifications, and clinical settings (Reference Kirisits and Redekop14). ‘Transferability’ in medical device economic evaluations refers to the challenge of applying cost-effectiveness results across different healthcare settings, often complicated by variations in device features, clinical usage, and additional cost components, all of which increase uncertainty (Reference Kirisits and Redekop14).

A more rigorous approach is necessary to explore the impact of these various aspects; however, to our knowledge, there is currently no methodological quality assessment instrument designed specifically for medical device economic evaluations. To qualify as ‘specifically tailored’ for medical devices, an instrument should incorporate criteria or items that adequately address one or more of the seven features defined above as essential to their evaluation.

In 2012, the Agency for Healthcare Research and Quality (AHRQ) in the United States conducted a systematic literature review to assess the best practices for conducting and reporting health economic evaluations (Reference Walker, Wilson and Sharma18). Ten quality assessment instruments (Reference Chiou, Hay and Wallace19–Reference Grutters, Seferina and Tjan-Heijnen28) published between 1992 and 2011 were identified. To identify additional instruments, including those published after 2012 and in grey literature sources, we conducted a systematic literature review of methodological quality assessment instruments for medical device economic evaluations. This review aims to capture recent advancements and address specific considerations for medical device economic evaluations, which were not thoroughly covered in the prior review.

Our primary aim was to identify, summarize, and assess the relevance of existing instruments for evaluating medical device economic evaluations, focusing on the seven defined medical device-specific features. This review has two key objectives: (i) to determine whether any existing quality assessment instruments are specifically tailored for medical device economic evaluations, and (ii) in the absence of a suitable instrument, to evaluate each item within current methodological quality assessment instruments for its potential inclusion in a new, device-specific instrument. These items will be assessed based on their relevance to the seven device-specific features and included in a Delphi pool for expert consensus in the next phase of this project.

Methods

The reporting of this systematic review was guided by the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement (Reference Page, McKenzie and Bossuyt29). The PRISMA checklists are available in Supplementary Materials 2 and 3. Our protocol, “Health Economic Evaluation Methodological Quality Assessment Tools: A protocol for a systematic review,” was registered with the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY DOI: 10.37766/inplasy2023.7.0093).

Eligibility criteria

Eligible studies were full-text, peer-reviewed primary studies introducing original instruments designed to assess economic evaluations of medical devices. Updated versions of instruments offering a different perspective were also included.

We excluded studies that focus on frameworks or guidelines for conducting economic evaluations, as well as those that adopt an original tool or checklist for purposes other than medical device economic evaluation, or that describe or validate an existing instrument. Reviews (scoping, rapid, systematic, literature), editorials, commentaries, conference abstracts, dissertations, and theses were also excluded, as were studies published in a language other than English.

Information sources

We searched electronic databases—Ovid MEDLINE, Ovid EMBASE, CINAHL (via EBSCOhost), EconLit (via EBSCOhost), and Web of Science (via its online interface)—for English-language literature published between January 1, 2012 and May 24, 2023, with the final search conducted on May 24, 2023. A grey literature search was performed between November 16 and November 30, 2023, using the CADTH Grey Matters tool (30), as well as the International Network of Agencies for Health Technology Assessment (INAHTA) database and the Professional Society for Health Economics and Outcomes Research (ISPOR) website, to locate relevant guidance documents and instruments.

Search strategy

A systematic review search strategy was designed in collaboration with a University of Alberta Health Sciences librarian experienced in systematic reviews. We used the search strategy from the previous systematic review (Reference Walker, Wilson and Sharma18) as a foundation and made several adjustments to improve its relevance to our study. While the previous review used terms like “cost–benefit analysis,” “cost of illness,” and “economic evaluation” to capture economic analyses, we expanded the scope to specifically identify studies on quality assessment tools. To achieve this, we introduced terms such as “checklist,” “tool,” and “questionnaire,” and included the names of widely used checklists. Search terms included a combination of controlled vocabulary (e.g., Medical Subject Headings and EMBASE Subject Headings) and relevant keywords related to medical device economic evaluations. Additionally, we reviewed the reference lists of included articles to identify further studies. The full electronic search strategy, including all limits and filters applied, is provided in Supplementary Material 1.

Selection process

The results of the initial searches were downloaded into EndNote (31) reference manager. Duplicate articles retrieved from multiple databases were removed, and the remaining articles were uploaded to Covidence (32), a web-based systematic review manager. Covidence was used to track the search results throughout the title and abstract review, article selection, and data extraction stages.

Titles and abstracts of all citations identified in the searches were screened in duplicate (IA, AU) to assess potential relevance. The full text of any potentially relevant articles was also assessed in duplicate against the selection criteria. Discrepancies were resolved by consensus, with a third reviewer (MP) providing arbitration as necessary. Tool and checklist eligibility was determined using the definitions of Zoratti et al. (Reference Zoratti, Pickard and Stalmeier33). Reporting checklists were defined as “instruments that are used to evaluate the presence or absence of components without value on that component’s use.” Critical appraisal tools were defined as “an extension of reporting checklists and include some interpretation or evaluation of the reported content.” (Reference Zoratti, Pickard and Stalmeier33)

Although methods such as sensitivity analyses, risk of bias assessment for missing results, and assessment of certainty in the evidence are relevant to many systematic reviews, they were not directly applicable to our synthesis of methodological quality assessment tools. Our review focuses on evaluating and synthesizing existing methodological quality assessment tools to assess their effectiveness and applicability in medical device economic evaluations, rather than synthesizing quantitative outcomes, and assessing certainty in the evidence is better suited to clinical or outcome-based reviews than to methodological reviews. Instead, we focused on ensuring methodological rigor in study selection and maintaining transparency throughout the synthesis process.

Data collection process

One reviewer (IA) extracted the data from each article into a data extraction form developed by IA. A second reviewer (AU) cross-checked all extracted data for accuracy and consistency. Data discrepancies within articles were noted; in such cases, data were extracted preferentially from summary tables and supplemented by the main text as needed.

Data items

The following data elements were extracted:

  • Descriptive characteristics of the published instruments (e.g., name, first author, year of publication, author affiliation, journal, number of items, item response options, intended use, target audience, the methods of development, funding source, any validation data)

  • Only from methodological quality assessment instruments:

    • Each item and its appropriateness to assess medical device economic evaluations.

    • Content review with respect to the seven medical device-specific features (insufficient evidence, learning curve effects, organizational impact, incremental innovation, dynamic pricing, diversity, and transferability of the results).

Results

Study selection

In the initial electronic literature search, 6,002 records were identified. After removing duplicates, a total of 4,280 records remained. Screening of 4,203 titles and abstracts and 77 grey literature sources led to the retrieval of 15 articles for full-text review. The 2012 review by Walker et al. (Reference Walker, Wilson and Sharma18) identified 10 instruments that we also considered. Of these 25 articles, 15 were deemed eligible and included in the review (Reference Husereau, Drummond and Augustovski2;Reference Chiou, Hay and Wallace19–Reference Grutters, Seferina and Tjan-Heijnen28;Reference Grimm, Pouwels and Ramaekers34–Reference Kip, IJzerman and Henriksson37). The PRISMA flowchart detailing the process of study selection and exclusion is provided in Figure 1.

Figure 1. PRISMA flow diagram.

Source: Page MJ, et al. BMJ 2021;372:n71. doi: 10.1136/bmj.n71. This work is licensed under CC BY 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/

After a full-text review, two instruments (Reference Husereau, Drummond and Augustovski2;Reference Kip, IJzerman and Henriksson37) were excluded because they were specifically designed as reporting quality assessment instruments.

Study characteristics

Thirteen instruments (Reference Chiou, Hay and Wallace19–Reference Grutters, Seferina and Tjan-Heijnen28;Reference Grimm, Pouwels and Ramaekers34–Reference Kim, Do and Synnott36) were designed for the methodological quality assessment of health economic evaluations in general, rather than with a specific focus on medical device economic evaluations. Detailed characteristics of each instrument are summarized in Table 1.

Table 1. Included quality assessment instruments

AGREEDT: Alignment in the Reporting of Economic Evaluations of Diagnostic Tests and Biomarkers; BMJ: British Medical Journal; CEA: cost-effectiveness analysis; CHEERS 2022: Consolidated Health Economic Evaluation Reporting Standards 2022; CHEQUE: Criteria for Health Economic Quality Evaluation; CUA: cost-utility analysis; NA: not applicable; TRUST: Transparent Uncertainty Assessment.

The included studies encompass a variety of instruments designed to assess health economic evaluations. These instruments, developed between 1992 and 2023, vary in their number of items, with some containing as few as 16 (Reference Chiou, Hay and Wallace19) and others as many as 91 (Reference Inotai, Pekli and Jona35). The item response options also vary, including open-ended responses (Reference Gerard24–Reference Grutters, Seferina and Tjan-Heijnen28), yes/no options (Reference Chiou, Hay and Wallace19;Reference Inotai, Pekli and Jona35–Reference Kip, IJzerman and Henriksson37), and weighted scales (Reference Chiou, Hay and Wallace19). Intended uses range from providing guidance for economic analyses in clinical trials (Reference Adams, McCall, Gray, Orza and Chalmers23) to developing evaluation criteria for cost-utility analyses (Reference Gerard24) and assessing the quality of cost-effectiveness studies (Reference Chiou, Hay and Wallace19). The target audiences for quality assessment checklists include researchers (Reference Chiou, Hay and Wallace19;Reference Evers, Goossens, de Vet, van Tulder and Ament20;Reference Ungar and Santos22–Reference Sacristan, Soto and Galende25;Reference Grimm, Pouwels and Ramaekers34–Reference Kim, Do and Synnott36), decision-makers (Reference Chiou, Hay and Wallace19;Reference Ungar and Santos22;Reference Grutters, Seferina and Tjan-Heijnen28;Reference Grimm, Pouwels and Ramaekers34–Reference Kim, Do and Synnott36), policy-makers (Reference Evers, Goossens, de Vet, van Tulder and Ament20;Reference Ungar and Santos22;Reference Gerard24), journal editors (Reference Chiou, Hay and Wallace19;Reference Drummond and Jefferson21;Reference Sacristan, Soto and Galende25), and pharmaceutical industry professionals (Reference Clemens, Townsend and Luscombe26). One checklist’s authors (Reference Siegel, Weinstein, Russell and Gold27) did not explicitly state the target audience. The development methods of these instruments vary, including literature reviews, expert panel reviews, and collaboration with clinicians and policymakers.

Table 2 presents potentially relevant items extracted from various instruments, providing a comprehensive overview of key considerations in medical device economic evaluations. Out of 388 items from the 13 methodological quality assessment instruments, only seven items, found in four instruments (Reference Adams, McCall, Gray, Orza and Chalmers23;Reference Gerard24;Reference Grutters, Seferina and Tjan-Heijnen28;Reference Inotai, Pekli and Jona35), were relevant to medical device economic evaluations. Relevance was assessed by comparing each checklist item against specific criteria developed for medical device economic evaluations, including the seven features outlined in the Background section. The relevant items address dynamic pricing (Reference Adams, McCall, Gray, Orza and Chalmers23;Reference Inotai, Pekli and Jona35), insufficient evidence (Reference Inotai, Pekli and Jona35), learning curve effects (Reference Grutters, Seferina and Tjan-Heijnen28), incremental innovation (Reference Gerard24), and organizational impact (Reference Inotai, Pekli and Jona35). This approach ensured a focus on aspects important to medical devices. None of the instruments identified in the review was deemed suitable as a standalone instrument for evaluating medical device economic evaluations.

Table 2. Medical device-specific features and relevant items by instrument

CUA: cost-utility analysis.

Consequently, we expanded our search to include grey literature sources and economic evaluation methodological guidelines. We reviewed 77 sources but found no existing instruments tailored specifically for medical devices. Relevant information for only five medical device-specific features was found in six guidelines: insufficient evidence (New Zealand); learning curve effects (Japan, the UK, the Netherlands, and New Zealand); incremental innovation (France, Ireland, Japan, the Netherlands, and New Zealand); diversity (the Netherlands); and dynamic pricing (France). None of the guidelines addressed organizational impact or transferability. We also found additional information on medical device economic evaluations from Canada, France, Ireland, New Zealand, and the Netherlands that could not be classified under the defined medical device-specific features. These countries offer recommendations on topics such as resource measurement and costing (Canada, France, New Zealand), adverse effects (Ireland), value components beyond health outcomes (the Netherlands), and outcome measures and evaluation methods (the Netherlands). These relevant items are presented in Table 3.

Table 3. Medical device-specific features and relevant guideline items by country

C2H: Center for Outcomes Research and Economic Evaluation for Health; CT: computed tomography; CEA: cost-effectiveness analysis; CUA: cost-utility analysis; DCE: discrete choice experiment; MCDA: multi-criteria decision analysis.

Discussion

Despite conducting a comprehensive search of peer-reviewed literature, we did not identify any methodological quality assessment instruments specifically tailored for the economic evaluation of medical devices. This gap highlights a critical area where current research is lacking. Although Walker et al. (Reference Walker, Wilson and Sharma18) did not incorporate grey literature, we expanded our search to include grey literature sources to ensure comprehensiveness and avoid overlooking relevant tools. This broader approach was important for several reasons. First, it enabled us to capture existing instruments that might not have been published in academic journals but could still be highly relevant for our review. Second, grey literature often contains supplementary information that supports or contextualizes peer-reviewed studies (Reference Adams, Smart and Huff46–Reference Paez48); guidelines and reports from health technology assessment organizations can provide practical insights into the application and relevance of methodological quality assessment items, which is essential for developing new tools (38;40;41;43). Third, including grey literature helps mitigate publication bias, as not all valuable research is published in peer-reviewed journals (Reference Adams, Smart and Huff46;Reference Paez48). Finally, the lack of medical device-specific instruments identified through both peer-reviewed and grey literature searches underscores the need for targeted research and development in this area.

Without instruments that account for the relevant aspects of medical devices, such as incremental innovation, learning curve effects, and dynamic pricing, it is challenging to conduct robust, high-quality economic evaluations that fully assess a device’s value over time. This gap limits policymakers’ ability to make informed decisions, potentially leading to inefficient resource allocation or delayed adoption of valuable innovations. Addressing this need is essential for establishing a framework that supports rigorous, relevant economic evaluations, ultimately enhancing healthcare quality and efficiency. Considering these gaps, we also examined the relevance of items within the included assessment instruments. Remarkably, only four of the included instruments contained relevant items, seven in total, indicating a substantial gap in the comprehensiveness of tools available for the economic evaluation of medical devices. This finding underscores the need for further development of methodological quality assessment instruments that adequately support rigorous, relevant economic evaluations of medical devices. Incorporating insights from grey literature into the development process can help ensure that new instruments are comprehensive and practically applicable.

Notably, this systematic review addresses a significant gap in the existing literature, as no prior reviews have specifically explored this research question. The review adhered to rigorous standards, as outlined in the protocol registered with INPLASY. We conducted a comprehensive literature search across multiple databases and grey literature, ensuring that a wide range of sources and potential instruments was considered. However, the review has limitations. Only English-language articles were included, which may introduce language bias. While prior studies, such as Morrison et al. (Reference Morrison, Polisena and Husereau49), found no systematic bias from language restrictions in conventional medicine reviews, further research is needed to understand the impact of such restrictions in specialized fields like health economics. Additionally, no specific tool was used to assess the methodological quality of the included studies. Instead, we evaluated instruments based on their development, validation, applicability, previous use, citations, and updates. This approach supported credibility, but the lack of a standardized quality assessment tool highlights another gap in the literature.

Conclusion

Existing instruments cover general items related to the conduct of health economic evaluation studies. However, no instrument currently exists to systematically assess the methodological quality of published economic evaluations for medical devices. To address this gap, future research should focus on developing methodological quality assessment instruments that adequately capture the complexities of medical devices.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/S0266462325000212.

Data availability statement

Data extracted from included studies will be made available upon request.

Funding statement

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Competing interests

The authors declare none.

References

1. Ben AJ, van Dongen JM, El Alili M, et al. Conducting trial-based economic evaluations using R: A tutorial. Pharmacoeconomics. 2023;41(11):1403–1413. doi:10.1007/s40273-023-01301-7
2. Husereau D, Drummond M, Augustovski F, et al. Consolidated health economic evaluation reporting standards 2022 (CHEERS 2022) statement: Updated reporting guidance for health economic evaluations. Pharmacoeconomics. 2022;40(6):601–609. doi:10.1007/s40273-021-01112-8
3. Martelli N, Devaux C, van den Brink H, et al. A systematic review of the level of evidence in economic evaluations of medical devices: The example of vertebroplasty and kyphoplasty. PLoS One. 2015;10(12):e0144892. doi:10.1371/journal.pone.0144892
4. Fontenay S, Catarino L, Snoussi S, et al. Quality of economic evaluations of ventricular assist devices: A systematic review. Int J Technol Assess Health Care. 2020;18:1–8.
5. Craig JA, Carr L, Hutton J, et al. A review of the economic tools for assessing new medical devices. Appl Health Econ Health Policy. 2015;13(1):15–27. doi:10.1007/s40258-014-0123-8
6. World Health Organization. Medical devices: Overview. Updated 31 Aug 2021. Available from: https://www.who.int/health-topics/medical-devices#tab=tab_1. Accessed Oct 12, 2021.
7. Drummond M, Griffin A, Tarricone R. Economic evaluation for devices and drugs – same or different? Value Health. 2009;12(4):402–404. doi:10.1111/j.1524-4733.2008.00476_1.x
8. Drummond M, Tarricone R, Torbica A. Economic evaluation of medical devices. In: Hamilton JH, editor. Oxford research encyclopedia of economics and finance. Oxford: Oxford University Press; 2018.
9. Tarricone R, Callea G, Ogorevc M, Prevolnik Rupel V. Improving the methods for the economic evaluation of medical devices. Health Econ. 2017;26(Suppl 1):70–92. doi:10.1002/hec.3471
10. Tarricone R, Torbica A, Drummond M. Challenges in the assessment of medical devices: The MedtecHTA project. Health Econ. 2017;26(Suppl 1):5–12. doi:10.1002/hec.3469
11. Sorenson C, Tarricone R, Siebert M, Drummond M. Applying health economics for policy decision making: Do devices differ from drugs? Europace. 2011;13(Suppl 2):ii54–ii58. doi:10.1093/europace/eur089
12. Zwanziger J, Hall WJ, Dick AW, et al. The cost effectiveness of implantable cardioverter-defibrillators: Results from the Multicenter Automatic Defibrillator Implantation Trial (MADIT)-II. J Am Coll Cardiol. 2006;47(11):2310–2318. doi:10.1016/j.jacc.2006.03.032
13. Basu R, Eggington S, Hallas N, Strachan L. Are medical device characteristics included in HTA methods guidelines and reports? A brief review. Appl Health Econ Health Policy. 2024;22(5):653–664. doi:10.1007/s40258-024-00896-y
14. Kirisits A, Redekop WK. The economic evaluation of medical devices: Challenges ahead. Appl Health Econ Health Policy. 2013;11(1):15–26. doi:10.1007/s40258-012-0006-9
15. Daubner-Bendes R, Kovacs S, Niewada M, et al. Quo vadis HTA for medical devices in Central and Eastern Europe? Recommendations to address methodological challenges. Front Public Health. 2020;8:612410. doi:10.3389/fpubh.2020.612410
16. Akpinar I. The economic contribution of industry-sponsored medical device clinical trials to health care and health research in Alberta [MSc thesis]. Edmonton (AB): University of Alberta; 2018. doi:10.7939/R3NP1X10N
17. Healey P, Samanta J. When does the ‘learning curve’ of innovative interventions become questionable practice? Eur J Vasc Endovasc Surg. 2008;36(3):253–257. doi:10.1016/j.ejvs.2008.05.006
18. Walker DG, Wilson RF, Sharma R, et al. Best practices for conducting economic evaluations in health care: A systematic review of quality assessment tools. Rockville, MD: Agency for Healthcare Research and Quality; 2012.
19. Chiou CF, Hay JW, Wallace JF, et al. Development and validation of a grading system for the quality of cost-effectiveness studies. Med Care. 2003;41(1):32–44. doi:10.1097/00005650-200301000-00007
20. Evers S, Goossens M, de Vet H, van Tulder M, Ament A. Criteria list for assessment of methodological quality of economic evaluations: Consensus on health economic criteria. Int J Technol Assess Health Care. 2005;21(2):240–245. doi:10.1017/S0266462305050324
21. Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ. 1996;313(7052):275–283. doi:10.1136/bmj.313.7052.275
22. Ungar WJ, Santos MT. The Pediatric Quality Appraisal Questionnaire: An instrument for evaluation of the pediatric health economics literature. Value Health. 2003;6(5):584–594. doi:10.1046/j.1524-4733.2003.65253.x
23. Adams ME, McCall NT, Gray DT, Orza MJ, Chalmers TC. Economic analysis in randomized control trials. Med Care. 1992;30(3):231–243. doi:10.1097/00005650-199203000-00005
24. Gerard K. Cost-utility in practice: A policy maker’s guide to the state of the art. Health Policy. 1992;21(3):249–279. doi:10.1016/0168-8510(92)90022-4
25. Sacristan JA, Soto J, Galende I. Evaluation of pharmacoeconomic studies: Utilization of a checklist. Ann Pharmacother. 1993;27(9):1126–1133. doi:10.1177/106002809302700919
26. Clemens K, Townsend R, Luscombe F, et al. Methodological and conduct principles for pharmacoeconomic research. Pharmaceutical Research and Manufacturers of America. Pharmacoeconomics. 1995;8(2):169–174. doi:10.2165/00019053-199508020-00008
27. Siegel JE, Weinstein MC, Russell LB, Gold MR. Recommendations for reporting cost-effectiveness analyses. Panel on Cost-Effectiveness in Health and Medicine. JAMA. 1996;276(16):1339–1341. doi:10.1001/jama.1996.03540160061034
28. Grutters JP, Seferina SC, Tjan-Heijnen VC, et al. Bridging trial and decision: A checklist to frame health technology assessments for resource allocation decisions. Value Health. 2011;14(5):777–784. doi:10.1016/j.jval.2011.01.005
29. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. doi:10.1136/bmj.n71
30. Canadian Agency for Drugs and Technologies in Health (CADTH). Grey Matters: A tool for searching health-related grey literature. Ottawa: CADTH; 2022 [updated 30 Nov 2023]. Available from: https://greymatters.cadth.ca
31. The EndNote Team. EndNote X9. Philadelphia, PA: Clarivate; 2013.
32. Covidence systematic review software. Melbourne, Australia: Veritas Health Innovation. Available from: www.covidence.org
33. Zoratti MJ, Pickard AS, Stalmeier PFM, et al. Evaluating the conduct and application of health utility studies: A review of critical appraisal tools and reporting checklists. Eur J Health Econ. 2021;22(5):723–733. doi:10.1007/s10198-021-01286-0
34. Grimm SE, Pouwels X, Ramaekers BLT, et al. Development and validation of the transparent uncertainty assessment (TRUST) tool for assessing uncertainties in health economic decision models. Pharmacoeconomics. 2020;38(2):205–216. doi:10.1007/s40273-019-00855-9
35. Inotai A, Pekli M, Jona G, et al. Attempt to increase the transparency of fourth hurdle implementation in Central-Eastern European middle income countries: Publication of the critical appraisal methodology. BMC Health Serv Res. 2012;12:332. doi:10.1186/1472-6963-12-332
36. Kim DD, Do LA, Synnott PG, et al. Developing criteria for health economic quality evaluation tool. Value Health. 2023;26(8):1225–1234. doi:10.1016/j.jval.2023.04.004
37. Kip MMA, IJzerman MJ, Henriksson M, et al. Toward alignment in the reporting of economic evaluations of diagnostic tests and biomarkers: The AGREEDT checklist. Med Decis Making. 2018;38(7):778–788. doi:10.1177/0272989X18797590
38. Pharmac (New Zealand Government). Prescription for pharmacoeconomic analysis: Methods for cost-utility analysis. 2015.
39. Core2 Health. Guideline for preparing cost-effectiveness evaluation to the Central Social Insurance Medical Council. 2022.
40. National Institute for Health and Care Excellence (NICE). NICE health technology evaluations: The manual. NICE; 2022.
41. The Netherlands National Health Care Institute. Guideline for economic evaluations in healthcare (2024 version). National Health Care Institute; 2024.
42. Haute Autorité de Santé (HAS) – the French Health Authority. Methodological guidance: Choices in methods for economic evaluation. HAS; 2020.
43. Health Information and Quality Authority (Ireland). Guidelines for the economic evaluation of health technologies in Ireland. Health Information and Quality Authority; 2019.
44. The Netherlands Health Care Institute. Guideline for economic evaluations in healthcare. Health Care Institute; 2016.
45. Canadian Agency for Drugs and Technologies in Health (CADTH). Guidelines for the economic evaluation of health technologies: Canada. 4th ed. Ottawa: CADTH; 2017.
46. Adams RJ, Smart P, Huff AS. Shades of grey: Guidelines for working with the grey literature in systematic reviews for management and organizational studies. Int J Manag Rev. 2017;19(4):432–454. doi:10.1111/ijmr.12102
47. Mahood Q, Van Eerd D, Irvin E. Searching for grey literature for systematic reviews: Challenges and benefits. Res Synth Methods. 2014;5(3):221–234. doi:10.1002/jrsm.1106
48. Paez A. Grey literature: An important resource in systematic reviews. J Evid Based Med. 2017;10(3):233–240. doi:10.1111/jebm.12266
49. Morrison A, Polisena J, Husereau D, et al. The effect of English-language restriction on systematic review-based meta-analyses: A systematic review of empirical studies. Int J Technol Assess Health Care. 2012;28(2):138–144. doi:10.1017/S0266462312000086