
Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, Boyle and colleagues conducted a systematic review of probiotics for eczema. They answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
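
To make the idea of pooling effect sizes concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis in Python. The study names and numbers are hypothetical and chosen only for illustration; real reviews typically use dedicated tools (e.g., RevMan or R's metafor package) and often a random-effects model, but the weighting principle is the same.

```python
import math

# Hypothetical effect sizes (e.g., standardized mean differences) and their
# standard errors from three studies; these values are made up for illustration.
studies = [
    {"name": "Study A", "effect": -0.30, "se": 0.15},
    {"name": "Study B", "effect": -0.10, "se": 0.20},
    {"name": "Study C", "effect": -0.25, "se": 0.10},
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1 / variance.
weights = [1 / s["se"] ** 2 for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```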

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros .

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)

In the probiotics example, the PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized control trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
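
As a small worked illustration of the question template above, the sketch below (Python, with the PICO components of the probiotics example as plain strings) simply assembles the question from its parts. The dictionary keys and wording are illustrative only.

```python
# Hypothetical PICO components for the probiotics example.
pico = {
    "P": "patients with eczema",
    "I": "probiotics",
    "C": "no treatment, placebo, or non-probiotic treatment",
    "O": "eczema symptoms and quality of life",
}

# "What is the effectiveness of I versus C for O in P?"
question = (
    f"What is the effectiveness of {pico['I']} versus {pico['C']} "
    f"for {pico['O']} in {pico['P']}?"
)
print(question)
```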

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the probiotics review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics
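
Because your database searches must be reported exactly as run, some reviewers script the assembly of their search strings. The sketch below (Python, with hypothetical synonym lists loosely based on the probiotics example) shows the general idea of ORing synonyms within a concept and ANDing the concepts together; a real search would also use database-specific field tags and controlled vocabulary.

```python
# Hypothetical synonym groups for the probiotics-and-eczema question.
concepts = {
    "population": ["eczema", "atopic dermatitis", "atopic eczema"],
    "intervention": ["probiotic*", "lactobacillus", "bifidobacterium"],
}

def or_group(terms):
    """Join synonyms with OR, quoting multi-word phrases, and wrap in parentheses."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# Combine the concept groups with AND to form the full query.
query = " AND ".join(or_group(terms) for terms in concepts.values())
print(query)
# (eczema OR "atopic dermatitis" OR "atopic eczema") AND (probiotic* OR lactobacillus OR bifidobacterium)
```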

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.
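
One common statistic for reporting inter-rater reliability of screening decisions is Cohen's kappa, which corrects the raw agreement rate for agreement expected by chance. Here is a minimal sketch in Python; the reviewer names and include/exclude decisions are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making categorical decisions on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions, summed over categories.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions by two reviewers on ten abstracts.
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude",
              "exclude", "include", "exclude", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude",
              "exclude", "include", "exclude", "exclude", "exclude"]

print(f"Cohen's kappa: {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```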

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .

Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.
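
It also helps to keep the screening record in a structured form from the start, so the counts and exclusion reasons needed for the PRISMA flow diagram can be tallied automatically. A minimal sketch (Python, with hypothetical article IDs and reasons) might look like this.

```python
from collections import Counter

# Hypothetical screening log: one entry per article assessed at the full-text stage.
screening_log = [
    {"id": "smith2010", "decision": "exclude", "reason": "no control group"},
    {"id": "lee2012", "decision": "include", "reason": ""},
    {"id": "garcia2014", "decision": "exclude", "reason": "wrong population"},
    {"id": "chen2015", "decision": "exclude", "reason": "no control group"},
    {"id": "patel2016", "decision": "include", "reason": ""},
]

included = [r for r in screening_log if r["decision"] == "include"]
excluded = [r for r in screening_log if r["decision"] == "exclude"]

print(f"Included: {len(included)}, excluded: {len(excluded)}")
print("Exclusions by reason:", dict(Counter(r["reason"] for r in excluded)))
```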

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .
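
If you prefer a lightweight alternative to paper forms, the same fields can be captured as one structured record per study. The sketch below (Python) uses hypothetical field names and studies; your actual extraction form should mirror the variables defined in your protocol.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class ExtractionRecord:
    """One row of the data extraction form; the field names are illustrative only."""
    study_id: str
    year: int
    design: str
    sample_size: int
    outcome_summary: str
    risk_of_bias: str  # e.g., "low", "some concerns", "high"

records = [
    ExtractionRecord("lee2012", 2012, "RCT", 120, "symptom score reduced", "low"),
    ExtractionRecord("patel2016", 2016, "RCT", 85, "no significant change", "some concerns"),
]

# Write the extraction table to CSV so it can be compared with the second extractor's table.
with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```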

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the probiotics review, the reviewers also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.


Systematic Reviews

  • Who is this guide for and what can be found in it
  • What are systematic reviews?
  • How do systematic reviews differ from narrative literature reviews?


This guide aims to support OHSU members' systematic review education and activities, orienting those who are new to systematic reviews and supporting the quality, rigor, and reproducibility of the reviews they produce.

In it you will find:

  • A definition of what systematic reviews are, how they compare to other evidence, and how they differ from narrative literature reviews
  • Descriptions of the different types of systematic reviews , with links to resources on methods, protocols, reporting, additional information, and selecting the right type of systematic review for your research question
  • Guidance on how to read and evaluate systematic reviews for strength, quality, and potential for bias
  • A high-level overview of how systematic reviews are conducted , including team size and roles, standards, and processes
  • Links to resources and tools for conducting systematic reviews
  • Information about how to get assistance with conducting a systematic review from the OHSU Library
  • A history of systematic reviews to provide contextual understanding of how they have developed over time

"A systematic review is a summary of the medical literature that uses explicit and reproducible methods to systematically search, critically appraise, and synthesize on a specific issue. It synthesizes the results of multiple primary studies related to each other by using strategies that reduce biases and random errors."

Gopalakrishnan S, Ganeshkumar P. Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare. J Family Med Prim Care. 2013;2(1):9-14. doi:10.4103/2249-4863.109934

Systematic Reviews are a vital resource used in the pursuit of Evidence-Based Practice (EBP):

  • These studies can be found near the top of the Evidence Pyramid , which ranks sources of information and study designs by the level of evidence contained within them
  • This ranking is based on the level of scientific rigor employed in their methods and the quality and reliability of the evidence contained within these sources
  • A higher ranking means that we can be more confident that their conclusions are accurate and have taken measures to limit bias

Image: Research design and evidence, by CFCF, CC BY-SA 4.0, via Wikimedia Commons

Things to know about systematic reviews:

  • Systematic reviews are a type of research study
  • Systematic reviews aim to provide a comprehensive and unbiased summary of the existing evidence on a particular research question
  • There are many types of systematic reviews , each designed to address a specific type of research purpose and with their own strengths and weaknesses
  • The choice of what type of review to produce typically will depend on the nature of the research question and the resources that are available on the topic

The practice of producing systematic reviews is sometimes referred to by other names such as:

  • Evidence Synthesis
  • Knowledge Synthesis
  • Research Synthesis

This guide tries to stick with the term "Systematic Reviews" unless a specific type of systematic review is being discussed.

While all reviews combat information overload in the health sciences by summarizing the literature on a topic, different types of reviews have different approaches. The term systematic review is often conflated with narrative literature review, which can lead to confusion and misunderstandings when seeking help with conducting one.


UCL LIBRARY SERVICES


What are systematic reviews?


Systematic reviews are a type of literature review that requires the same standard of rigour as primary research. They have a clear, logical rationale that is reported to the reader of the review. They are used in research and policymaking to inform evidence-based decisions and practice. They differ from traditional literature reviews particularly in the following elements of conduct and reporting.

Systematic reviews: 

  • use explicit and transparent methods
  • are a piece of research following a standard set of stages
  • are accountable, replicable and updateable
  • involve users to ensure a review is relevant and useful.

For example, systematic reviews (like all research) should have a clear research question, and the perspective of the authors in their approach to addressing the question is described. There are clearly described methods on how each study in a review was identified, how that study was appraised for quality and relevance, and how it is combined with other studies in order to address the review question. A systematic review usually involves more than one person in order to increase the objectivity and trustworthiness of the review's methods and findings.

Research protocols for systematic reviews may be peer-reviewed and published or registered in a suitable repository to help avoid duplication of reviews and for comparisons to be made with the final review and the planned review.

  • History of systematic reviews to inform policy (EPPI-Centre)
  • Six reasons why it is important to be systematic (EPPI-Centre)
  • Evidence Synthesis International (ESI): Position Statement Describes the issues, principles and goals in synthesising research evidence to inform policy, practice and decisions

On this page

  • Should all literature reviews be 'systematic reviews'?
  • Different methods for systematic reviews
  • Reporting standards for systematic reviews

Literature reviews provide a more complete picture of research knowledge than is possible from individual pieces of research. This can be used to: clarify what is known from research, provide new perspectives, build theory, test theory, identify research gaps or inform research agendas.

A systematic review requires a considerable amount of time and resources, and is one type of literature review.

If the purpose of a review is to make justifiable evidence claims, then it should be systematic, as a systematic review uses rigorous explicit methods. The methods used can depend on the purpose of the review, and the time and resources available.

A 'non-systematic review' might use some of the same methods as systematic reviews, such as systematic approaches to identify studies or quality appraise the literature. There may be times when this approach is useful. In a student dissertation, for example, there may not be the time to be fully systematic in a review of the literature if it is only one small part of the thesis. In other types of research, there may also be a need to obtain a quick, and not necessarily thorough, overview of a literature to inform some other work (including a systematic review). Another example is when policymakers, or other people using research findings, want to make quick decisions and there is no systematic review available to help them. They then have a choice between gaining a rapid overview of the research literature or having no research evidence to support their decision-making.

Just like any other piece of research, the methods used to undertake any literature review should be carefully planned to justify the conclusions made. 

Finding out about the different types of systematic reviews and the methods they use, and reading both systematic and other types of review, will help you understand some of the differences.

Typically, a systematic review addresses a focussed, structured research question in order to inform understanding and decisions on an area (see the Formulating a research question section for examples).

Sometimes systematic reviews ask a broad research question, and one strategy to achieve this is the use of several focussed sub-questions each addressed by sub-components of the review.  

Another strategy is to develop a map to describe the type of research that has been undertaken in relation to a research question. Some maps even describe over 2,000 papers, while others are much smaller. One purpose of a map is to help choose a sub-set of studies to explore more fully in a synthesis. There are also other purposes of maps: see the box on  systematic evidence maps  for further information. 

Reporting standards specify minimum elements that need to go into the reporting of a review. The reporting standards refer mainly to methodological issues but they are not as detailed or specific as critical appraisal for the methodological standards of conduct of a review.

A number of organisations have developed specific guidelines and standards for both the conducting and reporting on systematic reviews in different topic areas.  

  • PRISMA: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is a reporting standard. The Key Documents section of the PRISMA website links to a checklist, flow diagram and explanatory notes. PRISMA is less useful for certain types of reviews, including those that are iterative.
  • eMERGe: a reporting standard developed for meta-ethnographies, a qualitative synthesis method.
  • ROSES (RepOrting standards for Systematic Evidence Syntheses): reporting standards, including forms and a flow diagram, designed specifically for systematic reviews and maps in the field of conservation and environmental management.

Useful books about systematic reviews

  • Systematic approaches to a successful literature review
  • An introduction to systematic reviews
  • Cochrane handbook for systematic reviews of interventions
  • Systematic reviews: CRD's guidance for undertaking reviews in health care
  • Finding what works in health care: Standards for systematic reviews
  • Systematic reviews in the social sciences
  • Meta-analysis and research synthesis
  • Research synthesis and meta-analysis
  • Doing a systematic review


University Libraries, University of Nevada, Reno


Systematic, Scoping, and Other Literature Reviews: Overview


What Is a Systematic Review?

Regular literature reviews are simply summaries of the literature on a particular topic. A systematic review, however, is a comprehensive literature review conducted to answer a specific research question. Authors of a systematic review aim to find, code, appraise, and synthesize all of the previous research on their question in an unbiased and well-documented manner. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) outline the minimum amount of information that needs to be reported at the conclusion of a systematic review project. 

Other types of what are known as "evidence syntheses," such as scoping, rapid, and integrative reviews, have varying methodologies. While systematic reviews originated with and continue to be a popular publication type in medicine and other health sciences fields, more and more researchers in other disciplines are choosing to conduct evidence syntheses. 

This guide will walk you through the major steps of a systematic review and point you to key resources including Covidence, a systematic review project management tool. For help with systematic reviews and other major literature review projects, please send us an email at  [email protected] .

Getting Help with Reviews

Organizations such as the Institute of Medicine recommend that you consult a librarian when conducting a systematic review. Librarians at the University of Nevada, Reno can help you:

  • Understand best practices for conducting systematic reviews and other evidence syntheses in your discipline
  • Choose and formulate a research question
  • Decide which review type (e.g., systematic, scoping, rapid, etc.) is the best fit for your project
  • Determine what to include and where to register a systematic review protocol
  • Select search terms and develop a search strategy
  • Identify databases and platforms to search
  • Find the full text of articles and other sources
  • Become familiar with citation management tools (e.g., EndNote, Zotero)
  • Get access to and help using Covidence, a systematic review project management tool

Doing a Systematic Review

  • Plan - This is the project planning stage. You and your team will need to develop a good research question, determine the type of review you will conduct (systematic, scoping, rapid, etc.), and establish the inclusion and exclusion criteria (e.g., you're only going to look at studies that use a certain methodology). All of this information needs to be included in your protocol. You'll also need to ensure that the project is viable - has someone already done a systematic review on this topic? Do some searches and check the various protocol registries to find out. 
  • Identify - Next, a comprehensive search of the literature is undertaken to ensure all studies that meet the predetermined criteria are identified. Each research question is different, so the number and types of databases you'll search - as well as other online publication venues - will vary. Some standards and guidelines specify that certain databases (e.g., MEDLINE, EMBASE) should be searched regardless. Your subject librarian can help you select appropriate databases to search and develop search strings for each of those databases.  
  • Evaluate - In this step, retrieved articles are screened and sorted using the predetermined inclusion and exclusion criteria. The risk of bias for each included study is also assessed around this time. It's best if you import search results into a citation management tool (see below) to clean up the citations and remove any duplicates. You can then use a tool like Rayyan (see below) to screen the results. You should begin by screening titles and abstracts only, and then you'll examine the full text of any remaining articles. Each study should be reviewed by a minimum of two people on the project team. 
  • Collect - Each included study is coded and the quantitative or qualitative data contained in these studies is then synthesized. You'll have to either find or develop a coding strategy or form that meets your needs. 
  • Explain - The synthesized results are articulated and contextualized. What do the results mean? How have they answered your research question?
  • Summarize - The final report provides a complete description of the methods and results in a clear, transparent fashion. 

Types of Reviews

Systematic Review

These types of studies employ a systematic method to analyze and synthesize the results of numerous studies. "Systematic" in this case means following a strict set of steps - as outlined by entities like PRISMA and the Institute of Medicine - so as to make the review more reproducible and less biased. Consistent, thorough documentation is also key. Reviews of this type are not meant to be conducted by an individual but rather a (small) team of researchers. Systematic reviews are widely used in the health sciences, often to find a generalized conclusion from multiple evidence-based studies. 

Meta-Analysis

A systematic method that uses statistics to analyze the data from numerous studies. The researchers combine the data from studies with similar data types and analyze them as a single, expanded dataset. Meta-analyses are a type of systematic review.

Scoping Review

A scoping review employs the systematic review methodology to explore a broader topic or question rather than a specific and answerable one, as is generally the case with a systematic review. Authors of these types of reviews seek to collect and categorize the existing literature so as to identify any gaps.

Rapid Review

Rapid reviews are systematic reviews conducted under a time constraint. Researchers make use of workarounds to complete the review quickly (e.g., only looking at English-language publications), which can lead to a less thorough and more biased review. 

Narrative Review

A traditional literature review that summarizes and synthesizes the findings of numerous original research articles. The purpose and scope of narrative literature reviews vary widely and do not follow a set protocol. Most literature reviews are narrative reviews. 

Umbrella Review

Umbrella reviews are, essentially, systematic reviews of systematic reviews. These compile evidence from multiple review studies into one usable document. 

Adapted from: Grant, Maria J., and Andrew Booth. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information & Libraries Journal, vol. 26, no. 2, 2009, pp. 91-108. doi:10.1111/j.1471-1842.2009.00848.x.


Charles Sturt University

Literature Review: Systematic literature reviews


Systematic reviews

Systematic and systematic-like reviews

Charles Sturt University library has produced a comprehensive guide for Systematic and systematic-like literature reviews. A comprehensive systematic literature review can often take a team of people up to a year to complete. This guide provides an overview of the steps required for systematic reviews:

  • Identify your research question
  • Develop your protocol
  • Conduct systematic searches (including the search strategy, text mining, choosing databases, documenting and reviewing)
  • Critical appraisal
  • Data extraction and synthesis
  • Writing and publishing.
  • Systematic and systematic-like reviews Library Resource Guide

Systematic literature review

A systematic literature review (SLR) identifies, selects and critically appraises research in order to answer a clearly formulated question (Dewey & Drahota, 2016). The systematic review should follow a clearly defined protocol or plan in which the criteria are clearly stated before the review is conducted. It is a comprehensive, transparent search conducted over multiple databases and grey literature that can be replicated and reproduced by other researchers. It involves planning a well-thought-out search strategy which has a specific focus or answers a defined question. The review identifies the type of information searched, critiqued and reported within known timeframes. The search terms, search strategies (including database names, platforms, dates of search) and limits all need to be included in the review.

Pittaway (2008) outlines seven key principles behind systematic literature reviews, including:

  • Transparency
  • Integration
  • Accessibility

Systematic literature reviews originated in medicine and are linked to evidence-based practice. According to Grant and Booth (2009, p. 91), "the expansion in evidence-based practice has led to an increasing variety of review types". They compare and contrast 14 review types, listing the strengths and weaknesses of each review.

Tranfield et al. (2003) discuss the origins of the evidence-based approach to undertaking a literature review and its application to other disciplines, including management and science.

References and additional resources

Dewey, A., & Drahota, A. (2016). Introduction to systematic reviews: Online learning module. Cochrane Training. https://training.cochrane.org/interactivelearning/module-1-introduction-conducting-systematic-reviews

Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews. London: SAGE.

Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91-108.

Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x

Pittaway, L. (2008). Systematic literature reviews. In R. Thorpe & R. Holt (Eds.), The SAGE dictionary of qualitative management research. SAGE Publications. doi:10.4135/9780857020109

Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207-222.

Evidence based practice - an introduction: Literature reviews/systematic reviews

Evidence based practice - an introduction is a library guide produced at CSU Library for undergraduates. The information contained in the guide is also relevant for postgraduate study and will help you to understand the types of research and levels of evidence required to conduct evidence-based research.

  • Evidence based practice an introduction



Duke University Libraries

Literature Reviews


What is a literature review?



Definition: A literature review is a systematic examination and synthesis of existing scholarly research on a specific topic or subject.

Purpose: It serves to provide a comprehensive overview of the current state of knowledge within a particular field.

Analysis: Involves critically evaluating and summarizing key findings, methodologies, and debates found in academic literature.

Identifying Gaps: Aims to pinpoint areas where there is a lack of research or unresolved questions, highlighting opportunities for further investigation.

Contextualization: Enables researchers to understand how their work fits into the broader academic conversation and contributes to the existing body of knowledge.


tl;dr  A literature review critically examines and synthesizes existing scholarly research and publications on a specific topic to provide a comprehensive understanding of the current state of knowledge in the field.

What is a literature review NOT?

❌ An annotated bibliography

❌ Original research

❌ A summary

❌ Something to be conducted at the end of your research

❌ An opinion piece

❌ A chronological compilation of studies

Why conduct a literature review?

Literature Reviews: An Overview for Graduate Students

While this 9-minute video from NCSU is geared toward graduate students, it is useful for anyone conducting a literature review.

Check out these books

  • Writing the literature review: A practical guide — available on the 3rd floor of Perkins
  • Writing literature reviews: A guide for students of the social and behavioral sciences — available online
  • So, you have to write a literature review: A guided workbook for engineers
  • Telling a research story: Writing a literature review
  • The literature review: Six steps to success
  • Systematic approaches to a successful literature review — request from Duke Medical Center Library
  • Doing a systematic review: A student's guide


University of Maryland Libraries

Systematic Review


Searching the literature


Finding Existing Systematic Reviews


Search more databases to limit bias! Why?

The more resources, the better

“The conduct of the search for and selection of evidence may have serious implications for patients’ and clinicians’ decisions. A systematic review might lead to the wrong conclusions and, ultimately, the wrong clinical recommendations.”

  • Institute of Medicine (U.S.). Committee on Standards for Systematic Reviews of Comparative Effectiveness Research. (2011). Finding what works in health care: Standards for systematic reviews (J. Eden, L. Levit, A. Berg, & S. Morton, Eds.). Washington, D.C.: National Academies Press. Chapter 3
  • See the Cochrane Handbook, Chapter 10, to learn about different types of reporting biases.
  • Bramer, W. M., Rethlefsen, M. L., Kleijnen, J., & Franco, O. H. (2017). Optimal database combinations for literature searches in systematic reviews : A prospective exploratory study. Systematic Reviews, 6. https://doi.org/10.1186/s13643-017-0644-y

Check the Search Strategy Used to Create the Systematic Reviews Subset on PubMed

  • 3iE Database International Impact for Impact Evaluations: for policymakers and researchers who are looking for evidence on what works, what doesn’t, and why in development.
  • BMC Systematic Reviews Search the BioMed Central open access Systematic Reviews Journal for systematic reviews and protocols.
  • Campbell Collaboration Published reviews and protocols by the Campbell Collaboration, which focuses on reviews outside of clinical medicine.
  • CareSearch Systematic Review Collection To locate palliative care systematic reviews, non-comprehensive collection organized by subject.
  • Centre for Reviews and Dissemination Systematic reviews of health and social care interventions and economic evaluations.
  • Centre of Evidence Based Dermatology University of Nottingham created this resource to allow easy access to all systematic reviews for a given dermatological condition.
  • Cochrane Library For published Cochrane systematic reviews.
  • The Collaboration for Environmental Evidence The Collaboration for Environmental Evidence (CEE) is an open community of scientists and managers working towards a sustainable global environment and the conservation of biodiversity.
  • The Community Guide The Guide to Community Preventive Services is a free database of systematic reviews to help practitioners choose programs and policies to improve health and prevent disease in their community.
  • DoPHER: Database of Promoting Health Effectiveness Reviews DoPHER is unique in its focussed coverage of systematic and non-systematic reviews of effectiveness in health promotion and public health worldwide.
  • Epistemonikos An evidence library for decision in health care and health policy.
  • EPPI-Centre The Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) conducts systematic reviews in the fields of Education, Health Promotion and Public Health, as well as social welfare and international development.
  • FAIR-Finding Answers Intervention Research Article summaries from systematic reviews of health disparities interventions.
  • HealthEvidence.org For systematic reviews on public health interventions.
  • JBI Evidence Synthesis For systematic reviews published by JBI.
  • McMaster Health Evidence Free registration is required to access 4,742+ quality-rated systematic reviews evaluating the effectiveness of public health interventions.
  • PDQ Evidence for Informed Health Policymaking PDQ (“pretty darn quick”) Evidence facilitates rapid access to the best available evidence for decisions about health systems. It includes systematic reviews, broad syntheses of reviews (including evidence-based policy briefs), primary studies included in systematic reviews and structured summaries of that evidence.
  • PROSPERO A registry for prospective systematic reviews.
  • PubMed A resource for finding systematic reviews provided by the National Library of Medicine including Cochrane's DARE database. Use "search term" AND "systematic review"[pt] to find systematic reviews in PubMed. Note: [pt] stands for "publication type."
  • SHARE SHARE is a repository of HIV-related systematic reviews and provides a ‘one-stop shop’ for HIV-related information that has been published through a systematic review.
  • SRDR (Systematic Review Data Repository) This resource serves as both an archive and data extraction tool in the production of systematic reviews. It is an open and searchable archive of systematic reviews and their data.
  • TRIP Database A searchable evidence library--allows searching by PICO.
  • What Works Clearinghouse Education Systematic Reviews

Below are some common tools for critical appraisal, quality assessment, and reporting.

  • AMSTAR A Measurement Tool to Assess Systematic Reviews.
  • CASP Critical Appraisal Skills Programme
  • EQUATOR EQUATOR network listing over 300 sets of reporting guidelines. Click on any link in the first column.
  • GRADE Grading of Recommendations Assessment, Development and Evaluation
  • ROBIS ROBIS is a new tool for assessing the risk of bias in systematic reviews (rather than in primary studies).
  • STROBE STrengthening the Reporting of OBservational studies in Epidemiology

How many databases should you search, and which ones?

Systematic reviews are conducted in many subject areas, such as education, social work, and even engineering, where different databases are appropriate and searching the wrong ones could lead to very different conclusions. Some relevant research is suggested below:

  • Agriculture: Ritchie, S. M., Young, L. M., & Sigman, J. (2018). A comparison of selected bibliographic database subject overlap for agricultural information. Issues in Science and Technology Librarianship, 89. http://www.istl.org/18-spring/refereed2.html
  • Engineering education: Borrego, M., Foster, M. J., & Froyd, J. E. (2014). Systematic literature reviews in engineering education and other developing interdisciplinary fields. Journal of Engineering Education, 103(1), 45–76.
  • Engineering education: Borrego, M., Foster, M. J., & Froyd, J. E. (2015). What is the state of the art of systematic review in engineering education? Journal of Engineering Education, 104(2), 212–242.
  • Engineering: Kitchenham, B. (2007). Guidelines for performing systematic literature reviews in software engineering [Technical report]. Keele, UK: Keele University, 33(2004), 1–26. A list of databases can be found on p. 17.
  • Health: Bramer, W. M., Rethlefsen, M. L., Kleijnen, J., & Franco, O. H. (2017). Optimal database combinations for literature searches in systematic reviews: A prospective exploratory study. Systematic Reviews, 6, 245. https://doi.org/10.1186/s13643-017-0644-y
  • Veterinary: Grindlay, D. J. C., Brennan, M. L., & Dean, R. S. (2012). Searching the veterinary literature: A comparison of the coverage of veterinary journals by nine bibliographic databases. Journal of Veterinary Medical Education, 39(4), 404–412. https://doi.org/10.3138/jvme.1111.109R

Suggested databases for different subject areas

  • Agriculture, food and nutrition
  • Human health and medicine
  • Kinesiology
  • Organizational development, economics and policy
  • Psychology, human development and other social sciences, reproductive health and women

  • African Index Medicus The World Health Organization, in collaboration with the Association for Health Information and Libraries in Africa (AHILA), produces this international index to African health literature and information sources.
  • China Academic Journals (CAJ) Full-text Database CAJ is the most comprehensive full-text database of Chinese journals in the world. CAJ Series F, G, H, and J contain more than 4,400 journals. You can search the database to retrieve article titles and abstracts in English by clicking the drop-down on the top right and selecting "English," although all full text is in Chinese. The collection allows cross-searching multiple databases in the following subject areas: literature/history/philosophy (Series F), economics/politics/law (Series G), education/social sciences (Series H), and economics/management (Series J). Series F = 1994 through 4/15/2016; Series G to J = 1994 through 12/31/2016.
  • CiNii Articles (Scholarly Academic Information Navigator) A database with a Japanese focus that can be used to search information on academic articles published in academic society journals, university research bulletins or articles included in the National Diet Library's Japanese Periodicals Index Database and databases.
  • Europe PMC Europe PMC is a repository, providing access to worldwide life sciences articles, books, patents and clinical guidelines. Europe PMC provides links to relevant records in databases such as Uniprot, European Nucleotide Archive (ENA), Protein Data Bank Europe (PDBE) and BioStudies.
  • Index Medicus for the Eastern Mediterranean Region (IMEMR) IMEMR one of the major projects of the Virtual Health Sciences Library (VHSL), was initiated in response to a pressing need to index health and biomedical journals from the region.
  • Index Medicus for the South-East Asian Region (IMSEAR) IMSEAR is a database of articles published in selected journals within the WHO South-East Asia Region. It is a collaborative effort of participating libraries in Health Literature, Library and Information Services (HELLIS) network in the region.
  • Indian Medlars Centre The purpose of IndMED is to index selected peer reviewed medical journals published from India. It supplements international indexing services like PubMed. It covers about 100 journals indexed from 1985 onwards.
  • Joanna Briggs Institute Library The Joanna Briggs Institute (JBI) Library is a repository for publications and information for policy makers, health professionals, health scientists and others with a practical or academic interest in evidence based healthcare.
  • KCI Korean Journal Database The KCI Korean Journal Database provides a comprehensive snapshot of the most influential regional content from researchers in South Korea. Using citation connections from the Web of Science™, regional work is framed within the broader context of global research.
  • LILACS LILACS is an index of scientific and technical literature of Latin America and the Caribbean, for 31 years contributing to increase visibility, access and quality of health information in the Region.
  • OTseeker OTseeker is a database that contains abstracts of systematic reviews and randomized controlled trials relevant to occupational therapy. Trials have been critically appraised and rated to assist you to evaluate their validity and interpretability.
  • PEDro PEDro is the Physiotherapy Evidence Database, a free database of over 26,000 randomised trials, systematic reviews and clinical practice guidelines in physiotherapy.
  • Russian Citation Index Access bibliographic information and citations to scholarly articles from Russian researchers in over 500 science, technology, medical, and education journals. Leading publications have been carefully selected and provided by the Scientific Electronic Library (eLIBRARY.RU), Russia's largest research information provider.
  • SciELO: Scientific Electronic Library Online A bibliographic database and digital library of open access journals with a focus on literature published in developing countries, primarily in South America.
  • Systematic reviews (journal) The journal aims to publish high quality systematic review products including systematic review protocols, systematic reviews related to a very broad definition of health, rapid reviews, updates of already completed systematic reviews, and methods research related to the science of systematic reviews, such as decision modeling.
  • TRIP Database The TRIP Database is a clinical search tool designed to allow health professionals to rapidly identify the highest quality clinical evidence for clinical practice. Advanced search allows searching using PICO framework. Registered users (registration is free) benefit from extra features such as CPD, search history, and collaborative tools.
  • USDA Nutrition Evidence Library (NEL) USDA’s Nutrition Evidence Library (NEL) specializes in conducting systematic reviews to inform federal nutrition policy and programs.
  • Virginia Henderson Global Nursing e-Repository This repository offers nurses around the globe free online access to reliable nursing research and evidence-based knowledge. Browse items by "Level of Evidence" to find systematic reviews.
  • Virtual Health Library Find more sources for Latin America and the Caribbean here, beyond LILACS and SciELO.
  • Western Pacific Region Index Medicus (WPRIM) The goal of WPRIM is the creation of an online index of medical and health journals published in Member States of the WHO Western Pacific Region which can be accessed on the Internet thus ensuring global accessibility of medical and health research done in the region.

How to find grey literature?

Tips and tricks.

Googling the Greys: Tips for Searching Beyond Health Databases and Turning Information into Insights  - This presentation provides an excellent guide to searching Google effectively to find grey literature.

Comprehensive guides with resources

  • The International Health Technology Assessment (HTA) database
  • WHO International Clinical Trials Registry Platform (ICTRP)
  • ClinicalTrials.gov
  • Dissertations & Theses Global (ProQuest)
  • Conference Proceedings Citation Index

Duke University Medical Center Guide to Resource for Searching the Gray Literature - A more thorough guide to grey literature, including resources for trial registries, pharmacological studies, conference abstracts, government documents, and more.

Discovery and scholarly communication aspects of preprints - An article describing the classification, purposes, and criticisms of preprints, and provides a list of pre-print repositories.

Free databases of grey literature

  • BASE: Bielefeld Academic Search Engine
  • Child Welfare Information Gateway
  • Cordis: Community Research and Development Information Service (European Union)
  • Directory of Open Access Journals
  • Federal RePORTER
  • NICHSR ONESearch: Federated search of all NICHSR databases (HSRProj, HSRR, HSRIC, PHPartners)
  • Science.gov
  • UK Government Publications
  • World Health Organization

Theses and dissertations

UMD restricted access

International databases

  • AUSTRALIA - Informit's Health Collection
  • WHO IRIS database - Western Pacific Region

Conference Proceedings

  • BioMed Central - BMC Proceedings Quick and advanced searches available
  • BIOSIS Previews Limit to LITERATURE TYPE options: Meeting Abstract, Meeting Address, Meeting Paper, Meeting Poster, Meeting Report, Meeting Slide, Meeting Summary; or DOCUMENT TYPE options: Meeting Paper, Meeting
  • Conference Proceedings Citation Index, 1990-present This index helps researchers access the published literature from the most significant conferences, symposia, seminars, and more.
  • MEDLINE Limit to Publication Type: Congresses, Consensus development conference
  • PapersFirst OCLC index of papers presented at conferences worldwide. Over 2,380,000 records. Covers every published congress, symposium, conference, exposition, workshop and meeting received by The British Library Document Supply Centre. Wide variety of subjects. Only one user allowed. 'Phrase' searches will find records with the words not necessarily next to each other. For example, a search on 'sunny day' might find 'the day was sunny'.
  • ProceedingsFirst OCLC index of worldwide conference proceedings. Over 74,000 records. Each record contains a list of the papers presented at each conference. Covers every published congress, symposium, conference, exposition, workshop and meeting received by The British Library Document Supply Center. 1993- present. 'Phrase' searches will find records with the words not necessarily next to each other. For example, a search on 'sunny day' might find 'the day was sunny'.

SMU Libraries

Evidence Syntheses and Systematic Reviews: Overview


What is evidence synthesis?

Evidence synthesis: a general term used to refer to any method of identifying, selecting, and combining results from multiple studies. Several types of reviews fall under this term; the main ones are in the table below:

Types of Reviews

General steps for conducting systematic reviews

The number of steps for conducting an evidence synthesis varies a little, depending on the source consulted. However, the following steps are generally accepted as the way systematic reviews are done:

  • Identify a gap in the literature and form a well-developed, answerable research question, which will form the basis of your search.
  • Select a framework that will help guide the type of study you’re undertaking.
  • Write and register a protocol. A protocol is a detailed plan for the project; it is created by following whichever reporting guideline you select for documenting the review before it is conducted, and once written it should be registered with an appropriate registry.
  • Select databases and grey literature sources. For this step and the searching step that follows, it is advisable to consult a librarian before embarking on this phase of the review process; they can recommend databases and other sources to use and even help design complex searches.
  • Search databases and other sources. Not all databases use the same search syntax, so when searching multiple databases, adapt your search strings so that they work in each individual database. Use a citation management tool to store and organize your citations during the review process; it is also a great help when de-duplicating your results.
  • Screen the results. The inclusion and exclusion criteria you developed earlier help you remove articles that are not relevant to your topic.
  • Assess the quality of your findings to identify bias in either the design of a study or in its results and conclusions (generally not done outside of systematic reviews).

Extract and Synthesize

  • Extract the data from the studies that remain after screening and appraisal
  • Extraction tools are used to get data from individual studies that will be analyzed or summarized. 
  • Synthesize the main findings of your research

Report Findings

Report the results using a statistical approach or in a narrative form.

Need More Help?

Librarians can:

  • Provide guidance on which methodology best suits your goals
  • Recommend databases and other information sources for searching
  • Design and implement comprehensive and reproducible database-specific search strategies 
  • Recommend software for article screening
  • Assist with the use of citation management
  • Offer best practices on documentation of searches

Related Guides

  • Literature Reviews
  • Choose a Citation Manager
  • Project Management

Steps of a Systematic Review - Video


Korean Journal of Anesthesiology, 71(2), April 2018

Introduction to systematic review and meta-analysis

1 Department of Anesthesiology and Pain Medicine, Inje University Seoul Paik Hospital, Seoul, Korea

2 Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea

Systematic reviews and meta-analyses present results by combining and analyzing data from different studies conducted on similar research topics. In recent years, systematic reviews and meta-analyses have been actively performed in various fields including anesthesiology. These research methods are powerful tools that can overcome the difficulties in performing large-scale randomized controlled trials. However, the inclusion of studies with any biases or improperly assessed quality of evidence in systematic reviews and meta-analyses could yield misleading results. Therefore, various guidelines have been suggested for conducting systematic reviews and meta-analyses to help standardize them and improve their quality. Nonetheless, accepting the conclusions of many studies without understanding the meta-analysis can be dangerous. Therefore, this article provides an easy introduction to clinicians on performing and understanding meta-analyses.

Introduction

A systematic review collects all possible studies related to a given topic and design, and reviews and analyzes their results [ 1 ]. During the systematic review process, the quality of studies is evaluated, and a statistical meta-analysis of the study results is conducted on the basis of their quality. A meta-analysis is a valid, objective, and scientific method of analyzing and combining different results. Usually, in order to obtain more reliable results, a meta-analysis is mainly conducted on randomized controlled trials (RCTs), which have a high level of evidence [ 2 ] ( Fig. 1 ). Since 1999, various papers have presented guidelines for reporting meta-analyses of RCTs. Following the Quality of Reporting of Meta-analyses (QUOROM) statement [ 3 ], and the appearance of registers such as Cochrane Library’s Methodology Register, a large number of systematic literature reviews have been registered. In 2009, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [ 4 ] was published, and it greatly helped standardize and improve the quality of systematic reviews and meta-analyses [ 5 ].

Fig. 1. Levels of evidence.

In anesthesiology, the importance of systematic reviews and meta-analyses has been highlighted, and they provide diagnostic and therapeutic value to various areas, including not only perioperative management but also intensive care and outpatient anesthesia [6–13]. Systematic reviews and meta-analyses include various topics, such as comparing various treatments of postoperative nausea and vomiting [ 14 , 15 ], comparing general anesthesia and regional anesthesia [ 16 – 18 ], comparing airway maintenance devices [ 8 , 19 ], comparing various methods of postoperative pain control (e.g., patient-controlled analgesia pumps, nerve block, or analgesics) [ 20 – 23 ], comparing the precision of various monitoring instruments [ 7 ], and meta-analysis of dose-response in various drugs [ 12 ].

Thus, systematic reviews and meta-analyses are being conducted in diverse medical fields, and highlighting their importance is meant to help researchers extract accurate, good-quality findings from the flood of data being produced. However, a lack of understanding about systematic reviews and meta-analyses can lead to incorrect outcomes being derived from the review and analysis processes. If readers indiscriminately accept the results of the many meta-analyses that are published, incorrect conclusions may be drawn. Therefore, in this review, we aim to describe the contents and methods used in systematic reviews and meta-analyses in a way that is easy to understand for future authors and readers of systematic reviews and meta-analyses.

Study Planning

It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical methods on estimates from two or more different studies to form a pooled estimate [ 1 ]. Following a systematic review, if it is not possible to form a pooled estimate, it can be published as is without progressing to a meta-analysis; however, if it is possible to form a pooled estimate from the extracted data, a meta-analysis can be attempted. Systematic reviews and meta-analyses usually proceed according to the flowchart presented in Fig. 2 . We explain each of the stages below.

Fig. 2. Flowchart illustrating a systematic review.

Formulating research questions

A systematic review attempts to gather all available empirical research by using clearly defined, systematic methods to obtain answers to a specific question. A meta-analysis is the statistical process of analyzing and combining results from several similar studies. Here, the definition of the word “similar” is not made clear, but when selecting a topic for the meta-analysis, it is essential to ensure that the different studies present data that can be combined. If the studies contain data on the same topic that can be combined, a meta-analysis can even be performed using data from only two studies. However, study selection via a systematic review is a precondition for performing a meta-analysis, and it is important to clearly define the Population, Intervention, Comparison, Outcomes (PICO) parameters that are central to evidence-based research. In addition, selection of the research topic should be based on logical evidence, and it is important to select a topic that is familiar to readers but for which the evidence has not yet been clearly confirmed [ 24 ].

Protocols and registration

In systematic reviews, prior registration of a detailed research plan is very important. In order to make the research process transparent, primary/secondary outcomes and methods are set in advance, and in the event of changes to the method, other researchers and readers are informed when, how, and why. Many studies are registered with an organization like PROSPERO ( http://www.crd.york.ac.uk/PROSPERO/ ), and the registration number is recorded when reporting the study, in order to share the protocol at the time of planning.

Defining inclusion and exclusion criteria

Information is included on the study design, patient characteristics, publication status (published or unpublished), language used, and research period. If there is a discrepancy between the number of patients included in the study and the number of patients included in the analysis, this needs to be clearly explained while describing the patient characteristics, to avoid confusing the reader.

Literature search and study selection

In order to secure a proper basis for evidence-based research, it is essential to perform a broad search that includes as many studies as possible that meet the inclusion and exclusion criteria. Typically, the three bibliographic databases Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) are used. In domestic studies, the Korean databases KoreaMed, KMBASE, and RISS4U may be included. Effort is required to identify not only published studies but also abstracts, ongoing studies, and studies awaiting publication. Among the studies retrieved in the search, the researchers remove duplicate studies, select studies that meet the inclusion/exclusion criteria based on the abstracts, and then make the final selection of studies based on their full text. In order to maintain transparency and objectivity throughout this process, study selection is conducted independently by at least two investigators. When there is an inconsistency in opinions, intervention is required via debate or by a third reviewer. The methods for this process also need to be planned in advance. It is essential to ensure the reproducibility of the literature selection process [ 25 ].

Quality of evidence

However well planned the systematic review or meta-analysis is, if the quality of evidence in the studies is low, the quality of the meta-analysis decreases and incorrect results can be obtained [ 26 ]. Even when using randomized studies with a high quality of evidence, evaluating the quality of evidence precisely helps determine the strength of recommendations in the meta-analysis. One method of evaluating the quality of evidence in non-randomized studies is the Newcastle-Ottawa Scale, provided by the Ottawa Hospital Research Institute 1) . However, we are mostly focusing on meta-analyses that use randomized studies.

If the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system ( http://www.gradeworkinggroup.org/ ) is used, the quality of evidence is evaluated on the basis of the study limitations, inaccuracies, incompleteness of outcome data, indirectness of evidence, and risk of publication bias, and this is used to determine the strength of recommendations [ 27 ]. As shown in Table 1 , the study limitations are evaluated using the “risk of bias” method proposed by Cochrane 2) . This method classifies bias in randomized studies as “low,” “high,” or “unclear” on the basis of the presence or absence of six processes (random sequence generation, allocation concealment, blinding participants or investigators, incomplete outcome data, selective reporting, and other biases) [ 28 ].

Table 1. The Cochrane Collaboration’s Tool for Assessing the Risk of Bias [ 28 ]

Data extraction

Two different investigators extract data based on the objectives and form of the study; thereafter, the extracted data are reviewed. Since the size and format of each variable are different, the size and format of the outcomes are also different, and slight changes may be required when combining the data [ 29 ]. If there are differences in the size and format of the outcome variables that cause difficulties combining the data, such as the use of different evaluation instruments or different evaluation timepoints, the analysis may be limited to a systematic review. The investigators resolve differences of opinion by debate, and if they fail to reach a consensus, a third reviewer is consulted.

Data Analysis

The aim of a meta-analysis is to derive a conclusion with greater power and accuracy than could be achieved in any individual study. Therefore, before analysis, it is crucial to evaluate the direction of effect, size of effect, homogeneity of effects among studies, and strength of evidence [ 30 ]. Thereafter, the data are reviewed qualitatively and quantitatively. If it is determined that the different research outcomes cannot be combined, all the results and characteristics of the individual studies are displayed in a table or in a descriptive form; this is referred to as a qualitative review. A meta-analysis is a quantitative review, in which the clinical effectiveness is evaluated by calculating the weighted pooled estimate for the interventions in at least two separate studies.

The pooled estimate is the outcome of the meta-analysis, and is typically displayed in a forest plot ( Figs. 3 and 4 ). The black squares in the forest plot represent the odds ratios (ORs) of the individual studies, with horizontal lines showing their 95% confidence intervals. The area of each square represents the weight given to that study in the meta-analysis. The black diamond represents the OR and 95% confidence interval calculated across all the included studies. The bold vertical line represents a lack of therapeutic effect (OR = 1); if the confidence interval includes OR = 1, it means no significant difference was found between the treatment and control groups.

Fig. 3. Forest plot analyzed by two different models using the same data. (A) Fixed-effect model. (B) Random-effect model. The figure depicts individual trials as filled squares sized by relative sample size, with solid lines showing the 95% confidence interval of the difference. The diamond shape indicates the pooled estimate and its uncertainty for the combined effect. The vertical line indicates no treatment effect (OR = 1); if the confidence interval includes 1, the result shows no evidence of a difference between the treatment and control groups.

Fig. 4. Forest plot representing homogeneous data.
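The forest plot layout described above (squares sized by study weight, horizontal confidence-interval lines, and a summary diamond on a log scale) can be sketched with matplotlib. The odds ratios, intervals, and weights below are invented solely to illustrate the anatomy of the plot, not to reproduce the article’s figures.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-study odds ratios, 95% CIs, and relative weights (not real data).
studies = ["Study A", "Study B", "Study C"]
or_ = np.array([0.70, 0.55, 0.90])
lower = np.array([0.45, 0.30, 0.60])
upper = np.array([1.10, 1.00, 1.35])
weights = np.array([0.25, 0.15, 0.60])            # proportions of total weight
pooled, pooled_lo, pooled_hi = 0.78, 0.62, 0.98   # assumed pooled estimate

fig, ax = plt.subplots()
y = np.arange(len(studies), 0, -1)

# Squares sized by weight; horizontal lines are the confidence intervals.
ax.errorbar(or_, y, xerr=[or_ - lower, upper - or_], fmt="none", color="black")
ax.scatter(or_, y, s=weights * 400, marker="s", color="black")

# The pooled estimate is drawn as a diamond below the individual studies.
ax.scatter([pooled], [0], marker="D", s=120, color="black")
ax.hlines(0, pooled_lo, pooled_hi, color="black")

ax.axvline(1.0, linestyle="--", color="grey")     # OR = 1: no treatment effect
ax.set_xscale("log")
ax.set_yticks(list(y) + [0])
ax.set_yticklabels(studies + ["Pooled"])
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```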

Dichotomous variables and continuous variables

In data analysis, outcome variables can be considered broadly in terms of dichotomous variables and continuous variables. When combining data from continuous variables, the mean difference (MD) and standardized mean difference (SMD) are used ( Table 2 ).

Table 2. Summary of Meta-analysis Methods Available in RevMan [ 28 ]

The MD is the absolute difference in mean values between the groups, and the SMD is the mean difference between groups divided by the standard deviation. When results are presented in the same units, the MD can be used, but when results are presented in different units, the SMD should be used. When the MD is used, the combined units must be shown. A value of “0” for the MD or SMD indicates that the effects of the new treatment method and the existing treatment method are the same. A value lower than “0” means the new treatment method is less effective than the existing method, and a value greater than “0” means the new treatment is more effective than the existing method.
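As a toy illustration, the snippet below computes an MD and an SMD from invented group summaries. The pooled standard deviation shown is the Cohen’s d form; other variants (such as Hedges’ g) apply a small-sample correction, and real analyses should follow whatever formula their meta-analysis software implements.

```python
from math import sqrt

# Invented summary statistics for a treatment group and a control group.
n_t, mean_t, sd_t = 40, 52.0, 9.0    # treatment
n_c, mean_c, sd_c = 38, 58.5, 10.0   # control

md = mean_t - mean_c                 # mean difference, in the outcome's own units

# Standardized mean difference: the MD divided by a pooled standard deviation.
pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
smd = md / pooled_sd

print(f"MD  = {md:.2f} (outcome units)")
print(f"SMD = {smd:.2f} (unitless)")
```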

When combining data for dichotomous variables, the OR, risk ratio (RR), or risk difference (RD) can be used. The RR and RD can be used for RCTs, quasi-experimental studies, or cohort studies, and the OR can be used for other case-control studies or cross-sectional studies. However, because the OR is difficult to interpret, using the RR and RD, if possible, is recommended. If the outcome variable is a dichotomous variable, it can be presented as the number needed to treat (NNT), which is the minimum number of patients who need to be treated in the intervention group, compared to the control group, for a given event to occur in at least one patient. Based on Table 3 , in an RCT, if x is the probability of the event occurring in the control group and y is the probability of the event occurring in the intervention group, then x = c/(c + d), y = a/(a + b), and the absolute risk reduction (ARR) = x − y. NNT can be obtained as the reciprocal, 1/ARR.
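The arithmetic above can be checked with a short script. The 2 × 2 counts are invented purely to illustrate the formulas, with a/b as events/non-events in the intervention group and c/d as events/non-events in the control group (the layout assumed from Table 3).

```python
# Invented counts: a, b = events / non-events (intervention); c, d = events / non-events (control).
a, b = 12, 88
c, d = 24, 76

y = a / (a + b)   # event probability in the intervention group
x = c / (c + d)   # event probability in the control group

arr = x - y       # absolute risk reduction
nnt = 1 / arr     # number needed to treat

print(f"ARR = {arr:.3f}")            # 0.120
print(f"NNT = {nnt:.1f} patients")   # 8.3, conventionally rounded up to 9
```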

Table 3. Calculation of the Number Needed to Treat from a Dichotomous 2 × 2 Table

Fixed-effect models and random-effect models

In order to analyze effect size, two types of models can be used: a fixed-effect model or a random-effect model. A fixed-effect model assumes that the effect of treatment is the same, and that variation between results in different studies is due to random error. Thus, a fixed-effect model can be used when the studies are considered to have the same design and methodology, or when the variability in results within a study is small, and the variance is thought to be due to random error. Three common methods are used for weighted estimation in a fixed-effect model: 1) inverse variance-weighted estimation 3) , 2) Mantel-Haenszel estimation 4) , and 3) Peto estimation 5) .

A random-effect model assumes heterogeneity between the studies being combined, and these models are used when the studies are assumed different, even if a heterogeneity test does not show a significant result. Unlike a fixed-effect model, a random-effect model assumes that the size of the effect of treatment differs among studies. Thus, differences in variation among studies are thought to be due to not only random error but also between-study variability in results. Therefore, weight does not decrease greatly for studies with a small number of patients. Among methods for weighted estimation in a random-effect model, the DerSimonian and Laird method 6) is mostly used for dichotomous variables, as the simplest method, while inverse variance-weighted estimation is used for continuous variables, as with fixed-effect models. These four methods are all used in Review Manager software (The Cochrane Collaboration, UK), and are described in a study by Deeks et al. [ 31 ] ( Table 2 ). However, when the number of studies included in the analysis is less than 10, the Hartung-Knapp-Sidik-Jonkman method 7) can better reduce the risk of type 1 error than does the DerSimonian and Laird method [ 32 ].
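The difference between the two models can be seen in a small sketch: the fixed-effect estimate uses pure inverse-variance weights, while the DerSimonian and Laird random-effect estimate adds an estimated between-study variance (tau²) to each study’s variance, which flattens the weights. The log odds ratios and standard errors below are invented for illustration only.

```python
# Invented log odds ratios and standard errors for five studies.
effects = [-0.40, -0.10, -0.65, 0.05, -0.30]
ses = [0.10, 0.35, 0.30, 0.40, 0.15]

# Fixed-effect model: inverse-variance weights.
w_fixed = [1 / se**2 for se in ses]
pooled_fixed = sum(w * e for w, e in zip(w_fixed, effects)) / sum(w_fixed)

# DerSimonian and Laird estimate of the between-study variance (tau^2) from Cochran's Q.
q = sum(w * (e - pooled_fixed) ** 2 for w, e in zip(w_fixed, effects))
df = len(effects) - 1
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effect model: tau^2 is added to each within-study variance,
# so large studies lose some of their dominance in the weighting.
w_random = [1 / (se**2 + tau2) for se in ses]
pooled_random = sum(w * e for w, e in zip(w_random, effects)) / sum(w_random)

print(f"Fixed-effect pooled estimate : {pooled_fixed:.3f}")
print(f"Random-effect pooled estimate: {pooled_random:.3f} (tau^2 = {tau2:.3f})")
```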

Fig. 3 shows the results of analyzing outcome data using a fixed-effect model (A) and a random-effect model (B). As shown in Fig. 3 , while the results from large studies are weighted more heavily in the fixed-effect model, studies are given relatively similar weights irrespective of study size in the random-effect model. Although identical data were being analyzed, as shown in Fig. 3 , the significant result in the fixed-effect model was no longer significant in the random-effect model. One representative example of the small study effect in a random-effect model is the meta-analysis by Li et al. [ 33 ]. In a large-scale study, intravenous injection of magnesium was unrelated to acute myocardial infarction, but in the random-effect model, which included numerous small studies, the small study effect resulted in an association being found between intravenous injection of magnesium and myocardial infarction. This small study effect can be controlled for by using a sensitivity analysis, which is performed to examine the contribution of each of the included studies to the final meta-analysis result. In particular, when heterogeneity is suspected in the study methods or results, by changing certain data or analytical methods, this method makes it possible to verify whether the changes affect the robustness of the results, and to examine the causes of such effects [ 34 ].

Heterogeneity

A homogeneity test examines whether the degree of variation among the effect sizes calculated from several studies is greater than would be expected from sampling error alone. This makes it possible to test whether the effect sizes calculated from the several studies are the same. Three approaches can be used: 1) the forest plot, 2) Cochran’s Q test (chi-squared), and 3) the Higgins I² statistic. In the forest plot, as shown in Fig. 4 , greater overlap between the confidence intervals indicates greater homogeneity. For the Q statistic, when the P value of the chi-squared test, calculated from the forest plot in Fig. 4 , is less than 0.1, it is considered to show statistical heterogeneity and a random-effect model can be used. Finally, I² can be used [ 35 ].

I², calculated as I² = 100% × (Q − df)/Q, where Q is Cochran’s Q statistic and df is its degrees of freedom (the number of studies minus one), returns a value between 0% and 100% (negative values are set to zero). A value less than 25% is considered to show strong homogeneity, a value of 50% is average, and a value greater than 75% indicates strong heterogeneity.
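For example, if an analysis reports a hypothetical Cochran’s Q of 18.2 across eight studies (df = 7), I² follows directly from the formula above, with negative values truncated to 0%:

```python
# Hypothetical heterogeneity statistics (not from any real analysis).
q, df = 18.2, 7

i_squared = max(0.0, (q - df) / q) * 100
print(f"I^2 = {i_squared:.1f}%")   # about 61.5%, between the 50% and 75% thresholds above
```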

Even when the data cannot be shown to be homogeneous, a fixed-effect model can be used, ignoring the heterogeneity, and all the study results can be presented individually, without combining them. However, in many cases, a random-effect model is applied, as described above, and a subgroup analysis or meta-regression analysis is performed to explain the heterogeneity. In a subgroup analysis, the data are divided into subgroups that are expected to be homogeneous, and these subgroups are analyzed. This needs to be planned in the predetermined protocol before starting the meta-analysis. A meta-regression analysis is similar to a normal regression analysis, except that the heterogeneity between studies is modeled. This process involves performing a regression analysis of the pooled estimate on covariates at the study level, and so it is usually not considered when the number of studies is less than 10. Here, univariate and multivariate regression analyses can both be considered.

Publication bias

Publication bias is the most common type of reporting bias in meta-analyses. This refers to the distortion of meta-analysis outcomes due to the higher likelihood of publication of statistically significant studies rather than non-significant studies. In order to test the presence or absence of publication bias, first, a funnel plot can be used ( Fig. 5 ). Studies are plotted on a scatter plot with effect size on the x-axis and precision or total sample size on the y-axis. If the points form an upside-down funnel shape, with a broad base that narrows towards the top of the plot, this indicates the absence of a publication bias ( Fig. 5A ) [ 29 , 36 ]. On the other hand, if the plot shows an asymmetric shape, with no points on one side of the graph, then publication bias can be suspected ( Fig. 5B ). Second, to test publication bias statistically, Begg and Mazumdar’s rank correlation test 8) [ 37 ] or Egger’s test 9) [ 29 ] can be used. If publication bias is detected, the trim-and-fill method 10) can be used to correct the bias [ 38 ]. Fig. 6 displays results that show publication bias in Egger’s test, which has then been corrected using the trim-and-fill method using Comprehensive Meta-Analysis software (Biostat, USA).
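A rough sketch of Egger’s regression test is shown below: the standardized effect (effect divided by its standard error) is regressed on precision (1 divided by the standard error), and an intercept clearly different from zero suggests funnel-plot asymmetry. The data are invented, and a real analysis would also report the intercept’s confidence interval and P value (for example via statsmodels or the R package metafor).

```python
import numpy as np

# Invented log odds ratios and standard errors for ten studies.
effects = np.array([-0.80, -0.60, -0.50, -0.45, -0.40, -0.35, -0.30, -0.25, -0.20, -0.15])
ses = np.array([0.50, 0.45, 0.40, 0.35, 0.30, 0.25, 0.22, 0.18, 0.15, 0.10])

std_effect = effects / ses   # standardized effect sizes
precision = 1 / ses          # precision

# Ordinary least-squares line: std_effect = slope * precision + intercept.
slope, intercept = np.polyfit(precision, std_effect, 1)

# In Egger's test, an intercept far from zero indicates funnel-plot asymmetry,
# which may reflect publication bias (small studies showing larger effects).
print(f"Egger intercept: {intercept:.2f}")
```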

Fig. 5. Funnel plot showing the effect size on the x-axis and sample size on the y-axis as a scatter plot. (A) Funnel plot without publication bias: the individual points are broader at the bottom and narrower at the top. (B) Funnel plot with publication bias: the individual points are located asymmetrically.

Fig. 6. Funnel plot adjusted using the trim-and-fill method. White circles: comparisons included. Black circles: imputed comparisons added by the trim-and-fill method. White diamond: pooled observed log risk ratio. Black diamond: pooled log risk ratio including the imputed comparisons.

Result Presentation

When reporting the results of a systematic review or meta-analysis, the analytical content and methods should be described in detail. First, a flowchart is displayed with the literature search and selection process according to the inclusion/exclusion criteria. Second, a table is shown with the characteristics of the included studies. A table should also be included with information related to the quality of evidence, such as GRADE ( Table 4 ). Third, the results of data analysis are shown in a forest plot and funnel plot. Fourth, if the results use dichotomous data, the NNT values can be reported, as described above.

Table 4. The GRADE Evidence Quality for Each Outcome

N: number of studies, ROB: risk of bias, PON: postoperative nausea, POV: postoperative vomiting, PONV: postoperative nausea and vomiting, CI: confidence interval, RR: risk ratio, AR: absolute risk.

When Review Manager software (The Cochrane Collaboration, UK) is used for the analysis, two types of P values are given. The first is the P value from the z-test, which tests the null hypothesis that the intervention has no effect. The second P value is from the chi-squared test, which tests the null hypothesis for a lack of heterogeneity. The statistical result for the intervention effect, which is generally considered the most important result in meta-analyses, is the z-test P value.

A common mistake when reporting results is, given a z-test P value greater than 0.05, to say there was “no statistical significance” or “no difference.” When evaluating statistical significance in a meta-analysis, a P value lower than 0.05 can be explained as “a significant difference in the effects of the two treatment methods.” However, the P value may appear non-significant whether or not there is a true difference between the two treatment methods. In such a situation, it is better to state that “there was no strong evidence for an effect,” and to present the P value and confidence intervals. Another common mistake is to think that a smaller P value indicates a more significant effect. In meta-analyses of large-scale studies, the P value is affected more by the number of studies and patients included than by the significance of the results; therefore, care should be taken when interpreting the results of a meta-analysis.

When performing a systematic literature review or meta-analysis, if the quality of studies is not properly evaluated or if proper methodology is not strictly applied, the results can be biased and the outcomes can be incorrect. However, when systematic reviews and meta-analyses are properly implemented, they can yield powerful results that could usually only be achieved using large-scale RCTs, which are difficult to perform in individual studies. As our understanding of evidence-based medicine increases and its importance is better appreciated, the number of systematic reviews and meta-analyses will keep increasing. However, indiscriminate acceptance of the results of all these meta-analyses can be dangerous, and hence, we recommend that their results be received critically on the basis of a more accurate understanding.

1) http://www.ohri.ca .

2) http://methods.cochrane.org/bias/assessing-risk-bias-included-studies .

3) The inverse variance-weighted estimation method is useful if the number of studies is small with large sample sizes.

4) The Mantel-Haenszel estimation method is useful if the number of studies is large with small sample sizes.

5) The Peto estimation method is useful if the event rate is low or one of the two groups shows zero incidence.

6) The most popular and simplest statistical method used in Review Manager and Comprehensive Meta-analysis software.

7) Alternative random-effect model meta-analysis that has more adequate error rates than does the common DerSimonian and Laird method, especially when the number of studies is small. However, even with the Hartung-Knapp-Sidik-Jonkman method, when there are fewer than five studies with very unequal sizes, extra caution is needed.

8) The Begg and Mazumdar rank correlation test uses the correlation between the ranks of effect sizes and the ranks of their variances [ 37 ].

9) The degree of funnel plot asymmetry as measured by the intercept from the regression of standard normal deviates against precision [ 29 ].

10) If there are more small studies on one side, we expect the suppression of studies on the other side. Trimming yields the adjusted effect size and reduces the variance of the effects by adding the original studies back into the analysis as a mirror image of each study.

Easy guide to conducting a systematic review

Affiliations.

  • 1 Discipline of Child and Adolescent Health, University of Sydney, Sydney, New South Wales, Australia.
  • 2 Department of Nephrology, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • 3 Education Department, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • PMID: 32364273
  • DOI: 10.1111/jpc.14853

A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy of evidence pyramid. Systematic reviews are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a predefined search strategy to identify and appraise all available published literature on a specific topic. The meticulous nature of the systematic review research methodology differentiates a systematic review from a narrative review (literature review or authoritative review). This paper provides a brief step by step summary of how to conduct a systematic review, which may be of interest for clinicians and researchers.

Keywords: research; research design; systematic review.

© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  • Open access
  • Published: 14 August 2018

Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies

  • Chris Cooper   ORCID: orcid.org/0000-0003-0864-5607 1 ,
  • Andrew Booth 2 ,
  • Jo Varley-Campbell 1 ,
  • Nicky Britten 3 &
  • Ruth Garside 4  

BMC Medical Research Methodology, volume 18, Article number: 85 (2018)


Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before.

The purpose of this review is to determine if a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies.

A literature review.

Two types of literature were reviewed: guidance and published studies. Nine guidance documents were identified, including the Cochrane and Campbell Handbooks. Published studies were identified through ‘pearl growing’, citation chasing, a search of PubMed using the systematic review methods filter, and the authors’ topic knowledge.

The relevant sections within each guidance document were then read and re-read, with the aim of determining key methodological stages. Methodological stages were identified and defined. This data was reviewed to identify agreements and areas of unique guidance between guidance documents. Consensus across multiple guidance documents was used to inform selection of ‘key stages’ in the process of literature searching.

Eight key stages were determined relating specifically to literature searching in systematic reviews. They were: who should literature search, aims and purpose of literature searching, preparation, the search strategy, searching databases, supplementary searching, managing references and reporting the search process.

Conclusions

Eight key stages to the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.


Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving review stakeholders clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. This is in contrast to the information science literature, which has developed information processing models as an explicit basis for dialogue and empirical testing. Without an explicit model, research in the process of systematic literature searching will remain immature and potentially uneven, and the development of shared information models will be assumed but never articulated.

One way of developing such a conceptual model is by formally examining the implicit “programme theory” as embodied in key methodological texts. The aim of this review is therefore to determine if a shared model of the literature searching process in systematic reviews can be detected across guidance documents and, if so, how this process is reported and supported.

Method

Identifying guidance

Key texts (henceforth referred to as “guidance”) were identified based upon their accessibility to, and prominence within, United Kingdom systematic reviewing practice. The United Kingdom occupies a prominent position in the science of health information retrieval, as quantified by such objective measures as the authorship of papers, the number of Cochrane groups based in the UK, membership and leadership of groups such as the Cochrane Information Retrieval Methods Group, the HTA-I Information Specialists’ Group and historic association with such centres as the UK Cochrane Centre, the NHS Centre for Reviews and Dissemination, the Centre for Evidence Based Medicine and the National Institute for Clinical Excellence (NICE). Coupled with the linguistic dominance of English within medical and health science and the science of systematic reviews more generally, this offers a justification for a purposive sample that favours UK, European and Australian guidance documents.

Nine guidance documents were identified. These documents provide guidance for different types of reviews, namely: reviews of interventions, reviews of health technologies, reviews of qualitative research studies, reviews of social science topics, and reviews to inform guidance.

Whilst these guidance documents occasionally offer additional guidance on other types of systematic reviews, we have focused on the core and stated aims of these documents as they relate to literature searching. Table  1 sets out: the guidance document, the version audited, their core stated focus, and a bibliographical pointer to the main guidance relating to literature searching.

Once a list of key guidance documents was determined, it was checked by six senior information professionals based in the UK for relevance to current literature searching in systematic reviews.

Identifying supporting studies

In addition to identifying guidance, the authors sought to populate an evidence base of supporting studies (henceforth referred to as “studies”) that contribute to existing search practice. Studies were first identified by the authors from their knowledge of this topic area and, subsequently, through systematic citation chasing of key studies (‘pearls’ [ 1 ]) located within each key stage of the search process. These studies are identified in Additional file  1 : Appendix Table 1. Citation chasing was conducted by analysing the bibliography of references for each study (backwards citation chasing) and through Google Scholar (forward citation chasing). A search of PubMed using the systematic review methods filter was undertaken in August 2017 (see Additional file 1 ). The search terms used were: (literature search*[Title/Abstract]) AND sysrev_methods[sb], and 586 results were returned. These results were sifted for relevance to the key stages in Fig.  1 by CC.
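For readers who wish to re-run or adapt a PubMed search such as the one above, the sketch below queries NCBI’s public E-utilities endpoint with the same strategy. This is an illustrative sketch only: the E-utilities parameters shown are standard, the email address is a placeholder, and the count returned today will differ from the 586 records retrieved in August 2017 because PubMed is continually updated.

```python
import requests

# NCBI E-utilities 'esearch' endpoint (public, no key required for light use).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    # Title/abstract terms combined with PubMed's systematic review methods filter,
    # exactly as described in the text above.
    "term": "(literature search*[Title/Abstract]) AND sysrev_methods[sb]",
    "retmax": 0,            # only the total count is needed here
    "retmode": "json",
    "email": "searcher@example.org",  # placeholder contact address
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
count = response.json()["esearchresult"]["count"]
print(f"Records currently matching the strategy: {count}")
```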

Figure 1. The key stages of literature search guidance as identified from nine key texts.

Extracting the data

To reveal the implicit process of literature searching within each guidance document, the relevant sections (chapters) on literature searching were read and re-read, with the aim of determining key methodological stages. We defined a key methodological stage as a distinct step in the overall process for which specific guidance is reported and action is taken, and which, together with the other stages, would result in a completed literature search.

The chapter or section sub-heading for each methodological stage was extracted into a table using the exact language as reported in each guidance document. The lead author (CC) then read and re-read these data, and the paragraphs of the document to which the headings referred, summarising section details. This table was then reviewed, using comparison and contrast to identify agreements and areas of unique guidance. Consensus across multiple guidelines was used to inform selection of ‘key stages’ in the process of literature searching.

Having determined the key stages to literature searching, we then read and re-read the sections relating to literature searching again, extracting specific detail relating to the methodological process of literature searching within each key stage. Again, the guidance was then read and re-read, first on a document-by-document basis and, secondly, across all the documents above, to identify both commonalities and areas of unique guidance.

Results and discussion

Our findings

We were able to identify consensus across the guidance on literature searching for systematic reviews suggesting a shared implicit model within the information retrieval community. Whilst the structure of the guidance varies between documents, the same key stages are reported, even where the core focus of each document is different. We were able to identify specific areas of unique guidance, where a document reported guidance not summarised in other documents, together with areas of consensus across guidance.

Unique guidance

Only one document provided guidance on the topic of when to stop searching [ 2 ]. This guidance from 2005 anticipates a topic of increasing importance with the current interest in time-limited (i.e. “rapid”) reviews. Quality assurance (or peer review) of literature searches was only covered in two guidance documents [ 3 , 4 ]. This topic has emerged as increasingly important as indicated by the development of the PRESS instrument [ 5 ]. Text mining was discussed in four guidance documents [ 4 , 6 , 7 , 8 ] where the automation of some manual review work may offer efficiencies in literature searching [ 8 ].

Agreement between guidance: Defining the key stages of literature searching

Where there was agreement on the process, we determined that this constituted a key stage in the process of literature searching to inform systematic reviews.

From the guidance, we determined eight key stages that relate specifically to literature searching in systematic reviews. These are summarised in Fig. 1 . The data extraction table to inform Fig. 1 is reported in Table  2 . Table 2 reports the areas of common agreement and it demonstrates that the language used to describe key stages and processes varies significantly between guidance documents.

For each key stage, we set out the specific guidance, followed by discussion on how this guidance is situated within the wider literature.

Key stage one: Deciding who should undertake the literature search

The guidance.

Eight documents provided guidance on who should undertake literature searching in systematic reviews [ 2 , 4 , 6 , 7 , 8 , 9 , 10 , 11 ]. The guidance affirms that people with relevant expertise of literature searching should ‘ideally’ be included within the review team [ 6 ]. Information specialists (or information scientists), librarians or trial search co-ordinators (TSCs) are indicated as appropriate researchers in six guidance documents [ 2 , 7 , 8 , 9 , 10 , 11 ].

How the guidance corresponds to the published studies

The guidance is consistent with studies that call for the involvement of information specialists and librarians in systematic reviews [ 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 ] and which demonstrate how their training as ‘expert searchers’ and ‘analysers and organisers of data’ can be put to good use [ 13 ] in a variety of roles [ 12 , 16 , 20 , 21 , 24 , 25 , 26 ]. These arguments make sense in the context of the aims and purposes of literature searching in systematic reviews, explored below. The need for ‘thorough’ and ‘replicable’ literature searches was fundamental to the guidance and recurs in key stage two. Studies have found poor reporting, and a lack of replicable literature searches, to be a weakness in systematic reviews [ 17 , 18 , 27 , 28 ] and they argue that involvement of information specialists/ librarians would be associated with better reporting and better quality literature searching. Indeed, Meert et al. [ 29 ] demonstrated that involving a librarian as a co-author to a systematic review correlated with a higher score in the literature searching component of a systematic review [ 29 ]. As ‘new styles’ of rapid and scoping reviews emerge, where decisions on how to search are more iterative and creative, a clear role is made here too [ 30 ].

Knowing where to search for studies was noted as important in the guidance, with no agreement as to the appropriate number of databases to be searched [ 2 , 6 ]. Database (and resource selection more broadly) is acknowledged as a relevant key skill of information specialists and librarians [ 12 , 15 , 16 , 31 ].

Whilst arguments for including information specialists and librarians in the process of systematic review might be considered self-evident, Koffel and Rethlefsen [ 31 ] have questioned if the necessary involvement is actually happening [ 31 ].

Key stage two: Determining the aim and purpose of a literature search

The aim: Five of the nine guidance documents use adjectives such as ‘thorough’, ‘comprehensive’, ‘transparent’ and ‘reproducible’ to define the aim of literature searching [ 6 , 7 , 8 , 9 , 10 ]. Analogous phrases were present in a further three guidance documents, namely: ‘to identify the best available evidence’ [ 4 ] or ‘the aim of the literature search is not to retrieve everything. It is to retrieve everything of relevance’ [ 2 ] or ‘A systematic literature search aims to identify all publications relevant to the particular research question’ [ 3 ]. The Joanna Briggs Institute reviewers’ manual was the only guidance document where a clear statement on the aim of literature searching could not be identified. The purpose of literature searching was defined in three guidance documents, namely to minimise bias in the resultant review [ 6 , 8 , 10 ]. Accordingly, eight of nine documents clearly asserted that thorough and comprehensive literature searches are required as a potential mechanism for minimising bias.

The need for thorough and comprehensive literature searches appears uniform across the eight guidance documents that describe approaches to literature searching in systematic reviews of effectiveness. Reviews of effectiveness (of intervention or cost), accuracy and prognosis, require thorough and comprehensive literature searches to transparently produce a reliable estimate of intervention effect. The belief that all relevant studies have been ‘comprehensively’ identified, and that this process has been ‘transparently’ reported, increases confidence in the estimate of effect and the conclusions that can be drawn [ 32 ]. The supporting literature exploring the need for comprehensive literature searches focuses almost exclusively on reviews of intervention effectiveness and meta-analysis. Different ‘styles’ of review may have different standards, however; the alternative, offered by purposive sampling, has been suggested in the specific context of qualitative evidence syntheses [ 33 ].

What is a comprehensive literature search?

Whilst the guidance calls for thorough and comprehensive literature searches, it lacks clarity on what constitutes a thorough and comprehensive literature search, beyond the implication that all of the literature search methods in Table 2 should be used to identify studies. Egger et al. [ 34 ], in an empirical study evaluating the importance of comprehensive literature searches for trials in systematic reviews, defined a comprehensive search for trials as:

a search not restricted to English language;

where Cochrane CENTRAL or at least two other electronic databases had been searched (such as MEDLINE or EMBASE); and

at least one of the following search methods has been used to identify unpublished trials: searches for (i) conference abstracts, (ii) theses, (iii) trials registers; and (iv) contacts with experts in the field [ 34 ].

Tricco et al. (2008) used a similar threshold of bibliographic database searching AND a supplementary search method in a review when examining the risk of bias in systematic reviews. Their criteria were: one database (limited using the Cochrane Highly Sensitive Search Strategy (HSSS)) and handsearching [ 35 ].

Together with the guidance, this would suggest that comprehensive literature searching requires the use of BOTH bibliographic database searching AND supplementary search methods.
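To make the threshold concrete, the sketch below encodes one reading of Egger et al.’s three criteria as a simple check. The record fields and the example values are invented for illustration; they are not drawn from Egger et al. or from the guidance documents.

```python
from dataclasses import dataclass, field


@dataclass
class SearchRecord:
    """Minimal description of a review's search, with invented field names."""
    english_only: bool
    databases: set = field(default_factory=set)             # e.g. {"CENTRAL", "MEDLINE"}
    unpublished_methods: set = field(default_factory=set)   # e.g. {"trials registers"}


def is_comprehensive(search: SearchRecord) -> bool:
    """Apply one reading of the three conditions set out by Egger et al. (2003)."""
    not_language_restricted = not search.english_only
    enough_databases = "CENTRAL" in search.databases or len(search.databases) >= 2
    sought_unpublished = len(search.unpublished_methods) >= 1
    return not_language_restricted and enough_databases and sought_unpublished


example = SearchRecord(
    english_only=False,
    databases={"MEDLINE", "EMBASE"},
    unpublished_methods={"trials registers"},
)
print(is_comprehensive(example))  # True under this reading of the criteria
```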

Comprehensiveness in literature searching, in the sense of how much searching should be undertaken, remains unclear. Egger et al. recommend that ‘investigators should consider the type of literature search and degree of comprehension that is appropriate for the review in question, taking into account budget and time constraints’ [ 34 ]. This view tallies with the Cochrane Handbook, which stipulates clearly, that study identification should be undertaken ‘within resource limits’ [ 9 ]. This would suggest that the limitations to comprehension are recognised but it raises questions on how this is decided and reported [ 36 ].

What is the point of comprehensive literature searching?

The purpose of thorough and comprehensive literature searches is to avoid missing key studies and to minimize bias [ 6 , 8 , 10 , 34 , 37 , 38 , 39 ] since a systematic review based only on published (or easily accessible) studies may have an exaggerated effect size [ 35 ]. Felson (1992) sets out potential biases that could affect the estimate of effect in a meta-analysis [ 40 ] and Tricco et al. summarize the evidence concerning bias and confounding in systematic reviews [ 35 ]. Egger et al. point to non-publication of studies, publication bias, language bias and MEDLINE bias as key biases [ 34 , 35 , 40 , 41 , 42 , 43 , 44 , 45 , 46 ]. Comprehensive searches are not the sole factor to mitigate these biases but their contribution is thought to be significant [ 2 , 32 , 34 ]. Fehrmann (2011) suggests that describing the search process in detail, and confirming that standard comprehensive search techniques have been applied, increases confidence in the search results [ 32 ].

Does comprehensive literature searching work?

Egger et al., and other study authors, have demonstrated a change in the estimate of intervention effectiveness where relevant studies were excluded from meta-analysis [ 34 , 47 ]. This would suggest that missing studies in literature searching alters the reliability of effectiveness estimates. This is an argument for comprehensive literature searching. Conversely, Egger et al. found that ‘comprehensive’ searches still missed studies and that comprehensive searches could, in fact, introduce bias into a review rather than prevent it, when low quality studies identified by those searches are then included in the meta-analysis [ 34 ]. Studies query whether identifying and including low quality or grey literature studies changes the estimate of effect [ 43 , 48 ] and question whether time is better invested in updating systematic reviews rather than searching for unpublished studies [ 49 ], or in mapping studies for review as opposed to aiming for high sensitivity in literature searching [ 50 ].

Aim and purpose beyond reviews of effectiveness

The need for comprehensive literature searches is less certain in reviews of qualitative studies, and for reviews where a comprehensive identification of studies is difficult to achieve (for example, in public health) [ 33 , 51 , 52 , 53 , 54 , 55 ]. Literature searching for qualitative studies, and in public health topics, typically generates a greater number of studies to sift than in reviews of effectiveness [ 39 ] and demonstrating the ‘value’ of studies identified or missed is harder [ 56 ], since the study data do not typically support meta-analysis. Nussbaumer-Streit et al. (2016) have registered a review protocol to assess whether abbreviated literature searches (as opposed to comprehensive literature searches) have an impact on conclusions across multiple bodies of evidence, not only on effect estimates [ 57 ], which may develop this understanding. It may be that decision makers and users of systematic reviews are willing to trade the certainty from a comprehensive literature search and systematic review in exchange for different approaches to evidence synthesis [ 58 ], and that comprehensive literature searches are not necessarily a marker of literature search quality, as previously thought [ 36 ]. Different approaches to literature searching [ 37 , 38 , 59 , 60 , 61 , 62 ] and developing the concept of when to stop searching are important areas for further study [ 36 , 59 ].

The study by Nussbaumer-Streit et al. has been published since the submission of this literature review [ 63 ]. Nussbaumer-Streit et al. (2018) conclude that abbreviated literature searches are viable options for rapid evidence syntheses, if decision-makers are willing to trade the certainty from a comprehensive literature search and systematic review, but that decision-making which demands detailed scrutiny should still be based on comprehensive literature searches [ 63 ].

Key stage three: Preparing for the literature search

Six documents provided guidance on preparing for a literature search [ 2 , 3 , 6 , 7 , 9 , 10 ]. The Cochrane Handbook clearly stated that Cochrane authors (i.e. researchers) should seek advice from a trial search co-ordinator (i.e. a person with specific skills in literature searching) ‘before’ starting a literature search [ 9 ].

Two key tasks were perceptible in preparing for a literature search [ 2 , 6 , 7 , 10 , 11 ]. First, to determine if there are any existing or on-going reviews, or if a new review is justified [ 6 , 11 ]; and, secondly, to develop an initial literature search strategy to estimate the volume of relevant literature (and quality of a small sample of relevant studies [ 10 ]) and indicate the resources required for literature searching and the review of the studies that follows [ 7 , 10 ].

Three documents summarised guidance on where to search to determine if a new review was justified [ 2 , 6 , 11 ]. These focused on searching databases of systematic reviews (The Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE)), institutional registries (including PROSPERO), and MEDLINE [ 6 , 11 ]. It is worth noting, however, that as of 2015, DARE (and NHS EED) are no longer being updated and so the relevance of these resources will diminish over time [ 64 ]. One guidance document, ‘Systematic reviews in the Social Sciences’, noted, however, that databases are not the only source of information and unpublished reports, conference proceedings and grey literature may also be required, depending on the nature of the review question [ 2 ].

Two documents reported clearly that this preparation (or ‘scoping’) exercise should be undertaken before the actual search strategy is developed [ 7 , 10 ].

The guidance offers the best available source on preparing the literature search, since the published studies do not typically report how scoping informed the development of their search strategies, nor how their search approaches were developed. Text mining has been proposed as a technique to develop search strategies in the scoping stages of a review, although this work is still exploratory [ 65 ]. ‘Clustering documents’ and word frequency analysis have also been tested to identify search terms and studies for review [ 66 , 67 ]. Preparing for literature searches and scoping constitutes an area for future research.

Key stage four: Designing the search strategy

The Population, Intervention, Comparator, Outcome (PICO) structure was the most commonly reported structure promoted to design a literature search strategy. Five documents suggested that the eligibility criteria or review question will determine which concepts of PICO will be populated to develop the search strategy [ 1 , 4 , 7 , 8 , 9 ]. The NICE handbook promoted multiple structures, namely PICO, SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) and multi-stranded approaches [ 4 ].

With the exclusion of The Joanna Briggs Institute reviewers’ manual, the guidance offered detail on selecting key search terms, synonyms, Boolean language, selecting database indexing terms and combining search terms. The CEE handbook suggested that ‘search terms may be compiled with the help of the commissioning organisation and stakeholders’ [ 10 ].
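As a simple illustration of the conceptual approach described above (and not a validated strategy), the sketch below ORs the synonyms within each populated PICO concept and ANDs the concept blocks together; the concepts and terms are invented examples.

```python
# Invented example concepts; comparator and outcome blocks are often left
# unsearched in practice to preserve sensitivity.
pico_blocks = {
    "population": ["adolescen*", "teenager*", "young people"],
    "intervention": ["exercise", "physical activity"],
}


def build_query(blocks: dict) -> str:
    """OR the synonyms within each concept, then AND the concept blocks together."""
    concept_strings = [
        "(" + " OR ".join(terms) + ")" for terms in blocks.values() if terms
    ]
    return " AND ".join(concept_strings)


print(build_query(pico_blocks))
# (adolescen* OR teenager* OR young people) AND (exercise OR physical activity)
```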

The use of limits, such as language or date limits, was discussed in all documents [ 2 , 3 , 4 , 6 , 7 , 8 , 9 , 10 , 11 ].

Search strategy structure

The guidance typically relates to reviews of intervention effectiveness so PICO – with its focus on intervention and comparator - is the dominant model used to structure literature search strategies [ 68 ]. PICOs – where the S denotes study design - is also commonly used in effectiveness reviews [ 6 , 68 ]. As the NICE handbook notes, alternative models to structure literature search strategies have been developed and tested. Booth provides an overview on formulating questions for evidence based practice [ 69 ] and has developed a number of alternatives to the PICO structure, namely: BeHEMoTh (Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory [ 55 ]; SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) for identification of social science and evaluation studies [ 69 ] and, working with Cooke and colleagues, SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) [ 70 ]. SPIDER has been compared to PICO and PICOs in a study by Methley et al. [ 68 ].

The NICE handbook also suggests the use of multi-stranded approaches to developing literature search strategies [ 4 ]. Glanville developed this idea in a study by Whiting et al. [ 71 ] and a worked example of this approach is included in the development of a search filter by Cooper et al. [ 72 ].

Writing search strategies: Conceptual and objective approaches

Hausner et al. [ 73 ] provide guidance on writing literature search strategies, delineating between conceptually and objectively derived approaches. The conceptual approach, advocated by and explained in the guidance documents, relies on the expertise of the literature searcher to identify key search terms and then develop key terms to include synonyms and controlled syntax. Hausner and colleagues set out the objective approach [ 73 ] and describe what may be done to validate it [ 74 ].

The use of limits

The guidance documents offer direction on the use of limits within a literature search. Limits can be used to focus literature searching on specific study designs, or by other markers (such as date), which restricts the number of studies returned by a literature search. The use of limits should be described and their implications explored [ 34 ] since limiting literature searching can introduce bias (explored above). Craven et al. have suggested the use of a supporting narrative to explain decisions made in the process of developing literature searches and this advice would usefully capture decisions on the use of search limits [ 75 ].

Key stage five: Determining the process of literature searching and deciding where to search (bibliographic database searching)

Table 2 summarises the process of literature searching as reported in each guidance document. Searching bibliographic databases was consistently reported as the ‘first step’ to literature searching in all nine guidance documents.

Three documents reported specific guidance on where to search, in each case specific to the type of review their guidance informed, and as a minimum requirement [ 4 , 9 , 11 ]. Seven of the key guidance documents suggest that the selection of bibliographic databases depends on the topic of review [ 2 , 3 , 4 , 6 , 7 , 8 , 10 ], with two documents noting the absence of an agreed standard on what constitutes an acceptable number of databases searched [ 2 , 6 ].

The guidance documents summarise ‘how to’ search bibliographic databases in detail and this guidance is further contextualised above in terms of developing the search strategy. The documents provide guidance on selecting bibliographic databases, in some cases stating acceptable minima (i.e. The Cochrane Handbook states Cochrane CENTRAL, MEDLINE and EMBASE), and in other cases simply listing the bibliographic databases available to search. Studies have explored the value in searching specific bibliographic databases, with Wright et al. (2015) noting the contribution of CINAHL in identifying qualitative studies [ 76 ], Beckles et al. (2013) questioning the contribution of CINAHL to identifying clinical studies for guideline development [ 77 ], and Cooper et al. (2015) exploring the role of UK-focused bibliographic databases to identify UK-relevant studies [ 78 ]. The host of the database (e.g. OVID or ProQuest) has been shown to alter the search returns offered. Younger and Boddy [ 79 ] report differing search returns from the same database (AMED) where the ‘host’ was different [ 79 ].

The average number of bibliographic databases searched in systematic reviews has risen in the period 1994–2014 (from 1 to 4) [ 80 ] but there remains (as attested to by the guidance) no consensus on what constitutes an acceptable number of databases searched [ 48 ]. This is perhaps because the number of databases searched is the wrong question; researchers should instead focus on which databases were searched and why, and which databases were not searched and why. The discussion should re-orientate to the differential value of sources, but researchers need to think about how to report this in studies to allow findings to be generalised. Bethel (2017) has proposed ‘search summaries’, completed by the literature searcher, to record where included studies were identified, whether from databases (and which databases specifically) or from supplementary search methods [ 81 ]. Search summaries document both the yield and accuracy of searches, which could prospectively inform resource use and decisions to search or not to search specific databases in topic areas. The prospective use of such data presupposes, however, that past searches are a potential predictor of future search performance (i.e. that each topic is to be considered representative and not unique). In offering a body of practice, these data would be of greater practical use than current studies, which are considered as little more than individual case studies [ 82 , 83 , 84 , 85 , 86 , 87 , 88 , 89 , 90 ].
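A speculative sketch of the kind of ‘search summary’ Bethel proposes is shown below: for each source, the number of records retrieved (yield) and the number of the review’s included studies found there. All names and figures are invented, and a real summary would be completed retrospectively by the literature searcher.

```python
# Invented data: the review's final included studies, and what each source returned.
included_studies = {"study_01", "study_02", "study_03", "study_04"}

sources = {
    "MEDLINE": {"retrieved": 1850, "included_found": {"study_01", "study_02", "study_03"}},
    "EMBASE": {"retrieved": 2120, "included_found": {"study_01", "study_02"}},
    "CINAHL": {"retrieved": 640, "included_found": {"study_04"}},
    "Citation chasing": {"retrieved": 55, "included_found": {"study_03", "study_04"}},
}

for name, data in sources.items():
    hits = len(data["included_found"])
    precision = hits / data["retrieved"]        # included studies per record retrieved
    coverage = hits / len(included_studies)     # share of the included set found here
    print(f"{name}: yield={data['retrieved']}, included found={hits}, "
          f"precision={precision:.2%}, coverage={coverage:.0%}")
```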

When to search a given database, within the overall sequence of searching, is another question posed in the literature. Beyer et al. [ 91 ] report that databases can be prioritised for literature searching which, whilst not addressing the question of which databases to search, may at least bring clarity as to which databases to search first [ 91 ]. Paradoxically, this links to studies that suggest PubMed should be searched in addition to MEDLINE (OVID interface) since this improves the currency of systematic reviews [ 92 , 93 ]. Cooper et al. (2017) have tested the idea of database searching not as a primary search method (as suggested in the guidance) but as a supplementary search method, in order to manage the volume of studies identified for an environmental effectiveness systematic review. Their case study compared the effectiveness of database searching versus a protocol using supplementary search methods and found that the latter identified more relevant studies for review than searching bibliographic databases [ 94 ].

Key stage six: Determining the process of literature searching and deciding where to search (supplementary search methods)

Table 2 also summarises the process of literature searching which follows bibliographic database searching. As Table 2 sets out, guidance that supplementary literature search methods should be used in systematic reviews recurs across documents, but the order in which these methods are used, and the extent to which they are used, varies. We noted inconsistency in the labelling of supplementary search methods between guidance documents.

Rather than focus on the guidance on how to use the methods (which has been summarised in a recent review [ 95 ]), we focus on the aim or purpose of supplementary search methods.

The Cochrane Handbook reported that ‘efforts’ to identify unpublished studies should be made [ 9 ]. Four guidance documents [ 2 , 3 , 6 , 9 ] acknowledged that searching beyond bibliographic databases was necessary since ‘databases are not the only source of literature’ [ 2 ]. Only one document reported any guidance on determining when to use supplementary methods. The IQWiG handbook reported that the use of handsearching (in their example) could be determined on a ‘case-by-case basis’, which implies that the use of these methods is optional rather than mandatory. This is in contrast to the guidance (above) on bibliographic database searching.

The issue for supplementary search methods is similar in many ways to the issue of searching bibliographic databases: demonstrating value. The purpose and contribution of supplementary search methods in systematic reviews is increasingly acknowledged [ 37 , 61 , 62 , 96 , 97 , 98 , 99 , 100 , 101 ] but the value of these search methods for identifying studies and data remains unclear. In a recently published review, Cooper et al. (2017) reviewed the literature on supplementary search methods to determine the advantages, disadvantages and resource implications of using supplementary search methods [ 95 ]. This review also summarises the key guidance and empirical studies and seeks to address the question of when to use these search methods and when not to [ 95 ]. The guidance is limited in this regard and, as Table 2 demonstrates, offers conflicting advice on the order of searching, and the extent to which these search methods should be used in systematic reviews.
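To make one of these supplementary methods concrete, the sketch below illustrates backwards citation chasing of a ‘pearl’ using the public Crossref REST API, which exposes the reference list a publisher has deposited for a given DOI. This is an assumption-laden illustration rather than a recommended workflow: the DOI shown is a placeholder, reference lists are only available where deposited, and forward citation chasing (e.g. via Google Scholar, as described in the Method section) is not covered here.

```python
import requests

# Placeholder DOI for a 'pearl' article; substitute a real DOI of interest.
pearl_doi = "10.1000/example-doi"

response = requests.get(f"https://api.crossref.org/works/{pearl_doi}", timeout=30)
response.raise_for_status()
work = response.json()["message"]

# 'reference' is present only when the publisher has deposited the reference list.
for ref in work.get("reference", []):
    # Each entry may carry a DOI, a structured citation, or only an unstructured string.
    print(ref.get("DOI") or ref.get("unstructured", "reference without DOI or text"))
```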

Key stage seven: Managing the references

Five of the documents provided guidance on managing references, for example downloading, de-duplicating and managing the output of literature searches [ 2 , 4 , 6 , 8 , 10 ]. This guidance typically itemised available bibliographic management tools rather than offering guidance on how to use them specifically [ 2 , 4 , 6 , 8 ]. The CEE handbook provided guidance on importing data where no direct export option is available (e.g. web-searching) [ 10 ].

The literature on using bibliographic management tools is not large relative to the number of ‘how to’ videos on platforms such as YouTube (see for example [ 102 ]). These YouTube videos confirm the overall lack of ‘how to’ guidance identified in this study and offer useful instruction on managing references. Bramer et al. set out methods for de-duplicating data and reviewing references in EndNote [ 103 , 104 ] and Gall tests the direct search function within EndNote to access databases such as PubMed, finding a number of limitations [ 105 ]. Coar et al. and Ahmed et al. consider the role of the free-source tool, Zotero [ 106 , 107 ]. Managing references is a key administrative function in the process of review, particularly for documenting searches as required by PRISMA guidance.
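As a minimal, assumption-laden sketch of what de-duplication involves (rather than a substitute for the EndNote methods of Bramer et al. or a reference manager’s built-in tools), the example below groups exported records on DOI where available and otherwise on a normalised title; all records shown are invented.

```python
import re


def normalise_title(title: str) -> str:
    """Lower-case and strip punctuation so near-identical titles compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()


def deduplicate(records: list) -> list:
    """Keep the first record seen for each DOI (or, failing that, normalised title)."""
    seen, unique = set(), []
    for record in records:
        key = record.get("doi") or normalise_title(record.get("title", ""))
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique


# Invented records exported from two databases; the first two are duplicates.
records = [
    {"title": "An example trial of X.", "doi": "10.1000/example-1"},
    {"title": "An Example Trial of X", "doi": "10.1000/example-1"},
    {"title": "A different study entirely", "doi": None},
]
print(len(deduplicate(records)))  # 2
```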

Key stage eight: Documenting the search

The Cochrane Handbook was the only guidance document to recommend a specific reporting guideline: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 9 ]. Six documents provided guidance on reporting the process of literature searching with specific criteria to report [ 3 , 4 , 6 , 8 , 9 , 10 ]. There was consensus on reporting: the databases searched (and the host via which they were searched), the search strategies used, and any use of limits (e.g. date, language, or search filters; the CRD handbook called for these limits to be justified [ 6 ]). Three guidance documents reported that the number of studies identified should be recorded [ 3 , 6 , 10 ]. The number of duplicates identified [ 10 ], the screening decisions [ 3 ], a comprehensive list of grey literature sources searched (and full detail for other supplementary search methods) [ 8 ], and an annotation of search terms tested but not used [ 4 ] were identified as unique items in four documents.

The Cochrane Handbook was the only guidance document to note that the full search strategies for each database should be included in an additional file (appendix) of the review [ 9 ].
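As an illustration of how the consensus reporting items above might be captured in practice, the sketch below records one database search as a structured log entry. The fields, figures and strategy lines are invented placeholders, not a format prescribed by any of the guidance documents; the intent is simply to show a record that would support PRISMA-style reporting.

```python
import json
from datetime import date

# One invented log entry per database searched; figures and strategy lines are placeholders.
search_log_entry = {
    "database": "MEDLINE",
    "host": "Ovid",
    "date_searched": date(2017, 8, 14).isoformat(),
    "strategy": [
        "1. exp Stroke/",
        "2. exercise.ti,ab.",
        "3. 1 and 2",
    ],
    "limits": {"language": None, "date_range": "1946 to August 2017", "filters": []},
    "records_retrieved": 412,
    "notes": "Full strategy also reproduced in an additional file, per the Cochrane Handbook.",
}

print(json.dumps(search_log_entry, indent=2))
```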

All guidance documents should ultimately deliver completed systematic reviews that fulfil the requirements of the PRISMA reporting guidelines [ 108 ]. The guidance broadly requires the reporting of data that corresponds with the requirements of the PRISMA statement although documents typically ask for diverse and additional items [ 108 ]. In 2008, Sampson et al. observed a lack of consensus on reporting search methods in systematic reviews [ 109 ] and this remains the case as of 2017, as evidenced in the guidance documents, and in spite of the publication of the PRISMA guidelines in 2009 [ 110 ]. It is unclear why the collective guidance does not more explicitly endorse adherence to the PRISMA guidance.

Reporting of literature searching is a key area in systematic reviews since it sets out clearly what was done and how the conclusions of the review can be believed [ 52 , 109 ]. Despite strong endorsement in the guidance documents, specifically supported in PRISMA guidance, and other related reporting standards too (such as ENTREQ for qualitative evidence synthesis, STROBE for reviews of observational studies), authors still highlight the prevalence of poor standards of literature search reporting [ 31 , 110 , 111 , 112 , 113 , 114 , 115 , 116 , 117 , 118 , 119 ]. To explore issues experienced by authors in reporting literature searches, and to look at uptake of PRISMA, Radar et al. [ 120 ] surveyed over 260 review authors to determine common problems, and their work summarises the practical aspects of reporting literature searching [ 120 ]. Atkinson et al. [ 121 ] have also analysed reporting standards for literature searching, summarising recommendations and gaps for reporting search strategies [ 121 ].

One area that is less well covered by the guidance, but nevertheless appears in this literature, is the quality appraisal or peer review of literature search strategies. The PRESS checklist is the most prominent example and it aims to develop evidence-based guidelines for the peer review of electronic search strategies [ 5 , 122 , 123 ]. A corresponding guideline for the documentation of supplementary search methods does not yet exist, although this idea is currently being explored.

How the reporting of the literature searching process corresponds to critical appraisal tools is an area for further research. In the survey undertaken by Radar et al. (2014), 86% of survey respondents (153/178) identified a need for further guidance on what aspects of the literature search process to report [ 120 ]. The PRISMA statement offers a brief summary of what to report but little practical guidance on how to report it [ 108 ]. Critical appraisal tools for systematic reviews, such as AMSTAR 2 (Shea et al. [ 124 ]) and ROBIS (Whiting et al. [ 125 ]), can usefully be read alongside PRISMA guidance, since they offer greater detail on how the reporting of the literature search will be appraised and, therefore, they offer a proxy on what to report [ 124 , 125 ]. Further research in the form of a study which undertakes a comparison between PRISMA and quality appraisal checklists for systematic reviews would seem to begin addressing the call, identified by Radar et al., for further guidance on what to report [ 120 ].

Limitations

Other handbooks exist.

A potential limitation of this literature review is the focus on guidance produced in Europe (the UK specifically) and Australia. We justify the decision for our selection of the nine guidance documents reviewed in this literature review in the section “Identifying guidance”. In brief, these nine guidance documents were selected as the most relevant health care guidance that informs UK systematic reviewing practice, given that the UK occupies a prominent position in the science of health information retrieval. We acknowledge the existence of other guidance documents, such as those from North America (e.g. the Agency for Healthcare Research and Quality (AHRQ) [ 126 ], The Institute of Medicine [ 127 ] and the guidance and resources produced by the Canadian Agency for Drugs and Technologies in Health (CADTH) [ 128 ]). We comment further on this directly below.

The handbooks are potentially linked to one another

What is not clear is the extent to which the guidance documents inter-relate or provide guidance uniquely. The Cochrane Handbook, first published in 1994, is notably a key source of reference in guidance and systematic reviews beyond Cochrane reviews. It is not clear to what extent broadening the sample of guidance handbooks to include North American handbooks, and guidance handbooks from other relevant countries too, would alter the findings of this literature review or develop further support for the process model. Since we cannot be clear, we raise this as a potential limitation of this literature review. On our initial review of a sample of North American, and other, guidance documents (before selecting the guidance documents considered in this review), however, we do not consider that the inclusion of these further handbooks would alter significantly the findings of this literature review.

This is a literature review

A further limitation of this review was that the review of published studies is not a systematic review of the evidence for each key stage. It is possible that other relevant studies could help contribute to the exploration and development of the key stages identified in this review.

Conclusions

This literature review would appear to demonstrate the existence of a shared model of the literature searching process in systematic reviews. We call this model ‘the conventional approach’, since it appears to be common convention in nine different guidance documents.

The findings reported above reveal eight key stages in the process of literature searching for systematic reviews. These key stages are consistently reported in the nine guidance documents which suggests consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews.

In Table 2 , we demonstrate consensus regarding the application of literature search methods. All guidance documents distinguish between primary and supplementary search methods. Bibliographic database searching is consistently the first method of literature searching referenced in each guidance document. Whilst the guidance uniformly supports the use of supplementary search methods, there is little evidence for a consistent process with diverse guidance across documents. This may reflect differences in the core focus across each document, linked to differences in identifying effectiveness studies or qualitative studies, for instance.

Eight of the nine guidance documents reported on the aims of literature searching. The shared understanding was that literature searching should be thorough and comprehensive in its aim and that this process should be reported transparently so that it could be reproduced. Whilst only three documents explicitly link this understanding to minimising bias, it is clear that comprehensive literature searching is implicitly linked to ‘not missing relevant studies’, which is approximately the same point.

Defining the key stages in this review helps categorise the scholarship available, and it prioritises areas for development or further study. The supporting studies on preparing for literature searching (key stage three, ‘preparation’) were, for example, comparatively few, and yet this key stage represents a decisive moment in literature searching for systematic reviews. It is where search strategy structure is determined, search terms are chosen or discarded, and the resources to be searched are selected. Information specialists, librarians and researchers are well placed to develop these and other areas within the key stages we identify.

This review calls for further research to determine the suitability of using the conventional approach. The publication dates of the guidance documents which underpin the conventional approach may raise questions as to whether the process which they each report remains valid for current systematic literature searching. In addition, it may be useful to test whether it is desirable to use the same process model of literature searching for qualitative evidence synthesis as that for reviews of intervention effectiveness, which this literature review demonstrates is presently recommended best practice.

Abbreviations

BeHEMoTh: Behaviour of interest; Health context; Exclusions; Models or Theories

CDSR: Cochrane Database of Systematic Reviews

CENTRAL: The Cochrane Central Register of Controlled Trials

DARE: Database of Abstracts of Reviews of Effects

ENTREQ: Enhancing transparency in reporting the synthesis of qualitative research

IQWiG: Institute for Quality and Efficiency in Healthcare

NICE: National Institute for Clinical Excellence

PICO: Population, Intervention, Comparator, Outcome

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SPICE: Setting, Perspective, Intervention, Comparison, Evaluation

SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

STROBE: STrengthening the Reporting of OBservational studies in Epidemiology

TSC: Trial Search Co-ordinator

References

Booth A. Unpacking your literature search toolbox: on search styles and tactics. Health Information & Libraries Journal. 2008;25(4):313–7.

Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing Ltd; 2006.

Institute for Quality and Efficiency in Health Care (IQWiG). IQWiG Methods Resources. 7 Information retrieval. 2014. Available from: https://www.ncbi.nlm.nih.gov/books/NBK385787/.

NICE: National Institute for Health and Care Excellence. Developing NICE guidelines: the manual 2014. Available from: https://www.nice.org.uk/media/default/about/what-we-do/our-programmes/developing-nice-guidelines-the-manual.pdf .

Sampson M, McGowan J, Lefebvre C, Moher D, Grimshaw J. Peer Review of Electronic Search Strategies: PRESS; 2008.

Centre for Reviews & Dissemination. Systematic reviews – CRD’s guidance for undertaking reviews in healthcare. York: Centre for Reviews and Dissemination, University of York; 2009.

EUnetHTA: European Network for Health Technology Assessment. Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness. 2016. Available from: http://www.eunethta.eu/sites/default/files/Guideline_Information_Retrieval_V1-1.pdf.

Kugley S, Wade A, Thomas J, Mahood Q, Jørgensen AMK, Hammerstrøm K, Sathe N. Searching for studies: a guide to information retrieval for Campbell systematic reviews. Oslo: Campbell Collaboration; 2017. Available from: https://www.campbellcollaboration.org/library/searching-for-studies-information-retrieval-guide-campbell-reviews.html

Lefebvre C, Manheimer E, Glanville J. Chapter 6: searching for studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions; 2011.

Collaboration for Environmental Evidence. Guidelines for systematic review and evidence synthesis in environmental management. Environmental Evidence; 2013. Available from: http://www.environmentalevidence.org/wp-content/uploads/2017/01/Review-guidelines-version-4.2-final-update.pdf.

The Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual. 2014 ed. The Joanna Briggs Institute; 2014. Available from: https://joannabriggs.org/assets/docs/sumari/ReviewersManual-2014.pdf

Beverley CA, Booth A, Bath PA. The role of the information specialist in the systematic review process: a health information case study. Health Inf Libr J. 2003;20(2):65–74.

Harris MR. The librarian's roles in the systematic review process: a case study. Journal of the Medical Library Association. 2005;93(1):81–7.

Egger JB. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors. PLoS One. 2015;10(5):e0125931.

Li L, Tian J, Tian H, Moher D, Liang F, Jiang T, et al. Network meta-analyses could be improved by searching more sources and by involving a librarian. J Clin Epidemiol. 2014;67(9):1001–7.

McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93(1):74–80.

Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68(6):617–26.

Weller AC. Mounting evidence that librarians are essential for comprehensive literature searches for meta-analyses and Cochrane reports. J Med Libr Assoc. 2004;92(2):163–4.

Swinkels A, Briddon J, Hall J. Two physiotherapists, one librarian and a systematic literature review: collaboration in action. Health Info Libr J. 2006;23(4):248–56.

Foster M. An overview of the role of librarians in systematic reviews: from expert search to project manager. EAHIL. 2015;11(3):3–7.

Lawson L. Operating outside library walls. 2004.

Vassar M, Yerokhin V, Sinnett PM, Weiher M, Muckelrath H, Carr B, et al. Database selection in systematic reviews: an insight through clinical neurology. Health Inf Libr J. 2017;34(2):156–64.

Townsend WA, Anderson PF, Ginier EC, MacEachern MP, Saylor KM, Shipman BL, et al. A competency framework for librarians involved in systematic reviews. Journal of the Medical Library Association : JMLA. 2017;105(3):268–75.

Cooper ID, Crum JA. New activities and changing roles of health sciences librarians: a systematic review, 1990-2012. Journal of the Medical Library Association : JMLA. 2013;101(4):268–77.

Crum JA, Cooper ID. Emerging roles for biomedical librarians: a survey of current practice, challenges, and changes. Journal of the Medical Library Association : JMLA. 2013;101(4):278–86.

Dudden RF, Protzko SL. The systematic review team: contributions of the health sciences librarian. Med Ref Serv Q. 2011;30(3):301–15.

Golder S, Loke Y, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008;61(5):440–8.

Maggio LA, Tannery NH, Kanter SL. Reproducibility of literature search reporting in medical education reviews. Academic medicine : journal of the Association of American Medical Colleges. 2011;86(8):1049–54.

Meert D, Torabi N, Costella J. Impact of librarians on reporting of the literature searching component of pediatric systematic reviews. Journal of the Medical Library Association : JMLA. 2016;104(4):267–77.

Morris M, Boruff JT, Gore GC. Scoping reviews: establishing the role of the librarian. Journal of the Medical Library Association : JMLA. 2016;104(4):346–54.

Koffel JB, Rethlefsen ML. Reproducibility of search strategies is poor in systematic reviews published in high-impact pediatrics, cardiology and surgery journals: a cross-sectional study. PLoS One. 2016;11(9):e0163309.

Fehrmann P, Thomas J. Comprehensive computer searches and reporting in systematic reviews. Research Synthesis Methods. 2011;2(1):15–32.

Booth A. Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Systematic Reviews. 2016;5(1):74.

Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health technology assessment (Winchester, England). 2003;7(1):1–76.

Tricco AC, Tetzlaff J, Sampson M, Fergusson D, Cogo E, Horsley T, et al. Few systematic reviews exist documenting the extent of bias: a systematic review. J Clin Epidemiol. 2008;61(5):422–34.

Booth A. How much searching is enough? Comprehensive versus optimal retrieval for technology assessments. Int J Technol Assess Health Care. 2010;26(4):431–5.

Papaioannou D, Sutton A, Carroll C, Booth A, Wong R. Literature searching for social science systematic reviews: consideration of a range of search techniques. Health Inf Libr J. 2010;27(2):114–22.

Petticrew M. Time to rethink the systematic review catechism? Moving from ‘what works’ to ‘what happens’. Systematic Reviews. 2015;4(1):36.

Betrán AP, Say L, Gülmezoglu AM, Allen T, Hampson L. Effectiveness of different databases in identifying studies for systematic reviews: experience from the WHO systematic review of maternal morbidity and mortality. BMC Med Res Methodol. 2005;5

Felson DT. Bias in meta-analytic research. J Clin Epidemiol. 1992;45(8):885–92.

Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345(6203):1502–5.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. BMC Med Res Methodol. 2017;17(1):64.

Schmucker CM, Blümle A, Schell LK, Schwarzer G, Oeller P, Cabrera L, et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12(4):e0176210.

Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G. Language bias in randomised controlled trials published in English and German. Lancet (London, England). 1997;350(9074):326–9.

Moher D, Pham B, Lawson ML, Klassen TP. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health technology assessment (Winchester, England). 2003;7(41):1–90.

Pham B, Klassen TP, Lawson ML, Moher D. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. J Clin Epidemiol. 2005;58(8):769–76.

Mills EJ, Kanters S, Thorlund K, Chaimani A, Veroniki A-A, Ioannidis JPA. The effects of excluding treatments from network meta-analyses: survey. BMJ : British Medical Journal. 2013;347

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16(1):127.

van Driel ML, De Sutter A, De Maeseneer J, Christiaens T. Searching for unpublished trials in Cochrane reviews may not be worth the effort. J Clin Epidemiol. 2009;62(8):838–44.e3.

Buchberger B, Krabbe L, Lux B, Mattivi JT. Evidence mapping for decision making: feasibility versus accuracy - when to abandon high sensitivity in electronic searches. German medical science : GMS e-journal. 2016;14:Doc09.

Lorenc T, Pearson M, Jamal F, Cooper C, Garside R. The role of systematic reviews of qualitative evidence in evaluating interventions: a case study. Research Synthesis Methods. 2012;3(1):1–10.

Gough D. Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Res Pap Educ. 2007;22(2):213–28.

Barroso J, Gollop CJ, Sandelowski M, Meynell J, Pearce PF, Collins LJ. The challenges of searching for and retrieving qualitative studies. West J Nurs Res. 2003;25(2):153–78.

Britten N, Garside R, Pope C, Frost J, Cooper C. Asking more of qualitative synthesis: a response to Sally Thorne. Qual Health Res. 2017;27(9):1370–6.

Booth A, Carroll C. Systematic searching for theory to inform systematic reviews: is it feasible? Is it desirable? Health Info Libr J. 2015;32(3):220–35.

Kwon Y, Powelson SE, Wong H, Ghali WA, Conly JM. An assessment of the efficacy of searching in biomedical databases beyond MEDLINE in identifying studies for a systematic review on ward closures as an infection control intervention to control outbreaks. Syst Rev. 2014;3:135.

Nussbaumer-Streit B, Klerings I, Wagner G, Titscher V, Gartlehner G. Assessing the validity of abbreviated literature searches for rapid reviews: protocol of a non-inferiority and meta-epidemiologic study. Systematic Reviews. 2016;5:197.

Wagner G, Nussbaumer-Streit B, Greimel J, Ciapponi A, Gartlehner G. Trading certainty for speed - how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews: an international survey. BMC Med Res Methodol. 2017;17(1):121.

Ogilvie D, Hamilton V, Egan M, Petticrew M. Systematic reviews of health effects of social interventions: 1. Finding the evidence: how far should you go? J Epidemiol Community Health. 2005;59(9):804–8.

Royle P, Milne R. Literature searching for randomized controlled trials used in Cochrane reviews: rapid versus exhaustive searches. Int J Technol Assess Health Care. 2003;19(4):591–603.

Pearson M, Moxham T, Ashton K. Effectiveness of search strategies for qualitative research about barriers and facilitators of program delivery. Eval Health Prof. 2011;34(3):297–308.

Levay P, Raynor M, Tuvey D. The contributions of MEDLINE, other bibliographic databases and various search techniques to NICE public health guidance. 2015;10(1):19.

Nussbaumer-Streit B, Klerings I, Wagner G, Heise TL, Dobrescu AI, Armijo-Olivo S, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.

Briscoe S, Cooper C, Glanville J, Lefebvre C. The loss of the NHS EED and DARE databases and the effect on evidence synthesis and evaluation. Res Synth Methods. 2017;8(3):256–7.

Stansfield C, O'Mara-Eves A, Thomas J. Text mining for search term development in systematic reviewing: a discussion of some methods and challenges. Research Synthesis Methods.

Petrova M, Sutcliffe P, Fulford KW, Dale J. Search terms and a validated brief search filter to retrieve publications on health-related values in Medline: a word frequency analysis study. Journal of the American Medical Informatics Association : JAMIA. 2012;19(3):479–88.

Stansfield C, Thomas J, Kavanagh J. 'Clustering' documents automatically to support scoping reviews of research: a case study. Res Synth Methods. 2013;4(3):230–41.

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Booth A. Clear and present questions: formulating questions for evidence based practice. Library Hi Tech. 2006;24(3):355–68.

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22(10):1435–43.

Whiting P, Westwood M, Bojke L, Palmer S, Richardson G, Cooper J, et al. Clinical effectiveness and cost-effectiveness of tests for the diagnosis and investigation of urinary tract infection in children: a systematic review and economic model. Health technology assessment (Winchester, England). 2006;10(36):iii-iv, xi-xiii, 1–154.

Cooper C, Levay P, Lorenc T, Craig GM. A population search filter for hard-to-reach populations increased search efficiency for a systematic review. J Clin Epidemiol. 2014;67(5):554–9.

Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Systematic Reviews. 2012;1(1):19.

Hausner E, Guddat C, Hermanns T, Lampert U, Waffenschmidt S. Prospective comparison of search strategies for systematic reviews: an objective approach yielded higher sensitivity than a conceptual one. J Clin Epidemiol. 2016;77:118–24.

Craven J, Levay P. Recording database searches for systematic reviews - what is the value of adding a narrative to peer-review checklists? A case study of NICE interventional procedures guidance. Evid Based Libr Inf Pract. 2011;6(4):72–87.

Wright K, Golder S, Lewis-Light K. What value is the CINAHL database when searching for systematic reviews of qualitative studies? Syst Rev. 2015;4:104.

Beckles Z, Glover S, Ashe J, Stockton S, Boynton J, Lai R, et al. Searching CINAHL did not add value to clinical questions posed in NICE guidelines. J Clin Epidemiol. 2013;66(9):1051–7.

Cooper C, Rogers M, Bethel A, Briscoe S, Lowe J. A mapping review of the literature on UK-focused health and social care databases. Health Inf Libr J. 2015;32(1):5–22.

Younger P, Boddy K. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG. Health Inf Libr J. 2009;26(2):126–35.

Lam MT, McDiarmid M. Increasing number of databases searched in systematic reviews and meta-analyses between 1994 and 2014. Journal of the Medical Library Association : JMLA. 2016;104(4):284–9.

Bethel A, editor Search summary tables for systematic reviews: results and findings. HLC Conference 2017a.

Aagaard T, Lund H, Juhl C. Optimizing literature search in systematic reviews - are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Med Res Methodol. 2016;16(1):161.

Adams CE, Frederick K. An investigation of the adequacy of MEDLINE searches for randomized controlled trials (RCTs) of the effects of mental health care. Psychol Med. 1994;24(3):741–8.

Kelly L, St Pierre-Hansen N. So many databases, such little clarity: searching the literature for the topic aboriginal. Canadian family physician Medecin de famille canadien. 2008;54(11):1572–3.

Lawrence DW. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion? Injury Prevention. 2008;14(6):401–4.

Lemeshow AR, Blum RE, Berlin JA, Stoto MA, Colditz GA. Searching one or two databases was insufficient for meta-analysis of observational studies. J Clin Epidemiol. 2005;58(9):867–73.

Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003;56(10):943–55.

Stevinson C, Lawlor DA. Searching multiple databases for systematic reviews: added value or diminishing returns? Complementary Therapies in Medicine. 2004;12(4):228–32.

Suarez-Almazor ME, Belseck E, Homik J, Dorgan M, Ramos-Remus C. Identifying clinical trials in the medical literature with electronic databases: MEDLINE alone is not enough. Control Clin Trials. 2000;21(5):476–87.

Taylor B, Wylie E, Dempster M, Donnelly M. Systematically retrieving research: a case study evaluating seven databases. Res Soc Work Pract. 2007;17(6):697–706.

Beyer FR, Wright K. Can we prioritise which databases to search? A case study using a systematic review of frozen shoulder management. Health Info Libr J. 2013;30(1):49–58.

Duffy S, de Kock S, Misso K, Noake C, Ross J, Stirk L. Supplementary searches of PubMed to improve currency of MEDLINE and MEDLINE in-process searches via Ovid. Journal of the Medical Library Association : JMLA. 2016;104(4):309–12.

Katchamart W, Faulkner A, Feldman B, Tomlinson G, Bombardier C. PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews. J Clin Epidemiol. 2011;64(7):805–7.

Cooper C, Lovell R, Husk K, Booth A, Garside R. Supplementary search methods were more effective and offered better value than bibliographic database searching: a case study from public health and environmental enhancement (in Press). Research Synthesis Methods. 2017;

Cooper C, Booth, A., Britten, N., Garside, R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: A methodological review. (In Press). BMC Systematic Reviews. 2017.

Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ (Clinical research ed). 2005;331(7524):1064–5.

Article   PubMed Central   Google Scholar  

Hinde S, Spackman E. Bidirectional citation searching to completion: an exploration of literature searching methods. PharmacoEconomics. 2015;33(1):5–11.

Levay P, Ainsworth N, Kettle R, Morgan A. Identifying evidence for public health guidance: a comparison of citation searching with web of science and Google scholar. Res Synth Methods. 2016;7(1):34–45.

McManus RJ, Wilson S, Delaney BC, Fitzmaurice DA, Hyde CJ, Tobias RS, et al. Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. BMJ (Clinical research ed). 1998;317(7172):1562–3.

Westphal A, Kriston L, Holzel LP, Harter M, von Wolff A. Efficiency and contribution of strategies for finding randomized controlled trials: a case study from a systematic review on therapeutic interventions of chronic depression. Journal of public health research. 2014;3(2):177.

Matthews EJ, Edwards AG, Barker J, Bloor M, Covey J, Hood K, et al. Efficient literature searching in diffuse topics: lessons from a systematic review of research on communicating risk to patients in primary care. Health Libr Rev. 1999;16(2):112–20.

Bethel A. Endnote Training (YouTube Videos) 2017b [Available from: http://medicine.exeter.ac.uk/esmi/workstreams/informationscience/is_resources,_guidance_&_advice/ .

Bramer WM, Giustini D, de Jonge GB, Holland L, Bekhuis T. De-duplication of database search results for systematic reviews in EndNote. Journal of the Medical Library Association : JMLA. 2016;104(3):240–3.

Bramer WM, Milic J, Mast F. Reviewing retrieved references for inclusion in systematic reviews using EndNote. Journal of the Medical Library Association : JMLA. 2017;105(1):84–7.

Gall C, Brahmi FA. Retrieval comparison of EndNote to search MEDLINE (Ovid and PubMed) versus searching them directly. Medical reference services quarterly. 2004;23(3):25–32.

Ahmed KK, Al Dhubaib BE. Zotero: a bibliographic assistant to researcher. J Pharmacol Pharmacother. 2011;2(4):303–5.

Coar JT, Sewell JP. Zotero: harnessing the power of a personal bibliographic manager. Nurse Educ. 2010;35(5):205–7.

Moher D, Liberati A, Tetzlaff J, Altman DG, The PG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

Sampson M, McGowan J, Tetzlaff J, Cogo E, Moher D. No consensus exists on search reporting methods for systematic reviews. J Clin Epidemiol. 2008;61(8):748–54.

Toews LC. Compliance of systematic reviews in veterinary journals with preferred reporting items for systematic reviews and meta-analysis (PRISMA) literature search reporting guidelines. Journal of the Medical Library Association : JMLA. 2017;105(3):233–9.

Booth A. "brimful of STARLITE": toward standards for reporting literature searches. Journal of the Medical Library Association : JMLA. 2006;94(4):421–9. e205

Faggion CM Jr, Wu YC, Tu YK, Wasiak J. Quality of search strategies reported in systematic reviews published in stereotactic radiosurgery. Br J Radiol. 2016;89(1062):20150878.

Mullins MM, DeLuca JB, Crepaz N, Lyles CM. Reporting quality of search methods in systematic reviews of HIV behavioral interventions (2000–2010): are the searches clearly explained, systematic and reproducible? Research Synthesis Methods. 2014;5(2):116–30.

Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE. Analysis of the reporting of search strategies in Cochrane systematic reviews. Journal of the Medical Library Association : JMLA. 2009;97(1):21–9.

Bigna JJ, Um LN, Nansseu JR. A comparison of quality of abstracts of systematic reviews including meta-analysis of randomized controlled trials in high-impact general medicine journals before and after the publication of PRISMA extension for abstracts: a systematic review and meta-analysis. Syst Rev. 2016;5(1):174.

Akhigbe T, Zolnourian A, Bulters D. Compliance of systematic reviews articles in brain arteriovenous malformation with PRISMA statement guidelines: review of literature. Journal of clinical neuroscience : official journal of the Neurosurgical Society of Australasia. 2017;39:45–8.

Tao KM, Li XQ, Zhou QH, Moher D, Ling CQ, Yu WF. From QUOROM to PRISMA: a survey of high-impact medical journals' instructions to authors and a review of systematic reviews in anesthesia literature. PLoS One. 2011;6(11):e27611.

Wasiak J, Tyack Z, Ware R. Goodwin N. Jr. Poor methodological quality and reporting standards of systematic reviews in burn care management. International wound journal: Faggion CM; 2016.

Tam WW, Lo KK, Khalechelvam P. Endorsement of PRISMA statement and quality of systematic reviews and meta-analyses published in nursing journals: a cross-sectional study. BMJ Open. 2017;7(2):e013905.

Rader T, Mann M, Stansfield C, Cooper C, Sampson M. Methods for documenting systematic review searches: a discussion of common issues. Res Synth Methods. 2014;5(2):98–115.

Atkinson KM, Koenka AC, Sanchez CE, Moshontz H, Cooper H. Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate. Res Synth Methods. 2015;6(1):87–95.

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.

Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol. 2009;62(9):944–52.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Clinical research ed). 2017;358.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Relevo R, Balshem H. Finding evidence for comparing medical interventions: AHRQ and the effective health care program. J Clin Epidemiol. 2011;64(11):1168–77.

Medicine Io. Standards for Systematic Reviews 2011 [Available from: http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews/Standards.aspx .

CADTH: Resources 2018.

Download references

Acknowledgements

CC acknowledges the supervision offered by Professor Chris Hyde.

This publication forms a part of CC’s PhD. CC’s PhD was funded through the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme (Project Number 16/54/11). The open access fee for this publication was paid for by Exeter Medical School.

RG and NB were partially supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula.

The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Author information

Authors and Affiliations

Institute of Health Research, University of Exeter Medical School, Exeter, UK

Chris Cooper & Jo Varley-Campbell

HEDS, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Andrew Booth

Nicky Britten

European Centre for Environment and Human Health, University of Exeter Medical School, Truro, UK

Ruth Garside


Contributions

CC conceived the idea for this study and wrote the first draft of the manuscript. CC discussed this publication in PhD supervision with AB and separately with JVC. CC revised the publication with input and comments from AB, JVC, RG and NB. All authors revised the manuscript prior to submission. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chris Cooper .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1: Appendix tables and PubMed search strategy. Key studies used for pearl growing per key stage, working data extraction tables and the PubMed search strategy. (DOCX 30 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Cooper, C., Booth, A., Varley-Campbell, J. et al. Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Med Res Methodol 18, 85 (2018). https://doi.org/10.1186/s12874-018-0545-3


Received: 20 September 2017

Accepted: 06 August 2018

Published: 14 August 2018

DOI: https://doi.org/10.1186/s12874-018-0545-3


Keywords

  • Literature Search Process
  • Citation Chasing
  • Tacit Models
  • Unique Guidance
  • Information Specialists


  • Open access
  • Published: 17 February 2024

The complexity of leadership in coproduction practices: a guiding framework based on a systematic literature review

  • Sofia Kjellström 1 , 2 ,
  • Sophie Sarre 2 &
  • Daniel Masterson 1  

BMC Health Services Research volume 24, Article number: 219 (2024)


Background

As coproduction in public services increases, understanding the role of leadership in this context is essential to the tasks of establishing relational partnerships and addressing power differentials among groups. The aims of this review are to explore models of coproduction leadership and the processes involved in leading coproduction as well as, based on that exploration, to develop a guiding framework for coproduction practices.

Methods

A systematic review that synthesizes the evidence reported by 73 papers related to coproduction of health and welfare.

Results

Despite the fact that models of coleadership and collective leadership exhibit a better fit with the relational character of coproduction, the majority of the articles included in this review employed a leader-centric underlying theory. The practice of coproduction leadership is a complex activity pertaining to interactions among people, encompassing nine essential practices: initiating, power-sharing, training, supporting, establishing trust, communicating, networking, orchestration, and implementation.

Conclusions

This paper proposes a novel framework for coproduction leadership practices based on a systematic review of the literature and a set of reflective questions. This framework aims to help coproduction leaders and participants understand the complexity, diversity, and flexibility of coproduction leadership and to challenge and enhance their capacity to collaborate effectively.


Introduction

For more than 40 years, scholars and practitioners have sought to identify and understand various aspects of coproduction with the goal of improving services as well as equalizing (or at least reorganizing) power relations in service design and delivery [ 1 ]. More recently, such discussion has focused on the roles of leaders and leadership in coproduction, seeking to describe and assess the various types of leaders and leadership that might maximize the goals of coproduction processes and outcomes. Leaders can act to make coproduction, in all its forms, happen [ 2 , 3 ]. Leaders can enhance coproduction by providing resources, establishing inviting structures, and prioritizing the involvement of various stakeholders. Conversely, they can inhibit coproduction by perpetuating conservative administrative cultures, failing to provide training, or being reluctant to share power [ 3 ]. Coproduction relies on leadership at all levels, ranging from senior managers to local “champions” and including the citizens and third-sector organizations that participate in coproduction activities and practices.

This review presents a synthesis of research on the leadership of coproduction, an area repeatedly noted for its scarcity [ 3 , 4 , 5 , 6 ]. The review argues that coproduction leadership needs to be more deliberately (in)formed by collective leadership models. It also illustrates the multiplicity and complexity of coproduction leadership activities by outlining the practices in which leaders must engage to ensure success. On this basis, the review informs a framework of guiding insights on which commissioners, evaluators, managers and leaders of coproduction can reflect, together with suggestions and directions for future research.

Coproduction

Coproduction is a broad concept that is associated with different meanings across a range of contexts [ 1 ]. Many definitions and uses of the terms coproduction and codesign have been identified [ 7 ]. Throughout this paper, although we acknowledge the distinctions associated with the concepts and origins of the notion of codesign, we use the broad term coproduction to refer to some form of collaboration or partnership between service providers and service users or citizens. For this review, we follow the definitions provided by Osborne and Strokosch [ 8 ], who identified ‘consumer coproduction’ as an inevitable component of value creation in interactions among service providers; ‘participatory coproduction’, in which participation is deliberative and occurs at the strategic level of service design and planning; and ‘enhanced coproduction’, which represents a potential mechanism for transforming organizational processes and boundaries.

Power is inevitably central to coproduction. Schlappa and Ymani claimed that the coproduction process is “inherently negotiated, emergent and reliant on a range of actors who may have both common and contrasting motivations, and are able to exercise power, which in turn is moderated by the context in which these relations occur” [ 6 ]. This sensitivity to motivation, context and power is helpful for our understanding of leadership in coproduction.

Leadership models

Most conceptualizations of leadership have been based on the claim that leadership is a kind of inherent characteristic exhibited by human beings, such that leaders are depicted as heroes with unique traits, styles or behaviours [ 9 ]. However, research on leadership in coproduction is important in relation to an emerging body of research that focuses on the notion of “leadership in the plural” [ 10 ] or “collective leadership” [ 11 , 12 ]. These phrases act as umbrella terms that refer to overlapping concepts such as shared, collaborative, distributed, pooled and relational leadership. A core feature of these models is that leadership is not (only) viewed as a property of individuals and their behaviours but rather as a collective phenomenon that is distributed or shared among different people [ 10 ]. A distinction can be made between two types of collective leadership. Leadership can be shared in interpersonal relationships; for example, it can be pooled among duos or trios at the top of an organization, or shared leadership can be exercised within teams working on a project. This notion is based upon the assumption that people have different skills that complement each other. The second kind of collective leadership is a more radical version of this notion, according to which leadership emerges as a result of direction, alignment, and commitment within a group [ 11 ] or can be observed to reside within the system, for example, in the form of distributed leadership across interorganizational and intraorganizational boundaries and networks [ 10 , 12 ]. In cross-sectoral collaboration, leadership is distributed across time and space, which requires structures to guide how leadership is shared and organized. It has been argued that collective leadership is best suited to the analysis of coproduction practices [ 4 , 6 , 13 , 14 ].

It is important to note that distinctions have been made between management (planning, monitoring and controlling) and leadership (creating a vision, inspiring and changing) based on behaviours [ 15 ]. However, many authors have not made such a distinction, and the terms have frequently been used interchangeably. We therefore adopt the practice employed in the papers included in this review and use the terms leadership and leader as catch-all terms; we only use the words management or manager when the papers refer to job titles or ‘public management’.

Leadership models can be regarded as resembling a colour palette that offers a variety of choices, and similar to colours, some models fit a situation better than others. This paper investigates the use and fit of various leadership models for coproduction.

Leadership of coproduction research

Extant research on the leadership of coproduction has been described as “sparse” [ 4 ], a “neglected area” [ 5 ] and “overlooked” [ 3 , 6 ]. Despite a recent resurgence of interest in the potential of coproduction as a means of maintaining and improving the quality of health and social care, significant questions regarding how coproduction can and should be led in this context remain unanswered. Most reviews of coproduction have not addressed this issue [ 2 , 16 , 17 , 18 ]. Clarke et al.’s (2017) review identified the lack of managerial authority and leadership as a key barrier to the implementation of coproduced interventions but did not explore the implications of this finding for future practice. The review conducted by Bussu and Galanti (2018) stands alone in its focus on leadership, although the empirical cases explored by those authors were restricted to the context of local government in the UK. Recent empirical case studies that have explored leadership [ 13 , 14 , 15 , 19 ] have focused on public managers [ 3 , 5 , 14 ] or on identifying the consequences of different models of leadership. This review contributes to the literature by providing knowledge regarding how to make deliberate choices pertaining to coproduction leadership in terms of how it is conceptualized and shared and the activities that are necessary for leading coproduction.

Coproduction leadership practices

The leadership of coproduction poses a number of challenges. A proposed aim of coproduction is to drive change within services and in traditional state-citizen relationships by establishing equal and reciprocal relationships among professionals, the people using services, and their families and neighbours. This task requires a restructuring of health and welfare services to equalize power between providers and other stakeholders with an interest in the design and provision of these services. However, it has been suggested that coproduction runs the risk of reproducing existing inequalities in power rather than mitigating them since coproduction is inevitably saturated with unequal power relations that must be acknowledged but cannot be managed away [ 20 ].

In this paper, we present the findings of a systematic review of the literature on leadership in coproduction. The purpose of this review is to explore models of coproduction leadership and the practices involved in leading coproduction in the context of health and social care sectors [ 7 ]. The results are synthesized to develop a framework for actors who seek to commission, design, lead or evaluate coproduction processes. This framework emphasizes the need to make more deliberate choices regarding the underlying conceptualization of leadership and the ways in which such a conceptualization is related to the activities necessary for leading coproduction. Based on the framework, we also propose specific guiding questions for individuals involved in coproduction in practice and make suggestions for future research.

Methods

This systematic literature review is based on a study protocol on coproduction research in the context of health and social care sectors [ 21 ], and data were obtained from a published scoping review, where the full search strategy is provided [ 7 ]. The scoping review set out to identify ‘what is out there’ and to explore the definitions of the concepts of coproduction and codesign. In brief, the following search terms for the relevant concept (co-produc* OR coproduc* OR co-design* OR codesign*) and context (health OR social OR “public service*” OR “public sector”) were used to query the following databases: CINAHL with Full Text (EBSCOHost), Cochrane Central Register of Controlled Trials (Wiley), MEDLINE (EBSCOhost), PsycINFO (ProQuest), PubMed (legacy), and Scopus (Elsevier). The present paper focuses on leadership. All titles and abstracts included in the scoping review (n = 979) were obtained and searched for leadership concepts (leader* OR manage*) (n = 415). These materials were reviewed independently by SK and SS using the following inclusion criterion: conceptual, empirical and reflection papers that included references to the management and/or leadership of coproduction. Study protocols were excluded because we wanted to capture lessons drawn from implementation, and conference papers were excluded because they lacked sufficient detail. Articles focusing on individual-level coproduction (i.e., cases in which an individual client or patient was the focus of coproduction) were excluded, as we were interested in the leadership processes involved in collective coproduction. Conflicts were resolved through discussion and further consideration of disputed papers. This process led to the inclusion of 73 articles (Fig. 1 – PRISMA flow chart).

Figure 1. PRISMA flow chart
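To make the screening step concrete, the sketch below shows how the leadership filter described above (leader* OR manage*) might be applied to the titles and abstracts retrieved by the scoping review. It is an illustration only, written in Python: the record structure, field names and sample entries are assumptions for demonstration, not the authors' actual tooling or data.

```python
import re

# The review's wildcard screening terms, "leader*" and "manage*", expressed as a
# case-insensitive regular expression: matches leader, leaders, leadership,
# manage, manager, management, and so on.
LEADERSHIP_TERMS = re.compile(r"\b(leader\w*|manage\w*)\b", re.IGNORECASE)

def mentions_leadership(record: dict) -> bool:
    """Return True if a record's title or abstract contains a leadership/management term."""
    text = f"{record.get('title', '')} {record.get('abstract', '')}"
    return bool(LEADERSHIP_TERMS.search(text))

# Hypothetical records standing in for the 979 titles and abstracts of the scoping review.
records = [
    {"title": "Co-design of mental health services",
     "abstract": "Managers and service users shared leadership of the project."},
    {"title": "Coproduction of a community garden",
     "abstract": "Volunteers planted vegetables alongside local residents."},
]

candidates = [r for r in records if mentions_leadership(r)]
print(f"{len(candidates)} of {len(records)} records retained for full-text screening")
```

In the review itself, this kind of keyword filter only narrowed the pool from 979 to 415 records; the decisive inclusion and exclusion judgements were then made independently by two reviewers (SK and SS) against the stated criteria.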

The method used for this research was a systematic review with qualitative synthesis. The strength of this approach lies in its ability to complement research evidence with user and practitioner considerations [ 22 ]. In the process of examining the full texts of the papers, two researchers (SK and SS) extracted background data independently. To promote coproduction, four stakeholders were strategically selected through the personal networks of one of the authors, SK. These stakeholders had diverse expertise in the leadership of coproduction: one was a leadership developer and family member of an individual with 24/7 care needs, another was a physician, the third worked in peer support and had personal experience with mental health services, and the fourth was a health care leader. Each stakeholder read four key articles, chosen for the diversity of leadership ideas they contained and the depth of their explicit discussion of leadership, and identified what they perceived to be the key implications for our research aim. This stakeholder analysis did not change or refine any themes; rather, it confirmed the relevance of the initially identified themes, underscoring the robustness of our findings.

A qualitative synthesis unites the findings of individual studies in a different arrangement, thereby constructing new knowledge that is not apparent from the individual studies in isolation [ 23 ]. This fact is particularly evident in this review, since leadership was seldom the main focus of the included articles. Accordingly, we employed multiple pieces of information to construct a pattern. The process of synthesis started at a very broad level with the goal of understanding which aspects of leadership were addressed in the literature. This process then separated into two strands. One such strand focused on interpreting the data from the perspective of current leadership models, while the other focused on interpreting leadership practices – i.e., the activities and relationships that are part of the process of leading coproduction. We searched for themes both within and across individual articles, and our goal was interpretative rather than purely aggregative. This process resulted in three themes pertaining to coproduction leadership models and nine coproduction leadership practices. We present these findings together in the form of a framework because consideration of both leadership models and practices prompts better and more conscious choices, which can improve the quality of coproduction. Persons one and two from the stakeholder group also provided feedback on a draft of this paper, and their insights were integrated into this research.

Sample description

We included 73 papers (Additional file 1 ) dating from 1994 to 2019 (the year in which the initial search was performed). Most of these papers were empirical ( n  = 54), and more than half of them were case studies ( n  = 30). Fifteen articles were conceptual papers, and four were literature reviews. The setting or focus of the papers was predominantly on services ( n  = 66), while the remainder of the papers were on research ( n  = 4) or policy ( n  = 3). The papers drew on evidence collected from 13 countries, and the most common national setting was the UK ( n  = 29). Nine cross-national papers were also included. Issues related to leadership were rarely the focus of the papers.

Results: A coproduction leadership framework

The synthesis consists of three parts (roles, models and practices), which are combined to develop an overarching and integrative framework for essential issues pertaining to coproduction leadership [ 4 , 24 ].

People and roles

The way in which the leadership of coproduction has been conceptualized in the literature suggests that a range of actors are involved in the coproduction of health and wellbeing and that these actors can take on different leadership roles and functions. Service users, community members and community representatives can play a vital role in the task of deliberatively coproducing or even transforming services, as can third-sector organizations, external experts, politicians, mid-level facilitators, managers, and senior leaders.

It has been argued that it is important to involve leaders from diverse backgrounds who have personal experiential knowledge of public involvement to encourage involvement from a broader population [ 25 , 26 , 27 ]. Service users and community members play leadership roles in coproduction initiatives related to health or well-being. These roles involve shared decision-making and accountability at various levels, ranging from the personal to the systemic.

Senior leaders include formal representatives of organizations (executives, politicians, or formal managers) and formal or respected leaders of communities. They play an important role throughout the process. During the initiation stage, and when implementing and sustaining the outcomes of coproduction, they are crucial providers of resources such as time, money, materials, and access to networks. In the interim stages, their commitment to coproduction, sponsorship, and engagement is vital.

Champions and ambassadors use their expertise and passion to drive coproduction efforts. In particular, "insider" champions can establish trust among participants and help service providers understand the importance of coproduction. These champions advocate for coproduction and actively support initiatives [ 28 , 29 , 30 , 31 ]. Ambassadors are individuals who have expertise and volunteer their time to train others or work with clients in coproduced services. They play a crucial role in the tasks of supporting and promoting coproduction [ 28 , 32 , 33 ].

Project leaders and facilitators are individuals who are responsible for guiding and supporting coproduction projects, thereby ensuring their smooth operation and collaborative nature. Project leaders are responsible for overall project management, including the setting of goals, objectives, and timelines. They play a pivotal role in ensuring that projects remain on track, and they facilitate accessible and transparent dialogue among stakeholders and ensure equal representation [ 34 , 35 ]. Facilitators focus on supporting the group involved in coproduction, maintaining respectful interactions, empowering service users and carers, and addressing any tensions that may arise during the collaborative process [ 36 , 37 ].

In summary, senior leaders sponsor and support coproduction. Champions and ambassadors are individuals who advocate for and support coproduction initiatives, while project leaders and facilitators are responsible for managing and guiding coproduction projects themselves, thereby ensuring effective collaboration among stakeholders. All of these roles can be played by people drawn from various backgrounds, including senior staff, health care professionals, experts in coproduction, researchers, citizens, or volunteers.

Three models of leadership in coproduction

These actors play different leadership roles, and leadership can be exercised by individuals or groups. Three leadership models have been proposed: leadership as enacted by individual leaders, coleadership and collective leadership.

Leadership by individual leaders

A leader-centric view has been the dominant interpretation of leadership in the field of coproduction. Many references were made to “senior leaders”. This term was used to describe formal representatives of organizations or services (senior managers, executives), formally appointed community leaders (policy-makers, local government leaders), or respected leaders of communities. Senior support was described as an important success factor in coproduction [ 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 ]. Other leadership roles included project leaders, facilitators, ambassadors, and champions – as described in the previous section.

Some papers referred to traits and characteristics exhibited by leaders that facilitate coproduction. These factors included innovativeness, personability, action orientation [ 46 ], courage [ 47 ], passion [ 32 , 46 ], and empathy [ 25 , 46 , 48 ]. “Strong leadership” was often mentioned, albeit without elaboration [ 49 , 50 , 51 , 52 , 53 , 54 , 55 ]. By implication, “strong leadership” appeared to include providing clear direction and guidance, having a clear vision [ 53 ], holding onto a vision [ 34 ], and keeping the vision alive for the team [ 43 ].

Other researchers noted a more collaborative and democratic leadership style that is characterized by listening, transparency, deliberation, and nurturing coproductive behaviours [ 27 , 30 , 48 ]. Senior leaders could use a “top-down” approach to promote user involvement. Alternatively, they could “learn to manage horizontally not top down; embrace ground up initiatives; [and] aim to empower partners” [ 32 , 45 , 51 ] and be “open to changes that would disturb traditional relationships and power disparities between service users and providers” [ 41 ]. Respondents to a survey of participants in a peer-led support network favoured a traditional directive model of leadership alongside a more facilitative and enabling style [ 56 ]. However, they found it challenging to transition to a more distributed and collective leadership approach.

Co-leadership

The terms “co-lead”, “co-leadership” and “dual leadership” refer to situations in which a formal leadership role is allocated to more than one person, in which context the relevant people may represent different institutions or different groups, e.g., different professional groups, researchers and service users/citizens, or teachers and students [ 28 , 31 , 40 , 41 , 57 , 58 ]. Coleads were defined as “individuals who led and made joint decisions” [ 59 ]. Some papers explored the leadership role of service users or community members in the coproduction of research related to health or wellbeing [ 35 , 60 , 61 ]. In these studies, areas of research were proposed by patients/community members, who then collaborated with academic researchers, thereby playing an equal or leading role. Coleadership was reported to result in shared learning.

Collective leadership

Few discernible differences among “ shared”, “distributed” and “collective” leadership were found in the papers included in this review. The approaches examined in this context were characterized by distributed roles and responsibilities in which different individuals’ skills and expertise were identified as best suited to the task at hand. Shared leadership depends on willingness on the part of leaders (implicitly non-community leaders) to be challenged and directed by community members rather than rigidly maintaining their previous conceptions of the issues and the appropriate means of addressing them [ 36 ].

Ward, De Brún, Beirne, Conway, Cunningham, English, Fitzsimons, Furlong, Kane and Kelly [ 62 ] referred to collective leadership as an emergent and dynamic team phenomenon. Other authors argued for a more structured approach to shared leadership [ 36 , 41 ] or distributed leadership [ 28 , 42 , 56 , 59 , 63 ]. Such an approach could involve allocating specific roles to service users, engaging them in a formal structure and/or enabling them to set an agenda [ 41 ], specifying shared roles and responsibilities [ 36 ], and/or providing dedicated support to lay “champions” in research studies [ 28 ]. Various benefits were attributed to collective leadership, such as empowering people to speak up [ 36 , 51 ] and feel engaged.

Nine practices associated with leading coproduction

We identified nine processes that encompass wide-ranging activities and interactions between individuals and groups with regard to leading the coproduction of health and wellbeing. As Farr noted, “Coproduction and codesign […] involves facilitating, managing and co-ordinating a complex set of psychological, social, cultural and institutional interactions” [ 64 ]. In some cases, these processes naturally align with certain actors—for instance, senior leaders play key roles in the tasks of initiating coproduction and implementing and sustaining its results—but other processes (championing coproduction, establishing trusting relationships, and ensuring good communication) are applicable to any and all participants in the coproduction process. Similarly, some of these practices occur at particular timepoints in a coproduction arc (namely, during the stages of initiation or implementation), while others can occur at any or all timepoints (i.e., during the assimilation stage or beyond). Deliberately considering the most suitable leadership model with regard to the aims and context of an initiative is useful at the start, but reflecting on the operation and appropriateness of the model is always salient.

Initiating coproduction

The initiation of coproduction entails recognizing the need for coproduction, dedicating resources, inviting and establishing relevant multi-stakeholder coproduction networks, and coproducing a vision and goals.

It has been argued that senior leaders act as gatekeepers for coproduction because they must recognize the need for it [ 45 ]. Senior leaders play a role in the task of determining the extent to which communities are given the opportunity to influence service design and integration [ 38 , 51 ]. Coproduction requires resources (principally time and money but also networks), which can be used to take advantage of other resources such as skills [ 29 , 31 , 34 , 40 ]. Senior leaders often control or provide access to such resources, which means that they are best positioned to initiate coproduction initiatives [ 41 , 65 ]. However, the findings of a cross-national study on the coproduction of policy showed that, in practice, senior leaders’ control over resources meant that they tended to define the means, methods and forms of participation [ 65 ].

In the task of establishing a conducive environment for coproduction, it is important to pay attention to which actors (organizations or individuals) are participating in the process [ 33 , 42 , 64 , 66 ] and to factors that may delimit those participants or their involvement [ 36 , 42 , 67 ]. Several papers emphasized the need to ensure that all stakeholders are involved from the outset [ 37 , 38 , 41 , 48 , 51 ]. In the initiation stages, a shared vision should be created [ 36 , 61 , 68 ], goals should be coproduced, and responsibilities should be clearly allocated [ 65 ]. Role clarity, ability, and motivation have been identified as determinants of coproductive behaviour, and leaders must implement arrangements to achieve these goals for coproducers [ 69 ].

Power sharing

It has been argued that coproduction leadership must attend to issues pertaining to power redistribution [ 60 , 61 , 63 , 64 ] and uphold the ideology of coproduction by promoting the values of democracy and transparency [ 30 , 32 , 70 , 71 , 72 , 73 , 74 ]. This process can occur at different levels.

At the macro system level, several cultural shifts have been implicated in the redistribution of power – a shift in current professional and stakeholder identities; more fluid, flattened and consensus-based ways of working; and a willingness to accommodate ‘messy’ issues [ 75 ]. The last of these was highlighted by Hopkins, Foster and Nikitin [ 29 , p 192], who suggested that coproduction requires service providers to “sit more easily with the unknown, to be comfortable in not having all the answers.” Similarly, “The challenge is that to be transformative, power must be shared with health service users. To do this entails building new relationships and fostering a new culture in health-care institutions that is supportive of participatory approaches” [ 42 , p 379].

At the meso level, several practices could be used to share power. Greenhalgh, Jackson, Shaw and Janamian [ 30 ] identified the importance of equitable decision-making practices and “evenly distributed power constellations.” This goal can be achieved, for instance, by ensuring that service users represent a majority on the project management committee or in codesign events with the goal of challenging dominant professional structures and discourses [ 37 ]. Other scholars called for clear roles and responsibilities [ 38 , 59 , 65 ]. Mulvale, Moll, Miatello, Robert, Larkin, Palmer, Powell, Gable and Girling [ 36 ] recommended the establishment of shared roles and responsibilities, the creation of a representative expert panel to resolve stalemates, and possibly the implementation of formal agreements regarding data and reporting. Importantly, however, Greenhalgh, Jackson, Shaw and Janamian [ 30 ] noted that governance structures and processes alone do not automatically overcome the subtle and inconspicuous uses of power. Farr [ 64 ] recommended the constant practice of critical reflection and dialogue and posed several questions for participants to consider: who is involved, what the interactions are like, how coproduction efforts are implemented within and across structures, and what changes are made.

Although sharing power has been described as an essential component in coproduction, the involvement of stakeholders does not necessarily entail empowerment [ 47 ], and case studies have demonstrated that service improvement initiatives that involve citizens or service users can be instrumental and effective with regard to improving services without enhancing or sharing power or political consciousness if stakeholders are invited but power is not shared [ 32 ]. Farr [ 64 ] noted that rather than coproduction being inherently emancipatory, coproduction and codesign processes can have either dominating or emancipatory effects [ 33 ], and the exclusion of vulnerable groups from coproduction has the potential to reinforce existing inequities [ 75 ].

Training and development for emerging leadership

The importance of appropriate training and mutual learning was noted in several papers [ 36 , 42 , 48 , 63 , 69 , 76 , 77 ]. Implicitly, training for professionals was framed in terms of training in the process of sharing power with service users or facilitating collaboration, whereas training for service users was framed as capacity-building in terms of collaboration and/or leadership. In one case study focusing on coproduced research, participants rejected the notion of “training” from academic researchers with the aim of avoiding suggesting that a certain level of “expertise” needed to be transferred [ 60 ].

Playing a leadership role can be empowering [ 51 , 71 ], but for some individuals, it can be overwhelming [ 71 ]. Leading coproduction requires practice and the development of skills and capacities [ 26 , 48 ]. In some initiatives, lay partners were initially involved in limited roles and gradually took on more responsible leadership tasks over time [ 28 , 42 , 78 ]. In addition, community members’ level of involvement was flexible—they could be participants or take on additional roles as volunteers, paid staff members or directors of organizations. This flexibility offered participants the opportunity to "begin sharing, as opposed to shouldering, the burden of involvement” [ 71 ].

The provision of support

Support is necessary throughout the coproduction process from its outset to the stages of implementation and sustainment [ 25 , 34 , 68 ]. Key dimensions of support include facilitating, advocating for, and championing coproduction. Project management is instrumental to the smooth operation and facilitation of coproduction [ 34 , 35 , 37 , 44 ]. Several facilitation activities are conducted by project leaders and facilitators [ 41 , 42 , 59 , 61 , 78 ]. These activities include holding onto a vision and keeping it alive for the team, ensuring that the project remains on track, and helping maintain momentum. In one codesign case study, facilitators helped people focus on quick wins with the goal of maintaining motivation and engagement; they "needed to support movement from inaction to action, by sifting through group ideas to fix a plan" [ 34 ]. Although these authors acknowledged that this approach may have limited coproduction, they argued that such initiatives would not be sustainable if they were perceived to be “unfeasible.”

Another key function entails advocating for and championing coproduction initiatives to ensure that the process remains ongoing [ 25 , 28 , 29 , 30 , 31 , 32 , 37 , 41 , 74 , 79 ]. Senior leaders play an important role in the task of championing coproduction, and their support has often been described as an important success factor [ 34 , 38 , 39 , 43 , 80 ]. However, effective champions could equally include health care professionals [ 37 ], experts in coproduction [ 51 ], researchers [ 35 , 60 , 61 ], volunteers [ 51 ] or other citizens [ 41 , 61 ]. Champions with lived experience can gain the confidence of their peers and help create understanding among service providers [ 28 , 36 ].

Establishing trusting relationships

Coproduction is essentially relational and requires concerted efforts to establish trusting relationships and a sense of commitment. The importance of trust among stakeholders in coproduction has been noted in several papers [ 28 , 30 , 36 , 37 , 38 , 46 , 48 , 64 , 74 , 81 , 82 ]. In the field of health research, it is difficult to secure funding for the process of establishing relationships and working in the context of partnerships during the early stages of development [ 25 ]. It can therefore be helpful to base recruitment for coproduction initiatives on pre-existing trusting relationships [ 36 ]. If such pre-existing trusting relationships do not exist, policy-makers and senior leaders play a role in the creation of frameworks that can facilitate the development of trust both among organizations and between organizations and citizens, such as political and bureaucratic commitment on the part of regional and local governments and the engagement of actors who play a “boundary-spanning” role in the relationships between service providers, non-government organizations and communities [ 38 ]. Trust is established based on clear responsibilities [ 38 ] and adherence to the principles of engagement in coproduction. In addition to these frameworks, individual leaders must develop trust through interactions with coproducers, using collaborative skills such as those pertaining to communication and listening [ 48 ]. In one case study, through the frank sharing of the organizational, financial, and governance challenges and opportunities faced by stakeholders, people reached a growing understanding and appreciation of each other’s positions, which engendered trust [ 30 ]. Mulvale, Moll, Miatello, Robert, Larkin, Palmer, Powell, Gable [ 36 ] highlighted the importance of understanding and responding to participants’ histories, contexts, and cultural differences.

Commitment can be viewed as more important than resources [ 59 ]. The commitment to and engagement in coproduction exhibited by an organization’s senior leaders demonstrate organizational commitment and lend credibility to coproduction initiatives [ 25 , 34 , 38 , 41 , 47 , 59 , 80 , 83 ]. On some occasions, coproduction initiatives are reported to senior leaders, while on other occasions, the senior leaders were part of the coproduction team. Senior leaders who adopt a more hands-on approach serve as role models [ 25 ], advocating for patient engagement and engendering commitment on the part of staff and patients [ 28 ]. In public health initiatives, buy-in from community leaders confers legitimacy on innovations, helps ensure community trust [ 61 , 78 ], increases the engagement of community members [ 78 ] and is key to a project’s success [ 83 ].

Communication

Communication is a key activity in coproduction, and leaders must establish an environment that is conducive to “epistemological tolerance” [ 47 ], such that different perspectives are valued and appreciated. Such environments facilitate dialogue among partners [ 28 , 30 , 35 , 51 ] and allow critical voices to be heard [ 42 ] . Open dialogue among stakeholders is a starting point for the task of identifying the sources of assumptions and stereotypes, which is itself a prerequisite for change in attitudes and practice [ 28 ]. Project leaders must also facilitate accessible and transparent dialogue and ensure the equal representation of all stakeholders, including those who are less able to communicate verbally [ 57 , 71 ]. Professional leaders are responsible for critically reviewing their professional norms, organizational/institutional processes and past and present policies and practices [ 55 , 75 ].

Dealing with multiple stakeholders, which is inevitably required in coproduction, requires addressing multiple perspectives in an attempt to bring them together. This task frequently involves a degree of conflict and peace negotiation [ 30 , 34 , 41 , 48 , 61 , 64 ]. Leaders should be alert to conflict and power dynamics [ 34 , 36 ]. It may be necessary for meeting chairs to encourage participants to move on from their familiar, entrenched positions to avoid descending into circular arguments and stalemates (Chisholm et al. 2018). This task could require the injection of a critical voice, as Greenhalgh explained:

“Meeting chairs were selected for their leadership qualities, ability to identify and rise above “groupthink” (bland consensus was explicitly discouraged), and commitment to ensuring that potential challenges to new ideas were identified and vigorously discussed. They set an important ethos of constructive criticism and creative innovation, with the patient experience as the central focus. They recognized that if properly handled, conflict was not merely healthy and constructive, but an essential process in achieving successful change in a complex adaptive system. ” [ 30 ]

Leaders must acknowledge the facts that discomfort can arise when more equitable relationships are established [ 61 ] and that challenges to professional identity [ 84 ] and the loss of control [ 72 ] are factors in this process.

Networking

Networking refers to the practice of establishing and maintaining relationships with various stakeholders both within and outside the coproduction initiative. Since coproduction involves working with different stakeholders in networks, several papers have discussed the vital mediating processes associated with this context.

“Bridging, brokering and boundary spanning roles have a key role in cross fertilization of ideas between groups, for generating new ideas and for increasing understanding and cooperation” [ 32 , 53 ].

In policy-making, it is helpful to develop coordination structures and processes such as cross-sector working groups and committees, intersector communication channels [ 65 ], and relationship and dialogue structures [ 42 ]. Community representatives can play a mediating role between individuals and public organizations and may alleviate professionals’ concerns regarding the transaction costs of coproduction in the planning and management of services [ 26 , 81 ]. However, these representatives may or may not use this power to amplify the voices of individual coproducers [ 81 ].

An important role of project leaders is that of the “broker” [ 32 , 85 ], who focuses on mediating among different stakeholders in an attempt to align their perspectives [ 26 ,  37 ,  72 , 86 ]. Another role focuses on spanning the boundaries across sites [ 50 ], between local service providers [ 68 ], or among local services, non-government organizations and the community [ 38 ]. Bovaird, drawing on a number of cases of coproduction, came to the following conclusion:

“ there is a need for a new type of public service professional: the coproduction development officer, who can help to overcome the reluctance of many professionals to share power with users and their communities and who can act internally in organizations (and partnerships) to broker new roles for coproduction between traditional service professionals, service managers, and the political decision-makers who shape the strategic direction of the service system.” [ 81 ]

Orchestration

This practice involves reflecting on and improving coproduction itself. It includes activities such as evaluating the effectiveness of coproduction efforts, assessing the impact of coproduction on outcomes, and making adjustments to improve the coproduction process. Several papers have addressed the roles of local government or public managers or health professionals in overseeing and (as we refer to this process) ‘orchestrating’ the networks involved in coproduction at the community or local government level [ 30 , 33 , 65 , 74 , 87 ]. Orchestration involves recruiting the appropriate actors, as noted above, as well as directing and coordinating activities, thereby ensuring that the whole is more than the sum of its parts. As part of their orchestration work, leaders play a role in the task of managing risk in service innovation [ 55 , 87 ] and must commit to self-reflexivity and a critical review of norms, policies and practices to alert themselves to any unintended negative consequences and strive to counteract them [ 55 ]. Sturmberg, Martin and O’Halloran [ 88 ] used the metaphor of ‘conducting’ to describe the function of leadership in health care – i.e., leading the orchestra through inspiration and empowerment rather than control, and providing feedback as the performance unfolds.

From a public service perspective, Powers and Thompson [ 69 ] argued that coproduction requires the leader (“usually a public official”) to mobilize the community on behalf of the public good, organize the provision of the good, create incentives, and supervise the enforcement of community norms. Sancino [ 74 ] argued that local governments play a ‘meta-coproduction role’. This role requires them, first, to maximize the coproduction and peer-production of community outcomes by taking community contributions into account and deciding which services should be commissioned or decommissioned (a point that was also made by Wilson [ 87 ]) and, second, to promote coproduction and peer-production in such a way that outcomes decided upon through a democratic process are themselves coproduced. In this way, he argued,

"the local government becomes the pivot of different kinds of relationships and networks made up of different actors who collectively assume the responsibility for implementing an overall strategic plan of the community beyond their specific roles and interests." [ 74 ]

Sancino [ 74 ] attempted to draw out the leadership implications of this situation, arguing that rather than focusing on service delivery, public managers must create appropriate conditions for such meta-coproduction. This task entails a directing role based on framing shared scenarios for change in the community through sense-making; an activator role based on activating, mobilizing and consolidating the social capital of the community to promote diffused public leadership; a convenor role based on serving as a meta-manager in the process of self-organizing the knowledge, resources and competencies pertaining to the community in question; and an empowering role based on creating conditions in which peer production and coproduction can be combined to create the corresponding added value (i.e., higher levels of community outcomes) [ 74 ]. This practice essentially focuses on self-assessment and continuous improvement within the coproduction framework.

Implementation

It has been argued that coproduction in services [ 30 , 79 ] or policy-making [ 65 ] may improve implementation. The role of leadership in supporting the implementation of the outcomes of coproduction is essential [ 37 , 41 , 49 , 52 , 64 , 65 , 85 , 86 ]. Leaders can argue for the legitimacy of coproduced innovations [ 89 ] and implement mechanisms aimed at acting on the issues thus raised and continuing to promote patients’ involvement [ 28 , 41 ]. Implementing the outcomes of coproduction relies on outcome-focused leadership [ 30 ]. The results of coproduction initiatives must be transformed into strategic plans and policies [ 41 ], and patient perspectives must be translated into actionable quality improvement initiatives [ 49 ]. Conversely, implementation can be blocked by leaders who fail to respond to the results of coproduction initiatives or who implement policies or procedures that are poorly aligned with the recommendations arising from coproduction [ 30 , 41 ]. It should also be acknowledged that not all demands thus generated can always be met [ 61 ]. Failures of implementation run the risk of stakeholder disillusionment; thus, the management of expectations is important.

A framework for coproduction leadership

When coproduction is initiated, it is possible to consider the actors involved and to imagine various forms of coproduction. In the design process, it is possible to make a deliberate choice with regard to the most appropriate model of leadership, and depending on the leadership model selected (leader-centric, coleadership, or collective leadership), different leadership practices emerge. The nine leadership practices identified can be enacted by different people and in different ways. The leadership of coproduction that thus emerges is shaped by issues such as the model of coproduction, the stakeholders involved, participants’ motivations and the context of coproduction. A main concern lies in the need to design project structures and work practices that are aligned and that enable leadership to emerge. We thus created a table (Table  1 ) that illustrates potential reflective questions in this context.

This discussion highlights and problematizes the two main findings of this systematic review, namely, the need to deliberately consider underlying models of leadership and the complex character of leading coproduction.

The need for the deliberate use of leadership in the plural

A focus on leader-centric approaches and the quality of leaders has characterized public leadership research [ 90 ]. Such a focus is echoed in our findings on coproduction leadership, first with regard to the prominence of senior leaders and, to a lesser extent, facilitators. Politicians were rarely identified in the papers included in our review despite representing some of the main actors identified in a previous review [ 4 , 91 ]. Second, many papers referenced the need for “strong” leaders, and the skills and behaviours of individual leaders were noted. As other researchers have found, despite the focus of this field on relationships and interactions, its emphasis has frequently remained on the individual leader and their ability to engage and inspire followers [ 13 ]. Furthermore, even in papers that emphasized ‘coleadership’ or ‘collective leadership,’ the focus remained on public managers, service managers and facilitators. Very little evidence has been reported concerning individual service users’ or citizens’ leadership of (as distinct from involvement in) coproduction. Although the involvement of community leaders was reported to play a role in project success, no articles explored this issue further.

However, some important exceptions should be noted. For example, some studies exhibited a preference for mixed models, employing both a directive approach (particularly in the beginning) and a more facilitative and distributed leadership approach [ 56 ]. Rycroft-Malone et al. [ 53 ] concluded that consideration should be given to models that combine hierarchical, directive structures with distributed facilitative forms of leadership.

One explanation for this rather narrow view of leadership is that despite the rapidly increasing number of publications in the general field of coproduction [ 7 , 18 ], empirical studies have still lacked depth with regard to investigations of the leadership of this process. Most empirical studies included in this review mentioned leadership only in passing or derived some conclusions regarding leadership from case studies focusing on other aspects of the coproduction process.

Another explanation for this situation is that although coproduction focuses on partnership, in most cases, senior leaders have control over resources and the power to define the means, methods, extent and forms of participation [ 65 ]. Even shared leadership models seem to rely on traditional leaders’ willingness to share power [ 10 ], as leaders are the actors who invite, facilitate, and support the participation of coleaders. However, some signs of change towards a broader view should be noted. Recent publications have theorized the leadership of coproduction and included case studies that have demonstrated leadership to be a social, collective and relational phenomenon that emerges as a property of interactions among individuals in given contexts [ 13 , 19 ].

The complexity of coproduction leadership practices

Our findings indicate that the leadership of coproduction practices entails challenging and complex tasks. Complexity emerges in cases in which many parts are interrelated in multiple ways. Different kinds of leadership activities may be necessary depending on the stakeholders involved [ 92 ], the context [ 13 ], and the mode, level, and phase of coproduction [ 93 ]. A complexity perspective based on systems thinking is therefore useful [ 13 , 19 ]. All actors involved in coproduction are potential leaders, but for that potential to be realized, the coproduction initiative and its leadership must be framed and comprehended in a more plural way. A recent study on systems thinking and complex adaptive thinking as means of initiating coproduction advocated a collective leadership approach [ 19 ].

Our findings highlight the need for a complex way of making meaning of leadership throughout the coproduction process, such as the ability to respond flexibly to circumstances and to employ both strong leadership and more facilitative approaches as needed. Leaders must also promote the values of democracy, transparency and the redistribution of power among stakeholders throughout the process [ 64 , 94 ]. These practices and tasks are complex and must be matched by a corresponding inner mental complexity [ 95 , 96 ]. Several practices identified in this research, such as genuinely valuing diverse perspectives, promoting mutual transformative power sharing and welcoming conflicts, require a complex mode of meaning-making that results from psychological development. These issues warrant further exploration. Future studies featuring a thoughtful choice of leadership and complexity models as well as a broader methodological repertoire are thus necessary (see Table  2 for an overview).

Methodological strengths and limitations

A strength of this review lies in its integration of research on the sparse and overlooked issue of leadership in coproduction. Our search strategy, which relied on the key words manag* and lead*, may have excluded some relevant papers. To verify that this approach did not represent an excessively blunt exclusion criterion, we checked 10% of the articles that were excluded on this basis; all of them would also have been excluded for failing to include any exploration of the management or leadership of coproduction. We therefore judged this exclusion criterion to be justifiable. Many papers did not have an explicit focus on leadership; in the synthesis, however, all data were treated as fragments that together formed a larger pattern, much like a kaleidoscope. The exclusion of non-peer-reviewed papers is likely to have excluded coproduced outputs, which may have offered important insights into the leadership of coproduction, particularly with regard to the experiences of service users and citizens playing leadership roles. The PRISMA guidelines were followed in the reporting of this review (Additional file 2 ). The lack of a reporting bias assessment and a certainty assessment represents a limitation of this study.

Future research

Future research (see Table  2 ) should focus on under-represented roles, such as those of politicians and community leaders, and explore emerging collective leadership models based on real-time observational studies. It should also investigate the balance between strong and shared leadership by using qualitative and participatory research methods. Incorporating systems thinking and relevant leadership models can offer new perspectives on collective leadership practices.

Practical implications

This paper explored coproduction leadership practices and revealed that they require a deliberate and plural understanding of leadership roles and tasks. We proposed a framework for coproduction leadership that takes into account the actors involved, the models of leadership, and the leadership practices that emerge in different contexts and during different phases of coproduction. We also provided a set of reflective considerations that can help all actors involved in this process make more deliberate choices regarding the parties involved, leadership models of coproduction, and practices (Table  1 ).

Our systematic review revealed some gaps in the literature on coproduction leadership, such as the lack of attention to the mental complexity of coproduction leaders, the under-representation of service users and citizens as leaders, and the need for more empirical studies that use appropriate models and methods to capture the complexity of coproduction leadership. We suggest that future research should address these gaps, thus contributing to the advancement of coproduction theory and practice.

Our framework also has some practical implications for coproduction leaders and participants. At the start of the coproduction process, all participants, particularly leaders, must learn more about different models of leadership and how power is shared. Throughout this process, flexibility is necessary because leadership constellations change over time; they emerge and fade away, implying different underlying leadership models. A multitude of practices must be implemented throughout the coproduction process. People in leader roles must be aware of their personal strengths and limitations, not only with the goal of sharing leadership but also with the aim of establishing partnerships with others who have competence in certain practices, such as facilitation or addressing conflicts. Reflecting upon the guiding questions can also help illustrate the extent to which power and leadership are being shared. In conclusion, to create more equal power relations over time, we must challenge our current practices and work deliberately to enhance the capacity of individuals and groups to engage effectively in coproduction leadership.

Osborne SP, Radnor Z, Strokosch K. Co-production and the co-creation of value in public services: a suitable case for treatment? Public Manag Rev. 2016;18(5):639–53.


Voorberg WH, Bekkers VJJM, Tummers LG. A Systematic Review of Co-Creation and Co-Production: Embarking on the social innovation journey. Public Manag Rev. 2015;17(9):1333–57.

Gassner D, Gofen A. Coproduction investments: Street-level management perspective on coproduction. Cogent Business Management. 2019;6(1):1617023.

Bussu S, Tullia Galanti M. Facilitating coproduction: the role of leadership in coproduction initiatives in the UK. Policy Society. 2018;37(3):347–67.

Brown PR, Head BW. Navigating tensions in co-production: A missing link in leadership for public value. Public Administration (London). 2019;97(2):250–63.

Schlappa H, Imani Y. Who is in the lead? New perspectives on leading service co-production. In: Co-production and co-creation. Routledge; 2018. p. 99–108.

Masterson D. Mapping definitions of co-production and co-design in health and social care: systematic scoping review providing lessons for the future. Health Expect. 2022;25(3):902–13.


Osborne SP, Strokosch K. It takes Two to Tango? Understanding the Co-production of Public Services by Integrating the Services Management and Public Administration Perspectives. Br J Manag. 2013;24:S31–47.

Parry KW, Bryman A. Leadership in organizations. In: Clegg SR, Hardy C, Lawrence TB, Nord WR, editors. The SAGE Handbook of Organization Studies. London: SAGE; 2006. p. 446–68.

Denis J-L, Langley A, Sergi V. Leadership in the plural. Acad Manag Ann. 2012;6(1):211–83.

Drath WH, McCauley CD, Palus CJ, Van Velsor E, O’Connor PMG, McGuire JB. Direction, alignment, commitment: Toward a more integrative ontology of leadership. Leadersh Q. 2008;19(6):635–53.

Ospina SM, Foldy EG, Fairhurst GT, Jackson B. Collective dimensions of leadership: Connecting theory and method. Human Relations. 2020;73(4):441–63.

Schlappa H, Imani Y, Nishino T. Relational leadership: an analytical lens for the exploration of co-production. In: Loeffler E, Bovaird T, editors. The Palgrave Handbook of Co-Production of Public Services and Outcomes. edn.: Springer; 2021. p. 471–90.

Loeffler E. Co-production of public services and outcomes. 1st ed. In: The Palgrave Handbook of Co-Production of Public Services and Outcomes. Cham, Switzerland: Palgrave Macmillan; 2021. p. 387–408. https://link.springer.com/book/10.1007/978-3-030-55509-2 .

Kotter JP. Force for change: how leadership differs from management. Simon and Schuster; 2008.

Palumbo R. Contextualizing co-production of health care: a systematic literature review. Intern J Public Sector Management. 2016;29(1):72–90.

Palumbo R, Manesh MF. Travelling along the public service co-production road: a bibliometric analysis and interpretive review. Public Manag Rev. 2021:1–37.

Fusco F, Marsilio M, Guglielmetti C. Co-production in health policy and management: a comprehensive bibliometric review. BMC Health Serv Res. 2020;20(1):504–504.

Docherty K. Exploring Collective Leadership and Co-Production: An Empirical Study. In: Processual Perspectives on the Co-Production Turn in Public Sector Organizations. IGI Global; 2021. p. 130–55.

Turnhout E, Metze T, Wyborn C, Klenk N, Louder E. The politics of co-production: participation, power, and transformation. Current Opinion Environmental Sustainability. 2020;42:15–21.

Kjellström S, Areskoug-Josefsson K, Andersson Gäre B, Andersson A-C, Ockander M, Käll J, McGrath J, Donetto S, Robert G. Exploring, measuring and enhancing the coproduction of health and well-being at the national, regional and local levels through comparative case studies in Sweden and England: the “Samskapa” research programme protocol. BMJ Open. 2019;9(7):e029723–e029723.

Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J. 2009;26(2):91–108.


Denyer D, Tranfield D. Producing a Systematic Review. In: Buchanan D, Bryman A, editors. The Sage Handbook of Organizational Research Methods. Sage Publications Ltd; 2009.

Sicilia M, Sancino A, Nabatchi T, Guarini E. Facilitating co-production in public services: management implications from a systematic literature review. Public Money Management. 2019;39(4):233–40.

Staniszewska S, Denegri S, Matthews R, Minogue V. Reviewing progress in public involvement in NIHR research: developing and implementing a new vision for the future. BMJ Open. 2018;8(7):e017124.

Byrne L, Stratford A, Davidson L. The global need for lived experience leadership. Psychiatr Rehabil J. 2018;41(1):76.

Nicol E, Sang B. A co-productive health leadership model to support the liberation of the NHS. J R Soc Med. 2011;104(2):64–8.

Hafford-Letchfield T, Simpson P, Willis PB, Almack K. Developing inclusive residential care for older lesbian, gay, bisexual and trans (LGBT) people: An evaluation of the Care Home Challenge action research project. Health Soc Care Community. 2018;26(2):e312–20.

Hopkins L, Foster A, Nikitin L. The process of establishing Discovery College in Melbourne. Ment Health Soc Incl. 2018;22(4):187–94.

Greenhalgh T, Jackson C, Shaw S, Janamian T. Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q. 2016;94(2):392–429.

Green S, Beveridge E, Evans L, Trite J, Jayacodi S, Evered R, Parker C, Polledri L, Tabb E, Green J. Implementing guidelines on physical health in the acute mental health setting: a quality improvement approach. Int J Ment Heal Syst. 2018;12(1):1–9.

Poocharoen O-o, Ting B. Collaboration, co-production, networks: Convergence of theories. Public Manag Rev. 2015;17(4):587–614.

Topp SM, Sharma A, Chileshe C, Magwende G, Henostroza G, Moonga CN. The health system accountability impact of prison health committees in Zambia. International journal for equity in health. 2018;17(1):74–74.

Chisholm L, Holttum S, Springham N. Processes in an experience-based co-design project with family carers in community mental health. SAGE Open. 2018;8(4):2158244018809220.

Mader LB, Harris T, Kläger S, Wilkinson IB, Hiemstra TF. Inverting the patient involvement paradigm: defining patient led research. Res Involv Engagem. 2018;4(1):1–7.

Mulvale G, Moll S, Miatello A, Robert G, Larkin M, Palmer VJ, Powell A, Gable C, Girling M. Codesigning health and other public services with vulnerable and disadvantaged populations: Insights from an international collaboration. Health Expect. 2019;22(3):284–97.

Larkin M, Boden ZV, Newton E. On the brink of genuinely collaborative care: experience-based co-design in mental health. Qual Health Res. 2015;25(11):1463–76.

Farooqi SA. Co-production: what makes co-production work? Evidence from Pakistan. Int J Public Sect Manag. 2016;29(4):381–95.

Murphy L, Wells JS, Lachman P, Bergin M. A quality improvement initiative in community mental health in the Republic of Ireland. Health Sci J. 2015;9(1):1.


Redwood S, Brangan E, Leach V, Horwood J, Donovan JL. Integration of research and practice to improve public health and healthcare delivery through a collaborative ‘Health Integration Team’ model – a qualitative investigation. BMC Health Serv Res. 2016;16(1):1–13.

Bombard Y, Baker GR, Orlando E, Fancott C, Bhatia P, Casalino S, Onate K, Denis J-L, Pomey M-P. Engaging patients to improve quality of care: a systematic review. Implement Sci. 2018;13(1):1–22.

Marston C, Hinton R, Kean S, Baral S, Ahuja A, Costello A, Portela A. Community participation for transformative action on women’s, children’s and adolescents’ health. Bull World Health Organ. 2016;94(5):376–82.

Burhouse A, Lea C, Ray S, Bailey H, Davies R, Harding H, Howard R, Jordan S, Menzies N, White S. Preventing cerebral palsy in preterm labour: a multiorganisational quality improvement approach to the adoption and spread of magnesium sulphate for neuroprotection. BMJ open quality. 2017;6(2):e000189.

Burhouse A, Rowland M, Niman HM, Abraham D, Collins E, Matthews H, Denney J, Ryland H. Coaching for recovery: a quality improvement project in mental healthcare. BMJ Open Quality. 2015;4(1):u206576–w202641.

Millenson ML, DiGioia AM III, Greenhouse PK, Swieskowski D. Turning patient-centeredness from ideal to real: lessons from 2 success stories. J Ambul Care Manage. 2013;36(4):319–34.

Yokota F, Biyani M, Islam R, Ahmed A, Nishikitani M, Kikuchi K, Nohara Y, Nakashima N. Lessons learned from Co-Design and Co-Production in a portable health clinic research project in Jaipur district, India (2016–2018). Sustainability. 2018;10(11):4148.

Cooke J, Langley J, Wolstenholme D, Hampshaw S. “Seeing” the difference: the importance of visibility and action as a mark of “authenticity” in co-production: comment on “Collaboration and co-production of knowledge in healthcare: opportunities and challenges”. Int J Health Policy Manag. 2017;6(6):345.

Sicilia M, Guarini E, Sancino A, Andreani M, Ruffini R. Public services management and co-production in multi-level governance settings. Int Rev Adm Sci. 2016;82(1):8–27.

Bak K, Moody L, Wheeler SM, Gilbert J. Patient and Staff Engagement in Health System Improvement: A Qualitative Evaluation of the Experience-Based Co-design Approach in Canada. Healthcare Quarterly (Toronto, Ont). 2018;21(2):24–9.

Bagot KL, Cadilhac DA, Kim J, Vu M, Savage M, Bolitho L, Howlett G, Rabl J, Dewey HM, Hand PJ. Transitioning from a single-site pilot project to a state-wide regional telehealth service: The experience from the Victorian Stroke Telemedicine programme. J Telemed Telecare. 2017;23(10):850–5.

Bruce G, Wistow G, Kramer R. Connected Care Re-visited: Hartlepool and Beyond. J Integr Care. 2011;19(2):13–21.

Hogan MJ, Johnston H, Broome B, McMoreland C, Walsh J, Smale B, Duggan J, Andriessen J, Leyden KM, Domegan C. Consulting with citizens in the design of wellbeing measures and policies: lessons from a systems science application. Soc Indic Res. 2015;123(3):857–77.

Rycroft-Malone J, Burton CR, Wilkinson J, Harvey G, McCormack B, Baker R, Dopson S, Graham ID, Staniszewska S, Thompson C. Collective action for implementation: a realist evaluation of organisational collaboration in healthcare. Implement Sci. 2015;11(1):1–17.

Sorrentino M, Guglielmetti C, Gilardi S, Marsilio M. Health care services and the coproduction puzzle: filling in the blanks. Administration Society. 2017;49(10):1424–49.

Williams BN, Kang S-C, Johnson J. (Co)-contamination as the dark side of co-production: Public value failures in co-production processes. Public Manag Rev. 2016;18(5):692–717.

Gillard S, Foster R, Turner K. Evaluating the Prosper peer-led peer support network: a participatory, coproduced evaluation. Ment Health Soc Incl. 2016;20(2):80–92.

Greenwood DA, Litchman ML, Ng AH, Gee PM, Young HM, Ferrer M, Ferrer J, Memering CE, Eichorst B, Scibilia R. Development of the intercultural diabetes online community research council: codesign and social media processes. J Diabetes Sci Technol. 2019;13(2):176–86.

Lindsay C, Pearson S, Batty E, Cullen AM, Eadson W. Co-production as a route to employability: Lessons from services with lone parents. Public Administration. 2018;96(2):318–32.

Bell T, Vat LE, McGavin C, Keller M, Getchell L, Rychtera A, Fernandez N. Co-building a patient-oriented research curriculum in Canada. Res Involv Engagem. 2019;5(1):1–13.

Cox N, Clayson A, Webb L. A safe place to reflect on the meaning of recovery: a recovery community co-productive approach using multimedia interviewing technology. Drugs Alcohol Today. 2016;16(1):4–15.

Haynes E, Marawili M, Marika BM, Mitchell AG, Phillips J, Bessarab D, Walker R, Cook J, Ralph AP. Community-based participatory action research on rheumatic heart disease in an Australian Aboriginal homeland: Evaluation of the ‘On track watch’ project. Eval Program Plann. 2019;74:38–53.

Ward ME, De Brún A, Beirne D, Conway C, Cunningham U, English A, Fitzsimons J, Furlong E, Kane Y, Kelly A. Using co-design to develop a collective leadership intervention for healthcare teams to improve safety culture. Int J Environ Res Public Health. 2018;15(6):1182.

McGregor J, Repper J, Brown H. “The college is so different from anything I have done”. A study of the characteristics of Nottingham Recovery College. J Mental Health Training, Education Practice. 2014;9(1):3–15.

Farr M. Power dynamics and collaborative mechanisms in co-production and co-design processes. Crit Soc Policy. 2018;38(4):623–44.

Hämäläinen R-M, Aro AR, Lau CJ, Rus D, Cori L, Syed AM. Cross-sector cooperation in health-enhancing physical activity policymaking: more potential than achievements? Health Res Policy Syst. 2016;14(1):33–33.

Farmer J, Taylor J, Stewart E, Kenny A. Citizen participation in health services co-production: a roadmap for navigating participation types and outcomes. Aust J Prim Health. 2017;23(6):509–15.

Morton M, Paice E. Co-Production at the Strategic Level: Co-Designing an Integrated Care System with Lay Partners in North West London, England. Int J Integr Care. 2016;16(2):2.

Jeffs L, Merkley J, Sinno M, Thomson N, Peladeau N, Richardson S. Engaging Stakeholders to Co-design an Academic Practice Strategic Plan in an Integrated Health System: The Key Roles of the Nurse Executive and Planning Team. Nurs Adm Q. 2019;43(2):186–92.

Powers KJ, Thompson F. Managing Coprovision: Using Expectancy Theory to Overcome the Free-Rider Problem. J Public Adm Res Theory. 1994;4(2):179–96.

Andrews R, Brewer GA. Social capital, management capacity and public service performance: Evidence from the US states. Public Manag Rev. 2013;15(1):19–42.

Budge G, Mitchell A, Rampling T, Down P, Collective B. “It kind of fosters a culture of interdependence”: A participatory appraisal study exploring participants’ experiences of the democratic processes of a peer-led organisation. J Community Applied Social Psychology. 2019;29(3):178–92.

Farmer J, Currie M, Kenny A, Munoz S-A. An exploration of the longer-term impacts of community participation in rural health services design. Soc Sci Med. 2015;141:64–71.

Ersoy A. The spread of coproduction: How the concept reached the northernmost city in the UK. Local Econ. 2016;31(3):410–23.


Sancino A. The meta co-production of community outcomes: Towards a citizens’ capabilities approach. International J Voluntary Nonprofit Organizations. 2016;27(1):409–24.

Nies H. Communities as co-producers in integrated care. Int J Integr Care. 2014;(14):1–4.

Terp M, Laursen BS, Jørgensen R, Mainz J, Bjørnes CD. A room for design: Through participatory design young adults with schizophrenia become strong collaborators: A Room for Design. Int J Ment Health Nurs. 2016;25(6):496–506.

Seid M, Dellal G, Peterson LE, Provost L, Gloor PA, Fore DL, Margolis PA. Co-designing a Collaborative Chronic Care Network (C3N) for inflammatory bowel disease: development of methods. JMIR Hum Factors. 2018;5(1):e8083.

Schaaf M, Topp SM, Ngulube M. From favours to entitlements: community voice and action and health service quality in Zambia. Health Policy Plan. 2017;32(6):847–59.

Oertzen A-S, Odekerken-Schröder G, Brax SA, Mager B. Co-creating services—conceptual clarification, forms and outcomes. J Serv Manag. 2018;29(4):641–79.

Nimegeer A, Farmer J, West C, Currie M. Addressing the problem of rural community engagement in healthcare service design. Health Place. 2011;17(4):1004–6.

Bovaird T. Beyond engagement and participation: User and community coproduction of public services. Public Adm Rev. 2007;67(5):846–60.

Macaulay B. Considering social enterprise involvement in the commissioning of health services in Shetland. Local Econ. 2016;31(5):650–9.

Miller A, Young EL, Tye V, Cody R, Muscat M, Saunders V, Smith ML, Judd JA, Speare R. A community-directed integrated Strongyloides control program in Queensland, Australia. Trop Med Infect Dis. 2018;3(2):48.

Walsh M, Kittler MG, Mahal D. Towards a new paradigm of healthcare: Addressing challenges to professional identities through Community Operational Research. Eur J Oper Res. 2018;268(3):1125–33.

Windrum P. Third sector organizations and the co-production of health innovations. Manag Decis. 2014;5(6):1046–56.

Farmer J, Carlisle K, Dickson-Swift V, Teasdale S, Kenny A, Taylor J, Croker F, Marini K, Gussy M. Applying social innovation theory to examine how community co-designed health services develop: using a case study approach and mixed methods. BMC Health Serv Res. 2018;18(1):1–12.

Wilson G. Co-Production and Self-Care: New Approaches to Managing Community Care Services for Older People. Social Policy Administration. 1994;28(3):236–50.

Sturmberg JP, Martin CM, O’Halloran D. Music in the Park. An integrating metaphor for the emerging primary (health) care system: Music in the Park. J Eval Clin Pract. 2010;16(3):409–14.

Vennik FD, van de Bovenkamp HM, Putters K, Grit KJ. Co-production in healthcare: rhetoric and practice. Int Rev Adm Sci. 2016;82(1):150–68.

Crosby BC, Bryson JM. Why leadership of public leadership research matters: and what to do about it. Public Manag Rev. 2018;20(9):1265–86.

Clarke D, Jones F, Harris R, Robert G. What outcomes are associated with developing and implementing co-produced interventions in acute healthcare settings? A rapid evidence synthesis. BMJ Open. 2017;7(7):e014650.

Gallan AS, Jarvis CB, Brown SW, Bitner MJ. Customer positivity and participation in services: an empirical test in a health care context. J Acad Mark Sci. 2013;41(3):338–56.

Nabatchi T, Sancino A, Sicilia M. Varieties of Participation in Public Services: The Who, When, and What of Coproduction. Public Adm Rev. 2017;77(5):766–76.

Larsen T, Karlsen JE, Sagvaag H. Keys to unlocking service provider engagement in constrained co-production partnerships. Action Research. 2020:1476750320925862.

Kegan R, Laskow Lahey L. How the way we talk can change the way we work: seven languages for transformation. San Francisco: Jossey-Bass Pfeiffer; Wiley; 2001.

Hill R. Thinking like a round table leader: how mental complexity enables leaders to succeed in a complex environment. Journal of Character and Leadership Development. 2021;8(1):116–30.


Acknowledgements

The authors wish to thank Forte, the Swedish Research Council for Health, Working Life and Welfare. In particular, we would like to thank Mary McCall for valuable help.

Funding

Open access funding provided by Jönköping University. The study, part of the Samskapa coproduction research programme, received funding from Forte, the Swedish Research Council for Health, Working Life and Welfare, under grant agreement no. 2018–01431.

Author information

Authors and Affiliations

The Jönköping Academy for Improvement of Health and Welfare, School of Health and Welfare, Jönköping University, Barnarpsgatan 39, Jönköping, Sweden

Sofia Kjellström & Daniel Masterson

Florence Nightingale Faculty of Nursing, Midwifery & Palliative Care, King’s College London, London, UK

Sofia Kjellström & Sophie Sarre


Contributions

SK and SS performed the data extraction and qualitative synthesis and drafted the manuscript and Table 1. SK finalized the manuscript. DM screened the data from a previous scoping review, provided the search strategy (Additional file 1: Appendix 1) and constructed the PRISMA flowchart. SS compiled the sample description in Additional file 2: Appendix 2. All authors reviewed and approved the manuscript and agreed to be accountable for all aspects of the work.

Corresponding author

Correspondence to Sofia Kjellström .

Ethics declarations

Ethics approval and consent to participate.

Not applicable; under Swedish law, ethical approval is not required for research that does not involve human subjects.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix 1.

Description of included papers. 

Additional file 2: Appendix 2. 

PRISMA_2020_checklist – Management review.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Kjellström, S., Sarre, S. & Masterson, D. The complexity of leadership in coproduction practices: a guiding framework based on a systematic literature review. BMC Health Serv Res 24 , 219 (2024). https://doi.org/10.1186/s12913-024-10549-4


Received : 11 April 2023

Accepted : 03 January 2024

Published : 17 February 2024

DOI : https://doi.org/10.1186/s12913-024-10549-4


Keywords: Health and welfare


  • Open access
  • Published: 06 February 2024

What are the learning objectives in surgical training – a systematic literature review of the surgical competence framework

  • Niklas Pakkasjärvi 1 , 2 ,
  • Henrika Anttila 3 &
  • Kirsi Pyhältö 3 , 4  

BMC Medical Education, volume 24, Article number: 119 (2024)


Abstract

To map the landscape of contemporary surgical education through a competence framework by conducting a systematic literature review on learning outcomes of surgical education and the instructional methods applied to attain the outcomes.

Surgical education has seen a paradigm shift towards competence-based training. However, a gap remains in the literature regarding the specific components of competency taught and the instructional methods employed to achieve these outcomes. This paper aims to bridge this gap by conducting a systematic review on the learning outcomes of surgical education within a competence framework and the instructional methods applied. The primary outcome measure was to elucidate the components of competency emphasized by modern surgical curricula. The secondary outcome measure was to discern the instructional methods proven effective in achieving these competencies.

A search was conducted across PubMed, Medline, ProQuest Eric, and Cochrane databases, adhering to PRISMA guidelines, limited to 2017–2021. Keywords included terms related to surgical education and training. Inclusion criteria mandated original empirical studies that described learning outcomes and methods, and targeted both medical students and surgical residents.

Out of 42 studies involving 2097 participants, most concentrated on technical skills within competency-based training, with a lesser emphasis on non-technical competencies. The effect on clinical outcomes was infrequently explored.

The shift towards competency in surgical training is evident. However, further studies on its ramifications on clinical outcomes are needed. The transition from technical to clinical competence and the creation of validated assessments are crucial for establishing a foundation for lifelong surgical learning.


Introduction

Surgery requires a highly specialized set of surgical knowledge, skills, and attitudes that will allow a surgeon to perform the requisite procedures in collaboration with the patient and the multi-professional team. These competencies are fundamental to a surgeon’s ability to function effectively, necessitating flexibility, adaptability, and continuous professional development. In the field of learning sciences, the term competence is used to refer to the combination of knowledge, skills, and attitudes that allows an individual to solve the job-related task or a problem at hand and act professionally [ 1 , 2 , 3 , 4 ]. Accordingly, it can be claimed that cultivating a set of surgical competencies organically integrating knowledge, skills, and attitudes needed in surgeons’ work is imperative for high-quality surgical education. This calls for the understanding of both the range of competencies acquired in surgery training and the kinds of instructional methods that are effective in adopting them. Interestingly, many studies in surgical education, including systematic literature reviews, appear to often focus on a single learning outcome. This typically involves exploring either a specific technical skill or content knowledge in a surgical area, along with assessing the effectiveness of a particular instructional method [ 5 , 6 , 7 , 8 , 9 ].

The traditional Halstedian methods, with their focus on incremental responsibility and volume-based exposure, have been foundational in surgical training. Over the past few decades, this approach has been complemented with more tailored instructional methods [ 10 , 11 ]. For example, technical skills are often practised with models and simulators [ 12 , 13 ], thus increasing patient safety during surgery and allowing the training surgeon to focus on the operation without feeling pressured to execute technical tasks [ 11 ]. Simulation training has demonstrated positive effects, especially on technical skills [ 14 , 15 , 16 ], but also on the longitudinal transfer of skills [ 17 ]. Much of the research on simulation has focused on training assessment, with validated programs becoming more widely available [ 18 , 19 , 20 , 21 , 22 ]. Procedure-specific assessment has become common in evaluating surgical learning outcomes and has resulted in a set of validated task-specific assessment tools, such as OSATS (Objective Structured Assessment of Technical Skills) [ 23 ]. However, reducing surgery to separate technical tasks carries risks for the development of surgical competence, mainly a lack of integration in the learning of surgical skills, knowledge, and attitudes, which further compromises continuous professional development and thus potentially occupational wellbeing. There is also contradictory evidence on the effectiveness of surgical training methods in achieving the desired learning outcomes, although this may be related more to the unrealized potential of evidence-based training methods [ 24 ]. Further, the implementation of modern surgical training is lagging [ 25 ].

To sum up, while research on surgical education has significantly advanced our understanding of more tailored methods for cultivating surgical learning, it has also typically adopted a single-ingredient approach [ 10 , 11 ]. A problem with this approach is that it neglects the complexity of surgical competence development and, without coherence building, bears the inherent risk of reducing surgery to mastering a series of technical tasks rather than providing tools for cultivating surgical competencies. Moreover, only a few prior systematic reviews on surgical education have studied surgical learning across the fields of surgery or among both medical students and surgical residents.

Our study aims to comprehensively analyze the competencies targeted in contemporary surgical education, as revealed through a systematic literature review. We seek to elucidate the nature of these competencies (including skills, knowledge, and attitudes) and the instructional methods employed to develop them in medical students and surgical residents. This approach will highlight how competencies are defined, integrated, and cultivated in surgical education according to the existing literature. Specifically, our primary outcome is to identify and detail the competencies (skills, knowledge, and attitudes) emphasized in the existing research on surgical education. We aim to understand how these competencies are conceptualized, taught, and developed, providing insights into the current focus of surgical training programs. As a secondary objective, we examine the instructional methods discussed in the literature for teaching these competencies. This involves analyzing the effectiveness and application of different teaching strategies in nurturing a comprehensive set of surgical competencies, focusing on integrating technical and non-technical skills.
To our knowledge, this is the first published effort within surgery to comprehensively review the literature on the development of surgical competencies and the associated instructional methods across surgical fields, drawing on studies conducted with both medical students and surgical residents.

Methods

We conducted a systematic literature review following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [ 26 ].

Research strategy and data sources

We searched four electronic databases, PubMed, Medline, ProQuest Eric, and Cochrane, on 18 February 2021. Only articles in English were considered, and the search was limited to the years 2017–2021. This restriction was based on a pilot search, which identified a high volume of review articles before 2017 and a significant increase in the quantity and relevance of primary research studies on the surgical competence framework beginning in 2017. The search string consisted of the following keywords: “Surgical Education”, “Surgical Training”, “Surgical Intern*”, “surgical resident” OR “surgical apprentice” AND “learning”. The detailed syntax of the search was: (“surgical intern” AND learning) OR (“surgical training” AND learning) OR (“surgical intern*” AND learning) OR (“surgical resident” AND learning) OR (“surgical apprentice” AND learning). The database search resulted in 1305 articles (1297 from PubMed/Medline, 6 from the Cochrane databases, and 2 from ProQuest Eric).
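For readers who wish to reproduce the reported PubMed portion of the search, the sketch below runs the same Boolean string against the NCBI E-utilities esearch endpoint with a 2017–2021 publication-date window. This is purely an illustrative aid and not part of the authors' described workflow; the result cap (retmax) and the date parameters are assumptions made for the example, and the other three databases would need to be queried separately.

```python
# Illustrative only: reproduce the reported PubMed search via NCBI E-utilities.
# The authors searched four databases; this sketch covers PubMed alone.
import requests

QUERY = (
    '("surgical intern" AND learning) OR ("surgical training" AND learning) OR '
    '("surgical intern*" AND learning) OR ("surgical resident" AND learning) OR '
    '("surgical apprentice" AND learning)'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={
        "db": "pubmed",
        "term": QUERY,
        "datetype": "pdat",   # filter on publication date
        "mindate": "2017",
        "maxdate": "2021",
        "retmax": 2000,       # assumed cap; adjust as needed
        "retmode": "json",
    },
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print(f"{result['count']} records found")
print(result["idlist"][:10])  # first ten PubMed IDs
```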

Inclusion criteria and study selection

We applied five inclusion criteria for the data. To be included in the review, the articles had to fulfil the following criteria:

be original empirical studies.

be published in a peer-reviewed journal between 2017 and 2021.

be written in English, although the study could have been conducted in any country.

include surgical residents and/or medical students as participants.

include descriptions of learning outcomes and methods of learning in the results of the study.

Data were extracted manually in several increments. Two of the authors (NP and HA) independently reviewed the titles and abstracts of all articles identified by the search and marked potentially relevant articles for full-text retrieval (see Fig.  1 for the PRISMA diagram of the review flow). After the titles and abstracts were read and duplicates removed, 1236 articles were excluded because they did not meet the inclusion criteria. These included 13 literature reviews, which were excluded because they were not empirical; however, their reference lists were screened using a snowball method to detect additional references, which added 16 studies to the full-text analysis. The two authors then independently examined the full texts of the remaining 85 articles against the inclusion criteria and selected the studies eligible for inclusion in the review. At this point, 43 articles were excluded because they did not describe learning outcomes or learning activities. Disagreements between the two authors were minimal and were resolved through a joint review of the full-text articles and discussion with the third co-author (KP). All articles that matched the inclusion criteria were retained, resulting in 42 articles being included in the review.
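As a quick consistency check, the selection counts reported above can be re-derived from one another; the short sketch below does nothing more than verify that the reported numbers add up to the 85 full-text articles and the 42 included studies.

```python
# Sanity-check the PRISMA flow counts reported in the text.
identified = 1305               # database hits (PubMed/Medline, Cochrane, ProQuest Eric)
excluded_title_abstract = 1236  # removed after de-duplication and title/abstract screening
snowballed = 16                 # added from the reference lists of excluded reviews

full_text_screened = identified - excluded_title_abstract + snowballed
assert full_text_screened == 85  # matches the 85 full-text articles reported

excluded_full_text = 43          # lacked learning outcomes or learning activities
included = full_text_screened - excluded_full_text
assert included == 42            # matches the 42 studies in the final review
print(full_text_screened, included)
```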

Figure 1. The PRISMA diagram depicts the flow of the systematic review, from the initial identification of 1305 database hits to the final inclusion of 42 articles.

Data extraction

Two of the authors (NP and HA) extracted and documented information about 11 characteristics of each study in an Excel file to create a data sheet for the analysis. The following characteristics of the studies were recorded: country, participants, field of surgery, study design, use of a control group, tool, outcome measure, core finding, results on surgical learning outcomes, instructional design applied, and clinical setting. Learning outcomes were categorized according to the three components of surgical competence: (a) knowledge, (b) skills (including both technical and non-technical skills), and (c) attitudes [ 22 ]. Surgical knowledge included results concerning training surgeons’ theoretical and practical knowledge about surgery, a procedure, or medicine more generally. Surgical skills entailed results on their technical and non-technical skills, strategies, reflection, and self-regulation. Surgical attitudes involved results on training surgeons’ attitudes towards their work and their views of themselves as surgeons. The instructional design reported in the studies was coded into seven categories according to the mode of instruction applied in the study for training surgeons: (a) learning by doing, (b) learning through reflection, (c) learning by modelling, (d) learning by direct instruction, (e) learning by self-directed study, (f) learning by mentoring or teaching, and (g) learning by gaming.

The “Learning by doing” category included instructional situations in which medical students and surgeons learned while working as surgeons, for example, by completing surgical tasks and procedures. “Learning through reflection” included situations in which they learned by reflecting on their prior experiences, thoughts, own development, and performance in specific tasks.

In the “Learning by modelling” category, learning occurred by observing or copying the behaviors of peers or more experienced surgeons. “Learning by direct instruction” included situations in which they learned while attending formal education, lectures, or seminars and by receiving tips or practical guidance from others.

The “Learning by self-directed study” category encompassed situations where training surgeons learned through self-directed study, such as reading, seeking information, and independently watching procedure videos, without any external intervention.

In the “Learning by mentoring or teaching” category, training surgeons learned while they taught or mentored their peers. “Learning by gaming” included situations where training surgeons played games to improve their competence.

Regarding categorization, each of the studies included in the review could belong to one or more of these categories. However, to be included in a category, the article needed to clearly explain that the instructional method in question was used in the study. For example, even though performing surgical procedures might also involve self-reflection, the article was categorized under “learning by doing” and not additionally under “learning through reflection” unless the reflection was explicitly mentioned in the article.
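To make the coding scheme concrete, the sketch below shows how one row of the extraction sheet might be represented, using the 11 recorded study characteristics and the closed lists of competence components and instructional categories described above. The example values are hypothetical and do not correspond to any particular study in the review.

```python
# Illustrative data-extraction record mirroring the coding scheme described above.
# Field names follow the 11 recorded study characteristics; the example values are
# invented purely for illustration and do not describe any included study.
from dataclasses import dataclass, field

COMPETENCE_COMPONENTS = ("knowledge", "skills", "attitudes")
INSTRUCTIONAL_CATEGORIES = (
    "learning by doing",
    "learning through reflection",
    "learning by modelling",
    "learning by direct instruction",
    "learning by self-directed study",
    "learning by mentoring or teaching",
    "learning by gaming",
)

@dataclass
class StudyRecord:
    country: str
    participants: str
    field_of_surgery: str
    study_design: str
    control_group: bool
    tool: str
    outcome_measure: str
    core_finding: str
    learning_outcomes: list[str] = field(default_factory=list)    # subset of COMPETENCE_COMPONENTS
    instructional_design: list[str] = field(default_factory=list)  # subset of INSTRUCTIONAL_CATEGORIES
    clinical_setting: bool = False

# Hypothetical example row.
example = StudyRecord(
    country="USA",
    participants="24 surgical residents",
    field_of_surgery="general surgery",
    study_design="randomized controlled trial",
    control_group=True,
    tool="OSATS",
    outcome_measure="technical skill score",
    core_finding="simulation training improved knot-tying performance",
    learning_outcomes=["skills"],
    instructional_design=["learning by doing", "learning through reflection"],
    clinical_setting=False,
)
assert set(example.learning_outcomes) <= set(COMPETENCE_COMPONENTS)
assert set(example.instructional_design) <= set(INSTRUCTIONAL_CATEGORIES)
```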

Results

We included 42 empirical studies involving 2097 medical students and surgeons in training in this systematic review. The studies on surgical learning were geographically distributed across ten countries. Most of the studies were conducted in the USA ( n  = 22) and Canada ( n  = 12); however, studies from the UK, the Netherlands, Austria, Chile, Germany, Finland, and Switzerland were also present. Surgical learning was typically explored in small-scale studies, with a median of 28 participants and an interquartile range of 46 (see Table  1 ). Most of the studies focused on surgical residents’ learning ( n  = 29), whereas medical students’ surgical learning was explored in 11 studies. One study had both residents and medical students as participants. Twenty-seven studies investigated surgical learning in general surgery, with the remaining 16 in various other surgical specialties (including gynecology, cardiology, urology, pediatrics, neurosurgery, microsurgery, orthopedics, vascular surgery, gastro surgery and otolaryngology). The study designs of the empirical studies varied from simulation (including bench models, animals, human cadavers, and virtual reality (VR)), operating room (OR) procedures, interviews, surveys, and writing tasks to knowledge tests and the resident report card. Most of the studies employed multimodal designs. Eighteen of the studies were controlled; 13 studies were randomized controlled trials (RCTs), and five were controlled trials (CTs). The core finding was discussed in all studies, and where applicable, statistical tests were applied to highlight significance. Almost half of the studies ( n  = 18) were conducted in clinical settings.

Primary outcome measures: learning objectives of surgeons in training and competency components

Most of the included studies on surgical learning focused on surgical skills and their attainment ( n  = 36) (see Table  1 ). Training surgeons commonly learned technical skills such as knot tying, distinct surgical procedures, and robotic skills ( n  = 25). In contrast, the learning of non-technical skills ( n  = 11), such as communication, patient management, reflection, self-regulation, and decision-making skills, was less often reported. Twenty-two studies focused on the acquisition of surgical knowledge, such as general medical or surgical knowledge or more specific knowledge of certain procedures. Some of the studies ( n  = 10) reported attitudinal learning outcomes, including confidence, resilience, and self-efficacy. Most of the studies ( n  = 26) had a single focus on surgical competence, i.e., they focused on the learning of skills, knowledge, or attitudes. However, in 19 studies, the training surgeons’ learning was a combination of several skills, knowledge, and attitudes, most typically technical skills and surgical knowledge. The empirical studies relied on performance assessment ( n  = 15), including studies in which performance was assessed by others, such as senior surgeons rating the performance of the training surgeons, or on self-reports of the learning outcomes ( n  = 11). Sixteen studies combined both performance assessment and self-reports of learning.

Learning was measured with validated objective tools in half of the studies. Most studies utilized either the OSATS global evaluation tool or a derivative optimized for the given conditions. These derivatives included ABSITE (The American Board of Surgery In-Service Training Exam) [ 69 ]; OSA-LS (OSATS salpingectomy-specific form) [ 70 ]; ASSET (Arthroscopic Surgical Skill Evaluation Tool) [ 71 ]; SP-CAT (Simulation Participants-Communication Assessment Tool) [ 72 ]; UWOMSA (University of Western Ontario Microsurgical Acquisition/assessment instrument) [ 73 ], and the NRS (Numeric Rating Scale). Cognitive task analysis (CTA) was utilized in only two studies; in both, CTA improved scores in outcome testing [ 62 , 64 ]. CTA-based training was considered suitable for expediting learning but, based on our study cohort, is scarcely applied.

Secondary outcome measures: what kind of instructional designs do surgeons in training learn through?

The included studies in the present review employed various instructional methods, ranging from learning by doing to mentoring and teaching fellow residents. Learning by doing, including technical training (of specific procedures, knot tying, etc.) both in OR settings and in simulation (e.g., VR, robotic, bench model, human cadaver, and animal), was most typically applied as the primary instructional method ( n  = 26), especially in teaching technical and non-technical surgical skills to both surgical residents and medical students. Partly mixed results were reported regarding the effectiveness of this method for novice versus more advanced surgical trainees. For example, while Feins et al. showed that residents’ performance in component tasks and complete cardiac surgical procedures improved through simulation, Korte et al. reported that more novice surgeons benefitted from simulation more than those with greater experience [ 29 , 37 ]. Most skill curricula improved assessment scores, but surgical outcomes may remain unaffected by similar interventions, as shown by Jokinen et al. [ 43 ]. Learning through reflection ( n  = 13), in which training surgeons reflected on their own learning experiences and development, for example by participating in debriefings after operations or through video-based guided reflection, was also a commonly emphasized instructional method. Engaging in reflection was shown to be effective in promoting the learning of non-technical skills and attitudes. Trickey et al. showed that reflecting on positive learning experiences increased residents’ confidence and improved their communication skills, while Soucisse et al. and Naik et al. reported that self-reflection on surgical tasks performed improved technical skills as well [ 55 , 57 , 65 ]. Ranney et al. furthermore showed that residents who can reflect on their learning and thought processes are more in control and progress to autonomy more quickly [ 56 ].

Other commonly used instructional methods for enhancing surgical learning included modelling ( n  = 5), particularly observing more experienced surgeons performing surgical procedures; self-directed study ( n  = 6), such as preparing for surgery, reading, and self-study; and direct instruction ( n  = 7). The latter included participating in contact teaching and lectures, watching videos, and receiving practical advice from senior surgeons, and these were frequently used in teaching future surgeons. Raiche et al. showed that observing and modelling have their limitations, as residents have difficulty identifying where to focus their attention and understanding what the observation is teaching them [ 52 ]. To be effective, this form of instruction seems to call for explanation and support from senior surgeons. Naik et al. showed that receiving feedback during technical skill learning had a significant impact on residents’ performance in technical skills [ 57 ]. The results also emphasized the importance of pre-preparation for the OR for learning gains. For example, Logishetty et al. showed that residents preparing for arthroplasty with a CTA tool improved operative times, reduced mistakes, and were taught both decision-making and technical skills [ 64 ].

On the other hand, learning through gaming (including playing escape rooms, jeopardy, and other quiz games) ( n  = 4) and mentoring or teaching fellow training surgeons ( n  = 1) were seldom applied in the teaching of future surgeons. The empirical evidence nevertheless implies that such instructional methods can enhance surgical learning. Hancock et al., Chon et al., Kinio et al. and Amer et al. all showed that gaming improved surgical knowledge [ 40 , 42 , 54 , 61 ]. Zundel et al. found that peers are an extremely important source of instruction for training surgeons and that they both acquire knowledge and learn technical skills from each other every day [ 44 ]. Unfortunately, training surgeons receive little educational training in peer mentoring, and thus peers as a learning resource are not exploited to their full potential [ 44 ].

To sum up, the results indicate that multimodal instructional designs are commonly applied in studies exploring surgical learning and the means to enhance it. In just over half of the studies ( n  = 23), participants were engaged in a combination of two to three different instructional activities.

Discussion

Our results show that studies on surgical residents’ and medical students’ surgical learning focus heavily on learning surgical skills, particularly technical skills, and on acquiring knowledge about how to perform specific procedures or surgical tasks. This indicates that, at least implicitly, quite a few studies on surgical learning draw on a competence framework by combining the learning of surgical skills with knowledge acquisition. However, the scope of such studies typically remains very specific.

Surgical soft skills, such as communication and teamwork, learning skills, and adaptability, were rarely investigated. Interestingly, none of the studies addressed learning skills such as self- or co-regulated learning as part of surgical learning, although these are fundamental for flexible and adaptive professional behaviors and engagement in continuous professional development [ 74 , 75 ]. In addition, the studies included in the review rarely addressed the learning of attitudes such as self- or co-efficacy or resilience as part of surgical learning, even though self-efficacy has been shown to be one of the main predictors of learning outcomes and good performance [ 76 , 77 ]. This may imply that such skills and attitudes are not considered to be at the core of surgical learning or that they are expected to emerge as a by-product of other surgical learning activities. This can be considered a gap in the literature on surgical learning. The lack of knowledge on developing soft skills and attitudes among future surgeons also has practical implications, since they play a central role in patient safety and in a surgeon’s recovery from adverse events [ 78 , 79 ]. The importance of these non-technical skills is further supported by research from Galayia et al. and Gleason et al. [ 80 , 81 ]. Their studies highlight how factors such as workload, emotional intelligence, and resilience are crucial in managing burnout, with a clear correlation shown between these skills, job resources, and burnout rates among surgical trainees.

Surgeons’ lack of familiarity with non-technical skills and insufficient training for handling adverse events [ 82 , 83 ] exacerbate this issue. In our review, systematic approaches to address adverse events were notably absent. The fact that soft skills and attitudes are often overlooked in surgical competencies poses a challenge for both research on surgical learning and the development of informed surgical education.

Recently, high rates of burnout among surgical residents have been reported [84]. This concerning trend underscores the need for a holistic approach to surgical education. Addressing stressors in surgical education is not solely an individual concern but a systemic issue, necessitating substantial transformations in healthcare delivery and in how success is measured [85]. Fortunately, there has been a noticeable increase in publications emphasizing the acquisition of non-technical skills, reflecting a growing awareness of their importance in surgical training [86]. However, most of the literature on simulation-based surgical training still focuses predominantly on technical skills [86]. This ongoing emphasis suggests that while strides are being made towards a more comprehensive educational approach, current training paradigms remain significantly skewed towards technical proficiency.

The studies we reviewed applied various validated assessment tools. In this systematic review, the assessed learning outcomes focused mostly on technical skills and were most often evaluated with OSATS or a derivative of it. OSATS is a validated evaluation tool for technical skill assessment [87]. While it is the gold standard in evaluation, it has limitations: its use in clinical operating room settings is restricted, and many studies have therefore attempted to optimize and modify it for their specific needs [32, 88, 89]. An assessment tool must meet the following requirements: (1) the inter-rater reliability must exceed 0.90, and (2) this reliability should be based on the amount of agreement between the observers [90]. Based on Groenier et al.'s systematic review and meta-analysis, considerable caution is required when using assessment tools, especially for high-stakes decision-making [91]. Progressing from technical proficiency in training settings to clinical application poses many issues: surgeons who gain false self-confidence through inadequate testing may increase the risk of adverse events in clinical practice. Thus, competence testing protocols must be validated and evidence based. In addition to technical proficiency, surgical practice requires broad competence, and therefore robust, validated assessment tools for surgical soft skills, including learning and interpersonal skills and attitudes, are needed.
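The source does not specify which agreement statistic this threshold refers to. One common, purely illustrative choice is Cohen's kappa, which corrects raw percent agreement for chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement between two raters and p_e the agreement expected by chance. The minimal sketch below is an assumption-laden illustration (hypothetical rater scores, kappa chosen for simplicity), not the procedure used in the reviewed studies:

    # Minimal illustrative sketch: agreement-based inter-rater reliability for
    # two hypothetical raters scoring the same ten performances on a 1-5 scale.
    # Cohen's kappa is assumed here for simplicity; the reviewed studies may
    # rely on other statistics (e.g., intraclass correlation).
    from collections import Counter

    def percent_agreement(a, b):
        # Fraction of performances on which both raters gave the same score.
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b):
        # Chance-corrected agreement: (p_o - p_e) / (1 - p_e).
        n = len(a)
        p_o = percent_agreement(a, b)
        ca, cb = Counter(a), Counter(b)
        # Chance agreement: probability both raters assign category k, summed over k.
        p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
        return (p_o - p_e) / (1 - p_e)

    rater_1 = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]  # hypothetical observer 1
    rater_2 = [5, 4, 3, 3, 5, 2, 4, 3, 5, 5]  # hypothetical observer 2

    print(f"percent agreement = {percent_agreement(rater_1, rater_2):.2f}")  # 0.80
    print(f"Cohen's kappa     = {cohens_kappa(rater_1, rater_2):.2f}")       # ~0.73

In this toy example the raters agree on 8 of 10 scores (0.80 raw agreement), yet the chance-corrected value is only about 0.73, which would fall short of a 0.90 requirement; this illustrates why both the choice of statistic and the evidence behind a threshold matter for high-stakes assessment.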

The results showed that learning by doing, typically through simulation, and learning through guided reflection were the instructional methods most used to promote surgical residents' and medical students' surgical learning. Both methods effectively promote the acquisition of surgical skills and of knowledge about performing surgical tasks. For instance, simulation training has been shown to enhance fluency in the technical performance of specific surgical procedures, improve patient safety, and increase a surgeon's confidence [17, 51, 91]. While building confidence is essential for progression, self-reflection is needed to maintain awareness of one's competence. Hence, self-assessment is fundamental to surgical learning and can take many forms [92]. Modeling, particularly observing more experienced surgeons performing surgical procedures, self-directed study, and direct instruction were also commonly applied to enhance surgical learning. In turn, learning by gaming and mentoring or teaching fellow trainee surgeons were rarely applied in the studies as forms of instruction for cultivating surgical learning. This indicates that gaming and peer learning are still both under-studied and under-utilized resources for systematically promoting the learning of future surgeons. The quality and quantity of social interactions with peers, senior surgeons, and patients are fundamental for surgical learning, as the learning of all higher-order competencies proceeds from an inter-individual to an intra-individual sphere [93, 94, 95]. Moreover, since no surgeon works alone, the surgeon must be trained to work with and within a team. Accordingly, systematic use of peer learning would be essential not only for enhancing specific surgical knowledge and skills but also for cultivating much-needed surgical soft skills. Encouragingly, emerging qualitative evidence suggests that peer learning is increasingly being implemented in medical education [96]. This trend underscores the growing recognition of the value of collaborative learning environments, in which peers can share knowledge, challenge each other, and collectively develop the comprehensive skill set required in modern surgical practice.

Half of the studies we reviewed applied multimodal instruction to enhance surgical learning. This reflects a more modern understanding of learning, in which varied instructional methods should be used depending on the object of learning, the participants, and the context. It also implies that the traditional surgical teaching model of incremental responsibility, with increasing volume-based exposure during residency, will gradually be complemented by more varied, research-informed instructional practices. However, it is essential to recall that learning always depends on the learner's actions. This means that if we want to educate reflective practitioners who are good at solving complex problems [36], able to work in teams, and engaged in continuous professional development, instructional designs must systematically engage future surgeons in exactly such activities [97].

However, based on our review, many questions remain unanswered. The most fundamental of these concerns the transfer of surgical learning from the learning setting to other settings and across the competence ingredients. Firstly, further studies are needed on the extent to which, and how, surgical competencies, particularly those beyond the technical skills attained in simulation, transfer into clinical work. This is also connected with the optimal length of the interval between preparation and execution, which was not analyzed thoroughly in most articles; nor was the point at which skills begin to wane explicitly stated. Feins et al. observed a transient decline from the end of one session to the beginning of the next, which was subsequently recovered and improved upon [37]. Green et al. showed that technical skills attained during preparatory courses are maintained into residency without additional interventions, with similar results from Maertens et al. and Lee-Riddle et al., who found proficiency levels to be maintained for at least three months [41, 51, 60]. Secondly, based on our review, studies addressing the learning and training of surgical competencies were highly task-specific. Accordingly, further studies on the interrelation between competence ingredients, including surgical knowledge, technical and soft skills, and attitudes, are needed to promote the development of comprehensive surgical competencies among future surgeons. Thirdly, while simulation has proven essential for technical training, many operative interventions contain elements that cannot be simulated with current systems. Preparation for such interventions demands a multimodal approach, including preparatory discussions and visualization, until further methods become available.

Surgical residency is demanding in many respects, not least in terms of time. Among surgeons, mini-fellowships are uncommon as a learning method compared with traditional learning-by-doing approaches; although more effective methods are acknowledged, they are not applied because of time concerns [98]. As shown by Bohl et al., dedicated synthetic model training may alleviate time demands, allowing residents to recover better and thus improving preparedness for subsequent tasks [45]. Cognitive task analysis-based training is a valuable adjunct to the modern surgical curriculum, especially considering the global reduction in operating times and case volumes during training [99, 100]. CTA-based training improves procedural knowledge and technical performance [99]; however, it was applied in only a few of the studies analyzed here. Interestingly, CTA seems more effective in the later stages of surgical education, with less impact on medical students [101]. In addition, CTA-based training is suitable for electronic delivery via web-based tools and gaming applications, all of which are accessible and allow frequent revisiting without additional personnel or resource investments [102, 103]. Learning through gaming was likewise rarely applied in the teaching situations of the studies analyzed here. While serious gaming in medical education is beneficial, validating each application for its specific purpose is mandatory [104].

Postgraduate medical education has recently moved towards competency-based education in many countries, and entrustable professional activities (EPAs) are used as milestones in many competency frameworks [105]. Although EPAs have been applied to and gained rapid acceptance in postgraduate medical education, their potential within undergraduate education remains unverified [106]. In addition, while EPAs are becoming more prominent in surgical education, their widespread adoption and dissemination remain challenging [107]. We advocate for using all tools that collectively support a holistic approach to all competency components within surgical learning.

Our study is not without limitations. Although we attempted to acquire a comprehensive picture of the pedagogical landscape of surgery, we may have missed some reports. Furthermore, although the geographical coverage was acceptable, all the studies we identified came from Western countries; the actual global coverage of multimodal surgical learning therefore warrants further study. Another potential limitation is our decision to restrict the literature search to studies published from 2017 onwards. While this approach allowed us to focus on the most recent and relevant developments in surgical training and competence, it may have excluded earlier studies that could provide additional historical context or foundational insights into the evolution of surgical education practices. Finally, although we limited our study population to students and residents, learning continues throughout a surgeon's career and evolves with the learner's situation. Competence-based learning applies equally to all stages of surgical learning and should be incorporated irrespective of career stage.

Advancing proficiency through adequate competency assessment is crucial for effective surgical learning. As our review shows, contemporary surgical education is of high quality and continuously evolving. Most studies focused on objective assessments, yet measuring and assuring the transition from technical to clinical proficiency remain areas for further exploration. Defining competency and creating validated assessments are fundamental to lifelong surgical learning.

While operational skills, decision-making knowledge, and confidence in performing technical tasks are teachable, ultimate success in learning also hinges on the learner's attitude and willingness to learn. Therefore, it is vital to incorporate non-technical skills alongside technical aptitude testing and academic achievements when designing modern surgical curricula.

To optimize learning outcomes, learners must adopt an approach encompassing the full spectrum of surgical education. This means integrating technical and non-technical skills to create a learning environment that nurtures a broad range of competencies essential for comprehensive surgical expertise.

Availability of data and materials

The dataset supporting the conclusions of the current study is available from the corresponding author on reasonable request.

Lizzio A, Wilson K. Action learning in higher education: an investigation of its potential to develop professional capability. Stud High Educ. 2004;29(4):469–88.

Parry S. Just what is a competency? (and why should you care?). Training. 1996;35(6):58–64.

Eraut M. Developing professional knowledge and competence. London: Taylor & Francis Group; 1994.

Baartman L, Bastiaens T, Kirschner P, Van der Vleuten C. Evaluating assessment quality in competence-based education: a qualitative comparison of two frameworks. Educational Res Rev. 2007;2(2):114–29.

Aim F, Lonjon G, Hannouche D, Nizard R. Effectiveness of virtual reality training on orthopaedic surgery. Arthroscopy. 2016;32(1):224–32.

Alaker M, Wynn GR, Arulampalam T. Virtual reality training in laparoscopic surgery: a systematic review & meta-analysis. Int J Surg. 2016;29:85–94.

Zendejas B, Brydges R, Hamstra S, Cook D. State of the evidence on simulation-based training for laparoscopic surgery: a systematic review. Ann Surg. 2013;257(4):586–93.

Yokoyama S, Mizunuma K, Kurashima Y, Watanabe Y, Mizota T, Poudel S, et al. Evaluation methods and impact of simulation-based training in pediatric surgery: a systematic review. Pediatr Surg Int. 2019;35(10):1085–94.

Herrera-Aliaga E, Estrada LD. Trends and innovations of simulation for twenty first century medical education. Front Public Health. 2022;10: 619769.

Haluck RS, Krummel TM. Computers and virtual reality for surgical education in the 21st century. Arch Surg. 2000;135(7):786–92.

Reznick RK, MacRae H. Teaching surgical skills–changes in the wind. N Engl J Med. 2006;355(25):2664–9.

Scallon SE, Fairholm DJ, Cochrane DD, Taylor DC. Evaluation of the operating room as a surgical teaching venue. Can J Surg. 1992;35(2):173–6.

Reznick RK. Teaching and testing technical skills. Am J Surg. 1993;165(3):358–61.

Sutherland LM, Middleton PF, Anthony A, Hamdorf J, Cregan P, Scott D, et al. Surgical simulation: a systematic review. Ann Surg. 2006;243(3):291–300.

Tavakol M, Mohagheghi MA, Dennick R. Assessing the skills of surgical residents using simulation. J Surg Educ. 2008;65(2):77–83.

Young M, Lewis C, Kailavasan M, Satterthwaite L, Safir O, Tomlinson J, et al. A systematic review of methodological principles and delivery of surgical simulation bootcamps. Am J Surg. 2022;223(6):1079–87.

Dawe SR, Pena GN, Windsor JA, Broeders JA, Cregan PC, Maddern GJ. Systematic review of skills transfer after surgical simulation-based training. Br J Surg. 2014;101(9):1063–76.

Grober ED, Hamstra SJ, Wanzel KR, Reznick RK, Matsumoto ED, Sidhu RS, et al. The educational impact of bench model fidelity on the acquisition of technical skill: the use of clinically relevant outcome measures. Ann Surg. 2004;240(2):374–81.

Grantcharov TP, Kristiansen VB, Bendix J, Bardram L, Rosenberg J, Funch-Jensen P. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg. 2004;91(2):146–50.

Seymour NE, Gallagher AG, Roman SA, O’Brien MK, Bansal VK, Andersen DK, et al. Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg. 2002;236(4):458-463. discussion 63-4.

Grantcharov TP, Reznick RK. Teaching procedural skills. BMJ. 2008;336(7653):1129–31.

Seil R, Hoeltgen C, Thomazeau H, Anetzberger H, Becker R. Surgical simulation training should become a mandatory part of orthopaedic education. J Exp Orthop. 2022;9(1):22.

Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative “bench station” examination. Am J Surg. 1997;173(3):226–30.

Bjerrum F, Thomsen ASS, Nayahangan LJ, Konge L. Surgical simulation: current practices and future perspectives for technical skills training. Med Teach. 2018;40(7):668–75.

Kurashima Y, Hirano S. Systematic review of the implementation of simulation training in surgical residency curriculum. Surg Today. 2017;47(7):777–82.

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.

Babchenko O, Scott K, Jung S, Frank S, Elmaraghi S, Thiagarajasubramaniam S, et al. Resident perspectives on Effective Surgical training: incivility, confidence, and Mindset. J Surg Educ. 2020;77(5):1088–96.

Geoffrion R, Koenig NA, Sanaee MS, Lee T, Todd NJ. Optimizing resident operative self-confidence through competency-based surgical education modules: are we there yet? Int Urogynecol J. 2019;30(3):423–8.

Korte W, Merz C, Kirchhoff F, Heimeshoff J, Goecke T, Beckmann E, et al. Train early and with deliberate practice: simple coronary surgery simulation platform results in fast increase in technical surgical skills in residents and students. Interact Cardiovasc Thorac Surg. 2020;30(6):871–8.

Pandian TK, Buckarma EH, Mohan M, Gas BL, Naik ND, Abbott EF, et al. At home preresidency preparation for general surgery internship: a pilot study. J Surg Educ. 2017;74(6):952–7.

Charak G, Prigoff JG, Heneghan S, Cooper S, Weil H, Nowygrod R. Surgical education and the longitudinal model at the columbia-bassett program. J Surg Educ. 2020;77(4):854–8.

Harriman D, Singla R, Nguan C. The resident report card: A tool for operative feedback and evaluation of technical skills. J Surg Res. 2019;239:261–8.

Kumins NH, Qin VL, Driscoll EC, Morrow KL, Kashyap VS, Ning AY, et al. Computer-based video training is effective in teaching basic surgical skills to novices without faculty involvement using a self-directed, sequential and incremental program. Am J Surg. 2021;221(4):780–7.

Peshkepija AN, Basson MD, Davis AT, Ali M, Haan PS, Gupta RN, et al. Perioperative self-reflection among surgical residents. Am J Surg. 2017;214(3):564–70.

Cadieux DC, Mishra A, Goldszmidt MA. Before the scalpel: exploring surgical residents’ preoperative preparatory strategies. Med Educ. 2021;55(6):733–40.

Dressler JA, Ryder BA, Connolly M, Blais MD, Miner TJ, Harrington DT. Tweet-format writing is an effective tool for medical student reflection. J Surg Educ. 2018;75(5):1206–10.

Feins RH, Burkhart HM, Conte JV, Coore DN, Fann JI, Hicks GL Jr, et al. Simulation-based training in cardiac surgery. Ann Thorac Surg. 2017;103(1):312–21.

Patel P, Martimianakis MA, Zilbert NR, Mui C, Hammond Mobilio M, Kitto S, et al. Fake it ‘Til you make it: pressures to measure up in surgical training. Acad Med. 2018;93(5):769–74.

Acosta D, Castillo-Angeles M, Garces-Descovich A, Watkins AA, Gupta A, Critchlow JF, et al. Surgical practical skills learning curriculum: implementation and interns’ confidence perceptions. J Surg Educ. 2018;75(2):263–70.

Chon SH, Timmermann F, Dratsch T, Schuelper N, Plum P, Bertlh F, et al. Serious games in surgical medical education: a virtual emergency department as a tool for teaching clinical reasoning to medical students. JMIR Serious Games. 2019;7(1):e13028.

Green CA, Huang E, Zhao NW, O’Sullivan PS, Kim E, Chern H. Technical skill improvement with surgical preparatory courses: what advantages are reflected in residency? Am J Surg. 2018;216(1):155–9.

Hancock KJ, Klimberg VS, Williams TP, Tyler DS, Radhakrishnan R, Tran S. Surgical Jeopardy: play to learn. J Surg Res. 2021;257:9–14.

Jokinen E, Mikkola TS, Harkki P. Effect of structural training on surgical outcomes of residents’ first operative laparoscopy: a randomized controlled trial. Surg Endosc. 2019;33(11):3688–95.

Zundel S, Stocker M, Szavay P. Resident as teacher in pediatric surgery: Innovation is overdue in Central Europe. J Pediatr Surg. 2017;52(11):1859–65.

Bohl MA, McBryan S, Spear C, Pais D, Preul MC, Wilhelmi B, et al. Evaluation of a novel surgical skills training course: are cadavers still the gold standard for surgical skills training? World Neurosurg. 2019;127:63–71.

Lees MC, Zheng B, Daniels LM, White JS. Factors affecting the development of confidence among surgical trainees. J Surg Educ. 2019;76(3):674–83.

Harris DJ, Vine SJ, Wilson MR, McGrath JS, LeBel ME, Buckingham G. A randomised trial of observational learning from 2D and 3D models in robotically assisted surgery. Surg Endosc. 2018;32(11):4527–32.

Gabrysz-Forget F, Young M, Zahabi S, Nepomnayshy D, Nguyen LHP. Surgical errors happen, but are learners trained to recover from them? A survey of North American surgical residents and fellows. J Surg Educ. 2020;77(6):1552–61.

Klitsie PJ, Ten Brinke B, Timman R, Busschbach JJV, Theeuwes HP, Lange JF, et al. Training for endoscopic surgical procedures should be performed in the dissection room: a randomized study. Surg Endosc. 2017;31(4):1754–9.

Siroen KL, Ward CDW, Escoto A, Naish MD, Bureau Y, Patel RV, et al. Mastery learning - does the method of learning make a difference in skills acquisition for robotic surgery? Int J Med Robot. 2017;13(4):e1828.

Maertens H, Aggarwal R, Moreels N, Vermassen F, Van Herzeele I. A proficiency based stepwise endovascular curricular training (PROSPECT) program enhances operative performance in real life: a randomised controlled trial. Eur J Vasc Endovasc Surg. 2017;54(3):387–96.

Raiche I, Hamstra S, Gofton W, Balaa F, Dionne E. Cognitive challenges of junior residents attempting to learn surgical skills by observing procedures. Am J Surg. 2019;218(2):430–5.

LeCompte M, Stewart M, Harris T, Rives G, Guth C, Ehrenfeld J, et al. See one, do one, teach one: a randomized controlled study evaluating the benefit of autonomy in surgical education. Am J Surg. 2019;217(2):281–7.

Kinio AE, Dufresne L, Brandys T, Jetty P. Break out of the classroom: the use of escape rooms as an alternative teaching strategy in Surgical Education. J Surg Educ. 2019;76(1):134–9.

Soucisse ML, Boulva K, Sideris L, Drolet P, Morin M, Dube P. Video coaching as an efficient teaching method for surgical residents-a randomized controlled trial. J Surg Educ. 2017;74(2):365–71.

Ranney SE, Bedrin NG, Roberts NK, Hebert JC, Forgione PM, Nicholas CF. Maximizing learning in the operating room: residents’ perspectives. J Surg Res. 2021;263:5–13.

Naik ND, Abbott EF, Gas BL, Murphy BL, Farley DR, Cook DA. Personalized video feedback improves suturing skills of incoming general surgery trainees. Surgery. 2018;163(4):921–6.

Lesch H, Johnson E, Peters J, Cendan JC. VR Simulation leads to enhanced procedural confidence for Surgical trainees. J Surg Educ. 2020;77(1):213–8.

Fletcher BP, Gusic ME, Robinson WP. Simulation training incorporating a pulsatile carotid endarterectomy model results in increased procedure-specific knowledge, confidence, and comfort in post-graduate trainees. J Surg Educ. 2020;77(5):1289–99.

Lee-Riddle GS, Sigmon DF, Newton AD, Kelz RR, Dumon KR, Morris JB. Surgical Boot camps increases confidence for residents transitioning to senior responsibilities. J Surg Educ. 2021;78(3):987–90.

Amer K, Mur T, Amer K, Ilyas A. A mobile-based surgical simulation application: a comparative analysis of efficacy using a carpal tunnel release module. J Hand Surg. 2017;42(5):389.e1–389.e9.

Bhattacharyya R, Davidson DJ, Sugand K, Bartlett MJ, Bhattacharya R, Gupte CM. Knee arthroscopy simulation: a Randomized controlled trial evaluating the effectiveness of the imperial knee arthroscopy cognitive task analysis (IKACTA) Tool. J Bone Joint Surg Am. 2017;99(19):e103.

Levin A, Haq I. Pre-course cognitive training using a smartphone application in orthopaedic intern surgical skills “boot camps.” J Orthop. 2018;15:506–8.

Logishetty K, Gofton WT, Rudran B, Beaule PE, Gupte CM, Cobb JP. A multicenter randomized controlled trial evaluating the effectiveness of cognitive training for anterior approach total hip arthroplasty. J Bone Joint Surg Am. 2020;102(2):e7.

Trickey AW, Newcomb AB, Porrey M, Piscitani F, Wright J, Graling P, et al. Two-year experience implementing a curriculum to improve residents’ patient-centered communication skills. J Surg Educ. 2017;74(6):e124–32.

Grant AL, Temple-Oberle C. Utility of a validated rating scale for self-assessment in microsurgical training. J Surg Educ. 2017;74(2):360–4.

Quick JA, Kudav V, Doty J, Crane M, Bukoski AD, Bennett BJ, et al. Surgical resident technical skill self-evaluation: increased precision with training progression. J Surg Res. 2017;218:144–9.

Jethwa AR, Perdoni CJ, Kelly EA, Yueh B, Levine SC, Adams ME. Randomized controlled pilot study of video self-assessment for resident mastoidectomy training. OTO Open. 2018;2(2):2473974X18770417.

Miller AT, Swain GW, Widmar M, Divino CM. How important are American board of surgery in-training examination scores when applying for fellowships? J Surg Educ. 2010;67(3):149–51.

Larsen CR, Grantcharov T, Schouenborg L, Ottosen C, Soerensen JL, Ottesen B. Objective assessment of surgical competence in gynaecological laparoscopy: development and validation of a procedure-specific rating scale. BJOG. 2008;115(7):908–16.

Koehler RJ, Amsdell S, Arendt EA, Bisson LJ, Braman JP, Butler A, et al. The arthroscopic Surgical skill evaluation Tool (ASSET). Am J Sports Med. 2013;41(6):1229–37.

Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the communication assessment tool. Patient Educ Couns. 2007;67(3):333–42.

Dumestre D, Yeung JK, Temple-Oberle C. Evidence-based microsurgical skills acquisition series part 2: validated assessment instruments–a systematic review. J Surg Educ. 2015;72(1):80–9.

Schunk D, Greene J. Handbook of self-regulation of learning and performance. London: Routledge / Taylor & Francis Group; 2018.

Hadwin A, Järvelä D, Miller M. Self-regulated, coregulated and socially shared regulation of learning. In: Zimmerman B, Schunk D, editors. Handbook of self-regulation of learning and performance. New York, NY: Routledge; 2011. p. 65–84.

Zimmerman BJ. Self-efficacy: an essential motive to learn. Contemp Educ Psychol. 2000;25(1):82–91.

Jackson JW. Enhancing self-efficacy and learning performance. J Experimental Educ. 2002;70(3):243–54.

Dedy NJ, Bonrath EM, Zevin B, Grantcharov TP. Teaching nontechnical skills in surgical residency: a systematic review of current approaches and outcomes. Surgery. 2013;154(5):1000–8.

Srinivasa S, Gurney J, Koea J. Potential consequences of patient complications for Surgeon Well-being: a systematic review. JAMA Surg. 2019;154(5):451–7.

Galayia R, Kinross J, Arulampalam T. Factors associated with burnout syndrome in surgeons: a systematic review. Ann R Coll Surg Engl. 2020;102:401–7.

Gleason F, Baker SJ, Wood T, Wood L, Hollis RH, Chu DI, Lindeman B. Emotional Intelligence and Burnout in Surgical residents: a 5-Year study. J Surg Educ. 2020;77(6):e63–70.

Ounounou E, Aydin A, Brunckhorst O, Khan MS, Dasgupta P, Ahmed K. Nontechnical skills in surgery: a systematic review of current training modalities. J Surg Educ. 2019;76(1):14–24.

Turner K, Bolderston H, Thomas K, Greville-Harris M, Withers C, McDougall S. Impact of adverse events on surgeons. Br J Surg. 2022;109(4):308–10.

Hu Y-Y, Ellis RJ, Hewitt DB, Yang AD, Cheung EO, Moskowitz JT, et al. Discrimination, abuse, harassment, and burnout in surgical residency training. N Engl J Med. 2019;381(18):1741–52.

Hartzband P, Groopman J. Physician burnout, interrupted. N Engl J Med. 2020;382(26):2485–7.

Rosendal AA, Sloth SB, Rölfing JD, Bie M, Jensen RD. Technical, non-technical, or both? A scoping review of skills in simulation-based surgical training. J Surg Educ. 2023;80(5):731–49.

Martin JA, Regehr G, Reznick R, MacRae H, Murnaghan J, Hutchison C, et al. Objective structured assessment of technical skill (OSATS) for surgical residents. Br J Surg. 1997;84(2):273–8.

Ahmed K, Miskovic D, Darzi A, Athanasiou T, Hanna GB. Observational tools for assessment of procedural skills: a systematic review. Am J Surg. 2011;202(4):469-80 e6.

van Hove PD, Tuijthof GJ, Verdaasdonk EG, Stassen LP, Dankelman J. Objective assessment of technical surgical skills. Br J Surg. 2010;97(7):972–87.

Groenier M, Brummer L, Bunting BP, Gallagher AG. Reliability of observational assessment methods for outcome-based assessment of surgical skill: systematic review and Meta-analyses. J Surg Educ. 2020;77(1):189–201.

Vanderbilt AA, Grover AC, Pastis NJ, Feldman M, Granados DD, Murithi LK, et al. Randomized controlled trials: a systematic review of laparoscopic surgery and simulation-based training. Glob J Health Sci. 2014;7(2):310–27.

Nayar SK, Musto L, Baruah G, Fernandes R, Bharathan R. Self-assessment of surgical skills: a systematic review. J Surg Educ. 2020;77(2):348–61.

Lave J, Wenger E. Situated learning: legitimate peripheral participation. Cambridge University Press; 1991.

Bruner JS. The process of education. Cambridge, MA: Harvard University Press; 1960.

Vygotsky LS. Mind in society: the development of higher psychological processes. Cole M, John-Steiner V, Scribner S, Souberman E, editors. Cambridge: Harvard University Press; 1978.

Burgess A, van Diggele C, Roberts C, Mellis C. Introduction to the peer teacher training in health professional education supplement series. BMC Med Educ. 2020;20(Suppl 2):454.

Achenbach J, Schafer T. Modelling the effect of age, semester of study and its interaction on self-reflection of competencies in medical students. Int J Environ Res Public Health. 2022;19(15):9579.

Jaffe TA, Hasday SJ, Knol M, Pradarelli J, Pavuluri Quamme SR, Greenberg CC, et al. Strategies for new skill acquisition by practicing surgeons. J Surg Educ. 2018;75(4):928–34.

Schwartz SI, Galante J, Kaji A, Dolich M, Easter D, Melcher ML, et al. Effect of the 16-hour work limit on general surgery intern operative case volume: a multi-institutional study. JAMA Surg. 2013;148(9):829–33.

Tofel-Grehl C, Feldon D. Cognitive task analysis-based training: a meta-analysis of studies. J Cogn Eng Decis Making. 2013;7:293–304.

Edwards TC, Coombs AW, Szyszka B, Logishetty K, Cobb JP. Cognitive task analysis-based training in surgery: a meta-analysis. BJS Open. 2021;5(6):zrab122.

Maertens H, Madani A, Landry T, Vermassen F, Van Herzeele I, Aggarwal R. Systematic review of e-learning for surgical training. Br J Surg. 2016;103(11):1428–37.

Gentry SV, Gauthier A, L’Estrade Ehrstrom B, Wortley D, Lilienthal A, Tudor Car L, et al. Serious gaming and Gamification Education in Health professions: systematic review. J Med Internet Res. 2019;21(3): e12994.

Graafland M, Schraagen JM, Schijven MP. Systematic review of serious games for medical education and surgical skills training. Br J Surg. 2012;99(10):1322–30.

LoGiudice AB, Sibbald M, Monteiro S, Sherbino J, Keuhl A, Norman GR, et al. Intrinsic or invisible? An audit of CanMEDS roles in Entrustable Professional activities. Acad Med. 2022;97:1213–8.

Bramley AL, McKenna L. Entrustable professional activities in entry-level health professional education: a scoping review. Med Educ. 2021;55:1011–32.

Liu L, Jiang Z, Qi X, Xie A, Wu H, Cheng H, et al. An update on current EPAs in graduate medical education: a scoping review. Med Educ Online. 2021;26:1981198.

Acknowledgements

Not applicable.

Disclosures

This research did not receive any specific grants from funding agencies in the public, commercial, or not-for-profit sectors.

Consent to publish

Not applicable due to the nature of the study.

Funding

Open access funding provided by Uppsala University.

Author information

Authors and affiliations

Department of Pediatric Surgery, New Children’s Hospital, Helsinki University Hospital, Helsinki, Finland

Niklas Pakkasjärvi

Department of Pediatric Surgery, Section of Urology, University Children’s Hospital, Uppsala, Sweden

Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland

Henrika Anttila & Kirsi Pyhältö

Centre for Higher and Adult Education, Faculty of Education, Stellenbosch University, Stellenbosch, South Africa

Kirsi Pyhältö

Contributions

Conceptualization, N.P. & K.P.; methodology, N.P.; software, H.A.; validation, N.P., H.A. and K.P.; formal analysis, N.P., H.A.; investigation, N.P., H.A.; resources, H.A.; data curation, H.A.; writing—original draft preparation, N.P.; writing—review and editing, N.P., H.A., K.P.; visualization, N.P.; supervision, K.P.; project administration, K.P. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Niklas Pakkasjärvi .

Ethics declarations

Ethics approval and consent to participate

This systematic review did not involve any human participants or experimental interventions; therefore, ethical approval was not required. We adhered to PRISMA guidelines for methodology.

Consent for publication

Consent to participate was not applicable due to the nature of the study, which did not involve human participants or experimental interventions.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Pakkasjärvi, N., Anttila, H. & Pyhältö, K. What are the learning objectives in surgical training – a systematic literature review of the surgical competence framework. BMC Med Educ 24 , 119 (2024). https://doi.org/10.1186/s12909-024-05068-z

Received : 25 September 2023

Accepted : 17 January 2024

Published : 06 February 2024

DOI : https://doi.org/10.1186/s12909-024-05068-z

Keywords

  • Surgical competence
  • Surgical education
  • Systematic literature review

BMC Medical Education

ISSN: 1472-6920
