Levels of evidence in research
When carrying out a project, you might have noticed that, while searching for information, different levels of credibility seem to be given to different types of scientific results. For example, it is not the same to use a systematic review or an expert opinion as the basis for an argument. It’s almost common sense that the former will yield more accurate results than the latter, which ultimately derives from a personal opinion.
In the medical and health care area, for example, it is very important that professionals not only have access to information but also have instruments to determine which evidence is stronger and more trustworthy, building up the confidence to diagnose and treat their patients.
5 levels of evidence
With the increasing need of physicians – as well as scientists in other fields of study – to know from which kind of research they can expect the best clinical evidence, experts decided to rank this evidence to help them identify the best sources of information to answer their questions. The criteria for ranking evidence are based on the design, methodology, validity and applicability of the different types of studies. The outcome is called “levels of evidence” or the “levels of evidence hierarchy”. By organizing a well-defined hierarchy of evidence, academic experts were aiming to help scientists feel confident in using findings from high-ranked evidence in their own work or practice. For physicians, whose daily activity depends on available clinical evidence to support decision-making, this hierarchy helps them know which evidence to trust the most.
So, by now you know that research can be graded according to the evidential strength determined by different study designs. But how many grades are there? Which evidence should be high-ranked and low-ranked?
There are five levels of evidence in the hierarchy of evidence – from 1 (or, in some schemes, A) for strong, high-quality evidence down to 5 (or E) for evidence whose effectiveness is not established, as you can see in the pyramidal scheme below:
Level 1 (highest quality of evidence): High-quality randomized trial or prospective study; testing of previously developed diagnostic criteria on consecutive patients; sensible costs and alternatives; values obtained from many studies with multiway sensitivity analyses; systematic review of Level I RCTs and Level I studies.
Level 2: Lesser-quality RCT; prospective comparative study; retrospective study; untreated controls from an RCT; lesser-quality prospective study; development of diagnostic criteria on consecutive patients; sensible costs and alternatives; values obtained from limited studies with multiway sensitivity analyses; systematic review of Level II studies or Level I studies with inconsistent results.
Level 3: Case-control study (therapeutic and prognostic studies); retrospective comparative study; study of nonconsecutive patients without consistently applied reference “gold” standard; analyses based on limited alternatives and costs and poor estimates; systematic review of Level III studies.
Level 4: Case series; case-control study (diagnostic studies); poor reference standard; analyses with no sensitivity analyses.
Level 5 (lowest quality of evidence): Expert opinion.
By looking at the pyramid, you can roughly distinguish which type of research gives you the highest quality of evidence and which gives you the lowest. Basically, levels 1 and 2 contain filtered information – that means an author has gathered evidence from well-designed studies with credible results and has produced findings and conclusions appraised by renowned experts, who consider them valid and strong enough to serve researchers and scientists. Levels 3, 4 and 5 include evidence coming from unfiltered information. Because this evidence hasn’t been appraised by experts, it might be questionable, but not necessarily false or wrong.
Examples of levels of evidence
As you move up the pyramid, you will find higher-quality evidence. However, you will notice there is also less research available. So, if no resources are available to you at the top, you may have to move down the pyramid to find the answers you are looking for.
- Systematic Reviews: Exhaustive summaries of all the existing literature on a certain topic. When drafting a systematic review, authors are expected to deliver a critical assessment and evaluation of this literature rather than a simple list. Researchers who produce systematic reviews have their own criteria to locate, assemble and evaluate a body of literature.
- Meta-Analysis: Uses quantitative methods to synthesize and combine results from independent studies. Normally, they function as an overview of clinical trials. Read more: Systematic review vs meta-analysis.
- Critically Appraised Topic: Evaluation of several research studies.
- Critically Appraised Article: Evaluation of individual research studies.
- Randomized Controlled Trial: A clinical trial in which participants or subjects (people who agree to participate in the trial) are randomly divided into groups. A placebo (control) is given to one of the groups, whereas the other is treated with the medication being tested. This kind of research is key to learning about a treatment’s effectiveness.
- Cohort studies: A longitudinal study design in which one or more samples called cohorts (individuals sharing a defining characteristic, like a disease) are exposed to an event, monitored prospectively, and evaluated at predefined time intervals. They are commonly used to correlate diseases with risk factors and health outcomes.
- Case-Control Study: Selects patients with an outcome of interest (cases) and looks for an exposure factor of interest.
- Background Information/Expert Opinion: Information you can find in encyclopedias, textbooks and handbooks. This kind of evidence just serves as a good foundation for further research – or clinical practice – for it is usually too generalized.
Of course, it is recommended to use level A and/or level 1 evidence for more accurate results, but that doesn’t mean that all other study designs are unhelpful or useless. It all depends on your research question. Focusing once more on the healthcare and medical field, see how different study designs fit particular questions that are not necessarily answered by studies at the tip of the pyramid:
- Questions concerning therapy: “Which is the most efficient treatment for my patient?” >> RCT | Cohort studies | Case-Control | Case Studies
- Questions concerning diagnosis: “Which diagnostic method should I use?” >> Prospective blind comparison
- Questions concerning prognosis: “How will the patient’s disease develop over time?” >> Cohort Studies | Case Studies
- Questions concerning etiology: “What are the causes for this disease?” >> RCT | Cohort Studies | Case Studies
- Questions concerning costs: “What is the most cost-effective but safe option for my patient?” >> Economic evaluation
- Questions concerning meaning/quality of life: “What’s the quality of life of my patient going to be like?” >> Qualitative study
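The mapping above can be sketched as a simple lookup table. This is only an illustration; the dictionary keys and the fallback value are hypothetical names chosen to mirror the list of question types:

```python
# Hypothetical lookup table pairing each clinical question type with the
# study designs suggested above; keys and design names are illustrative only.
BEST_DESIGNS = {
    "therapy": ["RCT", "Cohort study", "Case-control study", "Case study"],
    "diagnosis": ["Prospective blind comparison to a gold standard"],
    "prognosis": ["Cohort study", "Case study"],
    "etiology": ["RCT", "Cohort study", "Case study"],
    "cost": ["Economic evaluation"],
    "quality_of_life": ["Qualitative study"],
}

def suggest_designs(question_type):
    """Return the study designs suited to a given clinical question type."""
    return BEST_DESIGNS.get(question_type, ["No mapping defined"])

print(suggest_designs("prognosis"))  # -> ['Cohort study', 'Case study']
```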
Purdue Online Writing Lab (Purdue OWL), College of Liberal Arts
Writing a Literature Review
This page is brought to you by the OWL at Purdue University. Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University.
A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research (scholarship) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.
Where, when, and why would I write a lit review?
There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.
A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.
Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.
What are the parts of a lit review?
Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.
Introduction:
- An introductory paragraph that explains what your working topic and thesis are
- A forecast of key topics or texts that will appear in the review
- Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections of an article or research paper)
Body:
- Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
- Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
- Critically evaluate: Mention the strengths and weaknesses of your sources
- Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts
Conclusion:
- Summarize the key findings you have taken from the literature and emphasize their significance
- Connect it back to your primary research question
How should I organize my lit review?
Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:
- Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
- Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
- Methodological : If your sources come from different fields or use a variety of research methods, you can group and compare the results that emerge from each approach, for example:
- Qualitative versus quantitative research
- Empirical versus theoretical scholarship
- Research divided by sociological, historical, or cultural sources
- Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.
What are some strategies or tips I can use while writing my lit review?
Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources.
As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.
Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:
- It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
- Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
- Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).
The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.
Evidence Based Practice: Study Designs & Evidence Levels
This section reviews some research definitions and provides commonly used evidence tables.
Levels of Evidence: Johns Hopkins Nursing Evidence-Based Practice
Dang, D., & Dearholt, S. (2017). Johns Hopkins nursing evidence-based practice: model and guidelines. 3rd ed. Indianapolis, IN: Sigma Theta Tau International. www.hopkinsmedicine.org/evidence-based-practice/ijhn_2017_ebp.html
Identifying the Study Design
The type of study can generally be figured out by looking at three issues:
Q1. What was the aim of the study?
- To simply describe a population (PO questions) = descriptive
- To quantify the relationship between factors (PICO questions) = analytic.
Q2. If analytic, was the intervention randomly allocated?
- Yes? = RCT
- No? = Observational study
For an observational study, the main type will then depend on the timing of the measurement of outcome, so our third question is:
Q3. When were the outcomes determined?
- Some time after the exposure or intervention? = Cohort study ('prospective study')
- At the same time as the exposure or intervention? = Cross sectional study or survey
- Before the exposure was determined? = Case-control study ('retrospective study' based on recall of the exposure)
Centre for Evidence-Based Medicine (CEBM)
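The three questions above amount to a small decision procedure, which can be sketched as follows. The function name and the argument values are hypothetical, chosen only to mirror Q1-Q3:

```python
def classify_study(aim, randomized=None, outcome_timing=None):
    """Classify a study design using the three questions (Q1-Q3) above.

    aim: "describe" (PO question) or "quantify" (PICO question)
    randomized: was the intervention randomly allocated? (analytic studies only)
    outcome_timing: "after", "same_time", or "before" the exposure
    """
    if aim == "describe":                      # Q1: aim of the study
        return "Descriptive study"
    if randomized:                             # Q2: random allocation?
        return "Randomized controlled trial (RCT)"
    return {                                   # Q3: when were outcomes determined?
        "after": "Cohort study (prospective)",
        "same_time": "Cross-sectional study or survey",
        "before": "Case-control study (retrospective)",
    }[outcome_timing]

print(classify_study("quantify", randomized=False, outcome_timing="before"))
# -> Case-control study (retrospective)
```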
Definitions of Study Types
Case report / Case series: A report on a series of patients with an outcome of interest. No control group is involved.
Case control study: A study which involves identifying patients who have the outcome of interest (cases) and patients without the same outcome (controls), and looking back to see if they had the exposure of interest.
Cohort study: Involves identification of two groups (cohorts) of patients, one which received the exposure of interest, and one which did not, and following these cohorts forward for the outcome of interest.
Randomized controlled clinical trial: Participants are randomly allocated into an experimental group or a control group and followed over time for the variables/outcomes of interest.
Systematic review: A summary of the medical literature that uses explicit methods to perform a comprehensive literature search and critical appraisal of individual studies and that uses appropriate statistical techniques to combine these valid studies.
Meta-analysis: A systematic review that uses quantitative methods to synthesize and summarize the results.
Meta-synthesis: A systematic approach to the analysis of data across qualitative studies. -- EJ Erwin, MJ Brotherson, JA Summers. Understanding Qualitative Meta-synthesis. Issues and Opportunities in Early Childhood Intervention Research, 33(3), 186-200.
Cross sectional study: The observation of a defined population at a single point in time or time interval. Exposure and outcome are determined simultaneously.
Prospective, blind comparison to a gold standard: Studies that show the efficacy of a diagnostic test are also called prospective, blind comparison to a gold standard study. This is a controlled trial that looks at patients with varying degrees of an illness and administers both diagnostic tests — the test under investigation and the “gold standard” test — to all of the patients in the study group. The sensitivity and specificity of the new test are compared to that of the gold standard to determine potential usefulness.
Qualitative research: Answers a wide variety of questions related to human responses to actual or potential health problems. The purpose of qualitative research is to describe, explore and explain the health-related phenomena being studied.
Retrospective cohort: Follows the same direction of inquiry as a cohort study. Subjects begin with the presence or absence of an exposure or risk factor and are followed until the outcome of interest is observed. However, this study design uses information that has been collected in the past and kept in files or databases. Patients are identified for exposure or non-exposure, and the data are followed forward to an effect or outcome of interest.
(Adapted from CEBM's Glossary and Duke Libraries' Intro to Evidence-Based Practice )
American Association of Critical Care Nursing-- Levels of Evidence
Level A Meta-analysis of multiple controlled studies or meta-synthesis of qualitative studies with results that consistently support a specific action, intervention or treatment
Level B Well designed controlled studies, both randomized and nonrandomized, with results that consistently support a specific action, intervention, or treatment
Level C Qualitative studies, descriptive or correlational studies, integrative reviews, systematic reviews, or randomized controlled trials with inconsistent results
Level D Peer-reviewed professional organizational standards, with clinical studies to support recommendations
Level E Theory-based evidence from expert opinion or multiple case reports
Level M Manufacturers’ recommendations only
Armola RR, Bourgault AM, Halm MA, Board RM, Bucher L, Harrington L, Heafey CA, Lee R, Shellner PK, Medina J. (2009). AACN levels of evidence: What's new? Critical Care Nurse, 29(4), 70-73.
Flow Chart of Study Designs
Figure: Flow chart of different types of studies (Q1, Q2, and Q3 refer to the three questions in the "Identifying the Study Design" section above). Centre for Evidence-Based Medicine (CEBM)
What is a "Confidence Interval (CI)"?
A confidence interval (CI) can be used to show within which interval the population's mean score will probably fall. Most researchers use a CI of 95%. By using a CI of 95%, researchers accept there is a 5% chance they have made the wrong decision in treatment. Therefore, if 0 falls within the agreed CI, it can be concluded that there is no significant difference between the two treatments. When 0 lies outside the CI, researchers will conclude that there is a statistically significant difference.
Halfens, R. G., & Meijers, J. M. (2013). Back to basics: an introduction to statistics. Journal Of Wound Care , 22 (5), 248-251.
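To make the idea concrete, here is a minimal sketch of a 95% confidence interval for the difference between two treatment means, using the normal approximation (z = 1.96). The function name and the data are hypothetical, invented only for illustration:

```python
import math
import statistics

def mean_difference_ci(group_a, group_b, z=1.96):
    """Confidence interval for the difference in means of two
    independent samples, using the normal approximation.
    z = 1.96 corresponds to a 95% CI."""
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    # Standard error of the difference between two sample means
    se = math.sqrt(
        statistics.variance(group_a) / len(group_a)
        + statistics.variance(group_b) / len(group_b)
    )
    return diff - z * se, diff + z * se

# Hypothetical healing times (in days) under two wound treatments
treatment_a = [12, 14, 11, 15, 13, 12, 14, 13]
treatment_b = [15, 16, 14, 17, 15, 16, 14, 15]

low, high = mean_difference_ci(treatment_a, treatment_b)

# 0 lies outside this interval, so we would conclude the two
# treatments differ significantly at the 5% level.
print(round(low, 2), round(high, 2))  # -> -3.41 -1.09
```

If 0 had fallen inside the interval, the conclusion would instead be that no significant difference between the two treatments was found.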
What is a "p-value?"
Categorical (nominal) tests: This category of tests can be used when the dependent, or outcome, variable is categorical (nominal), such as the difference between two wound treatments and the healing of the wound (healed versus nonhealed). One of the most commonly used tests in this category is the chi-squared test (χ2). The chi-squared statistic is calculated by comparing the differences between the observed and the expected frequencies. The expected frequencies are the frequencies that would be found if there were no relationship between the two variables.
Based on the calculated χ2 statistic, a probability (p-value) is given, which indicates how likely a difference this large would be if the two groups did not truly differ. Researchers are often satisfied if the probability is 5% or less: for p < 0.05 they conclude there is a significant difference, whereas a p-value ≥ 0.05 suggests that there is no significant difference between the groups.
Halfens, R. G., & Meijers, J. M. (2013). Back to basics: An introduction to statistics. Journal of Wound Care, 22(5), 248-251.
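As an illustration, the chi-squared statistic for a hypothetical 2x2 table (treatment versus healed/not healed) can be computed directly from the observed and expected frequencies. All counts here are made up; with one degree of freedom, χ2 > 3.841 corresponds to p < 0.05:

```python
def chi_squared_2x2(table):
    """Chi-squared statistic for a 2x2 table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected frequency if treatment and outcome were unrelated
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows are treatments, columns are healed / not healed
observed = [
    [30, 10],  # treatment A
    [18, 22],  # treatment B
]

chi2 = chi_squared_2x2(observed)
CRITICAL_5PCT_DF1 = 3.841  # chi-squared critical value for df = 1, alpha = 0.05
significant = chi2 > CRITICAL_5PCT_DF1  # True here means p < 0.05
print(chi2, significant)  # -> 7.5 True
```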
- Last Updated: Aug 30, 2023 9:31 AM
- URL: https://mcw.libguides.com/evidencebasedpractice
Levels of Evidence
Resources That Rate The Evidence
- ACP Smart Medicine
- Agency for Healthcare Research and Quality
- Clinical Evidence
- Cochrane Library
- Health Services/Technology Assessment Texts (HSTAT)
- PDQ® Cancer Information Summaries from NCI
- Trip Database
Critically Appraised Individual Articles
- Evidence-Based Complementary and Alternative Medicine
- Evidence-Based Dentistry
- Evidence-Based Nursing
- Journal of Evidence-Based Dental Practice
Grades of Recommendation
Critically-appraised individual articles and synopses include:
- Level I: Evidence from a systematic review of all relevant randomized controlled trials.
- Level II: Evidence from a meta-analysis of all relevant randomized controlled trials.
- Level III: Evidence from evidence summaries developed from systematic reviews
- Level IV: Evidence from guidelines developed from systematic reviews
- Level V: Evidence from meta-syntheses of a group of descriptive or qualitative studies
- Level VI: Evidence from evidence summaries of individual studies
- Level VII: Evidence from one properly designed randomized controlled trial
- Level VIII: Evidence from nonrandomized controlled clinical trials, nonrandomized clinical trials, cohort studies, case series, case reports, and individual qualitative studies.
- Level IX: Evidence from opinion of authorities and/or reports of expert committees
Two things to remember:
1. Studies in which randomization occurs represent a higher level of evidence than those in which subject selection is not random.
2. Controlled studies carry a higher level of evidence than those in which control groups are not used.
Strength of Recommendation Taxonomy (SORT)
- SORT: The American Academy of Family Physicians uses the Strength of Recommendation Taxonomy (SORT) to label key recommendations in clinical review articles. In general, only key recommendations are given a Strength-of-Recommendation grade. Grades are assigned on the basis of the quality and consistency of available evidence.
- Last Updated: Mar 14, 2023 2:49 PM
- URL: https://guides.library.stonybrook.edu/evidence-based-medicine
Except where otherwise noted, this work by SBU Libraries is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License .
Evidence-Based Practice (EBP)
What Is a Literature Review?
A literature review is an integrated analysis of scholarly writings that are related directly to your research question. Put simply, it’s a critical evaluation of what’s already been written on a particular topic. It represents the literature that provides background information on your topic and shows a connection between those writings and your research question.
A literature review may be a stand-alone work or the introduction to a larger research paper, depending on the assignment. Rely heavily on the guidelines your instructor has given you.
What a Literature Review Is Not:
- A list or summary of sources
- An annotated bibliography
- A grouping of broad, unrelated sources
- A compilation of everything that has been written on a particular topic
- Literary criticism (think English) or a book review
Why Literature Reviews Are Important
- They explain the background of research on a topic
- They demonstrate why a topic is significant to a subject area
- They discover relationships between research studies/ideas
- They identify major themes, concepts, and researchers on a topic
- They identify critical gaps and points of disagreement
- They discuss further research questions that logically come out of the previous studies
To Learn More about Conducting and Writing a Lit Review . . .
Monash University (in Australia) has created several extremely helpful, interactive tutorials.
- The Stand-Alone Literature Review, https://www.monash.edu/rlo/assignment-samples/science/stand-alone-literature-review
- Researching for Your Literature Review, https://guides.lib.monash.edu/researching-for-your-literature-review/home
- Writing a Literature Review, https://www.monash.edu/rlo/graduate-research-writing/write-the-thesis/writing-a-literature-review
Keep Track of Your Sources!
A citation manager can be a helpful way to work with large numbers of citations. See UMSL Libraries' Citing Sources guide for more information. Personally, I highly recommend Zotero: it’s free, easy to use, and versatile. If you need help getting started with Zotero or one of the other citation managers, please contact a librarian.
- Last Updated: Jun 5, 2023 2:24 PM
- URL: https://libguides.umsl.edu/ebp
How to Write a Literature Review | Guide, Examples, & Templates
Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.
What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic .
There are five key steps to writing a literature review:
- Search for relevant literature
- Evaluate sources
- Identify themes, debates, and gaps
- Outline the structure
- Write your literature review
A good literature review doesn’t just summarize sources—it analyzes, synthesizes , and critically evaluates to give a clear picture of the state of knowledge on the subject.
When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:
- Demonstrate your familiarity with the topic and its scholarly context
- Develop a theoretical framework and methodology for your research
- Position your work in relation to other researchers and theorists
- Show how your research addresses a gap or contributes to a debate
- Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.
Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.
Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.
- Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (a theoretical literature review about the development of economic migration theory from the 1950s to today)
- Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (a methodological literature review about interdisciplinary knowledge acquisition and production)
- Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (a thematic literature review about the effects of technology on language acquisition)
- Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (a chronological literature review about how the concept of listening skills has changed over time)
You can also check out our templates with literature review examples and sample outlines at the links below.
Before you begin searching for literature, you need a clearly defined topic .
If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions .
Make a list of keywords
Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.
For example, for a question about social media and body image among teenagers, your keyword groups might be:
- Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
- Body image, self-perception, self-esteem, mental health
- Generation Z, teenagers, adolescents, youth
Search for relevant sources
Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:
- Your university’s library catalogue
- Google Scholar
- Project Muse (humanities and social sciences)
- Medline (life sciences and biomedicine)
- EconLit (economics)
- Inspec (physics, engineering and computer science)
You can also use Boolean operators (AND, OR, NOT) to help narrow down your search.
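The keyword groups described above can be turned into a search string mechanically: synonyms within one concept are joined with OR, and the concept groups themselves are joined with AND. A minimal sketch in Python (the helper name and the keyword groups are illustrative, not part of any database API):

```python
def build_query(concept_groups):
    """Combine synonym groups into a Boolean search string.

    Synonyms within a concept are joined with OR; the concept
    groups themselves are joined with AND.
    """
    clauses = []
    for synonyms in concept_groups:
        # Quote each term so multi-word phrases stay intact
        clauses.append("(" + " OR ".join(f'"{term}"' for term in synonyms) + ")")
    return " AND ".join(clauses)

# Illustrative keyword groups for a social media / body image question
groups = [
    ["social media", "Instagram", "TikTok"],
    ["body image", "self-esteem"],
    ["adolescents", "Generation Z"],
]
print(build_query(groups))
# ("social media" OR "Instagram" OR "TikTok") AND ("body image" OR "self-esteem") AND ("adolescents" OR "Generation Z")
```

Most databases also support phrase quoting and truncation (e.g. adolescen*), so treat a generated string like this as a starting point to refine in each database's own advanced-search interface.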
Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.
You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.
For each publication, ask yourself:
- What question or problem is the author addressing?
- What are the key concepts and how are they defined?
- What are the key theories, models, and methods?
- Does the research use established frameworks or take an innovative approach?
- What are the results and conclusions of the study?
- How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
- What are the strengths and weaknesses of the research?
Make sure the sources you use are credible , and make sure you read any landmark studies and major theories in your field of research.
You can use our template to summarize and evaluate sources you’re thinking about using.
Take notes and cite your sources
As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.
It is important to keep track of your sources with citations to avoid plagiarism . It can be helpful to make an annotated bibliography , where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.
To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:
- Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
- Themes: what questions or concepts recur across the literature?
- Debates, conflicts and contradictions: where do sources disagree?
- Pivotal publications: are there any influential theories or studies that changed the direction of the field?
- Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?
This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.
For example, in the literature on social media and body image, you might find that:
- Most research has focused on young women.
- There is increasing interest in the visual aspects of social media.
- But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat; this is a gap that you could address in your own research.
There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).
Chronological
The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.
Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.
Thematic
If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.
For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.
Methodological
If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:
- Look at what results have emerged in qualitative versus quantitative research
- Discuss how the topic has been approached by empirical versus theoretical scholarship
- Divide the literature into sociological, historical, and cultural sources
Theoretical
A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.
You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.
Like any other academic text , your literature review should have an introduction , a main body, and a conclusion . What you include in each depends on the objective of your literature review.
The introduction should clearly establish the focus and purpose of the literature review.
Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.
As you write, you can follow these tips:
- Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
- Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
- Critically evaluate: mention the strengths and weaknesses of your sources
- Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts
In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.
When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.
This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.
Scribbr slides are free to use, customize, and distribute for educational purposes.
If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.
- Sampling methods
- Simple random sampling
- Stratified sampling
- Cluster sampling
- Likert scales
- Null hypothesis
- Statistical power
- Probability distribution
- Effect size
- Poisson distribution
- Optimism bias
- Cognitive bias
- Implicit bias
- Hawthorne effect
- Anchoring bias
- Explicit bias
A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .
It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.
There are several reasons to conduct a literature review at the beginning of a research project:
- To familiarize yourself with the current state of knowledge on your topic
- To ensure that you’re not just repeating what others have already done
- To identify gaps in knowledge and unresolved problems that your research can address
- To develop your theoretical framework and methodology
- To provide an overview of the key findings and debates on the topic
Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.
The literature review usually comes near the beginning of your thesis or dissertation . After the introduction , it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology .
A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts , with an introduction , a main body, and a conclusion .
An annotated bibliography is a list of source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a paper .
Cite this Scribbr article
McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved November 7, 2023, from https://www.scribbr.com/dissertation/literature-review/
Harvey Cushing/John Hay Whitney Medical Library
YSN Doctoral Programs: Steps in Conducting a Literature Review
What is a literature review?
A literature review is an integrated analysis, not just a summary, of scholarly writings and other relevant evidence related directly to your research question. That is, it represents a synthesis of the evidence that provides background information on your topic and shows an association between the evidence and your research question.
A literature review may be a stand-alone work or the introduction to a larger research paper, depending on the assignment. Rely heavily on the guidelines your instructor has given you.
Why is it important?
A literature review is important because it:
- Explains the background of research on a topic.
- Demonstrates why a topic is significant to a subject area.
- Discovers relationships between research studies/ideas.
- Identifies major themes, concepts, and researchers on a topic.
- Identifies critical gaps and points of disagreement.
- Discusses further research questions that logically come out of the previous studies.
APA7 Style resources
APA Style Blog - for those harder-to-find answers
1. Choose a topic. Define your research question.
Your literature review should be guided by your central research question. The literature represents background and research developments related to a specific research question, interpreted and analyzed by you in a synthesized way.
- Make sure your research question is not too broad or too narrow. Is it manageable?
- Begin writing down terms that are related to your question. These will be useful for searches later.
- If you have the opportunity, discuss your topic with your professor and your classmates.
2. Decide on the scope of your review
How many studies do you need to look at? How comprehensive should it be? How many years should it cover?
- This may depend on your assignment. How many sources does the assignment require?
3. Select the databases you will use to conduct your searches.
Make a list of the databases you will search.
Where to find databases:
- use the tabs on this guide
- Find other databases in the Nursing Information Resources web page
- More on the Medical Library web page
- ... and more on the Yale University Library web page
4. Conduct your searches to find the evidence. Keep track of your searches.
- Use the key words in your question, as well as synonyms for those words, as terms in your search. Use the database tutorials for help.
- Save the searches in the databases. This saves time when you want to redo or modify the searches. It is also helpful to use them as a guide if the searches are not finding any useful results.
- Review the abstracts of research studies carefully. This will save you time.
- Use the bibliographies and references of research studies you find to locate others.
- Check with your professor, or a subject expert in the field, if you are missing any key works in the field.
- Ask your librarian for help at any time.
- Use a citation manager, such as EndNote as the repository for your citations. See the EndNote tutorials for help.
Review the literature
Some questions to help you analyze the research:
- What was the research question of the study you are reviewing? What were the authors trying to discover?
- Was the research funded by a source that could influence the findings?
- What were the research methodologies? Analyze the study's literature review, the samples and variables used, the results, and the conclusions.
- Does the research seem to be complete? Could it have been conducted more soundly? What further questions does it raise?
- If there are conflicting studies, why do you think that is?
- How are the authors viewed in the field? Has this study been cited? If so, how has it been analyzed?
- Review the abstracts carefully.
- Keep careful notes so that you may track your thought processes during the research process.
- Create a matrix of the studies for easy analysis and synthesis across all of the studies.
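The study matrix mentioned above can be as simple as one spreadsheet row per study and one column per attribute. A hypothetical sketch using Python's standard csv module (the studies and column names are invented for illustration):

```python
import csv
import io

# Hypothetical study matrix: one row per study, one column per attribute.
# Real matrices often add columns for methodology, limitations, and themes.
matrix = [
    {"study": "Author A (2020)", "design": "RCT",
     "sample": "120 participants", "key_finding": "intervention reduced symptoms"},
    {"study": "Author B (2021)", "design": "cohort",
     "sample": "2400 records", "key_finding": "exposure associated with outcome"},
]

# Write the matrix as CSV so it can be opened in any spreadsheet program
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["study", "design", "sample", "key_finding"])
writer.writeheader()
writer.writerows(matrix)
print(buf.getvalue())
```

Laying the studies side by side like this makes it easier to spot the trends, debates, and gaps discussed earlier, because each attribute can be compared down a single column.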
- Last Updated: Oct 31, 2023 3:00 PM
- URL: https://guides.library.yale.edu/YSNDoctoral
Nursing Literature and Other Types of Reviews
Levels of Evidence
Levels of evidence (sometimes called the hierarchy of evidence) are assigned to studies based on the methodological quality of their design, validity, and applicability to patient care. These decisions give the grade (or strength) of recommendation. Just because something is lower on the pyramid doesn't mean that the study itself is lower quality; it just means that the methods used may not be as clinically rigorous as those at higher levels of the pyramid. In nursing, the system for assigning levels of evidence is often from Melnyk & Fineout-Overholt's 2011 book, Evidence-based Practice in Nursing and Healthcare: A Guide to Best Practice . The Levels of Evidence below are adapted from Melnyk & Fineout-Overholt's (2011) model.
Melnyk & Fineout-Overholt (2011)
- Last Updated: Sep 5, 2023 3:14 PM
- URL: https://libguides.lib.msu.edu/nursinglitreview
Penn State University Libraries
Know the Difference! Systematic Review vs. Literature Review
It is common to confuse systematic and literature reviews, as both are used to provide a summary of the existing literature or research on a specific topic. Even with this common ground, the two types differ significantly. Please review the following chart (and its corresponding poster linked below) for a detailed explanation of each, as well as the differences between the two types of review.
- What's in a name? The difference between a Systematic Review and a Literature Review, and why it matters by Lynn Kysh, MLIS, University of Southern California - Norris Medical Library
- Last Updated: Oct 6, 2023 2:24 PM
- URL: https://guides.libraries.psu.edu/nursing
Darrell W. Krueger Library
Evidence-Based Practice Toolkit
Levels of Evidence / Evidence Hierarchy
- Evidence Pyramid (Levels of Evidence)
- Definitions
- Research Designs in the Hierarchy
- Clinical Questions and Research Designs
Levels of evidence (sometimes called hierarchy of evidence) are assigned to studies based on the research design, quality of the study, and applicability to patient care. Higher levels of evidence have less risk of bias .
Levels of Evidence (Melnyk & Fineout-Overholt 2023)
*Adapted from: Melnyk, B. M., & Fineout-Overholt, E. (2023). Evidence-based practice in nursing & healthcare: A guide to best practice (5th ed.). Wolters Kluwer.
" Evidence Pyramid " is a product of Tufts University and is licensed under BY-NC-SA license 4.0
Tufts' "Evidence Pyramid" is based in part on the Oxford Centre for Evidence-Based Medicine: Levels of Evidence (2009)
Levels of Evidence (LoBiondo-Wood & Haber 2022)
Adapted from LoBiondo-Wood, G. & Haber, J. (2022). Nursing research: Methods and critical appraisal for evidence-based practice (10th ed.). Elsevier.
- Oxford Centre for Evidence Based Medicine Glossary
Different types of clinical questions are best answered by different types of research studies. You might not always find the highest level of evidence (i.e., systematic review or meta-analysis) to answer your question. When this happens, work your way down to the next highest level of evidence.
This table suggests study designs best suited to answer each type of clinical question.
- Last Updated: Oct 17, 2023 10:21 AM
- URL: https://libguides.winona.edu/ebptoolkit
Overview of Literature Reviews
What Type of Review is Right for You?
- Decision tree for review types from Cornell University Library
'Literature review' is a generic term that is often used to describe a range of different review types. For a class assignment, you may be required to review academic literature based on a topic of interest and write about the sources you selected. Or, if you are working on a research project, you may need to conduct a comprehensive search of the literature to write a literature review or a literature review chapter for a thesis or dissertation.
Listed below are common review types with brief descriptions for a quick comparison of characteristics. Citations are included for follow up and more details.
- Traditional Review (Integrative/Narrative) Grant & Booth (2009) describe 14 review types. They note the aim of this type of literature review is to examine the current/recent literature, so it may not include comprehensive searches and often it describes only a group of selected sources.
Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26 (2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
- Systematic Review Aims to be comprehensive, adheres to transparent procedures, and provides evidence synthesis that can be used in intervention decisions and policy making.
- Systematized Review Incorporates some systematic review procedures that can be included as part of a narrative (traditional) review.
- Meta-Analysis Uses statistical methods to evaluate relevant research studies and may be part of a systematic review.
- Rapid Review Applies systematic review methods but sets a time limit on locating and appraising sources to fit a shortened timeframe.
- Scoping Review Explores research questions to map key concepts, evidence, and gaps in the literature; it may take longer to complete than a systematic review.
- Umbrella Review Compiles evidence from multiple reviews based on a broad problem for which there are competing interventions.
- Last Updated: Nov 1, 2023 3:10 PM
- URL: https://guides.ucf.edu/literaturereviews
- Levels of Evidence
- Evidence Pyramid
The evidence pyramid is often used to illustrate the development of evidence. At the base of the pyramid is animal research and laboratory studies – this is where ideas are first developed. As you progress up the pyramid the amount of information available decreases in volume, but increases in relevance to the clinical setting.
Meta Analysis – systematic review that uses quantitative methods to synthesize and summarize the results.
Systematic Review – summary of the medical literature that uses explicit methods to perform a comprehensive literature search and critical appraisal of individual studies and that uses appropriate statistical techniques to combine these valid studies.
Randomized Controlled Trial – Participants are randomly allocated into an experimental group or a control group and followed over time for the variables/outcomes of interest.
Cohort Study – Involves identification of two groups (cohorts) of patients, one which received the exposure of interest, and one which did not, and following these cohorts forward for the outcome of interest.
Case Control Study – study which involves identifying patients who have the outcome of interest (cases) and patients without the same outcome (controls), and looking back to see if they had the exposure of interest.
Case Series – report on a series of patients with an outcome of interest. No control group is involved.
- Levels of Evidence from The Centre for Evidence-Based Medicine
- The JBI Model of Evidence Based Healthcare
- How to Use the Evidence: Assessment and Application of Scientific Evidence From the National Health and Medical Research Council (NHMRC) of Australia. Book must be downloaded; not available to read online.
When searching for evidence to answer clinical questions, aim to identify the highest level of available evidence. Evidence hierarchies can help you strategically identify which resources to use for finding evidence, as well as which search results are most likely to be "best".
Image source: Evidence-Based Practice: Study Design from Duke University Medical Center Library & Archives. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The hierarchy of evidence (also known as the evidence-based pyramid) is depicted as a triangular representation of the levels of evidence with the strongest evidence at the top which progresses down through evidence with decreasing strength. At the top of the pyramid are research syntheses, such as Meta-Analyses and Systematic Reviews, the strongest forms of evidence. Below research syntheses are primary research studies progressing from experimental studies, such as Randomized Controlled Trials, to observational studies, such as Cohort Studies, Case-Control Studies, Cross-Sectional Studies, Case Series, and Case Reports. Non-Human Animal Studies and Laboratory Studies occupy the lowest level of evidence at the base of the pyramid.
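One way to apply this hierarchy in practice is to encode the pyramid as a numeric ranking and sort retrieved studies so the strongest designs surface first. A small illustrative sketch (the rank numbers and the sample results are assumptions for demonstration, not an official scale):

```python
# Lower rank = stronger evidence, following the pyramid described above.
# The numeric values are illustrative; published scales vary by organization.
EVIDENCE_RANK = {
    "meta-analysis": 1,
    "systematic review": 2,
    "randomized controlled trial": 3,
    "cohort study": 4,
    "case-control study": 5,
    "cross-sectional study": 6,
    "case series": 7,
    "case report": 8,
    "animal study": 9,
    "laboratory study": 10,
}

def strongest_available(studies):
    """Sort studies strongest-first by design; unknown designs go last."""
    return sorted(studies, key=lambda s: EVIDENCE_RANK.get(s["design"], 99))

# Hypothetical search results
results = [
    {"title": "A", "design": "case series"},
    {"title": "B", "design": "randomized controlled trial"},
    {"title": "C", "design": "cohort study"},
]
print(strongest_available(results)[0]["title"])  # prints B
```

This mirrors the advice above: if no study at the top of the pyramid exists for your question, the sort simply surfaces the next-highest level of evidence that is available.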
- Finding Evidence-Based Answers to Clinical Questions – Quickly & Effectively A tip sheet from the health sciences librarians at UC Davis Libraries to help you get started with selecting resources for finding evidence, based on type of question.
- Last Updated: Sep 1, 2023 1:28 PM
- URL: https://guides.library.ucdavis.edu/systematic-reviews
Nursing-Johns Hopkins Evidence-Based Practice Model
JHEBP Model for Levels of Evidence
JHEBP Levels of Evidence Overview
Evidence-Based Practice (EBP) uses a rating system to appraise evidence (usually a research study published as a journal article). The level of evidence corresponds to the research study design. Scientific research is considered to be the strongest form of evidence and recommendations from the strongest form of evidence will most likely lead to the best practices. The strength of evidence can vary from study to study based on the methods used and the quality of reporting by the researchers. You will want to seek the highest level of evidence available on your topic (Dang et al., 2022, p. 130).
The Johns Hopkins EBP model uses 3 ratings for the level of scientific research evidence
- true experimental (level I)
- quasi-experimental (level II)
- nonexperimental (level III)
The level determination is based on the research meeting the study design requirements (Dang et al., 2022, p. 146-7).
You will use the Research Appraisal Tool (Appendix E) along with the Evidence Level and Quality Guide (Appendix D) to analyze and appraise research studies . (Tools linked below.)
Nonresearch evidence is covered in Levels IV and V.
- Evidence Level and Quality Guide (Appendix D)
- Research Evidence Appraisal Tool (Appendix E)
- Level I: Experimental study, randomized controlled trial (RCT); systematic review of RCTs, with or without meta-analysis.
- Level II: Quasi-experimental study; systematic review of a combination of RCTs and quasi-experimental studies, or of quasi-experimental studies only, with or without meta-analysis.
- Level III: Non-experimental study; systematic review of a combination of RCTs, quasi-experimental, and non-experimental studies, or of non-experimental studies only, with or without meta-analysis; qualitative study or systematic review, with or without meta-analysis.
- Level IV: Opinion of respected authorities and/or nationally recognized expert committees/consensus panels based on scientific evidence; clinical practice guidelines.
- Level V: Based on experiential and non-research evidence: quality improvement, program, or financial evaluation; opinion of nationally recognized expert(s) based on experiential evidence.
These flow charts can also help you determine the level of evidence through a series of questions.
Single Quantitative Research Study
These charts are a part of the Research Evidence Appraisal Tool (Appendix E) document.
Dang, D., Dearholt, S., Bissett, K., Ascenzi, J., & Whalen, M. (2022). Johns Hopkins evidence-based practice for nurses and healthcare professionals: Model and guidelines (4th ed.). Sigma Theta Tau International.
- Last Updated: Sep 11, 2023 11:23 AM
- URL: https://bradley.libguides.com/jhebp
- HHS Author Manuscripts
The Levels of Evidence and their role in Evidence-Based Medicine
Patricia B. Burns
1 Research Associate, Section of Plastic Surgery, Department of Surgery, The University of Michigan Health System
Rod J. Rohrich
2 Professor of Surgery, Department of Plastic Surgery, University of Texas Southwestern Medical Center
Kevin C. Chung
3 Professor of Surgery, Section of Plastic Surgery, Department of Surgery, The University of Michigan Health System
As the name suggests, evidence-based medicine (EBM), is about finding evidence and using that evidence to make clinical decisions. A cornerstone of EBM is the hierarchical system of classifying evidence. This hierarchy is known as the levels of evidence. Physicians are encouraged to find the highest level of evidence to answer clinical questions. Several papers published in Plastic Surgery journals concerning EBM topics have touched on this subject. 1 – 6 Specifically, previous papers have discussed the lack of higher level evidence in PRS and need to improve the evidence published in the journal. Before that can be accomplished, it is important to understand the history behind the levels and how they should be interpreted. This paper will focus on the origin of levels of evidence, their relevance to the EBM movement and the implications for the field of plastic surgery as well as the everyday practice of plastic surgery.
History of Levels of Evidence
The levels of evidence were originally described in a 1979 report by the Canadian Task Force on the Periodic Health Examination. 7 The report’s purpose was to develop recommendations on the periodic health exam and to base those recommendations on evidence in the medical literature. The authors developed a system of rating evidence ( Table 1 ) for determining the effectiveness of a particular intervention. The evidence was taken into account when grading recommendations. For example, a Grade A recommendation was given if there was good evidence to support including a condition in the periodic health exam. The levels of evidence were further described and expanded by Sackett 8 in a 1989 article on levels of evidence for antithrombotic agents ( Table 2 ). Both systems place randomized controlled trials (RCTs) at the highest level and case series or expert opinions at the lowest level. The hierarchies rank studies according to the probability of bias. RCTs are given the highest level because they are designed to be unbiased and carry less risk of systematic error: by randomly allocating subjects to two or more treatment groups, an RCT also randomizes confounding factors that might otherwise bias results. A case series or expert opinion is often biased by the author’s experience or opinions, and there is no control of confounding factors.
Canadian Task Force on the Periodic Health Examination’s Levels of Evidence *
Levels of Evidence from Sackett *
Modification of levels
Since the introduction of levels of evidence, several other organizations and journals have adopted variations of the classification system. Different specialties often ask different questions, and it was recognized that the type and level of evidence needed to be modified accordingly. Research questions are divided into four categories: treatment, prognosis, diagnosis, and economic/decision analysis. For example, Table 3 shows the levels of evidence developed by the American Society of Plastic Surgeons (ASPS) for prognosis 9 and Table 4 shows the levels developed by the Centre for Evidence Based Medicine (CEBM) for treatment. 10 The two tables highlight the types of studies that are appropriate for the question (prognosis versus treatment) and how the quality of data is taken into account when assigning a level. For example, RCTs are not appropriate when looking at the prognosis of a disease. The question in this instance is: “What will happen if we do nothing at all?” Because a prognosis question does not involve comparing treatments, the highest evidence would come from a cohort study or a systematic review of cohort studies. The levels of evidence also take the quality of the data into account: in the chart from CEBM, for example, poorly designed RCTs carry the same level of evidence as a cohort study.
Levels of Evidence for Prognostic Studies *
Levels of Evidence for Therapeutic Studies *
A grading system that assigns strength to recommendations based on evidence has also changed over time. Table 5 shows the Grade Practice Recommendations developed by ASPS. The grading system is an important component of evidence-based medicine and assists in clinical decision making. For example, a strong recommendation is given when level I evidence, together with consistent evidence from level II, III, and IV studies, is available. The grading system does not discount lower level evidence when deciding recommendations if the results are consistent.
Grade Practice Recommendations *
Interpretation of levels
Many journals assign a level to the papers they publish, and authors often assign a level when submitting an abstract to conference proceedings. This allows the reader to know the level of evidence of the research, but the designated level of evidence does not always guarantee the quality of the research. It is important that readers not assume that level 1 evidence is always the best choice or appropriate for the research question. This concept will be very important for all of us to understand as we evolve into the field of EBM in plastic surgery. By its nature, our surgical specialty will always have important articles with a lower level of evidence, because innovation and technique articles are needed to move our surgical specialty forward.
Although RCTs are often assigned the highest level of evidence, not all RCTs are conducted properly, and their results should be carefully scrutinized. Sackett 8 stressed the importance of estimating types of errors and the power of studies when interpreting results from RCTs. For example, a poorly conducted RCT may report a negative result due to low power when in fact a real difference exists between treatment groups. Scales such as the Jadad scale have been developed to judge the quality of RCTs. 11 Although physicians may not have the time or inclination to use a scale to assess quality, some basic items should be taken into account. Items used for assessing RCTs include: randomization; blinding; a description of the randomization and blinding process; a description of the number of subjects who withdrew or dropped out of the study; the confidence intervals around study estimates; and a description of the power analysis. For example, Bhandari et al. 12 assessed the quality of surgical RCTs reported in the Journal of Bone and Joint Surgery (JBJS) from 1988–2000. The authors identified 72 RCTs during this period; papers with a score of > 75% were deemed high quality, yet the mean score was 68% and 60% of the papers scored < 75%. The main reasons for the low quality scores were lack of appropriate randomization, blinding, and a description of patient exclusion criteria. Another paper found that level 1 papers in JBJS had the same quality scores as level 2 papers. 13 Therefore, one should not assume that level 1 studies have higher quality than level 2.
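The Jadad items above lend themselves to a simple checklist. The sketch below scores a trial the way the published scale does (points for randomization, blinding, and accounting for withdrawals, with deductions for inappropriate methods); the function and its argument names are illustrative, not part of any official implementation:

```python
def jadad_score(randomized, randomization_appropriate,
                double_blind, blinding_appropriate,
                withdrawals_described):
    """Illustrative sketch of the Jadad scale for RCT quality (0-5).

    The *_appropriate flags are True (described and appropriate),
    False (described but inappropriate), or None (not described).
    """
    score = 0
    if randomized:
        score += 1                              # study described as randomized
        if randomization_appropriate is True:
            score += 1                          # method described and sound
        elif randomization_appropriate is False:
            score -= 1                          # method described but flawed
    if double_blind:
        score += 1                              # study described as double-blind
        if blinding_appropriate is True:
            score += 1
        elif blinding_appropriate is False:
            score -= 1
    if withdrawals_described:
        score += 1                              # withdrawals/dropouts accounted for
    return max(score, 0)

# A randomized, double-blind trial that describes both methods
# appropriately and accounts for dropouts earns the maximum score.
print(jadad_score(True, True, True, True, True))    # 5
# "Randomized" with no method described, no blinding, no dropout data:
print(jadad_score(True, None, False, None, False))  # 1
```

A checklist like this makes the point of the Bhandari review concrete: a trial can be labeled an RCT and still score poorly if randomization and blinding are not described.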
A resource for surgeons when appraising levels of evidence are the users’ guides published in the Canadian Journal of Surgery 14 , 15 and the Journal of Bone and Joint Surgery. 16 Similar papers that are not specific to surgery have been published in the Journal of the American Medical Association (JAMA). 17 , 18
Plastic surgery and EBM
The field of plastic surgery has been slow to adopt evidence-based medicine. This was demonstrated in a paper examining the level of evidence of papers published in PRS. 19 The authors assigned levels of evidence to papers published in PRS over a 20 year period. The majority of studies (93% in 1983) were level 4 or 5, which denotes case series and case reports. Although the results are disappointing, there was some improvement over time. By 2003 there were more level 1 studies (1.5%) and fewer level 4 and 5 studies (87%). A recent analysis looked at the number of level 1 studies in 5 different plastic surgery journals from 1978–2009. The authors defined level 1 studies as RCTs and meta-analyses and restricted their search to these studies. The number of level 1 studies increased from 1 in 1978 to 32 by 2009. 20 From these results, we see that the field of plastic surgery is improving its level of evidence but still has a way to go, especially in improving the quality of published studies. For example, approximately a third of the studies involved double blinding, but the majority did not randomize subjects, describe the randomization process, or perform a power analysis. Power analysis is another area of concern in plastic surgery. A review of the plastic surgery literature found that the majority of published studies have inadequate power to detect moderate to large differences between treatment groups. 21 No matter what the level of evidence for a study, if it is underpowered, the interpretation of results is questionable.
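To see why underpowering matters, a rough power calculation is enough. The sketch below uses the standard normal approximation for a two-sided, two-group comparison of means; it is a hypothetical helper, not the method used in the cited review. With a "moderate" standardized effect of d = 0.5, roughly 64 subjects per group are needed to reach 80% power, while 20 per group leaves a study badly underpowered:

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of
    means for standardized effect size d (normal approximation;
    a sketch, not a replacement for a proper power analysis)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96
    noncentrality = d * sqrt(n_per_group / 2)  # shift of the test statistic
    return z.cdf(noncentrality - z_crit)       # ignores the negligible far tail

# Moderate effect (d = 0.5): 64 per group gives about 80% power...
print(round(power_two_sample(0.5, 64), 2))
# ...while 20 per group gives well under 50% power, so a "negative"
# result says little about whether a real difference exists.
print(round(power_two_sample(0.5, 20), 2))
```

Run in reverse, the same arithmetic is how a power analysis chooses a sample size before a trial starts.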
Although the goal is to improve the overall level of evidence in plastic surgery, this does not mean that all lower level evidence should be discarded. Case series and case reports are important for hypothesis generation and can lead to more controlled studies. Additionally, in the face of overwhelming evidence to support a treatment, such as the use of antibiotics for wound infections, there is no need for an RCT.
Clinical examples using levels of evidence
In order to understand how the levels of evidence work and aid the reader in interpreting levels, we provide some examples from the plastic surgery literature. The examples also show the peril of medical decisions based on results from case reports.
An association between lymphoma and silicone breast implants was hypothesized on the basis of case reports. 22 – 27 The level of evidence for case reports, depending on the scale used, is 4 or 5. These case reports were used to generate the hypothesis that a possible association existed. Because of these results, several large retrospective cohort studies were conducted in the United States, Canada, Denmark, Sweden, and Finland. 28 – 32 The level of evidence for a retrospective cohort is 2. All of these studies followed a large number of patients for many years. Some of the studies found an elevated risk of lymphoma and others found no risk, but none of the results reached statistical significance. Therefore, the higher level evidence from cohort studies did not demonstrate any risk of lymphoma. Finally, a systematic review was performed that combined the evidence from the retrospective cohorts. 27 The results found an overall standardized incidence ratio of 0.89 (95% CI 0.67–1.18). Because the confidence interval includes 1, the results indicate there is no increased incidence. The level of evidence for the systematic review is 1. Based on the best available evidence, there is no association between lymphoma and silicone implants. This example shows how low level evidence was used to generate a hypothesis, which then led to higher level evidence that disproved the hypothesis. It also demonstrates that RCTs are not feasible for rare events such as cancer and underscores the importance of observational studies for such questions: a case-control study is a better option and provides higher level evidence for examining the long-term effects of silicone breast implants.
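The confidence-interval reasoning in this example is easy to verify. The sketch below computes a standardized incidence ratio (observed cases divided by expected cases) with a normal-approximation interval; the counts are invented for illustration, and published reviews typically report exact Poisson limits rather than this approximation:

```python
from math import sqrt
from statistics import NormalDist

def sir_with_ci(observed, expected, alpha=0.05):
    """Standardized incidence ratio with an approximate CI.

    Normal approximation: SE of the SIR is sqrt(observed)/expected,
    treating the observed count as Poisson. A sketch only.
    """
    sir = observed / expected
    z = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt(observed) / expected
    return sir, sir - z * se, sir + z * se

def includes_null(lower, upper):
    """A ratio whose CI spans 1 is not statistically significant."""
    return lower <= 1.0 <= upper

# Hypothetical counts: 48 observed lymphomas against 54 expected.
sir, lo, hi = sir_with_ci(48, 54)
print(round(sir, 2), includes_null(lo, hi))  # 0.89 True
```

An SIR below 1 with an interval spanning 1, as in the systematic review, means the data are compatible with no increase in incidence at all.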
Another example is the injection of epinephrine in fingers. Based on case reports prior to 1950, physicians were advised that epinephrine injection can result in finger ischemia. 33 This is an example in which level 4 or 5 evidence was accepted as fact and incorporated into medical textbooks and teaching. However, not all physicians accepted this evidence, and some continued to inject epinephrine into the fingers with no adverse effects on the hand. Clearly, it was time for higher level evidence to resolve the issue. An in-depth review of the literature from 1880 to 2000 by Denkler 33 identified 48 cases of digital infarction, of which 21 involved epinephrine injection. Further analysis found that the addition of procaine to the epinephrine injection was the cause of the ischemia. 34 The procaine used in these injections included toxic acidic batches that were recalled in 1948. In addition, several cohort studies found no complications from the use of epinephrine in the fingers and hand. 35 , 36 , 37 The results from these cohort studies increased the level of evidence. Based on the best available evidence from these studies, the hypothesis that epinephrine injection will harm fingers was rejected. This example highlights the biases inherent in case reports. It also shows the risk when spurious evidence is handed down and integrated into medical teaching.
Obtaining the best evidence
We have established the need for RCTs to improve evidence in plastic surgery but have also acknowledged the difficulties, particularly with randomization and blinding. Although RCTs may not be appropriate for many surgical questions, well designed and conducted cohort or case-control studies could boost the level of evidence. Many current studies tend to be descriptive and lack a control group. The way forward seems clear: plastic surgery researchers need to consider a cohort or case-control design whenever an RCT is not possible. If designed properly, the level of evidence for observational studies can approach or even surpass that of an RCT; in some instances, observational studies and RCTs have found similar results. 38 If enough cohort or case-control studies become available, the prospect of systematic reviews of these studies increases, which would raise overall evidence levels in plastic surgery.
The levels of evidence are an important component of EBM. Understanding the levels and why they are assigned to publications and abstracts helps the reader to prioritize information. This is not to say that all level 4 evidence should be ignored and all level 1 evidence accepted as fact. The levels of evidence provide a guide and the reader needs to be cautious when interpreting these results.
Supported in part by a Midcareer Investigator Award in Patient-Oriented Research (K24 AR053120) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases (to Dr. Kevin C. Chung).
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Writing a Literature Review
Digital Tech & FYE Librarian
What is a literature review?
Flip the words around and you have the beginning of your answer: a review of the literature.
“The literature” means scholarly sources (mostly academic journal articles and books) published on a specific topic.
“Review” means that you’ve read that work carefully in order to create a piece of writing that organizes, summarizes, analyzes, and makes connections between sources, as well as identifying areas where research is still needed.
Why write a literature review?
A lit review can serve several purposes:
- Orient the reader to a topic of study in order to validate the need for a new study.
- Reveal patterns or problems in previous research , which is its own kind of “finding” or result.
In primary research that includes the results of an experiment or fieldwork, it precedes the results and sets up a later discussion of the results in the context of previous findings.
What is the difference between a literature review and an annotated bibliography?
There is not just one way to write a literature review or an annotated bibliography, so differences vary. However, one of the main differences is that an annotated bibliography is typically organized source by source —each one has its own paragraph of explanation, analysis, etc.
In a literature review, the writing is organized thematically , often with multiple sources addressed in each paragraph, and there is an overarching narrative driving the review.
Although there are “bibliographic essays” that are essentially narratively-driven annotated bibliographies , in general annotated bibliographies are a drafting step toward a more formal piece of writing , while a literature review is more likely to be that more formal piece itself.
Ok, what about the difference between a literature review and a research paper?
Here’s a secret: there’s no such thing as “A Research Paper.” There are papers that use research in many different ways, and a literature review is one of those ways. Typically, though, if your assignment is specifically to write a “literature review,” it may mean you are being asked to focus less on creating your own argument, driven by a thesis with research as supporting evidence, and more on finding something to say based on the patterns and questions of the research you’ve read .
How should I organize a literature review?
Typically, literature reviews are organized thematically , not chronologically or source by source. This means that you will need to identify several sub-topics and figure out how to group sources to tell a story in themes. Some sources may show up in multiple sections, and some sources will only appear once. For practical suggestions on how to organize, see organizing a literature review (as of 3/23/20: in progress!).
How comprehensive should my review be?
This really depends on the assignment or type of literature review that you’re doing. Some reviews are quite extensive and aim to be “exhaustive,” looking at every article on a particular topic. Chances are, yours is not that. For guidance you may want to ask your professor this question , or more specific questions like, “should I consider articles published more than 20 years ago? What about 10?” etc.
You may also want to consider if it makes sense to narrow your focus to a particular region, demographic, or even type of study or article, such as focusing on specific methods used.
Finally, the scope of your review may also be influenced by the state of prior research . If you are exploring a relatively under-researched or interdisciplinary topic, you may draw from a broader and more diverse set of articles. If you are looking at something that has a well-established scholarly history, your focus will likely be much narrower.
How do I know if I’m “done” researching/haven’t missed anything?
The truth is, research is never “done.” But it’s true you have to come to a stopping point so you can write and finish your review! Here are a few tips for making this assessment:
- You see the same authors being cited over and over again in your sources and you have those sources, too . That can be a sign that you’ve hit on a particular scholarly conversation and identified most of the major voices in it.
- Ask a librarian to help you! While librarians are great at finding sources, we can also help you determine if there are no more sources available to find.
- Outline your review and make sure that each section of your review is supported by adequate research. If you have sections that are much lighter than others, you may want to give those a second look.
- Make sure you’ve given yourself achievable parameters . If you feel like there are just thousands more articles on your exact topic, you may need to narrow yours down, or at least explain why you have selected certain articles instead of other, similar ones.
- Finally, don’t forget to evaluate as you write . It’s likely that the writing process itself will help you determine whether you have the sources you need to achieve your goals.
A literature review can be challenging, and requires a lot of careful thinking as well as the steps of finding articles and writing. But with time, patience, and help, you can do it, and you'll be proud of the results once you're done.
- Last Updated: Nov 1, 2023 1:01 PM
- URL: https://libguides.trinity.edu/literaturereview
AACN Levels of Evidence
Level A — Meta-analysis of quantitative studies or metasynthesis of qualitative studies with results that consistently support a specific action, intervention, or treatment (including systematic review of randomized controlled trials).
Level B — Well-designed, controlled studies with results that consistently support a specific action, intervention, or treatment.
Level C — Qualitative studies, descriptive or correlational studies, integrative review, systematic review, or randomized controlled trials with inconsistent results.
Level D — Peer-reviewed professional and organizational standards with the support of clinical study recommendations.
Level E — Multiple case reports, theory-based evidence from expert opinions, or peer-reviewed professional organizational standards without clinical studies to support recommendations.
Level M — Manufacturer’s recommendations only.
(Excerpts from Peterson et al. Choosing the Best Evidence to Guide Clinical Practice: Application of AACN Levels of Evidence. Critical Care Nurse. 2014;34:58-68.)
What is the purpose of levels of evidence (LOEs)?
“The amount and availability of research supporting evidence-based practice can be both useful and overwhelming for critical care clinicians. Therefore, clinicians must critically evaluate research before attempting to put the findings into practice. Evaluation of research generally occurs on 2 levels: rating or grading the evidence by using a formal level-of-evidence system and individually critiquing the quality of the study. Determining the level of evidence is a key component of appraising the evidence.1-3 Levels or hierarchies of evidence are used to evaluate and grade evidence. The purpose of determining the level of evidence and then critiquing the study is to ensure that the evidence is credible (eg, reliable and valid) and appropriate for inclusion into practice.3 Critique questions and checklists are available in most nursing research and evidence-based practice texts to use as a starting point in evaluation.”
How are LOEs determined?
“The most common method used to classify or determine the level of evidence is to rate the evidence according to the methodological rigor or design of the research study.3,4 The rigor of a study refers to the strict precision or exactness of the design. In general, findings from experimental research are considered stronger than findings from nonexperimental studies, and similar findings from more than 1 study are considered stronger than results of single studies. Systematic reviews of randomized controlled trials are considered the highest level of evidence, despite the inability to provide answers to all questions in clinical practice.”4,5
Who developed the AACN LOEs?
“As interest in promoting evidence-based practice has grown, many professional organizations have adopted criteria to evaluate evidence and develop evidence-based guidelines for their members.”1,5 Originally developed in 1995, AACN’s rating scale was updated in 2008 and 2014 by the Evidence-Based Practice Resources Workgroup (EBPRWG). The 2011-2013 EBPRWG continued the tradition of previous workgroups in moving research to the patient bedside.
What are the AACN LOEs and how are they used?
The AACN levels of evidence are structured in an alphabetical hierarchy in which the highest form of evidence is ranked as A and includes meta-analyses and metasyntheses of the results of controlled trials. Evidence from controlled trials is rated B. Level C, the highest level for nonexperimental studies, includes systematic reviews of qualitative, descriptive, or correlational studies. “Levels A, B, and C are all based on research (either experimental or nonexperimental designs) and are considered evidence. Levels D, E, and M are considered recommendations drawn from articles, theory, or manufacturers’ recommendations.”
“Clinicians must critically evaluate research before attempting to implement the findings into practice. The clinical relevance of any research must be evaluated as appropriate for inclusion into practice.”
- Polit DF, Beck CT. Resource Manual for Nursing Research: Generating and Assessing Evidence for Nursing Practice. 9th ed. Philadelphia, PA: Williams & Wilkins; 2012.
- Armola RR, Bourgault AM, Halm MA, et al; 2008-2009 Evidence-Based Practice Resource Work Group of the American Association of Critical-Care Nurses. Upgrading the American Association of Critical-Care Nurses’ evidence-leveling hierarchy. Am J Crit Care. 2009;18(5):405-409.
- Melnyk BM, Fineout-Overholt E. Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice. 2nd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2011.
- Gugiu PC, Gugiu MR. A critical appraisal of standard guidelines for grading levels of evidence. Eval Health Prof. 2010;33(3):233-255. doi:10.1177/0163278710373980.
- Evans D. Hierarchy of evidence: a framework for ranking evidence evaluating healthcare interventions. J Clin Nurs. 2003;12(1):77-84.
What Is a Literature Review?
Olivia Valdes was the Associate Editorial Director for ThoughtCo. She worked with Dotdash Meredith from 2017 to 2021.
- B.A., American Studies, Yale University
A literature review summarizes and synthesizes the existing scholarly research on a particular topic. Literature reviews are a form of academic writing commonly used in the sciences, social sciences, and humanities. However, unlike research papers, which establish new arguments and make original contributions, literature reviews organize and present existing research. As a student or academic, you might produce a literature review as a standalone paper or as a portion of a larger research project.
What Literature Reviews Are Not
In order to understand literature reviews, it's best to first understand what they are not . First, literature reviews are not bibliographies. A bibliography is a list of resources consulted when researching a particular topic. Literature reviews do more than list the sources you’ve consulted: they summarize and critically evaluate those sources.
Second, literature reviews are not subjective. Unlike some of the other well-known "reviews" (e.g. theater or book reviews), literature reviews steer clear of opinion statements. Instead, they summarize and critically assess a body of scholarly literature from a relatively objective perspective. Writing a literature review is a rigorous process, requiring a thorough evaluation of the quality and findings of each source discussed.
Why Write a Literature Review?
Writing a literature review is a time-consuming process that requires extensive research and critical analysis . So, why should you spend so much time reviewing and writing about research that’s already been published?
- Justifying your own research . If you’re writing a literature review as part of a larger research project , the literature review allows you to demonstrate what makes your own research valuable. By summarizing the existing research on your research question, a literature review reveals points of consensus and points of disagreement, as well as the gaps and open questions that remain. Presumably, your original research has emerged from one of those open questions, so the literature review serves as a jumping-off point for the rest of your paper.
- Demonstrating your expertise. Before you can write a literature review, you must immerse yourself in a significant body of research. By the time you’ve written the review, you’ve read widely on your topic and are able to synthesize and logically present the information. This final product establishes you as a trustworthy authority on your topic.
- Joining the conversation . All academic writing is part of one never-ending conversation: an ongoing dialogue among scholars and researchers across continents, centuries, and subject areas. By producing a literature review, you’re engaging with all of the prior scholars who examined your topic and continuing a cycle that moves the field forward.
Tips for Writing a Literature Review
While specific style guidelines vary among disciplines, all literature reviews are well-researched and organized. Use the following strategies as a guide as you embark on the writing process.
- Choose a topic with a limited scope. The world of scholarly research is vast, and if you choose too broad a topic, the research process will seem never-ending. Choose a topic with a narrow focus, and be open to adjusting it as the research process unfolds. If you find yourself sorting through thousands of results every time you conduct a database search, you may need to further refine your topic .
- Take organized notes. Organizational systems such as the literature grid are essential for keeping track of your readings. Use the grid strategy, or a similar system, to record key information and main findings/arguments for each source. Once you begin the writing process, you’ll be able to refer back to your literature grid each time you want to add information about a particular source.
- Pay attention to patterns and trends . As you read, be on the lookout for any patterns or trends that emerge among your sources. You might discover that there are two clear existing schools of thought related to your research question. Or, you might discover that the prevailing ideas about your research question have shifted dramatically several times over the last hundred years. The structure of your literature review will be based on the patterns you discover. If no obvious trends stand out, choose the organizational structure that best suits your topic, such as theme, issue, or research methodology.
Writing a literature review takes time, patience, and a whole lot of intellectual energy. As you pore over countless academic articles, consider all the researchers who preceded you and those who will follow. Your literature review is much more than a routine assignment: it's a contribution to the future of your field.
Princeton University Library
Ecology and Evolutionary Biology Research Guide
What is a literature review?
A literature review surveys scholarly articles, books and other sources relevant to a particular issue, area of research, or theory. The purpose is to offer an overview of significant literature published on a topic.
A literature review may constitute an essential chapter of a thesis or dissertation, or may be a self-contained review of writings on a subject. In either case, its purpose is to:
- Place each work in the context of its contribution to the understanding of the subject under review
- Describe the relationship of each work to the others under consideration
- Identify new ways to interpret, and shed light on any gaps in, previous research
- Resolve conflicts amongst seemingly contradictory previous studies
- Identify areas of prior scholarship to prevent duplication of effort
- Point the way forward for further research
- Place one's original work (in the case of theses or dissertations) in the context of existing literature
A literature review can be just a simple summary of the sources, but it usually has an organizational pattern and combines both summary and synthesis. A summary is a recap of the important information of the source, but a synthesis is a re-organization, or a reshuffling, of that information. It might give a new interpretation of old material or combine new with old interpretations. Or it might trace the intellectual progression of the field, including major debates. And depending on the situation, the literature review may evaluate the sources and advise the reader on the most pertinent or relevant.
Similar to primary research, development of the literature review requires four stages:
- Problem formulation—which topic or field is being examined and what are its component issues?
- Literature search—finding materials relevant to the subject being explored
- Data evaluation—determining which literature makes a significant contribution to the understanding of the topic
- Analysis and interpretation—discussing the findings and conclusions of pertinent literature
Remember, this is a process and not necessarily a linear one. As you search and evaluate the literature, you may refine your topic or head in a different direction which will take you back to the search stage. In fact, it is useful to evaluate as you go along so you don't spend hours researching one aspect of your topic only to find yourself more interested in another.
The main focus of an academic research paper is to develop a new argument, and a research paper will contain a literature review as one of its parts. In a research paper, you use the literature as a foundation and as support for a new insight that you contribute. The focus of a literature review, however, is to summarize and synthesize the arguments and ideas of others without adding new contributions.
For additional information, including suggestions for the structure of your literature review, see this guide from the University of North Carolina's Writing Center: https://writingcenter.unc.edu/handouts/literature-reviews/
This <10 minute tutorial from North Carolina State University also provides a good overview of the literature review: https://www.lib.ncsu.edu/tutorials/lit-review/
While we don't have any examples of an EEB JP literature review, it may be useful to look at other reviews to learn how researchers in the field "summarize and synthesize" the literature. Any research article or dissertation in the sciences will include a section which reviews the literature. Though the section may not be labeled as such, you will quickly recognize it by the number of citations and the discussion of the literature. Another option is to look for Review Articles, which are literature reviews published as stand-alone articles. Here are some resources where you can find Research Articles, Review Articles and Dissertations:
- Web of Science - If you'd like to limit your results to Review Articles, look to the left side of your results page. There you will see many options to refine your search including the section labeled Document Types. Select "Review" as the document type and click on Refine.
- Scopus - Similar to WoS, you can use the options on the left side of your results page if you'd like to limit the document type. Here you will again choose "Review" and then click on the Limit To button.
- Annual Reviews - All articles in this database are review articles. You can search for your topic or browse in a related subject area.
- Dissertations @ Princeton - Provides access to many Princeton dissertations, full text is available for most published after 1996.
*** Note about using Review Articles in your research - while they are useful in helping you to locate articles on your topic, remember that you must go to and use the original source if you intend to include a study mentioned in the review. The only time you would cite a review article is if it makes an original insight that you discuss in your paper. Going to the original research paper allows you to verify the information about that study and determine whether the points made in the review are valid and accurate.
- Last Updated: Mar 22, 2023 2:36 PM
- URL: https://libguides.princeton.edu/eeb
- UWF Libraries
Evidence Based Nursing
What are the types of reviews?
As you begin searching through the literature for evidence, you will come across different types of publications. Below are examples of the most common types and explanations of what they are. Although systematic reviews and meta-analyses are considered the highest quality of evidence, not every topic will have a systematic review or meta-analysis.
Literature Review Examples
Remember, a literature review provides an overview of a topic. There may or may not be a method for how studies are collected or interpreted. Lit reviews aren't always obviously labeled "literature review"; they may be embedded within sections such as the introduction or background. You can figure this out by reading the article.
- Dance therapy for individuals with Parkinson's Disease Notice how the introduction and subheadings provide background on the topic and describe why it's important. Some studies are grouped together that convey a similar idea. Limitations of some studies are addressed as a way of showing the significance of the research topic.
- Ethical Issues Regarding Human Cloning: A Nursing Perspective Notice how this article is broken into several sections: background on human cloning, harms of cloning, and nursing issues in cloning. These are the themes of the different articles that were used in writing this literature review. Look at how the articles work together to form a cohesive piece of literature.
Systematic Review Examples
Systematic reviews address a clinical question. Studies are gathered using a specific, defined set of criteria.
- Selection criteria are defined
- The words "systematic review" may appear in the title or abstract
- Note: Cochrane Reviews are systematic reviews
- Additional reviews can be found by using a systematic review limit
- A Systematic Review of Animal-Assisted Therapy on Psychosocial Outcomes in People with Intellectual Disability
- The determinants and consequences of adult nursing staff turnover: a systematic review of systematic reviews
- Cochrane Library (Wiley) Over 5000 reviews of research on medical treatments, practices, and diagnostic tests are provided in this database. Cochrane Reviews is the premier resource for Evidence Based Practice.
- PubMed (NLM) PubMed comprises more than 22 million citations for biomedical literature from MEDLINE, life science journals, and online books.
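The "systematic review limit" mentioned above can also be applied programmatically. Here is a minimal sketch that builds a PubMed query URL using NCBI's E-utilities `esearch` endpoint and the `[pt]` (publication type) tag; the example topic and parameter choices are illustrative assumptions, not a prescribed workflow:

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint: returns PubMed IDs matching a query.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(topic: str, retmax: int = 20) -> str:
    """Build an esearch URL restricted to the Systematic Review publication type."""
    # "[pt]" limits the search to a PubMed publication type
    term = f"({topic}) AND systematic review[pt]"
    query = urlencode({"db": "pubmed", "term": term,
                       "retmode": "json", "retmax": retmax})
    return f"{BASE}?{query}"

# Example: find systematic reviews on animal-assisted therapy
print(pubmed_search_url("animal-assisted therapy"))
```

Fetching the resulting URL returns a JSON list of matching PubMed IDs; swapping `systematic review[pt]` for `meta-analysis[pt]` targets meta-analyses instead.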
Meta-analysis is a study that combines data from OTHER studies. The results of those studies are pooled and analyzed together to test whether a clinical intervention is statistically significant. For example, say you want to examine a specific headache intervention without running a clinical trial. You can look at other articles that studied that intervention, combine all the participants from those articles, and run a statistical analysis to test whether the pooled results are significant. Guess what? There's a lot of math.
- Include the words "meta-analysis" or "meta analysis" in your keywords
- Meta-analyses will always be accompanied by a systematic review, but a systematic review may not have a meta-analysis
- See if the abstract or results section mention a meta-analysis
- Use databases like Cochrane or PubMed
- Exercise Interventions for Preventing Falls Among Older People in Care Facilities: A Meta-Analysis
- Acupuncture for the prevention of tension-type headache This is a systematic review that includes a meta-analysis. Check out the Abstract and Results for an example of what a meta-analysis looks like!
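The pooling step described above can be sketched in a few lines. This is a minimal fixed-effect (inverse-variance) meta-analysis on made-up numbers — the effect sizes and variances below are hypothetical, and real meta-analyses involve additional steps such as heterogeneity testing:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance (fixed-effect) pooling of per-study effect sizes.

    effects   -- per-study effect estimates (e.g. standardized mean differences)
    variances -- per-study sampling variances
    Returns (pooled effect, 95% CI lower bound, 95% CI upper bound).
    """
    weights = [1.0 / v for v in variances]          # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))              # standard error of the pooled effect
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical data: three small trials of a headache intervention
effects = [-0.8, -0.5, -1.1]
variances = [0.04, 0.09, 0.16]

pooled, lo, hi = fixed_effect_meta(effects, variances)
print(f"pooled effect {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Because the confidence interval here excludes zero, the pooled intervention effect would be called statistically significant — which is exactly the kind of conclusion a meta-analysis draws from studies that were individually too small.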
- Last Updated: Oct 4, 2023 12:18 PM
- URL: https://libguides.uwf.edu/EBP
- Open access
- Published: 23 October 2023
A 10-year update to the principles for clinical trial data sharing by pharmaceutical companies: perspectives based on a decade of literature and policies
- Natansh D. Modi 1 ,
- Ganessan Kichenadasse 1 , 2 ,
- Tammy C. Hoffmann 3 ,
- Mark Haseloff 4 ,
- Jessica M. Logan 5 ,
- Areti A. Veroniki 6 , 7 ,
- Rebecca L. Venchiarutti 8 , 9 ,
- Amelia K. Smit 10 ,
- Haitham Tuffaha 11 ,
- Harindra Jayasekara 12 , 13 , 14 ,
- Arkady Manning-Bennet 1 ,
- Erin Morton 1 ,
- Ross A. McKinnon 1 ,
- Andrew Rowland 1 ,
- Michael J. Sorich 1 &
- Ashley M. Hopkins ORCID: orcid.org/0000-0001-7652-4378 1
BMC Medicine volume 21 , Article number: 400 ( 2023 ) Cite this article
Data sharing is essential for promoting scientific discoveries and informed decision-making in clinical practice. In 2013, PhRMA/EFPIA recognised the importance of data sharing and supported initiatives to enhance clinical trial data transparency and promote scientific advancements. However, despite these commitments, recent investigations indicate significant scope for improvements in data sharing by the pharmaceutical industry. Drawing on a decade of literature and policy developments, this article presents perspectives from a multidisciplinary team of researchers, clinicians, and consumers. The focus is on policy and process updates to the PhRMA/EFPIA 2013 data sharing commitments, aiming to enhance the sharing and accessibility of participant-level data, clinical study reports, protocols, statistical analysis plans, lay summaries, and result publications from pharmaceutical industry-sponsored trials. The proposed updates provide clear recommendations regarding which data should be shared, when it should be shared, and under what conditions. The suggested improvements aim to develop a data sharing ecosystem that supports science and patient-centred care. Good data sharing principles require resources, time, and commitment. Notwithstanding these challenges, enhancing data sharing is necessary for efficient resource utilization, increased scientific collaboration, and better decision-making for patients and healthcare professionals.
Clinical trial data sharing is vital for fostering transparency, quality, scientific advancement, reducing research waste, and sustaining confidence in the pharmaceutical industry. In 2013 [ 1 ], a large proportion of the industry, through the Pharmaceutical Research and Manufacturers of America (PhRMA) and European Federation of Pharmaceutical Industries and Associations (EFPIA), endorsed a commitment to:
- Share participant-level data, study-level data, and protocols from clinical trials of United States (US) and European Union (EU) registered medicines with qualified researchers
- Provide public access to clinical study reports (CSR), at minimum synopses, from clinical trials submitted to the Food and Drug Administration (FDA), European Medicines Agency (EMA), and EU Member States
- Share summary result reports with clinical trial participants
- Establish public web pages displaying the companies’ data sharing policies and procedures
- At a minimum, publish results from all phase 3 trials and any clinical trial of significant medical importance
PhRMA and EFPIA members are currently at the forefront of data sharing commitments, surpassing academia and statutory requirements. However, there is still room for further improvement and standardization of commitments to enhance communication of clinical trial results with the public, as well as to facilitate a more efficient data sharing ecosystem.
Progress and challenges in clinical trial data sharing
The PhRMA/EFPIA commitments marked significant progress in providing clinical trial results to participants and the general public, as well as in establishing a data sharing ecosystem that enriches the post-approval evidence base through open research conducted by independent researchers (Fig. 1 ) [ 2 , 3 , 4 , 5 , 6 , 7 ]. With 18 of the current top 20 pharmaceutical companies by revenue being PhRMA/EFPIA members, the commitment holds significant weight [ 8 ]. Moreover, 15 of the top 20 companies are also TransCelerate (a collaborative network of pharmaceutical companies) members, ensuring access to guidance on collecting trial data under standardised quality conditions from the outset [ 9 ]. However, recent investigations indicate that over 50% of the clinical trials supporting the FDA approval of 115 anticancer medicines over the past 10 years were ineligible for participant-level data sharing [ 8 ]. This finding includes 90% of the clinical trials summarised in the product labels of nivolumab, pembrolizumab, and pomalidomide—this is concerning as these medicines currently rank in the top 10 anticancer medicines by global sales. Furthermore, investigations indicate that much of the participant-level data underpinning the FDA/EMA approval of COVID-19 vaccines is currently out of scope for request and will likely remain so for some time [ 3 ]. The above findings underscore an urgent need for improvements in participant-level data transparency, especially for pivotal medicines with significant medical importance.
Potential impacts of data sharing
Since 2013, policies and recommendations for sharing specific data elements have been developed by various organizations, including the FDA, EMA, Health Canada, World Health Organization (WHO), US National Institutes of Health (NIH), Institute of Medicine (now the National Academy of Medicine), White House Office of Science and Technology Policy, International Committee of Medical Journal Editors (ICMJE), Bill and Melinda Gates Foundation, Wellcome Trust, and the GO FAIR Initiative, among others, highlighting significant developments in the data sharing landscape [ 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 ]. Despite these developments, the 2013 PhRMA/EFPIA principles still serve as a significant point of reference within the data sharing policy web pages of many pharmaceutical companies [ 1 ].
Enhancing data sharing practices
Drawing on a decade of literature and policy developments, this article presents perspectives from a multidisciplinary team of authors, including researchers, clinicians, and consumers. The article works towards proposing evidence-based recommendations for potential updates to the pharmaceutical industry data sharing principles established in 2013. Our primary aim was to review the current literature to identify and highlight feasible, urgent next steps for enhancing the data sharing ecosystem and for promoting harmonised data sharing practices among companies. We have formulated these recommendations based on current literature and reported experiences. However, we acknowledge that they may not address all the challenges at hand, and continued progress will still be necessary.
Table 1 presents our recommended updates, which aim to enhance existing principles, promote harmonised data sharing practices, and establish clearer guidelines regarding which data should be shared, when it should be shared, and under what conditions. The goal is to foster the data sharing ecosystem [ 24 , 25 ]. Exemplifying the feasibility of the recommendations presented in Table 1 , most are currently implemented in a fragmented manner across companies. While the primary focus of this manuscript is on pharmaceutical industry data sharing practices, the perspectives are also relevant to non-industry trial sponsors and investigator-initiated trials. Additionally, this manuscript is expected to be particularly valuable for smaller pharmaceutical companies that have less established data sharing practices [ 24 ]. Below, we outline the key literature and policy developments justifying our recommendations.
Participant-level data sharing
Transparent sharing of participant-level data facilitates novel secondary analyses, avoids unnecessary study duplication, and informs future trial design [ 2 , 3 , 4 , 5 , 6 , 7 ]. Participant-level data from clinical trials of newer medicines are vital as they are the centrepiece of safety and efficacy for these medicines [ 3 , 26 ]. The EMA has indicated that it will implement future policies to promote participant-level data sharing [ 27 ], although no US or EU regulations currently mandate participant-level data sharing from industry-sponsored medicine trials.
Nonetheless, most large pharmaceutical companies have processes to share participant-level data [ 8 ]. However, recent research indicates that approximately 50% of participant-level data supporting newly registered medicines are not eligible (i.e., not in scope) for request [ 3 , 8 , 28 , 29 ]. Specific trials are often deemed ineligible for sharing due to ongoing follow-up, extended embargos, requirements for both EMA and FDA approval, and issues related to the need for explicit informed consent from study participants [ 3 , 8 , 28 , 29 ]. To expand data sharing, research suggests that participant-level data from any clinical trial underpinning a product label or submitted to the FDA or EMA for drug approval should be immediately eligible for sharing [ 3 , 8 ]. Sharing this participant-level data should not be restricted by the clinical trial having long-term follow-up. While long-term follow-up is crucial to understanding longer-term safety and efficacy, it should not prevent the sharing of the result data that are responsible for the medicine's approval [ 3 , 8 ]. Pharmaceutical companies should also facilitate the sharing of data from clinical trials that do not directly support medicine approvals, within a well-defined timeframe after the primary results are completed or published, to reduce research waste [ 3 , 8 ].
Decisions on the legitimacy of independent data requests, including the hypotheses tested, the research rationale, the analysis plan, the publication plan, and the qualifications of the research team, should be made by independent scientific review panels [ 21 ]. To facilitate these review processes, it is important to establish mechanisms that provide training to independent individuals, enabling them to develop a deep understanding of the technical, legal, and scientific aspects required to assess data requests [ 30 , 31 ]. The objective is to establish a pool of independent reviewers, enabling pharmaceutical companies to limit their role to simply determining the sharing eligibility of the requested participant-level data. Towards this, pharmaceutical companies should be aiming to maintain up-to-date, publicly accessible registers documenting the sharing eligibility of their clinical trials [ 10 ]. This should include a specific indication of clinical trials that are ineligible, along with clear reasons outlining why and when trials will become eligible. Among the various reasons for ineligibility, consent form issues have been identified as a major concern. To address this issue, company web pages should provide clear information on updated consenting procedures, along with consent form examples [ 10 ].
Data protection and security must be a top priority for all parties, including the requestor [ 32 ]. Participant-level data sharing typically takes place on platforms requiring rigorous assessment of the requesting teams’ qualifications [ 8 , 21 ]. Researchers often obtain access to data in a secure, password-protected research environment from which data cannot be downloaded locally [ 21 ]. The procedures for anonymising data should align with the level of protection required. Procedures that redact key information (such as survival and adverse event data) for secondary research should be evaluated for appropriateness and necessity [ 21 , 32 ]. Furthermore, to facilitate the valid use of participant-level data, companies should enhance the findability and accessibility of clinical study reports, annotated case report forms, data dictionaries, data derivation documents, protocols, statistical analysis plans, and anonymisation guides. Such transparency, as highlighted by the FAIR (Findable, Accessible, Interoperable, and Reusable) data principles, is essential for enabling independent researchers to create detailed data requests and verify their data preparation processes when undertaking participant-level data analyses [ 10 , 33 , 34 ].
Independent researchers should also be committed to publishing their analyses, sharing code for reproducibility, maintaining data confidentiality, not disclosing data to unauthorised parties, and not attempting to re-identify study participants [ 1 , 35 ]. Acknowledgements to data contributors and original investigators should be made in all secondary data use publications, and researchers should recognise that original investigator contributions may warrant authorship on new work [ 36 ].
Sharing of clinical study reports
CSRs are standardised documents that contain detailed information (often > 1000 pages) on study designs and study-level results from clinical trials, providing vastly more detail than either clinical trial result synopses or publications [ 37 , 38 , 39 , 40 ]. Given their comprehensive and high-quality nature, CSRs are a valuable resource for research, especially for meta- and patient-level data analyses. Furthermore, they can aid healthcare providers in making informed decisions for at-risk individuals—which can be particularly important for understanding toxicity likelihoods with newer medicines [ 38 , 41 , 42 ].
CSRs are often prepared as supporting documents for medicine submissions to approval and reimbursement bodies. CSR transparency has been acknowledged by the EMA, Health Canada, and the FDA as a mechanism to support public trust in regulatory processes [ 17 , 18 , 41 , 43 ]. Both the EMA and Health Canada have regulations stating that they will publicly share CSRs submitted to them that support medicine approval decisions [ 18 , 43 ]. However, resource difficulties have hindered the EMA in disseminating CSRs, and they have not been doing so since 2018 [ 17 , 43 ]. Meanwhile, the FDA has no CSR sharing policy and instead encourages sponsors to voluntarily disclose such information due to the logistic challenges it would face in implementing such a process [ 17 , 43 ].
While initiatives to publicly share result synopses and publications are commendable, our evaluations suggest that full CSRs from all clinical trials submitted to support medicine approvals should be publicly available for direct download, irrespective of whether the trial has continuing follow-up. Additionally, subsequent versions of CSRs should be made available as they are prepared, as new reports are often created for later data cuts. Given that there are functionalities to upload supporting documents (such as CSRs) on clinical trial registration websites [ 15 , 44 ], this could be a future option for voluntary disclosure. Furthermore, while ensuring patient anonymity is critical, companies should not endorse the practice of over-redaction in their CSR anonymisation processes [ 45 , 46 , 47 ].
Sharing of protocols and statistical analysis plans
Statistical analysis plans (SAPs) and protocols are essential resources for cross-referencing planned analyses and reporting of outcome/adverse event measures from clinical trials [ 48 ]. They also provide researchers with a thorough understanding of the data gathered during a clinical trial, facilitating the design of secondary data analyses [ 19 ].
The ICMJE recommends that SAPs and protocols should be reviewed when evaluating journal submissions and be made publicly available upon publication [ 20 ]. Similarly, NIH regulations (effective from 2017) indicate that SAPs and protocols should be publicly available at the time of publishing summary results [ 11 , 14 ]. Notably, in 2020, both Moderna and Pfizer released detailed protocols for their COVID-19 vaccine trials, well before publishing the results [ 49 ]. We propose that companies should publicly share SAPs and protocols for all published clinical trials and consider sharing them within 6 months of enrolling the first participant. Functionalities to upload SAP and protocol documents are available on clinical trial registries [ 15 , 44 ]. Subsequent versions of SAPs and protocols should be made available as updates are prepared. Data management and data sharing plans should be outlined in SAPs and protocols [ 50 ].
For secondary analyses of shared data, academic institutions and data sharing platforms should have public processes for documenting approved SAPs and requests.
Sharing results with trial participants
Lay summary documents (or plain language summaries) are reports that convey clinical trial results in a simplified format for study participants and the general public [ 51 , 52 ]. Sharing of such documents is recognised by regulators and companies as a mechanism to enhance public trust in medicines [ 52 , 53 , 54 ]. The Declaration of Helsinki (2013) mandates that all participants ‘should be given the option of being informed about the general outcome and results of the study [ 55 ].’
Companies should meet the lay summary requirements of the European Union Clinical Trials Regulation (EU CTR) 536/2014 (effective January 2022) [ 16 , 54 ]. The regulation states, and we support, that all clinical trial participants should be provided a lay summary reporting the results of the clinical trial within 12 months of primary outcome completion [ 16 , 54 ]. Subsequent summaries should be prepared for collected follow-up data. The EU CTR indicates that all lay summaries should be made publicly available. Towards best practice, preparation and dissemination plans for lay summaries should be included in study protocols [ 52 ].
Publishing clinical trial results
The Declaration of Helsinki (2013) mandates that results from human studies should be made publicly available [ 55 ]. US and EU regulations now require the publishing of clinical trial result summaries to ClinicalTrials.gov and the Clinical Trial Information System, respectively, within 12 months of primary outcome completion [ 13 , 14 , 56 ]. Requests have also been made to make scientific journal publications available in the same timeframe [ 22 ]. We propose that the dissemination of result publications should not depend on clinical trial outcome or phase [ 22 ] and should cover all follow-up data. Furthermore, consistency of results presentations between publications, regulatory evaluations, and product information leaflets should be ensured [ 57 ].
Public data sharing policies
Pharmaceutical companies should have publicly available web pages detailing their data sharing policies, procedures, and commitments [ 1 ]. Detailed public policy information has been linked to improved clinical trial transparency [ 8 , 24 , 28 , 58 ]. Table 1 outlines our perspectives on essential policy updates for data sharing based on emerging literature over the past decade. To implement these updates, companies should establish clear public policies for sharing participant-level data, full CSRs, protocol/SAPs, lay summaries, CSR synopses, reporting of results on clinical trial registries, and journal publications [ 19 ]. These are among the critical domains of data sharing advocated by the (now) National Academy of Medicine [ 19 ].
We recommend that data sharing policies should be written in a standardised format, including sub-headings for each data item, with numbered criteria for easy referencing by independent scientific review panels. Public registers of data sharing requests and decisions should be kept up-to-date [ 21 ]. Additionally, companies should have a register of their clinical trials that are eligible for data sharing and those that are not [ 10 ]. The register should specify the eligibility criteria and procedures for accessing participant-level data, full CSR, protocol/SAPs, lay summary, CSR synopsis, reporting of results on clinical trial registries, and scientific journal publications for every clinical trial [ 10 , 19 ].
To facilitate cross-referencing and linkage between documents, company processes should aim to include both clinical trial registration numbers and internal trial numbers/names in all publications, product information leaflets, participant-level data, CSRs, protocols/SAPs, and lay summaries [ 10 , 59 ]. This cross-referencing between documents is currently undertaken poorly by most companies.
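As one illustration of the cross-referencing above, linking documents by registration number is straightforward to automate once identifiers appear consistently. This sketch extracts ClinicalTrials.gov IDs (which follow the real format `NCT` plus 8 digits) from free text; the document snippet and internal study number are hypothetical:

```python
import re

# ClinicalTrials.gov registration IDs: "NCT" followed by exactly 8 digits.
NCT_ID = re.compile(r"\bNCT\d{8}\b")

def find_registration_ids(text: str) -> list[str]:
    """Return the unique ClinicalTrials.gov IDs cited in a document."""
    return sorted(set(NCT_ID.findall(text)))

# Hypothetical publication text mixing registry and internal trial numbers
doc = ("Results of trial NCT01234567 (internal study 205MD301) were "
       "compared with NCT07654321; NCT01234567 remains in follow-up.")

print(find_registration_ids(doc))  # → ['NCT01234567', 'NCT07654321']
```

When every publication, CSR, protocol/SAP, and lay summary carries the registration number, a pass like this is enough to link the documents belonging to one trial across sources.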
While the primary aim of the article was to highlight feasible, urgent next steps for enhancing the data sharing ecosystem, we acknowledge that continued progress will still be necessary even if all the recommendations put forward are adopted. Looking ahead, the clinical trial data sharing landscape holds tremendous potential for fostering new scientific discoveries and informing decision-making [ 60 , 61 , 62 , 63 ]. Notably, Vivli alone as a participant-level data sharing platform has facilitated the publication of over 180 research works over the past 5 years, an output that has increased from 2 manuscripts in 2019 to 85 in 2022 [ 64 ]. However, to fully realise the potential impact of the data sharing ecosystem, it will be important for all clinical trial sponsors and investigators, including non-industry trial sponsors, to take significant steps in improving standards.
It is acknowledged that at present the data sharing landscape is fragmented in many aspects [ 65 ]. In the future, there is hope for better utilization of public clinical trial registries as valuable resources for prospectively acknowledging the sharing eligibility of participant-level data, as well as facilitating public access to CSRs, protocol/SAPs, lay summaries, result publications, annotated case report forms, data dictionaries, data derivation documents, and anonymization guides [ 66 , 67 , 68 ]. At present the reporting and accessibility of these documents is somewhat disparate between companies, and the sharing eligibility of participant-level data for specific clinical trials is often not outlined prospectively.
Another consideration is the potential to centralise or transition participant-level data sharing to more open-access models. Undoubtedly, needing to access different platforms/servers (e.g. CSDR [ 69 ] and Vivli [ 70 ]) limits the effectiveness of undertaking participant-level data meta-analyses for investigations involving multiple companies. Consideration should be given to whether more open models could facilitate crowd-sourced insights as well as minimise administrative burdens. Nonetheless, even with such a system, there is still a need for mechanisms that ensure the quality of outputs.
To enhance data sharing practices, there is a need for better methods to assess and distinguish between good and bad data sharers. A valuable step towards achieving this would be the implementation of improved meta-metrics on clinical trial data sharing. Currently, the best option for comparing the transparency practices of pharmaceutical companies is ‘The Good Pharma Scorecard’ [ 71 ]; however, it primarily ranks policies rather than comparing the outputs and performances of the companies. It is suggested that ‘The Good Pharma Scorecard’ could be significantly enhanced by incorporating insights into meta-metrics such as the total number of data requests received, the number of approved requests, and the number of citable public outputs facilitated for each company. This would offer a more comprehensive and transparent evaluation of data sharing efforts, enabling better recognition of companies with commendable metrics, and encouraging others to meet the standards of their competitors.
Data sharing plays a vital role in fostering scientific progress and supporting well-informed decisions in clinical practice. Table 1 presents policy and process updates that reflect our perspectives, based on the literature, on the next steps to enhance accessibility and transparency of participant-level data, CSRs, protocol/SAPs, lay summaries, and result publications from clinical trials. Implementing these principles will require resources, time, and commitment, and we acknowledge that new issues and areas for improvement may arise [ 72 , 73 , 74 ]. Nonetheless, these achievable suggestions aim to facilitate the development of a data sharing ecosystem that prioritises science and patient-centred care. Meeting these commitments is in the best interest of all institutions involved in clinical trials, including companies, universities, PhRMA/EFPIA, medical societies, advocacy groups, regulators, funders, and journals, because the ultimate goal is to ensure efficient resource utilization, foster scientific advancement, and facilitate the best decisions for patients.
Availability of data and materials

Abbreviations

CSR: Clinical study report
EFPIA: European Federation of Pharmaceutical Industries and Associations
EMA: European Medicines Agency
EU CTR: European Union Clinical Trials Regulation
FDA: Food and Drug Administration
ICMJE: International Committee of Medical Journal Editors
NIH: National Institutes of Health
PhRMA: Pharmaceutical Research and Manufacturers of America
SAP: Statistical analysis plan
WHO: World Health Organization
PhRMA, EFPIA. ‘Principles for responsible clinical trial data sharing’: PhRMA and EFPIA; 2013. Available from: https://www.efpia.eu/media/qndlfduy/phrmaefpiaprinciplesforresponsibledatasharing2023.pdf .
Blasimme A, Fadda M, Schneider M, Vayena E. Data sharing for precision medicine: policy lessons and future directions. Health Aff. 2018;37(5):702–9.
Doshi P, Godlee F, Abbasi K. COVID-19 vaccines and treatments: we must have raw data, now. BMJ. 2022;376:o102.
El Emam K, Rodgers S, Malin B. Anonymising and sharing individual patient data. BMJ. 2015;350:h1139.
Lo B. Sharing clinical trial data: maximizing benefits, minimizing risk. JAMA. 2015;313(8):793–4.
Loder E. Data sharing: making good on promises. BMJ. 2018;360:k710.
Bonini S, Eichler H-G, Wathion N, Rasi G. Transparency and the European Medicines Agency—sharing of clinical trial data. N Engl J Med. 2014;371(26):2452–5.
Modi ND, Abuhelwa AY, McKinnon RA, Boddy AV, Haseloff M, Wiese MD, et al. Audit of data sharing by pharmaceutical companies for anticancer medicines approved by the US Food and Drug Administration. JAMA Oncol. 2022;8(9):1310–6.
TransCelerate. Clinical data transparency - de-identification & anonymization. 2023. Available from: https://www.transceleratebiopharmainc.com/assets/clinical-data-transparency/ .
Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016;3(1):160018.
Department of Health and Human Services. U.S. Department of Health and Human Services. Clinical trials registration and results information submission - a rule by the Health and Human Services Department; 2016. Available from: https://www.federalregister.gov/documents/2016/09/21/2016-22129/clinical-trials-registration-and-results-information-submission .
National Institute of Health. NIH Office of data science strategy announces new initiative to improve access to NIH-funded Data. 2022. Available from: https://datascience.nih.gov/news/nih-office-of-data-science-strategy-announces-new-initiative-to-improve-data-access .
European Union. Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC Text with EEA relevance. 2014. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32014R0536 .
ClinicalTrials.gov. FDAAA 801 and the final rule (Section 801 of the Food and Drug Administration Amendments Act of 2007). 2022. Available from: https://clinicaltrials.gov/ct2/manage-recs/fdaaa .
European Medicines Agency. Clinical Trials Information System (CTIS): online modular training programme. 2023. Available from: https://www.ema.europa.eu/en/human-regulatory/research-development/clinical-trials/clinical-trials-information-system-ctis-online-modular-training-programme .
European Union. Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC Text with EEA relevance. 2022. Available from: http://data.europa.eu/eli/reg/2014/536/oj .
Food and Drug Administration. FDA continues to support transparency and collaboration in drug approval process as the clinical data summary pilot concludes. United States of America Government; 2020. Available from: https://www.fda.gov/news-events/press-announcements/fda-continues-support-transparency-and-collaboration-drug-approval-process-clinical-data-summary .
Health Canada. Public release of clinical information: consultation. Government of Canada; 2019. Available from: https://www.canada.ca/en/health-canada/programs/consultation-public-release-clinical-information-drug-submissions-medical-device-applications.html .
Institute of Medicine. Sharing clinical trial data: maximizing benefits, minimizing risk. Washington, DC: The National Academies Press; 2015.
International Committee of Medical Journal Editors. Recommendations. 2022. Available from: https://www.icmje.org/recommendations/ .
National Academies Press (US). Reflections on sharing clinical trial data: challenges and a way forward: Proceedings of a Workshop. 2020. https://doi.org/10.17226/25838 .
World Health Organization. Joint statement on public disclosure of results from clinical trials. 2017. Available from: https://www.who.int/news/item/18-05-2017-joint-statement-on-registration .
White House Office of Science and Technology Policy. Ensuring free, immediate, and equitable access to federally funded research. 2022. Available from: https://www.whitehouse.gov/ostp/news-updates/2022/08/25/ostp-issues-guidance-to-make-federally-funded-research-freely-available-without-delay/ .
Goldacre B, Lane S, Mahtani KR, Heneghan C, Onakpoya I, Bushfield I, et al. Pharmaceutical companies’ policies on access to trial data, results, and methods: audit study. BMJ. 2017;358:j3334.
Miller J, Ross JS, Wilenzick M, Mello MM. Sharing of clinical trial data and results reporting practices among large pharmaceutical companies: cross sectional descriptive study and pilot of a tool to improve company practices. BMJ. 2019;366:l4217.
Umscheid CA, Margolis DJ, Grossman CE. Key concepts of clinical trials: a narrative review. Postgrad Med. 2011;123(5):194–204.
European Medicines Agency. Background to clinical data publication policy Europe. European Medicines Agency; 2023. Available from: https://www.ema.europa.eu/en/human-regulatory/marketing-authorisation/clinical-data-publication/background-clinical-data-publication-policy .
Hopkins AM, Rowland A, Sorich MJ. Data sharing from pharmaceutical industry sponsored clinical studies: audit of data availability. BMC Med. 2018;16(1):165.
Murugiah K, Ritchie JD, Desai NR, Ross JS, Krumholz HM. Availability of clinical trial data from industry-sponsored cardiovascular trials. J Am Heart Assoc. 2016;5(4):e003307.
Hemmings R, Koch A. Commentary on: Subgroup analysis and interpretation for phase 3 confirmatory trials: white paper of the EFSPI/PSI working group on subgroup analysis by Dane, Spencer, Rosenkranz, Lipkovich, and Parke. Pharm Stat. 2019;18(2):140–4.
Statisticians in the pharmaceutical industry. Data Sharing SIG; 2023. Available from: https://psiweb.org/sigs-special-interest-groups/data-sharing-working-group .
Bamford S, Lyons S, Arbuckle L, Chetelat P. Sharing Anonymized and Functionally Effective (SAFE) data standard for safely sharing rich clinical trial data. Appl Clin Trials. 2022;31(7/8):30-43.
Hoffmann T, Glasziou P, Beller E, Goldacre B, Chalmers I. Focus on sharing individual patient data distracts from other ways of improving trial transparency. BMJ. 2017;357:j2782.
Odame E, Burgess T, Arbuckle L, Belcin A, Jones G, Mesenbrink P, et al. Establishing a basis for secondary use standards for clinical trials. Appl Clin Trials. 2023. https://www.appliedclinicaltrialsonline.com/view/establishing-a-basis-for-secondary-use-standards-for-clinical-trials .
Gomes DGE, Pottier P, Crystal-Ornelas R, Hudgins EJ, Foroughirad V, Sánchez-Reyes LL, et al. Why don’t we share data and code? Perceived barriers and benefits to public archiving practices. Proc R Soc B Biol Sci. 2022;289(1987):20221113.
Tedersoo L, Küngas R, Oras E, Köster K, Eenmaa H, Leijen Ä, et al. Data sharing practices and data availability upon request differ across scientific disciplines. Sci Data. 2021;8(1):192.
Doshi P, Jefferson T. Clinical study reports of randomised controlled trials: an exploratory review of previously confidential industry reports. BMJ Open. 2013;3(2):e002496.
Doshi P, Jefferson T, Del Mar C. The imperative to share clinical study reports: recommendations from the Tamiflu experience. PLoS Med. 2012;9(4):e1001201.
Wieseler B, Wolfram N, McGauran N, Kerekes MF, Vervölgyi V, Kohlepp P, et al. Completeness of reporting of patient-relevant clinical trial outcomes: comparison of unpublished clinical study reports with publicly available data. PLoS Med. 2013;10(10):e1001526.
Hodkinson A, Dietz KC, Lefebvre C, Golder S, Jones M, Doshi P, et al. The use of clinical study reports to enhance the quality of systematic reviews: a survey of systematic review authors. Syst Rev. 2018;7(1):117.
Egilman AC, Ross JS, Herder M. Optimizing the data available via Health Canada’s clinical information portal. CMAJ. 2021;193(33):E1305-e6.
Cochrane Library. A statement in support of EMA’s clinical study report transparency policy. 2020. Available from: https://www.cochrane.org/news/statement-support-emas-clinical-study-report-transparency-policy .
European Medicines Agency. Clinical data publication. European Union; 2022. Available from: https://www.ema.europa.eu/en/human-regulatory/marketing-authorisation/clinical-data-publication .
ClinicalTrials.gov. Submit studies to ClinicalTrials.gov PRS. 2023. Available from: https://clinicaltrials.gov/ct2/manage-recs .
Minssen T, Rajam N, Bogers M. Clinical trial data transparency and GDPR compliance: implications for data sharing and open innovation. Sci Public Policy. 2020;47(5):616–26.
Marquardsen M, Ogden M, Gøtzsche PC. Redactions in protocols for drug trials: what industry sponsors concealed. J R Soc Med. 2018;111(4):136–41.
Hopkins AM, Modi ND, Abuhelwa AY, Kichenadasse G, Kuderer NM, Lyman GH, et al. Heterogeneity and utility of pharmaceutical company sharing of individual-participant data packages. JAMA Oncol. 2023. https://doi.org/10.1001/jamaoncol.2023.3996 .
Campbell D, McDonald C, Cro S, Jairath V, Kahan BC. Access to unpublished protocols and statistical analysis plans of randomised trials. Trials. 2022;23(1):674.
Grady D, Thomas K. Moderna and Pfizer reveal secret blueprints for coronavirus vaccine trials. The New York Times; 2020. Available from: https://www.nytimes.com/2020/09/17/health/covid-moderna-vaccine.html .
Statham EE, White SA, Sonwane B, Bierer BE. Primed to comply: individual participant data sharing statements on ClinicalTrials.gov. PLoS One. 2020;15(2):e0226143.
Kirkpatrick E, Gaisford W, Williams E, Brindley E, Tembo D, Wright D. Understanding plain English summaries. A comparison of two approaches to improve the quality of plain English summaries in research reports. Res Involv Engagem. 2017;3(1):17.
European Union. Clinical Trials Expert Group of the European Commission representing Ethics Committees and National Competent Authorities: Good Lay Summary Practice. 2021. Available from: https://health.ec.europa.eu/latest-updates/good-lay-summary-practice-guidance-2021-10-04_en .
Barnes A, Patrick S. Lay summaries of clinical study results: an overview. Pharmaceut Med. 2019;33(4):261–8.
EFPIA. Implementing “Good Lay Summary Practice”. 2022. Available from: https://www.efpia.eu/news-events/the-efpia-view/efpia-news/implementing-good-lay-summary-practice/ .
World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013;310(20):2191–4.
ClinicalTrials.gov. Why should I register and submit results? 2023. Available from: https://clinicaltrials.gov/ct2/manage-recs/background .
Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358(3):252–60.
Axson SA, Mello MM, Lincow D, Yang C, Gross CP, Ross JS, et al. Clinical trial transparency and data sharing among biopharmaceutical companies and the role of company size, location and product type: a cross-sectional descriptive analysis. BMJ Open. 2021;11(7):e053248.
Sender D, Clark J, Hoffmann TC. Analysis of articles directly related to randomized trials finds poor protocol availability and inconsistent linking of articles. J Clin Epidemiol. 2020;124:69–74.
Ohmann C, Moher D, Siebert M, Motschall E, Naudet F. Status, use and impact of sharing individual participant data from clinical trials: a scoping review. BMJ Open. 2021;11(8):e049228.
Burns NS, Miller PW. Learning what we didn’t know—the SPRINT data analysis challenge. N Engl J Med. 2017;376(23):2205–7.
Eichler H-G, Sweeney F. The evolution of clinical trials: can we address the challenges of the future? Clin Trials. 2018;15(1):27–32.
Rockhold F, Bromley C, Wagner EK, Buyse M. Open science: the open clinical trials data journey. Clin Trials. 2019;16(5):539–46.
Vivli. Public disclosures. 2023. Available from: https://vivli.org/resources/public-disclosures/ .
Rydzewska LHM, Stewart LA, Tierney JF. Sharing individual participant data: through a systematic reviewer lens. Trials. 2022;23(1):167.
World Health Organization. International Clinical Trials Registry Platform (ICTRP). 2023. Available from: https://www.who.int/clinical-trials-registry-platform .
National Library of Medicine. ClinicalTrials.gov. U.S. Department of Health and Human Services; 2023. Available from: https://clinicaltrials.gov/ .
European Medicines Agency. EU Clinical Trials Register. European Union; 2023. Available from: https://www.clinicaltrialsregister.eu/ctr-search/search .
ClinicalStudyDataRequest. 2023. Available from: https://www.clinicalstudydatarequest.com/ .
Vivli. Center for Global Clinical Research Data. 2023. Available from: https://vivli.org/ .
Bioethics International. Good Pharma Scorecard. 2023. Available from: https://bioethicsinternational.org/good-pharma-scorecard/ .
Kochhar S, Knoppers B, Gamble C, Chant A, Koplan J, Humphreys GS. Clinical trial data sharing: here’s the challenge. BMJ Open. 2019;9(8):e032334.
Keerie C, Tuck C, Milne G, Eldridge S, Wright N, Lewis SC. Data sharing in clinical trials – practical guidance on anonymising trial datasets. Trials. 2018;19(1):25.
Rockhold FW. Statistical controversies in clinical research: data access and sharing - can we be more transparent about clinical research? Let’s do what’s right for patients. Ann Oncol. 2017;28(8):1734–7.
Figure 1 was created with BioRender.com.
NDM is supported by a Postgraduate Scholarship from the National Health and Medical Research Council, Australia (APP2005294).
AMH is supported by an Emerging Leader Investigator Grant from the National Health and Medical Research Council, Australia (APP2008119).
MJS is supported by a Beat Cancer Research Fellowship from Cancer Council South Australia.
HJ is supported by an NHMRC grant (GNT1163120).
AKS is supported by an NHMRC Synergy grant (#2009923).
Authors and affiliations.
College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
Natansh D. Modi, Ganessan Kichenadasse, Arkady Manning-Bennet, Erin Morton, Ross A. McKinnon, Andrew Rowland, Michael J. Sorich & Ashley M. Hopkins
Flinders Centre for Innovation in Cancer, Department of Medical Oncology, Flinders Medical Centre, Adelaide, SA, Australia
Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, QLD, Australia
Tammy C. Hoffmann
Clinical and Health Sciences, University of South Australia, Adelaide, SA, Australia
Jessica M. Logan
Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada
Areti A. Veroniki
Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Toronto, Canada
Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Camperdown, NSW, Australia
Rebecca L. Venchiarutti
Department of Head and Neck Surgery, Chris O’Brien Lifehouse, Sydney, NSW, Australia
The Daffodil Centre, The University of Sydney, A Joint Venture with Cancer Council NSW, Sydney, NSW, Australia
Amelia K. Smit
Centre for the Business and Economics of Health, The University of Queensland, Brisbane, QLD, Australia
Cancer Epidemiology Division, Cancer Council Victoria, Melbourne, VIC, Australia
Centre for Epidemiology and Biostatistics, Melbourne School of Population and Global Health, The University of Melbourne, Melbourne, VIC, Australia
School of Public Health and Preventive Medicine, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
NDM (first author) and AMH (corresponding author) were responsible for the project conception and drafting of the manuscript. All authors were involved in editing and revising the manuscript and agreed to its publication. All authors read and approved the final manuscript.
Correspondence to Ashley M. Hopkins .
Ethics approval and consent to participate

Consent for publication

Competing interests
A/Prof. Rowland, Prof. Sorich, and Prof. McKinnon report grants from Pfizer, outside the submitted work. The remaining authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article.
Modi, N.D., Kichenadasse, G., Hoffmann, T.C. et al. A 10-year update to the principles for clinical trial data sharing by pharmaceutical companies: perspectives based on a decade of literature and policies. BMC Med 21 , 400 (2023). https://doi.org/10.1186/s12916-023-03113-0
Received : 09 May 2023
Accepted : 13 October 2023
Published : 23 October 2023
DOI : https://doi.org/10.1186/s12916-023-03113-0
- Data sharing
- Pharmaceutical industry
- Research Article
- Published: 06 November 2023
Assessing student learning in a guided inquiry-based maker learning environment: knowledge representation from the expertise development perspective
- Xun Ge (ORCID: orcid.org/0000-0003-2883-9716) 1,
- Kyungwon Koh 2 &
- Ling Hu 3
Educational Technology Research and Development (2023)
A qualitative study was conducted in a secondary school to evaluate student learning processes and outcomes by examining their inquiry questions, journals, and maker artifacts in a curriculum-based maker learning environment supported by the Guided Inquiry Design (GID). Thirteen 8th-grade students in a suburban middle school in the southwest of the United States participated in the study. Inquiry questions, maker artifacts, and inquiry journals were collected and analyzed with rubrics developed from a critical review of multiple bodies of literature. The objectives of this study were three-fold: (1) evaluate the quality of students’ inquiry questions and maker artifacts, (2) assess students’ internal knowledge representation (IKR) and external knowledge representation (EKR), and (3) develop a robust and valid assessment framework for maker learning considering students’ expertise development over time. The findings revealed that the students progressed at different developmental levels; therefore, expertise development should be incorporated into the assessment framework for maker learning. This study also implied that scaffolding should be tailored to meet the specific needs of each student in a maker learning environment. The originality of this research is that the assessment framework takes into account individuals’ development and progress towards expertise over time, instead of focusing on their learning outcomes at a specific time point. An additional value of this study is that the tools and supporting materials that serve as instructional scaffolds also serve as tools to collect evidence about student learning performance, processes, and outcomes.
This project was made possible by the Institute of Museum and Library Services (IMLS) National Leadership Grant for Libraries.
Authors and affiliations.
Department of Learning Technologies, College of Information, University of North Texas, Denton, USA
School of Information Sciences, The University of Illinois at Urbana-Champaign, Champaign, USA
School of Foreign Language Education, Jilin University, Changchun, China
Correspondence to Xun Ge .
Appendix 1: Rubrics for inquiry questions
Appendix 2: Rubrics for maker products

Appendix 3: Rubrics for the maker projects

Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article.
Ge, X., Koh, K. & Hu, L. Assessing student learning in a guided inquiry-based maker learning environment: knowledge representation from the expertise development perspective. Education Tech Research Dev (2023). https://doi.org/10.1007/s11423-023-10306-0
Accepted : 06 October 2023
Published : 06 November 2023
DOI : https://doi.org/10.1007/s11423-023-10306-0
- Maker learning
- Guided inquiry design
- Knowledge representation