

10 Best Literature Review Tools for Researchers
This post may contain affiliate links that allow us to earn a commission at no expense to you. Learn more

Boost your research game with these Best Literature Review Tools for Researchers! Uncover hidden gems, organize your findings, and ace your next research paper!
Conducting literature reviews poses challenges for researchers due to the overwhelming volume of information available and the lack of efficient methods to manage and analyze it.
Researchers struggle to identify key sources, extract relevant information, and maintain accuracy while manually conducting literature reviews. This leads to inefficiency, errors, and difficulty in identifying gaps or trends in existing literature.
Advancements in technology have resulted in a variety of literature review tools. These tools streamline the process, offering features like automated searching, filtering, citation management, and research data extraction. They save time, improve accuracy, and provide valuable insights for researchers.
In this article, we present a curated list of the 10 best literature review tools, empowering researchers to make informed choices and revolutionize their systematic literature review process.
Top 10 Literature Review Tools for Researchers: In A Nutshell (2023)
#1. Semantic Scholar – A free, AI-powered research tool for scientific literature

Semantic Scholar is a cutting-edge literature review tool that researchers rely on for its comprehensive access to academic publications. With its advanced AI algorithms and extensive database, it simplifies the discovery of relevant research papers.
By employing semantic analysis, users can explore scholarly articles based on context and meaning, making it a go-to resource for scholars across disciplines.
Additionally, Semantic Scholar offers personalized recommendations and alerts, ensuring researchers stay updated with the latest developments. However, users should be cautious of potential limitations.
Not all scholarly content may be indexed, and occasional false positives or inaccurate associations can occur. Furthermore, the tool primarily focuses on computer science and related fields, potentially limiting coverage in other disciplines.
Researchers should be mindful of these considerations and supplement Semantic Scholar with other reputable resources for a comprehensive literature review. Despite these caveats, Semantic Scholar remains a valuable tool for streamlining research and staying informed.
#2. Elicit – Research assistant using language models like GPT-3

Elicit is a game-changing literature review tool that has gained popularity among researchers worldwide. With its user-friendly interface and extensive database of scholarly articles, it streamlines the research process, saving time and effort.
The tool employs advanced algorithms to provide personalized recommendations, ensuring researchers discover the most relevant studies for their field. Elicit also promotes collaboration by enabling users to create shared folders and annotate articles.
However, users should be cautious when using Elicit. It is important to verify the credibility and accuracy of the sources found through the tool, as the database encompasses a wide range of publications.
Additionally, occasional glitches in the search function have been reported, leading to incomplete or inaccurate results. While Elicit offers tremendous benefits, researchers should remain vigilant and cross-reference information to ensure a comprehensive literature review.
#3. Scite.Ai – Your personal research assistant

Scite.Ai is a popular literature review tool that revolutionizes the research process for scholars. With its innovative citation analysis feature, researchers can evaluate the credibility and impact of scientific articles, making informed decisions about their inclusion in their own work.
By assessing the context in which citations are used, Scite.Ai ensures that the sources selected are reliable and of high quality, enabling researchers to establish a strong foundation for their research.
However, while Scite.Ai offers numerous advantages, there are a few aspects to be cautious about. As with any data-driven tool, occasional errors or inaccuracies may arise, necessitating researchers to cross-reference and verify results with other reputable sources.
Moreover, Scite.Ai’s coverage may be limited in certain subject areas and languages, with a possibility of missing relevant studies, especially in niche fields or non-English publications.
Therefore, researchers should supplement the use of Scite.Ai with additional resources to ensure comprehensive literature coverage and avoid any potential gaps in their research.
Scite.Ai offers the following paid plans:
- Monthly Plan: $20
- Yearly Plan: $12

#4. DistillerSR – Literature Review Software

DistillerSR is a powerful literature review tool trusted by researchers for its user-friendly interface and robust features. With its advanced search capabilities, researchers can quickly find relevant studies from multiple databases, saving time and effort.
The tool offers comprehensive screening and data extraction functionalities, streamlining the review process and improving the reliability of findings. Real-time collaboration features also facilitate seamless teamwork among researchers.
While DistillerSR offers numerous advantages, there are a few considerations. Users should invest time in understanding the tool’s features and functionalities to maximize its potential. Additionally, the pricing structure may be a factor for individual researchers or small teams with limited budgets.
Despite occasional technical glitches reported by some users, the developers actively address these issues through updates and improvements, ensuring a better user experience.
Overall, DistillerSR empowers researchers to navigate the vast sea of information, enhancing the quality and efficiency of literature reviews while fostering collaboration among research teams.
#5. Rayyan – AI Powered Tool for Systematic Literature Reviews

Rayyan is a powerful literature review tool that simplifies the research process for scholars and academics. With its user-friendly interface and efficient management features, Rayyan is highly regarded by researchers worldwide.
It allows users to import and organize large volumes of scholarly articles, making it easier to identify relevant studies for their research projects. The tool also facilitates seamless collaboration among team members, enhancing productivity and streamlining the research workflow.
However, it’s important to be aware of a few aspects. The free version of Rayyan has limitations, and upgrading to a premium subscription may be necessary for additional functionalities.
Users should also be mindful of occasional technical glitches and compatibility issues, promptly reporting any problems. Despite these considerations, Rayyan remains a valuable asset for researchers, providing an effective solution for literature review tasks.
Rayyan offers both free and paid plans:
- Professional: $8.25/month
- Student: $4/month
- Pro Team: $8.25/month
- Team+: $24.99/month

#6. Consensus – Use AI to find you answers in scientific research

Consensus is a cutting-edge literature review tool that has become a go-to choice for researchers worldwide. Its intuitive interface and powerful capabilities make it a preferred tool for navigating and analyzing scholarly articles.
With Consensus, researchers can save significant time by efficiently organizing and accessing relevant research material. People consider Consensus for several reasons.
Its advanced search algorithms and filters help researchers sift through vast amounts of information, ensuring they focus on the most relevant articles. By streamlining the literature review process, Consensus allows researchers to extract valuable insights and accelerate their research progress.
However, there are a few factors to watch out for when using Consensus. As with any automated tool, researchers should exercise caution and independently verify the accuracy and relevance of the generated results. Complex or niche topics may present challenges, resulting in limited search results. Researchers should also supplement Consensus with manual searches to ensure comprehensive coverage of the literature.
Overall, Consensus is a valuable resource for researchers seeking to optimize their literature review process. By leveraging its features alongside critical thinking and manual searches, researchers can enhance the efficiency and effectiveness of their work, advancing their research endeavors to new heights.
Consensus offers both free and paid plans:
- Premium: $9.99/month
- Enterprise: Custom

#7. RAx – AI-powered reading assistant

RAx is a revolutionary literature review tool that has transformed the research process for scholars worldwide. With its user-friendly interface and advanced features, it offers a vast database of academic publications across various disciplines, providing access to relevant and up-to-date literature.
Using advanced algorithms and machine learning, RAx delivers personalized recommendations, saving researchers time and effort in their literature search.
However, researchers should be cautious of potential biases in the recommendation system and supplement their search with manual verification to ensure a comprehensive review.
Additionally, occasional inaccuracies in metadata have been reported, making it essential for users to cross-reference information with reliable sources. Despite these considerations, RAx remains an invaluable tool for enhancing the efficiency and quality of literature reviews.
RAx offers both free and paid plans, with a 50% discount as of July 2023:
- Premium: $6/month (discounted to $3/month)
- Premium with Copilot: $8/month (discounted to $4/month)

#8. Lateral – Advance your research with AI

“Lateral” is a revolutionary literature review tool trusted by researchers worldwide. With its user-friendly interface and powerful search capabilities, it simplifies the process of gathering and analyzing scholarly articles.
By leveraging advanced algorithms and machine learning, Lateral saves researchers precious time by retrieving relevant articles and uncovering new connections between them, fostering interdisciplinary exploration.
While Lateral provides numerous benefits, users should exercise caution. It is advisable to cross-reference its findings with other sources to ensure a comprehensive review.
Additionally, researchers must be mindful of potential biases introduced by the tool’s algorithms and should critically evaluate and interpret the results.
Despite these considerations, Lateral remains an indispensable resource, empowering researchers to delve deeper into their fields of study and make valuable contributions to the academic community.
Lateral offers both free and paid plans:
- Premium: $10.98
- Pro: $27.46

#9. Iris AI – Introducing the researcher workspace

Iris AI is an innovative literature review tool that has transformed the research process for academics and scholars. With its advanced artificial intelligence capabilities, Iris AI offers a seamless and efficient way to navigate through a vast array of academic papers and publications.
Researchers are drawn to this tool because it saves valuable time by automating the tedious task of literature review and provides comprehensive coverage across multiple disciplines.
Its intelligent recommendation system suggests related articles, enabling researchers to discover hidden connections and broaden their knowledge base. However, caution should be exercised while using Iris AI.
While the tool excels at surfacing relevant papers, researchers should independently evaluate the quality and validity of the sources to ensure the reliability of their work.
It’s important to note that Iris AI may occasionally miss niche or lesser-known publications, necessitating a supplementary search using traditional methods.
Additionally, being an algorithm-based tool, there is a possibility of false positives or missed relevant articles due to the inherent limitations of automated text analysis. Nevertheless, Iris AI remains an invaluable asset for researchers, enhancing the quality and efficiency of their research endeavors.
Iris AI offers different pricing plans to cater to various user needs:
- Basic: Free
- Premium: Monthly ($82.41), Quarterly ($222.49), and Annual ($791.07)

#10. Scholarcy – Summarize your literature through AI

Scholarcy is a powerful literature review tool that helps researchers streamline their work. By employing advanced algorithms and natural language processing, it efficiently analyzes and summarizes academic papers, saving researchers valuable time.
Scholarcy’s ability to extract key information and generate concise summaries makes it an attractive option for scholars looking to quickly grasp the main concepts and findings of multiple papers.
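Scholarcy's summarization pipeline is proprietary, but the general idea behind extractive summarization is easy to sketch: score each sentence by its TF-IDF weight and keep the top few. The snippet below is a minimal illustration of that general technique (assuming scikit-learn and NumPy are installed), not Scholarcy's actual method.

```python
# Minimal extractive-summarization sketch (illustrative only; not Scholarcy's method).
# Assumes scikit-learn and NumPy are installed; sentence splitting is deliberately naive.
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize(text: str, n_sentences: int = 3) -> str:
    # Naive sentence splitting on ., ?, or ! followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if len(sentences) <= n_sentences:
        return text
    # Score each sentence by the sum of its TF-IDF term weights.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(tfidf.sum(axis=1)).ravel()
    # Keep the highest-scoring sentences, preserving their original order.
    top = sorted(np.argsort(scores)[-n_sentences:])
    return " ".join(sentences[i] for i in top)

abstract = (
    "Systematic reviews synthesize evidence from many studies. "
    "They are central to evaluating therapies. "
    "However, they are costly in time and money. "
    "Software tools can automate screening and extraction. "
    "This study compares web-based tools feature by feature."
)
print(summarize(abstract, n_sentences=2))
```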
However, it is important to exercise caution when relying solely on Scholarcy. While it provides a useful starting point, engaging with the original research papers is crucial to ensure a comprehensive understanding.
Scholarcy’s automated summarization may not capture the nuanced interpretations or contextual information presented in the full text.
Researchers should also be aware that certain types of documents, particularly those with heavy mathematical or technical content, may pose challenges for the tool.
Despite these considerations, Scholarcy remains a valuable resource for researchers seeking to enhance their literature review process and improve overall efficiency.
Scholarcy offers the following pricing plans:
- Browser Extension and Flashcards: Free
- Personal Library: $9.99
- Academic Institution License: $8K+

Final Thoughts
In conclusion, conducting a comprehensive literature review is a crucial aspect of any research project, and the availability of reliable and efficient tools can greatly facilitate this process for researchers. This article has explored the top 10 literature review tools that have gained popularity among researchers.
Moreover, the rise of AI-powered tools like Iris.ai and Scite.ai promises to revolutionize the literature review process by automating various tasks and enhancing research efficiency.
Ultimately, the choice of literature review tool depends on individual preferences and research needs, but the tools presented in this article serve as valuable resources to enhance the quality and productivity of research endeavors.
Researchers are encouraged to explore and utilize these tools to stay at the forefront of knowledge in their respective fields and contribute to the advancement of science and academia.
Q1. What are literature review tools for researchers?
Literature review tools for researchers are software or online platforms designed to assist researchers in efficiently conducting literature reviews. These tools help researchers find, organize, analyze, and synthesize relevant academic papers and other sources of information.
Q2. What criteria should researchers consider when choosing literature review tools?
When choosing literature review tools, researchers should consider factors such as the tool’s search capabilities, database coverage, user interface, collaboration features, citation management, annotation and highlighting options, integration with reference management software, and data extraction capabilities.
It’s also essential to consider the tool’s accessibility, cost, and technical support.
Q3. Are there any literature review tools specifically designed for systematic reviews or meta-analyses?
Yes, there are literature review tools that cater specifically to systematic reviews and meta-analyses, which involve a rigorous and structured approach to reviewing existing literature. These tools often provide features tailored to the specific needs of these methodologies, such as:
Screening and eligibility assessment: Systematic review tools typically offer functionalities for screening and assessing the eligibility of studies based on predefined inclusion and exclusion criteria. This streamlines the process of selecting relevant studies for analysis.
Data extraction and quality assessment: These tools often include templates and forms to facilitate data extraction from selected studies. Additionally, they may provide features for assessing the quality and risk of bias in individual studies.
Meta-analysis support: Some literature review tools include statistical analysis features that assist in conducting meta-analyses. These features can help calculate effect sizes, perform statistical tests, and generate forest plots or other visual representations of the meta-analytic results.
Reporting assistance: Many tools provide templates or frameworks for generating systematic review reports, ensuring compliance with established guidelines such as PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).
Q4. Can literature review tools help with organizing and annotating collected references?
Yes, literature review tools often come equipped with features to help researchers organize and annotate collected references. Some common functionalities include:
Reference management: These tools enable researchers to import references from various sources, such as databases or PDF files, and store them in a central library. They typically allow you to create folders or tags to organize references based on themes or categories.
Annotation capabilities: Many tools provide options for adding annotations, comments, or tags to individual references or specific sections of research articles. This helps researchers keep track of important information, highlight key findings, or note potential connections between different sources.
Full-text search: Literature review tools often offer full-text search functionality, allowing you to search within the content of imported articles or documents. This can be particularly useful when you need to locate specific information or keywords across multiple references.
Integration with citation managers: Some literature review tools integrate with popular citation managers like Zotero, Mendeley, or EndNote, allowing seamless transfer of references and annotations between platforms.
By leveraging these features, researchers can streamline the organization and annotation of their collected references, making it easier to retrieve relevant information during the literature review process.

5 software tools to support your systematic review processes
By Dr. Mina Kalantar on 19-Jan-2021 13:01:01

Systematic reviews are a reassessment of scholarly literature to facilitate decision making. This methodical approach of re-evaluating evidence was initially applied in healthcare, to set policies, create guidelines and answer medical questions.
Systematic reviews are large, complex projects and, depending on the purpose, they can be quite expensive to conduct. A team of researchers, data analysts and experts from various fields may collaborate to review and examine incredibly large numbers of research articles for evidence synthesis. Depending on the spectrum, systematic reviews often take at least 6 months, and sometimes upwards of 18 months to complete.
The main principles of transparency and reproducibility require a pragmatic approach in the organisation of the required research activities and detailed documentation of the outcomes. As a result, many software tools have been developed to help researchers with some of the tedious tasks required as part of the systematic review process.
The first generation of these software tools were produced to accommodate and manage collaborations, but gradually developed to help with screening literature and reporting outcomes. Some of these software packages were initially designed for medical and healthcare studies and have specific protocols and customised steps integrated for various types of systematic reviews. However, some are designed for general processing, and by extending the application of the systematic review approach to other fields, they are being increasingly adopted and used in software engineering, health-related nutrition, agriculture, environmental science, social sciences and education.
Software tools
There are various free and subscription-based tools to help with conducting a systematic review. Many of these tools are designed to assist with the key stages of the process, including title and abstract screening, data synthesis, and critical appraisal. Some are designed to facilitate the entire process of review, including protocol development, reporting of the outcomes and help with fast project completion.
As time goes on, more functions are being integrated into such software tools. Technological advancement has allowed for more sophisticated and user-friendly features, including visual graphics for pattern recognition and linking multiple concepts. The idea is to digitalise the cumbersome parts of the process to increase efficiency, thus allowing researchers to focus their time and efforts on assessing the rigorousness and robustness of the research articles.
This article introduces commonly used systematic review tools that are relevant to food research and related disciplines, which can be used in a similar context to the process in healthcare disciplines.
These reviews are based on IFIS' internal research; they are unbiased and not affiliated with the companies.

Covidence
This online platform is a core component of the Cochrane toolkit, supporting parts of the systematic review process, including title/abstract and full-text screening, documentation, and reporting.
The Covidence platform enables collaboration of the entire systematic reviews team and is suitable for researchers and students at all levels of experience.
From a user perspective, the interface is intuitive, and the citation screening is directed step-by-step through a well-defined workflow. Imports and exports are straightforward, with easy export options to Excel and CSV.
Access is free for Cochrane authors (a single reviewer), and Cochrane provides a free trial to other researchers in healthcare. Universities can also subscribe on an institutional basis.
Rayyan
Rayyan is a free and open access web-based platform funded by the Qatar Foundation, a non-profit organisation supporting education and community development initiatives. Rayyan is used to screen and code literature through a systematic review process.
Unlike Covidence, Rayyan does not follow a standard SR workflow and simply helps with citation screening. It is accessible through a mobile application with compatibility for offline screening. The web-based platform is known for its accessible user interface, with easy and clear export options.
Function comparison of 5 software tools to support the systematic review process
EPPI-Reviewer
EPPI-Reviewer is a web-based software programme developed by the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) at the UCL Institute of Education, London.
It provides comprehensive functionalities for coding and screening. Users can create different levels of coding in a code set tool for clustering, screening, and administration of documents. EPPI-Reviewer allows direct search and import from PubMed, and search results from other databases can be imported in different formats. It stores references and automatically identifies and removes duplicates. EPPI-Reviewer also supports full-text screening, text mining, meta-analysis, and the export of data into different types of reports.
There is no limit on the concurrent use of the software or the number of articles being reviewed. Cochrane reviewers can access EPPI-Reviewer using their Cochrane subscription details.
EPPI-Centre has other tools for facilitating the systematic review process, including coding guidelines and data management tools.
CADIMA
CADIMA is a free, online, open access review management tool, developed to facilitate research synthesis and structure documentation of the outcomes.
The Julius Institute and the Collaboration for Environmental Evidence established the software programme to support and guide users through the entire systematic review process, including protocol development, literature searching, study selection, critical appraisal, and documentation of the outcomes. The flexibility in choosing the steps also makes CADIMA suitable for conducting systematic mapping and rapid reviews.
CADIMA was initially developed for research questions in agriculture and environment but it is not limited to these, and as such, can be used for managing review processes in other disciplines. It enables users to export files and work offline.
The software allows for statistical analysis of the collated data using the R statistical software. Unlike EPPI-Reviewer, CADIMA does not have a built-in search engine to allow for searching in literature databases like PubMed.
DistillerSR
DistillerSR is an online software platform maintained by the Canadian company Evidence Partners, which specialises in literature review automation. DistillerSR provides a collaborative platform for every stage of literature review management. The framework is flexible and can accommodate literature reviews of different sizes, and it is configurable to different data curation procedures, workflows, and reporting standards. The platform integrates the necessary features for screening, quality assessment, data extraction, and reporting.
The software uses artificial intelligence (AI)-enabled technologies for priority screening, shortening the screening process by reranking the most relevant references nearer to the top of the queue. It can also use AI as a second reviewer in quality-control checks of studies screened by human reviewers. DistillerSR is used to manage systematic reviews in various medical disciplines, surveillance, pharmacovigilance, and public health reviews, including food and nutrition topics. The software does not support statistical analyses, but it provides configurable forms in standard formats for data extraction.
DistillerSR allows direct search and import of references from PubMed. It also provides an add-on feature called LitConnect, which can be set to automatically import newly published references from data providers to keep reviews up to date while they are in progress.
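DistillerSR's screening automation is proprietary, so the sketch below is only a generic illustration of the priority-screening idea: train a simple classifier on the titles and abstracts reviewers have already labeled, then rerank the unscreened records by predicted relevance so likely includes surface first. The record texts, labels, and variable names are hypothetical.

```python
# Illustrative priority-screening sketch (not DistillerSR's actual algorithm).
# Assumes scikit-learn is installed; `labeled` and `unlabeled` are hypothetical inputs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Records already screened by human reviewers: (title + abstract, include?) pairs.
labeled = [
    ("Pilates for chronic low back pain: a randomized trial", 1),
    ("Effects of core stability exercise on back pain", 1),
    ("Deep learning for image segmentation", 0),
    ("A survey of cloud computing architectures", 0),
]
# Records not yet screened.
unlabeled = [
    "Exercise therapy versus usual care for lumbago",
    "Quantum error correction codes",
]

texts, labels = zip(*labeled)
vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X_train, labels)

# Rerank the unscreened records by predicted probability of inclusion.
probs = clf.predict_proba(vectorizer.transform(unlabeled))[:, 1]
for prob, text in sorted(zip(probs, unlabeled), reverse=True):
    print(f"{prob:.2f}  {text}")
```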
The Systematic Review Toolbox is a web-based catalogue of various tools, including software packages which can assist with single or multiple tasks within the evidence synthesis process. Researchers can run a quick search or tailor a more sophisticated search by choosing their approach, budget, discipline, and preferred support features, to find the right tools for their research.
If you enjoyed this blog post, you may also be interested in our recently published blog post addressing the difference between a systematic review and a systematic literature review.



COLLABORATE ON YOUR REVIEWS WITH ANYONE, ANYWHERE, ANYTIME

Save precious time and maximize your productivity with a Rayyan membership. Receive training, priority support, and access features to complete your systematic reviews efficiently.

Rayyan Teams+ makes your job easier. It includes VIP Support, AI-powered in-app help, and powerful tools to create, share and organize systematic reviews, review teams, searches, and full-texts.

RESEARCHERS
Rayyan makes collaborative systematic reviews faster, easier, and more convenient. Training, VIP support, and access to new features maximize your productivity. Get started now!
Over 500 million reference articles reviewed by research teams, and counting...
Intelligent, scalable and intuitive.
Rayyan understands language, learns from your decisions and helps you work quickly through even your largest systematic literature reviews.
Solutions for Organizations and Businesses

Rayyan Enterprise and Rayyan Teams+ make it faster, easier and more convenient for you to manage your research process across your organization.
- Accelerate your research across your team or organization and save valuable researcher time.
- Build and preserve institutional assets, including literature searches, systematic reviews, and full-text articles.
- Onboard team members quickly with access to group trainings for beginners and experts.
- Receive priority support to stay productive when questions arise.
Join now to learn why Rayyan is trusted by more than 250,000 researchers.
Individual Plans and Team Plans
For early career researchers just getting started with research.
Free forever
- 3 Active Reviews
- Invite Unlimited Reviewers
- Import Directly from Mendeley
- Industry Leading De-Duplication
- 5-Star Relevance Ranking
- Advanced Filtration Facets
- Mobile App Access
- 100 Decisions on Mobile App
- Standard Support
- Revoke Reviewer
- Online Training
- PICO Highlights & Filters
- PRISMA (Beta)
- Auto-Resolver (Beta)
- Multiple Teams & Management Roles
- Monitor & Manage Users, Searches, Reviews, Full Texts
- Onboarding and Regular Training
Professional
For researchers who want more tools for research acceleration.
Per month billed annually
14-DAY FREE TRIAL
- Unlimited Active Reviews
- Unlimited Decisions on Mobile App
- Priority Support
For students who want more tools to accelerate their research.
Per month billed annually
Billed monthly
For a team that wants professional licenses for all members.
Per-user, per month, billed annually
- Single Team
- High Priority Support
For teams that want support and advanced tools for members.
- Multiple Teams
- Management Roles
For organizations who want access to all of their members.
Annual Subscription
Contact Sales
- Organizational Ownership
- For an organization or a company
- Access to all the premium features such as PICO Filters, Auto-Resolver, PRISMA and Mobile App
- Store and Reuse Searches and Full Texts
- A management console to view, organize and manage users, teams, review projects, searches and full texts
- Highest tier of support – Support via email, chat and AI-powered in-app help
- GDPR Compliant
- Single Sign-On
- API Integration
- Training for Experts
- Training Sessions for Students Each Semester
- More options for secure access control
ANNUAL ONLY
Per-user, billed monthly
Rayyan Subscription
Membership starts with 2 users. You can select the number of additional members that you’d like to add to your membership.
Total amount:
Your billing cycle will start after your free trial ends!
Great usability and functionality. Rayyan has saved me countless hours. I even received timely feedback from staff when I did not understand the capabilities of the system, and was pleasantly surprised with the time they dedicated to my problem. Thanks again!
This is a great piece of software. It has made the independent viewing process so much quicker. The whole thing is very intuitive.
Rayyan makes ordering articles and extracting data very easy. A great tool for undertaking literature and systematic reviews!
Excellent interface to do title and abstract screening. It also helps to keep track of the reasons for exclusion from the review, and in a blinded manner too.
Rayyan is a fantastic tool to save time and improve systematic reviews!!! It has changed my life as a researcher!!! thanks
Easy to use, friendly, has everything you need for cooperative work on the systematic review.
Rayyan makes life easy in every way when conducting a systematic review and it is easy to use.

Accelerate your research with the best systematic literature review tools
The ideal literature review tool helps you make sense of the most important insights in your research field. ATLAS.ti empowers researchers to perform powerful and collaborative analysis using the leading software for literature review.

Finalize your literature review faster with comfort
ATLAS.ti makes it easy to manage, organize, and analyze articles, PDFs, excerpts, and more for your projects. Conduct a deep systematic literature review and get the insights you need with a comprehensive toolset built specifically for your research projects.

Figure out the "why" behind your participants' motivations
Understand the behaviors and emotions that are driving your focus group participants. With ATLAS.ti, you can transform your raw data and turn it into qualitative insights you can learn from. Easily determine user intent in the same spot you're deciphering your overall focus group data.

Visualize your research findings like never before
We make it simple to present your analysis results with meaningful charts, networks, and diagrams. Instead of figuring out how to communicate the insights you just unlocked, we enable you to leverage easy-to-use visualizations that support your goals.

Everything you need to elevate your literature review
Import and organize literature data.
Import and analyze any type of text content – ATLAS.ti supports all standard text and transcription files such as Word and PDF.
Analyze with ease and speed
Utilize easy-to-learn workflows that save valuable time, such as auto coding, sentiment analysis, team collaboration, and more.
Leverage AI-driven tools
Make efficiency a priority and let ATLAS.ti do your work with AI-powered research tools and features for faster results.
Visualize and present findings
With just a few clicks, you can create meaningful visualizations like charts, word clouds, tables, networks, among others for your literature data.
The faster way to make sense of your literature review. Try it for free, today.
A literature review analyzes the most current research within a research area. A literature review consists of published studies from many sources:
- Peer-reviewed academic publications
- Full-length books
- University bulletins
- Conference proceedings
- Dissertations and theses
Literature reviews allow researchers to:
- Summarize the state of the research
- Identify unexplored research inquiries
- Recommend practical applications
- Critique currently published research
Literature reviews are either standalone publications or part of a paper as background for an original research project. A literature review, as a section of a more extensive research article, summarizes the current state of the research to justify the primary research described in the paper.
For example, a researcher may have reviewed the literature on a new supplement's health benefits and concluded that more research needs to be conducted on those with a particular condition. This research gap warrants a study examining how this understudied population reacted to the supplement. Researchers need to establish this research gap through a literature review to persuade journal editors and reviewers of the value of their research.
Consider a literature review as a typical research publication presenting a study, its results, and the salient points scholars can infer from the study. The only significant difference is that a literature review treats existing literature as the research data to collect and analyze. From that analysis, a literature review can suggest new inquiries to pursue.
Identify a focus
Similar to a typical study, a literature review should have a research question or questions that analysis can answer. This sort of inquiry typically targets a particular phenomenon, population, or even research method to examine how different studies have looked at the same thing differently. A literature review, then, should center the literature collection around that focus.
Collect and analyze the literature
With a focus in mind, a researcher can collect studies that provide relevant information for that focus. They can then analyze the collected studies by finding and identifying patterns or themes that occur frequently. This analysis allows the researcher to point out what the field has frequently explored or, on the other hand, overlooked.
Suggest implications
The literature review allows the researcher to argue a particular point through the evidence provided by the analysis. For example, suppose the analysis makes it apparent that the published research on people's sleep patterns has not adequately explored the connection between sleep and a particular factor (e.g., television-watching habits, indoor air quality). In that case, the researcher can argue that further study can address this research gap.
External requirements aside (e.g., many academic journals have a word limit of 6,000-8,000 words), a literature review as a standalone publication is as long as necessary to allow readers to understand the current state of the field. Even if it is just a section in a larger paper, a literature review is long enough to allow the researcher to justify the study that is the paper's focus.
Note that a literature review needs only to incorporate a representative number of studies relevant to the research inquiry. For term papers in university courses, 10 to 20 references might be appropriate for demonstrating analytical skills. Published literature reviews in peer-reviewed journals might have 40 to 50 references. One of the essential goals of a literature review is to persuade readers that you have analyzed a representative segment of the research you are reviewing.
Researchers can find published research from various online sources:
- Journal websites
- Research databases
- Search engines (Google Scholar, Semantic Scholar)
- Research repositories
- Social networking sites (Academia, ResearchGate)
Many journals make articles freely available under the term "open access," meaning that there are no restrictions to viewing and downloading such articles. Otherwise, collecting research articles from restricted journals usually requires access from an institution such as a university or a library.
Evidence of a rigorous literature review is more important than the word count or the number of articles that undergo data analysis. Especially when writing for a peer-reviewed journal, it is essential to consider how to demonstrate research rigor in your literature review to persuade reviewers of its scholarly value.
Select field-specific journals
The most significant research relevant to your field focuses on a narrow set of journals similar in aims and scope. Consider who the most prominent scholars in your field are and determine which journals publish their research or have them as editors or reviewers. Journals tend to look favorably on systematic reviews that include articles they have published.
Incorporate recent research
Recently published studies have greater value in determining the gaps in the current state of research. Older research is likely to have encountered challenges and critiques that may render their findings outdated or refuted. What counts as recent differs by field; start by looking for research published within the last three years and gradually expand to older research when you need to collect more articles for your review.
Consider the quality of the research
Literature reviews are only as strong as the quality of the studies that the researcher collects. You can judge any particular study by many factors, including:
- the quality of the article's journal
- the article's research rigor
- the timeliness of the research
The critical point here is that you should consider more than just a study's findings or research outputs when including research in your literature review.
Narrow your research focus
Ideally, the articles you collect for your literature review have something in common, such as a research method or research context. For example, if you are conducting a literature review about teaching practices in high school contexts, it is best to narrow your literature search to studies focusing on high school. You should consider expanding your search to junior high school and university contexts only when there are not enough studies that match your focus.
You can create a project in ATLAS.ti for keeping track of your collected literature. ATLAS.ti allows you to view and analyze full text articles and PDF files in a single project. Within projects, you can use document groups to separate studies into different categories for easier and faster analysis.
For example, a researcher with a literature review that examines studies across different countries can create document groups labeled "United Kingdom," "Germany," and "United States," among others. A researcher can also use ATLAS.ti's global filters to narrow analysis to a particular set of studies and gain insights about a smaller set of literature.
ATLAS.ti allows you to search, code, and analyze text documents and PDF files. You can treat a set of research articles like other forms of qualitative data. The codes you apply to your literature collection allow for analysis through many powerful tools in ATLAS.ti:
- Code Co-Occurrence Explorer
- Code Co-Occurrence Table
- Code-Document Table
Other tools in ATLAS.ti employ machine learning to facilitate parts of the coding process for you. Some of our software tools that are effective for analyzing literature include:
- Named Entity Recognition
- Opinion Mining
- Sentiment Analysis
As long as your documents are text documents or text-enabled PDF files, ATLAS.ti’s automated tools can provide essential assistance in the data analysis process.
7 open source tools to make literature reviews easy

A good literature review is critical for academic research in any field, whether it is for a research article, a critical review for coursework, or a dissertation. In a recent article, I presented detailed steps for doing a literature review using open source software.
The following is a brief summary of seven free and open source software tools described in that article that will make your next literature review much easier.
1. GNU Linux
Most literature reviews are accomplished by graduate students working in research labs in universities. For absurd reasons, graduate students often have the worst computers on campus. They are often old, slow, and clunky Windows machines that have been discarded and recycled from the undergraduate computer labs. Installing a flavor of GNU Linux will breathe new life into these outdated PCs. There are more than 100 distributions, all of which can be downloaded and installed for free on computers. Most popular Linux distributions come with a "try-before-you-buy" feature. For example, with Ubuntu you can make a bootable USB stick that allows you to test-run the Ubuntu desktop experience without interfering in any way with your PC configuration. If you like the experience, you can use the stick to install Ubuntu on your machine permanently.
2. Firefox
Linux distributions generally come with a free web browser, and the most popular is Firefox. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why.
3. Unpaywall
Often one of the hardest parts of a literature review is gaining access to the papers you want to read for your review. The unintended consequence of copyright restrictions and paywalls is that they have narrowed access to the peer-reviewed literature to the point that even Harvard University is challenged to pay for it. Fortunately, there are a lot of open access articles—about a third of the literature is free (and the percentage is growing). Unpaywall is a Firefox plugin that enables researchers to click a green tab on the side of the browser and skip the paywall on millions of peer-reviewed journal articles. This makes finding accessible copies of articles much faster than searching each database individually. Unpaywall is fast, free, and legal, as it accesses many of the open access sites that I covered in my paper on using open source in lit reviews.
4. Zotero
Formatting references is the most tedious of academic tasks. Zotero can save you from ever doing it again. It operates as an Android app, desktop program, and a Firefox plugin (which I recommend). It is a free, easy-to-use tool to help you collect, organize, cite, and share research. It replaces the functionality of proprietary packages such as RefWorks, Endnote, and Papers for zero cost. Zotero can auto-add bibliographic information directly from websites. In addition, it can scrape bibliographic data from PDF files. Notes can be easily added on each reference. Finally, and most importantly, it can import and export the bibliography databases in all publishers' various formats. With this feature, you can export bibliographic information to paste into a document editor for a paper or thesis—or even to a wiki for dynamic collaborative literature reviews (see tool #7 for more on the value of wikis in lit reviews).
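For scripted access to the same library, Zotero also exposes a web API with a community Python client, pyzotero. Here is a minimal sketch, assuming pyzotero is installed and using placeholder credentials from your own Zotero account settings:

```python
# Minimal pyzotero sketch (assumes `pip install pyzotero`; LIBRARY_ID and API_KEY
# are placeholders generated in your own Zotero account settings).
from pyzotero import zotero

LIBRARY_ID = "1234567"      # numeric user ID from zotero.org/settings/keys
API_KEY = "your-api-key"    # private key created in the same settings page

zot = zotero.Zotero(LIBRARY_ID, "user", API_KEY)

# List the ten most recent top-level items with their titles and DOIs.
for item in zot.top(limit=10):
    data = item["data"]
    print(data.get("title", "<untitled>"), data.get("DOI", ""))
```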
5. LibreOffice
Your thesis or academic article can be written conventionally with the free office suite LibreOffice, which operates similarly to Microsoft's Office products but respects your freedom. Zotero has a word processor plugin to integrate directly with LibreOffice. LibreOffice is more than adequate for the vast majority of academic paper writing.
6. LaTeX
If LibreOffice is not enough for your layout needs, you can take your paper writing one step further with LaTeX, a high-quality typesetting system specifically designed for producing technical and scientific documentation. LaTeX is particularly useful if your writing has a lot of equations in it. Also, Zotero libraries can be directly exported to BibTeX files for use with LaTeX.
7. MediaWiki
If you want to leverage the open source way to get help with your literature review, you can facilitate a dynamic collaborative literature review. A wiki is a website that allows anyone to add, delete, or revise content directly using a web browser. MediaWiki is free software that enables you to set up your own wikis.
Researchers can (in decreasing order of complexity): 1) set up their own research group wiki with MediaWiki, 2) utilize wikis already established at their universities (e.g., Aalto University), or 3) use wikis dedicated to areas that they research. For example, several university research groups that focus on sustainability (including mine) use Appropedia, which is set up for collaborative solutions on sustainability, appropriate technology, poverty reduction, and permaculture.
Using a wiki makes it easy for anyone in the group to keep track of the status of and update literature reviews (both current and older or from other researchers). It also enables multiple members of the group to easily collaborate on a literature review asynchronously. Most importantly, it enables people outside the research group to help make a literature review more complete, accurate, and up-to-date.
Wrapping up
Free and open source software can cover the entire lit review toolchain, meaning there's no need for anyone to use proprietary solutions. Do you use other libre tools for making literature reviews or other academic work easier? Please let us know your favorites in the comments.


Systematic Reviews and Meta Analysis
Software and Tools
Covidence is a web-based tool for managing the review workflow. Tools for screening records, managing full-text articles, and extracting data make the process much less burdensome. Covidence currently is available for Harvard investigators with a hms.harvard.edu, hsdm.harvard.edu, or hsph.harvard.edu email address. To make use of Harvard's institutional account:
- If you don't already have a Covidence account, sign up for one at: https://www.covidence.org/signups/new Make sure you use your hms, hsph, or hsdm Harvard email address.
- Then associate your account with Harvard's institutional access at: https://www.covidence.org/organizations/58RXa/signup Use the same address you used in step 1 and follow the instructions in the resulting email.
Once your account is linked to the Harvard account, you will have access to the full range of Covidence features and can create unlimited reviews. You can do this when logged in to your individual Covidence account by going to your account dashboard page and clicking the 'Start a new review' button. This will take you to a new page where you can select the Harvard account to set up the new review.
Rayyan is an alternative review manager that has a free option. It has ranking and sorting options lacking in Covidence, but takes more time to learn. We do not provide support for Rayyan.
Other Review Software Systems
There are a number of tools available to help a team manage the systematic review process. Notable examples include Eppi-Reviewer, DistillerSR, and PICO Portal. These are subscription-based services but in some cases offer a trial project. Use the Systematic Review Toolbox to explore more options.
Citation Managers
Citation managers like EndNote or Zotero can be used to collect, manage, and de-duplicate bibliographic records and full-text documents, but they are considerably more painful to use than specialized systematic review applications. Of course, they are handy for writing up your report.
Need more, or looking for alternatives? See the SR Toolbox, a searchable database of tools to support systematic reviews and meta-analysis.
Writing in the Health and Social Sciences: Literature Reviews and Synthesis Tools
Systematic Literature Reviews: Steps & Resources
These steps for conducting a systematic literature review are listed below.
Also see subpages for more information about:
- The different types of literature reviews, including systematic reviews and other evidence synthesis methods
- Tools & Tutorials
Literature Review & Systematic Review Steps
- Develop a Focused Question
- Scope the Literature (Initial Search)
- Refine & Expand the Search
- Limit the Results
- Download Citations
- Abstract & Analyze
- Create Flow Diagram
- Synthesize & Report Results
1. Develop a Focused Question
Consider the PICO Format: Population/Problem, Intervention, Comparison, Outcome
Focus on defining the Population or Problem and Intervention (don't narrow by Comparison or Outcome just yet!)
"What are the effects of the Pilates method for patients with low back pain?"
Tools & Additional Resources:
- PICO Question Help
- Stillwell, S. B., Fineout-Overholt, E., Melnyk, B. M., & Williamson, K. M. (2010). Evidence-based practice, step by step: Asking the clinical question. AJN, The American Journal of Nursing, 110(3), 58–61. doi: 10.1097/01.NAJ.0000368959.11129.79
2. Scope the Literature
A "scoping search" investigates the breadth and/or depth of the initial question or may identify a gap in the literature.
Eligible studies may be located by searching in:
- Background sources (books, point-of-care tools)
- Article databases
- Trial registries
- Grey literature
- Cited references
- Reference lists
When searching, if possible, translate terms to controlled vocabulary of the database. Use text word searching when necessary.
Use Boolean operators to connect search terms:
- Combine separate concepts with AND (resulting in a narrower search)
- Connect synonyms with OR (resulting in an expanded search)
Search: pilates AND ("low back pain" OR backache)
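The same Boolean string can also be run programmatically against PubMed. Here is a minimal sketch using Biopython's Entrez module (assuming Biopython is installed; the email address is a placeholder that NCBI asks you to provide):

```python
# Minimal sketch: run the example Boolean search against PubMed via NCBI E-utilities.
# Assumes Biopython is installed; replace the email with your own (NCBI requires one).
from Bio import Entrez

Entrez.email = "your.name@example.edu"  # placeholder contact address

query = 'pilates AND ("low back pain" OR backache)'
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print("Total hits:", record["Count"])
print("First PMIDs:", record["IdList"])
```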
Video Tutorials - Translating PICO Questions into Search Queries
- Translate Your PICO Into a Search in PubMed (YouTube, Carrie Price, 5:11)
- Translate Your PICO Into a Search in CINAHL (YouTube, Carrie Price, 4:56)
3. Refine & Expand Your Search
Expand your search strategy with synonymous search terms harvested from:
- database thesauri
- reference lists
- relevant studies
Example:
(pilates OR exercise movement techniques) AND ("low back pain" OR backache* OR sciatica OR lumbago OR spondylosis)
As you develop a final, reproducible strategy for each database, save your strategies in:
- a personal database account (e.g., MyNCBI for PubMed)
- a search strategy tracker document (log in with your NYU credentials, then open and "Make a Copy" to create your own tracker for your literature search strategies)
4. Limit Your Results
Use database filters to limit your results based on your defined inclusion/exclusion criteria. In addition to relying on the databases' categorical filters, you may also need to manually screen results.
- Limit to article type, e.g., "randomized controlled trial" OR multicenter study
- Limit by publication years, age groups, language, etc.
NOTE: Many databases allow you to filter to "Full Text Only". This filter is not recommended. It excludes articles if their full text is not available in that particular database (CINAHL, PubMed, etc.), but if the article is relevant, it is important that you are able to read its title and abstract, regardless of 'full text' status. The full text is likely to be accessible through another source (a different database, or Interlibrary Loan).
- Filters in PubMed
- CINAHL Advanced Searching Tutorial
5. Download Citations
Selected citations and/or entire sets of search results can be downloaded from the database into a citation management tool. If you are conducting a systematic review that will require reporting according to PRISMA standards, a citation manager can help you keep track of the number of articles that came from each database, as well as the number of duplicate records.
In Zotero, you can create a Collection for the combined results set, and sub-collections for the results from each database you search. You can then use Zotero's 'Duplicate Items' function to find and merge duplicate records.
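The underlying duplicate check is straightforward: compare records on a normalized DOI and fall back to a normalized title. The sketch below is illustrative only; the record structure is an assumed, simplified stand-in for a real database export.

```python
# Hedged de-duplication sketch over downloaded citation records.
# The dict structure is an assumed, simplified stand-in for a real RIS/CSV export.
import re

def dedupe(records):
    seen, unique = set(), []
    for rec in records:
        doi = (rec.get("doi") or "").strip().lower()
        title = re.sub(r"\W+", " ", (rec.get("title") or "").lower()).strip()
        key = doi if doi else title
        if key and key in seen:
            continue  # duplicate record; skip it
        seen.add(key)
        unique.append(rec)
    return unique

records = [
    {"title": "Pilates for low back pain", "doi": "10.1000/xyz123"},
    {"title": "Pilates for Low Back Pain.", "doi": "10.1000/XYZ123"},  # same DOI, different case
    {"title": "Exercise therapy for lumbago", "doi": ""},
]
print(len(dedupe(records)))  # -> 2
```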

- Citation Managers - General Guide
6. Abstract and Analyze
- Migrate citations to data collection/extraction tool
- Screen Title/Abstracts for inclusion/exclusion
- Screen and appraise full text for relevance and methods
- Resolve disagreements by consensus
Covidence is a web-based tool that enables you to work with a team to screen titles/abstracts and full text for inclusion in your review, as well as extract data from the included studies.

- Covidence Support
- Critical Appraisal Tools
- Data Extraction Tools
7. Create Flow Diagram
The PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) flow diagram is a visual representation of the flow of records through different phases of a systematic review. It depicts the number of records identified, included and excluded. It is best used in conjunction with the PRISMA checklist.

Example from: Stotz, S. A., McNealy, K., Begay, R. L., DeSanto, K., Manson, S. M., & Moore, K. R. (2021). Multi-level diabetes prevention and treatment interventions for Native people in the USA and Canada: A scoping review. Current Diabetes Reports, 21(11), 46. https://doi.org/10.1007/s11892-021-01414-3
- PRISMA Flow Diagram Generator (ShinyApp.io, Haddaway et al.)
- PRISMA Diagram Templates (Word and PDF)
- Make a copy of the file to fill out the template
- Image can be downloaded as PDF, PNG, JPG, or SVG
- Covidence generates a PRISMA diagram that is automatically updated as records move through the review phases
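Whichever generator or template you use, the diagram is driven by a small set of counts that must be internally consistent. A sketch with hypothetical numbers:

```python
# Record counts for a PRISMA 2020 flow diagram (illustrative values only).
counts = {
    "records_identified": 412,   # total across all database searches
    "duplicates_removed": 97,
    "records_screened": 315,     # title/abstract screening
    "records_excluded": 260,
    "full_text_assessed": 55,
    "full_text_excluded": 31,    # report reasons for exclusion, per PRISMA
    "studies_included": 24,
}
# The boxes in the diagram must add up.
assert counts["records_identified"] - counts["duplicates_removed"] == counts["records_screened"]
assert counts["records_screened"] - counts["records_excluded"] == counts["full_text_assessed"]
assert counts["full_text_assessed"] - counts["full_text_excluded"] == counts["studies_included"]
```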
8. Synthesize & Report Results
There are a number of reporting guidelines available to guide the synthesis and reporting of results in systematic literature reviews.
It is common to organize findings in a matrix, also known as a Table of Evidence (ToE).
- Reporting Guidelines for Systematic Reviews
- Download a sample template of a health sciences review matrix (GoogleSheets)
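As a minimal illustration (the studies and columns below are hypothetical), a review matrix is simply one row per included study and one column per attribute you want to compare:

```python
# A tiny Table of Evidence: one dictionary per included study.
evidence_matrix = [
    {"study": "Doe 2021", "design": "RCT", "n": 80,
     "intervention": "Pilates, 8 weeks", "outcome": "Pain (VAS)", "finding": "-1.9 vs control"},
    {"study": "Roe 2019", "design": "Cohort", "n": 145,
     "intervention": "General exercise", "outcome": "Disability (ODI)", "finding": "No significant change"},
]
for row in evidence_matrix:
    print(f"{row['study']:<10} {row['design']:<7} n={row['n']:<4} {row['finding']}")
```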
Steps modified from:
Cook, D. A., & West, C. P. (2012). Conducting systematic reviews in medical education: A stepwise approach. Medical Education, 46(10), 943–952.
JMIR Medical Informatics, 10(5), May 2022
Web-Based Software Tools for Systematic Literature Review in Medicine: Systematic Search and Feature Analysis
Kathryn Cowie, Asad Rahmatullah, Nicole Hardy, Kevin Kallmes
Nested Knowledge, Saint Paul, MN, United States
Associated Data
Supplementary Table 1: Screening Decisions for SR (systematic review) Tools Reviewed in Full.
Supplementary Table 2: Inter-observer Agreement across (1) Systematic Review (SR) Tools and (2) Features Assessed.
Systematic reviews (SRs) are central to evaluating therapies but have high costs in terms of both time and money. Many software tools exist to assist with SRs, but most tools do not support the full process, and transparency and replicability of SR depend on performing and presenting evidence according to established best practices.
This study aims to provide a basis for comparing and selecting between web-based software tools that support SR, by conducting a feature-by-feature comparison of SR tools.
We searched for SR tools by reviewing any such tool listed in the SR Toolbox, previous reviews of SR tools, and qualitative Google searching. We included all SR tools that were currently functional and required no coding, and excluded reference managers, desktop applications, and statistical software. The list of features to assess was populated by combining all features assessed in 4 previous reviews of SR tools; we also added 5 features (manual addition, screening automation, dual extraction, living review, and public outputs) that were independently noted as best practices or enhancements of transparency and replicability. Then, 2 reviewers assigned binary present or absent assessments to all SR tools with respect to all features, and a third reviewer adjudicated all disagreements.
Of the 53 SR tools found, 55% (29/53) were excluded, leaving 45% (24/53) for assessment. In total, 30 features were assessed across 6 classes, and the interobserver agreement was 86.46%. Giotto Compliance (27/30, 90%), DistillerSR (26/30, 87%), and Nested Knowledge (26/30, 87%) support the most features, followed by EPPI-Reviewer Web (25/30, 83%), LitStream (23/30, 77%), JBI SUMARI (21/30, 70%), and SRDB.PRO (VTS Software) (21/30, 70%). Fewer than half of all the features assessed are supported by 7 tools: RobotAnalyst (National Centre for Text Mining), SRDR (Agency for Healthcare Research and Quality), SyRF (Systematic Review Facility), Data Abstraction Assistant (Center for Evidence Synthesis in Health), SR Accelerator (Institute for Evidence-Based Healthcare), RobotReviewer (RobotReviewer), and COVID-NMA (COVID-NMA). Notably, of the 24 tools, only 10 (42%) support direct search, only 7 (29%) offer dual extraction, and only 13 (54%) offer living/updatable reviews.
Conclusions
DistillerSR, Nested Knowledge, and EPPI-Reviewer Web each offer a high density of SR-focused web-based tools. By transparent comparison and discussion regarding SR tool functionality, the medical community can both choose among existing software offerings and note the areas of growth needed, most notably in the support of living reviews.
Introduction
Systematic Review Costs and Gaps
According to the Centre for Evidence-Based Medicine, systematic reviews (SRs) of high-quality primary studies represent the highest level of evidence for evaluating therapeutic performance [ 1 ]. However, although vital to evidence-based medical practice, SRs are time-intensive, taking an average of 67.3 weeks to complete [ 2 ] and costing leading research institutions over US $141,000 in labor per published review [ 3 ]. Owing to the high costs in researcher time and complexity, up-to-date reviews cover only 10% to 17% of primary evidence in a representative analysis of the lung cancer literature [ 4 ]. Although many qualitative and noncomprehensive publications provide some level of summative evidence, SRs—defined as reviews of “evidence on a clearly formulated question that use systematic and explicit methods to identify, select and critically appraise relevant primary research, and to extract and analyze data from the studies that are included” [ 5 ]—are distinguished by both their structured approach to finding, filtering, and extracting from underlying articles and the resulting comprehensiveness in answering a concrete medical question.
Software Tools for Systematic Review
Software tools that assist with central SR activities—retrieval (searching or importing records), appraisal (screening of records), synthesis (content extraction from underlying studies), and documentation/output (presentation of SR outputs)—have shown promise in reducing the amount of effort needed in a given review [ 6 ]. Because of the time savings of web-based software tools, institutions and individual researchers engaged in evidence synthesis may benefit from using these tools in the review process [ 7 ].
Existing Studies of Software Tools
However, choosing among the existing software tools presents a further challenge to researchers; in the SR Toolbox [ 8 ], there are >240 tools indexed, of which 224 support health care reviews. Vitally, few of these tools can be used for each of the steps of SR, so comparing the features available through each tool can assist researchers in selecting an SR tool to use. This selection can be informed by feature analysis; for example, a previously published feature analysis compared 15 SR tools [ 9 ] across 21 subfeatures of interest and found that DistillerSR (Evidence Partners), EPPI-Reviewer (EPPI-Centre), SWIFT-Active Screener (Sciome), and Covidence (Cochrane) support the greatest number of features as of 2019. Harrison et al [ 10 ], Marshall et al [ 11 ], and Kohl et al [ 12 ] have completed similar analyses, but each feature assessment selected a different set of features and used different qualitative feature assessment methods, and none covered all SR tools currently available.
The SR tool landscape continues to evolve; as existing tools are updated, new software is made available to researchers, and new feature classes are developed. For instance, despite the growth of calls for living SRs, that is, reviews where the outputs are updated as new primary evidence becomes available, no feature analysis has yet covered this novel capability. Furthermore, the leading feature analyses [ 9 - 12 ] have focused on the screening phase of review, meaning that no comparison of data extraction capabilities has yet been published.
Feature Analysis of Systematic Review Tools
The authors, who are also the developers of the Nested Knowledge platform for SR and meta-analysis (Nested Knowledge, Inc) [ 13 ], have noted the lack of SR feature comparison among new tools and across all feature classes (retrieval, appraisal, synthesis, documentation/output, administration of reviews, and access/support features). To provide an updated feature analysis comparing SR software tools, we performed a feature analysis covering the full life cycle of SR across software tools.
Search Strategy
We searched the SR tools for assessment in 3 ways: first, we identified any SR tool that was published in existing reviews of SR tools (Table S1 in Multimedia Appendix 1 ). Second, we reviewed SR Toolbox [ 8 ], a repository of indexed software tools that support the SR process. Third, we performed a Google search for Systematic review software and identified any software tool that was among the first 5 pages of results. Furthermore, for any library resource pages that were among the search results, we included any SR tools mentioned by the library resource page that met our inclusion criteria. The search was completed between June and August 2021. Four additional tools, namely SRDR+ (Agency for Healthcare Research and Quality), Systematic Review Assistant-Deduplication Module (Institute for Evidence-Based Healthcare), Giotto Compliance, and Robotsearch (Robotsearch), were assessed in December 2021 following reviewer feedback.
Selection of Software Tools
The inclusion and exclusion criteria were determined by 3 authors (KK, KH, and KC). Among our search results, we queued up all software tools that had descriptions meeting our inclusion criteria for full examination of the software in a second round of review. We included any that were functioning web-based tools that require no coding by the user to install or operate, so long as they were used to support the SR process and can be used to review clinical or preclinical literature. The no coding requirement was established because the target audience of this review is medical researchers who are selecting a review software to use; thus, we aim to review only tools that this broad audience is likely to be able to adopt. We also excluded desktop applications, statistical packages, and tools built for reviewing software engineering and social sciences literature, as well as reference managers, to avoid unfairly casting these tools as incomplete review tools (as they would each score quite low in features that are not related to reference management). All software tools were screened by one reviewer (KC), and inclusion decisions were reviewed by a second (KK).
Selection of Features of Interest
We built on the previous comparisons of SR tools published by Van der Mierden et al [ 9 ], Harrison et al [ 10 ], Marshall et al [ 11 ], and Kohl et al [ 12 ], which assign features a level of importance and evaluate each feature in reference screening tools. As the studies by Van der Mierden et al [ 9 ] and Harrison et al [ 10 ] focus on reference screening, we supplemented the features with features identified in related reviews of SR tools (Table S1 in Multimedia Appendix 1 ). From a study by Kohl et al [ 12 ], we added database search, risk of bias assessment (critical appraisal), and data visualization. From Marshall et al [ 11 ], we added report writing.
We added 4 more features based on their importance to software-based SR: manual addition of records, automated full-text retrieval, dual extraction of studies, risk of bias (critical appraisal), living SR, and public outputs. Each addition represents either a best practice in SR [ 14 ] or a key feature for the accuracy, replicability, and transparency of SR. Thus, in total, we assessed the presence or absence of 30 features across 6 categories: retrieval, appraisal, synthesis, documentation/output, administration/project management, and access/support.
We adopted each feature unless it was outside of the SR process, it was required for inclusion in the present review, it duplicated another feature, it was not a discrete step for comparison, it was not necessary for English language reviews, it was not necessary for a web-based software, or it related to reference management (as we excluded reference managers from the present review). Table 1 shows all features not assessed, with rationale.
Features from systematic reviews not assessed in this review, with rationale.
Feature Assessment
To minimize bias concerning the subjective assessment of the necessity or desirability of features or of the relative performance of features, we used a binary assessment where each SR tool was scored 0 if a given feature was not present or 1 if a feature was present. Tools were assessed between June and August 2021. We assessed 30 features, divided into 6 feature classes. Of the 30 features, 77% (23/30) were identified in existing literature, and 23% (7/30) were added by the authors ( Table 2 ).
The criteria for each selected feature, as well as the rationale.
a API: application programming interface.
b Rationale only provided for features added in this review; all other features were drawn from existing feature analyses of Systematic Review Software Tools.
c RIS: Research Information System.
d PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
e AI: artificial intelligence.
Evaluation of Tools
For tools with free versions available, each of the researchers created an account and tested the program to determine feature presence. We also referred to user guides, publications, and training tutorials. For proprietary software, we gathered information on feature offerings from marketing webpages, training materials, and video tutorials. We also contacted all proprietary software providers to give them the opportunity to comment on feature offerings that may have been left out of those materials. Of the 8 proprietary software providers contacted, 38% (3/8) did not respond, 50% (4/8) provided feedback on feature offerings, and 13% (1/8) declined to comment. When providers gave feedback, we re-reviewed the features in question and altered the assessment as appropriate. One provider gave feedback after initial publication, prompting issuance of a correction.
Feature assessment was completed independently by 2 reviewers (KC and AR), and all disagreements were adjudicated by a third (KK). Interobserver agreement was calculated using standard methods [ 19 ] as applied to binary assessments. First, the 2 independent assessments were compared, and the number of disagreements was counted per feature, per software. For each feature, the total number of disagreements was counted and divided by the number of software tools assessed. This provided a per-feature variability percentage; these percentages were averaged across all features to provide a cumulative interobserver agreement percentage.
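A minimal Python sketch of the agreement calculation described above (this is not the authors' code, and the assessments below are hypothetical): for each feature, disagreements are counted across tools, converted to a per-feature rate, and averaged.

```python
# Interobserver agreement for binary feature assessments.
# Each reviewer's data: {feature_name: [0/1 assessment per software tool]}.
def interobserver_agreement(reviewer_a, reviewer_b):
    per_feature_disagreement = {}
    for feature in reviewer_a:
        a, b = reviewer_a[feature], reviewer_b[feature]
        per_feature_disagreement[feature] = sum(x != y for x, y in zip(a, b)) / len(a)
    mean_disagreement = sum(per_feature_disagreement.values()) / len(per_feature_disagreement)
    return 100 * (1 - mean_disagreement), per_feature_disagreement

a = {"direct_search": [1, 0, 1, 1], "dual_extraction": [0, 0, 1, 0]}
b = {"direct_search": [1, 0, 1, 0], "dual_extraction": [0, 1, 1, 0]}
agreement, _ = interobserver_agreement(a, b)
print(f"Cumulative interobserver agreement: {agreement:.2f}%")  # 75.00% for this toy data
```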
Identification of SR Tools
We reviewed all 240 software tools offered on SR Toolbox and sent forward all tools that, based on the software descriptions, could meet our inclusion criteria; we then added in all software tools found on Google Scholar. This strategy yielded 53 software tools that were reviewed in full (Figure 1 shows the PRISMA [Preferred Reporting Items for Systematic Reviews and Meta-Analyses]-based chart). Of these 53 software tools, 55% (29/53) were excluded. Of the 29 excluded tools, 17% (5/29) were built to review software engineering literature, 10% (3/29) were not functional as of August 2021, 7% (2/29) were citation managers, and 7% (2/29) were statistical packages. Other excluded tools included tools not designed for SRs (6/29, 21%), desktop applications (4/29, 14%), tools requiring users to code (3/29, 10%), a search engine (1/29, 3%), and a social science literature review tool (1/29, 3%). One tool, Research Screener [ 20 ], was excluded owing to insufficient information available on supported features. Another tool, the Health Assessment Workspace Collaborative, was excluded because it is designed to assess chemical hazards.

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)-based chart showing the sources of all tools considered for inclusion, including 2-phase screening and reasons for all exclusions made at the full software review stage. SR: systematic review.
Overview of SR Tools
We assessed the presence of features in 24 software tools, of which 71% (17/24) are designed for health care or biomedical sciences. In addition, 63% (15/24) of the analyzed tools support the full SR process, meaning they enable search, screening, extraction, and export, as these are the basic capabilities necessary to complete a review in a single software tool. Furthermore, 21% (5/24) of the tools support the screening stage (Table 3).
Breakdown of software tools for systematic review by process type (full process, screening, extraction, or visualization; n=24).
Data Gathering
Interobserver agreement between the 2 reviewers gathering data features was 86.46%, meaning that across all feature assessments, the 2 reviewers disagreed on <15% of the applications. Final assessments are summarized in Table 4 , and Table S2 in Multimedia Appendix 2 shows the interobserver agreement on a per–SR tool and per-feature basis. Interobserver agreement was ≥70% for every feature assessed and for all SR tools except 3: LitStream (ICF; 53.3%), RevMan Web (Cochrane; 50%), and SR Accelerator (Institute for Evidence-Based Healthcare; 53.3%); on investigation, these low rates of agreement were found to be due to name changes and versioning (LitStream and RevMan Web) and due to the modular nature of the subsidiary offerings (SR Accelerator). An interactive, updatable visualization of the features offered by each tool is available in the Systematic Review Methodologies Qualitative Synthesis.
Feature assessment scores by feature class for each systematic review tool analyzed. The total number of features across all feature classes is presented in descending order.
Giotto Compliance (27/30, 90%), DistillerSR (26/30, 87%), and Nested Knowledge (26/30, 87%) support the most features, followed by EPPI-Reviewer Web (25/30, 83%), LitStream (23/30, 77%), JBI SUMARI (21/30, 70%), and SRDB.PRO (VTS Software) (21/30, 70%).
The top 16 software tools are ranked by percent of features from highest to lowest in Figure 2. Fewer than half of all features are supported by 7 tools: RobotAnalyst (National Centre for Text Mining), SRDR (Agency for Healthcare Research and Quality), SyRF (Systematic Review Facility), Data Abstraction Assistant (Center for Evidence Synthesis in Health), SR-Accelerator (Institute for Evidence-Based Healthcare), RobotReviewer (RobotReviewer), and COVID-NMA (COVID-NMA; Table 3).

Stacked bar chart comparing the percentage of supported features, broken down by their feature class (retrieval, appraisal, extraction, output, admin, and access), among all analyzed software tools.
Feature Assessment: Breakout by Feature Class
Of all 6 feature classes, administrative features are the most supported, and output and extraction features are the least supported ( Figure 3 ). Only 3 tools, Covidence (Cochrane), EPPI-Reviewer, and Giotto Compliance, offer all 4 extraction features ( Table 4 ). DistillerSR and Giotto support all 5 retrieval features, while Nested Knowledge supports all 5 documentation/output features. Colandr, DistillerSR, EPPI-Reviewer, Giotto Compliance, and PICOPortal support all 6 appraisal features.

Heat map of features observed in 24 analyzed software tools. Dark blue indicates that a feature is present, and light blue indicates that a feature is not present.
Feature Class 1: Retrieval
The ability to search directly within the SR tool was only present for 42% (10/24) of the software tools, meaning that for all other SR tools, the user is required to search externally and import records. The only SR tool that did not enable importing of records was COVID-NMA, which supplies studies directly from the providers of the tool but does not enable the user to do so.
Feature Class 2: Appraisal
Among the 19 tools that have title/abstract screening, all tools except for RobotAnalyst and SRDR+ enable dual screening and adjudication. Reference deduplication is less widespread, with 58% (14/24) of the tools supporting it. A form of machine learning/automation during the screening stage is present in 54% (13/24) of the tools.
Feature Class 3: Extraction
Although 75% (18/24) of the tools offer data extraction, only 29% (7/24) offer dual data extraction (Giotto Compliance, DistillerSR, SRDR+, Cadima [Cadima], Covidence, EPPI-Reviewer, and PICOPortal [PICOPortal]). A total of 54% (13/24) of the tools enable risk of bias assessments.
Feature Class 4: Output
Exporting references or collected data is available in 71% (17/24) of the tools. Of the 24 tools, 54% (13/24) generate figures or tables, 42% (10/24) generate PRISMA flow diagrams, 33% (8/24) have report writing, and only 13% (3/24) have in-text citations.
Feature Class 5: Admin
Protocols, customer support, and training materials are available in 71% (17/24), 79% (19/24), and 83% (20/24) of the tools, respectively. Of all administrative features, the least well developed are progress/activity monitoring, which is offered by 67% (16/24) of the tools, and comments, which are available in 58% (14/24) of the tools.
Feature Class 6: Access
Access features cover collaboration during the review, cost, and availability of outputs. Of the 24 software tools, 83% (20/24) permit collaboration by allowing multiple users to work on a project. COVID-NMA, RobotAnalyst, RobotReviewer, and SR-Accelerator do not allow multiple users. In addition, of the 24 tools, 71% (17/24) offer a free subscription, whereas 29% (7/24) require paid subscriptions or licenses (Covidence, DistillerSR, EPPI-Reviewer Web, Giotto Compliance, JBI Sumari, SRDB.PRO, and SWIFT-Active Screener). Only 54% (13/24) of the software tools support living, updatable reviews.
Principal Findings
Our review found a wide range of options in the SR software space; however, among these tools, many lacked features that are either crucial to the completion of a review or recommended as best practices. Only 63% (15/24) of the SR tools covered the full process from search/import through to extraction and export. Among these 15 tools, only 67% (10/15) had a search functionality directly built in, and only 47% (7/15) offered dual data extraction (which is the gold standard in quality control). Notable strengths across the field include collaborative mechanisms (offered by 20/24, 83% of tools) and easy, free access (17/24, 71% of tools are free). Indeed, the top 4 software tools in terms of number of features offered (Giotto Compliance, DistillerSR, Nested Knowledge, and EPPI-Reviewer) all offered between 83% and 90% of the features assessed. However, major remaining gaps include a lack of automation of any step other than screening (automated screening offered by 13/24, 54% of tools) and underprovision of living, updatable outputs.
Major Gaps in the Provision of SR Tools
Marshall et al [ 11 ] have previously noted that “the user should be able to perform an automated search from within the tool which should identify duplicate papers and handle them accordingly” [ 11 ]. Less than a third of tools (7/24, 29%) support search, reference import, and manual reference addition.
Study Selection
Screening of references is the most commonly offered feature and has the strongest offerings across features. All software tools that offer screening also support dual screening (with the exception of RobotAnalyst and SRDR+). This demonstrates adherence to SR best practices during the screening stage.
Automation and Machine Learning
Automation in medical SR screening has been growing. Some form of machine learning or other automation for screening literature is present in over half (13/24, 54%) of all the tools analyzed. Machine learning/screening includes reordering references, topic modeling, and predicting inclusion rates.
Data Extraction
In contrast to screening, extraction is underdeveloped. Although extraction is offered by 75% (18/24) of the tools, few tools adhere to the SR best practice of dual extraction. This is a deep problem in review methods, as the error rate for manual extraction without dual extraction is highly variable and has even reached 50% in independent tests [ 16 ].
Although single extraction continues to be the only commonly offered method, the scientific community has noted that automating extraction would have value in both time savings and improved accuracy, but the field is as yet underdeveloped. To quote a recent review on the subject of automated extraction, "[automation] techniques have not been fully utilized to fully or even partially automate the data extraction step of systematic review" [ 21 ]. The technologies to automate extraction have not achieved partial extraction at a sufficiently high accuracy level to be adopted; therefore, dual extraction is a pressing software requirement that is unlikely to be surpassed in the near future.
Project Management
Administrative features are well supported by SR software. However, there is a need for improved monitoring of review progress. Project monitoring is offered by 67% (16/24) of the tools, which is among the lowest of all admin features and likely the feature most closely associated with the quality of the outputs. As collaborative access is common and highly prized, SR software providers should recognize the barriers to collaboration in medical research; lack of mutual awareness, inertia in communication, and time management and capacity constraints are among the leading reasons for failure in interinstitutional research [ 22 ]. Project monitoring tools could assist with each of these pain points and improve the transparency and accountability within the research team.
Living Reviews
The scientific community has made consistent demands for SR processes to be rendered updatable, with the goal of improving the quality of evidence available to clinicians, health policymakers, and the medical public [ 23 , 24 ]. Despite these ongoing calls for change, living, updatable reviews are not yet standard in SR software tools. Only 54% (13/24) of the tools support living reviews, largely because living review depends on providing updatability at each step up through to outputs. However, until greater provision of living review tools is achieved, reviews will continue to fall out of date and out of sync with clinical practice [ 24 ].
Study Limitations
In our study design, we elected to use a binary assessment, which limited the bias induced by the subjective appeal of any given tool. Therefore, these assessments did not include any comparison of quality or usability among the SR tools. This also meant that we did not use the Desmet [ 25 ] method, which ranks features by level of importance. We also excluded certain assessments that may impact user choices such as language translation features or translated training documentation, which is supported by some technologies, including DistillerSR. We completed the review in August 2021 but added several software tools following reviewer feedback; by adding expert additions without repeating the entire search strategy, we may have missed SR tools that launched between August and December 2021. Finally, the authors of this study are the designers of one of the leading SR tools, Nested Knowledge, which may have led to tacit bias toward this tool as part of the comparison.
By assessing features offered by web-based SR applications, we have identified gaps in current technologies and areas in need of development. Feature count does not equate to value or usability; it fails to capture benefits of simple platforms, such as ease of use, effective user interface, alignment with established workflows, or relative costs. The authors make no claim about superiority of software based on feature prevalence.
Future Directions
We invite and encourage independent researchers to assess the landscape of SR tools and build on this review. We expect the list of features to be assessed will evolve as research changes. For example, this review did not include features such as the ability to search included studies, reuse of extracted data, and application programming interface calls to read data, which may grow in importance. Furthermore, this review assessed the presence of automation at a high level without evaluating details. A future direction might be characterizing specific types of automation models used in screening, as well as in other stages, for software applications that support SR of biomedical research.
The highest-performing SR tools were DistillerSR, EPPI-Reviewer Web, and Nested Knowledge, each of which offer >80% of features. The most commonly offered and robust feature class was screening, whereas extraction (especially quality-controlled dual extraction) was underprovided. Living reviews, although strongly advocated for in the scientific community, were similarly underprovided by the SR tools reviewed here. This review enables the medical community to complete transparent and comprehensive comparison of SR tools and may also be used to identify gaps in technology for further development by the providers of these or novel SR tools.
This review of web-based systematic review software tools represents an attempt to best capture information from software providers' websites, free trials, peer-reviewed publications, training materials, and software tutorials. The review is based primarily on publicly available information and may not accurately reflect feature offerings, as relevant information was not always available or clear to interpret. This evaluation does not represent the views or opinions of any of the software developers or service providers, except those of the authors. The review was completed in August 2021, and readers should refer to the respective software providers' websites to obtain updated information on feature offerings.
Acknowledgments
The authors acknowledge the software development team from Nested Knowledge, Stephen Mead, Jeffrey Johnson, and Darian Lehmann-Plantenberg for their input in designing Nested Knowledge. The authors thank the independent software providers who provided feedback on our feature assessment, which increased the quality and accuracy of the results.
Authors' Contributions: All authors participated in the conception, drafting, and editing of the manuscript.
Conflicts of Interest: KC, NH, and KH work for and hold equity in Nested Knowledge, which provides a software application included in this assessment. AR worked for Nested Knowledge. KL works for and holds equity in Nested Knowledge, Inc, and holds equity in Superior Medical Experts, Inc. KK works for and holds equity in Nested Knowledge, and holds equity in Superior Medical Experts.
Literature Review Tips & Tools
Organizational Tools
Tools for Systematic Reviews
- Bubbl.us Free online brainstorming/mindmapping tool that also has a free iPad app.
- Coggle Another free online mindmapping tool.
- Organization & Structure tips from Purdue University Online Writing Lab
- Literature Reviews from The Writing Center at University of North Carolina at Chapel Hill Gives several suggestions and descriptions of ways to organize your lit review.
- Cochrane Handbook for Systematic Reviews of Interventions "The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions. "
- Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) website "PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions."
- PRISMA Flow Diagram Generator Free tool that will generate a PRISMA flow diagram from a CSV file (sample CSV template provided). Please cite as: Haddaway, N. R., Page, M. J., Pritchard, C. C., & McGuinness, L. A. (2022). PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Systematic Reviews, 18, e1230. https://doi.org/10.1002/cl2.1230
- Rayyan "Rayyan is a 100% FREE web application to help systematic review authors perform their job in a quick, easy and enjoyable fashion. Authors create systematic reviews, collaborate on them, maintain them over time and get suggestions for article inclusion."
- Covidence Covidence is a tool to help manage systematic reviews (and create PRISMA flow diagrams). UMass Amherst doesn't subscribe, but Covidence offers a free trial for 1 review of no more than 500 records. It is also set up for researchers to pay for each review.
- PROSPERO - Systematic Review Protocol Registry "PROSPERO accepts registrations for systematic reviews, rapid reviews and umbrella reviews. PROSPERO does not accept scoping reviews or literature scans. Sibling PROSPERO sites registers systematic reviews of human studies and systematic reviews of animal studies."
- Critical Appraisal Tools from JBI Joanna Briggs Institute at the University of Adelaide provides these checklists to help evaluate different types of publications that could be included in a review.
- Systematic Review Toolbox "The Systematic Review Toolbox is a community-driven, searchable, web-based catalogue of tools that support the systematic review process across multiple domains. The resource aims to help reviewers find appropriate tools based on how they provide support for the systematic review process. Users can perform a simple keyword search (i.e. Quick Search) to locate tools, a more detailed search (i.e. Advanced Search) allowing users to select various criteria to find specific types of tools and submit new tools to the database. Although the focus of the Toolbox is on identifying software tools to support systematic reviews, other tools or support mechanisms (such as checklists, guidelines and reporting standards) can also be found."
- Abstrackr Free, open-source tool that "helps you upload and organize the results of a literature search for a systematic review. It also makes it possible for your team to screen, organize, and manipulate all of your abstracts in one place." -From Center for Evidence Synthesis in Health
- SRDR Plus (Systematic Review Data Repository: Plus) An open-source tool for extracting, managing, and archiving data, developed by the Center for Evidence Synthesis in Health at Brown University
- RoB 2 Tool (Risk of Bias for Randomized Trials) A revised Cochrane risk of bias tool for randomized trials
Literature Reviews
Systematic Review Tools
The following subscription-based evidence synthesis software tools can be used to manage steps in the systematic review process. Tip: For subscription-based tools, check for 'trial versions' to test software prior to purchasing.
- JBI Sumari (UCF subscription) "JBI is an international evidence-based healthcare research organisation that works with 70+ Universities and Hospitals (known as the JBI Collaboration) around the world. The organisation focuses on improving health outcomes globally by producing and disseminating research evidence, software, training, resources and publications relating to evidence-based healthcare."
- JBI Sumari Resource Creating a JBI Sumari Account & Training Videos
Your all-in-one AI-powered Reading Assistant
A Reading Space to Ideate, Create Knowledge, and Collaborate on Your Research
- Smartly organize your research
- Receive recommendations that cannot be ignored
- Collaborate with your team to read, discuss, and share knowledge

From Surface-Level Exploration to Critical Reading - All in one Place!
Fine-tune your literature search.
Our AI-powered reading assistant saves time spent on the exploration of relevant resources and allows you to focus more on reading.
Select phrases or specific sections and explore more research papers related to the core aspects of your selections. Pin the useful ones for future references.
Our platform brings you the latest research news, online courses, and articles from magazines/blogs related to your research interests and project work.
Speed up your literature review
Quickly generate a summary of key sections of any paper with our summarizer.
Make informed decisions about which papers are relevant, and where to invest your time in further reading.
Get key insights from the paper, quickly comprehend the paper’s unique approach, and recall the key points.
Bring order to your research projects
Organize your reading lists into different projects and maintain the context of your research.
Quickly sort items into collections and tag or filter them according to keywords and color codes.
Experience the power of sharing by finding all the shared literature in one place.
Decode papers effortlessly for faster comprehension
Highlight what is important so that you can retrieve it faster next time.
Find Wikipedia explanations for any selected word or phrase.
Save time in finding similar ideas across your projects.
Collaborate to read with your team, professors, or students
Share and discuss literature and drafts with your study group, colleagues, experts, and advisors. Recommend valuable resources and help each other for better understanding.
Work in shared projects efficiently and improve visibility within your study group or lab members.
Keep track of your team's progress, stay constantly connected, and engage in active knowledge transfer by requesting full access to relevant papers and drafts.
Find Papers From Across the World's Largest Repositories

Privacy and security of your research data are integral to our mission.

Everything you add or create on Enago Read is private by default. It is visible if and when you share it with other users.

You can put a Creative Commons license on original drafts to protect your IP. For shared files, Enago Read always maintains a copy in case of deletion by collaborators or revoked access.

We use state-of-the-art security protocols and algorithms including MD5 Encryption, SSL, and HTTPS to secure your data.
Literature Review with MAXQDA

Make the most out of your literature review.
Literature reviews are an important step in the data analysis journey of many research projects, but often it is a time-consuming and arduous affair. Whether you are reviewing literature for writing a meta-analysis or for the background section of your thesis, work with MAXQDA. Our product comes with many exciting features which make your literature review faster and easier than ever before. Whether you are a first-time researcher or an old pro, MAXQDA is your professional software solution with advanced tools for you and your team.

How to conduct a literature review with MAXQDA
Conducting a literature review with MAXQDA is easy because you can easily import bibliographic information and full texts. In addition, MAXQDA provides excellent tools to facilitate each phase of your literature review, such as notes, paraphrases, auto-coding, summaries, and tools to integrate your findings.
Step one: Plan your literature review
Similar to other research projects, one should carefully plan a literature review. Before getting started with searching and analyzing literature, carefully think about the purpose of your literature review and the questions you want to answer. This will help you to develop a search strategy which is needed to stay on top of things. A search strategy involves deciding on literature databases, search terms, and practical and methodological criteria for the selection of high-quality scientific literature.
MAXQDA supports you during this stage with memos and the newly developed Questions-Themes-Theories (QTT) tool. Both are the ideal place to store your research questions and search parameters. Moreover, the Questions-Themes-Theories tool is perfectly suited to support your literature review project because it provides a bridge between your MAXQDA project and your research report. It offers the perfect environment to bring together findings, record conclusions, and develop theories.


Step two: Search, Select, Save your material
Follow your search strategy. Use the databases and search terms you have identified to find the literature you need. Then, scan the search results for relevance by reading the title, abstract, or keywords. Try to determine whether the paper falls within the narrower area of the research question and whether it fulfills the objectives of the review. In addition, check whether the search results fulfill your pre-specified eligibility criteria. As this step typically requires precise reading rather than a quick scan, you might want to perform it in MAXQDA. If a piece of literature fulfills your criteria, you can save its bibliographic information in a reference management system; this is a common approach among researchers because these programs automatically extract a paper's metadata from the publishing website. You can easily import this bibliographic data into MAXQDA via a specialized import tool. MAXQDA is compatible with all reference management programs that can export their literature databases in RIS format, a standard format for bibliographic information. This is the case for all mainstream literature management programs, such as Citavi, DocEar, EndNote, JabRef, Mendeley, and Zotero.
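For illustration, an RIS record is just plain text made of two-letter tags, which is why it transfers cleanly between reference managers. The reference below is hypothetical and the parser is a generic sketch, not MAXQDA's importer:

```python
# A minimal RIS record and a tiny tag parser.
ris_record = """TY  - JOUR
AU  - Doe, Jane
TI  - Pilates for chronic low back pain
JO  - Journal of Example Studies
PY  - 2021
DO  - 10.1000/example.doi
ER  - """

fields = {}
for line in ris_record.splitlines():
    tag, _, value = line.partition("  - ")
    fields.setdefault(tag.strip(), []).append(value.strip())

print(f'{fields["TI"][0]} ({fields["PY"][0]})')
```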

Step three: Import & Organize your material in MAXQDA
Importing bibliographic data into MAXQDA is easy and works seamlessly for all reference management programs that use standard RIS files. MAXQDA offers an import option dedicated to bibliographic data, which you can find in the MAXQDA Import tab. To import the selected literature, just click on the corresponding button, select the data you want to import, and click OK. Upon import, each literature entry becomes its own text document. If full texts are imported, MAXQDA automatically connects the full text to the literature entry with an internal link. The individual information in the literature entries is automatically coded for later analysis so that, for example, all titles or abstracts can be compiled and searched. To help you keep your literature review organized, MAXQDA automatically creates a document group called "References" which contains the individual literature entries. Like full texts or interview documents, the bibliographic entries can be searched, coded, linked, and edited, and you can add memos for further qualitative and quantitative content analysis (Kuckartz & Rädiker, 2019). Especially when running multiple searches using different databases or search terms, you should carefully document your approach. Besides being a great place to store the respective search parameters, memos are perfectly suited to capture your ideas while reviewing your literature and can be attached to text segments, documents, document groups, and much more.

Literature Review Methods
A literature review is a critical evaluation of existing research on a particular topic and is part of almost every research project. The literature review's purpose is to identify gaps in current knowledge, synthesize existing research findings, and provide a foundation for further research.
Over the years, numerous types of literature reviews have emerged. To empower you in coming to an informed decision, we briefly present the most common literature review methods. With MAXQDA you are free to choose which literature review method best meets your needs – a narrative review, a systematic review, a meta-analysis, or a scoping review.
Narrative Review
A narrative review summarizes and synthesizes the existing literature on a particular topic in a narrative or story-like format. This type of review is often used to provide an overview of the current state of knowledge on a topic, for example in scientific papers or final theses.
Systematic Review
A systematic review is a comprehensive and structured approach to reviewing the literature on a particular topic with the aim of answering a defined research question. It involves a systematic search of the literature using pre-specified eligibility criteria and a structured evaluation of the quality of the research.
Meta-Analysis
A meta-analysis is a type of systematic review that uses statistical techniques to combine and analyze the results from multiple studies on the same topic. The goal of a meta-analysis is to provide a more robust and reliable estimate of the effect size than can be obtained from any single study.
Scoping Review
A scoping review is a type of systematic review that aims to map the existing literature on a particular topic in order to identify the scope and nature of the research that has been done. It is often used to identify gaps in the literature and inform future research.
Analyze your literature with MAXQDA
Once imported into MAXQDA, you can explore your material using a variety of tools and functions. With MAXQDA as your literature review & analysis software, you have numerous possibilities for analyzing your literature and writing your literature review – far more than we can mention here. Thus, we present only a subset of tools. Check out our literature about performing literature reviews with MAXQDA to discover more possibilities.
Code & Retrieve important segments
Coding qualitative data lies at the heart of many qualitative data analysis approaches and can be useful for literature reviews as well. Coding refers to the process of labeling segments of your material. For example, you may want to code definitions of certain terms, pro and con arguments, how a specific method is used, and so on. In a later step, MAXQDA allows you to compile all text segments coded with one (or more) codes of interest from one or more papers, so that you can for example compare definitions across papers.
But there is more. MAXQDA offers multiple ways of coding, such as in-vivo coding, highlighters, emoticodes, Creative Coding, or the Smart Coding Tool. The compiled segments can be enriched with variables and the segment’s context accessed with just one click. MAXQDA’s Text Search & Autocode tool is especially well-suited for a literature review, as it allows one to explore large amounts of text without reading or coding them first. Automatically search for keywords (or dictionaries of keywords), such as important concepts for your literature review, and automatically code them with just a few clicks.
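The idea behind dictionary-based auto-coding can be sketched generically, as below; this is not MAXQDA's implementation, and the codes and keywords are made up:

```python
# Scan sentences for dictionary keywords and record which code applies where.
import re

code_dictionary = {
    "definition": ["is defined as", "refers to"],
    "limitation": ["limitation", "small sample", "bias"],
}

def autocode(doc_name, text):
    coded_segments = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for code, keywords in code_dictionary.items():
            if any(k in sentence.lower() for k in keywords):
                coded_segments.append({"document": doc_name, "code": code, "segment": sentence})
    return coded_segments

sample = "Recall bias is a key limitation. Paraphrasing refers to restating a passage."
for seg in autocode("paper_01", sample):
    print(seg["code"], "->", seg["segment"])
```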

Paraphrase literature into your own words
Another approach is to paraphrase the existing literature. A paraphrase is a restatement of a text or passage in your own words, while retaining the meaning and the main ideas of the original. Paraphrasing can be especially helpful in the context of literature reviews, because paraphrases force you to systematically summarize the most important statements (and only the most important statements) which can help to stay on top of things.
With MAXQDA as your literature review software, you not only have a tool for paraphrasing literature but also tools to analyze the paraphrases you have written. For example, the Categorize Paraphrases tool lets you code your paraphrases, and the Paraphrases Matrix lets you compare paraphrases side by side across individual documents or groups of documents.
Summaries & Overview tables: A look at the Bigger Picture
When conducting a literature review you can easily get lost. But with MAXQDA as your literature review software, you will never lose track of the bigger picture. Among other tools, MAXQDA's overview and summary tables are especially useful for aggregating your literature review results. MAXQDA offers overview tables for almost everything: codes, memos, coded segments, links, and so on. With MAXQDA's literature review tools you can create compressed summaries of sources that can be effectively compared and represented, and with just one click you can export your overview and summary tables and integrate them into your literature review report.

Visualize your qualitative data
The proverb "a picture is worth a thousand words" also applies to literature reviews. That is why MAXQDA offers a variety of Visual Tools that allow you to get a quick overview of the data and help you to identify patterns. Of course, you can export your visualizations in various formats to enrich your final report. One particularly useful visual tool for literature reviews is the Word Cloud. It visualizes the most frequent words and allows you to explore key terms and the central themes of one or more papers. Thanks to the interactive connection between your visualizations and your MAXQDA data, you will never lose sight of the big picture. Another particularly useful tool is MAXQDA's word/code frequency tool, with which you can analyze and visualize the frequencies of words or codes in one or more documents. As with Word Clouds, nonsensical words can be added to the stop list and excluded from the analysis.
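The word-frequency counting behind word clouds and frequency tables can be sketched in a few lines; this is a generic illustration with a made-up stop list, not MAXQDA's tool:

```python
# Count the most frequent words in a text, ignoring stop words.
import re
from collections import Counter

stop_words = {"the", "and", "of", "a", "to", "in", "is", "for"}

def word_frequencies(text, top_n=10):
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stop_words).most_common(top_n)

abstracts = "Pilates reduced low back pain. Exercise therapy for back pain is effective."
print(word_frequencies(abstracts, top_n=5))
```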
QTT: Synthesize your results and write up the review
MAXQDA 2022 introduces a brand new and innovative workspace to gather important visualizations, notes, segments, and other analytics results. The perfect tool to organize your thoughts and data. Create a separate worksheet for your topics and research questions, fill it with associated analysis elements from MAXQDA, and add your conclusions, theories, and insights as you go. For example, you can add Word Clouds, important coded segments, and your literature summaries, and write down your insights. Subsequently, you can view all analysis elements and insights to write your final conclusion. The new Questions-Themes-Theories tool is perfectly suited to help you finalize your literature review reports. With just one click you can export your worksheet and use it as a starting point for your literature review report.

Literature about Literature Reviews and Analysis
We offer a variety of free learning materials to help you get started with your literature review. Check out our Getting Started Guide to get a quick overview of MAXQDA and step-by-step instructions on setting up your software and creating your first project with your brand new QDA software. In addition, the free Literature Reviews Guide explains how to conduct a literature review with MAXQDA in more detail.

Getting Started with MAXQDA

Literature Reviews with MAXQDA
A literature review is a critical analysis and summary of existing research and literature on a particular topic or research question. It involves systematically searching and evaluating a range of sources, such as books, academic journals, conference proceedings, and other published or unpublished works, to identify and analyze the relevant findings, methodologies, theories, and arguments related to the research question or topic.
A literature review’s purpose is to provide a comprehensive and critical overview of the current state of knowledge and understanding of a topic, to identify gaps and inconsistencies in existing research, and to highlight areas where further research is needed. Literature reviews are commonly used in academic research, as they provide a framework for developing new research and help to situate the research within the broader context of existing knowledge.
There is no “best” way to do a literature review, as the process can vary depending on the research question, field of study, and personal preferences. However, here are some general guidelines that can help to ensure that your literature review is comprehensive and effective:
- Carefully plan your literature review : Before you start searching and analyzing literature you should define a research question and develop a search strategy (for example identify relevant databases, and search terms). A clearly defined research question and search strategy will help you to focus your search and ensure that you are gathering relevant information. MAXQDA’s Questions-Themes-Theories tool is the perfect place to store your analysis plan.
- Evaluate your sources : Screen your search results for relevance to your research question, for example by reading abstracts. Once you have identified relevant sources, read them critically and evaluate their quality and relevance to your research question. Consider factors such as the methodology used, the reliability of the data, and the overall strength of the argument presented.
- Synthesize your findings : After evaluating your sources, synthesize your findings by identifying common themes, arguments, and gaps in the existing research. This will help you to develop a comprehensive understanding of the current state of knowledge on your topic.
- Write up your review : Finally, write up your literature review, ensuring that it is well-structured and clearly communicates your findings. Include a critical analysis of the sources you have reviewed, and use evidence from the literature to support your arguments and conclusions.
Overall, the key to a successful literature review is to be systematic, critical, and comprehensive in your search and evaluation of sources.
As in all aspects of scientific work, preparation is the key to success. Carefully think about the purpose of your literature review, the questions you want to answer, and your search strategy. The writing process itself will differ depending on your literature review method. For example, when writing a narrative review, use the identified literature to support your arguments, approach, and conclusions. By contrast, a systematic review typically contains the same parts as other scientific papers: Abstract, Introduction (purpose and scope), Methods (search strategy, inclusion/exclusion criteria, …), Results (identified sources, their main arguments, findings, …), Discussion (critical analysis of the sources you have reviewed), and Conclusion (gaps or inconsistencies in the existing research, future research, implications, etc.).
Automate your literature review with AI

Traditional methods of literature review can be susceptible to errors, whether that means human bias or the challenge of sifting through the incredibly large amount of scientific research published today, not to mention all the papers published over the past 100 years. Put together, that is a heap of information that is humanly impossible to sift through, at least not efficiently.
Thanks to artificial intelligence, long and tedious literature reviews are becoming quick and comprehensive. No longer do researchers have to spend endless hours combing through stacks of books and journals.
In this blog post, we'll dive deep into the world of automating your literature review with AI, exploring what a literature review is, why it's so crucial, and how you can harness AI tools to make the process more effective.
What is a literature review?
A literature review is essentially the foundation of a scientific research project, providing a comprehensive overview of existing knowledge on a specific topic. It gives an overview of your chosen topic and summarizes key findings, theories, and methodologies from various sources.
This critical analysis not only showcases the current state of understanding but also identifies gaps and trends in the scientific literature. In addition, it also shows your understanding of your field and can help provide credibility to your research paper.
Types of literature review
There are several types of literature reviews but for the most part, you will come across five versions. These are:
1. Narrative review: A narrative review provides a comprehensive overview of a topic, usually without a strict methodology for selection.
2. Systematic review: Systematic reviews are a strategic synthesis of a topic. This type of review follows a strict plan to identify, evaluate, and critique all relevant research on a topic to minimize bias.
3. Meta-analysis: It is a type of systematic review that uses research data from multiple articles to draw quantitative conclusions about a specific phenomenon.
4. Scoping review: As the name suggests, the purpose of a scoping review is to study a field, highlight the gaps in it, and underline the need for the following research paper.
5. Critical review: A critical literature review assesses and critiques the strengths and weaknesses of existing literature, challenging established ideas and theories.
Benefits of using literature review AI tools
Using literature review AI tools can be a complete game changer in your research. They can make the literature review process smarter and hassle-free. Here are some practical benefits:
AI tools for literature review can skim through tons of research papers and find the most relevant one for your topic in no time, thus saving you hours of manual searching.
Comprehensive insights
No matter how complex the topic is or how long the research papers are, AI tools can find key insights like methodology, datasets, and limitations by simply scanning the abstracts or PDF documents.
Eliminate bias
AI doesn't have favorites. Based on the data it’s fed, it evaluates research papers objectively and reduces as much bias in your literature review as possible.
Faster research questions
AI tools present loads of research papers in the same place. Some AI tools let you create visual maps and connections, helping you identify gaps in the existing literature and arrive at your research question faster.
Consistency
AI tools ensure your review is consistently structured and formatted. They can also check for proper grammar and citation style, which is crucial for scholarly writing.
Multilingual support
Many non-native English-speaking researchers struggle with scientific jargon in English. AI tools with multilingual support can help these academics conduct their literature review in their own language.
How to write a literature review with AI
Now that we understand the benefits of a literature review using artificial intelligence, let's explore how you can automate the process. Literature reviews with AI-powered tools can save you countless hours and allow a more comprehensive and systematic approach. Here's one process you can follow:
Choose the right AI tool
Several AI-powered search engines, such as Google Scholar, SciSpace, and Semantic Scholar, help you find the most relevant papers semantically, in other words, even without the exact keywords. These tools understand the context of your search query and deliver relevant results.
Find relevant research papers
Once you input your research question or keywords into a search engine like Google Scholar, Semantic Scholar, or SciSpace, it scours databases containing millions of papers to find relevant articles. After that, you can narrow your search results by time period, journal, number of citations, and other parameters for more accuracy.
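To make the search step concrete, here is a minimal Python sketch that queries the Semantic Scholar Graph API for papers matching a question and a time window. The endpoint, parameters, and the example query are assumptions drawn from the publicly documented API rather than a prescribed workflow, so check the current documentation before relying on it.

```python
import requests

def search_papers(query, year_from=2018, limit=20):
    """Search Semantic Scholar for papers matching a free-text query.

    Assumes the public Graph API endpoint and field names; verify against
    the current API docs and rate limits before using in practice.
    """
    url = "https://api.semanticscholar.org/graph/v1/paper/search"
    params = {
        "query": query,
        "year": f"{year_from}-",  # restrict results to a time period
        "fields": "title,year,abstract,citationCount,externalIds",
        "limit": limit,
    }
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("data", [])

# Hypothetical example query: papers on social media and adolescent body image
for paper in search_papers("social media body image adolescents"):
    print(paper["year"], paper["title"], f'({paper["citationCount"]} citations)')
```

The same pattern, send a query, restrict by year, then inspect titles and citation counts, carries over to most academic search APIs.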
Analyze the search results
Now that you have your list of relevant academic papers, the next step is to review these results. Many AI-powered literature review tools provide summaries along with each paper. Some sophisticated tools also help you gather key points from multiple papers at once and let you ask questions about the topic. This way, you can quickly build an understanding of the topic and, in turn, of your field.
Organize your collection
Whether you’re writing a literature review or your paper, you will need to keep track of your references. Using AI tools, you can efficiently organize your findings, store them in reference managers, and generate citations automatically, saving you the hassle of manually formatting references.
Write the literature review
Now that you’ve done your groundwork, you can start writing your literature review. Although you should be doing this yourself, you can use tools like paraphrasers, grammar checkers, and co-writers to help you refine your academic writing and get your point across with more clarity.
Best AI Tools for Literature Review
Since generative AI and ChatGPT came into the picture, there are heaps of AI tools for literature review available out there. Some of the most comprehensive ones are:
SciSpace is a valuable tool to have in your arsenal. It has a repository of 270M+ papers and makes it easy to find research articles. You can also extract key information to compare and contrast multiple papers at the same time. Then, go on to converse with individual papers using Copilot, your AI research assistant.
Research Rabbit
Research Rabbit is a research discovery tool that helps you find new, connected papers using a visual graph. You can essentially create maps around metadata, which helps you not only explore similar papers but also connections between them.
Iris AI is a specialized tool that understands the context of your research question, lets you apply smart filters, and finds relevant papers. Further, you can also extract summaries and other data from papers.
If you don’t already know about ChatGPT, you must be living under a rock. ChatGPT is a chatbot that creates text based on a prompt using natural language processing (NLP). You can use it to write the first draft of your literature review, refine your writing, format it properly, write a research presentation, and much more.
Things to keep in mind when using literature review AI tools
While AI-powered tools can significantly streamline the literature review process, there are a few things you should keep in mind while employing them:
Quality control
Always review the results generated by AI tools. AI is powerful but not infallible. Ensure that you do further analysis by yourself and determine that the selected research articles are indeed relevant to your research.
Ethical considerations
Be aware of ethical concerns such as plagiarism and undisclosed AI-generated writing. The use of AI is still frowned upon in many settings, so make sure you do a thorough originality check of your work, which is vital for maintaining academic integrity.
Stay updated
The world of AI is ever-evolving. Stay updated on the latest advancements in AI tools for literature review to make the most of your research.
In conclusion,
Artificial intelligence is a game-changer for researchers, especially when it comes to literature reviews. It not only saves time but also enhances the quality and comprehensiveness of your work. With the right AI tool and a clear research question in hand, you can build an excellent literature review.
How to Write a Literature Review | Guide, Examples, & Templates
Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.
What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic .
There are five key steps to writing a literature review:
- Search for relevant literature
- Evaluate sources
- Identify themes, debates, and gaps
- Outline the structure
- Write your literature review
A good literature review doesn’t just summarize sources—it analyzes, synthesizes , and critically evaluates to give a clear picture of the state of knowledge on the subject.
When you write a thesis , dissertation , or research paper , you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:
- Demonstrate your familiarity with the topic and its scholarly context
- Develop a theoretical framework and methodology for your research
- Position your work in relation to other researchers and theorists
- Show how your research addresses a gap or contributes to a debate
- Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.
Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.
- Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” ( Theoretical literature review about the development of economic migration theory from the 1950s to today.)
- Example literature review #2: “Literature review as a research methodology: An overview and guidelines” ( Methodological literature review about interdisciplinary knowledge acquisition and production.)
- Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” ( Thematic literature review about the effects of technology on language acquisition.)
- Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” ( Chronological literature review about how the concept of listening skills has changed over time.)
You can also check out our templates with literature review examples and sample outlines at the links below.
Before you begin searching for literature, you need a clearly defined topic .
If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions .
Make a list of keywords
Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.
- Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
- Body image, self-perception, self-esteem, mental health
- Generation Z, teenagers, adolescents, youth
Search for relevant sources
Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:
- Your university’s library catalogue
- Google Scholar
- Project Muse (humanities and social sciences)
- Medline (life sciences and biomedicine)
- EconLit (economics)
- Inspec (physics, engineering and computer science)
You can also use boolean operators to help narrow down your search.
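For example, a hypothetical query built from the keyword list above might look like: ("social media" OR Instagram OR TikTok OR Snapchat) AND ("body image" OR "self-esteem" OR "self-perception") AND (adolescen* OR teenager* OR "Generation Z"). Adapt the exact operator and truncation syntax to the database you are searching, since the rules vary between platforms.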
Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.
You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.
For each publication, ask yourself:
- What question or problem is the author addressing?
- What are the key concepts and how are they defined?
- What are the key theories, models, and methods?
- Does the research use established frameworks or take an innovative approach?
- What are the results and conclusions of the study?
- How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
- What are the strengths and weaknesses of the research?
Make sure the sources you use are credible , and make sure you read any landmark studies and major theories in your field of research.
You can use a template to summarize and evaluate the sources you’re thinking about using.
Take notes and cite your sources
As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.
It is important to keep track of your sources with citations to avoid plagiarism . It can be helpful to make an annotated bibliography , where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:
- Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
- Themes: what questions or concepts recur across the literature?
- Debates, conflicts and contradictions: where do sources disagree?
- Pivotal publications: are there any influential theories or studies that changed the direction of the field?
- Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?
This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.
- Most research has focused on young women.
- There is an increasing interest in the visual aspects of social media.
- But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.
There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).
Chronological
The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.
Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.
If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.
For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.
Methodological
If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:
- Look at what results have emerged in qualitative versus quantitative research
- Discuss how the topic has been approached by empirical versus theoretical scholarship
- Divide the literature into sociological, historical, and cultural sources
Theoretical
A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.
You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.
Like any other academic text , your literature review should have an introduction , a main body, and a conclusion . What you include in each depends on the objective of your literature review.
The introduction should clearly establish the focus and purpose of the literature review.
Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.
As you write, you can follow these tips:
- Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
- Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
- Critically evaluate: mention the strengths and weaknesses of your sources
- Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts
In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.
When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.
This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.
Scribbr slides are free to use, customize, and distribute for educational purposes.
If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.
- Sampling methods
- Simple random sampling
- Stratified sampling
- Cluster sampling
- Likert scales
- Reproducibility
Statistics
- Null hypothesis
- Statistical power
- Probability distribution
- Effect size
- Poisson distribution
Research bias
- Optimism bias
- Cognitive bias
- Implicit bias
- Hawthorne effect
- Anchoring bias
- Explicit bias
A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .
It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.
There are several reasons to conduct a literature review at the beginning of a research project:
- To familiarize yourself with the current state of knowledge on your topic
- To ensure that you’re not just repeating what others have already done
- To identify gaps in knowledge and unresolved problems that your research can address
- To develop your theoretical framework and methodology
- To provide an overview of the key findings and debates on the topic
Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.
The literature review usually comes near the beginning of your thesis or dissertation . After the introduction , it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology .
A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts , with an introduction , a main body, and a conclusion .
An annotated bibliography is a list of source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a paper .
Cite this Scribbr article
If you want to cite this source, you can copy and paste the citation below.
McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved November 6, 2023, from https://www.scribbr.com/dissertation/literature-review/

Top 5 Ways Literature Review Software Can Help You
Apr 8, 2019

People who have never used literature review software often tell us that they don’t need it – they already have a process in place that works just fine. But with regulatory scrutiny increasing daily, maybe it’s time to look at ways to do better than “fine”.
With the EU MDR and IVDR compliance deadlines looming, device manufacturers need practical solutions for efficiently producing clinical evaluation reports. Notified Bodies, swamped with more work than available resources, are looking for evidence of transparent, repeatable processes and standardized output. The bottom line: “fine” isn’t going to cut it anymore.
While there’s no shortage of talk about MDR and IVDR and the impact the new regulations will have on device manufacturers, there is far less in the way of concrete actions to achieve compliance. What can the industry do today to prepare? And how can literature review software help? Here are five of the top ways literature review software can help you today:
1. Compliance
Transparent, auditable, and reproducible results are fundamental to a compliant review process. When your Notified Body comes knocking, literature review software can help you with features such as a detailed audit log, data version control, and quick searching and lookup capabilities. Simply put, it effectively documents your protocol and process so that you don’t need to do it manually.
2. Time Savings
If there’s one thing that almost every researcher wishes for, it’s more time. From conducting searches to removing duplicates and irrelevant articles, screening, extracting data, and preparing reports, literature review is a time-intensive process. Using literature review software to complete these tasks can improve efficiency by 40%-60%.
3. Accuracy
No one wants to discover a mistake in their review right before – or worse, during – an audit. Duplicate references, transcription errors, and data entry errors can skew, or even invalidate, your results. Literature review software provides built-in automation and validation tools that dramatically reduce the potential for errors in your review process.
4. Compatibility
Although literature review software can help with many tasks throughout the review life cycle, your process likely includes other tools for searching and storing references and data. You may also need to use literature review data in reports and submissions. Literature review software should allow you to import and export your data in all the most common file formats, such as CSV, Excel, Word, PDF, RIS, and ENLX.
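As a rough illustration of what that compatibility means in practice, the Python sketch below reads a RIS reference export and flattens it into a CSV file. It handles only a handful of common tags, and the file names are placeholders, so treat it as a simplification rather than a full RIS parser.

```python
import csv

TAG_MAP = {"TY": "type", "TI": "title", "AU": "authors",
           "PY": "year", "JO": "journal", "DO": "doi"}

def parse_ris(path):
    """Parse a RIS reference file into a list of dicts (simplified)."""
    records, current = [], {}
    with open(path, encoding="utf-8") as fh:
        for raw in fh:
            line = raw.rstrip("\n")
            if len(line) < 2:
                continue
            tag, value = line[:2], line[6:].strip()
            if tag == "ER":  # end-of-record marker
                records.append(current)
                current = {}
            elif tag in TAG_MAP:
                key = TAG_MAP[tag]
                # RIS repeats AU once per author, so append rather than overwrite
                current[key] = f"{current[key]}; {value}" if key in current else value
    return records

def export_csv(records, path):
    """Write the parsed references to a flat CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(TAG_MAP.values()))
        writer.writeheader()
        writer.writerows(records)

# Hypothetical file names; point these at your own exports
export_csv(parse_ris("search_results.ris"), "search_results.csv")
```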
5. Collaboration
Literature review software packages today are typically cloud-based, allowing them to be used from any browser on any device. With a centralized, shared data set, your team can collaborate in real time, regardless of location.

Top 15 Tools to Help You With Writing a Literature Review
- July 27, 2015
- Posted by: Mike Rucker

When you are preparing to write your literature review having the right tools at hand can immensely increase your productivity. Since a lot of your time getting prepared to write will be spent searching for papers, reading and reviewing them, as well as organizing citations… here are some programs, tools and sites that can help you out and make the process more efficient and organized.
- Get access to a good research database
You need to make sure you can access research databases that cover your subject matter. The easiest way to do this is via your university. School librarians are a great, often underutilized resource; your best bet is to start there. Alternatively, you can look for literature through free websites that provide access to research articles, journals, published studies and other scholarly sources. Try the following:
- Google Scholar
- Academia.edu
- Be sure about the citation style you’re using
It’s essential that you get familiar with the citation style you are expected to use. Each style has its own specifics and for complex examples you might need to consult the original source (manuscript) as you write your review.
- For APA (American Psychological Association) style see: https://owl.english.purdue.edu/owl/resource/560/16
- Check out this quick guide for Harvard referencing: https://www.staffs.ac.uk/assets/harvard_quick_guide_tcm44-47797.pdf
- For Chicago style use: http://www.chicagomanualofstyle.org/tools_citationguide.html
- If you need to follow MLA (Modern Language Association) guidelines, have a look at: https://owl.english.purdue.edu/owl/resource/747/01
Also, some databases and websites already offer a ‘cite’ option/button, so see if you can simply utilize existing services that can save you precious time.
- Manage your literature
As you go through numerous sources, it is important to keep track of them all and to organize and store them for easy reference. The easier you make it to find them again, the simpler it will be to include them in your text when you need to. This is where Mendeley and EndNote might come in handy.
- With Mendeley all you need to do is download a PDF of the paper and move it to Mendeley. The program then annotates all the titles, authors and other information, and also syncs the papers to the cloud and other devices, so you will not lose your data if your computer breaks down. It also allows you to insert the citations in the correct format as you work on your literature review.
- EndNote does a similar job and also lets you communicate and share with your colleagues. However, it’s only free for a trial period of 30 days after which you will need to pay a subscription fee.
- Watch your style and grammar
Proper grammar is key to a great literature review, so it is good to have some tools at hand that can help you check or double-check meanings and definitions, as well as spelling and correct word use. You may find the following resources useful:
- Dictionary.com
- Thesaurus.com
- Store and organize the information you find
Since you will be going through a plethora of sources, it might be a good idea to keep all your ideas organized. Here are a couple of programs that can help with that:
- Evernote is a simple software program that can help you with the task of storing your ideas and accessing them later. No more bookmarks and scribbly notes. You can instead take screenshots, write digital notes yourself or take photos with your phone. Evernote makes sure all data is synchronized and accessible every day, and the basic option is free of charge.
- OneNote is Microsoft’s version of Evernote and in a similar fashion to Evernote lets you keep all of your research available in one place. Those accustomed and happy with other products associated with Microsoft’s Office Suite might find OneNote’s user interface familiar.


Best Systematic Literature Review Software
A systematic literature review is one of the most important stages of the CER process, but it’s also time-consuming and labor-intensive.
Literature review software tools make the process a little easier by streamlining the research, helping with effective data storage, and providing helpful data analysis. No matter how much we wish, literature review software will only ever automate some of the literature review process. However, it will help a lot, so we put together a guide to help you get the most out of these tools.
Literature review software refers to specialized programs developed to help researchers organize, manage, and analyze research material more efficiently. It’s like having a personal assistant who knows all the greatest research places and can juggle references like a circus act. These handy tools help you organize, manage, and analyze papers as you search them, and they are amazing when you are going through scientific articles faster than a caffeine-fueled scientist and need somewhere to store and sort all of the data you are collecting as you go.
How can literature review software help you as a CER writer?
Literature review software has a lot of uses for a medical writer or medical manufacturer. Here are the ones we found to be the most helpful.
Efficient Literature Search and Organization:
Some literature search tools allow users to do extensive article searches across different databases, online journal sites, and repositories. You can use some of these tools to import references, extract key data and tables, and arrange them all in one place. The capacity to search and arrange massive quantities of material systematically and intuitively saves time, to say the least. Sure, you can use spreadsheets, but then you have to log the articles manually, decide where they go and which parameters to arrange them by, and then do the arranging. Anyone who has done that knows how painstakingly long it can take. Given that several literature review tools can automate the arrangement, most people would agree these tools are worth every penny.
Improved Collaboration and Teamwork:
Many literature review software solutions allow several people to work together effortlessly. You can share your work with colleagues and other researchers, or ask experts to help out in the literature review process. It’s great for the review stage as well: some literature review software lets you leave comments and ideas, so you can go through the process in one pass first and then come back and edit as you like.
Citation Creation and Management:
Accurate and consistent citation of all sources is one of those things you really can’t mess with. Fortunately, there are many online sites and web-based tools that offer to do citations for you. However, going to a website and searching for each article you read takes a lot of work. Most literature review software will produce citations automatically; you can choose whichever citation style you want (e.g., APA, MLA) and easily modify it. In short, literature review software simplifies citation and reference management and saves you time, which is what these tools are for.
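To illustrate the kind of work such a feature automates, here is a toy Python formatter that builds a simplified APA-style reference from plain metadata. The fields and the sample entry are hypothetical, and real tools handle far more reference types and edge cases.

```python
def apa_citation(authors, year, title, journal, volume, pages, doi=None):
    """Build a simplified APA-style journal reference from metadata."""
    if len(authors) > 1:
        author_str = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        author_str = authors[0]
    reference = f"{author_str} ({year}). {title}. {journal}, {volume}, {pages}."
    if doi:
        reference += f" https://doi.org/{doi}"
    return reference

# Hypothetical metadata, for demonstration only
print(apa_citation(
    authors=["Smith, J.", "Lee, K."],
    year=2021,
    title="A made-up study of literature review automation",
    journal="Journal of Hypothetical Examples",
    volume="12(3)",
    pages="45-67",
    doi="10.0000/example.2021.001",
))
```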
Data Extraction and Analysis:
Now this is the cream of the crop. The systematic review process is extensive, and while we love reading through the resources, data extraction is not always fun. Or, to be exact, reading through an interesting journal paper or doing a more detailed search is more fun when the data extraction is taken care of. Some systematic literature review tools use machine learning for data extraction: you choose the journal article, and the software collects and organizes data like research design, sample sizes, results, and conclusions. You can then use that data to do a meta-analysis, or use statistical software to do it for you.
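Once effect sizes and variances have been extracted, the downstream pooling is largely arithmetic. The Python sketch below computes a fixed-effect (inverse-variance) pooled estimate with a 95% confidence interval from made-up values; it is illustrative only and not a substitute for a dedicated meta-analysis package.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted pooled effect with an approximate 95% CI."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical extracted effect sizes (e.g., standardized mean differences) and variances
effects = [0.42, 0.31, 0.55, 0.18]
variances = [0.02, 0.04, 0.03, 0.05]
pooled, ci = fixed_effect_pool(effects, variances)
print(f"Pooled effect: {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```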
Screening process:
Literature review software is great for screening the papers you collect. A simple keyword search is enough to get some of the sorting out of the way. Nevertheless, for the next step, you will need dedicated systematic review tools, which usually come equipped with more advanced options. To support systematic reviews, they let you tweak the article inclusion criteria and do abstract screening. The best software will help you with protocol development, meta-analysis, keyword highlighting, and even critical appraisal. Some highlight the most important parts of an article so you can read through it faster, and some provide tools for screening literature against various criteria.
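A very rough sketch of rule-based title and abstract screening is shown below. The inclusion and exclusion terms are hypothetical placeholders; real systematic review tools layer human double-screening and machine learning on top of simple rules like these.

```python
# Minimal keyword screen: keep records that mention an inclusion term
# and none of the exclusion terms. All terms here are made-up placeholders.
INCLUDE = {"randomized", "clinical trial", "cohort"}
EXCLUDE = {"animal model", "in vitro"}

def passes_screen(record):
    """Return True if the title/abstract text satisfies the keyword rules."""
    text = (record.get("title", "") + " " + record.get("abstract", "")).lower()
    if not any(term in text for term in INCLUDE):
        return False
    return not any(term in text for term in EXCLUDE)

records = [
    {"title": "A randomized clinical trial of device X", "abstract": "..."},
    {"title": "In vitro evaluation of coating Y", "abstract": "..."},
]
included = [r for r in records if passes_screen(r)]
print(f"{len(included)} of {len(records)} records pass the keyword screen")
```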
Choosing a literature review software
So, how do you choose the perfect software for you? First, don’t just go and buy the most expensive one. Many of these are monthly subscription-based tools, so try a few of them before deciding on one. Based on what you are looking for, make a list of attributes you would like the software to have, and see if you can find all of them in one package. Here are some things to look for to help you manage systematic reviews:
A. Confirm that the software works smoothly with your existing systems, such as reference management software or document repositories. A seamless fit eliminates data transfer headaches and ensures your work doesn’t take a hit.
B. Look for software with good built-in research tools that can search across several databases. The ability to conduct complex searches, filter results using inclusion/exclusion criteria, and obtain full-text articles directly from the platform streamlines the research process.
C. Choose software with an easy-to-use interface. Research is complex enough without having to figure out how to use the software. A well-designed user interface is simple and saves time; look for customizable dashboards, straightforward navigation, and clear search results.
D. Try software that has co-working spaces, especially if you have a large project. Shared libraries, commenting, and annotation features come in handy when a team is working on a project together.
E. Prioritize software that can work with data. For example, go for software that has data extraction, analysis, and report-creating tools. The more tools that help you sort your findings, the better.
Remember, the software is there to assist researchers, not replace them, so you will probably not get every option you want in one package. However, as any CER writer will agree, anything that helps you conduct systematic reviews a bit faster, more accurately, and with less risk of bias is a good tool.
Writing a Literature Review Made Simple
We make writing a literature review both quick & quality!
Best quality literature review service
How to get help with a literature review
Our literature review samples
Why hire a literature review writer here
Some students lack time for writing their literature review, and others have too few skills. When writing this academic work, you’re in charge of critically examining the data, discovering gaps in current knowledge, and demonstrating to your professor that you’ve done everything correctly.
First, while writing a literature review, you’ll work with various sources and concepts that must come together into a single logical argument. Second, many ideas and concepts from the sources will be unfamiliar and difficult to grasp. Lastly, there are no universal rules for what to include and what to avoid when writing your literature overview.
In case of trouble, a service for writing student literature reviews is a workable solution.
Try Writing a Lit Review Stress-Free With Us
Our service makes writing a lit review easier if you don’t have time to fit this into your schedule. We have many practiced writing specialists who can assist anybody in need. You’re guaranteed to submit your piece of writing without delay and avoid the revisions your professor might require.
Every writing expert who provides literature review services holds a master’s or doctoral degree. Besides, each possesses substantial knowledge in several of the 50+ academic areas. Our writing service helps you with your work following a strict money-back policy: get either satisfaction or a refund.
With Our Help Literature Review Comes Right on Time
We provide 24/7 writing services. Our writing experts find everything doable: a literature review map, a systematic literature review, one for a thesis or dissertation, etc. With our help, your literature review is sure to be a mistake-free piece of writing, regardless of urgency or difficulty.
When assigned to write a literature review, we assess your order information and appoint the most qualified specialist to do this from scratch. It is a feasible option when you don’t know efficient methods for writing your literature overview or when the deadline is too pressing to accomplish this yourself.
Go for the Best Literature Review Writing Service on the Web
If you buy a literature review paper from our reputable writing service, be confident in the maximum accuracy of the content. Complete the order form, make a safe payment, and wait for the writing to be done. We’ll let you know when your piece of writing is ready for downloading.
It’s not simple to find the best literature review writing service, especially if you want it done to a high degree. Many writing services will likely assign someone with little knowledge in your area and non-fluent English. It leads to low-standard writing pieces and plagiarism, getting you in severe trouble.
Our writing services are the job of experts described below:
- Well-experienced writing professionals
- People who know the structure of writing
- Native or native-like academic English users
- Writing experts with access to #1 literature
All the competencies you expect from a literature review writer are real with our service.
Assistance With Writing a Literature Review and Extra Benefits
Our services aim to provide customers with the best possible literature review writing help. After the writing expert completes your order, you’ll have two weeks of free revisions to the text. Providing that your instructions remain the same, the writing expert will make corrections without limits.
Your experience with our writing service is confidential. Literature review writers follow stringent privacy guidelines and work under a non-disclosure agreement. TLS encryption ensures that usage of our writing service is risk-free. You’re also offered tried and true payment methods: Visa and MasterCard.
Additionally to a literature review service, we offer:
- Full text of sources – a list of the literature used in your writing piece.
- First-priority option – your order will find its writing specialist quicker.
- Version by a different writing expert – get another literature overview.
- Initial draft – we send 30% of your writing piece before the deadline.
- Plagiarism report – receive evidence of completely original content.
- Extra quality check – done additionally to the default proofreading.
You can buy literature review on every writing level, from high school to doctorate studies. Another thing our writing specialists cover is formatting, so each style is manageable to us.
Don’t put your literature review in danger – order it from the top-rated writing service.
- Open access
- Published: 02 November 2023
Software defect prediction using learning to rank approach
- Ali Bou Nassif 1 ,
- Manar Abu Talib 2 ,
- Mohammad Azzeh 3 ,
- Shaikha Alzaabi 1 ,
- Rawan Khanfar 1 ,
- Ruba Kharsa 2 &
- Lefteris Angelis 4
Scientific Reports volume 13, Article number: 18885 (2023)
- Computer science
- Electrical and electronic engineering
- Scientific data
Software defect prediction (SDP) plays a significant role in detecting the most likely defective software modules and optimizing the allocation of testing resources. In practice, though, project managers must not only identify defective modules, but also rank them in a specific order to optimize the resource allocation and minimize testing costs, especially for projects with limited budgets. This vital task can be accomplished using Learning to Rank (LTR) algorithm. This algorithm is a type of machine learning methodology that pursues two important tasks: prediction and learning. Although this algorithm is commonly used in information retrieval, it also presents high efficiency for other problems, like SDP. The LTR approach is mainly used in defect prediction to predict and rank the most likely buggy modules based on their bug count or bug density. This research paper conducts a comprehensive comparison study on the behavior of eight selected LTR models using two target variables: bug count and bug density. It also studies the effect of using imbalance learning and feature selection on the employed LTR models. The models are empirically evaluated using Fault Percentile Average. Our results show that using bug count as ranking criteria produces higher scores and more stable results across multiple experiment settings. Moreover, using imbalance learning has a positive impact for bug density, but on the other hand it leads to a negative impact for bug count. Lastly, using the feature selection does not show significant improvement for bug density, while there is no impact when bug count is used. Therefore, we conclude that using feature selection and imbalance learning with LTR does not come up with superior or significant results.
Introduction
Recently, software systems have experienced massive growth in number, size, and complexity. These dramatic changes have elevated the demand on software testing, which is costly and time-consuming 1 . With the aim of efficient allocation of software testing resources, Software Defect Prediction (SDP) has been an active area of research. SDP is the predictive process of identifying software modules with defect- or bug-proneness based on their method-level and class-level metrics 2 . It is a helpful tool during the testing phase to improve quality, reliability, and cost reduction. Previous SDP models used classification Machine Learning (ML) algorithms, such as Support Vector Machine (SVM) 3 , Random Forest (RF) 4 , 5 , K-Nearest Neighbor (KNN) 6 , and Naïve Bayes (NB), to provide binary classifications for the existence of defects in software modules 7 , 8 . SDP as a classification tool proved its importance. Still, its outcomes were insufficient in practice, as they do not account for the importance of a defective module and which modules should be examined first 9 . To produce more accurate resource assignments, researchers started to study SDP as a ranking problem using Learning-to-Rank (LTR) or regression algorithms. Instead of finding an explicit defect count prediction, ranking algorithms work towards ordering modules according to their defects or defect densities such that, for instance, the module with the highest ranking is assigned the most testing resources 10 .
LTR is an algorithm of machine learning that builds a function to solve ranking problems on queries. It works by predicting a score in each instance, and the instances are then sorted based on the score assigned by the ranking model 11 . LTR is beneficial for many applications in information retrieval, such as e-commerce, social networks, and recommendation systems. It has proven its performance in other applications like machine translation, computational biology, recommender systems, and SDP in software engineering 12 . LTR algorithms can be classified into three approaches based on their ranking mechanism: pointwise, pairwise, and listwise, as illustrated in Fig. 1 . The pointwise approach takes an individual item from the list and trains a regressor on it to predict how relevant it is for the query. The score of each item in the list (in our case, each software module) is independent of the scores of other modules. The final ranking is achieved by sorting the resultant list by the scores of the software modules 13 . The pairwise approach looks at a pair of software modules at a time. Given a pair of modules, it tries to come up with the optimal ordering for that pair and compare it to the actual ranks of these pairs of modules. The listwise approach treats the whole list as an entity and predicts the optimal ordering for each module. It uses probability models to minimize ordering errors 14 .

Figure 1. Pointwise, pairwise, and listwise LTR.
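As a minimal illustration of the pointwise idea described above (the simplest of the three approaches, not the listwise models this paper compares), the sketch below fits an ordinary regressor on module-level metrics and then ranks unseen modules by the predicted score. The metrics and data are random placeholders, not the datasets used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder module-level metrics (e.g., size, complexity, churn) and bug counts
X_train = rng.normal(size=(200, 3))
y_train = rng.poisson(lam=np.clip(X_train[:, 0] + 1.5, 0.1, None))
X_new = rng.normal(size=(10, 3))

# Pointwise LTR: train a regressor on individual modules...
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# ...then rank unseen modules by predicted defect-proneness (highest first)
scores = model.predict(X_new)
ranking = np.argsort(-scores)
print("Modules in descending testing priority:", ranking)
```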
This research paper proposes a comprehensive comparison study of the listwise LTR approach for the SDP, starting by importing datasets that contain previous details about software modules (i.e., quality metrics and the number of bugs in each module). Subsequently, we build the SDP model by training a regression algorithm and optimizing it using Grid Search with Fault-Percentile-Average as the objective function to achieve better ranking accuracy 15 . Evaluation is the last step, yet the most essential, because it ensures the quality and reliability of the model 16 . To further analyze the proposed process and provide the desired solutions, we address the following research questions:
RQ1. What is the role of the target variables on the performance of the employed LTR techniques?
Two target variables are studied in this research paper: bug count and bug density. Bug count refers to the number of bugs present in a module. Bug density is a measure of how frequently a bug appears per line of code. Bug density gives a better indication of which modules require more testing resources. Given two modules with the same number of bugs, the module with a smaller number of lines of code (LOC) has a higher testing priority, as it has a higher bug density 17 .
RQ2. What is the average improvement when using imbalanced learning with LTR techniques?
Most SDP datasets have an imbalanced distribution with an excess of zero-count observations. Imbalanced datasets negatively affect performance, as the model is likely to be influenced by the excessive observations 18 , 19 . Typically, SDP datasets are imbalanced where the non-defective modules outnumber the defective modules. In this paper, we study the impact of random under-sampling of the zero-count instances (non-defective modules) on the performance of LTR techniques 20 .
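A simple form of the random under-sampling mentioned in RQ2 can be sketched as follows; the column names and the keep ratio are assumptions for illustration, not the exact procedure used in the paper.

```python
import pandas as pd

def undersample_zero_bug_modules(df, target="bug_count", keep_ratio=1.0, seed=42):
    """Randomly drop zero-bug rows so they number at most `keep_ratio` times
    the defective rows (hypothetical column names and ratio)."""
    defective = df[df[target] > 0]
    clean = df[df[target] == 0]
    n_keep = min(len(clean), int(len(defective) * keep_ratio))
    clean_sampled = clean.sample(n=n_keep, random_state=seed)
    # Recombine and shuffle the balanced dataset
    return pd.concat([defective, clean_sampled]).sample(frac=1, random_state=seed)

# Toy frame: six clean modules, two defective ones
toy = pd.DataFrame({"loc": [10, 20, 30, 40, 50, 60, 70, 80],
                    "bug_count": [0, 0, 0, 0, 0, 0, 2, 5]})
balanced = undersample_zero_bug_modules(toy)
print(balanced["bug_count"].value_counts())
```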
RQ3. What is the role of feature selection in the accuracy of LTR techniques?
Feature selection is an essential preprocessing technique that can improve the execution time and accuracy of ML models, especially in SDP 21 , 22 . Features irrelevant to the target value can affect the overall performance of the model 23 . Feature selection is the process of choosing the most relevant attributes to train the model and enhance prediction outcomes. In this study, we apply the Information Gain (InfoGain) method to eliminate unrelated features and select the most related ones 9 .
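Information gain for a numeric target is commonly approximated with mutual information. The sketch below ranks placeholder module metrics with scikit-learn's mutual_info_regression and keeps the top three, as a stand-in for the InfoGain selection described here rather than a reproduction of it.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                               # placeholder module metrics
y = np.abs(2 * X[:, 0] + X[:, 3] + rng.normal(size=300))    # placeholder bug counts

# Score every metric against the target, then keep the k most informative ones
scores = mutual_info_regression(X, y, random_state=1)
k = 3
selected = np.argsort(-scores)[:k]
print("Selected feature indices:", selected, "scores:", np.round(scores[selected], 3))
X_reduced = X[:, selected]
```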
The rest of the paper proceeds as follows. Section “Literature review” discusses the related work and relevant literature. Section “Research methodology” highlights the methodology for building and evaluating the model. Section “Results” illustrates the experimental work and the results, while Section “Threats to validity” mentions threats to validity. Lastly, Section “Conclusion” provides a summarized conclusion and suggests directions for future work.
Literature review
SDP has been a hot topic for many years. Researchers have conducted a large number of studies, explored many areas in the field, and applied various algorithms seeking better accuracy. This section reviews the related works and algorithms used to construct SDP models; however, it focuses on the SDP ranking models as they are most relevant to our study.
Software defect prediction
Each dataset in the classification SDP model is defined as \(D=[{{\varvec{x}}}_{i},{y}_{i}]\) , for modules \(i \in [1,n]\) . \({{\varvec{x}}}_{i}=[{x}_{i1},{x}_{i2},{x}_{i3},\dots {x}_{im}]\) represents a vector of \(m\) independent features (i.e., quality metrics) of the \({i}^{th}\) module. The dependent variable is \({y}_{i}\in \{-\mathrm{1,1}\}\) . “1” represents the defective modules; “-1” represents the clean ones. The equation \({y}_{i}{\prime}=f\left({{\varvec{x}}}_{i}\right)\) represents the ML classification models that predict \({y}_{i}{\prime}\) depending on the \({{\varvec{x}}}_{i}\) . Different algorithms \(f\left(.\right)\) , provide different accuracies in classifying the modules.
ML Classifiers have been the most popular approach in the field of SDP. Guo et al. 4 constructed a classification model using RF on five NASA datasets and used Defect Detection Rate (PD) and Overall Accuracy (ACC) to evaluate the model. Alsghaier and Akour 3 used the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) to optimize the SVM algorithm and applied the model to 24 NASA datasets; they used accuracy, recall, precision, specificity, and F-measure as evaluation measures. As the number of used classifiers increased, researchers started conducting studies to compare them. Bansal 24 constructed a comparative analysis of six classification algorithms. He compared the results from using static metrics, results from using change metrics, and results from combining both. Bansal evaluated the models using the Matthews Correlation Coefficient (MCC) and Area Under the Curve (AUC). He found that models trained by a combination of static and change metrics performed the best. Moreover, models that used only change metrics slightly outperformed models that used only static metrics.
Li et al. 25 conducted a benchmark study using 17 classifiers on 27 datasets from MDP and Github. After applying AUC to evaluate the classifiers, Li et al. found that RF and Multilayer Perceptron (MLP) achieved good results,however, there was no significant difference in performance among the 17 classifiers. Weyuker et al. 26 compared four modeling methods (NBR, RF, Recursive Partitioning (RP), and Bayesian Additive Regression Trees (BART)) and found that NBR and RF significantly surpassed RP and BART. Previous studies have found that eliminating irrelevant features using feature selection can significantly improve the model 27 , 28 , 29 . Chen et al. 30 applied multi-objective optimization for feature selection. Yang et al. 31 utilized InfoGain to select the most relevant three metrics of each dataset and found that most of the selected metrics were change metrics. Wang et al. 28 applied a threshold-based feature selection method. They discovered that three features can construct an effective classifier and that model prediction improved when they removed 98.5% of the features.
Balogun et al. 32 conducted a comparative study between three of the most widely used filtering approaches for dimensionality reduction (Chi-Square (CS), ReliefF (REF), and InfoGain), for two of the SDP classification algorithms (NB and DT). They also proposed the “Rank Aggregation-Based Multi-Filter Feature Selection (RMFFS)” method, aggregating the resulting features from multiple filters. Balogun et al. found that RMFFS performed noticeably better than the solo techniques, especially the G-Mean, which resulted in the best outcomes.
Shin et al. 33 experimented on 32 SDP datasets with LIME and Breakdown to determine whether they reasonably explain the classification results from 5 classifiers (see Table 1 ). Their experiments revealed that none of the mentioned methods consistently explained different settings, making them unreliable for practical use.
López-Martín et al. 34 developed a novel algorithm to predict the Defect Density (DD) of projects based on function points (FP). The algorithm utilized transformation and reduction concepts to enhance and surpass the limitations of the original KNN algorithm in regression. The “transformed k-nearest neighborhood output distance minimization” algorithm (TkDM) minimizes the distance between the most similar k projects to the project whose DD is being predicted; then, it applies an inverse transformation to the output. Four datasets were selected from the ISBSG release 2018 35 containing various projects with various development types and programming languages. López-Martín et al. 34 chose the Mean Absolute Residuals (MAR) and Median Absolute Residuals (MdAR) as the main accuracy metrics for the evaluation of their algorithm, as well as Standardized Accuracy (SA) and effect size for further assessment of the algorithm performance. Also, they tested the algorithm against the SVR and NN models. Moreover, they tested different values for the number of neighbors (K) to choose the best one. Finally, they demonstrated that their algorithm yielded the highest SA and the least MAR and MdAR values compared to other algorithms.
Learning to rank for software defects prediction
Recently, more research has been done on Software Number Prediction (SNP), where researchers predict the exact number of defects in a software module using regression algorithms 36 , 37 , 38 . Bal and Kumar 39 studied the efficiency of the “extreme learning machine” (ELM) for imbalanced learning in SNP. They also derived a new method called “weighted regularization ELM” (WR-ELM) and evaluated it on 26 datasets using the Average Absolute Error (AAE), the Average Relative Error (ARE), and Pred(I). Bal and Kumar 39 found that WR-ELM outperformed other techniques for predicting the minority classes in imbalanced datasets.
Tong et al. 40 utilized the “subspace hybrid sampling ensemble” (SHSE) method for SNP. They evaluated their model on 27 open-source, public datasets detailed in Table 1 . The work of Tong et al. 40 resulted in an approximate FPA improvement of 8–15% compared to the previous ensemble and zero-inflated methods. A recent study by Yu et al. 41 demonstrated that the prediction of the exact number of bugs in the software modules (i.e., SNP) is still difficult. They reached this conclusion after a detailed study using various regression algorithms, datasets, and optimization methods (see Table 1 for details). They evaluated the regression algorithms on the Average Absolute Error (AAE) and pred (0.3) 42 . Yu et al. 41 suggested that the ranking SDP is the best approach for the regression algorithms.
In the ranking SDP model, each module in the dataset is represented as \({\mathbf{M}}_{i}=[{\mathbf{x}}_{i},{y}_{i}]\) , where \({\mathbf{x}}_{i}=[{x}_{i1},{x}_{i2},{x}_{i3},\dots ,{x}_{im}]\) is a vector of \(m\) independent features (i.e., quality metrics) of the \(i\)-th module. The dependent variable \({y}_{i}\in \mathbb{R}\) represents the number of bugs in the \(i\)-th module, or the bug density (i.e., \(\frac{\#bugs}{LOC}\) ). \(D=\{{\mathbf{M}}_{i}=[{\mathbf{x}}_{i},{y}_{i}]\}_{i=1}^{n}\) defines the software defect dataset, where \(n\) is the number of modules in \(D\) . The goal of the LTR algorithms is to build a prediction model that ranks new modules based on the number of bugs or the bug density, where \({M}_{j}>{M}_{k}\) means that module \(j\) is more defect-prone than module \(k\) 31 .
Unlike SDP for classification, SDP for ranking is still relatively new, with fewer studies. Ostrand et al. 43 performed a simple Negative Binomial Regression (NBR) on one static metric (i.e., LOC) to predict the number of defects in each module and then ranked the modules according to their defect density. They evaluated the model by calculating the percentage of faults in the top 20% of modules, which produced better results than a simple regression model. Yang et al. 31 proposed an LTR approach that optimized the linear regression model using CoDE, with FPA as both the objective function and the performance measure, demonstrating the effectiveness of directly optimizing the ranking algorithm. Yu et al. 9 applied 23 LTR algorithms to 41 datasets from the PROMISE repository 44 , then used \(Norm({P}_{opt})\) and FPA to evaluate and compare the algorithms. They found that Bayesian Ridge Regression (BRR) performed the best according to FPA, while BRR and the LTR approach by Yang et al. performed the best when evaluated with both FPA and \(Norm({P}_{opt})\) . Yu et al. 9 divided the 23 algorithms into four categories: the classification-based pointwise approach, the regression-based pointwise approach, the pairwise approach, and the listwise approach.
Some ML algorithms do not perform well with their default hyper-parameter settings, and selecting the best hyper-parameters can boost their predictive performance 45 . Researchers have utilized many optimization techniques to improve their models’ performance by tuning the hyper-parameters of the algorithm to minimize or maximize an objective function. Tantithamthavorn et al. 46 applied an automated parameter optimization technique called Caret to optimize SDP and found that the AUC improved by 0.4 points after applying Caret. Yang et al. 31 performed CoDE optimization, with FPA as the objective function, to directly optimize the ranking performance of the SDP model. Canfora et al. applied GA to optimize the algorithm. Buchari et al. 47 used a meta-heuristic chaotic Gaussian PSO to optimize their regression model and chose FPA as their objective function. PSO was introduced by Kennedy and Eberhart 48 , who derived the algorithm from the behavior of birds and fish when they search for food in groups: every group member benefits from the knowledge of the swarm, so a flock of birds can integrate the experiences of all members to find food in much less time. PSO is a heuristic algorithm used to search for the optimal maximum or minimum solution to a problem. Although PSO does not guarantee finding the true global optimum, it usually finds a value that is close enough to be sufficient. Alazba and Aljamaan 49 combined ensemble learning with optimization methods, using a grid search to find the best hyperparameters of tree-based ensemble algorithms. After assessing their approach on 21 datasets, Alazba and Aljamaan 49 found that RF and XGB outperformed all other tree-based classifiers. Ni et al. 38 investigated the usefulness of effort for cross-project defect prediction; their results are promising and show superior performance compared to traditional cross-project techniques.
It is important to note that some researchers proposed effort-aware approaches that prioritize software modules, aiming to detect more bugs while inspecting only a limited number of modules. For instance, Mende et al. 50 introduced the concept of effort-aware defect prediction (EADP) and presented two strategies for evaluating EADP models. Kamei et al. 51 found that process metrics yielded better results than product metrics in EADP models. In later work, Kamei et al. 52 proposed an Effort-Aware Linear Regression (EALR) model, demonstrating its ability to detect 35% of defective code changes by examining only 20% of all changes. Yang et al. 53 confirmed the effectiveness of slice-based cohesion metrics for EADP. Bennin et al. 54 investigated optimal EADP algorithms and explored the practical benefits of data resampling techniques. Yang et al. 58 discovered that the unsupervised method ManualUp 34 generally outperformed several simple supervised models for change-level EADP. Fu et al. 55 introduced the OneWay method, which utilizes the training dataset to automatically select the best software feature for ManualUp. Other studies explored various effort-aware defect prediction approaches 9 , 56 , 57 , 58 .
Qu et al. 59 suggested integrating developer information into EADP to enhance performance. Çarka et al. 60 proposed using the normalized PofB, which ranks software modules based on predicted defect densities, to assess EADP performance. Huang et al. 61 presented the Classify Before Sorting (CBS+) algorithm for EADP, which outperformed other algorithms in identifying defective changes. Compared to ManualUp, CBS+ identified a similar number of defective changes but required inspecting fewer changes and significantly reduced the initial false alarms. Li et al. 62 investigated the impact of different feature selection algorithms on effort-aware defect prediction. Finally, multiple authors investigated the importance of effort-aware methods for just-in-time software defect prediction 36 , 37 , 63 .
Research methodology
This section discusses the research approach for constructing different SDP ranking models. It states the characteristics of the used datasets, explains the data preprocessing and optimization techniques, explores multiple algorithms for building the regression model, and presents an evaluation strategy to assess and compare models based on various criteria. Figure 2 summarizes the conducted research methodology in this paper.
Figure 2. The process of building the SDP model using the LTR approach.
As depicted in Fig. 2 , we start with an unprocessed dataset, which is imbalanced, unnormalized, and contains many inessential features. Working with raw data is rarely effective; therefore, we preprocess the data using suitable preprocessing techniques (i.e., removing outliers, data normalization, and feature selection). We then build our regression models using well-known regression algorithms. Our experiments cover eight algorithms: MLP, SVR, KNR, BRR, RF, XGB, ZIPR, and ZIGPR. We choose the best hyperparameters of the algorithms (except for the zero-inflated ones) using a grid search that explores many possible combinations of hyperparameter values and selects the combination that optimizes a quality metric; in this case, we search for the hyperparameters that minimize the error of the algorithm’s predictions. Our approach utilizes three-fold cross-validation for a fair and precise assessment of our models. The process is performed with two target variables: bug count and bug density. Finally, we present a comprehensive comparison of the performance of the eight models on the two target variables. The rest of this section gives more details about the methodology, datasets, and metrics we adopted.
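To make this workflow concrete, the following minimal sketch (not the authors' code) outlines the same steps using scikit-learn, assuming a feature matrix X (module metrics) and a target vector y (bug count or bug density) have already been loaded from one of the datasets; the estimator, scorer, and variable names are illustrative placeholders.

```python
# A minimal sketch of the workflow in Fig. 2 (illustrative, not the authors' code).
# Assumes X (module metrics) and y (bug count or bug density) are NumPy arrays.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler

# 1) Normalize every attribute to the [0, 1] range (min-max normalization).
X_norm = MinMaxScaler().fit_transform(X)

# 2) Feature selection and hyperparameter tuning are sketched later in this section.

# 3) Three-fold cross-validation: train on two folds, evaluate on the third.
scores = []
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(X_norm):
    model = RandomForestRegressor(random_state=0)
    model.fit(X_norm[train_idx], y[train_idx])
    y_pred = model.predict(X_norm[test_idx])
    # The paper scores each fold with FPA (defined below); MAE is used here as a stand-in.
    scores.append(mean_absolute_error(y[test_idx], y_pred))

print("mean score over the three folds:", np.mean(scores))
```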
Most previous studies in this field use datasets from the Bug Prediction and PROMISE repositories 44 , 64 . These datasets belong to public projects and contain different types of quality metrics. Early datasets contain method-level metrics (e.g., LOC, McCabe complexity, and Halstead metrics), whereas more recent datasets employ object-oriented and change metrics 65 . Tables 2 and 3 show the static and change metrics used in Bansal’s study 24 .
This research paper uses datasets from public projects to train and test the models. These datasets have different attributes and numbers of instances. D’Ambros et al. 64 collected the Bug Prediction repository, which consists of the PDE and JDT datasets. On the other hand, Ant, Camel, Ivy, Jedit, Lucene, Poi, Synapse, Velocity, Xalan, and Xerces are part of the PROMISE Software Engineering Repository. Table 4 summarizes the characteristics of each dataset 44 , 64 .
Data preprocessing
Data preprocessing is an essential step in building ML models. The Garbage In, Garbage Out (GIGO) principle 66 highlights the importance of the data preprocessing stage in data analysis: the results depend heavily on the completeness, quality, integrity, and consistency of the data fed to the model. Therefore, increasing the data quality can considerably boost the reliability of the results. Data preprocessing techniques include data normalization, under-sampling, and feature selection 67 , 68 . Normalization transforms the data in all attributes into similar ranges to avoid problems caused by large differences between the ranges. The datasets are normalized using the min–max normalization technique 69 , 70 , which transforms all data points into values between zero and one using Eq. (1).
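For reference, the standard min–max formula (presumably the form of Eq. (1) referenced above) rescales each attribute value \(x\) to \({x}^{\prime}=\frac{x-{x}_{min}}{{x}_{max}-{x}_{min}}\) , where \({x}_{min}\) and \({x}_{max}\) are the minimum and maximum values of that attribute.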
Feature selection is a principal data preprocessing technique that enhances performance and reduces complexity by removing irrelevant attributes. This research utilizes InfoGain to select the most crucial features and demonstrates that there are cases where we can achieve the same results using a small percentage of the attributes 71 . InfoGain measures the dependencies between each attribute and the target value; after that, it ranks the variables based on the gain in the target variable (i.e., bug count or density). The attributes that reduce the uncertainty of the target have higher information gain values and thus have a higher chance of being selected 67 , 71 . Equations (2), (3), and (4) are used to calculate the InfoGain.
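For completeness, the standard definitions of these quantities (presumably the forms of Eqs. (2), (3), and (4)) are \(H\left(Y\right)=-\sum_{y\in Y}p\left(y\right){\mathrm{log}}_{2}\,p\left(y\right)\) , \(H\left(Y|X\right)=-\sum_{x\in X}p\left(x\right)\sum_{y\in Y}p\left(y|x\right){\mathrm{log}}_{2}\,p\left(y|x\right)\) , and \(InfoGain\left(Y,X\right)=H\left(Y\right)-H\left(Y|X\right)\) .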
where \(H(Y)\) is the entropy of the target variable \(Y\) , and \(y\) is each class in \(Y\) ; however, since the entropy expects a discrete number of classes, we convert the bug density into discrete ranges before applying Eq. (3).
The result \(H\left(Y|X\right)\) is the conditional entropy of the target variable \(Y\) given a feature \(X\) . Lastly, Eq. (4) finds the gain in \(Y\) after using the feature variable \(X\) .
These formulas are applied to the features (one at a time) to select the most relevant ones.
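The short sketch below shows one way this ranking-and-selection step could be implemented. It is only an assumption-laden illustration: scikit-learn's mutual_info_regression is used as a practical stand-in for InfoGain (the paper does not state which implementation it used), and the percentage threshold mirrors the feature subsets evaluated later in the paper.

```python
# Hedged sketch: rank features by information gain and keep the top `percent` of them.
# mutual_info_regression is a stand-in for the InfoGain measure described above.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_top_features(X, y, percent=10):
    """Return the column indices of the top `percent`% features ranked by information gain."""
    gains = mutual_info_regression(X, y, random_state=0)
    n_keep = max(1, int(np.ceil(X.shape[1] * percent / 100)))
    return np.argsort(gains)[::-1][:n_keep]

# Example usage: keep only the top 10% of the metrics before training.
# top_idx = select_top_features(X_norm, y, percent=10)
# X_reduced = X_norm[:, top_idx]
```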
Model selection and optimization
The comparative study utilizes six state-of-the-art supervised machine learning algorithms to construct regression models that learn from known observations to predict the bug density and bug count of new observations. These models are SVR, MLP, KNR, BRR, RF, and XGB. The study also uses the well-known zero-inflated models (i.e., ZIPR and ZIGPR) to compare results and better understand trends and observations. SVR is a generalized linear regressor that predicts by constructing a hyperplane with the maximum margin from the samples. MLP is a neural network model with input, output, and multiple hidden layers; it is designed to discover complex hidden patterns in the data and can be used for both regression and classification. BRR is based on the Bayes theorem and assumes that the software features are independent, which dramatically simplifies the complexity of Bayesian methods. KNR predicts the number of bugs of a new software module based on the number of bugs of the nearest one or several software modules; the choice of the number of nearest neighbors k and the aggregation method are the main factors of KNR. RF generates an ensemble model of multiple decision trees, randomly sampling instances to train the different trees. XGB is an optimized distributed gradient boosting algorithm robust enough to handle various data types, relationships, and distributions. ZIPR and ZIGPR are regression techniques designed for count data with an excess of zero counts, which is the case for both bug counts and bug density.
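As an illustration of how these candidates could be instantiated (a hedged sketch, not the authors' configuration), the six tunable regressors map naturally onto scikit-learn and xgboost estimators; the zero-inflated models are typically fit separately, for example with statsmodels' count models, and are excluded from the tuning step described next.

```python
# Illustrative mapping of the regressors named above to common library estimators.
# ZIPR and ZIGPR are usually fit with statsmodels (statsmodels.discrete.count_model)
# and, per the text, are not tuned by grid search.
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

regressors = {
    "SVR": SVR(),
    "MLP": MLPRegressor(max_iter=1000, random_state=0),
    "KNR": KNeighborsRegressor(),
    "BRR": BayesianRidge(),
    "RF": RandomForestRegressor(random_state=0),
    "XGB": XGBRegressor(random_state=0),
}
```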
Creating general ML models can produce acceptable results regardless of the problem at hand, even without using the data to tune them; however, it does not achieve the most desirable performance. Model optimization is the process of finding the hyperparameters that minimize or maximize a scoring function for a specific task; each model has its own hyperparameters with a set of possible values 72 . This research employs the Grid Search technique to uncover the optimum values of the hyperparameters. Grid Search accepts the hyperparameter names (e.g., the learning rate in MLP or the kernel in SVR) and a vector of possible values for each. Then, the function goes through all the combinations and returns a fine-tuned model using the best combination of values. Even though Grid Search can require more resources and time than other optimization methods, it works well for the SDP problem since the datasets are not enormous and most of the models’ hyperparameters are non-numeric (i.e., categorical or binary). Table 5 shows the hyperparameter configuration of each algorithm used by Grid Search to find the best set of parameters.
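For illustration only (the grids below are hypothetical and are not the configuration of Table 5), a grid search over such hyperparameters could be set up as follows, reusing the regressors dictionary sketched above.

```python
# Hypothetical hyperparameter grids (NOT the exact values of Table 5), tuned per model
# with GridSearchCV; the grid search minimizes the prediction error via the chosen scorer.
from sklearn.model_selection import GridSearchCV

param_grids = {
    "SVR": {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]},
    "KNR": {"n_neighbors": [3, 5, 7], "weights": ["uniform", "distance"]},
    "MLP": {"hidden_layer_sizes": [(50,), (100,)], "learning_rate_init": [0.001, 0.01]},
}

tuned_models = {}
for name, grid in param_grids.items():
    search = GridSearchCV(regressors[name], grid, cv=3,
                          scoring="neg_mean_absolute_error")
    search.fit(X_norm, y)                 # X_norm, y as prepared earlier
    tuned_models[name] = search.best_estimator_
```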
Model evaluation (fault percentile average)
As discussed previously in the literature, FPA is a state-of-the-art performance measure for ranking SDP models. Consider a dataset that contains \(k\) modules, \({m}_{1}, {m}_{2}, {m}_{3},\dots ,{m}_{k}\) , ordered in increasing order of predicted defects, where \({m}_{k}\) contains the most predicted defects. Let \({n}_{i}\) represent the actual defects in \({m}_{i}\) , and let the total number of actual defects be \(n={n}_{1}+ {n}_{2}+{n}_{3}+ \dots + {n}_{k}\) . The sum of actual defects in the modules with the highest numbers of predicted defects (modules \(r\) through \(k\) ) is \(\sum_{i=r}^{k}{n}_{i}\) . Therefore, the proportion of actual defects in these top predicted defective modules to the total number of defects is
\({P}_{r}=\frac{1}{n}\sum_{i=r}^{k}{n}_{i}.\)
The FPA is the average of \({P}_{r}\) over all \(r\) :
\(FPA=\frac{1}{k}\sum_{r=1}^{k}{P}_{r}.\)
The previous equation shows that FPA is the average, over \(r=1, 2, 3, \dots , k\) , of the proportions of actual defects in the top predicted defective modules to the total defects. FPA is compatible with ranking models because it takes the order of the predicted defects into account; better models have higher FPA values because their ranking is more accurate 26 .
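The definition above translates directly into a few lines of code. The function below is a minimal sketch (assuming NumPy arrays of actual and predicted defect counts) rather than the authors' implementation.

```python
# Hedged sketch of the Fault Percentile Average (FPA): modules are sorted in
# increasing order of predicted defects, P_r is the proportion of actual defects
# in modules r..k, and FPA is the average of P_r over r = 1..k.
import numpy as np

def fpa(y_true, y_pred):
    """Return the FPA of the ranking induced by y_pred; higher is better."""
    order = np.argsort(y_pred)                 # ascending: last module has most predicted defects
    actual = np.asarray(y_true, dtype=float)[order]
    k, n = len(actual), actual.sum()
    if n == 0:
        return 0.0
    tail_sums = np.cumsum(actual[::-1])[::-1]  # tail_sums[r-1] = sum_{i=r}^{k} n_i
    return float(tail_sums.sum() / (k * n))    # (1/k) * sum_r P_r

# Example: score = fpa(y[test_idx], model.predict(X_norm[test_idx]))
```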
Our testing plan uses k-fold cross-validation to evaluate each model’s reliability, avoid biased and misleading results, and obtain an accurate and fair assessment of each model’s performance. This approach tests different portions of the datasets iteratively, allowing all data points to contribute to the testing process instead of relying on a single fixed test split. Since the observations in each dataset are limited, this study uses three-fold cross-validation, computes the quality metric (i.e., FPA) in each of the three iterations, and then averages the FPA over the iterations to achieve stable, unbiased results. After building the models, optimizing them, and computing the FPA for the models with different percentages of attributes, the following section reflects on the results and discusses the main observations and findings.
Compliance with ethical standards
The authors would like to convey their thanks and appreciation to the “University of Sharjah” for supporting the work through the research group – Open UAE Research and Development.
Informed consent
This study does not involve any experiments on animals.
We present the results of our comparison study on SDP in this section. We include a detailed description of how the experiments were designed, how the results were evaluated, and a discussion of the results.
To answer this research question, the eight models were first trained to predict either bug count or bug density. The FPA scores of all results were calculated, and the average score of each model was found. The eight models were compared based on the FPA scores of the two target variables and were visualized using box plots.
Table 6 presents the mean FPA results of our models, applied to the PROMISE and Bug Prediction repository datasets. The FPA values are reported as “mean ± standard deviation.” A higher mean FPA indicates that the model ranks the defective modules more accurately. A higher standard deviation indicates higher dispersion in the FPA scores; such a model is less stable and less reliable because its results vary more. Therefore, maximizing the mean FPA and minimizing the standard deviation is desired.
The table compares the mean of the FPA results of each model for all datasets with different target variables. The first and second rows indicate the FPA scores when the target variables are bug count and bug density, respectively, with the best performance highlighted in bold type.
The best FPA scores when the target variable is bug count are achieved by MLP, SVR, KNR, BRR, XGB, and ZIGPR, ranging between 74.6 and 77.6%. On the other hand, the best scores when the target variable is bug density are produced by MLP, BRR, XGB, and ZIGPR, with scores ranging from 61.3 to 63.0%. In addition, the bug count results are more reliable, as they have a lower standard deviation than the bug density results. It can further be seen that ZIPR shows contrasting behavior compared to the other models, since its bug density score has a higher FPA mean and a lower standard deviation than its bug count score.
Figure 3 visualizes the results of the table using box plots. Each model has a pair of box plots, bug count and bug density, colored in blue and yellow, respectively. The box plot shows that seven out of eight models perform significantly better when the target variable is bug count, as they have higher FPA scores and lower standard deviations (smaller boxes). This can be statistically confirmed using the non-parametric Wilcoxon test, with a 95% confidence interval, applied to the bug count and bug density FPA scores. The null hypothesis states that using the bug count or the bug density as the target variable yields statistically indistinguishable results. Performing the Wilcoxon test produces a p-value of 5.706e−09, which is less than 0.05, rejecting the null hypothesis. In contrast to the rest of the models, ZIPR produces a meager FPA score when the target variable is the bug count.
Figure 3. Box plot for bug count and bug density FPA results for all models.
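A hedged sketch of this significance test is shown below, using SciPy's paired Wilcoxon signed-rank test on the two sets of FPA scores; the array names are illustrative.

```python
# Illustrative significance test: do bug-count and bug-density FPA scores differ?
# fpa_bug_count and fpa_bug_density are paired arrays of FPA scores (e.g., one per dataset).
from scipy.stats import wilcoxon

stat, p_value = wilcoxon(fpa_bug_count, fpa_bug_density)
if p_value < 0.05:
    print("Reject H0: the choice of target variable makes a significant difference")
else:
    print("Fail to reject H0: no significant difference between target variables")
```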
Overall, using bug count as the target variable is more reliable and stable than using bug density, as visualized in Fig. 4 . The box plot summarizes the results for all models and all datasets. The bug count results have more outliers due to the low scores of the ZIPR model.
Figure 4. Summarized box plot for bug count and bug density FPA results.
To answer this research question, the datasets were under-sampled by reducing the number of instances with a zero bug count. The under-sampling was done at different rates: 50%, 75%, 85%, 90%, and 95%, where the rate represents the percentage of non-defective samples that were randomly selected and removed from the training set. The effect of under-sampling was measured by the improvement rate calculated using Eq. (12).
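The under-sampling step can be sketched as follows (an illustrative implementation, not the authors' code): a given percentage of the zero-bug training instances is removed at random. The improvement rate of Eq. (12) is presumably the relative change in FPA with respect to the original dataset.

```python
# Hedged sketch: randomly drop `rate` (e.g. 0.75 = 75%) of the instances whose
# bug count is zero from the training set, leaving defective instances untouched.
import numpy as np

def undersample_zeros(X, y, rate=0.75, seed=0):
    rng = np.random.default_rng(seed)
    zero_idx = np.where(np.asarray(y) == 0)[0]
    n_drop = int(len(zero_idx) * rate)
    drop = rng.choice(zero_idx, size=n_drop, replace=False)
    keep = np.setdiff1d(np.arange(len(y)), drop)
    return X[keep], np.asarray(y)[keep]

# Example: X_train_us, y_train_us = undersample_zeros(X_norm[train_idx], y[train_idx], rate=0.90)
```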
Table 7 shows the improvement rates of the results after performing under-sampling. The improvement rates are calculated relative to the results on the original dataset and are reported as “mean ± standard deviation.” A positive improvement rate represents an increase in FPA, while a negative one represents a decrease. In general, increasing the under-sampling rate slightly decreases the FPA results when the target variable is bug count, as opposed to the bug density results, whose scores improved as the under-sampling rate increased.
Figure 5 illustrates the change in FPA results with the change of the under-sampling rate for bug count and bug density targets. The under-sampling rates are distinguished with different colors, as indicated in the legend of the graph. The box plot visually describes the effect of changing the under-sampling rates, as in Table 7 . While under-sampling improved the results of bug density, bug count results remained higher in all cases.
Figure 5. Box plot of FPA results after under-sampling.
To answer this research question, InfoGain feature selection was first applied to the features, ranking them based on their significance to the prediction. The models were trained using different subsets of the features, where each subset is a percentage of the top-ranked features. The FPA scores of all results were calculated, and the average score for each percentage was compared for both bug count and bug density.
Table 8 shows the results for bug count and bug density with different feature selection percentages from 10 to 100% in increments of 10%, where 100% means all features are selected. The results are reported as “mean ± standard deviation,” with the highest values highlighted in bold type. For bug count, the maximum score was achieved using 10% of the features. For bug density, the maximum score was achieved using 90% or 100% of the features.
Figures 6 and 7 visualize the impact of feature selection on the different models for bug count and bug density, respectively. Figure 6 shows that most models maintained similar scores and were not significantly affected by feature selection. However, ZIPR showed unusual behavior, with very low scores from 30 to 100% of the features but a sharp increase at 20% and 10% of the features. This shows that ZIPR is highly sensitive to the features used in the training set. In Fig. 7 , most models show a decreasing FPA score as the feature selection rate decreases. Some models, such as BRR, ZIPR, and RF, are less sensitive to the features than others, such as SVR, XGB, and KNR.
Figure 6. Box plot of bug count results with feature selection.
Figure 7. Box plot of bug density results with feature selection.
Figure 8 shows the average performance of the eight models and compares the effect of feature selection on both bug count and bug density. Overall, the bug count results maintain roughly the same score at all feature selection rates. This means that using the minimum number of features (10%) yields the same performance as using 100% of the features, while significantly reducing computational power and time requirements. In contrast, the bug density results show that even the less significant features contribute positively to the model results. This was confirmed using the Wilcoxon test with a 95% confidence interval. The null hypothesis states that using 10% and 100% of the features yields statistically indistinguishable results. Applying the test to the bug count and bug density results gives p-values of 0.8986 and 1.314e−10, respectively. The bug count results are statistically indistinguishable since the p-value is greater than 0.05; however, the p-value for the bug density results is less than 0.05, so they are statistically different.
Figure 8. Overall box plot of results after feature selection.
Threats to validity
This section presents the main threats to the validity of our comparison study. We begin with internal validity, which is associated with the trustworthiness of the results of our study. First, the data sampling method may have affected the results, as three-fold cross-validation was used. Although other sampling methods, such as ten-fold and leave-one-out cross-validation, are less biased, they are computationally expensive for large datasets; since we tested our study on 24 datasets with large numbers of attributes and instances, three-fold cross-validation was a compromise solution. Second, machine learning models are primarily affected by the data, which is why the models used in our study were chosen based on the characteristics of our datasets, such as the distribution of the data and the nature of the dependent variable. While many popular performance metrics are commonly used for regression problems, such as the mean absolute error, mean squared error, and R-squared, the most appropriate metric for ranking problems is FPA. Lastly, external validity concerns the ability to generalize the results of the study to all datasets. This paper used 24 datasets from the PROMISE and Bug Prediction repositories to help generalize our results. We followed the within-project prediction approach and did not validate cross-project or cross-company approaches.
Software Defect Prediction (SDP) is essential to software testing and quality assurance. It has become even more fundamental in recent years, as software products have grown in number, size, and complexity. In practice, project managers are not only interested in identifying defective modules but also want to rank the potentially defective modules to optimize resource allocation and minimize testing costs, especially for projects with limited budgets. Thus, this paper compared multiple LTR models using two standard output metrics, bug count and bug density, as target variables. It also studied the effect of imbalance learning and feature selection on eight models with Grid Search optimization. The FPA results showed that using bug count as the target variable produced higher scores and more stable results. Imbalance learning yielded a significant improvement in the FPA scores of the bug density results but a less significant one on the bug count results. Finally, using feature selection with LTR reduced the FPA score for the bug density target while having no impact on the bug count results. Thus, we conclude that using feature selection and imbalance learning with LTR does not yield superior or significant results. Our study has several implications for the software industry: LTR helps by ranking modules based on their predicted defect-proneness, which directs focus and resources to the modules that need more testing.
Data availability
All datasets used in this research are publicly available through PROMISE 44 , and Bug Prediction datasets 44 , 64 . Please check http://promise.site.uottawa.ca/SERepository/datasets-page.html and https://bug.inf.usi.ch/index.php .
Abbreviations
ML: Machine learning
LTR: Learning to rank
FPA: Fault percentile average
MLP: Multilayer perceptron
SVR: Support vector regression
KNR: K-neighbors regression
BRR: Bayesian ridge regression
RF: Random forest
XGB: XGBoost (extreme gradient boosting)
ZIPR: Zero-inflated Poisson regression
ZIGPR: Zero-inflated generalized Poisson regression
Bertolino, A. Software testing research: achievements, challenges, dreams. In Future of Software Engineering (FOSE ’07) , pp. 85–103. https://doi.org/10.1109/FOSE.2007.25 (2007).
Catal, C. & Diri, B. A systematic review of software fault prediction studies. Expert Syst. Appl. 36 (4), 7346–7354. https://doi.org/10.1016/j.eswa.2008.10.027 (2009).
Alsghaier, H. & Akour, M. Software fault prediction using particle swarm algorithm with genetic algorithm and support vector machine classifier. Softw. Pract. Exp. 50 (4), 407–427. https://doi.org/10.1002/SPE.2784 (2020).
Guo, L., Ma, Y., Cukic, B., & Singh, H. Robust prediction of fault-proneness by random forests. In Proceedings—International Symposium on Software Reliability Engineering, ISSRE , pp. 417–428. https://doi.org/10.1109/ISSRE.2004.35 (2004).
Magal, K. & Gracia Jacob, S. Improved random forest algorithm for software defect prediction through data mining techniques. Int. J. Comput. Appl. 117 (23), 18–22. https://doi.org/10.5120/20693-3582 (2015).
Goyal, R., Chandra, P. & Singh, Y. Suitability of KNN regression in the development of interaction based software fault prediction models. IERI Proc. 6 , 15–21. https://doi.org/10.1016/J.IERI.2014.03.004 (2014).
Wang, T., & Li, W. H. Naïve Bayes Software Defect Prediction Model. In 2010 International Conference on Computational Intelligence and Software Engineering, CiSE 2010 . https://doi.org/10.1109/CISE.2010.5677057 (2010).
Asmono, R., Wahono, R., & Syukur, A. Absolute correlation weighted Naïve Bayes for software defect prediction. J. Softw. Eng. 1 (1), 38–45 (2015).
Yu, X., Bennin, K. E., Liu, J., Keung, J. W., Yin, X., & Xu, Z. An empirical study of learning to rank techniques for effort-aware defect prediction. In SANER 2019 - Proceedings of the 2019 IEEE 26th International Conference on Software Analysis, Evolution, and Reengineering , pp. 298–309. https://doi.org/10.1109/SANER.2019.8668033 (2019).
Yang, X., Tang, K., & Yao, X. A learning-to-rank algorithm for constructing defect prediction models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) , vol. 7435 LNCS, pp. 167–175. https://doi.org/10.1007/978-3-642-32639-4_21 (2012).
Joachims, T., Li, H., Liu, T. Y. & Zhai, C. X. Learning to rank for information retrieval (LR4IR 2007). SIGIR Forum. 41 (2), 58–62. https://doi.org/10.1145/1328964.1328974 (2007).
Cao, Z., Qin, T., Liu, T. Y., Tsai, M. F., & Li, H. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning , in ICML ’07 pp. 129–136 (Association for Computing Machinery, New York, NY, USA, 2007). https://doi.org/10.1145/1273496.1273513 .
Ibrahim, M., & Carman, M. Comparing pointwise and listwise objective functions for random-forest-based learning-to-rank. ACM Trans. Inf. Syst. 34 (4). https://doi.org/10.1145/2866571 (2016).
Li, H. A short introduction to learning to rank. IEICE Tran. 94 , 1854–1862. https://doi.org/10.1587/transinf.E94.D.1854 (2011).
Yang, X., Tang, K. & Yao, X. A learning-to-rank approach to software defect prediction. IEEE Trans. Reliab. 64 (1), 234–246. https://doi.org/10.1109/TR.2014.2370891 (2015).
Raschka, S. Model evaluation, model selection, and algorithm selection in machine learning (2018).
Bach, T., Andrzejak, A., Pannemans, R. & Lo, D. The impact of coverage on bug density in a large industrial software project. ACM/IEEE Int. Symp. Empirical Softw. Eng. Meas. (ESEM) 2017 , 307–313. https://doi.org/10.1109/ESEM.2017.44 (2017).
Krawczyk, B. Learning from imbalanced data: Open challenges and future directions. Prog. Artif. Intell. 5 (4), 221–232. https://doi.org/10.1007/S13748-016-0094-0 (2016).
Ganganwar, V. An overview of classification algorithms for imbalanced datasets. Int. J. Emerg. Technol. Adv. Eng. 2 , 42–47 (2012).
Mohammed, R., Rawashdeh, J., & Abdullah, M. Machine Learning with Oversampling and Undersampling Techniques: Overview Study and Experimental Results. In 2020 11th International Conference on Information and Communication Systems (ICICS) , pp. 243–248. https://doi.org/10.1109/ICICS49469.2020.239556 (2020).
Perera, A., Aleti, A., Turhan, B. & Boehme, M. An experimental assessment of using theoretical defect predictors to guide search-based software testing. IEEE Trans. Softw. Eng. 1 , 1. https://doi.org/10.1109/TSE.2022.3147008 (2022).
Kabir, M. A., Keung, J., Turhan, B. & Bennin, K. E. Inter-release defect prediction with feature selection using temporal chunk-based learning: An empirical study. Appl. Soft Comput. 113 , 107870. https://doi.org/10.1016/j.asoc.2021.107870 (2021).
Li, J. et al. Feature selection: A data perspective. ACM Comput. Surv. 50 , 1. https://doi.org/10.1145/3136625 (2016).
Bansal, A. Comparative analysis of classification methods for prediction software fault proneness using process metrics. TechRxiv (2021).
Li, L., Lessmann, S. & Baesens, B. Evaluating Software Defect Prediction Performance: An Updated Benchmarking Study. SSRN Electronic Journal 1 , 1 (2019).
Weyuker, E., Ostrand, T. & Bell, R. Comparing the effectiveness of several modeling methods for fault prediction. Springer 15 (3), 277–295. https://doi.org/10.1007/s10664-009-9111-2 (2010).
Wang, H., Khoshgoftaar, T., & Napolitano, A. A comparative study of ensemble feature selection techniques for software defect prediction. In Ninth International Conference on Machine Learning and Applications (2010).
Wang, H., Khoshgoftaar, T., & Seliya, N. How many software metrics should be selected for defect prediction? Twenty-Fourth International (2011).
Gao, K., Khoshgoftaar, T. & Wang, H. Choosing software metrics for defect prediction: An investigation on feature selection techniques. Wiley Online Library 41 (5), 579–606. https://doi.org/10.1002/spe.1043 (2011).
Chen, X., Shen, Y., Cui, Z., & Ju, X. Applying feature selection to software defect prediction using multi-objective optimization. In 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC) , pp. 54–59. https://doi.org/10.1109/COMPSAC.2017.65 (2017).
Yang, X., Tang, K., & Yao, X. A learning-to-rank approach to software defect prediction. ieeexplore.ieee.org (2014).
Balogun, A. O. et al. Empirical analysis of rank aggregation-based multi-filter feature selection methods in software defect prediction. Electronics (Basel) 10 (2), 179. https://doi.org/10.3390/electronics10020179 (2021).
Shin, J., Aleithan, R., Nam, J., Wang, J., & Wang, S. Explainable software defect prediction: Are we there yet? https://doi.org/10.5281/zenodo.5425868 .
López-Martín, C., Villuendas-Rey, Y., Azzeh, M., Bou Nassif, A. & Banitaan, S. Transformed K-Nearest neighborhood output distance minimization for predicting the defect density of software projects. J. Syst. Softw. 167 , 110592. https://doi.org/10.1016/j.jss.2020.110592 (2020).
ISBSG. Guidelines for use of the ISBSG data. In International Software Benchmarking Standards Group, Release 2018 (2018).
Xu, Z. et al. Effort-aware just-in-time bug prediction for mobile apps via cross-triplet deep feature embedding. IEEE Trans Reliab 71 (1), 204–220. https://doi.org/10.1109/TR.2021.3066170 (2022).
Cheng, T., Zhao, K., Sun, S., Mateen, M. & Wen, J. Effort-aware cross-project just-in-time defect prediction framework for mobile apps. Front. Comput. Sci. 16 (6), 1–15. https://doi.org/10.1007/S11704-021-1013-5/METRICS (2022).
Ni, C., Xia, X., Lo, D., Chen, X. & Gu, Q. Revisiting supervised and unsupervised methods for effort-aware cross-project defect prediction. IEEE Trans. Softw. Eng. 48 (3), 786–802. https://doi.org/10.1109/TSE.2020.3001739 (2022).
Bal, P. R. & Kumar, S. WR-ELM: Weighted regularization extreme learning machine for imbalance learning in software fault prediction. IEEE Trans. Reliab. 69 (4), 1355–1375. https://doi.org/10.1109/TR.2020.2996261 (2020).
Tong, H., Lu, W., Xing, W., Liu, B. & Wang, S. SHSE: A subspace hybrid sampling ensemble method for software defect number prediction. Inf. Softw. Technol. 142 , 950–5849. https://doi.org/10.1016/j.infsof.2021.106747 (2022).
Yu, X. et al. Predicting the precise number of software defects: Are we there yet?. Inf. Softw. Technol. https://doi.org/10.1016/j.infsof.2022.106847 (2022).
Macdonell, S. G. Establishing relationships between specification size and software process effort in CASE environments. Inf. Softw. Technol. 39 , 35–45 (1997).
Ostrand, T. J., Weyuker, E. J. & Bell, R. M. Predicting the location and number of faults in large software systems. IEEE Trans. Softw. Eng. 31 (4), 340–355. https://doi.org/10.1109/TSE.2005.49 (2005).
Boetticher, G., Menzies, T. & Ostrand, T. Promise repository of empirical software engineering data (West Virginia University, 2007).
Yang, L. On hyperparameter optimization of machine learning algorithms: Theory and practice (Elsevier, 2014).
Tantithamthavorn, C., McIntosh, S., & Hassan, A. E. Automated parameter optimization of classification techniques for defect prediction models. In IEEE/ACM 38th International Conference on Software Engineering (ICSE) , vol. 14–22, pp. 321–332. https://doi.org/10.1145/2884781.2884857 (2016).
Buchari, M. & Mardiyanto, S. Implementation of chaotic Gaussian particle swarm optimization for optimize learning-to-rank software defect prediction model construction. J. Phys. 978 , 12079. https://doi.org/10.1088/1742-6596/978/1/012079 (2017).
Eberhart, R., & Kennedy, J. Particle swarm optimization. In Proceedings of the IEEE international conference on neural networks , pp. 1942–1948 (1995).
Aljamaan, H., & Alazba, A. Software defect prediction using tree-based ensembles. In Proceedings of the 16th ACM International Conference on Predictive Models and Data Analytics in Software Engineering , pp. 1–10. https://doi.org/10.1145/3416508.3417114 (2020).
Mende, T., & Koschke, R. Effort-aware defect prediction models. In Proceedings of the European Conference on Software Maintenance and Reengineering, CSMR , pp. 107–116. https://doi.org/10.1109/CSMR.2010.18 (2010).
Kamei, Y., Matsumoto, S., Monden, A., Matsumoto, K. I., Adams, B., & Hassan, A. E. Revisiting common bug prediction findings using effort-aware models. In IEEE International Conference on Software Maintenance, ICSM . https://doi.org/10.1109/ICSM.2010.5609530 (2010).
Kamei, Y. et al. A large-scale empirical study of just-in-time quality assurance. IEEE Trans. Softw. Eng. 39 (6), 757–773. https://doi.org/10.1109/TSE.2012.70 (2013).
Yang, Y. et al. Are slice-based cohesion metrics actually useful in effort-aware post-release fault-proneness prediction? An empirical study. IEEE Trans. Softw. Eng. 41 (4), 331–357. https://doi.org/10.1109/TSE.2014.2370048 (2015).
Bennin, K. E., Keung, J. W. & Monden, A. On the relative value of data resampling approaches for software defect prediction. Empir. Softw. Eng. 24 (2), 602–636. https://doi.org/10.1007/s10664-018-9633-6 (2019).
Fu, W., & Menzies, T. Revisiting unsupervised learning for defect prediction, vol. 17, pp. 72–83. https://doi.org/10.1145/3106237.3106257 (2017).
Yu, X. et al. Finding the best learning to rank algorithms for effort-aware defect prediction. Inf. Softw. Technol. 157 , 107165. https://doi.org/10.1016/J.INFSOF.2023.107165 (2023).
Du, X. et al. CoreBug: Improving Effort-Aware Bug Prediction in Software Systems Using Generalized k-Core Decomposition in Class Dependency Networks. Axioms 11 , 205. https://doi.org/10.3390/AXIOMS11050205 (2022).
Yu, X. et al. Improving effort-aware defect prediction by directly learning to rank software modules. Inf. Softw. Technol. 10 , 7250. https://doi.org/10.1016/J.INFSOF.2023.107250 (2023).
Qu, Y., Chi, J. & Yin, H. Leveraging developer information for efficient effort-aware bug prediction. Inf. Softw. Technol. 137 , 106605. https://doi.org/10.1016/J.INFSOF.2021.106605 (2021).
Çarka, J., Esposito, M. & Falessi, D. On effort-aware metrics for defect prediction. Empir. Softw. Eng. 27 (6), 1–38. https://doi.org/10.1007/S10664-022-10186-7 (2022).
Jiarpakdee, J., Tantithamthavorn, C. & Treude, C. The impact of automated feature selection techniques on the interpretation of defect models. Empir. Softw. Eng. 25 (5), 3590–3638. https://doi.org/10.1007/S10664-020-09848-1/METRICS (2020).
Li, F. et al. The impact of feature selection techniques on effort-aware defect prediction: An empirical study. IET Softw. 17 (2), 168–193. https://doi.org/10.1049/SFW2.12099 (2023).
Li, W., Zhang, W., Jia, X. & Huang, Z. Effort-aware semi-supervised just-in-time defect prediction. Inf. Softw. Technol. 126 , 106364. https://doi.org/10.1016/J.INFSOF.2020.106364 (2020).
D’Ambros, M., Lanza, M., & Robbes, R. An extensive comparison of bug prediction approaches. In 7th IEEE Working Conference on Mining Software Repositories (MSR 2010) , pp. 31–41 (2010).
Moser, R., Pedrycz, W., & Succi, G. Analysis of the reliability of a subset of change metrics for defect prediction. In ACM-IEEE international symposium on Empirical software engineering and measurement , pp. 309–311, https://doi.org/10.1145/1414004.1414063 (2004).
Sanders, H., Garbage in, garbage out: How purportedly great ml models can be screwed up by bad data. In Proceedings of Blackhat 2017 (2017).
Ahmed, T., Md Siraj, M., Zainal, A., Elshoush, H. & Elhaj, F. Feature selection using information gain for improved structural-based alert correlation. PLoS One 11 , e0166017. https://doi.org/10.1371/journal.pone.0166017 (2016).
Bach, M., Werner, A. & Palt, M. The proposal of undersampling method for learning from imbalanced datasets. Proc. Comput. Sci. 159 , 125–134. https://doi.org/10.1016/j.procs.2019.09.167 (2019).
Borkin, D., Nemethova, A., Michalconok, G. & Maiorov, K. Impact of data normalization on classification model accuracy. Res. Papers Faculty Mater. Sci. Technol. Slovak Univ. Technol. 27 , 79–84. https://doi.org/10.2478/rput-2019-0029 (2019).
Singh, D. & Singh, B. Investigating the impact of data normalization on classification performance. Appl. Soft. Comput. 10 , 5524. https://doi.org/10.1016/j.asoc.2019.105524 (2019).
Azhagusundari, B., & Thanamani, A. S. Feature selection based on information gain. In International Journal of Innovative Technology and Exploring Engineering (IJITEE) (2013).
Sun, S., Cao, Z., Zhu, H. & Zhao, J. A survey of optimization methods from a machine learning perspective. IEEE Trans. Cybern. 50 (8), 3668–3681. https://doi.org/10.1109/TCYB.2019.2950779 (2020).
Author information
Authors and affiliations.
Department of Computer Engineering, University of Sharjah, Sharjah, United Arab Emirates
Ali Bou Nassif, Shaikha Alzaabi & Rawan Khanfar
Department of Computer Science, University of Sharjah, Sharjah, United Arab Emirates
Manar Abu Talib & Ruba Kharsa
Department of Data Science, Princess Sumaya University for Technology, Amman, Jordan
Mohammad Azzeh
Department of Statistics and Information Systems, Aristotle University of Thessaloniki, Thessaloniki, Greece
Lefteris Angelis
Contributions
A.B.N., M.A., S.A., Ra.K. and Ru.K. wrote the main manuscript. A.B.N., M.A., M.A.T. and L.A. wrote the methodology used in this research. All authors reviewed the manuscript.
Corresponding author
Correspondence to Ali Bou Nassif .
Ethics declarations
Competing interests.
The authors declare no competing interests.
Additional information
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
About this article
Cite this article.
Nassif, A.B., Talib, M.A., Azzeh, M. et al. Software defect prediction using learning to rank approach. Sci Rep 13 , 18885 (2023). https://doi.org/10.1038/s41598-023-45915-5
Received : 27 June 2023
Accepted : 25 October 2023
Published : 02 November 2023
DOI : https://doi.org/10.1038/s41598-023-45915-5
IMAGES
VIDEO
COMMENTS
MAXQDA is the best choice for a comprehensive literature review. It works with a wide range of data types and offers powerful tools for literature review, such as reference management, qualitative, vocabulary, text analysis tools, and more. All-in-one Literature Review Software
#1. Semantic Scholar - A free, AI-powered research tool for scientific literature Credits: Semantic Scholar Semantic Scholar is a cutting-edge literature review tool that researchers rely on for its comprehensive access to academic publications.
Rayyan is used to screen and code literature through a systematic review process. Unlike Covidence, Rayyan does not follow a standard SR workflow and simply helps with citation screening. It is accessible through a mobile application with compatibility for offline screening.
Rayyan Teams+ makes your job easier. It includes VIP Support, AI-powered in-app help, and powerful tools to create, share and organize systematic reviews, review teams, searches, and full-texts. GET STARTED RESEARCHERS Rayyan makes collaborative systematic reviews faster, easier, and more convenient.
ATLAS.ti empowers researchers to perform powerful and collaborative analysis using the leading software for literature review. Buy now Get started for free Trusted by the world's leading universities and companies Finalize your literature review faster with comfort
Linux distributions generally come with a free web browser, and the most popular is Firefox. Two Firefox extensions that are particularly useful for literature reviews are Unpaywall, which surfaces legal open-access copies of paywalled papers, and Zotero, which captures and organizes citations as you browse.
Other Review Software Systems. There are a number of tools available to help a team manage the systematic review process. Notable examples include Eppi-Reviewer, DistillerSR, and PICO Portal. These are subscription-based services but in some cases offer a trial project. Use the Systematic Review Toolbox to explore more options.
Tools & Tutorials Develop a Focused Question Scope the Literature (Initial Search) Refine & Expand the Search Limit the Results Download Citations Abstract & Analyze Create Flow Diagram Synthesize & Report Results 1. Develop a Focused Question Consider the PICO Format: Population/Problem, Intervention, Comparison, Outcome
Systematic Review Costs and Gaps. According to the Centre for Evidence-Based Medicine, systematic reviews (SRs) of high-quality primary studies represent the highest level of evidence for evaluating therapeutic performance. However, although vital to evidence-based medical practice, SRs are time-intensive, taking an average of 67.3 weeks to complete, and they carry substantial costs for the research teams that produce them.
Literature Review Tips & Tools On this guide, we share tips for doing any type of comprehensive literature review and tools that can help in the process. Home Tips & Examples Tools Organizational Tools Tools for Systematic Reviews Writing & Citing Help Zotero Mendeley Organizational Tools Bubbl.us
SciSpace is an AI tool for academic research that helps you find research papers and answer questions about them. You can discover, read, and understand research papers with SciSpace, making it an excellent platform for literature review.
Systematic Review Tools. A number of subscription-based evidence-synthesis software tools can be used to manage steps in the systematic review process. Tip: for subscription-based tools, check for trial versions so you can test the software before purchasing. One example comes from JBI, an international evidence-based healthcare research organisation.
Enago Read (previously Raxter) is an AI-powered reading assistant for researchers: a reading space to ideate, create knowledge, and collaborate on your research. It helps you organize your research, surfaces recommendations worth acting on, and lets your team read, discuss, and share knowledge together.
Literature review generators automate the information-gathering process, retrieving relevant articles, journals, and related publications in a matter of seconds. This saves a significant amount of time and relieves the user of the tedious job of slogging through numerous resources.
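As a rough sketch of what that automated retrieval can look like under the hood, the snippet below queries the public Semantic Scholar Graph API and prints the top matches. The endpoint and field names are assumptions about the current public API (they may be rate-limited or change), and this is an illustration rather than the method any particular generator actually uses.

```python
# Sketch: fetching candidate papers for a review topic from the public
# Semantic Scholar Graph API. The /graph/v1/paper/search endpoint and its
# field names are assumptions about the current public API and may change.

import requests

API_URL = "https://api.semanticscholar.org/graph/v1/paper/search"


def search_papers(query: str, limit: int = 10) -> list:
    """Return a list of paper records (title, year, url) matching the query."""
    response = requests.get(
        API_URL,
        params={"query": query, "limit": limit, "fields": "title,year,url"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])


for paper in search_papers("software tools for systematic literature reviews"):
    print(f"{paper.get('year')}  {paper.get('title')}")
```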
Although literature review software can help with many tasks throughout the review lifecycle, your process likely includes other tools for searching and storing references and data, and you also likely need to use the information from your completed review in reports and submissions. Your literature review software should therefore allow you to import and export references and data in formats those other tools understand.
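To make the import/export point concrete, here is a minimal sketch that writes a couple of references to an RIS file, a plain-text format most reference managers (Zotero, Mendeley, EndNote) can ingest. The records, filename, and helper function are made-up placeholders.

```python
# Minimal sketch: exporting references as RIS, a plain-text format that most
# reference managers (Zotero, Mendeley, EndNote) can import. The two records
# below are made-up placeholders, not real citations.

references = [
    {"type": "JOUR", "authors": ["Doe, Jane"], "title": "An example article",
     "journal": "Journal of Examples", "year": "2021"},
    {"type": "JOUR", "authors": ["Roe, Richard", "Doe, Jane"], "title": "A second example",
     "journal": "Review of Placeholders", "year": "2023"},
]


def to_ris(ref: dict) -> str:
    """Serialize one reference dict into an RIS record."""
    lines = [f"TY  - {ref['type']}"]
    lines += [f"AU  - {author}" for author in ref["authors"]]
    lines += [f"TI  - {ref['title']}", f"JO  - {ref['journal']}", f"PY  - {ref['year']}", "ER  - "]
    return "\n".join(lines)


with open("review_references.ris", "w", encoding="utf-8") as handle:
    handle.write("\n".join(to_ris(ref) for ref in references) + "\n")
```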
Literature reviews are an important step in the data-analysis journey of many research projects, but they are often time-consuming and arduous. Whether you are reviewing literature for a meta-analysis or for the background section of your thesis, MAXQDA offers many features intended to make that work easier.
Among the common types of literature review: a scoping review studies a field, highlights the gaps in it, and underlines the need for further research, while a critical review assesses and critiques the strengths and weaknesses of existing literature, challenging established ideas and theories.
CReMs is available for Windows and Mac; the software is developed by the Joanna Briggs Institute in Australia and is available through Lippincott for an annual subscription of $30.00. Rayyan QCRI, by contrast, is a free web application that helps systematic review authors do their job quickly, easily, and even enjoyably.
The literature review process itself breaks down into five steps:
Step 1 - Search for relevant literature
Step 2 - Evaluate and select sources
Step 3 - Identify themes, debates, and gaps
Step 4 - Outline your literature review's structure
Step 5 - Write your literature review
Literature review software also provides built-in automation and validation tools that dramatically reduce the potential for errors in your review process, and, as noted above, compatibility with the other tools in your search, storage, and reporting workflow matters just as much.
Proper grammar is key to a great literature review, so it is good to have some tools at hand that can help you check or double-check meanings and definitions, as well as spelling and correct word use. You may find the following resources useful: Dictionary.com, Grammarly, and Thesaurus.com. You will also want a way to store and organize the information you find.
Here are some things to look for when choosing software to manage systematic reviews: first, confirm that it works smoothly with your existing systems, such as reference management software or document repositories. A seamless fit eliminates data-transfer headaches and keeps your work from suffering.
One aspect I would do much better is the literature review, or, to be more specific, the organization of the lit review. Instead of half a dozen Excel files, haphazard folder names, and the same journal paper saved in three different locations, I would use (and am now using) a literature review tool called Litmaps.