Bibliometrics
Would you like to use bibliometrics to crystallize, compare, and communicate the significance of your scientific publications? Would you like to use bibliometrics as an analytical tool or research method (e.g. to complement your next systematic review)? Below you will find all the important information you need on bibliometrics.
Bibliometrics is the field of research and application that deals with the quantitative analysis of scientific output and its influence. Bibliometric analyses apply statistical methods to the results of research. They enable analyses of publication and research performance (e.g., benchmarking and rankings) and provide researchers and science organizers with insights into research trends, networks, and the influence of research output.
Consultation and training
We advise, support, and answer all your questions about bibliometrics, citation-based research evaluation, and bibliometric analyses, as well as the databases and tools used for them. If you have specific requirements or questions, you can contact us at any time for individual advice.
Webinar “Bibliometrics and bibliometric analysis”
This webinar offers a practice-oriented introduction to the basics, areas of application, and limitations of bibliometrics; to bibliometric data sources (Scopus, Dimensions, OpenAlex, etc.); and to the use of bibliometric indicators and of bibliometric analyses as a research method to support one's own research and the development of scientific publications.
Individual training/customized courses
We are happy to put together a training program for groups, tailored to the specific needs of your department. If you are interested, please use our enquiry form for customized courses.
- Bibliometrics can help illustrate the impact of a scholarly publication or group of publications in the greater research community and can support applications for grants and research funding.
- Given the increasing volume and specialization of scholarly publications, bibliometrics complements qualitative assessment (peer review) and is a useful tool for evaluating the research output of programs and researchers.
- It is worthwhile for researchers to look at publication and citation data:
- Good publication and citation figures are an indicator of an author's contribution to scientific discourse
- Bibliometric figures are becoming increasingly important in research evaluation and funding decisions
- Useful for finding cooperation partners and gaining new perspectives
- As an independent research method, bibliometric analysis can be used to identify research strengths and gaps and to inform decisions about future research
A variety of bibliometric databases are available for the identification of a range of bibliometric measures. Four databases are commonly used for this purpose: Web of Science, Scopus, Dimensions, and Google Scholar. Recently, OpenAlex has also become a well-known bibliometrics database due to its rapid development and extensive data coverage.
The TIB has licensed Web of Science, Scopus, and Dimensions with Dimensions Analytics, all of which are available to TIB users. Google Scholar and OpenAlex are freely accessible.
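OpenAlex exposes its data through a free REST API, which makes it convenient for scripted bibliometric analyses. As a minimal sketch (the filter names shown follow the OpenAlex API documentation; treat the specific values as illustrative assumptions), a query URL for its works endpoint can be assembled like this:

```python
from urllib.parse import urlencode

OPENALEX_WORKS = "https://api.openalex.org/works"

def build_query(filters: dict, per_page: int = 25) -> str:
    """Combine filter key/value pairs into a single OpenAlex request URL.

    OpenAlex expects multiple filters joined by commas inside one
    'filter' query parameter, e.g. filter=publication_year:2022,is_oa:true
    """
    filter_str = ",".join(f"{key}:{value}" for key, value in filters.items())
    return f"{OPENALEX_WORKS}?{urlencode({'filter': filter_str, 'per-page': per_page})}"

# Example: works published in 2022 with more than 100 citations
url = build_query({"publication_year": 2022, "cited_by_count": ">100"})
print(url)
```

Fetching this URL with any HTTP client returns a JSON page of matching works, including their citation counts, which can then feed the indicator calculations described below.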
Specific bibliometric tools
Dimensions is an abstract and citation database available in both a free version and an institutional subscription. With the add-on module "Dimensions Analytics" you can, for example, analyze research trends in your subject area or identify its most influential journals. It also allows you to visualize and analyze results, such as the publication output of individuals or institutions over the years, using various visualization options (line charts, heat maps, etc.) and much more (see the Dimensions library toolkit).
Furthermore, the LUH has licensed SciVal, which enables further analyses based on the Scopus literature database.
- Publication metrics: The number of times a particular article, book, or other research output has been cited can be an indication of the impact of that output on others’ work.
- Author metrics: Author-level metrics quantify a researcher's impact by analyzing citations to their publications. They are used in grant applications as well as in tenure, promotion, and performance reviews. They include a researcher's total number of publications, the number of times those publications have been cited, and the researcher's h-index.
- Journal metrics: The Journal Impact Factor (JIF) is a metric commonly used to demonstrate the importance and prestige of a journal. It is published in the Journal Citation Reports (JCR), available in Web of Science. The JIF for a given year is calculated by dividing the number of citations received in that year by articles the journal published in the two preceding years by the number of citable items published in those two years.
- Altmetrics: Altmetrics (alternative metrics) are measures of the impact of published research beyond traditional citations, which can be used to supplement the information gained from traditional bibliometrics. Depending on what indicators are used, they can show scholarly interest (e.g. Mendeley bookmarking), media interest (e.g. news stories), or public engagement (e.g. social media activity). They can also be used to identify the use of research in policy documents or other official publications that may not appear in conventional citation databases. Altmetrics can also be used to measure the impact of research outputs that would not be included in traditional bibliometrics, such as data sets, software, or presentations. In some cases, they can be used to provide an early indication of the level of citations a paper is likely to receive.
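The h-index mentioned under author metrics has a simple operational definition: the largest number h such that the researcher has h publications each cited at least h times. A minimal sketch of the calculation (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations.

    Sort citation counts in descending order; walking down the list,
    the h-index is the last rank i at which the i-th paper still has
    at least i citations.
    """
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers have >= 4 citations each)
```

Note that the h-index grows with career length and disciplinary citation habits, which is exactly why the limitations discussed below caution against comparing it across fields or career stages.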
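The Journal Impact Factor calculation described under journal metrics can be made concrete with a small worked example (the figures are invented for illustration, not taken from any real journal):

```python
def journal_impact_factor(citations_in_year, citable_items_prev_two_years):
    """JIF for year Y = (citations received in Y to items the journal
    published in Y-1 and Y-2) / (citable items published in Y-1 and Y-2)."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: articles it published in 2022-2023 were cited
# 600 times during 2024, and it published 200 citable items in 2022-2023.
print(journal_impact_factor(600, 200))  # 3.0
```

The two-year window is what makes the JIF sensitive to disciplinary citation speed: fields that cite recent work heavily (e.g., molecular biology) yield systematically higher values than slower-citing fields.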
Some of the well-known shortcomings of bibliometrics concern the scope of the databases, disciplinary variation, publication languages, unpublished works, and the lack of distinction between positive and negative impact.
Scope of the databases
Common tools such as Web of Science and Scopus provide a particular view of the bibliographic universe. This view may be limited by:
- formats of materials included (e.g., journals, conference papers, books, book chapters);
- subject breadth (what falls within the subject scope of the tool?);
- subject depth (how much of a particular discipline's scholarly output is included?);
- geographic coverage (do publications emanating from particular regions get included?);
- language coverage and
- depth of backfiles (e.g., how far back are citations tracked?).
In all cases, it is important to ask what a particular tool includes and what it excludes. No one article database includes all articles ever published and can analyze all the relations between them.
Disciplinary differences
Comparisons between subject areas must be avoided. Some subject areas have a higher rate of publication and citation. For example, molecular biology articles are produced rapidly and cited frequently compared to computer science or mathematics articles. This means that an average molecular biologist would probably have a larger h-index than a leading computer scientist.
Some disciplines, particularly in the humanities, rely more heavily on particular formats of scholarly output, especially books and book chapters. These are not tracked well in tools such as Web of Science and Scopus, and the breadth and depth of coverage for humanities and social science journals is also more limited in these tools. Therefore, scholars who write books may have their impact, as measured by these methods, misrepresented.
Publications in some disciplines do not rely as heavily on citations to other work. Thus, the reduced number of citations interconnecting publications may not properly capture the impact of a publication. Moreover, these metrics do not necessarily work well for creative works and may not reflect local cultural practices.
Researchers who publish in journals that serve subdisciplinary specialties not well-covered by the common tools will not be accurately tracked by the bibliometrics that common tools produce.
Inherent limitations of the metrics
- It is important not to make comparisons between authors of different ages or lengths of professional activity. Authors who have published for many years have had more time to accumulate citations and reputation. To some extent, bibliometric results can be limited to a specific period for a fairer comparison.
- Papers often have multiple authors - but what proportion of the work can be attributed to each author? Citation metrics assume that each named author is equally responsible, which may not always be the case.
- Citation counts could be misleading, for example, if an author includes a large number of self-citations, or if a peer group agrees to cite each other to boost their citation rates. The peer review process for journals should spot and prevent this.
- Negative citations (citations that criticize or refute a work) are counted just like positive ones.
- The same author may appear under various forms of their name, and affiliation details are not standardized. Databases have ways to address this, including grouping name variants and assigning each researcher an individual identifier.
- Some publication types tend to receive more citations than others. For example, review articles and methods papers are more likely to be cited than a paper based on a laboratory study.
Bibliometrics is a measure of the impact of research on further research, not necessarily of the quality of that research. Bibliometrics should therefore always be used with caution and not be considered a replacement for peer review, but best used to complement or verify qualitative evaluation.
Focusing solely on bibliometric indicators leads to undesirable side effects, e.g. "salami tactics" and "citation cartels", which reinforce the "publish or perish" dilemma.
Efforts to describe and incentivize the responsible use of metrics have provided researchers, research administrators, and other stakeholders involved in research assessment with practical strategies and solutions.
Further recommendations for the reliable and responsible use of bibliometric methods can be found here:
- Hicks, D. et al. (2015): The Leiden Manifesto for research metrics. Nature 520 (7548), 429-431, https://www.nature.com/articles/520429a
- The San Francisco Declaration on Research Assessment: https://sfdora.org/
- Coalition for Advancing Research Assessment (CoARA): https://coara.eu/
Dipl.-Inf. Linna Lu
Subject Specialist for Computer Science and Specialist for East Asia
Email: [email protected]
Phone: +49 511 762-3459