Description
Bibliometrics is increasingly used to evaluate scientific output and impact, in particular to distribute means such as research grants, but also internally within universities and other research institutions. Various performance and impact measures are used to establish the quality of research. This can be highly problematic, not only in terms of ethics but also with regard to method, especially considering the proliferation of tools for bibliometric analysis, which means that analyses are increasingly performed without an actual understanding of bibliometrics as a scientific method. In our research we have compared two research groups in the same field of research, both from Norwegian universities and of similar size and goals. We have used a variety of methods to normalize between them in order to evaluate the ethics and methodological reliability of the results. We found that comparing the two groups for benchmarking purposes required so many normalizations that the results were rendered largely useless: too many individual strengths of each group had to be left out of the evaluation in order to compare the two on equal terms. In our case these problems were compounded by the fact that one of the two groups is multidisciplinary, which in turn demanded methods to correct for differing publication patterns within the same group. Without knowledge of the researchers' backgrounds, this element could easily be overlooked and skew the results in the group's disfavor. This in turn means that evaluative bibliometrics is in danger of skewing results in favor of a certain type of research, group of researchers, or type of publication. In the worst case, research funds can be allotted, or entire research groups can lose funding, based on unsound comparisons and prejudice against certain types of publications.
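To make the normalization problem concrete, the sketch below shows one common family of corrections for differing publication patterns, a field-normalized citation score in the style of the MNCS indicator, where each publication's citation count is divided by the average for its field before averaging. All names, data, and baseline values here are invented for illustration; the abstract does not specify which normalization methods were actually used.

```python
# Hypothetical sketch of field normalization (MNCS-style).
# Each publication's citations are divided by the mean citations of
# publications in its field, so a multidisciplinary group is not
# penalized for publishing in low-citation fields.

def mean_normalized_citation_score(pubs, field_baselines):
    """pubs: list of (field, citations) tuples.
    field_baselines: assumed mapping from field to mean citations per publication."""
    scores = [citations / field_baselines[field] for field, citations in pubs]
    return sum(scores) / len(scores)

# Invented toy data for a multidisciplinary group: raw counts differ
# sharply by field, but the normalized score weighs each field equally.
group = [("history", 3), ("informatics", 12), ("informatics", 8)]
baselines = {"history": 2.0, "informatics": 10.0}  # assumed field averages
print(round(mean_normalized_citation_score(group, baselines), 2))  # 1.17
```

A score above 1.0 means the group is cited above the (assumed) field averages; note that the history paper with only 3 raw citations contributes the highest normalized score, which illustrates how an unnormalized comparison would have worked in this group's disfavor.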
Evaluating evaluative bibliometrics: a case study of two research groups