Publication:
A comprehensive examination of the relation of three citation-based journal metrics to expert judgment of journal quality

dc.contributor.author: Peter Haddawy
dc.contributor.author: Saeed Ul Hassan
dc.contributor.author: Awais Asghar
dc.contributor.author: Sarah Amin
dc.contributor.other: Mahidol University
dc.contributor.other: University of the Punjab, Lahore
dc.date.accessioned: 2018-12-11T02:40:22Z
dc.date.accessioned: 2019-03-14T08:04:31Z
dc.date.available: 2018-12-11T02:40:22Z
dc.date.available: 2019-03-14T08:04:31Z
dc.date.issued: 2016-02-01
dc.description.abstract: © 2015 Elsevier Ltd. The academic and research policy communities have seen a long debate concerning the merits of peer review and quantitative citation-based metrics in the evaluation of research. Some have called for replacing peer review with metrics for some evaluation purposes, while others have called for the use of peer review informed by metrics. Whatever one's position, a key question is the extent to which peer review and quantitative metrics agree. In this paper we study the relation between human expert judgment and three journal metrics: source normalized impact per paper (SNIP), raw impact per paper (RIP), and the Journal Impact Factor (JIF). Using the journal rating system produced by the Excellence in Research for Australia (ERA) exercise, we examine the relationship over a set of more than 10,000 journals categorized into 27 subject areas. We analyze the relationship along the dimensions of correlation, distribution of the metrics over the rating tiers, and ROC analysis. Our results show that, along every dimension measured, SNIP consistently has the strongest agreement with the ERA rating, followed by RIP and then JIF. The fact that SNIP agrees more strongly than RIP demonstrates clearly that the increase in agreement is due to SNIP's database citation potential normalization factor. Our results suggest that SNIP may be a better choice than RIP or JIF for evaluating journal quality in situations where agreement with expert judgment is an important consideration.
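The abstract describes comparing each metric to the ERA rating through correlation and ROC analysis. As a purely illustrative sketch of that kind of comparison (not the authors' code or data), the Python snippet below computes a Spearman rank correlation and a ROC AUC between a hypothetical journal metric and an ERA-style rating; the sample values, the tier-to-rank mapping, and the choice of treating the A* and A tiers as the positive class are assumptions made only for this example.

    # Illustrative only: hypothetical journals and metric values, not the study's dataset.
    from scipy.stats import spearmanr
    from sklearn.metrics import roc_auc_score

    # ERA-style rating tiers and a citation metric (e.g., SNIP) for eight made-up journals.
    tiers = ["A*", "A", "A", "B", "B", "C", "C", "C"]
    metric = [3.1, 2.4, 2.0, 1.3, 1.1, 0.9, 0.7, 0.5]

    # Correlation dimension: Spearman rank correlation between the metric and the ordinal tier.
    tier_rank = {"A*": 4, "A": 3, "B": 2, "C": 1}
    rho, p_value = spearmanr(metric, [tier_rank[t] for t in tiers])

    # ROC dimension: treat the top tiers as the positive class and measure how well the
    # metric alone separates them from the lower tiers (AUC of 1.0 = perfect separation).
    is_top_tier = [1 if t in ("A*", "A") else 0 for t in tiers]
    auc = roc_auc_score(is_top_tier, metric)

    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f}), ROC AUC = {auc:.2f}")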
dc.identifier.citation: Journal of Informetrics. Vol.10, No.1 (2016), 162-173
dc.identifier.doi: 10.1016/j.joi.2015.12.005
dc.identifier.issn: 1875-5879
dc.identifier.issn: 1751-1577
dc.identifier.other: 2-s2.0-84954287475
dc.identifier.uri: https://repository.li.mahidol.ac.th/handle/123456789/43459
dc.rights: Mahidol University
dc.rights.holder: SCOPUS
dc.source.uri: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84954287475&origin=inward
dc.subject: Computer Science
dc.title: A comprehensive examination of the relation of three citation-based journal metrics to expert judgment of journal quality
dc.type: Article
dspace.entity.type: Publication
mu.datasource.scopus: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84954287475&origin=inward
