Abstract |
---|
Scientific impact evaluation is a long-standing problem in scientometrics. Graph-ranking methods are often employed to account for the collective diffusion process of scientific credit among researchers or their publications. One key issue, however, remains unresolved: what is the appropriate level for scientific credit diffusion, the researcher level or the paper level? In this paper, we tackle this problem via an anatomy of the credit diffusion mechanism underlying both researcher-level and paper-level graph-ranking methods. We find that researcher-level and paper-level credit diffusions are actually two aggregations of a fine-grained authorship-level credit diffusion. We further find that researcher-level graph-ranking methods may cause misallocation of scientific credit, whereas paper-level graph-ranking methods do not. Consequently, researcher-level methods often fail to identify researchers with high quality but low productivity. This finding indicates that scientific credit fundamentally derives from "paper citing paper" rather than "researcher citing researcher". We empirically verify our findings using the American Physical Review publication dataset, which spans more than a century. |
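The paper-level credit diffusion the abstract refers to is typically instantiated as a PageRank-style process on the paper citation graph, where each paper's credit flows to the papers it cites. As a minimal illustrative sketch only (the toy graph, damping factor, and iteration count below are assumptions, not the paper's actual setup):

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank. links maps each paper to the papers it cites."""
    nodes = set(links) | {q for cited in links.values() for q in cited}
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in nodes}
        for p, cited in links.items():
            if cited:
                # credit diffuses from the citing paper to each cited paper
                share = d * rank[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:
                # dangling paper (cites nothing): spread its credit uniformly
                for q in nodes:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Toy citation network: newer papers cite older ones.
citations = {"P3": ["P1", "P2"], "P2": ["P1"], "P1": []}
scores = pagerank(citations)
```

Here P1, being cited by both other papers, accumulates the most credit, while P3, cited by none, receives the least; researcher-level or authorship-level variants would run the same diffusion over differently aggregated graphs.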
Year | DOI | Venue |
---|---|---|
2016 | 10.1007/s11192-016-2057-4 | Scientometrics |
Keywords | Field | DocType |
Scientific impact, Credit diffusion, Authorship citation network | Data mining, Actuarial science, Impact evaluation, Computer science, Scientometrics | Journal |
Volume | Issue | ISSN |
109 | 2 | 0138-9130 |
Citations | PageRank | References |
2 | 0.38 | 15 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Hao Wang | 1 | 11 | 1.64 |
Huawei Shen | 2 | 739 | 61.40 |
Xueqi Cheng | 3 | 3148 | 247.04 |