Is it better to make an important discovery or to know everything?

Expertscape believes it’s the latter – “know everything” – because Expertscape measures expertise. But for every other prominent publication-ranking system, the answer is making an important discovery, because those rankings measure the influence of research findings.

This distinction is at the heart of Expertscape’s uniqueness and explains why it yields superior results if you’re seeking a clinical consultant or a scientific collaborator to tackle a difficult problem.

A few extreme cases illustrate the principles involved.

Measures of “research influence” invariably derive from citation counts. The Science Citation Index, the H-index, Clarivate Analytics, and the Google Scholar rankings all declare a publication (or group of publications) “influential” if it is cited frequently.

This is logical, but also silly. In 1951 Oliver Lowry (who?) published what became the most commonly cited scientific paper in history, describing how to measure the concentration of protein in solution. Certainly this paper had influence in laboratories everywhere, but… so what? No one needing consultative help with protein chemistry would be likely to identify Lowry as the go-to person for such help.

Many other top-cited papers have the same type of uselessness. The #2 paper describes how to make a certain buffer solution. The #3 paper describes an alternative to Lowry’s method.

Even Nobel Prize-winning papers have the same failure to distinguish a lightning stroke of discovery from sustained expertise. For example, Alexander Fleming’s pre-World War II discovery-of-penicillin publication earned him a Nobel share, but he had no role whatsoever in the subsequent saga of penicillin becoming the most important medication in human history. For all practical purposes, the only thing he ever did with penicillin was publish that one small paper. It was, of course, influential, but he was never afterwards consulted as an expert in penicillin, except to furnish some leftover cultures.

More recently, the Nobel committee awarded Michel Mayor and Didier Queloz part of a Nobel Prize in Physics “for the discovery of an exoplanet orbiting a solar-type star” in a 1995 “breakthrough paper.” But, as with Fleming, breakthroughs can be transient: the Committee acknowledged that the Doppler-based technique used in the breakthrough paper held sway for only 5 years afterwards. In actuality, the paper’s 5000+ citations since its publication mark it as “a standard reference that one cites in order to make clear to other scientists what kind of work one is doing” – one of the known stigmata associated with highly-cited papers.

By contrast, Expertscape’s ranking algorithm favors people who have an extensive, sustained publishing record in a topic. It was designed to identify the seasoned clinician who has “seen it all” when it comes to some medical condition – someone who will realize that your case of Parkinson disease is actually not Parkinson disease, but is a different parkinsonian syndrome with a different response to treatment, such as olivopontocerebellar atrophy.

Such a person does not acquire this knowledge by making a narrow discovery that is reported in one frequently-cited paper. They acquire it by continuous study of the medical condition, in all its guises, and by organizing their thoughts in written publications. The same principle applies to all fields of study, not just to clinical medicine.

Of course there is overlap between influencers and experts, and Expertscape has limitations of its own. But, for the reasons above, we think that Expertscape is the pre-eminent tool for rapidly finding people who can help.