Judging by quality in a world with too many pieces of the puzzle


One of the themes of the Utrecht University Open Science Programme is developing a new vision on Recognition and Rewards: researchers should no longer be evaluated on the impact factor of the journal in which they publish their research results, or on how often their articles are cited as captured by the so-called Hirsch index (commonly abbreviated to h-index). The time has come to evaluate research on its true quality, openness and societal impact.

We discuss these developments and their possible consequences for researchers' publication strategies with Bert Weckhuysen, Professor of Inorganic Chemistry and Catalysis and Distinguished University Professor of Catalysis, Energy & Sustainability.

As a renowned researcher with an h-index of around 100, what are your thoughts on these developments around counting citations?

Let me start at the beginning: I did not grow up with the h-index. As a PhD student at the University of Leuven and later as a postdoc in the USA in the 1990s, I had never heard of it, simply because Jorge Hirsch did not invent the index that bears his name until 2005.

What I did notice when I came to work in the Netherlands, however, was that the Dutch academic community was at the forefront of counting citations. The University of Leiden, for instance, worked with the Van Raan analysis, which was used, among other things, to judge academic research by quantitative and objective standards. Over the past decades I have seen academic output become increasingly quantified. In itself there is nothing wrong with trying to quantify things, but when things get totally out of proportion, you lose control.

When researchers say: I only want to publish in journals such as Nature, Science and Cell because of their high impact factors, publishing becomes an aim in itself. It is then no longer a question of doing science, but of ticking off lists. I think we have reached this situation by now. So we are currently seeing a movement against the use of lists and their matching indexes. The pendulum needs to be pushed to the opposite side, but in my opinion it must end somewhere in the middle. However, I want to guard against thinking that all the people who were appointed over the last thirty years were judged by the wrong criteria. If the candidates were evaluated in a correct manner, if the committee knew how to look beyond h-indexes and high-impact papers, then there is nothing to worry about.

In itself there is nothing wrong with trying to quantify things, but when things get totally out of proportion, you lose control.

Prof. Bert Weckhuysen

It is sometimes said that the old guard resists the current developments in Recognition and Rewards. I don't think that is the case, because we have witnessed these developments first-hand and we expressed our concerns too. Maybe we should bring into the debate the question of how exactly we reached this tipping point. Why were we dissatisfied with how things were before, and why have we systematically developed a recognition and rewards system increasingly based on numbers over the past 15 to 20 years? We don't need to go back to the old days, but we can learn a lot from the past period to arrive at a better and more balanced situation.

By signing DORA, the university has said goodbye to h-indices and impact factors. What do you think is a good measure of success?

The real question is: how do you judge someone when it comes to performing scientific research? If I want to find out whether someone is a good researcher, I pay attention to the following things: has he or she tried to develop their own line of research? Do I see creativity? Authenticity? Has that person managed to let several ideas blossom? What is the piece that he or she will add to the big puzzle of science? And moreover: what has the candidate already achieved? In this way candidates inspire confidence and offer the prospect of being able to introduce new concepts within a discipline or across several disciplines. By the way, the same line of reasoning applies to developing new teaching formats and curricula.

Rector Henk Kummeling signs the San Francisco Declaration on Research Assessment (DORA) on behalf of Utrecht University

There is a growing tendency against quantifying academic output, but if you agree upon the indicators for quality within a discipline, then you are still on the right track. Publishing a book that has a lot of impact, for instance, is also a way to put the researcher on the metaphorical scales. Together we should not be afraid to identify what we call good and not so good.

Listen, in my discipline you have other respectable professional journals, such as the Journal of the American Chemical Society and Angewandte Chemie, with admittedly somewhat lower impact factors, but which are widely read by fellow researchers. Work published in these journals can very well be impactful research that takes the discipline a step further. Discoveries and new scientific insights, whether published in a high-impact journal or not, may have large societal implications. So impact factors only tell part of the story. In the end, it is not the choice of journal but the quality of the work that determines whether something is of importance to the further development of science and society.

Does this also apply to open access journals?

Oh yes, those may be open access journals as well. The Journal of the American Chemical Society now also has an open access version, JACS Au (pronounced JACS Gold), and this year we submitted our first articles to it. I think it will become a very good and widely read journal. Let's hope the economic model leads to a sustainable open access journal. I would advise my students to publish in that journal, but there are other journals in which their research would also come out well.

According to you, what is currently changing in the publication behaviour of researchers?

Researchers must first of all do what they are appointed for: education, research and knowledge transfer. Researchers want their work to be read by fellow researchers, and this is done by means of scientific journals and books. We, as an academic community, must take care that these journals and books are, or will be, made open access. But exactly how the revenue model works should not be the concern of researchers. That question is down to the publishers, funding organisations such as NWO, university executive boards and the university libraries. In the Netherlands we are making progress by concluding Big Deals with publishers such as Wiley-VCH and the American Chemical Society (ACS).

I would like to add that we are currently living in a world abundant with information. We have a proliferation of journals, books and articles; as a scientist you can hardly keep up with what is going on in your own discipline. So this whole exercise of open access and open data must go hand in hand with finding reliable information. There are too many pieces of the puzzle. That is why, in relation to open science, we must not only think about sharing knowledge but also about synthesising that knowledge. By putting the pieces together, something new will emerge. Publishers are already doing this, by the way, but the question is whether there is another way to organise this in a satisfactory manner.

But then you need to know about each other’s pieces.

And understand them, and have the time to read and process everything. Eventually we are heading towards the 'database of everything'. And the danger lurking there is that unreliable or even false pieces of the puzzle become part of that database. We need verified knowledge, in which reliable papers hold up and others fall away for being irrelevant or wrong.

In open science all our papers, including the underlying data, will become part of this 'database of everything'. And a search engine will determine which information ranks first, because nobody is going to scroll through 20,000 pages. So tech giants like Google have great power. We already see where this leads in the case of general news items about all kinds of societal concerns: information is provided by algorithms that measure impact by the number of clicks, likes or responses. How should we handle this and do better? How will we know whether something is reliable or not? There must be some form of validation, and academic journals must aim at building and maintaining a good reputation.

So that also means a responsibility for editorial boards of journals?

Yes, and it also means that more attention should be paid to reliable publishing and analysis, including sharing the underlying data. We need to get the 'database of everything' under control again. That does not necessarily mean limiting the database, but we do need to order data in a new reality in which we pay more attention to performing good science. We need to commit ourselves to being a trustworthy partner for society and industry. Exactly how to give shape to this intention will be quite a search. I don't think anyone out there has already thought out a plan. Each one of us will have to take on this job in the years to come.

We need verified knowledge, in which reliable papers hold up and others fall away for being irrelevant or wrong.

Prof. Bert Weckhuysen

Is the new vision on Recognition and Rewards also a step towards getting the database under control?

Maybe so, because it can put the brakes on the proliferation of information. You need to get back to the question: what is our actual mission? Is it our aim to inform each other of research results and to say something of importance, or is it our aim to write a paper and get it published in a reputed journal? I think it is the first. And if that has the indirect effect that the work gets published in a well-reputed journal: all the better. But in the current climate, publishing in that sort of journal takes priority over sharing knowledge, and I think that is not right.

How would you define impactful?

That is something we agree upon within a scientific discipline: whether something feels impactful. I think the definition is easier to give for societal impact than for insights and discoveries whose direct application, or even industrial or societal impact, is not yet clear.

But feelings are a bit…

Subjective.

And vague.

In a sense, yes, because it is hard to predict whether a new insight or great discovery will actually have impact. In some disciplines it is probably easier to show whether research may lead to direct societal or technological impact.

A final question: you appeared in one of the introductory videos shown at the opening of the academic year. Asked for your comment on the question "Does UU already have an open mind and attitude?", you did not answer with a heartfelt yes.

It is true that I said there is room for improvement. And that starts with you. Let’s go back to the pieces of the puzzle. In a joint effort to put them together, you make progress. At the same time you want recognition for your own piece and all the hard work you put into it. A certain kind of egocentrism plays its part here and that is only human. You need to share in all fairness, but you should not share naively. That means we have to work more on developing team science, good mutual collaboration and the recognition that comes with it. That is the task we are all facing.

Source: Utrecht University website