Guest Post : Two Modest Proposals*

My UKIDSS co-conspirator Steve Warren has provided me with a nice wee guest post. In fact, as a special extra, it's really two posts in one. Both parts are provocative proposals. So … how about another poll?


Steve on :

My first modest proposal is to introduce ‘ranked-normalised’ citations. Straight citations are good for people who typically work in large collaborations. Normalised citations take care of this to some extent, but don’t give credit to the first few authors who probably did most of the hard work. Ranked-normalised citations would work as follows. In a four author paper, the weights by author rank would be 4,3,2,1. These are then normalised by the sum of the weights, so the first author gets 0.4 of the citations, the second author 0.3 etc. In many cases this would be a fairer way of giving credit than either straight or normalised citations. Of course in some cases it won’t work, particularly when author lists are alphabetical – I’m afraid the Aarseths of this world will always do better than the Zytkows. I think ranked-normalised citations would be useful, and might even influence authorship lists.
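The arithmetic of the scheme is simple enough to sketch in a few lines of code (my own illustration of Steve's rule, not part of the proposal itself): for an N-author paper the k-th author gets weight N − k + 1, normalised by the sum of the weights.

```python
def ranked_normalised_shares(n_authors):
    """Fraction of a paper's citations credited to each author under
    the ranked-normalised scheme: weights N, N-1, ..., 1, divided by
    their sum N(N+1)/2."""
    weights = range(n_authors, 0, -1)
    total = n_authors * (n_authors + 1) // 2
    return [w / total for w in weights]

# The four-author example from the post:
print(ranked_normalised_shares(4))  # [0.4, 0.3, 0.2, 0.1]
```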

My second modest proposal is to give away a small amount of telescope time by lottery. For example, at the end of the meeting the ESO OPC (or HST, Chandra, etc.) would throw the names of all the successful PIs into a hat and draw out one, who is then given 8 hours of grade A VLT time to do whatever they want. They wouldn't have to justify the science in any way, and would be free to collaborate with anyone they think might have a better idea. They wouldn't have to justify how they used the time after the event. There would be no rules (well, you can't sell the time). I bet that those 8 hours would produce more science than average.

* A Modest Proposal is the title of a satirical essay by Jonathan Swift, in which he suggests that poor people in Ireland alleviate their suffering by selling their children to be eaten by the rich. Another of his works was The Benefit of Farting.

: Steve off


So : vote now ! Results will accumulate publicly this time.


33 Responses to Guest Post : Two Modest Proposals*

  1. telescoper says:

    I don’t think it’s a good idea to use an automatic ranked-normalised measure because it’s just going to replace one wrong measure with another. An alternative would be to require a set of authors to agree on a weighting to be assigned with the author list when the paper is submitted, e.g. Joe Smith (0.5), Herbert Bloggs (0.45) and Freddy Freeloader (0.05). That would cause some interesting internal ructions in big collaborations but it seems fairer to me than assuming a formula like yours which will be wrong more often than it is right.

  2. Francis says:

    Don’t think the citation weighting is a good idea. It might discourage more senior scientists from allowing their students or postdocs to write a paper and be first author – which is important for their career development.

    And as for the ‘lottery’ award of telescope time – I don’t think that the government would appreciate time on expensive facilities being given away like this!

  3. The lottery is a good idea.

    As for the citations, I agree with Peter that it doesn’t make sense to replace one wrong measure with another. IF (and that’s a very, very big if) one wants a citation measure, I think the g-index is probably the best, though the question of author order is not addressed directly by it (or most other schemes). One thing many folks overlook is that even if you have a good measure, it can only deal with the data available. In other words, it counts only authors on papers. However, the amount of work required to become an author (as opposed to just being mentioned in the acknowledgements) varies quite a bit from country to country, institute to institute, scientist to scientist.

  4. Another problem with weighting author contributions in large collaborations is deciding what weights to ascribe to the people who built the instrument, wrote the software, and made the project possible. If you’re a high-z blob-counter you might argue that the instrumental folk get an SPIE paper and don’t appear on your marvellous two-author correlation function paper in Nature. If you’ve worked for six years building, testing and commissioning an instrument, all with a view to measuring that correlation function (but where you recognise you have folk in your team who specialise in correlation functions who’ve been brought in for that purpose), you might take a different view.

  5. …oops, pressed post too quickly.

    So, everyone gets on the big high-impact paper, but how do you compare the apples of instrument builders with the oranges of the correlation function experts? Percentage contributions only make sense if one can agree on a single parameter that measures the contribution, and I doubt that it’s often possible, especially in large collaborations.

  6. Grumpy old man says:

    I find the current obsession with publication and citation statistics quite unhealthy. At best they provide a crude measure of whether a scientist is useless, average or highly productive, but that is surely as far as it goes. Trying to compare us beyond these crude categories is a waste of time, so I see little value in inventing even more convoluted metrics. We all know that some fields gain more citations than others, simply because there are more researchers in that area, or maybe the culture of referencing is different. Some research also takes a long time to bear fruit, but our current obsession discourages work in these areas in favour of the quick result, etc…

    What should we be aiming for? Is it really publication and citation statistics, or should we be aiming for those rare insightful breakthroughs that really change our understanding of the Universe? The two aren’t mutually exclusive, of course, but our current system certainly encourages the former over the latter simply because the latter often takes more time and effort and won’t necessarily guarantee a result.

  7. Sarah says:

    Stephen – I don’t see how the rank-normalised citation indices would change anything for the instrumentalists. They don’t get much credit in the current system either. That’s a problem, but I’m not sure it’s related to the way citations are counted; it’s rather a side-effect of the limited way in which academic productivity is measured in general.

    I don’t think the rank-normalised citations are a particularly good idea either – see comments above.

    The lottery system I reckon is good though. It would encourage scientists to attempt more high-risk/high-gain observations than those that seem to be favoured by the TACs.

  8. Hi Sarah – I meant my comments to apply to @telescoper’s interesting proposal for divvying up fractions of effort contributing to a paper, which is appealing at first sight but which has these problems. The rank-normalised citations are, I think, a weighting scheme that in practice is a special case of @telescoper’s idea.

    I do agree though that this interest in citation statistics is a little unhealthy. We’re in danger of disappearing up our own

  9. …I think I pressed post just in time this time.

  10. Michael Merrifield says:

    On the second issue about giving away a chunk of telescope time by lottery, you might be interested to know that ESO recently set up a committee to look at how time is allocated on its telescopes. Since I foolishly stuck my head above the parapet on this issue by coauthoring a paper provocatively suggesting we ought to eliminate conventional allocation committees entirely (http://arxiv.org/abs/0906.1943), I found myself “volunteering” to be on the panel.

    In principle, this panel is charged with a ground-up look at how time should be allocated to optimize science returns without being ridiculously burdensome, and we are looking quite seriously at lottery elements. For example, since there is a reasonable consensus that peer review does quite well at identifying the excellent and the awful, but fails to generate reproducible results in ordering those in the middle ground, one possibility is to select randomly from those mid-ranking proposals rather than arguing endlessly over their arbitrary ordering.

    I suspect that since there are several members of the committee who currently oversee conventional peer review processes, who are naturally inclined to defend the effectiveness of their own efforts, this panel is unlikely to come up with anything too radical, but if you do have any original ideas please let me know so I can feed them into the discussion.

  11. Dave says:

    Alphabetical author lists are coming to astronomy – Planck is going to be strictly alphabetical as I understand it, so that everyone on Planck would be changing their name to Aarnold A Aaaardvark if this proposal were to succeed…

  12. Steve W says:

    A dearth of positive comments for rank-normalised citations so far, and several negative comments about the widespread use of citations. But when you put your next grant proposal in, or apply for a fellowship, or a job, or are considered for a prize, the first thing the committee will do is check your citations on ADSABS, or perhaps your normalised citations. And they are sure to be influenced by the numbers that come out, even when professing that they are not important. So which measure would you prefer them to use as the least bad?

  13. John Peacock says:

    Steve,

    Your citation proposal addresses something that worries lots of us: what Peter Coles called the “lurker” problem. A lurker is someone who is energetic politically, thus joining lots of big consortia, and clocks up the citations without ever having to contribute any of the original ideas that underlie the papers. Your ranking proposal doesn’t solve this without disadvantaging people who *have* contributed ideas to the paper, who can often be down the batting order for reasons of alphabet or lack of ego. I propose two alternatives:

    * Ignore the problem. Planck will probably break the current model, and (observational) astronomy will be where PPE has been for decades. They survived.

    * If you want to implement the gut feeling (which I share) that a single-author paper is more worthy than a 100-author paper of the same citations, a down-weighting of citations based on numbers of authors would suffice. But I don’t feel that the ADS linear weight is right: something logarithmic has the right feel to it. I’d say a 100-author paper ought to be perhaps 5 times less respected than a single-author paper. This suggests replacing citations, C, for a paper with N authors by

    C -> C/ln(1+N)

    (I didn’t have this formula in mind when I said a factor 5. It actually gives a slightly stronger penalty, but this equation is attractively simple).

    By this formula, a 3-author paper is half as prestigious as a single-author paper. Perhaps this initial decline is too steep: we want to encourage small collaborations, which can be very creative. I think I’d be happy with replacing 1+N by 2+N, or (better still) e-1+N. This is my final proposal, which I hope everyone will cite.
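To see how John's variants behave, here is a quick numerical sketch (mine, not his; the offsets 1, 2 and e−1 are the three he mentions):

```python
import math

def weighted_citations(c, n_authors, offset=1.0):
    """John's proposal: replace a paper's citations C by C / ln(offset + N).
    offset=1 is the simple form; offset=math.e - 1 makes a single-author
    paper keep exactly C, since ln(e) = 1."""
    return c / math.log(offset + n_authors)

# Prestige of an N-author paper relative to a single-author one,
# at fixed raw citation count (offset=1 form):
def relative_prestige(n):
    return math.log(2) / math.log(1 + n)

print(round(relative_prestige(3), 3))    # the "half as prestigious" case
print(round(relative_prestige(100), 3))  # roughly a factor 6-7 penalty
```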

  14. Michael Merrifield says:

    Alternatively, one could argue that a well-cited paper that comes out of a large well-supported survey with a mass of telescope time and an army of data slaves to produce a definitive-yet-predictable result is actually far-and-away less impressive than a well-cited single-author or few-author work, which clearly must have contained a modicum of real inspiration rather than just a lot of perspiration. Accordingly, the appropriate weighting might be

    C -> C/exp(N-1).
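Just to put numbers on the contrast between the two weightings (my arithmetic, nobody's endorsement): the exponential penalty is savage compared with the logarithmic one.

```python
import math

# Penalty factor (divide citations by this) for an N-author paper:
def log_penalty(n):   # John's  C -> C / ln(1 + N)
    return math.log(1 + n)

def exp_penalty(n):   # Mike's  C -> C / exp(N - 1)
    return math.exp(n - 1)

# The log penalty grows gently; the exp penalty explodes:
for n in (1, 2, 5, 10):
    print(n, round(log_penalty(n), 2), round(exp_penalty(n), 1))
```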

  15. Survey dude says:

    Mike, I guess you haven’t worked in a large survey for some time. Long gone are the days when you can have an army of data slaves waiting to pounce (if ever there were such days). In reality most of the work is done by a handful of over-worked people, who quite frankly deserve all the citations they get.

    If you really think the average single-author (or ‘few-author’) paper involves any real inspiration then you clearly haven’t read astro-ph in a while…

  16. Michael Merrifield says:

    “Mike, I guess you haven’t worked in a large survey for some time. Long gone are the days when you can have an army of data slaves waiting to pounce (if ever there were such days). In reality most of the work is done by a handful of over-worked people, who quite frankly deserve all the citations they get.”

    Then they should either kick the large number of freeloaders off their extensive author lists and into the acknowledgements, or insist that they pull their weight.

    “If you really think the average single-author (or ‘few-author’) paper involves any real inspiration then you clearly haven’t read astro-ph in a while…”

    I agree entirely, but then the average single-author paper on astro-ph probably picks up three citations, which was rather my point. If a large survey paper picks up 100 citations, it is probably because it has made an important measurement which required a large amount of data, but may not have been hugely inspired as a piece of work; if a single-author paper picks up 100 citations, it probably did something rather clever.

  17. Ian Smail says:

    I agree with Mike – consortia should spend more of their time policing the co-authorship of their papers to ensure that the people who end up on a paper get the credit they deserve. In fact this increasingly seems to be the approach being taken… which suggests one of the rank-weighting schemes may eventually be useful.

    For the question about TAC allocations, on pretty much every TAC I’ve sat on (bar HST) – when we get down to the “indistinguishable” – I’ve argued that we should go back to the top of the list and give those proposals more time and let them do what they want with it. I think this would be much more scientifically productive than doing a little more of the same stuff.

  18. Survey dude says:

    If only kicking off the freeloaders were that simple! To be fair, most have genuinely contributed something at some stage, but this tends to be minimal compared to the efforts of a tiny minority.

    This is also a project management issue of course. A good PI will insist that the hard-working few get the recognition they deserve when results are presented and in job references etc, which certainly provides some compensation.

  19. A solution to the problem of people misinterpreting alphabetical author lists (i.e. giving Zeldovich less credit than Aarseth) would be, for each new paper, to randomly decide whether the order will be alphabetical or reverse-alphabetical. Or, alternatively, use a random order. One should also indicate this, so that those who wish to be more precise have the necessary information (otherwise we might penalise Aarseth for those multiple-author papers where he did the lion’s share of the work, as well as giving him too much credit if only the alphabet put him at the beginning; the two effects might cancel out in some sense, but not in any one assessment).

    However, this is not really a solution. The idea is that the authors get credit for the work, and an all-or-nothing approach is as stupid as the first-past-the-post electoral system (ducks and dons flame-proof coat paid for by Agents Provocateurs for PR). As I have mentioned elsewhere, the threshold at which one moves from acknowledgments to authorship (as well as that between nothing and acknowledgments) varies enormously; this is a problem in itself, but an all-or-nothing approach exacerbates it.

    Let’s face it: bibliometry, despite its flaws, is here to stay. While we should use the g-index if a single number is required, the input data need to more accurately reflect the actual contributions. In practice, a system of weights, agreed on by the authors, is the only way to go. The full information is there for those who want to use it.

    This doesn’t, necessarily, have to be fractions. One could have “starring” and “co-starring” authors. In the context of film and television, it is clear to everyone what these mean, as do billing terms like “and …” or “special guest”, which usually denote a role less than a co-star or supporting actor but in some way special.

    Then there is the cameo. 🙂 However, this is somewhat different from the uncredited appearance. 🙂

    Seriously, perhaps one needs a special status for people who have a reason to be on the paper even if they didn’t contribute directly. If people just count the names, then the situation isn’t any different to that today; if they are interested in the additional information, then it is there.

    Some folks think they should be on any paper written by someone whose grant money they were instrumental in obtaining. While this might seem unfair to some, it can be argued that the success of this grant will be evaluated by the number of papers with the P.I.’s name, even if he did no actual work in producing the paper.

  20. Michael Merrifield says:

    Rather than introducing weights, why not allow authors to appear more than once in the author list? Mind you, papers would start sounding like firms of lawyers if we did (“Zwicky, Zwicky, Zwicky, Hubble and Zwicky.”)

  21. I don’t think de Vaucouleurs would have liked that idea, nor would Burbidge like it.

    This would introduce more confusion. I’m sure THE Simon White is happy that he has three Christian names. 🙂

  22. Chris_C says:

    The principal effect of the growth of scoring of publication rates since I did my D Phil 35 years ago seems to me to be an explosion in low grade publications. Even then, I noted that (mainly American) N-strong experimental teams would publish the same results N (or even N!) times in a range of journals so each team member could take a turn as lead author. The more recent citation mania has simply encouraged this even more. I would rather go back to fewer publications, each of which was worth reading. Mind you, I have no useful suggestions on implementation…

    • Michael Merrifield says:

      One suggestion, which I first heard from Bill Press I think, was that self-citations should count as -1. Although a little harsh, it would stop the gratuitous “see Bloggs (1992a,b,c, 1993, 1995, 1997, 1999a,b,c, 2001, 2005)” effect, and would also have the salutary consequence of encouraging new PhDs to stop rehashing their thesis and come up with something new of their own.

    • telescoper says:

      I agree to some extent, but I’m not sure that the proportion of low-grade papers has changed that much over the years. Obviously there are more papers now (because there are more astronomers) so if the fraction of dross is constant with time the number of weak papers published grows. I think this has happened. But if you pick up a random issue of ApJ or MN from 50 years ago you’ll find about the same proportion of rubbish as in a contemporary one.

  23. “an explosion in low grade publications”

    This isn’t a problem if the g-index is used:

    http://en.wikipedia.org/wiki/G-index

    It is not perfect, but perhaps the best measure IF (and that’s a big IF) one wants to put a lot of stock in bibliometry.

    I suspect that, among astrophysicists, Simon D. M. White’s score is the largest, whichever measure is used.
    Any other candidates?

  24. Dave says:

    The fundamental problem with all of this is that metrics, of any kind, buy into the lie that influence can be blindly quantified without any understanding of the field. I don’t believe it can. That is why peer review panels were part of the RAE and, as I understand it, are being retained at some level by REF. If you don’t know a field you can’t judge it, whether you’re an accountant or a numerical formula.

  25. Steve W says:

    ‘Yawn’ has just sneaked ahead on the citations question, suggesting that it’s time to close the poll …

  26. […] interesting suggestion over on the e-astronomer addresses the first question by suggesting that authors be assigned weights depending on their […]

  27. Bill Keel says:

    I did once suggest to STScI that they sweeten the offer to get people to sit on review panels by giving each panelist an hour of HST time. It could be used, combined, or bartered, for any observation not constituting a conflict of interest with something proposed to the panel. This also addresses the common complaint of really interesting and obvious things not being done, or of no one having the wit to propose that certain combination of objects and wavelength range. Clearly this was met by wry grins and shaking of heads. Everyone’s experience seems to agree that non-reproducible factors control the ranking around the cutoff – if I could just interest some competitor in spending time working on the relations between catered breakfast items, coffee brands, and scientific taste of TACs…

  28. andyxl says:

    Bill ! welcome on board. Or maybe you have been lurking…

  29. […] interesting suggestion over on the e-astronomer a while ago addressed the first question by suggesting that authors be assigned weights depending […]
