HST scramble

Friday night I got my HST proposal submitted, with various chums. Very pleased. Reads well. Obviously excellent. But it seems HST is just a tad competitive. I knew this of course, but it was driven home to me when I got an email requesting feedback from all Cycle-18 proposers. Well… here is the address list for this email. Jeez. Maybe I could form a club of people who have never tried to get HST time and we could all meet in the back room of the Stoat and Ferret without having to stand up.


Wednesday evening edit: list of names removed. For anybody who wants to know, it was about four screenfuls long. Lots.


32 Responses to HST scramble

  1. Nick Cross says:

    You would have thought that they would have heard of blind carbon copy by now.

  2. Tom says:

    They may all be receiving viagra adverts by now.

    Andy – not convinced providing the world with all these email addresses is such a good idea.

    Tom

  3. andyxl says:

    Hmm yes good point Tom. Just got a private email making the same point. Off to busy land now, but will probably remove later….

  4. Jonny says:

    This list does rather get the point across! I hate to poop the party, but could this list attract all the spammers in history? Could you put up a JPEG screenshot of this list instead?

  5. Actually, in the spammer community, harvesting email addresses from the web (or, worse, usenet) is just SO 1990s. After all, such geeks aren’t very interesting as spam targets. 🙂

    These days, there are two main routes to collecting email addresses. One is the PC virus whose job is to harvest email addresses from “address books” and the like on PCs; this yields real email addresses used by real people, addresses which were perhaps never visible on the web or usenet. The other is the dictionary attack: simply go through all the combinations. (Yes, this is actually done.) Computing power is not a problem: just have it done by a virus which infects millions of PCs.

    As a long-time usenet poster, I have a spamblock there out of tradition. However, since I run my own SMTP server at home, I can see what the spammers are trying. By far the majority of attempts are to non-existent addresses, i.e. the dictionary attack.

    Since spamming has become so widespread, respectable SMTP relays no longer allow it, so most spam comes from hosts that quickly end up on blacklists. Using an RBL (real-time blackhole list) thus eliminates almost all spam.
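
    To make the RBL mechanism concrete, here is a minimal Python sketch; the helper name is mine, and the choice of zen.spamhaus.org as the blacklist zone is just a well-known example, not a recommendation.

        import socket

        def is_listed(ip, rbl="zen.spamhaus.org"):
            """Check an IPv4 address against a DNS blacklist (RBL).

            Convention: reverse the octets of the IP, prepend them to the
            RBL zone, and do an ordinary A-record lookup. Any answer means
            the address is listed; NXDOMAIN means it is not.
            """
            query = ".".join(reversed(ip.split("."))) + "." + rbl
            try:
                socket.gethostbyname(query)   # any answer => listed
                return True
            except socket.gaierror:           # NXDOMAIN => not listed
                return False

        # 127.0.0.2 is the standard DNSBL test entry, so this should print True.
        print(is_listed("127.0.0.2"))

    An SMTP server consulting such a list can reject a listed host at connection time, before the message body is even transmitted, which is why it catches so much.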

  6. Tom says:

    Or there’s now a list of people to spam who might have thought their email address was relatively safe.

  7. Michael Merrifield says:

    Isn’t the idea to print out the list and throw darts at it to see who gets the telescope time?

  8. Nic Ross says:

    Long time reader, first time poster…

    Surely this kinda thing should be more private from STScI?? Doesn’t publishing these names in the email lead to a potentially huge conflict of interest??

    (As my old PhD supervisor used to say, “It’s a good thing the general public doesn’t know what we really do, otherwise we’d be laughed out of the room…” Admittedly this was in a discussion about Lambda… 🙂 )

    Note also that I think
    a) all of these folks are PIs only and
    b) some of these proposals will be for Archival projects…

  9. Paul Crowther says:

    STScI has now provided Cycle 18 submission stats, including an over-subscription of 9+ for GO proposals (see here), so there may have to be plenty of dart throwing in May, along the lines suggested by MM.

  10. Mr Physicist says:

    Address lists!? What about the Drayson proposals and the future of physics? Come on Andy………

  11. Michael Merrifield says:

    Afraid that the more I learn about it, the more convinced I become that peer review of telescope time is a very expensive random number generator.

  12. Nic Ross says:

    It’s O(10^4) $/£/€ per night for 10m time, right?

    Considering the full costs of the Shuttle Programme, the Space Telescope hardware (admittedly some degeneracy there), and the associated HST funding, I would love to know how much a Cycle 18 orbit costs.

    (and then cf. this with the decision making process on how these orbits are allocated…)
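
    (A back-of-envelope version of that sum in Python, with every input an illustrative assumption rather than an actual budget figure:)

        # Back-of-envelope cost per HST orbit. Both inputs are assumptions
        # chosen only to make the arithmetic concrete, not official figures.
        annual_programme_cost = 250e6    # assumed $/yr: operations plus a share of capital
        science_orbits_per_year = 3000   # assumed orbits/yr devoted to science

        print(f"~${annual_programme_cost / science_orbits_per_year:,.0f} per orbit")
        # -> ~$83,333 per orbit, i.e. O(10^5) $ under these assumptions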

  13. Paul Crowther says:

    Mike, I’m sure you’d agree that TACs generally agree on ‘brilliant’ and ‘awful’ proposals, but struggle to objectively rank the large chunk of ‘decent’ stuff in the middle. For ESO, WHT and Gemini that is a problem, but for HST there is no problem, because only a small fraction of the excellent stuff makes the grade.

    Nic, best not to think about cost/orbit, but worth every penny/cent/pfennig.

  14. Paul Crowther says:

    Mr Physicist,

    since you ask, Jon Amos at the Beeb temporarily included the following summary from me yesterday:

    Lord Drayson has addressed two of the main issues that have held STFC back since its creation – subscription exchange rates and operations of national facilities. These are very welcome in the current financial climate and should help stabilise future funding. However, there is no compensation for the damage inflicted on basic physics over the past 2 years, so severe cuts to university research grants will still go ahead. STFC lacks a coherent science vision, such that its corporate strategy trumps priorities from its scientific community.

    before, at my request, switching to the official RAS/IoP view. There are some sharper comments from Telescoper and Leaves on the Line too.

  15. Michael Merrifield says:

    Paul, that was certainly my preconception before I was co-opted onto the ESO committee looking at the allocation of telescope time. From the data we have looked at (not, as it happens, from ESO telescopes) it would seem that the scatter in pre-meeting grades assigned by different time allocation committee members is large and almost completely independent of the rank of the proposal. It does not seem to be true that assessors even agree on the excellent and the awful. I must admit, I was very surprised by the inability of the process to discriminate even at this level.

  16. Michael Merrifield says:

    To quantify that, a TAC for this unnamed-but-prestigious facility gave the top 10 proposals it looked at an average grade of ~2.0 with a dispersion amongst the panel members of ~0.8, 10 proposals in the middle of the range got an average of ~2.5 with a dispersion of ~0.8, and the bottom 10 proposals scored ~3.3 with a dispersion of ~0.9.
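
    To see what dispersions of that size imply, here is a minimal Monte Carlo sketch in Python, using the means and scatters quoted above; the panel size is my assumption. It asks how often a “top” proposal actually outscores a “middle” or “bottom” one once the grades are panel-averaged.

        import random

        random.seed(1)
        N_PANEL = 8        # assumed number of assessors grading each proposal
        TRIALS = 100_000

        def panel_mean(mu, sigma):
            """Panel-averaged grade: mean of N_PANEL noisy grades (lower = better)."""
            return sum(random.gauss(mu, sigma) for _ in range(N_PANEL)) / N_PANEL

        # Tiers quoted above: top ~2.0 (sd ~0.8), middle ~2.5 (~0.8), bottom ~3.3 (~0.9)
        top_vs_mid = sum(panel_mean(2.0, 0.8) < panel_mean(2.5, 0.8)
                         for _ in range(TRIALS)) / TRIALS
        top_vs_bot = sum(panel_mean(2.0, 0.8) < panel_mean(3.3, 0.9)
                         for _ in range(TRIALS)) / TRIALS
        print(f"top beats middle: {top_vs_mid:.0%}")   # ~90% with these numbers
        print(f"top beats bottom: {top_vs_bot:.0%}")   # essentially always

    So even with eight assessors, a top-tier proposal loses to a mid-tier one roughly one time in ten, while top versus bottom is almost always called correctly.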

  17. Michael Merrifield says:

    Nic, I agree entirely that it would not be smart to skimp on the allocation process given the cost of the telescope. However, that does not mean one should mistake precision for accuracy: if it is possible to get as good a result out of a simpler, cheaper process, we owe it to ourselves and the taxpayer to do so.

  18. andyxl says:

    Here is something hard to quantify: even if the selection process is close to random, the writing process is not. To write a competitive proposal, we had to think really hard about the science, the statistics, etc.; I am pretty sure we learned a lot and my own research programme moved forward, even if we end up being rejected. If we just allocated time to “good” people, we’d become much sloppier over time.

  19. Paul Crowther says:

    That’s perhaps the price you pay for having a small number of (co-opted) people on sub-panels: often there is no real expert view, since you also lack referee reports. You’ve also based your assessment on pre-grades from before the face-to-face discussion in Garching.

    Having served on a variety of ground- and space-telescope TAC panels over most of the past decade, I’d argue that PATT is slightly better, despite the variable quality of referee reports.

    Hubble, Spitzer, etc. are the best of the bunch, since their sub-panels involve twice as many people as ESO’s, so there is at least one genuine expert. Quite appropriate given their much higher cost per orbit/hr.

  20. Mr Physicist says:

    Chums? Jeez? Club? Beeb? I have wandered into cardigan land from the wastelands of the STFC empire. The lone soldier is Paul Crowther telling us how it is.

  21. Michael Merrifield says:

    Paul: those weren’t ESO grades; it was an even more prestigious organization, with large sub-panels of exactly the type you describe, yet still it reached no consensus on what was good, bad, or downright ugly.

    Andy: That’s a good argument for not simply holding a lottery. But if an alternative process were just as effective at randomly selecting from the “good” proposals, yet significantly cheaper to implement, surely we would get the same benefit of making applicants think hard, while freeing up resources for actually doing science rather than assessing it.

  22. Paul Crowther says:

    Mike, Andy notes the positive effects of writing proposals regardless of the outcome, but you seem reluctant to admit the positive benefits for those doing the assessment. If you don’t want the hassle, just say no next time.

    Those stats look perfectly reasonable to me as a 0th order cut to decide which to consider for approval, which to discard and which of the inbetweenies might sneak in (e.g. datasets worthwhile for legacy value in spite of poorly argued science case).
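
    (In code form, that 0th-order cut might look like the sketch below; the thresholds are illustrative, and the random draw for the middle band merely stands in for whatever the panel discussion does with the inbetweenies.)

        import random

        def zeroth_order_cut(proposals, accept_below=2.2, reject_above=3.0, slots=10):
            """Triage by panel-averaged grade (lower = better): keep the clearly
            excellent, discard the clearly weak, and fill any remaining slots
            from the middle band. All thresholds here are illustrative."""
            accepted = [p for p in proposals if p["grade"] < accept_below]
            middle = [p for p in proposals
                      if accept_below <= p["grade"] <= reject_above]
            spare = max(0, slots - len(accepted))
            accepted += random.sample(middle, min(spare, len(middle)))
            return accepted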

  23. Michael Merrifield says:

    Not at all: I agree that there are significant positive benefits to taking part in the assessment, which should certainly be factored into any discussion. All I am arguing is that we have an obligation to carry out the process efficiently, and should not slavishly (and somewhat arrogantly) assume that the process that has evolved is the most effective and appropriate way of doing it. Just declining to participate myself does not seem the right way to address this concern.

    As for your second point, unfortunately, if you look at the standard error associated with those scatters that I quoted, you will find that the 1-sigma uncertainty in ranking corresponds to a shift up or down the list by about a third of the total number of applications, which is a pretty unreliable metric for making even a first cut.

    I agree, though, that it would be good to have more data on the quality of the final outcome. Perhaps not surprisingly, there is not a huge amount of evidence, because it is very costly to carry out the kind of duplicated trials that such tests would require. However, the data I have seen on the subject show a depressingly small amount of correlation between the outcomes of two independent panels ranking the same proposals. This lack of correlation certainly fits with the anecdotal stories of many applications that jump from top to bottom quartile between rounds, but anecdotes are not a good basis for making decisions.

    All I am arguing is that we should approach this question scientifically: collect good data, analyze it with a due understanding of the difference between precision and accuracy, and not be so wedded to the current paradigm that we are not open to the possibility that there might be a better way to allocate telescope time.
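
    As a sketch of how the two-independent-panels comparison can be modelled: two simulated panels rank the same proposals, with the per-assessor scatter set to the ~0.8 quoted earlier and the spread in true proposal quality an outright assumption.

        import random

        random.seed(2)
        N_PROPS, N_PANEL = 100, 8       # assumed: proposals per round, panel size
        TRUE_SD, GRADE_SD = 0.3, 0.8    # assumed quality spread; quoted assessor scatter

        quality = [random.gauss(2.5, TRUE_SD) for _ in range(N_PROPS)]

        def panel_ranks(quality):
            """Rank proposals by panel-averaged grade (rank 0 = best)."""
            scores = [q + random.gauss(0, GRADE_SD / N_PANEL ** 0.5)
                      for q in quality]
            order = sorted(range(N_PROPS), key=lambda i: scores[i])
            ranks = [0] * N_PROPS
            for rank, i in enumerate(order):
                ranks[i] = rank
            return ranks

        r1, r2 = panel_ranks(quality), panel_ranks(quality)
        # Spearman correlation = Pearson correlation of the two rank vectors
        m = (N_PROPS - 1) / 2
        cov = sum((a - m) * (b - m) for a, b in zip(r1, r2)) / N_PROPS
        var = sum((a - m) ** 2 for a in r1) / N_PROPS
        print(f"rank correlation between the two panels: {cov / var:.2f}")
        # ~0.5 with these inputs – far from the ~1 a reliable process would give

    Real duplicated-review data of the kind mentioned above could be compared directly against toy models like this.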

  24. Albert Zijlstra says:

    The oversubscription at ESO is 5-8, not that much lower than at STScI.

    Panels are a real-life example of game theory. A random element in telescope allocations prevents time monopolization and gives off-key projects a chance at getting data; they may do much better than the panel – or the proposers! – expected. Nor does it hurt truly good projects to miss out at times; it keeps everyone on their toes. Panels probably get things drastically wrong more often than spot-on correct; a random element stabilizes the system. Same as in Darwinian evolution.

    For the same reason I am not a fan of limiting telescope access only to the STFC ‘priorities’. That does lead to fast results, yes, but also ensures a steep decline in competitiveness afterwards. Panels should aim to avoid subject bias. ESO does this by allocating time per subject based on demand. A rapid turnover in panel membership also helps.

    Would this also apply to grant panels?

    Albert

  25. Michael Merrifield says:

    A very salient point, Albert, and certainly those are the more sophisticated terms in which we should be thinking about the process.

    Fortunately, a degree of randomness is pretty much intrinsic to the system, since the desired outcome of the “best science” is so ill-defined: there is no objective basis for deciding between a high-risk potentially-revolutionary proposal and one that offers a sure-fire incremental return in the same field, let alone trying to decide whether exoplanets or cosmology are more interesting scientifically, so different panels will always come up with different solutions.

    However, a corollary of this randomness is that it is not worth having an allocation process that has pretences of objectivity by agonizing over each proposal in huge detail, since investing large amounts of time and money in such a procedure fundamentally cannot result in a “better” outcome in such an intrinsically noisy process.

  26. ian smail says:

    mike – i agree most TACs spend too much time trying to finesse their outcomes… but given the pseudo-randomness of the rankings this just means that their discussions frequently continue until the end of the scheduled meeting, irrespective of when they should have ended. it wastes a few hours, but it’s hardly the end of the world, given the advantage that peer-review keeps all applicants on their toes and at least provides the pretense that the quality of their ideas has an impact on their outcome.

    albert – some “memory” is good on all review panels – to remember objections or support from previous rounds for earlier versions of related proposals (or supported proposals in identical fields from other applicants). ideally this information would be held by the secretariat, but given that they are rarely experts in the field, it’s usually safer to simply ensure a few members are retained between panels.

    …and i’m not at all certain that allocating time on the basis of oversubscription ratio is the right thing to do. that way we could end up with a dwarf elliptical panel, giant elliptical panel, moderate-luminosity elliptical panel, etc. (or even “hot” stars, “cool” stars, “other” stars – oh no sorry – we already have those).

    “workers of the world unite…” (my random Grosse Pointe Blank reference)

  27. Michael Merrifield says:

    If the nugatory effort were really just down to that last hour or so, I’d be inclined to agree with you Ian. However, that doesn’t seem to be where the evidence points. We spend enough time trying to convince our students that they shouldn’t spend hours measuring a quantity to many decimal places when the method they are using is barely good to one significant figure, and we really should take that message to heart. If it turns out that there is an alternative method that attains an answer no more random than conventional peer review at a fraction of the administrative cost, we would be very foolish not to use it.

    • Monica Grady says:

      In terms of grants: so little money, so many applicants, so much time wasted in writing and reviewing proposals. So, take the pot of money, and divide by the number of applicants, and dole out an equal amount to each. Big groups = lots of applicants = lots of money. Small groups = valuable resource to be courted into collaborative projects. Allow natural selection to act.

      Same process for telescope time. Everyone gets 1 night, and then alliances form…

      • andyxl says:

        Monica – market processes are tempting… but if we get allocations and start trading, surely the cleverest salesfolk would win rather than the best scientists? In normal markets the extra factor is that consumers buy the product, and that’s a very hard-nosed test. But it isn’t clear what the equivalent is here. Hmm. Getting lost in metaphor.

      • Monica Grady says:

        Well, maybe the ‘market test’ is publications. People would flock to the best scientists/best kit/best models. A sort of hierarchical bottom-up way of achieving growth. Or is it top-down? Not just lost in metaphor, just lost!

      • Ken Rice says:

        Presumably a decent middle ground would be to get rid of the Full Economic Costing model (which is approximate at best anyway) and go back to one in which some of the research money goes to Universities directly to be divided up as they see fit (paying the research component of academic salaries, some computing and some travel) and researchers only bid for direct costs (plus some overhead possibly) through a competitive process. Researchers would then not be as reliant on a funding system that is somewhat random but they would still be required to write coherent, well thought out proposals if they wanted support for PDRAs or funding for HPC etc.
