These are grim times for Welsh astronomy. The cancellation of Clover follows on from a surprisingly bad RAE result for Cardiff. Peter Coles has analysed the RAE results several times over. In a first post, he listed straight weighted mean scores (in which Cardiff came 35th). In a second post, he introduced “research power”, meaning volume times score, which brought Cardiff up to 22nd. Then on January 29th, when HEFCE announced its funding algorithm (weights 7, 3, 1, 0 for the 4*, 3*, 2*, 1* buckets respectively) he gave another league table showing expected relative funding, with Cardiff now 27th. (Note however that the Welsh and Scottish funding councils have not yet announced their funding algorithms…)
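For reference, all three league-table statistics are simple functions of each department's quality profile, i.e. the fractions of its activity judged 4*, 3*, 2* and 1*. A minimal sketch, assuming the profile is given as fractions in descending grade order (the example numbers are made up for illustration):

```python
def weighted_mean(profile):
    """Straight weighted mean score: grade values 4, 3, 2, 1 times the fraction at each grade."""
    return sum(g * f for g, f in zip((4, 3, 2, 1), profile))

def research_power(profile, fte):
    """'Research power': volume (submitted staff FTE) times the mean score."""
    return fte * weighted_mean(profile)

def hefce_weight(profile):
    """HEFCE funding algorithm: weights 7, 3, 1, 0 for the 4*, 3*, 2*, 1* buckets."""
    return sum(w * f for w, f in zip((7, 3, 1, 0), profile))

# Illustrative (made-up) profile: 20% at 4*, 40% at 3*, 30% at 2*, 10% at 1*
p = (0.20, 0.40, 0.30, 0.10)
print(weighted_mean(p), hefce_weight(p), research_power(p, 30))
```

Note how steeply the 7/3/1/0 weighting rewards 4* work relative to the flat 4/3/2/1 mean — two departments with the same mean score can end up with quite different funding weights.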
Last week the RAE published the sub-profiles on which the final profiles were based – i.e. we now have separate profiles for research outputs, for environment, and for esteem. I downloaded the UOA19 (Physics) table, scraped the numbers, and played plotting games with Topcat. To help you play your own games I am attaching a .doc file which is really a CSV file in disguise … Unfortunately WordPress won’t let me upload a VOTable (it’s XML) or even a plain .txt file, but it does allow .doc files. You can convert the .doc file into plain text, and then Topcat or Excel will read it in.
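Since the .doc file is plain comma-separated text underneath, once renamed any CSV reader will take it. A minimal sketch with Python's csv module – the column names here are hypothetical (check the downloaded table's header row), and the rows just reuse the handful of scores quoted in this post:

```python
import csv
import io

# Hypothetical column names; the scores are the ones quoted in this post.
# With the real file, replace io.StringIO(text) with open("uoa19.csv").
text = """Institution,Outputs,Environment
Cardiff,2.22,2.74
Loughborough,2.66,1.10
Edinburgh,2.80,3.00
"""

rows = list(csv.DictReader(io.StringIO(text)))
for r in rows:
    print(r["Institution"], float(r["Outputs"]), float(r["Environment"]))
```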
So here is one interesting thing that jumped out at me – environment scores seem to have been quite crucial. The figure displayed here shows the research outputs score (blue dots) and the environment score (red dots) plotted in turn against the overall score. Compared to research outputs, environment shows a larger range, a larger dispersion, and a gradient which is distinctly larger than unity. The red dot way off the correlation is Loughborough – environment score 1.1 even though it scored 2.66 on research outputs. On overall score, Loughborough came 32nd. If its environment score had been as good as its outputs score it would have been 14th. Cardiff was actually slightly rescued by its environment score; it scored outputs=2.22 and environment=2.74. (Edinburgh had a fairly consistent 2.8 and 3.0).
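The “gradient distinctly larger than unity” reading can be made quantitative with an ordinary least-squares fit of each sub-profile score against the overall score. A minimal sketch in plain Python, no plotting – in practice the two arrays would be the scraped table columns:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares gradient of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# e.g. ols_slope(overall_scores, environment_scores) > 1 would confirm
# that environment varies more steeply than the overall score does.
```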
It wouldn’t be wise to overinterpret individual scores. But it does look like the panel had more marked opinions about the quality of research environment, or perhaps allowed themselves bolder judgements there. Any other patterns emerging?