Some people have this annoying habit of starting a blog post with an embarrassing statement about how they haven’t posted for a while, thus drawing unnecessary attention to their failings. Well, I have absolutely no intention of mentioning… oh hang on. Bugger. Already done it. Oh. Well. Anyway. LUCKILY, old chum and intellectual sparring partner Martin Elvis has supplied me with a lovely guest post. It follows on from my previous post, and is a plea for asking answerable questions. I think it’s spot on – how about you?
Martin Elvis, Harvard-Smithsonian CfA
Scientists are sometimes criticized for their hubris, trying to explain the whole Universe, searching for the beginning of Time, or for the God Particle that gives mass to all things. But excuse us, that was never our intent. A feature of science, why it has been successful, is that it asks humble questions.
The alchemists famously wanted to find the Universal Solvent, or to turn Base metals into Gold. It’s good to have ambitious goals, surely. “Make no small plans”, they say. Well, no. Not if you have no idea how to achieve them. Your plans should be just as big as they need to be to answer a well-posed question. It’s too easy to ask a big question. Science began when some folks decided to ask small questions: How does a ball roll down an inclined plane? What if I pump the air out of a closed container? Being happy to get answers to these questions led on to more questions, and those to more still. And after 400 years of one tractable question after another, we have the extraordinary questions we can ask today.
Linking these questions up with mathematics was often a good way to make the questions sharper, though not always. Some technologies – gunpowder, transparent glass, printing – made progress more rapid. And publishing our answers to these questions publicly made a huge difference. But the main point was to ask tractable questions, ones that you had an idea how to answer.
For some time I’ve felt that present day philosophers are stuck with asking the same Big Questions, and have made only as much progress as the alchemists did. But now I’m worried that Big Science has run off in the same direction. We want a Theory of Everything, but 50 years of searching has produced only tantalizing clues in 6 dimensions. We want to understand Dark Energy, the ‘anti-gravity’ that accelerates the expansion of the Big Bang. But all we know how to do is to measure that acceleration more and more precisely. So far that has produced zero insight. There are an infinite number of theories that can fit the data, and there will still be an infinite number of theories if we measure the acceleration 10 times more accurately. We will want to image the Other Earths that we expect to find soon, map their continents, study their vegetation. But that needs telescopes far, far larger than we can currently build. These are not good questions. In his Op-Ed “The ‘Nightmare Scenario’ ” James Owen Weatherall (UC Irvine) says of high energy physics: “We are faced with a struggle between the questions we want to answer and the limitations of our abilities – and at some point, perhaps soon, our limitations will win the day”.
Is there really a point at which we should say “This is too hard a question. We don’t have a clue how to answer it, or the best plan we have costs so much we couldn’t do anything else. So let’s shelve it for now. Maybe in 50 years we’ll see how to get at it.” Superconductivity remained unsolved for 50 years. No-one knew how to attack it, or they didn’t have the technology. It’s plausible that another clue to Dark Energy will come up thanks to research in some other area of astrophysics, or elsewhere. Perhaps in 50 years we’ll be able to afford much bigger telescopes in space to image new Earths. Perhaps in solving another problem, maybe in mathematics or computing rather than physics, we may see how a Unified Theory of quantum gravity can be built. Almost certainly it won’t be the tricks that worked before. Einstein and Feynman failed. New ways of thinking are needed. They may be outsiders now, but perhaps the approaches of Robert Laughlin (“A Different Universe“) or Stephen Wolfram (“A New Kind of Science“) are what we need. What now seems a weakness, may be re-imagined as a strength. The square root of 2 cannot be expressed as a ratio of whole numbers. Is that a problem, or does it point the way to irrational numbers?
This is not a cry of despair. Weatherall is too pessimistic. Building ever bigger colliders has probably hit its limit. But we will find other ways. We’ve always made these choices. Asking what the stars were made of, even though it’s an obvious question, was left alone because there was no way to address it. Then Kirchhoff and Bunsen were taking their exercise along the Philosophers’ Walk in Heidelberg, talking about how their new spectrograph had analyzed the composition of a fire in the nearby city of Mannheim. Bunsen half joked “Why should we not do the same with regard to the Sun?”* Moment of silence. Bingo! Now we can ask what the Sun is made of, because we can see a way to find out. This was the birth of modern astrophysics. There’s still a huge amount of science we do know how to make progress on. Modest specialized telescopes found exoplanets and Dark Energy. New ones can test quantum chromodynamics, not to mention understanding galaxies and quasars. Ongoing laboratory searches could well identify the Dark Matter. Meanwhile, we leave the unanswerable aside.
The trick in science has always been to ask the right questions – not too easy, not too hard. (Imre Lakatos, a philosopher of science, called it being able to imagine a research program.) We do best when we see how we can go forward easily; the means of doing so may not be all that cheap, though it can be. What we do have to do is to ask humble, but not too humble, questions.
* as recounted in an anonymous article in Nature in 1902 (vol.65, p.587).
Even worse than science that attempts to answer too difficult a question is science that seeks to answer no question at all. I read a depressingly large number of telescope applications that simply propose to observe something because it hasn’t been observed, or seek to compile the largest sample of some class of object ever compiled, with no clear view as to what science will be enabled. Many seem to think that the magic incantation “a complete sample” is all that is required to guarantee success. They are seemingly unable to grasp that, give or take the extra Poisson noise, a random subsample of such a sample has exactly the same statistical properties, and that a case has to be made for the size of sample sought.
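That subsampling point is easy to check numerically. Here is a toy sketch (the distribution, sample sizes, and seed are all made up for illustration, not drawn from any real survey): a random 10% subsample of a “complete” parent sample reproduces the parent’s summary statistics up to sampling noise, so sheer size alone buys nothing without a scientific case for it.

```python
import random
import statistics

random.seed(42)

# Hypothetical "complete sample": 10,000 objects with log-normal "luminosities"
complete = [random.lognormvariate(1.0, 0.5) for _ in range(10_000)]

# A random 10% subsample of the complete sample
subsample = random.sample(complete, 1_000)

# The subsample's summary statistics match the parent's, up to sampling noise
print(f"complete : mean={statistics.mean(complete):.3f}  "
      f"stdev={statistics.stdev(complete):.3f}")
print(f"subsample: mean={statistics.mean(subsample):.3f}  "
      f"stdev={statistics.stdev(subsample):.3f}")
```

The only statistical cost of the smaller sample is the larger Poisson/sampling noise on each statistic, which shrinks as the square root of the sample size – which is exactly why the proposal has to justify the size requested.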
While it is true that such “fishing expeditions” do turn up fascinating results, the hit rate is rather low, and I would argue that serendipity is at least as likely to strike in a proposal that seeks to answer a specific question, with the bonus that good science will be forthcoming even in the many cases where the unexpected does not appear.
This issue is related to the “surveyization” of astronomy. While there are many truly excellent surveys being undertaken, and many areas of science that cannot be addressed without such survey data, we now run the risk of totally squeezing out those who seek to answer small well-focused questions through targeted observations. In addition to the loss of interesting low-cost science, I worry about a generation of students who have only been involved in large collaborations throughout their careers, so have never been given the opportunity of developing the skill set to oversee an entire project from idea-on-a-beer-mat to final publication, and who struggle to demonstrate in an objective way to anyone outside their consortia that they have made significant intellectual input to the science.
There needs to be both, of course. However, many surveys turned out to be useful for things which weren’t even thought of when they were done.
It’s interesting (well, to a survey nerd like me anyway) how people use “complete sample” to mean different things. For me it’s obvious that something can be complete >and< sparse sampled, because for me complete means "everything that satisfies your selection function" even if that includes sparse sampling. The idea is to have selection effects that you understand and a sample that's exactly what your team needs to answer a question, not a weird rag-bag (of targets I mean, not collaborators). But as Michael shows, there's a danger in how you sell that to TACs.
What gets my goat is claiming a survey is "unbiased", which is clearly impossible, because survey "bias" is just a pejorative word for a selection function. Oh dear, I think everyone's nodded off…
I’m listening, Steve! I think when people say “bias” they are thinking “unrecoverable bias” or something like that.
Mike – I am as you know a fan of mega-surveys, but I actually agree with you on middling-surveys and consortium-ization. I have always felt that part of the point of mega-surveys, and also the VO, is preserving our small-team smart-RA astro-democracy. You do your observations on the database rather than on the sky; so as long as that database is public, any smart RA could do something good without asking a PI’s permission. However, the science case for even a legacy-style mega-survey has to have at least some hard-nosed science examples!
I hope I didn’t give the impression that I am not a fan of mega-surveys! All I am worried about is the balance: it seems increasingly that you have to declare yourself the “PI of the CRASS (“ContRived Acronym Science Survey”) Survey” to get much by way of funding support. And while mining of public surveys is a good way of getting excellent multiplex value from such data, it is in no way a replacement for coming up with novel observations tailored specifically to answer interesting science questions.
We need somehow to ensure that we maintain a balanced portfolio, when in difficult times it is quite natural to target what little funding we have on the highest-profile big programmes (and I am as guilty as anyone, in arguing the case for E-ELT at the expense of smaller telescopes).
I can understand the reluctance to pour a huge fraction of our limited resources into an attempt to answer a question like “is the dark energy equation of state w *exactly* equal to -1?” because that’s an area where arbitrarily high precision still can’t answer the question with complete certainty. But to say that we’ve gotten “zero insight” from measuring the expansion more carefully is to belittle all that we’ve learned from supernova surveys, from measurements of the expansion in large-scale structure, from lensing, even from the only marginally related questions such as whether or not gamma-ray bursts are standard candles. Asking the small questions is a way to make incremental progress, but asking the big questions is a way to get into unknown territory, and that doesn’t mean you necessarily find the answer to what you were looking for, but you’re bound to find *something.* It’s the questions that are *really really hard* that force us, as scientists, to be maximally creative. When we have no idea how to approach a problem, we make the most progress by thinking outside the box — which can of course be a complement to setting the issue aside and working on something else, but should never be written off as completely pointless.
Of course we have to be smart about it, and we have to choose our battles. Flailing about in the theoretical landscape is unlikely to be the most efficient approach. But I would argue against the statement that “We do best when we see how we can go forward easily.” I think we do best when we have no idea how to go forward, and we have to come up with something entirely new.
No observation can ever measure anything with arbitrary accuracy. That’s not the point. However, if we measure cosmological parameters more accurately, we might find that the equation of state is not -1, which in my view would be really interesting. -1 is the cosmological constant, which has been around for almost a century. Something other than -1 would be really new. Maybe we will measure that Omega + lambda is not 1 within the accuracy. That would be very important.
Phil: Oh, I agree – if there were >1-sigma evidence for w ≠ -1 I would pile more funds in. When it keeps on being -1.00 within 1 sigma, though, how much of your budget do you put into reducing sigma?
Someone, maybe Richard Battye at the last Moriond meeting, noted that most measurements get something quite close to -1, but, from the width of the error bars, one would expect more variation in the best-fit values. It is not clear what is going on.
Not just funds, but it would be really interesting. Of course, there comes a point when additional effort won’t reduce the error bars enough, but I don’t think we are there yet. It wasn’t that long ago that the Einstein-de Sitter universe was within the error bars. 🙂
@Katie: I’m not putting down all the wonderful SN1a, BAO and weak lensing work. I’m a great fan of BOSS and DES, for example.
My point is that while “really really hard” questions can force us to be creative, “really hard” often means unsolvable (right now). Have we been creative enough with Dark Energy? I’m not convinced. With current plans we may well find *something* good that is nothing to do with Dark Energy. But anything we do that’s new enough will find something. (This is Harwit’s “Discovery Space” argument.) But in that case why choose something very expensive?
A good example of how things progress well is searching for Earth-like planets spectroscopically. This is really hard too. No-one even attempted it until we’d already found the totally unexpected hot Jupiters by very economical means. Thanks to an evolution of technique over ~15 years, 10 cm/s or better laser-comb technology is now plausible. This is a case of the winning combination in science that I was arguing for: “really hard yet plausible”. And it still isn’t very expensive. [Now planetary atmospheres, that’s getting expensive. Most likely we’ll have to wait on that, unless we get creative again!]
As you note, the interesting bit may be trying to see where the current cosmological model may be wrong rather than getting to the third decimal place on some of the parameters. After all, no dark matter particle has been detected at LHC so far and cosmologists always want to be in line with particle physics! And dark energy certainly has its problems as an explanation for an accelerating expansion. A problem arises only if we restrict ourselves to asking small questions of these data…
Do you have any suggestions? Low Hubble constant is no longer viable, right? Any other serious contender?
Inevitably, am still interested in a low-H_0 universe, but the difficulty is the CMB straitjacket. So have looked at smoothing 1-degree CMB scales by both telescope beam and lensing. There are also the anomalies at large scales in the WMAP data to think about. Otherwise, R. Kolb et al. have tried to get out of an accelerating Universe via inhomogeneous models that produce extra acceleration without DE. Others look at more inhomogeneous and even non-Copernican models, also to get out of DE. Am not keen on these models, but that people are being forced to try them can be taken as an indication of how desperately fine-tuned the dark energy alternative is.
I think we agree that the cure shouldn’t be worse than the disease. However, what fine-tuning are you referring to with regard to dark energy (how I hate that term—Sean Carroll’s “smooth tension” is much better) and why is it desperate?
We can probably all agree that incremental advances are necessary but dull, and ambitious questions are good. But questions that are too ambitious can be just pointless. It’s really just a practical issue. Tom’s point is related but not the same – yes, we could be stuck in a rut and need some imaginative thinking. It may be tempting to “think big” in that situation, but you may get nowhere and your brain hurts. Better to try something different and see where you get.
If I remember correctly Peter Medawar’s dictum in “Advice to a Young Scientist” was that you should work on the hardest problem you thought you could solve.
Unless, however, it is sufficient to work on the easiest problem no-one else could solve. 🙂
(Sorry, this comment should have been here. The format is screwed up again—happens from time to time with WordPress, probably because some CSS isn’t accessible.)
Wasn’t Medawar’s advice to work on the biggest problems because they were usually easier to solve than the smaller problems?
So Sunyaev tells the story that Zeldovich advised him to forget about the galaxy cluster SZ effect “because it didn’t affect the whole Universe.”
Keith – pretty much spot on I would say… but then the problem might be communitisation. As an individual, the desire to actually solve the problem and get famous must really help to hit that sweet spot. But if you feel yourself as making a contribution to a large community progressing together towards the truth, you are maybe more likely to go for over-ambitious targets, because you think we will get there together.
Phillip, as am sure you know, with a cosmological constant, if you take the ratio of the radiation energy density to the vacuum energy density after inflation you get a number like 1 part in 10^100, which seems to imply a finely tuned “brake” on whatever cleaned out vacuum energy and turned it into radiation! If you then allow vacuum energy to vary with time and call it DE or quintessence, you may reduce the above problem, but you still have the coincidence problem: why is the DE density comparable to the matter density just at the present time? Usually there is then a desperate appeal to the anthropic principle or the multiverse to get the standard model out of these problems.
I’ll say as much as I can in a blog comment while everyone checks out my cool new gravatar (which I’ve had elsewhere for a bit longer–yes, that’s me showing Tycho where to look). First, the argument “the cosmological constant should be 120 orders of magnitude larger, based on vacuum energy from particle physics” is incorrect because there is some unknown problem with the back-of-the-envelope calculation. In other cases when theory is this far off, one questions the theory! I don’t see why that should be any different here. In other words, there is no need to correct this calculation by adding some other effect nor to question the observations: the cosmological constant has some other source. Second, with respect to your argument, I think, until convinced otherwise by observations or a very elegant theory, a) that the present cosmological constant is independent of whatever happened during the inflation phase (if, indeed, it actually happened) and b) that it is a pure cosmological constant (i.e. constant in time, equation of state -1 etc). Not mainstream? Perhaps, but I doubt I am the only one with such views. More radical than a Hubble constant of 30? You decide. (Note that Sandage actually wrote a paper arguing for a Hubble constant of 42. Somehow, I don’t think he had read The Hitchhiker’s Guide to the Galaxy.)
Your “adding up the zero-point energy” argument makes another case against lambda, in that its natural value is either huge, or zero, where it may be protected by some symmetry. But the value we find is an unnatural one, near zero but not zero, where it’s difficult to make up a symmetry or a cancellation. Even if you don’t think much of naturalness/fine-tuning arguments, isn’t it still a bit circular arguing for inflation on the basis of getting rid of fine-tuning in the flatness problem and then having to re-introduce it in spades with a small lambda?
Your gravatar – am having difficulty telling which one is Phillip and which one is Tycho – I may have to upgrade to a Retina screen!
The argument about the natural value assumes that the lambda we observe is some sort of zero-point energy. But maybe some unknown symmetry makes the zero-point energy zero, and the lambda we observe has another source. When it was introduced into GR, it had nothing at all to do with particle physics, quantum mechanics, zero-point energy etc. In other words, since they have similar effects, they are assumed to be the same, or somehow related, but maybe that is a too quick conclusion. Ditto for connecting exponential inflation in the early universe to exponential inflation in the de Sitter universe which is the asymptotic state of one with a large enough positive cosmological constant.
If a ratio is near zero, then it is unnatural. If it is near one, it is a coincidence. 😐 Which argument denotes something which needs an explanation? It can even be both at once: if the radius of the universe is extremely large, then its reciprocal is near zero. However, in such a case lambda+Omega is near 1. Which is natural and which needs an explanation? 🙂
I don’t know if I think much of fine-tuning arguments, but I think much about them. 🙂 There is an interesting discussion after Mike Turner’s Darwin lecture reported (the discussion, not the lecture, which is supposed to appear in A&G) in the April issue of The Observatory. There was a discussion about a quite interesting coincidence: The current standard model has an age of almost exactly the Hubble time. Remarkable enough, but consider that this holds only around the present time. As Geraint Lewis expresses the same idea, the average value of q up to now is 0, but at an arbitrary time this is not the case. I think this is much more interesting than most such “coincidences”. The discussion was started by Rev. Barber (does anyone here know him?) and went back and forth between him and Turner until Donald Lynden-Bell asked (I wasn’t there, but it was certainly in a loud voice without a microphone) whether he was worried about the angular sizes of the Sun and the Moon. 🙂 Turner had said before that sometimes coincidences are interesting and sometimes they are not. I would add that the problem is that we don’t know which ones are interesting.
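The age/Hubble-time coincidence Turner’s discussion turns on is easy to reproduce numerically. A minimal sketch (concordance values Ωm ≈ 0.3, ΩΛ ≈ 0.7 assumed, and a simple midpoint-rule integration of the standard flat-FLRW age integral):

```python
from math import sqrt

def age_over_hubble_time(omega_m: float, omega_l: float,
                         steps: int = 100_000) -> float:
    """H0 * t0 for a flat FLRW universe: integrate da / (a * H(a)/H0)."""
    total = 0.0
    da = 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da                    # midpoint rule
        e = sqrt(omega_m / a**3 + omega_l)    # E(a) = H(a)/H0
        total += da / (a * e)
    return total

# With today's concordance values the age is almost exactly one Hubble time...
print(f"H0*t0 = {age_over_hubble_time(0.3, 0.7):.3f}")   # ~0.96
# ...whereas for Einstein-de Sitter it is exactly 2/3
print(f"H0*t0 = {age_over_hubble_time(1.0, 0.0):.3f}")   # ~0.667
```

H0·t0 ≈ 1 holds only for something close to the present-day mix of matter and lambda, which is exactly why it reads as a coincidence: at an arbitrary epoch (or for Einstein–de Sitter) the ratio is quite different.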
Many people have pointed out that in some sense inflation as a cure is worse than the disease if one has to fine-tune inflation itself to avoid other fine-tuning problems. It would be nice to have some proof of inflation—not just “it explains something we can’t otherwise explain”. I don’t think the flatness problem is really a problem (see my latest paper in MNRAS), but I don’t think I’ve convinced anyone yet. (On the other hand, no-one has pointed out a mistake in the reasoning, which usually happens quite quickly in the case of extraordinary claims.) However, if the universe is not homogeneous for some other reason, then the observed isotropy is a puzzle (Barrow showed that it is not if the universe is homogeneous to start with).
There is actually a third person in the gravatar as well!
Depending on your browser, putting the mouse pointer on the gravatar might show a larger version, perhaps with a link to an even larger version. Alas, after two chemotherapy treatments (2004 and 2008; I hope I am cured completely—by chance I have a checkup Friday), the hair isn’t what it used to be, but my beard is still approaching the Brahe beard, which of course is my ultimate goal. I certainly hope I live longer than Tycho! At a conference in Prague at the end of June (I will post the conference photo in due course), I had a chance to visit Tycho’s grave. It’s quite prominent. It seems, however, that no-one knows which office Einstein had when he was a professor in Prague.
Phillip, for me the fine-tuning of the standard model isn’t a bad thing either, because it gives some room for what might otherwise look like quite contrived escape routes from the CMB! I will have a look at your paper in MNRAS. Have now seen a larger version of your gravatar – looks great – Tycho is one of my heroes as well. Hope everything went well yesterday!
I’ll have the results from the checkup next Monday. It’s been almost 4 years since my last chemotherapy, but then again four years after the first one a second one was necessary. Until then, I’m off to Oxfordshire for three days of peace and music.
For those who care, results are fine. I go back twice more, in February and next August, then, after what will then be 5 years, am officially cured. Of course, this is an arbitrary period of time, but in many cases it does mean that the risk of cancer recurring is about the same as it was before the first case, which of course might vary from person to person; in other words, if it does come back, it is probably a new case, and not the remains of the last one which weren’t completely eradicated.
Many forms of cancer (including mine) were almost always fatal just a few years ago; there has been enormous progress in treatment. So, get regular checkups, and if something shows up, don’t waste time on “alternative” woo like Steve Jobs did, get early treatment and cancer can in many cases be cured like many other diseases.
Phillip – good to hear results are positive, and I am right with you on the stay-off-the-nutty-stuff thing.
One humble question:
How does one choose between the ‘expansion of space’ and its dual, ‘matter shrinking’, which is what is observed when comoving coordinates are chosen?
Another humble question:
Do you realize that in all models the atom is ‘absolute’ in size, mass… As the fields, with their energy, spread away from the source particles at speed ‘c’, the absolute viewpoint is impossible unless you accept that the total energy grows in time.
It must be true that the Universe would survive if atoms were double-sized – or is there a way to show that the actual size of atoms is special?
In this paper a new model is formally presented:
A self-similar model of the Universe unveils the nature of dark energy @ http://vixra.org/pdf/1107.0016v1.pdf
When I said “Science began when some folks decided to ask small questions: How does a ball roll down an inclined plane?” I was of course thinking of Galileo. I didn’t know that he actually said:
“I deem it of more value to find out some truth about however light a matter than to engage in long disputes about the greatest questions without achieving any truth.”* Right on!
This then was a deliberate turning away from Too Big questions.
* quoted in the Physics Today obituary for Massimilla Baldo-Ceolin:
“A special commemorative medallion showing Galileo presenting a telescope to the Doge was produced for the occasion. Its reverse carries an inscription in Italian of one of Milla’s favorite quotes by Galileo; its translation is “I deem it of more value to find out some truth about however light a matter than to engage in long disputes about the greatest questions without achieving any truth.” The motto of the Padua Institute of Physics, it was also used by Milla in her 2002 paper in the Annual Review of Nuclear and Particle Science (volume 52, page 1) on the nuclear emulsion era of the 1950s; each volume of the Annual Reviews starts with a historical review by a distinguished scientist.”
But didn’t Galileo take on the Pope and the Inquisition on the question of the Copernican model of the Earth’s motion – “Eppur si muove!” – and all that? Such questions can hardly be called “light” or “humble”. Maybe Galileo’s emphasis is more on the potential “to find some truth” rather than on the size of question.