Dark Energy FAQ

04 Oct

In honor of the Nobel Prize, here are some questions that are frequently asked about dark energy, or should be.

What is dark energy?

It’s what makes the universe accelerate, if indeed there is a “thing” that does that. (See below.)

So I guess I should be asking… what does it mean to say the universe is “accelerating”?

First, the universe is expanding: as shown by Hubble, distant galaxies are moving away from us with velocities that are roughly proportional to their distance. “Acceleration” means that if you measure the velocity of one such galaxy, and come back a billion years later and measure it again, the recession velocity will be larger. Galaxies are moving away from us at an accelerating rate.

But that’s so down-to-Earth and concrete. Isn’t there a more abstract and scientific-sounding way of putting it?

The relative distance between far-flung galaxies can be summed up in a single quantity called the “scale factor,” often written a(t) or R(t). The scale factor is basically the “size” of the universe, although it’s not really the size because the universe might be infinitely big — more accurately, it’s the relative size of space from moment to moment. The expansion of the universe is the fact that the scale factor is increasing with time. The acceleration of the universe is the fact that it’s increasing at an increasing rate — the second derivative is positive, in calculus-speak.

Does that mean the Hubble constant, which measures the expansion rate, is increasing?

No. The Hubble “constant” (or Hubble “parameter,” if you want to acknowledge that it changes with time) characterizes the expansion rate, but it’s not simply the derivative of the scale factor: it’s the derivative divided by the scale factor itself. Why? Because then it’s a physically measurable quantity, not something we can change by switching conventions. The Hubble constant is basically the answer to the question “how quickly does the scale factor of the universe expand by some multiplicative factor?”

If the universe is decelerating, the Hubble constant is decreasing. If the Hubble constant is increasing, the universe is accelerating. But there’s an intermediate regime in which the universe is accelerating but the Hubble constant is decreasing — and that’s exactly where we think we are. The velocity of individual galaxies is increasing, but it takes longer and longer for the universe to double in size.

Said yet another way: Hubble’s Law relates the velocity v of a galaxy to its distance d via v = H d. The velocity can increase even if the Hubble parameter is decreasing, as long as it’s decreasing more slowly than the distance is increasing.
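The three regimes above can be checked with a quick back-of-the-envelope calculation. Here is a minimal Python sketch, assuming a flat universe with illustrative density fractions of 0.27 matter and 0.73 vacuum energy, in units where today's Hubble constant equals 1:

```python
# Sketch (illustrative values, not a measurement): for a flat universe with
# matter fraction Om and vacuum fraction OL, the Friedmann equations give,
# in units where H0 = 1:
#   H(a)^2      = Om * a**-3 + OL          (expansion rate squared)
#   (addot / a) = OL - 0.5 * Om * a**-3    (acceleration of the scale factor)
#   Hdot        = (addot / a) - H^2        (rate of change of H)

Om, OL = 0.27, 0.73
a = 1.0  # scale factor today

H2 = Om * a**-3 + OL           # = 1.0 in these units
accel = OL - 0.5 * Om * a**-3  # sign of the second derivative of a
Hdot = accel - H2              # sign of the change in the Hubble parameter

print(accel > 0)  # the universe is accelerating
print(Hdot < 0)   # yet the Hubble parameter is decreasing
```

Both conditions hold at once: exactly the intermediate regime described above.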

Did the astronomers really wait a billion years and measure the velocity of galaxies again?

No. You measure the velocity of galaxies that are very far away. Because light travels at a fixed speed (one light year per year), you are looking into the past. Reconstructing the history of how the velocities were different in the past reveals that the universe is accelerating.

How do you measure the distance to galaxies so far away?

It’s not easy. The most robust method is to use a “standard candle” — some object that is bright enough to see from great distance, and whose intrinsic brightness is known ahead of time. Then you can figure out the distance simply by measuring how bright it actually looks: dimmer = further away.
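The reasoning is just the inverse-square law. A short Python sketch (the numbers are illustrative, not real astronomical data):

```python
import math

# A standard candle of known luminosity L appears with flux F at distance d,
# via F = L / (4 * pi * d**2), so d = sqrt(L / (4 * pi * F)).
# Dimmer observed flux means a larger distance.

def distance(luminosity, flux):
    return math.sqrt(luminosity / (4 * math.pi * flux))

# Same candle, one appearing 4x dimmer: it is 2x farther away.
d_near = distance(1.0, 1.0 / (4 * math.pi))   # flux chosen so d_near = 1
d_far = distance(1.0, 0.25 / (4 * math.pi))   # one quarter the flux
print(d_far / d_near)  # 2.0
```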

Sadly, there are no standard candles.

Then what did they do?

Fortunately we have the next best thing: standardizable candles. A specific type of supernova, Type Ia, is very bright and approximately, but not quite, the same brightness from one event to the next. Happily, in the 1990s Mark Phillips discovered a remarkable relationship between intrinsic brightness and the length of time it takes for a supernova to decline after reaching peak brightness. Therefore, if we measure the brightness as it declines over time, we can correct for this difference, constructing a universal measure of brightness that can be used to determine distances.
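A hedged sketch of how such a correction works. The coefficients below are invented for illustration; the real Phillips-relation calibration comes from fitting the light curves of nearby supernovae:

```python
# Illustrative only: correct a supernova's peak brightness using how fast it
# declines after peak. The numbers A and B are hypothetical placeholders;
# real values come from empirically calibrating the Phillips relation.
A, B = -19.0, 0.8  # assumed: baseline absolute magnitude and slope

def standardized_magnitude(decline_15_days):
    """Corrected absolute magnitude, given the decline (in magnitudes)
    over the 15 days following peak brightness."""
    return A + B * (decline_15_days - 1.1)

# Faster-declining supernovae are intrinsically dimmer (larger magnitude):
print(standardized_magnitude(1.1))  # baseline
print(standardized_magnitude(1.7))  # declines faster, so dimmer
```

Once every supernova is corrected onto this common scale, its apparent brightness translates directly into a distance.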

Why are Type Ia supernovae standardizable candles?

We’re not completely sure — mostly it’s an empirical relationship. But we have a good idea: we think that SNIa are white dwarf stars that have been accreting matter from outside until they hit the Chandrasekhar Limit and explode. Since that limit is basically the same number everywhere in the universe, it’s not completely surprising that the supernovae have similar brightnesses. The deviations are presumably due to differences in composition.

But how do you know when a supernova is going to happen?

You don’t. They are rare, maybe once per century in a typical galaxy. So what you do is look at many, many galaxies with wide-field cameras. In particular you compare an image of the sky taken at one moment to another taken a few weeks later — “a few weeks” being roughly the time between new Moons (when the sky is darkest), and coincidentally about the time it takes a supernova to flare up in brightness. Then you use computers to compare the images and look for new bright spots. Then you go back and examine those bright spots closely to try to check whether they are indeed Type Ia supernovae. Obviously this is very hard and wouldn’t even be conceivable if it weren’t for a number of relatively recent technological advances — CCD cameras as well as giant telescopes. These days we can go out and be confident that we’ll harvest supernovae by the dozens — but when Perlmutter and his group started out, that was very far from obvious.

And what did they find when they did this?

Most (almost all) astronomers expected them to find that the universe was decelerating — galaxies pull on each other with their gravitational fields, which should slow the whole thing down. (Actually many astronomers just thought they would fail completely, but that’s another story.) But what they actually found was that the distant supernovae were dimmer than expected — a sign that they are farther away than we predicted, which means the universe has been accelerating.

Why did cosmologists accept this result so quickly?

Even before the 1998 announcements, it was clear that something funny was going on with the universe. There seemed to be evidence that the age of the universe was younger than the age of its oldest stars. There wasn’t as much total matter as theorists predicted. And there was less structure on large scales than people expected. The discovery of dark energy solved all of these problems at once. It made everything snap into place. So people were still rightfully cautious, but once this one startling observation was made, the universe suddenly made a lot more sense.

How do we know the supernovae aren't dimmer because something is obscuring them, or just because things were different in the far past?

That’s the right question to ask, and one reason the two supernova teams worked so hard on their analysis. You can never be 100% sure, but you can gain more and more confidence. For example, astronomers have long known that obscuring material tends to scatter blue light more easily than red, leading to “reddening” of stars that sit behind clouds of gas and dust. You can look for reddening, and in the case of these supernovae it doesn’t appear to be important. More crucially, by now we have a lot of independent lines of evidence that reach the same conclusion, so it looks like the original supernova results were solid.

There’s really independent evidence for dark energy?

Oh yes. One simple argument is “subtraction”: the cosmic microwave background measures the total amount of energy (including matter) in the universe. Local measures of galaxies and clusters measure the total amount of matter. The latter turns out to be about 27% of the former, leaving 73% or so in the form of some invisible stuff that is not matter: “dark energy.” That’s the right amount to explain the acceleration of the universe. Other lines of evidence come from baryon acoustic oscillations (ripples in large-scale structure whose size helps measure the expansion history of the universe) and the evolution of structure as the universe expands.

Okay, so: what is dark energy?

Glad you asked! Dark energy has three crucial properties. First, it’s dark: we don’t see it, and as far as we can observe it doesn’t interact with matter at all. (Maybe it does, but beneath our ability to currently detect.) Second, it’s smoothly distributed: it doesn’t fall into galaxies and clusters, or we would have found it by studying the dynamics of those objects. Third, it’s persistent: the density of dark energy (amount of energy per cubic light-year) remains approximately constant as the universe expands. It doesn’t dilute away like matter does.

These last two properties (smooth and persistent) are why we call it “energy” rather than “matter.” Dark energy doesn’t seem to act like particles, which have local dynamics and dilute away as the universe expands. Dark energy is something else.

That’s a nice general story. What might dark energy specifically be?

The leading candidate is the simplest one: “vacuum energy,” or the “cosmological constant.” Since we know that dark energy is pretty smooth and fairly persistent, the first guess is that it’s perfectly smooth and exactly persistent. That’s vacuum energy: a fixed amount of energy attached to every tiny region of space, unchanging from place to place or time to time. About one hundred-millionth of an erg per cubic centimeter, if you want to know the numbers.

Is vacuum energy really the same as the cosmological constant?

Yes. Don’t believe claims to the contrary. When Einstein first invented the idea, he didn’t think of it as “energy,” he thought of it as a modification of the way spacetime curvature interacted with energy. But it turns out to be precisely the same thing. (If someone doesn’t want to believe this, ask them how they would observationally distinguish the two.)

Doesn’t vacuum energy come from quantum fluctuations?

Not exactly. There are many different things that can contribute to the energy of empty space, and some of them are completely classical (nothing to do with quantum fluctuations). But in addition to whatever classical contribution the vacuum energy has, there are also quantum fluctuations on top of that. These fluctuations are very large, and that leads to the cosmological constant problem.

What is the cosmological constant problem?

If all we knew was classical mechanics, the cosmological constant would just be a number — there’s no reason for it to be big or small, positive or negative. We would just measure it and be done.

But the world isn’t classical, it’s quantum. In quantum field theory we expect that classical quantities receive “quantum corrections.” In the case of the vacuum energy, these corrections come in the form of the energy of virtual particles fluctuating in the vacuum of empty space.

We can add up the amount of energy we expect in these vacuum fluctuations, and the answer is: an infinite amount. That’s obviously wrong, but we suspect that we’re overcounting. In particular, that rough calculation includes fluctuations at all sizes, including wavelengths smaller than the Planck distance at which spacetime probably loses its conceptual validity. If instead we only include wavelengths that are at the Planck length or longer, we get a specific estimate for the value of the cosmological constant.

The answer is: 10^120 times what we actually observe. That discrepancy is the cosmological constant problem.
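The famous factor of 10^120 comes from a rough dimensional estimate along these lines (a sketch; the exact exponent depends on conventions):

```latex
\rho_{\rm theory} \sim M_{\rm Pl}^4 \sim \left(10^{18}\,\mathrm{GeV}\right)^4,
\qquad
\rho_{\rm obs} \sim \left(10^{-3}\,\mathrm{eV}\right)^4
              = \left(10^{-12}\,\mathrm{GeV}\right)^4,
\qquad
\frac{\rho_{\rm theory}}{\rho_{\rm obs}}
  \sim \left(\frac{10^{18}}{10^{-12}}\right)^4
  = \left(10^{30}\right)^4
  = 10^{120}.
```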

Why is the cosmological constant so small?

Nobody knows. Before the supernovae came along, many physicists assumed there was some secret symmetry or dynamical mechanism that set the cosmological constant to precisely zero, since we certainly knew it was much smaller than our estimates would indicate. Now we are faced with both explaining why it’s small, and why it’s not quite zero. And for good measure: the coincidence problem, which is why the dark energy density is the same order of magnitude as the matter density.

Here’s how bad things are: right now, the best theoretical explanation for the value of the cosmological constant is the anthropic principle. If we live in a multiverse, where different regions have very different values of the vacuum energy, one can plausibly argue that life can only exist (to make observations and win Nobel Prizes) in regions where the vacuum energy is much smaller than the estimate. If it were larger and positive, galaxies (and even atoms) would be ripped apart; if it were larger and negative, the universe would quickly recollapse. Indeed, we can roughly estimate what typical observers should measure in such a situation; the answer is pretty close to the observed value. Steven Weinberg actually made this prediction in 1988, long before the acceleration of the universe was discovered. He didn’t push it too hard, though; more like “if this is how things work out, this is what we should expect to see…” There are many problems with this calculation, especially when you start talking about “typical observers,” even if you’re willing to believe there might be a multiverse. (I’m very happy to contemplate the multiverse, but much more skeptical that we can currently make a reasonable prediction for observable quantities within that framework.)

What we would really like is a simple formula that predicts the cosmological constant once and for all as a function of other measured constants of nature. We don’t have that yet, but we’re trying. Proposed scenarios make use of quantum gravity, extra dimensions, wormholes, supersymmetry, nonlocality, and other interesting but speculative ideas. Nothing has really caught on as yet.

Has the course of progress in string theory ever been affected by an experimental result?

Yes: the acceleration of the universe. Previously, string theorists (like everyone else) assumed that the right thing to do was to explain a universe with zero vacuum energy. Once there was a real chance that the vacuum energy is not zero, they asked whether that was easy to accommodate within string theory. The answer is: it’s not that hard. The problem is that if you can find one solution, you can find an absurdly large number of solutions. That’s the string theory landscape, which seems to kill the hopes for one unique solution that would explain the real world. That would have been nice, but science has to take what nature has to offer.

What’s the coincidence problem?

Matter dilutes away as the universe expands, while the dark energy density remains more or less constant. Therefore, the relative density of dark energy and matter changes considerably over time. In the past, there was a lot more matter (and radiation); in the future, dark energy will completely dominate. But today, they are approximately equal, by cosmological standards. (When two numbers could differ by a factor of 10^100 or much more, a factor of three or so counts as “equal.”) Why are we so lucky to be born at a time when dark energy is large enough to be discoverable, but small enough that it’s a Nobel-worthy effort to do so? Either this is just a coincidence (which might be true), or there is something special about the epoch in which we live. That’s one of the reasons people are willing to take anthropic arguments seriously. We’re talking about a preposterous universe here.
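To see how special the present epoch is, here's a quick sketch in Python, assuming illustrative present-day fractions of 0.27 matter and 0.73 dark energy:

```python
# Ratio of matter density to dark-energy density as the universe expands.
# Matter dilutes as a**-3; dark energy (as vacuum energy) stays constant.
Om0, OL0 = 0.27, 0.73  # assumed present-day fractions

for a in (1e-3, 1.0, 1e3):  # deep past, today, far future (scale factor)
    ratio = (Om0 * a**-3) / OL0
    print(f"a = {a}: matter / dark energy = {ratio:.2e}")
```

The ratio sweeps over twenty orders of magnitude in this range alone; only near a = 1 (i.e., now) are the two densities within a factor of a few of each other.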

If the dark energy has a constant density, but space expands, doesn’t that mean energy isn’t conserved?

Yes. That’s fine.

What’s the difference between “dark energy” and “vacuum energy”?

“Dark energy” is the general phenomenon of smooth, persistent stuff that makes the universe accelerate; “vacuum energy” is a specific candidate for dark energy, namely one that is absolutely smooth and utterly constant.

So there are other candidates for dark energy?

Yes. All you need is something that is pretty darn smooth and persistent. It turns out that most things like to dilute away, so finding persistent energy sources isn’t that easy. The simplest and best idea is quintessence, which is just a scalar field that fills the universe and changes very slowly as time passes.

Is the quintessence idea very natural?

Not really. An original hope was that, by considering something dynamical and changing rather than a plain fixed constant energy, you could come up with some clever explanation for why the dark energy was so small, and maybe even explain the coincidence problem. Neither of those hopes has really panned out.

Instead, you’ve added new problems. According to quantum field theory, scalar fields like to be heavy; but to be quintessence, a scalar field would have to be enormously light, maybe 10^-30 times the mass of the lightest neutrino. (But not zero!) That’s one new problem you’ve introduced, and another is that a light scalar field should interact with ordinary matter. Even if that interaction is pretty feeble, it should still be large enough to detect — and it hasn’t been detected. Of course, that’s an opportunity as well as a problem — maybe better experiments will actually find a “quintessence force,” and we’ll understand dark energy once and for all.

How else can we test the quintessence idea?

The most direct way is to do the supernova thing again, but do it better. More generally: map the expansion of the universe so precisely that we can tell whether the density of dark energy is changing with time. This is generally cast as an attempt to measure the dark energy equation-of-state parameter w. If w is exactly minus one, the dark energy is exactly constant — vacuum energy. If w is slightly greater than -1, the energy density is gradually declining; if it’s slightly less (e.g. -1.1), the dark energy density is actually growing with time. That’s dangerous for all sorts of theoretical reasons, but we should keep our eyes peeled.

What is w?

It’s called the “equation-of-state parameter” because it relates the pressure p of dark energy to its energy density ρ, via w = p/ρ. Of course nobody measures the pressure of dark energy, so it’s a slightly silly definition, but it’s an accident of history. What really matters is how the dark energy evolves with time, but in general relativity that’s directly related to the equation-of-state parameter.
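In general relativity, a constant w implies the density scales as ρ ∝ a^(-3(1+w)). A minimal Python sketch of that scaling:

```python
def rho(a, w, rho0=1.0):
    """Energy density at scale factor a for a constant equation of state w:
    rho(a) = rho0 * a**(-3 * (1 + w))."""
    return rho0 * a ** (-3 * (1 + w))

a = 2.0  # the universe has doubled in size
print(rho(a, w=0))     # matter: dilutes as a**-3
print(rho(a, w=-1))    # vacuum energy: exactly constant
print(rho(a, w=-0.9))  # w slightly greater than -1: gradually declining
print(rho(a, w=-1.1))  # w slightly less than -1: growing with time
```

This is exactly why measuring w pins down what the dark energy is doing: w = -1 means a true cosmological constant, and any deviation means the density is evolving.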

Does that mean that dark energy has negative pressure?

Yes indeed. Negative pressure is what happens when a substance pulls rather than pushes — like an over-extended spring that pulls on either end. It’s often called “tension.” This is why I advocated smooth tension as a better name than “dark energy,” but I came in too late.

Why does dark energy make the universe accelerate?

Because it’s persistent. Einstein says that energy causes spacetime to curve. In the case of the universe, that curvature comes in two forms: the curvature of space itself (as opposed to spacetime), and the expansion of the universe. We’ve measured the curvature of space, and it’s essentially zero. So the persistent energy leads to a persistent expansion rate. In particular, the Hubble parameter is close to constant, and if you remember Hubble’s Law from way up top (v = H d) you’ll realize that if H is approximately constant, v will be increasing because the distance is increasing. Thus: acceleration.
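The logic of that last step can be checked numerically: if H is held constant, distances grow exponentially, and so do recession velocities. A toy illustration in units where H = 1:

```python
import math

# With constant H, the scale factor is a(t) = exp(H * t); a galaxy at fixed
# comoving position x has physical distance d(t) = x * exp(H * t), so its
# recession velocity v = H * d grows exponentially with time.
H, x = 1.0, 1.0
velocities = []
for t in (0.0, 1.0, 2.0):
    d = x * math.exp(H * t)
    velocities.append(H * d)

print(velocities)  # strictly increasing: acceleration
```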

If negative pressure is like tension, why doesn’t it pull things together rather than pushing them apart?

Sometimes you will hear something along the lines of “dark energy makes the universe accelerate because it has negative pressure.” This is strictly speaking true, but a bit ass-backwards; it gives the illusion of understanding rather than actual understanding. You are told “the force of gravity depends on the density plus three times the pressure, so if the pressure is equal and opposite to the density, gravity is repulsive.” Seems sensible, except that nobody will explain to you why gravity depends on the density plus three times the pressure. And it’s not really the “force of gravity” that depends on that; it’s the local expansion of space.

The “why doesn’t tension pull things together?” question is a perfectly valid one. The answer is: because dark energy doesn’t actually push or pull on anything. It doesn’t interact directly with ordinary matter, for one thing; for another, it’s equally distributed through space, so any pulling it did from one direction would be exactly balanced by an opposite pull from the other. It’s the indirect effect of dark energy, through gravity rather than through direct interaction, that makes the universe accelerate.

The real reason dark energy causes the universe to accelerate is because it’s persistent.

Is dark energy like antigravity?

No. Dark energy is not “antigravity,” it’s just gravity. Imagine a world with zero dark energy, except for two blobs full of dark energy. Those two blobs will not repel each other, they will attract. But inside those blobs, the dark energy will push space to expand. That’s just the miracle of non-Euclidean geometry.

Is it a new repulsive force?

No. It’s just a new (or at least different) kind of source for an old force — gravity. No new forces of nature are involved.

What’s the difference between dark energy and dark matter?

Completely different. Dark matter is some kind of particle, just one we haven’t discovered yet. We know it’s there because we’ve observed its gravitational influence in a variety of settings (galaxies, clusters, large-scale structure, microwave background radiation). It’s about 23% of the universe. But it’s basically good old-fashioned “matter,” just matter that we can’t directly detect (yet). It clusters under the influence of gravity, and dilutes away as the universe expands. Dark energy, meanwhile, doesn’t cluster, nor does it dilute away. It’s not made of particles, it’s some different kind of thing entirely.

Is it possible that there is no dark energy, just a modification of gravity on cosmological scales?

It’s possible, sure. There are at least two popular approaches to this idea: f(R) gravity, which Mark and I helped develop, and DGP gravity, by Dvali, Gabadadze, and Porrati. The former is a directly phenomenological approach where you simply change the Einstein field equation by messing with the action in four dimensions, while the latter uses extra dimensions that only become visible at large distances. Both models face problems — not necessarily insurmountable, but serious — with new degrees of freedom and attendant instabilities.

Modified gravity is certainly worth taking seriously (but I would say that). Still, like quintessence, it raises more problems than it solves, at least at the moment. My personal likelihoods: cosmological constant = 0.9, dynamical dark energy = 0.09, modified gravity = 0.01. Feel free to disagree.

What does dark energy imply about the future of the universe?

That depends on what the dark energy is. If it’s a true cosmological constant that lasts forever, the universe will continue to expand, cool off, and empty out. Eventually there will be nothing left but essentially empty space.

The cosmological constant could be constant at the moment, but temporary; that is, there could be a future phase transition in which the vacuum energy decreases. Then the universe could conceivably recollapse.

If the dark energy is dynamical, any possibility is still open. If it’s dynamical and increasing (w less than -1 and staying that way), we could even get a Big Rip.

What’s next?

We would love to understand dark energy (or modified gravity) through better cosmological observations. That means measuring the equation-of-state parameter, as well as improving observations of gravity in galaxies and clusters to compare with different models. Fortunately, while the U.S. is gradually retreating from ambitious new science projects, the European Space Agency is moving forward with a satellite to measure dark energy. There are a number of ongoing ground-based efforts, of course, and the Large Synoptic Survey Telescope should do a great job once it goes online.

But the answer might be boring — the dark energy is just a simple cosmological constant. That’s just one number; what are you going to do about it? In that case we need better theories, obviously, but also input from less direct empirical sources — particle accelerators, fifth-force searches, tests of gravity, anything that would give some insight into how spacetime and quantum field theory fit together at a basic level.

The great thing about science is that the answers aren’t in the back of the book; we have to solve the problems ourselves. This is a big one.


Trusting Experts

13 Sep

Over on the Google+, Robin Hanson asks a leading question:

Explain why people shouldn’t try to form their own physics opinions, but instead accept the judgements of expert physicists, but they should try to form their own opinions on economic policy, and not just accept expert opinion there.

(I suspect the thing he wants me to explain is not something he thinks is actually true.)

There are two aspects to this question, the hard part and the much-harder part. The hard part is the literal reading, comparing the levels of trust accorded to economists (and presumably also political scientists or sociologists) to the level accorded to physicists (and presumably also chemists or biologists). Why do we — or should we — accept the judgements of natural scientists more readily than those of social scientists?

Although that’s not an easy question, the basic point is not difficult to figure out: in the public imagination, natural scientists have figured out a lot more reliable and non-obvious things about the world, compared to what non-experts would guess, than social scientists have. The insights of quantum mechanics and relativity are not things that most of us can even think sensibly about without quite a bit of background study. Social scientists, meanwhile, talk about things most people are relatively familiar with. The ratio of “things that have been discovered by this discipline” to “things I could have figured out for myself” just seems much larger in natural science than in social science.

Then we stir in the matter of consensus. On the very basics of their fields (the Big Bang model, electromagnetism, natural selection), almost all natural scientists are in agreement. Social scientists seem to have trouble agreeing on the very foundations of their fields. If we cut taxes, will revenue go up or down? Does the death penalty deter crime or not? For many people, a lack of consensus gives them license to trust their own judgment as much as that of the experts. To put it another way: if we talked more about the bedrock principles of the field on which all experts agreed, and less about the contentious applications of detailed models to the real world, the public would likely be more ready to accept experts’ opinions.

None of which is to say that social scientists are less capable or knowledgeable about their fields than natural scientists. Their fields are much harder, where “hard” characterizes the difficulty of coming up with models that accurately capture important features of reality. Physics is the easiest subject of all, which is why we know enormously more about it than any other science. The social sciences deal with fantastically more complicated subjects, about which it’s very naturally more difficult to make definitive statements, especially statements that represent counterintuitive discoveries. The esoteric knowledge that social scientists undoubtedly possess, therefore, doesn’t translate directly into actionable understanding of the world, in the same way that physicists are able to help get a spacecraft to the moon.

There is a final point that is much trickier: political inclinations and other non-epistemic factors color our social-scientific judgments, for experts as well as for novices. On a liberal/conservative axis, most sociologists are to the left of most economists. (Training as an economist allegedly makes people more selfish, but there are complicated questions of causation there.) Or more basically, social scientists will often approach real-world problems from the point of view of their specific discipline, in contrast with a broader view that the non-expert might find more relevant. (Let’s say the death penalty does deter crime; is it still permissible on moral grounds?) Natural scientists are blissfully free from this source of bias, at least most of the time. Evolution would be the obvious counterexample.

The more difficult question is much more interesting: when should, in completely general terms, a non-expert simply place trust in the judgment of an expert? I don’t have a very good answer to that one.

I am a strong believer that good reasons, arguments, and evidence are what matter, not credentials. So the short answer to “when should we trust an expert simply because they are an expert?” is “never.” We should always ask for reasons before we place trust. Hannes Alfvén was a respected Nobel-prizewinning physicist; but his ideas about cosmology were completely loopy, and there was no reason for anyone to trust them. An interested outsider might verify that essentially no working cosmologists bought into his model.

But a “good reason” might reasonably take the form “look, this is very complicated and would take pages of math to make explicit, but you see that I’ve been doing this for a long time and have the respect of my peer group, which has a long track record of being right about these issues, so I’m asking you to go along this time.” In the real world we don’t have anything like the time and resources to become experts in every interesting field, so some degree of trust is simply necessary. When deciding where to place that trust, we rely on a number of factors, mostly involving the track record of the group to which the purported expert belongs, if not the individual experts themselves.

So my advice to economists who want more respect from the outside world would be: make it much more clear to the non-expert public that you have a reliable, agreed-upon set of non-obvious discoveries that your field has made about the world. People have tried to lay out such discoveries, of course — but upon closer inspection they don’t quite measure up to Newton’s Laws in terms of reliability and usefulness.

Social scientists are just as smart and knowledgeable as natural scientists, and certainly have a tougher job. But trust among non-experts isn’t demanded, and shouldn’t be based on credentials; it is given on the basis of a long track record of very visible success. Everyone would be in favor of that.


Moral Realism

16 Mar

Richard Carrier (author of Sense and Goodness Without God) has a longish blog post up about moral ontology, well worth reading if you’re into that sort of thing. (Via Russell Blackford.) Carrier is a secular materialist, but a moral realist: he thinks there are such things as “moral facts” that are “true independent of your opinion or culture.”

Carrier goes to great lengths to explain that these moral facts are not simply “out there” in the same sense that the laws of physics arguably are, but rather that they express relationships between the desires of particular humans and external reality. (The useful analogy is: “bears are scary” is a true fact if you are talking about you or me, but not if you are talking about Superman.)

I don’t buy it. Not to be tiresome, but I have to keep insisting that you can’t squeeze blood from a turnip. You can’t use logic to derive moral commandments solely from facts about the world, even if those facts include human desires. Of course, you can derive moral commandments if you sneak in some moral premise; all I’m trying to say here is that we should be upfront about what those moral premises are, and not try to hide them underneath a pile of unobjectionable-sounding statements.

As a warm-up, here is an example of logic in action:

  • All men are mortal.
  • Socrates is a man.
  • Therefore, Socrates is mortal.

The first two statements are the premises, the last one is the conclusion. (Obviously there are logical forms other than syllogisms, but this is a good paradigmatic example.) Notice the crucial feature: all of the important terms in the conclusion (“Socrates,” “mortal”) actually appeared somewhere in the premises. That’s why you can’t derive “ought” from “is” — you can’t reach a conclusion containing the word “ought” if that word (or something equivalent) doesn’t appear in your premises.
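The point about terms in the conclusion appearing in the premises can be made mechanically precise. Here is the syllogism as a tiny proof sketch in Lean (the names `Person`, `Man`, `Mortal`, `socrates` are illustrative): the proof of `Mortal socrates` only goes through because `Mortal` and `socrates` both already appear among the premises.

```lean
-- Illustrative vocabulary for the syllogism:
axiom Person : Type
axiom socrates : Person
axiom Man : Person → Prop
axiom Mortal : Person → Prop

-- Premise 1: all men are mortal.  Premise 2: Socrates is a man.
-- The conclusion uses only terms supplied by the premises.
example (h₁ : ∀ x : Person, Man x → Mortal x) (h₂ : Man socrates) :
    Mortal socrates :=
  h₁ socrates h₂
```

Delete either premise and no amount of cleverness produces the conclusion; that is exactly the situation with "ought" and purely factual premises.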

This doesn’t stop people from trying. Carrier uses the following example (slightly, but not unfairly, paraphrased):

  • Your car is running low on oil.
  • If your car runs out of oil, the engine will seize up.
  • You don’t want your car’s engine to seize up.
  • Therefore, you ought to change the oil in your car.

At the level of everyday practical reasoning, there’s nothing wrong with this. But if we’re trying to set up a careful foundation for moral philosophy, we should be honest and admit that the logic here is obviously incomplete. There is a missing premise, which should be spelled out explicitly:

  • We ought to do that which would bring about what we want.

Crucially, this is a different kind of premise than the other three in this argument; they are facts about the world that could in principle be tested experimentally, while this new one is not.

Someone might suggest that this isn't a premise at all, it's simply the definition of "ought." The problem there is that it isn't true. You can't claim that Wilt Chamberlain was the greatest basketball player of all time, and then defend your claim by defining "greatest basketball player of all time" to be Wilt Chamberlain. When it comes to changing your oil, you might get away with defining "ought" in this way, but when it comes to more contentious issues of moral obligation, you're going to have to do better.

Alternatively, you’re free to say that this premise is just so obviously true that no reasonable person could possibly disagree. Perhaps so, and that’s an argument we could have. But it’s still a premise. And again, when we get to issues more contentious than keeping your engine going, it will be necessary to make those premises explicit if we want to have a productive conversation. Once our premises start distinguishing between the well-being of individuals and the well-being of groups, you will inevitably find that they begin to seem a bit less self-evident.

Observe the world all you like; you won’t get morality off the ground until you settle on some independent moral assumptions. (And don’t tell me that “science makes assumptions, too” — that’s obviously correct, but the point here is that morality requires assumptions in addition to the assumptions we need to get science off the ground.) We can have a productive conversation about what those assumptions should be once we all admit that they exist.


LIGO to Collaboration Members: There Is No Santa Claus

15 Mar

Ah, the life of an experimental physicist. Long hours of mind-bending labor, all in service of those few precious moments in which you glimpse one of Nature’s true secrets for the very first time. Followed by the moment when your bosses tell you it was all just a trick.

Not that you didn’t see it coming. As we know, the LIGO experiment and its friend the Virgo experiment are hot on the trail of gravitational waves. They haven’t found any yet, but given the current sensitivity, that’s not too much of a surprise. Advanced LIGO is moving forward, and when that is up and running the situation is expected to change.

But who knows? We could be surprised. It’s certainly necessary to comb through the data looking for signals, even if they’re not expected at this level of sensitivity.

Of course, there is something of a bias at work: scientists are human beings, and they want to find a signal, no matter how sincerely they may rhapsodize about the satisfaction of a solid null result. (Do the words "life on a meteorite" mean anything to you?) So, to keep themselves honest and make sure the data-analysis pipeline is working correctly, the LIGO collaboration does something sneaky: they inject false signals into the data. This is done by a select committee of higher-ups; the people actually analyzing the data don't know whether a purported signal they identify is real, or fake. It's their job to analyze things carefully and carry the whole process through, right up to the point where a paper about the results has been written. Only then is the truth revealed.

Yesterday kicked off the LIGO-Virgo collaboration meeting here in sunny Southern California. I had been hearing rumors that LIGO had found something, although everyone knew perfectly well that it might be fake — that doesn’t prevent the excitement from building up. Papers were ready to be submitted, and the supposed event even had a colorful name — “Big Dog.” (The source was located in Canis Major, if you must know.)

Steinn Sigurðsson broke the news, and there’s a great detailed post by Amber Stuver, a member of the collaboration. And the answer is: it was fake. Just a drill, folks, nothing to see here. That’s science for you.

When the real thing comes along, they’ll be ready. Can’t wait.


Science for the masses

09 Dec

Observational science is hard. And it seems to be getting harder. Nowadays, when you want to analyze the latest and greatest data set, it could consist of finding a minute-long evolving oscillatory gravitational-wave signal buried in months and mountains of noise. Or it could consist of picking out that one Higgs event among 600 million events. Per second. Or it could consist of looking for tiny correlations in the images of tens of millions of galaxies.

The interesting effects are subtle, and it’s easy to fool oneself in the data analysis. How can we be sure we’re doing things right? One popular method is to fake ourselves out. A group gets together and creates a fake data set (keeping the underlying parameters secret), and then independent groups can analyze the data to their heart’s content. Once the analysis groups publicly announce their results, the “true” parameters underlying the data can be revealed, and the analysis techniques can be directly evaluated. There is a correct result. You either get it or you don’t. You’re either right or wrong.
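The structure of such a blind challenge can be sketched in a few lines. Everything here is invented for illustration: a secret slope stands in for the hidden parameters, Gaussian noise for the instrument, and a least-squares fit for the analysis group.

```python
import random

def make_challenge(seed=2010):
    """Referee: generate noisy data y = m*x + noise from a secret slope m."""
    rng = random.Random(seed)
    secret_slope = rng.uniform(0.5, 2.0)   # kept hidden from the analysts
    xs = [i / 100.0 for i in range(1000)]
    ys = [secret_slope * x + rng.gauss(0.0, 0.5) for x in xs]
    return (xs, ys), secret_slope

def analyze(xs, ys):
    """Analyst: least-squares slope estimate, with no knowledge of the truth."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / sxx

(data, truth) = make_challenge()
estimate = analyze(*data)
# Only after the estimate is committed is the secret revealed and scored:
print(f"true slope {truth:.3f}, recovered {estimate:.3f}")
```

The essential feature is that `analyze` never sees `secret_slope`; the reveal happens only after the analysis is locked in, so you're either right or wrong.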

This approach has been developed for particle physics and gravitational waves and all sorts of other data sets. The latest version of this is the GREAT10 data challenge, for weak gravitational lensing data analysis. As we've discussed before (here, here, here), gravitational lensing is one of the most powerful tools in cosmology (Joanne Cohn has a brief introduction, with lots of links). In short: the gravity from intervening matter bends the light coming from distant objects. This causes the images of distant objects to change in brightness, and to be distorted ("shear" is the preferred term of art). By looking at the correlated effects on (literally) millions of distant galaxies, it is possible to infer the intervening matter distribution. What is particularly powerful about gravitational lensing is that it is sensitive to everything in the Universe. There are no prejudices: the lensing object can be dark or luminous, it can be a black hole or a cluster of galaxies or something we haven't even thought of yet. As long as the object in question interacts via gravity, it will leave an imprint on images of distant sources of light.

Measuring the Universe with gravitational lensing would be simple if only all galaxies were perfectly round, the atmosphere weren't there, and telescopes were perfect. Sadly, that's not the situation we're in. We're looking for an additional percent-level squashing of a galaxy that is already intrinsically squashed at the 30% level. The only way to see this is to notice correlations among many, many galaxies, so you can average away the intrinsic effects. (And there might be intrinsic correlations in the shapes of adjacent galaxies, which is a pernicious source of systematic noise.) And if some combination of the telescope and the atmosphere produces a blurring (so that stars, for example, don't appear perfectly circular), this could easily make you think you have tons of dark matter where there isn't any.

How do you know you're doing it right? You produce a fake sky, with as many of the complications of the real sky as possible. Then you ask other people to separate out the effects of the atmosphere and the telescope (encapsulated in the point spread function) and the effects of dark matter (via gravitational lensing). The GREAT10 team has done exactly this (see discussions here, here, here). They have released a bunch of images to the public. They know exactly what has gone into making the images. Your task is to figure out the PSF and the gravitational lensing in the image. Everyone is welcome to give it a shot! The images, and lots of explanatory documentation, are available here.

The group that does the best job of finding the dark matter gets a free trip to the Jet Propulsion Laboratory. And, most importantly, an iPad. What more incentive could you want? Start working on your gravitational-lensing algorithms!
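The averaging argument can be sketched numerically. Assuming, purely for illustration, Gaussian intrinsic ellipticities with 30% scatter and a uniform 1% shear, the signal is invisible in any single galaxy but emerges cleanly from the mean of a few hundred thousand:

```python
import random

rng = random.Random(42)
true_shear = 0.01    # percent-level cosmological signal (illustrative value)
sigma_int = 0.3      # intrinsic ellipticity scatter, ~30% per galaxy

n_gal = 200_000
# Each observed ellipticity = intrinsic shape + cosmological shear
observed = [rng.gauss(0.0, sigma_int) + true_shear for _ in range(n_gal)]

# One galaxy tells you almost nothing: the intrinsic shape dominates.
print(f"one galaxy: {observed[0]:+.3f}")

# Averaging washes out the random intrinsic shapes, leaving the shear,
# with statistical error sigma_int / sqrt(n_gal) ~ 0.0007.
estimate = sum(observed) / n_gal
print(f"mean of {n_gal} galaxies: {estimate:+.5f} (true value {true_shear})")
```

This also shows why correlated intrinsic shapes are so pernicious: the whole method assumes the intrinsic part averages to zero, and a correlated contamination doesn't.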

This is truly science by the masses, for the masses.


Is Dark Matter Supernatural?

01 Nov

No, it’s not. Don’t be alarmed: nobody is claiming that dark matter is supernatural. That’s just the provocative title of a blog post by Chris Schoen, asking whether science can address “supernatural” phenomena. I think it can, all terms properly defined.

This is an old question, which has come up again in a discussion that includes Russell Blackford, Jerry Coyne, John Pieret, and Massimo Pigliucci. (There is some actual discussion in between the name-calling.) Part of the impetus for the discussion is this new paper by Maarten Boudry, Stefaan Blancke, and Johan Braeckman for Foundations of Science.

There are two issues standing in the way of a utopian ideal of universal agreement: what we mean by “supernatural,” and how science works. (Are you surprised?)

There is no one perfect definition of "supernatural," but it's at least worth trying to define it before passing judgment. Here's Chris Schoen, commenting on Boudry et al.:

Nowhere do the authors of the paper define just what supernaturalism is supposed to mean. The word is commonly used to indicate that which is not subject to “natural” law, that which is intrinsically concealed from our view, which is not orderly and regular, or otherwise not amenable to observation and quantification.

Very sympathetic to the first sentence. But the second one makes matters worse rather than better. It’s a list of four things: a) not subject to natural law, b) intrinsically concealed from our view, c) not orderly and regular, and d) not amenable to observation and quantification. These are very different things, and it’s far from clear that the best starting point is to group them together. In particular, b) and d) point to the difficulty in observing the supernatural, while a) and c) point to its lawless character. These properties seem quite independent to me.

Rather than declare once and for all what the best definition of "supernatural" is, we can try to distinguish between at least three possibilities:

  1. The silent: things that have absolutely no effect on anything that happens in the world.
  2. The hidden: things that affect the world only indirectly, without being immediately observable themselves.
  3. The lawless: things that affect the world in ways that are observable (directly or otherwise), but not subject to the regularities of natural law.

There may be some difficulty involved in figuring out which category something fits into, but once we've done so it shouldn't be so hard to agree on how to deal with it. If something is in the first category, having absolutely no effect on anything that happens in the world, I would suggest that the right strategy is simply to ignore it. Concepts like that are not scientifically meaningful. But they're not really meaningful on any other level, either. To say that something has absolutely no effect on how the world works is an extremely strong characterization, one that removes the concept from the realm of interestingness. But there aren't many such concepts. Say you believe in an omnipotent and perfect God, one whose perfection involves being timeless and not intervening in the world. Do you also think that there could be a universe exactly like ours, except that this God does not exist? If so, I can't see any way in which the idea is meaningful. But if not, then your idea of God does affect the world — it allows it to exist. In that case, it's really in the next category.

That would be things that affect the world, but only indirectly. This is where the dark matter comparison comes in, which I don’t think is especially helpful. Here’s Schoen again:

We presume that dark matter –if it exists–is lawful and not in the least bit capricious. In other words, it is–if it exists–a “natural” phenomena. But we can presently make absolutely no statements about it whatsoever, except through the effect it (putatively) has on ordinary matter. Whatever it is made of, and however it interacts with the rest of the material world is purely speculative, an untestable hypothesis (given our present knowledge). Our failure to confirm it with science is not unnerving.

I would have thought that this line of reasoning supports the contention that unobservable things do fall unproblematically within the purview of science, but Chris seems to be concluding the opposite, unless I’m misunderstanding. There’s no question that dark matter is part of science. It’s a hypothetical substance that obeys rules, from which we can make predictions that can be tested, and so on. Something doesn’t have to be directly observable to be part of science — it only has to have definite and testable implications for things that are observable. (Quarks are just the most obvious example.) Dark matter is unambiguously amenable to scientific investigation, and if some purportedly supernatural concept has similar implications for observations we do make, it would be subject to science just as well.

It’s the final category, things that don’t obey natural laws, where we really have to think carefully about how science works. Let’s imagine that there really were some sort of miraculous component to existence, some influence that directly affected the world we observe without being subject to rigid laws of behavior. How would science deal with that?

The right way to answer this question is to ask how actual scientists would deal with that, rather than decide ahead of time what is and is not “science” and then apply this definition to some new phenomenon. If life on Earth included regular visits from angels, or miraculous cures as the result of prayer, scientists would certainly try to understand it using the best ideas they could come up with. To be sure, their initial ideas would involve perfectly “natural” explanations of the traditional scientific type. And if the examples of purported supernatural activity were sufficiently rare and poorly documented (as they are in the real world), the scientists would provisionally conclude that there was insufficient reason to abandon the laws of nature. What we think of as lawful, “natural” explanations are certainly simpler — they involve fewer metaphysical categories, and better-behaved ones at that — and correspondingly preferred, all things being equal, to supernatural ones.

But that doesn't mean that the evidence could never, in principle, be sufficient to overcome this preference. Theory choice in science is typically a matter of competing comprehensive pictures, not dealing with phenomena on a case-by-case basis. There is a presumption in favor of simple explanation; but there is also a presumption in favor of fitting the data. In the real world, there is data favoring the claim that Jesus rose from the dead: it takes the form of the written descriptions in the New Testament. Most scientists judge that this data is simply unreliable or mistaken, because it's easier to imagine that non-eyewitness testimony in two-thousand-year-old documents is inaccurate than to imagine that there was a dramatic violation of the laws of physics and biology. But if this kind of thing happened all the time, the situation would be dramatically different; the burden on the "unreliable data" explanation would become harder and harder to bear, until the preference would be in favor of a theory where people really did rise from the dead.
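One way to make the "harder and harder to bear" intuition quantitative is a back-of-the-envelope Bayesian odds update (the numbers here are invented purely for illustration): start with heavy prior odds against the law-violating hypothesis, and let each independent well-documented report multiply the odds by a fixed likelihood ratio favoring "it really happened" over "the data is unreliable."

```python
# Posterior odds = prior odds x (likelihood ratio)^n for n independent reports.
prior_odds = 1e-9        # strong prior against a law-violating event (made up)
lr_per_report = 20.0     # evidential weight of each report (made up)

odds = prior_odds
for n in range(1, 11):
    odds *= lr_per_report
    print(f"after {n:2d} independent reports: posterior odds = {odds:.3g}")
```

With these invented numbers, six reports leave the odds well below even, while the seventh tips them past 1; the point is just that a simplicity preference, however strong, is a finite head start that enough data can overcome.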

There is a perfectly good question of whether science could ever conclude that the best explanation was one that involved fundamentally lawless behavior. The data in favor of such a conclusion would have to be extremely compelling, for the reasons previously stated, but I don’t see why it couldn’t happen. Science is very pragmatic, as the origin of quantum mechanics vividly demonstrates. Over the course of a couple decades, physicists (as a community) were willing to give up on extremely cherished ideas of the clockwork predictability inherent in the Newtonian universe, and agree on the probabilistic nature of quantum mechanics. That’s what fit the data. Similarly, if the best explanation scientists could come up with for some set of observations necessarily involved a lawless supernatural component, that’s what they would do. There would inevitably be some latter-day curmudgeonly Einstein figure who refused to believe that God ignored the rules of his own game of dice, but the debate would hinge on what provided the best explanation, not a priori claims about what is and is not science.

One might offer the objection that, in this view of science, we might end up getting things wrong. What if there truly are lawless supernatural actions in the world, but they appear only very rarely? In that case science would conclude (as it does) that they’re most likely not supernatural at all, but simply examples of unreliable data. How can we guard against that error?

We can’t, with complete confidence. There are many ways we could be wrong — we could be being taunted by a powerful and mischievous demon, or we and our memories could have randomly fluctuated into existence from thermal equilibrium, etc. Science tries to come up with the best explanations based on things we observe, and that strategy has great empirical success, but it’s not absolutely guaranteed. It’s just the best we can do.