Posts Tagged ‘Science and Society’

Trusting Experts

13 Sep

Over on the Google+, Robin Hanson asks a leading question:

Explain why people shouldn’t try to form their own physics opinions, but instead accept the judgements of expert physicists, but they should try to form their own opinions on economic policy, and not just accept expert opinion there.

(I suspect the thing he wants me to explain is not something he thinks is actually true.)

There are two aspects to this question, the hard part and the much-harder part. The hard part is the literal reading, comparing the levels of trust accorded to economists (and presumably also political scientists or sociologists) to the level accorded to physicists (and presumably also chemists or biologists). Why do we — or should we — accept the judgements of natural scientists more readily than those of social scientists?

Although that’s not an easy question, the basic point is not difficult to figure out: in the public imagination, natural scientists have figured out a lot more reliable and non-obvious things about the world, compared to what non-experts would guess, than social scientists have. The insights of quantum mechanics and relativity are not things that most of us can even think sensibly about without quite a bit of background study. Social scientists, meanwhile, talk about things most people are relatively familiar with. The ratio of “things that have been discovered by this discipline” to “things I could have figured out for myself” just seems much larger in natural science than in social science.

Then we stir in the matter of consensus. On the very basics of their fields (the Big Bang model, electromagnetism, natural selection), almost all natural scientists are in agreement. Social scientists seem to have trouble agreeing on the very foundations of their fields. If we cut taxes, will revenue go up or down? Does the death penalty deter crime or not? For many people, a lack of consensus gives them license to trust their own judgment as much as that of the experts. To put it another way: if we talked more about the bedrock principles of the field on which all experts agreed, and less about the contentious applications of detailed models to the real world, the public would likely be more ready to accept experts’ opinions.

None of which is to say that social scientists are less capable or knowledgeable about their fields than natural scientists. Their fields are much harder, where “hard” characterizes the difficulty of coming up with models that accurately capture important features of reality. Physics is the easiest subject of all, which is why we know enormously more about it than any other science. The social sciences deal with fantastically more complicated subjects, about which it’s naturally more difficult to make definitive statements, especially statements that represent counterintuitive discoveries. The esoteric knowledge that social scientists undoubtedly possess, therefore, doesn’t translate directly into actionable understanding of the world, in the way that physicists’ knowledge lets them help get a spacecraft to the moon.

There is a final point that is much trickier: political inclinations and other non-epistemic factors color our social-scientific judgments, for experts as well as for novices. On a liberal/conservative axis, most sociologists are to the left of most economists. (Training as an economist allegedly makes people more selfish, but there are complicated questions of causation there.) Or more basically, social scientists will often approach real-world problems from the point of view of their specific discipline, in contrast with a broader view that the non-expert might find more relevant. (Let’s say the death penalty does deter crime; is it still permissible on moral grounds?) Natural scientists are blissfully free from this source of bias, at least most of the time. Evolution would be the obvious counterexample.

The more difficult question is much more interesting: in completely general terms, when should a non-expert simply place trust in the judgment of an expert? I don’t have a very good answer to that one.

I am a strong believer that good reasons, arguments, and evidence are what matter, not credentials. So the short answer to “when should we trust an expert simply because they are an expert?” is “never.” We should always ask for reasons before we place trust. Hannes Alfvén was a respected Nobel-prizewinning physicist; but his ideas about cosmology were completely loopy, and there was no reason for anyone to trust them. An interested outsider might verify that essentially no working cosmologists bought into his model.

But a “good reason” might reasonably take the form “look, this is very complicated and would take pages of math to make explicit, but you see that I’ve been doing this for a long time and have the respect of my peer group, which has a long track record of being right about these issues, so I’m asking you to go along this time.” In the real world we don’t have anything like the time and resources to become experts in every interesting field, so some degree of trust is simply necessary. When deciding where to place that trust, we rely on a number of factors, mostly involving the track record of the group to which the purported expert belongs, if not the individual experts themselves.

So my advice to economists who want more respect from the outside world would be: make it much more clear to the non-expert public that you have a reliable, agreed-upon set of non-obvious discoveries that your field has made about the world. People have tried to lay out such discoveries, of course — but upon closer inspection they don’t quite measure up to Newton’s Laws in terms of reliability and usefulness.

Social scientists are just as smart and knowledgeable as natural scientists, and certainly have a tougher job. But trust among non-experts can’t be demanded, and shouldn’t be based on credentials; it is earned through a long track record of very visible success. Everyone would be in favor of that.


Science for the masses

09 Dec

Observational science is hard. And it seems to be getting harder. Nowadays, when you want to analyze the latest and greatest data set, it could consist of finding a minute-long evolving oscillatory gravitational-wave signal buried in months and mountains of noise. Or it could consist of picking out that one Higgs event among 600 million events. Per second. Or it could consist of looking for tiny correlations in the images of tens of millions of galaxies.

The interesting effects are subtle, and it’s easy to fool oneself in the data analysis. How can we be sure we’re doing things right? One popular method is to fake ourselves out. A group gets together and creates a fake data set (keeping the underlying parameters secret), and then independent groups can analyze the data to their heart’s content. Once the analysis groups publicly announce their results, the “true” parameters underlying the data can be revealed, and the analysis techniques can be directly evaluated. There is a correct result. You either get it or you don’t. You’re either right or wrong.
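The logic of a blind challenge can be sketched in a few lines. This is a toy illustration, not any real challenge pipeline: the hidden “true” parameter here is just the mean of a noisy signal, and the function names are invented for the example.

```python
import random

def make_blind_challenge(n_points, seed=0):
    """Simulate a blind data challenge: draw a secret 'true' parameter
    (here, the mean of a noisy signal) and release only the noisy data."""
    rng = random.Random(seed)
    true_mean = rng.uniform(-1.0, 1.0)  # the secret underlying parameter
    data = [true_mean + rng.gauss(0.0, 1.0) for _ in range(n_points)]
    return data, true_mean              # true_mean is withheld until the reveal

def analyze(data):
    """An independent group's estimate of the hidden parameter."""
    return sum(data) / len(data)

data, secret = make_blind_challenge(10_000)
estimate = analyze(data)
# Only after the result is publicly announced is 'secret' revealed
# and the analysis technique scored against it.
print(f"estimate = {estimate:+.3f}, truth = {secret:+.3f}")
```

The key feature is the separation of roles: the group generating the data knows `secret`, the analysts see only `data`, and the comparison happens only after results are announced. You either recover the truth or you don’t.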

[Image: dark matter map inferred from gravitational lensing]

This approach has been developed for particle physics and gravitational waves and all sorts of other data sets. The latest version is the GREAT10 data challenge, for weak gravitational lensing data analysis. As we’ve discussed before (here, here, here), gravitational lensing is one of the most powerful tools in cosmology (Joanne Cohn has a brief introduction, with lots of links). In short: the gravity from intervening matter bends the light coming from distant objects. This causes the images of distant objects to change in brightness, and to be distorted in shape (“shear” is the preferred term of art). By looking at the correlated effects on (literally) millions of distant galaxies, it is possible to infer the intervening matter distribution. What is particularly powerful about gravitational lensing is that it is sensitive to everything in the Universe. There are no prejudices: the lensing object can be dark or luminous, it can be a black hole or a cluster of galaxies or something we haven’t even thought of yet. As long as the object in question interacts via gravity, it will leave an imprint on images of distant sources of light.

Measuring the Universe with gravitational lensing would be simple if only all galaxies were perfectly round, the atmosphere weren’t there, and telescopes were perfect. Sadly, that’s not the situation we’re in. We’re looking for an additional percent-level squashing of a galaxy that is already intrinsically squashed at the 30% level. The only way to see this is to notice correlations among many, many galaxies, so you can average away the intrinsic shapes. (And there might be intrinsic correlations in the shapes of adjacent galaxies, which is a pernicious source of systematic noise.) And if some combination of the telescope and the atmosphere produces a blurring (so that stars, for example, don’t appear perfectly circular), this could easily make you think you have tons of dark matter where there isn’t any.

How do you know you’re doing it right? You produce a fake sky, with as many of the complications of the real sky as possible. Then you ask other people to separate out the effects of the atmosphere and the telescope (encapsulated in the point spread function, or PSF) and the effects of dark matter (via gravitational lensing). The GREAT10 team has done exactly this (see discussions here, here, here). They have released a bunch of images to the public. They know exactly what has gone into making the images. Your task is to figure out the PSF and the gravitational lensing in the image. Everyone is welcome to give it a shot! The images, and lots of explanatory documentation, are available here. The group that does the best job of finding the dark matter gets a free trip to the Jet Propulsion Laboratory. And, most importantly, an iPad. What more incentive could you want? Start working on your gravitational-lensing algorithms!
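The averaging argument above can be made concrete with a toy simulation (my own illustration, not the GREAT10 setup): if each galaxy’s observed ellipticity is an intrinsic shape with ~30% scatter plus a common 1% lensing shear, the mean over N galaxies recovers the shear with an error of roughly 0.3/√N. The numbers below are illustrative assumptions.

```python
import random

rng = random.Random(42)

TRUE_SHEAR = 0.01          # the percent-level signal we want to recover
INTRINSIC_SCATTER = 0.30   # typical intrinsic galaxy ellipticity
N_GALAXIES = 1_000_000

# Each observed ellipticity = intrinsic shape + lensing shear.
observed = [rng.gauss(0.0, INTRINSIC_SCATTER) + TRUE_SHEAR
            for _ in range(N_GALAXIES)]

# Averaging beats the intrinsic scatter down by sqrt(N):
# 0.3 / sqrt(1e6) = 0.0003, comfortably below the 0.01 signal.
shear_estimate = sum(observed) / N_GALAXIES
print(f"estimated shear = {shear_estimate:.4f}")
```

With a million galaxies the statistical error is ~3×10⁻⁴, so the 1% shear stands out clearly; with only a thousand galaxies the error (~0.01) would swamp it. This is why lensing surveys need tens of millions of galaxies, and why correlated systematics (PSF blurring, intrinsic alignments) are so dangerous: they don’t average away.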

This is truly science by the masses, for the masses.