Archive for October, 2011

Biofuels, Speculation Blamed for Global Food Market Weirdness

05 Oct

A new analysis of sudden rises in global food prices puts the blame on biofuel policy and mortgage-meltdown-style speculation, which may have fundamentally changed how food markets function.

Many other explanations have been proposed, and the latest analysis — a series of mathematical models and statistical evaluations that seem to match theory with real-world patterns — is not conclusive. But it does make a strong case.

“There’s a literature of a hundred-plus articles, saying this might be the cause, or that might be the cause,” said network theorist Yaneer Bar-Yam of the New England Complex Systems Institute. “We looked quantitatively, and found two important factors. Speculators cause the bubbles and crashes, and ethanol causes the background rise.”

Bar-Yam and the NECSI team, whose analysis was published Sept. 21 on arXiv, work at the intersection of social phenomena and network analysis. In earlier research, they’ve explored the global economy’s changing structure and early-warning signals that may precede crashes.

More recently, they’ve studied how social unrest may have been fueled by food price spikes in 2008 and again in 2011. It’s not only the rise in food prices that’s proved troubling, but the rapidity. Shifts have been big and sudden, in stark contrast to the generally slow fluctuation of food prices since the mid-20th century.

Among the possible causes put forward by economists are drought, meat-intensive dietary habits and market hypersensitivity to supply and demand. Another is corn-based biofuel: In less than a decade, some 15 percent of the world’s corn production has been converted from food to fuel. Perhaps most controversially, some economists have blamed a flood of speculators betting on the rise or fall of food prices.

The FAO Food Price Index (blue solid line) and the prices produced by Bar-Yam's model (dotted red line). Image: Bar-Yam et al./arxiv

Speculating on food isn’t new, but it was long restricted to farmers and companies involved in food production. For them, speculation was a classic form of hedging: A farmer could, for example, make a bet that crop prices would fall. If they didn’t, he’d benefit from his harvest’s high prices; but if they did fall, winning his bet would offset the losses. Speculation was, on the whole, a stabilizing force.

In the late 1990s, however, a financial industry-led push for deregulation — which would later result in the Enron debacle, the California energy crisis and the 2008 mortgage meltdown — changed how food speculation worked. Anyone could participate. Bets on food were suddenly made by investment companies that could package and repackage their bets into the sorts of derivatives made famous by the mortgage crisis.

According to some economists, this disconnected food prices from basic laws of supply and demand, and made them prone to wild swings. But others disagreed, saying the mathematical signs of cause-and-effect were hazy or absent.

“In the last three to four years, many things have happened in the economy that weren’t anticipated by most folks, and are not explained even today. I don’t know if that means the basic laws of supply and demand aren’t operating, but the way supply and demand is manifested is not understood,” said Jeffrey Fuhrer, research director at the Federal Reserve Bank of Boston. “We don’t have an understanding of the role of speculative markets.”

Bar-Yam and colleagues approached this morass with a series of mathematical models designed to simulate the trend-following investment behavior of speculators and food producers. Key to their models was a link between food prices among speculators and the so-called spot price of food at markets where actual commodities, not their hypothetical future values, are traded.
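
To make the mechanism concrete, here is a deliberately minimal toy of trend-following price dynamics. It is not the NECSI model, and every coefficient below is invented; it only illustrates the qualitative point the researchers are modeling: trend-chasing speculators amplify swings, while fundamentals pull the price back toward an equilibrium value.

    import random

    equilibrium = 100.0    # hypothetical supply-and-demand "fair" price
    trend_weight = 0.9     # how strongly speculators chase the recent trend (invented)
    revert_weight = 0.05   # how strongly fundamentals pull the price back (invented)

    random.seed(1)
    prices = [100.0, 101.0]
    for _ in range(300):
        trend = prices[-1] - prices[-2]
        noise = random.gauss(0.0, 0.5)
        prices.append(prices[-1]
                      + trend_weight * trend                        # speculative trend-following
                      + revert_weight * (equilibrium - prices[-1])  # pull toward fundamentals
                      + noise)

    # With trend_weight near zero the price just jitters around the equilibrium;
    # as it approaches 1 the swings grow larger and longer-lived, and past 1 the
    # price runs away entirely.
    print(f"min {min(prices):.1f}, max {max(prices):.1f}")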

Some critics of the proposed speculation-food bubble link say spot prices are established independently, from moment to moment, in isolation from any speculative influence. But when Bar-Yam’s team phoned people in the business, at granaries and the U.S. Department of Agriculture, they were told that spot prices are set in reference to the futures market at the Chicago Board of Trade.

With the link to speculation established, the researchers let their model run. What resulted was a pattern of month-to-month prices similar to the peaks and valleys seen in real-world food price fluctuations since 2007. However, speculation didn’t replicate the observed long-term, year-to-year rise in food prices.

That long-term rise appeared only when Bar-Yam’s team added the shift of corn from use as food to use in ethanol biofuels. With both speculation and biofuels included, the model produced a series of food prices uncannily similar to recent history (see graphic above).

Overlay their model’s simulated market graph on a graph of the U.N. Food and Agriculture Price Index between 2004 and 2011, and “it fits amazingly well,” said Bar-Yam. “It reproduces the peaks. It reproduces the intermediate blip. The quality of the fit is astoundingly good.”

Models are necessarily pale, oversimplified representations of complex reality, of course, and retrospectively replicating a dataset doesn’t prove the researchers’ model right. But it seems to fit better than other proposed explanations for rising, volatile food prices.

When Bar-Yam’s group looked for a statistical connection between the 2008 spike and drought in Australia, none could be found. Neither could a link be found to rising grain demand, which has come primarily from China and India, both of which met their needs by increasing grain production internally rather than buying abroad. Another plausible explanation, rising oil and energy prices, didn’t hold up to rigorous statistical analysis.

Finally, the researchers found no evidence that global food markets have simply become extra-sensitive to tiny changes in supply and demand. If anything, the basic laws of supply and demand appear temporarily suspended: Supplies increase but prices don’t fall, and demand goes unmet.

While slowly rising prices are a problem, however, the rapid short-term bursts are more troubling. In the last decade, those bursts occurred only after 2007, a time when investors moved money en masse into commodities. That timing fits with another finding of Bar-Yam’s model: Some speculation is fine, even beneficial, but too much makes a market prone to instability.

“Under circumstances where speculators are fairly limited in their engagement, there’s nothing wrong. But when they’re a large fraction of the market, you’re in trouble,” said Bar-Yam. “You had trillions of dollars go into commodities from the housing and stock markets, and it blew away the pricing mechanisms.”

Brookings Institution economist Homi Kharas called Bar-Yam’s model “carefully done,” and said it “provided solid empirical analysis” in support of the diagnosis that speculation drives the bubbles. However, he warned against attaching too much weight to a model, and noted that food price bubbles aren’t new.

“Prices today are roughly at what they were in the mid-1970s,” Kharas said. “At that time, nobody had heard of these futures, these index-traded funds. How do we know these are new changes, and not a return to things in the past?” But Richard Cooper, a Harvard University economist who studied the mid-1970s episode, said speculation by Russian grain buyers probably contributed to that earlier bubble.

Bar-Yam’s new analysis will surely be challenged, Cooper said. “Somebody will come along and say the fundamentals weren’t characterized properly. There will be technical arguments. But it’s up to the challengers to show where their analysis has gone wrong.”

Image: Sign in a cafe. (Cory Doctorow/Flickr)

Citation: “The Food Crises: A quantitative model of food prices including speculators and ethanol conversion.” By Marco Lagi, Yavni Bar-Yam, Karla Z. Bertrand, Yaneer Bar-Yam. arXiv, Sept. 21, 2011


Nobel Awarded to Researcher Who Redefined Crystalline

05 Oct

By John Timmer, Ars Technica

Yesterday, the Physics Nobel Prize went to a group of researchers who found that our expectations about something as basic as the expansion of the universe were wrong. Today, the Chemistry Prize has gone to a lone researcher who overturned something even more basic: his discovery of what’s now termed a quasicrystal triggered the redefinition of what a crystalline solid is.

It’s easy to find a representation of a typical crystal in any chemistry textbook, which will typically show an orderly arrangement of atoms repeating out to infinity. These crystals, which are as easy to find as the nearest salt shaker, repeat the same arrangement over and over in every direction. There are a limited number of ways to build something with that sort of symmetry, and chemists had pretty much figured they had identified all of them. In fact, the International Union of Crystallography had defined a crystal as “a substance in which the constituent atoms, molecules, or ions are packed in a regularly ordered, repeating three-dimensional pattern.”

Enter Israel’s Daniel Shechtman, who was working with a rapidly cooled aluminum alloy with about 10 to 15 percent manganese mixed in. Shechtman put his sample under an electron microscope to generate a diffraction pattern, in which electrons are bounced off the atoms in an orderly crystal structure, creating a bunch of bright and dark regions that tell us about the positions of the atoms themselves. The diffraction pattern Shechtman saw, shown above, didn’t make any sense — it showed a tenfold symmetry, something that any chemist, including Shechtman, would know was impossible.
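
Why “impossible”? The article doesn’t spell out the argument, but it is short: in a periodically repeating lattice, any rotation that maps the lattice onto itself can be written as a matrix with integer entries in the lattice basis, so its trace must be an integer, and for a planar rotation by angle θ the trace is 2cos θ. That only permits rotations of 60, 90, 120, 180 or 360 degrees, i.e. 6-, 4-, 3-, 2- or 1-fold symmetry. Five- and tenfold symmetry simply cannot coexist with a periodically repeating arrangement of atoms, which is why Shechtman’s pattern demanded a rethink of what “crystal” means rather than a recount of the spots.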

The question marks in Shechtman's notebook. Image: Iowa State University

In fact, his notebook, which is also still around, has three question marks next to the point where he noted the sample’s tenfold symmetry.

His boss apparently thought he had lost it and, according to the Nobel’s press information, bought Shechtman a crystallography handbook to tell him what he already knew. But Shechtman was persistent, and sent his data to others in the field, some of whom took it seriously.

Fortunately, there was some precedent for the sorts of patterns he was seeing. Mathematicians had studied medieval Islamic tilings whose ordered patterns never exactly repeat, and had developed methods of describing them. Penrose tilings (named after Roger Penrose, a British mathematician) could also be used to describe the sorts of patterns Shechtman was seeing in his crystals.

Despite the mathematical backing, Shechtman’s first publication on the topic met fierce resistance from some in the crystallography community, including Nobel Laureate Linus Pauling. What gradually won the day for him was the fact that other researchers were able to quickly publish related quasicrystalline structures — some of them may have actually seen this years earlier, but didn’t know what to make of the data, so they left it in the file drawer.

Enough labs published results that it became impossible to contend that they all needed a remedial trip to a crystallography textbook, and the consensus in the field went in Shechtman’s favor. Eventually, the International Union of Crystallography even changed its definition of a crystal to accommodate what was once thought to be impossible. And, more recently, researchers have even described a naturally occurring quasicrystal.

The Nobel Prize literature cites a number of interesting properties of these substances that might eventually be turned into useful materials. Quasicrystals, even purely metallic ones, tend to be very hard (although prone to fracturing). Their unusual structures make them poor conductors of heat and electricity, and can help create a nonstick surface. There is some hope that, because of their poor heat conductivity, they’ll make good materials for converting temperature differences directly to electricity, allowing the harvesting of waste heat.

Still, the Prize isn’t being awarded because quasicrystals could have commercial applications. It’s being awarded because Shechtman demonstrated, reproducibly, that something we once thought impossible actually exists.

Top image: The symmetry pattern of electron diffraction in Shechtman’s quasicrystal. (Nobel Media)

Source: Ars Technica


How a Blogging Duo Is Changing Fashion Photography With Animated Cinemagraphs

05 Oct



A cinemagraph created during New York Fashion Week last month.

Jamie Beck and Kevin Burg are the rising-star duo behind the wildly popular Tumblr From Me To You.

(One might argue, given recent campaigns with Ralph Lauren and Juicy Couture, a photo editorial in The New York Times and an appearance in Lucky Magazine, that their stars have already risen, but we firmly believe the best is yet to come.)

Beck, 28, and Burg, 30, combine an unusual set of talents that have attracted not only the notice of the Tumblr community, but also of a growing roster of brands and editors.

Beck is the photographer and the blog’s primary model and stylist. She leverages her pinup figure, makeup and hair-styling skills, and a wardrobe of vintage finds to create spreads that connote the glamor of American icons such as Audrey Hepburn and Grace Kelly.

Burg is the more technical of the two, leading the blog’s design and the creation of their signature (and trademark-pending) cinemagraphs — animated GIF images that look like moving photos. He also — from what I observed in meetings with one of their clients and their manager, Karen Robinovitz of DBA — heads up business relations, jotting down notes on clients’ expectations and deadlines for deliverables.

The two met in 2006 through mutual friends, and are now engaged. Before they began working together at the beginning of this year, Beck — who says that from the age of 13, photography is “all [she's] ever done, and all [she's] ever wanted to do” — was still shooting in film. Burg encouraged her to purchase her first digital camera with which to begin blogging and tweeting, and more recently, to begin uploading her iPhone snapshots to Instagram. (“I’m obsessed,” she discloses.) He also designed her Tumblr.

Burg had, for some time, been taking frames from Saturday Night Live clips and turning elements into looping animations on a still background. These became the prototypes for their first cinemagraph, “Les Tendrils,” which was published on Feb. 13, 2011.


Beck and Burg’s first cinemagraph, “Les Tendrils,” published on February 13, 2011.

After they published their first cinemagraphs, Beck recalls that no one wanted to book her for photographs anymore. They wanted her to create “that moving thing you do” — which is when they decided to coin the term “cinemagraph.” The two felt they needed the term because what they created was unlike a typical animated GIF.

“There’s a cinematic quality to it … like a living photograph. It’s always a photograph first and foremost,” says Beck.


How They Create Cinemagraphs

[Image gallery] Jamie Beck; model Coco Rocha wearing Oscar de la Renta; a rainy evening in Savannah, Georgia; a cappuccino; fashion designer Oscar de la Renta; inside the Brooklyn apartment of a couple who designs jewelry; a row of swaying lights; a swinging necklace; a summer concert; a cinemagraph for Juicy Couture; a cinemagraph for designer Katie Ermilio's lookbook; a handbag swinging on a Manhattan rooftop; backstage at Prabal Gurung (two images); photographers Scott Schuman and Garance Dore at Burberry's S/S 2012 collection show.

Beck and Burg never know for sure if a cinemagraph is going to work out, which makes it difficult when brands hire the pair. “We can be 90% sure,” Beck discloses. “When we shoot from the street or at [New York] Fashion Week, and I can’t control the environment, it’s never a guarantee.”

To create a cinemagraph, Burg and Beck focus on animating one object: a swinging chain, for instance, or a spoon moving around the rim of a coffee cup. In a studio setting, the pair will employ pinpoint light to create sparkle, and fans to tousle hair and garments. Beck directs the camera, a Canon 5D Mark II, while Burg controls the props that produce the animation.

Beck and Burg will then import and edit the files in Adobe Photoshop and After Effects. The number of frames they use depends on the medium. For Gilt Taste’s website, they were able to create much longer loops and embed their work on the site using HTML5 video layers. A cinemagraph that appears on their Tumblr will end up being between 25 and 100 frames; a banner ad is even more constrained.
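
Their production runs through those commercial tools, but the core idea (one small region keeps moving while everything else stays frozen) is simple enough to sketch. The following is an assumed, minimal illustration using Python and the Pillow imaging library, not a description of Beck and Burg’s actual workflow; the filenames, frame count and animated region are all hypothetical:

    from PIL import Image

    # hypothetical frames exported from a short video clip
    frames = [Image.open(f"frame_{i:03d}.png") for i in range(30)]
    still = frames[0].copy()            # the frozen background
    box = (400, 200, 600, 400)          # hypothetical region that should keep moving

    out = []
    for f in frames:
        composite = still.copy()
        composite.paste(f.crop(box), box[:2])   # only this region animates
        out.append(composite)

    # loop=0 means loop forever; duration is the per-frame display time in ms
    out[0].save("cinemagraph.gif", save_all=True, append_images=out[1:],
                duration=80, loop=0)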

Shooting a cinemagraph doesn’t take any more time than shooting a photo, roughly speaking, but the editing process generally takes a day, says Burg.

Both Beck and Burg expressed frustration with the limitations of connection speeds and file sizes, which necessitate the use of GIF files and consequently reduce the quality. Beck expects that in a year they will be able to distribute cinemagraphs that look so lifelike you could touch them.


At a test shoot for Juicy Couture in August.


The Added Value of an Audience



A cinemagraph commissioned by Juicy Couture.

It’s not just Beck and Burg’s photography and cinemagraphs that make them appealing to brands. The two have also amassed a large built-in audience — a series of six cinemagraphs they did featuring model Coco Rocha in Oscar de la Renta gowns garnered around 55,000 notes and more than 2,000,000 impressions, Tumblr fashion director Rich Tong revealed at a conference in Paris earlier this month. That exposure makes the duo a valuable distribution force.

Take a recent campaign Beck and Burg did for fashion brand Juicy Couture. They were commissioned to create a series of cinemagraphs using Juicy Couture products, some of which appeared as banner ads across a range of fashion sites, and some of which — like the one above — appeared solely on their own Tumblr, racking up upwards of 15,000 notes (reblogs and likes) apiece.

“The great thing about Jamie and Kevin is that they’re not just artists, but they also have a distribution portal,” says Robinovitz. “Why would you just hire a photographer when you can hire a photographer who has a place to share photos… [and] a hungry audience?”

Robinovitz’s question was rhetorical, of course, but also a good one to pose.

In a recent interview, Scott Schuman, the photographer behind street style blog The Sartorialist, says that he earns somewhere between a quarter of a million and half a million dollars per year running ads on his blog, in addition to the assignments it has earned him. Will photographers who don’t blog and market themselves online stand as much of a chance? And will blog coverage be written into assignment contracts?

Beck says that while she has not negotiated blog coverage into any of her contracts directly, it is discussed with brands during an assignment — namely, she says, to figure out timing and what she’s allowed to post. Brands don’t control what goes on Tumblr, and she is careful to only accept assignments true to her aesthetic.

“If I am going to work with somebody, it has to be part of my life, something I want to share,” Beck explains. “I can be hired to make banner ads, but I want people to see the whole 360, and hopefully my readers will be amused or inspired.”



Kill Math makes math more meaningful

05 Oct

Kill Math

After a certain point in math education, somewhere around high school, the relevance of the concepts to the everyday, real world seems to fade. Yet in many ways, math lets you describe real life better than you can with words alone. Designer Bret Victor hopes to make the abstract and conceptual real and concrete with Kill Math.

Kill Math is my umbrella project for techniques that enable people to model and solve meaningful problems of quantity using concrete representations and intuition-guided exploration. In the long term, I hope to develop a widely-usable, insight-generating alternative to symbolic math.

As part of the early project, Victor developed a prototype interface on the iPad to help you understand dynamical systems. It probably sounds boring to you, but the video and explanation will change your mind:

Statistics has the same problem with concepts, which is one of the main reasons people hate it so much. They learn about curves, hypothesis tests, and distribution tables, and the takeaway is that there are some equations you plug numbers into. Sad. Of course there are plenty of people working on that, but there's still a ways to go.

[Kill Math | Thanks, Matthew]

 
 


Dark Energy FAQ

04 Oct

In honor of the Nobel Prize, here are some questions that are frequently asked about dark energy, or should be.

What is dark energy?

It’s what makes the universe accelerate, if indeed there is a “thing” that does that. (See below.)

So I guess I should be asking… what does it mean to say the universe is “accelerating”?

First, the universe is expanding: as shown by Hubble, distant galaxies are moving away from us with velocities that are roughly proportional to their distance. “Acceleration” means that if you measure the velocity of one such galaxy, and come back a billion years later and measure it again, the recession velocity will be larger. Galaxies are moving away from us at an accelerating rate.

But that’s so down-to-Earth and concrete. Isn’t there a more abstract and scientific-sounding way of putting it?

The relative distance between far-flung galaxies can be summed up in a single quantity called the “scale factor,” often written a(t) or R(t). The scale factor is basically the “size” of the universe, although it’s not really the size because the universe might be infinitely big — more accurately, it’s the relative size of space from moment to moment. The expansion of the universe is the fact that the scale factor is increasing with time. The acceleration of the universe is the fact that it’s increasing at an increasing rate — the second derivative is positive, in calculus-speak.

Does that mean the Hubble constant, which measures the expansion rate, is increasing?

No. The Hubble “constant” (or Hubble “parameter,” if you want to acknowledge that it changes with time) characterizes the expansion rate, but it’s not simply the derivative of the scale factor: it’s the derivative divided by the scale factor itself. Why? Because then it’s a physically measurable quantity, not something we can change by switching conventions. The Hubble constant is basically the answer to the question “how quickly does the scale factor of the universe expand by some multiplicative factor?”

If the universe is decelerating, the Hubble constant is decreasing. If the Hubble constant is increasing, the universe is accelerating. But there’s an intermediate regime in which the universe is accelerating but the Hubble constant is decreasing — and that’s exactly where we think we are. The velocity of individual galaxies is increasing, but it takes longer and longer for the universe to double in size.

Said yet another way: Hubble’s Law relates the velocity v of a galaxy to its distance d via v = H d. The velocity can increase even if the Hubble parameter is decreasing, as long as it’s decreasing more slowly than the distance is increasing.
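
A quick numerical sketch makes the distinction concrete. Using the standard Friedmann equation for a flat universe, arbitrary units, and the rough 27/73 matter/dark-energy split quoted later in this FAQ, the Hubble parameter falls with time while the recession velocity of a galaxy at fixed comoving distance keeps climbing:

    import math

    H0 = 1.0                # Hubble constant today (arbitrary units)
    Om, OL = 0.27, 0.73     # matter and dark-energy fractions

    def hubble(a):
        # Friedmann equation for a flat universe with matter plus vacuum energy
        return H0 * math.sqrt(Om / a**3 + OL)

    a = 1.0                 # scale factor, normalized to 1 today
    d_comoving = 1.0        # comoving distance to some galaxy
    dt = 0.01
    for step in range(301):
        if step % 100 == 0:
            v = hubble(a) * (a * d_comoving)   # Hubble's law: v = H * physical distance
            print(f"t = {step * dt:4.1f}   H = {hubble(a):.3f}   v = {v:.3f}")
        a += a * hubble(a) * dt                # da/dt = a * H(a), crude Euler step

The printed H drifts down toward its asymptotic value of about 0.85 while v grows without bound: acceleration with a falling Hubble constant.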

Did the astronomers really wait a billion years and measure the velocity of galaxies again?

No. You measure the velocity of galaxies that are very far away. Because light travels at a fixed speed (one light year per year), you are looking into the past. Reconstructing the history of how the velocities were different in the past reveals that the universe is accelerating.

How do you measure the distance to galaxies so far away?

It’s not easy. The most robust method is to use a “standard candle” — some object that is bright enough to see from great distance, and whose intrinsic brightness is known ahead of time. Then you can figure out the distance simply by measuring how bright it actually looks: dimmer = further away.
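
The arithmetic behind “dimmer = further away” is just the inverse-square law: the flux F you measure from an object of luminosity L at distance d is F = L / (4πd²), so d = √(L / 4πF). Know L in advance, measure F, and the distance follows; a supernova that appears one quarter as bright as an identical twin is twice as far away.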

Sadly, there are no standard candles.

Then what did they do?

Fortunately we have the next best thing: standardizable candles. A specific type of supernova, Type Ia, is very bright and approximately-but-not-quite the same brightness from one event to the next. Happily, in the 1990s Mark Phillips discovered a remarkable relationship between intrinsic brightness and the length of time it takes for a supernova to decline after reaching peak brightness. Therefore, if we measure the brightness as it declines over time, we can correct for this difference, constructing a universal measure of brightness that can be used to determine distances.

Why are Type Ia supernovae standardizable candles?

We’re not completely sure — mostly it’s an empirical relationship. But we have a good idea: we think that SNIa are white dwarf stars that have been accreting matter from outside until they hit the Chandrasekhar Limit and explode. Since that limit is basically the same number everywhere in the universe, it’s not completely surprising that the supernovae have similar brightnesses. The deviations are presumably due to differences in composition.

But how do you know when a supernova is going to happen?

You don’t. They are rare, maybe once per century in a typical galaxy. So what you do is look at many, many galaxies with wide-field cameras. In particular you compare an image of the sky taken at one moment to another taken a few weeks later — “a few weeks” being roughly the time between new Moons (when the sky is darkest), and coincidentally about the time it takes a supernova to flare up in brightness. Then you use computers to compare the images and look for new bright spots. Then you go back and examine those bright spots closely to try to check whether they are indeed Type Ia supernovae. Obviously this is very hard and wouldn’t even be conceivable if it weren’t for a number of relatively recent technological advances — CCD cameras as well as giant telescopes. These days we can go out and be confident that we’ll harvest supernovae by the dozens — but when Perlmutter and his group started out, that was very far from obvious.
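
The computer-comparison step is, at heart, difference imaging. A heavily simplified sketch follows (real pipelines first align, rescale and blur-match the two exposures; the filenames and the 5-sigma threshold are just placeholders):

    import numpy as np

    reference = np.load("field_week0.npy")   # exposure from the previous dark-of-moon run
    new_image = np.load("field_week4.npy")   # the same patch of sky a few weeks later

    diff = new_image - reference              # anything that hasn't changed cancels out
    noise = diff.std()

    ys, xs = np.where(diff > 5 * noise)       # pixels that brightened well above the noise
    for y, x in zip(ys, xs):
        print(f"candidate transient at ({x}, {y}), excess = {diff[y, x]:.1f}")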

And what did they find when they did this?

Most (almost all) astronomers expected them to find that the universe was decelerating — galaxies pull on each other with their gravitational fields, which should slow the whole thing down. (Actually many astronomers just thought they would fail completely, but that’s another story.) But what they actually found was that the distant supernovae were dimmer than expected — a sign that they are farther away than we predicted, which means the universe has been accelerating.

Why did cosmologists accept this result so quickly?

Even before the 1998 announcements, it was clear that something funny was going on with the universe. There seemed to be evidence that the age of the universe was younger than the age of its oldest stars. There wasn’t as much total matter as theorists predicted. And there was less structure on large scales than people expected. The discovery of dark energy solved all of these problems at once. It made everything snap into place. So people were still rightfully cautious, but once this one startling observation was made, the universe suddenly made a lot more sense.

How do we know the supernovae aren’t dimmer because something is obscuring them, or just because things were different in the far past?

That’s the right question to ask, and one reason the two supernova teams worked so hard on their analysis. You can never be 100% sure, but you can gain more and more confidence. For example, astronomers have long known that obscuring material tends to scatter blue light more easily than red, leading to “reddening” of stars that sit behind clouds of gas and dust. You can look for reddening, and in the case of these supernovae it doesn’t appear to be important. More crucially, by now we have a lot of independent lines of evidence that reach the same conclusion, so it looks like the original supernova results were solid.

There’s really independent evidence for dark energy?

Oh yes. One simple argument is “subtraction”: the cosmic microwave background measures the total amount of energy (including matter) in the universe. Local measures of galaxies and clusters measure the total amount of matter. The latter turns out to be about 27% of the former, leaving 73% or so in the form of some invisible stuff that is not matter: “dark energy.” That’s the right amount to explain the acceleration of the universe. Other lines of evidence come from baryon acoustic oscillations (ripples in large-scale structure whose size helps measure the expansion history of the universe) and the evolution of structure as the universe expands.

Okay, so: what is dark energy?

Glad you asked! Dark energy has three crucial properties. First, it’s dark: we don’t see it, and as far as we can observe it doesn’t interact with matter at all. (Maybe it does, but beneath our ability to currently detect.) Second, it’s smoothly distributed: it doesn’t fall into galaxies and clusters, or we would have found it by studying the dynamics of those objects. Third, it’s persistent: the density of dark energy (amount of energy per cubic light-year) remains approximately constant as the universe expands. It doesn’t dilute away like matter does.

These last two properties (smooth and persistent) are why we call it “energy” rather than “matter.” Dark energy doesn’t seem to act like particles, which have local dynamics and dilute away as the universe expands. Dark energy is something else.

That’s a nice general story. What might dark energy specifically be?

The leading candidate is the simplest one: “vacuum energy,” or the “cosmological constant.” Since we know that dark energy is pretty smooth and fairly persistent, the first guess is that it’s perfectly smooth and exactly persistent. That’s vacuum energy: a fixed amount of energy attached to every tiny region of space, unchanging from place to place or time to time. About one hundred-millionth of an erg per cubic centimeter, if you want to know the numbers.

Is vacuum energy really the same as the cosmological constant?

Yes. Don’t believe claims to the contrary. When Einstein first invented the idea, he didn’t think of it as “energy,” he thought of it as a modification of the way spacetime curvature interacted with energy. But it turns out to be precisely the same thing. (If someone doesn’t want to believe this, ask them how they would observationally distinguish the two.)

Doesn’t vacuum energy come from quantum fluctuations?

Not exactly. There are many different things that can contribute to the energy of empty space, and some of them are completely classical (nothing to do with quantum fluctuations). But in addition to whatever classical contribution the vacuum energy has, there are also quantum fluctuations on top of that. These fluctuations are very large, and that leads to the cosmological constant problem.

What is the cosmological constant problem?

If all we knew was classical mechanics, the cosmological constant would just be a number — there’s no reason for it to be big or small, positive or negative. We would just measure it and be done.

But the world isn’t classical, it’s quantum. In quantum field theory we expect that classical quantities receive “quantum corrections.” In the case of the vacuum energy, these corrections come in the form of the energy of virtual particles fluctuating in the vacuum of empty space.

We can add up the amount of energy we expect in these vacuum fluctuations, and the answer is: an infinite amount. That’s obviously wrong, but we suspect that we’re overcounting. In particular, that rough calculation includes fluctuations at all sizes, including wavelengths smaller than the Planck distance at which spacetime probably loses its conceptual validity. If instead we only include wavelengths that are at the Planck length or longer, we get a specific estimate for the value of the cosmological constant.

The answer is: 10^120 times what we actually observe. That discrepancy is the cosmological constant problem.

Why is the cosmological constant so small?

Nobody knows. Before the supernovae came along, many physicists assumed there was some secret symmetry or dynamical mechanism that set the cosmological constant to precisely zero, since we certainly knew it was much smaller than our estimates would indicate. Now we are faced with explaining both why it’s small and why it’s not quite zero. And for good measure, there’s the coincidence problem: why is the dark energy density the same order of magnitude as the matter density?

Here’s how bad things are: right now, the best theoretical explanation for the value of the cosmological constant is the anthropic principle. If we live in a multiverse, where different regions have very different values of the vacuum energy, one can plausibly argue that life can only exist (to make observations and win Nobel Prizes) in regions where the vacuum energy is much smaller than the estimate. If it were larger and positive, galaxies (and even atoms) would be ripped apart; if it were larger and negative, the universe would quickly recollapse. Indeed, we can roughly estimate what typical observers should measure in such a situation; the answer is pretty close to the observed value. Steven Weinberg actually made this prediction in 1988, long before the acceleration of the universe was discovered. He didn’t push it too hard, though; more like “if this is how things work out, this is what we should expect to see…” There are many problems with this calculation, especially when you start talking about “typical observers,” even if you’re willing to believe there might be a multiverse. (I’m very happy to contemplate the multiverse, but much more skeptical that we can currently make a reasonable prediction for observable quantities within that framework.)

What we would really like is a simple formula that predicts the cosmological constant once and for all as a function of other measured constants of nature. We don’t have that yet, but we’re trying. Proposed scenarios make use of quantum gravity, extra dimensions, wormholes, supersymmetry, nonlocality, and other interesting but speculative ideas. Nothing has really caught on as yet.

Has the course of progress in string theory ever been affected by an experimental result?

Yes: the acceleration of the universe. Previously, string theorists (like everyone else) assumed that the right thing to do was to explain a universe with zero vacuum energy. Once there was a real chance that the vacuum energy is not zero, they asked whether that was easy to accommodate within string theory. The answer is: it’s not that hard. The problem is that if you can find one solution, you can find an absurdly large number of solutions. That’s the string theory landscape, which seems to kill the hopes for one unique solution that would explain the real world. That would have been nice, but science has to take what nature has to offer.

What’s the coincidence problem?

Matter dilutes away as the universe expands, while the dark energy density remains more or less constant. Therefore, the relative density of dark energy and matter changes considerably over time. In the past, there was a lot more matter (and radiation); in the future, dark energy will completely dominate. But today, they are approximately equal, by cosmological standards. (When two numbers could differ by a factor of 10^100 or much more, a factor of three or so counts as “equal.”) Why are we so lucky to be born at a time when dark energy is large enough to be discoverable, but small enough that it’s a Nobel-worthy effort to do so? Either this is just a coincidence (which might be true), or there is something special about the epoch in which we live. That’s one of the reasons people are willing to take anthropic arguments seriously. We’re talking about a preposterous universe here.

If the dark energy has a constant density, but space expands, doesn’t that mean energy isn’t conserved?

Yes. That’s fine.

What’s the difference between “dark energy” and “vacuum energy”?

“Dark energy” is the general phenomenon of smooth, persistent stuff that makes the universe accelerate; “vacuum energy” is a specific candidate for dark energy, namely one that is absolutely smooth and utterly constant.

So there are other candidates for dark energy?

Yes. All you need is something that is pretty darn smooth and persistent. It turns out that most things like to dilute away, so finding persistent energy sources isn’t that easy. The simplest and best idea is quintessence, which is just a scalar field that fills the universe and changes very slowly as time passes.

Is the quintessence idea very natural?

Not really. An original hope was that, by considering something dynamical and changing rather than a plain fixed constant energy, you could come up with some clever explanation for why the dark energy was so small, and maybe even explain the coincidence problem. Neither of those hopes has really panned out.

Instead, you’ve added new problems. According to quantum field theory, scalar fields like to be heavy; but to be quintessence, a scalar field would have to be enormously light, maybe 10^-30 times the mass of the lightest neutrino. (But not zero!) That’s one new problem you’ve introduced, and another is that a light scalar field should interact with ordinary matter. Even if that interaction is pretty feeble, it should still be large enough to detect — and it hasn’t been detected. Of course, that’s an opportunity as well as a problem — maybe better experiments will actually find a “quintessence force,” and we’ll understand dark energy once and for all.

How else can we test the quintessence idea?

The most direct way is to do the supernova thing again, but do it better. More generally: map the expansion of the universe so precisely that we can tell whether the density of dark energy is changing with time. This is generally cast as an attempt to measure the dark energy equation-of-state parameter w. If w is exactly minus one, the dark energy is exactly constant — vacuum energy. If w is slightly greater than -1, the energy density is gradually declining; if it’s slightly less (e.g. -1.1), the dark energy density is actually growing with time. That’s dangerous for all sorts of theoretical reasons, but we should keep our eyes peeled.

What is w?

It’s called the “equation-of-state parameter” because it relates the pressure p of dark energy to its energy density ρ, via w = p/ρ. Of course nobody measures the pressure of dark energy, so it’s a slightly silly definition, but it’s an accident of history. What really matters is how the dark energy evolves with time, but in general relativity that’s directly related to the equation-of-state parameter.
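
For reference, the standard general-relativity result is that a component with constant w has an energy density scaling with the scale factor as ρ ∝ a^(-3(1+w)). Matter (w = 0) dilutes as a^-3, i.e. with volume; radiation (w = 1/3) as a^-4; vacuum energy (w = -1) doesn’t dilute at all; and anything with w < -1 actually grows denser as the universe expands.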

Does that mean that dark energy has negative pressure?

Yes indeed. Negative pressure is what happens when a substance pulls rather than pushes — like an over-extended spring that pulls on either end. It’s often called “tension.” This is why I advocated smooth tension as a better name than “dark energy,” but I came in too late.

Why does dark energy make the universe accelerate?

Because it’s persistent. Einstein says that energy causes spacetime to curve. In the case of the universe, that curvature comes in two forms: the curvature of space itself (as opposed to spacetime), and the expansion of the universe. We’ve measured the curvature of space, and it’s essentially zero. So the persistent energy leads to a persistent expansion rate. In particular, the Hubble parameter is close to constant, and if you remember Hubble’s Law from way up top (v = H d) you’ll realize that if H is approximately constant, v will be increasing because the distance is increasing. Thus: acceleration.
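
In equation form: H is defined as (da/dt)/a, so if H is strictly constant that integrates to a(t) ∝ e^(Ht): exponential, and therefore accelerating, expansion. A nearly constant H gives nearly exponential growth, which is the regime dark energy is pushing the universe toward.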

If negative pressure is like tension, why doesn’t it pull things together rather than push them apart?

Sometimes you will hear something along the lines of “dark energy makes the universe accelerate because it has negative pressure.” This is strictly speaking true, but a bit ass-backwards; it gives the illusion of understanding rather than actual understanding. You are told “the force of gravity depends on the density plus three times the pressure, so if the pressure is equal and opposite to the density, gravity is repulsive.” Seems sensible, except that nobody will explain to you why gravity depends on the density plus three times the pressure. And it’s not really the “force of gravity” that depends on that; it’s the local expansion of space.
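
For the record, the formula being gestured at is the acceleration equation of general relativity (the second Friedmann equation): (d²a/dt²)/a = -(4πG/3)(ρ + 3p). For ordinary matter the pressure is essentially zero, the right-hand side is negative, and the expansion decelerates; for vacuum energy p = -ρ, so ρ + 3p = -2ρ, the sign flips, and the expansion accelerates.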

The “why doesn’t tension pull things together?” question is a perfectly valid one. The answer is: because dark energy doesn’t actually push or pull on anything. It doesn’t interact directly with ordinary matter, for one thing; for another, it’s equally distributed through space, so any pulling it did from one direction would be exactly balanced by an opposite pull from the other. It’s the indirect effect of dark energy, through gravity rather than through direct interaction, that makes the universe accelerate.

The real reason dark energy causes the universe to accelerate is because it’s persistent.

Is dark energy like antigravity?

No. Dark energy is not “antigravity,” it’s just gravity. Imagine a world with zero dark energy, except for two blobs full of dark energy. Those two blobs will not repel each other, they will attract. But inside those blobs, the dark energy will push space to expand. That’s just the miracle of non-Euclidean geometry.

Is it a new repulsive force?

No. It’s just a new (or at least different) kind of source for an old force — gravity. No new forces of nature are involved.

What’s the difference between dark energy and dark matter?

Completely different. Dark matter is some kind of particle, just one we haven’t discovered yet. We know it’s there because we’ve observed its gravitational influence in a variety of settings (galaxies, clusters, large-scale structure, microwave background radiation). It’s about 23% of the universe. But it’s basically good old-fashioned “matter,” just matter that we can’t directly detect (yet). It clusters under the influence of gravity, and dilutes away as the universe expands. Dark energy, meanwhile, doesn’t cluster, nor does it dilute away. It’s not made of particles, it’s some different kind of thing entirely.

Is it possible that there is no dark energy, just a modification of gravity on cosmological scales?

It’s possible, sure. There are at least two popular approaches to this idea: f(R) gravity, which Mark and I helped develop, and DGP gravity, by Dvali, Gabadadze, and Porrati. The former is a directly phenomenological approach where you simply change the Einstein field equation by messing with the action in four dimensions, while the latter uses extra dimensions that only become visible at large distances. Both models face problems — not necessarily insurmountable, but serious — with new degrees of freedom and attendant instabilities.

Modified gravity is certainly worth taking seriously (but I would say that). Still, like quintessence, it raises more problems than it solves, at least at the moment. My personal likelihoods: cosmological constant = 0.9, dynamical dark energy = 0.09, modified gravity = 0.01. Feel free to disagree.

What does dark energy imply about the future of the universe?

That depends on what the dark energy is. If it’s a true cosmological constant that lasts forever, the universe will continue to expand, cool off, and empty out. Eventually there will be nothing left but essentially empty space.

The cosmological constant could be constant at the moment, but temporary; that is, there could be a future phase transition in which the vacuum energy decreases. Then the universe could conceivably recollapse.

If the dark energy is dynamical, any possibility is still open. If it’s dynamical and increasing (w less than -1 and staying that way), we could even get a Big Rip.

What’s next?

We would love to understand dark energy (or modified gravity) through better cosmological observations. That means measuring the equation-of-state parameter, as well as improving observations of gravity in galaxies and clusters to compare with different models. Fortunately, while the U.S. is gradually retreating from ambitious new science projects, the European Space Agency is moving forward with a satellite to measure dark energy. There are a number of ongoing ground-based efforts, of course, and the Large Synoptic Survey Telescope should do a great job once it goes online.

But the answer might be boring — the dark energy is just a simple cosmological constant. That’s just one number; what are you going to do about it? In that case we need better theories, obviously, but also input from less direct empirical sources — particle accelerators, fifth-force searches, tests of gravity, anything that would give some insight into how spacetime and quantum field theory fit together at a basic level.

The great thing about science is that the answers aren’t in the back of the book; we have to solve the problems ourselves. This is a big one.

 
 

All numbers lead to one

04 Oct

Collatz graph

In 1937, mathematician Lothar Collatz proposed that given the following algorithm, you will always end at the number 1:

  1. Take any natural number, n.
  2. If n is even, divide it by 2.
  3. Otherwise, n is odd. Multiply it by 3 and add 1.
  4. Repeat indefinitely.
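
In code, the procedure is only a few lines (a minimal sketch):

    def collatz_orbit(n):
        """Follow the rules above from n until the orbit reaches 1."""
        orbit = [n]
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            orbit.append(n)
        return orbit

    print(collatz_orbit(6))         # [6, 3, 10, 5, 16, 8, 4, 2, 1]
    print(len(collatz_orbit(27)))   # 27 famously takes 111 steps, so 112 numbers in all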

Developer Jason Davies puts it into reverse and shows all the numbers that fall within an orbit length of 18 or less. Press play, and watch the graph grow. Mostly a fun animation for nerds like me.

[Collatz Graph]

 
 


Isaac Asimov on Security Theater

03 Oct

A great find:

In his 1956 short story, "Let's Get Together," Isaac Asimov describes security measures proposed to counter a terrorist threat:
"Consider further that this news will leak out as more and more people become involved in our countermeasures and more and more people begin to guess what we're doing. Then what? The panic might do us more harm than any one TC bomb."

The Presidential Assistant said irritably, "In Heaven's name, man, what do you suggest we do, then?"

"Nothing," said Lynn. "Call their bluff. Live as we have lived and gamble that They won't dare break the stalemate for the sake of a one-bomb head start."

"Impossible!" said Jeffreys. "Completely impossible. The welfare of all of Us is very largely in my hands, and doing nothing is the one thing I cannot do. I agree with you, perhaps, that X-ray machines at sports arenas are a kind of skin-deep measure that won't be effective, but it has to be done so that people, in the aftermath, do not come to the bitter conclusion that we tossed our country away for the sake of a subtle line of reasoning that encouraged donothingism."

This Jeffreys guy sounds as if he works for the TSA.