Monday, March 25, 2013

Making sense of the Planck results

Without further ado, allow me to present the newest baby picture of the universe.
Image Credit: ESA and the Planck Collaboration
Isn't it adorable! Look at those little anisotropies! Who's a good cosmic microwave background?

Sorry, I got a bit carried away there. Anyway, this is the new image of the anisotropies of the cosmic microwave background. The resolution here is apparently limited only by fundamental physics rather than our ability to observe it, which means this is the best image of the CMB we'll ever get.

Besides making pretty pictures, what did Planck actually find? The first of the results we'll discuss is the new measurement of Hubble's constant, H0. The newly calculated value is 67.8±0.77 km/s/Mpc. It is important to note that the Planck papers report several slightly different values, depending on which other data sets the team combined with their own to increase precision. The value I quoted above uses the most independent methods together to determine H0. What I found particularly interesting was that this is still within the error bars of the WMAP and HST results, as shown in this figure from one of the many papers released by the Planck science team.

Image credit: Planck Collaboration. Ade et al. 2013 A&A

And now I present the Planck results for the composition of the universe.
68.25% dark energy
31.75% atomic matter + dark matter

Breaking up the matter category yields (roughly)
4.9% atomic matter
26.8% dark matter

If you compare this to the chart I posted last week, you'll note that this is a pretty significant change in what we thought the universe is made of. No, the universe didn't magically lose dark energy between WMAP and Planck. Planck just got better measurements, leading to better data, which gives us a more accurate picture of our universe than we ever had before.

One thing that we non-cosmologists tend to take for granted is that the sum of all of the components of the universe necessarily adds up to 100%. As it turns out, there is actually no reason whatsoever for this to be the case. We seem to live in a "critical universe," in which all fractional components sum to 1. We could just as easily have ended up in a universe where this number is either >1 or <1. These are, respectively, called "closed" and "open" universes. The total amount of "stuff" (matter + dark energy) in the universe determines its ultimate fate. In a closed universe, the universe will inevitably collapse back upon itself (unless it contains a lot of dark energy) in what we like to call the Big Crunch. An open universe will continue to expand and cool forever, a fate often coined the Big Freeze. (The Big Rip is a different scenario, in which dark energy itself grows strong enough to eventually tear apart galaxies, stars, and even atoms.) (Man, cosmologists come up with a lot of catchy names for things like this.)

What I discussed above is often called the curvature of the universe, and is parameterized by k. k is positive for a closed universe, and negative for an open universe. While some of the theoretical possibilities you can create by playing around with universes with curvature are rather entertaining, we don't actually need to worry about this. Planck's measurement of the curvature of our universe is consistent with k = 0 to the full precision of the data (the reported value is 0.0000). That's quite a lot of certainty.

If you recall another part of my intro to cosmology post, I said that inverting Hubble's constant can give us a rough age for the universe, assuming constant expansion velocity (which we know isn't true, so consider this an upper limit on the age of the universe). With the new value of H0, we can determine that the age of the universe is 14.42 billion years. Accounting for how the expansion rate has actually changed over time (slowed by matter, accelerated by dark energy), the newly derived age of the universe is 13.7965 billion years (not a huge change from the final WMAP results, really).
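Since this inversion trick keeps coming up, here's a minimal sketch of the arithmetic (the constants and function name are mine, just for illustration):

```python
# Naive age of the universe: invert H0, assuming a constant expansion
# rate (an upper-limit-style estimate, as explained above).
KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one billion years (Gyr)

def hubble_time_gyr(h0_km_s_mpc):
    """Convert H0 in km/s/Mpc to a rough age in billions of years."""
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC  # H0 in units of 1/s
    return 1.0 / h0_per_sec / SEC_PER_GYR

print(round(hubble_time_gyr(67.8), 2))  # Planck's H0 gives ~14.42 Gyr
```

Note that a smaller H0 means a slower expansion, and therefore an older universe, which is why Planck's age came out larger than WMAP's naive estimate.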

Another particularly important measurement Planck made was looking at the angular sizes of the anisotropies in the CMB. Remember, these tiny fluctuations grew up into the matter distributions we can observe in the universe today through what are called baryon acoustic oscillations (the parts of that Wikipedia article that are purely qualitative seem pretty straightforward, so if you're really interested, give it a read), so those fluctuations are pretty important. What's more is that each peak in the spectrum of the anisotropies contains different information about the universe as a whole, so making these measurements as precisely as possible is very important. Here I present, side-by-side, the measurements of the angular scale of CMB anisotropies as measured by WMAP and Planck, respectively.
Comparison of measurements of the power spectrum of CMB anisotropies from WMAP (left) and Planck (right). Note the change in x-axis length, with Planck's result going out to l = 2500. Image credit: NASA/WMAP Collaboration (Bennett et al. 2013) and ESA/Planck Collaboration (Ade et al. 2013).
Wow. That's quite a difference. While WMAP resolved the first three peaks (which, in itself, was brilliant work), Planck convincingly resolves 5 peaks and (unless I'm fooling myself visually) hints at 6th and 7th peaks as well. Of course, you'd have to find a real cosmologist to tell you what each peak actually means, or you can do what I do, and read the Wikipedia article on the CMB.
"The peaks contain interesting physical signatures. The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak - ratio of the odd peaks to the even peaks - determines the reduced baryon density. The third peak can be used to get information about the dark matter density."
 That's pretty damn cool. But now I wonder what all those other peaks tell us about...

Lastly, I'll address a topic that is very near and dear to my heart: neutrinos. For now, all you need to know about neutrinos is that they are very, very light particles, and we observe three different types (which we generally call "flavors" for some reason) in our particle physics experiments today. Turns out that you can actually use CMB data to determine the number of types of neutrinos there are in the universe. I wrote a little paper about this once and I won't bore you with the details. In short, the evolution of the very early universe is surprisingly sensitive to how many different types of this particle with almost negligible mass there are. Better yet, we can even find out the mass of all three neutrino types summed together (not the total mass of neutrinos in the universe, just the mass of one type plus the mass of the second plus the mass of the third).

Later WMAP results (WMAP 7-year and WMAP 9-year) calculated that, apart from the three flavors of neutrinos we already knew about, there was a fourth type flying around out there (the uncertainties disfavored just three neutrino flavors at the two-sigma level). This was a particularly big deal because the Standard Model of particle physics only has room for three flavors of neutrino. Needless to say, neutrino lovers like myself were pretty excited, and waited eagerly to see what Planck had to say on the subject. Survey says.... three neutrino flavors. Oh well. But let's not forget the mass constraints! Planck's data says that the summed neutrino masses come to less than 0.23 eV (1 eV/c² = 1.78×10⁻³⁶ kg), nearly a factor of two lower than WMAP's calculations.

Well, that's all that I can possibly say right now about the Planck results. For those of you expecting to hear something about inflation, I'm sorry to disappoint, but I don't really know enough about inflation to make any meaningful remarks. For the more specific questions, I highly recommend asking a real cosmologist.

Thursday, March 21, 2013

The History of the Universe (very, very abridged)

Today, the first results from the Planck mission are being released. This is an event that many of us have been waiting for since the launch of the Planck satellite in May of 2009. However, in order to explain why the Planck mission is so important, I'll first need to talk about some cosmology. Otherwise, for you non-scientists, why would you care about anisotropies in the cosmic microwave background?

Cosmology is essentially the history of the universe. Like history in general, we know pretty well what has happened recently because most of the evidence is pretty fresh and relatively easy to find and interpret. As you go back in time, however, history becomes more and more uncertain as evidence gets fuzzier and fuzzier. You can eliminate some of the crazier theories about the past because you know pretty well what happened next, but the uncertainties become greater. At that point, you can only really speculate about what happened in the past until you get some really, really good evidence. Luckily for us, we do have some really, really good evidence that tells us what the universe was like in the very early days, but I'll get back to that.

So, what are some of the fundamental cosmological observations we've made about the universe?

We'll start with the accidental discovery made by Edwin Hubble in 1929 that the universe is expanding. Hubble measured the distances to galaxies outside of our Local Group using a type of star known as Cepheid variables. Cepheid variable stars are very bright, pulsating stars whose pulsation period depends directly on their intrinsic brightness. The so-called period-luminosity relation is shown graphically below. Therefore, if you can measure the period of a Cepheid variable (which you can do over pretty large distances because they're so bright), you know how bright it really is. By comparing how bright an object really is to how bright you observe it to be, you can determine how far away it is (because you know light drops off as the inverse square of the distance).

The standard Period-Luminosity relation with period measured in log(days) and luminosity given in the funky magnitude scale, where smaller numbers mean the object is brighter. Image credit: Pietrzynski et al. 2010. Nature 468
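To make the "compare intrinsic to apparent brightness" step concrete, here's a toy sketch of the inverse-square-law distance calculation (the numbers are made up for illustration, not real Cepheid data):

```python
import math

# Flux falls off as the inverse square of distance: flux = L / (4*pi*d^2).
# Knowing the intrinsic luminosity L (from the pulsation period) and the
# measured flux, we can solve for the distance d.
def distance_m(luminosity_w, flux_w_m2):
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

L_SUN = 3.828e26  # watts; real Cepheids are thousands of times brighter

# The key scaling: a source 4x fainter is 2x farther away.
d1 = distance_m(L_SUN, 1e-10)
d2 = distance_m(L_SUN, 0.25e-10)
print(d2 / d1)  # 2.0
```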
Hubble also measured the galaxies' redshifts. What this means is that he measured the Doppler shift of particular lines in the spectrum of each galaxy. The Doppler shift you measure for an object can tell you whether it is moving toward or away from you. In the case of redshift, you observe that a particular spectral line that you know very well is shifted to a slightly longer wavelength (in the visible part of the spectrum, this would make it appear more red, hence "redshift"). The amount by which the line is shifted tells you how fast the object is moving.

So when Hubble combined his measured distances with his measured redshifts, he saw that all of the galaxies he observed were moving away from us. Even better still, the farther away they are, the faster they're moving. This simple observed relation is known as Hubble's Law, and directly relates the recessional velocity of a distant galaxy (far enough that it's not affected by the gravity of galaxies in the Local Group) to its distance as
v = H0d 
where v is the recessional velocity, d is the distance, and H0 is Hubble's constant. Hubble's constant is easily one of the most important parameters to measure in all of cosmology. In fact, NASA even launched a very well-known space telescope with the primary mission of measuring H0 that bears Hubble's name. As of now (before the Planck release), the most accurately known value of H0 is 69.32 km/s/Mpc. To explain the units: for every 1 million parsecs (3.26 million light-years) a galaxy is away from us, it is receding about 69 km/s faster.
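As a quick sanity check on those units, here's Hubble's law in a few lines (the function name is mine):

```python
# Hubble's law: v = H0 * d. With H0 in km/s/Mpc and d in Mpc,
# the recessional velocity comes out directly in km/s.
H0 = 69.32  # km/s/Mpc, the pre-Planck value quoted above

def recession_velocity(distance_mpc, h0=H0):
    return h0 * distance_mpc

print(recession_velocity(1))    # ~69 km/s for a galaxy 1 Mpc away
print(recession_velocity(100))  # ~6,900 km/s at 100 Mpc
```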

What are the main consequences here? First off, Hubble's discovery brought the prevailing view of the steady-state universe into question (it should have thrown it completely out the window, but astronomers are often very stubborn). Second, if you know that the universe is expanding now, that means it was smaller in the past. Specifically, if you know the value of Hubble's constant, you could invert it to figure out the age of the universe. Currently, this yields an age of 14.12 billion years. This calculation of age assumes a constant expansion rate, which is certainly not the case, but we'll get to that later. If every other possible factor is taken into account, then we get an age of 13.77 billion years.

Still following from Hubble's law, if the universe really was smaller in the past, then that means that all of the matter and energy in the universe used to be more tightly packed. If it was more tightly packed, the matter and energy densities would be higher, and the universe as a whole would be hotter. If we go back in time far enough, the densities should be high enough (from everything getting squeezed together) that the universe would be both hot and opaque (everything is dense enough that radiation will always have something to interact with). Since you guys know I love to reference Kirchhoff's laws of spectroscopy, you should know what's coming next: a hot opaque body produces a thermal spectrum.

This was predicted in a now-famous paper written by Ralph Alpher and George Gamow. But because Gamow, Alpher's advisor, had a very geeky sense of humor, he added Hans Bethe to the paper, such that the author list became "Alpher, Bethe, Gamow," and it is now known as the alpha-beta-gamma paper (Alpher was not amused). In this paper, Alpher predicted that the first elements could be formed in the very, very early universe, corresponding roughly to the first 17 minutes of the universe's history. Better yet, his predictions of the abundances of light elements (mostly hydrogen, helium, and lithium with trace beryllium and boron) were mostly spot-on. In follow-up work that same year, Alpher (with Robert Herman) predicted that a hot, dense early universe should have emitted thermal radiation that would be visible today, but very, very highly redshifted (increased in wavelength by a factor of ~1,100).
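That redshift factor of ~1,100 maps directly onto temperature: a blackbody spectrum redshifted by (1 + z) just looks like a cooler blackbody. A quick sketch (the recombination temperature of ~3,000 K is a standard ballpark figure, not from this post):

```python
# A blackbody spectrum redshifted by a factor (1 + z) is still a
# blackbody, but with its temperature lowered by that same factor.
z = 1100          # approximate redshift of the CMB, as quoted above
T_emitted = 3000  # K, rough temperature of the universe at recombination

T_today = T_emitted / (1 + z)
print(T_today)  # ~2.7 K: the microwave background we actually observe
```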

The prediction of the highly redshifted thermal radiation from the early universe was a big deal even then, so many astronomers set out to find it. They weren't having too much luck until a pair of Bell Labs engineers (Arno Penzias and Robert Wilson) accidentally stumbled across a funny background signal in a long-range radio receiver at Bell Labs in Holmdel, New Jersey. (See! Good things can come out of New Jersey!) This was the predicted thermal radiation from the early universe. For this work, Penzias and Wilson won the Nobel Prize in Physics for 1978, and Alpher got little to no recognition for his contributions.

Since this initial observation, three different satellites have been launched to study the cosmic microwave background (microwave because the radiation has been redshifted all the way into the microwave part of the spectrum). The first of these was the Cosmic Background Explorer (COBE) mission, which, for the first time, measured the spectrum of the background radiation. The result was simply astounding: the universe is one of the most perfect thermal emitters ever observed. This discovery won a Nobel Prize in 2006 and has been immortalized in the following xkcd comic.



More interesting than the remarkable perfection of the cosmic microwave background were the very slight imperfections, or anisotropies, in the background. Measuring these anisotropies was the goal of the Wilkinson Microwave Anisotropy Probe (WMAP). WMAP was launched in 2001, and was retired into a graveyard orbit around the Sun in October of 2010. The most well-known result from WMAP was the following image of the anisotropies in the cosmic microwave background.

Keep in mind that these fluctuations are enhanced for the sake of the image, but are actually absolutely tiny, about 1 part in 100,000.

There is a lot of very important information contained in this one image, and that is a subject that has been covered thoroughly in many papers over the years, so I won't even try to get into that here. The most basic point to know, however, is that these tiny fluctuations in the temperature of the cosmic microwave background come from very small changes in the density of matter in the early universe. The reason they are so important is because these tiny fluctuations are what eventually grew into the large-scale structures of the universe that we observe today.

The newest mission to measure the cosmic microwave background, the European Space Agency's Planck mission, released its first results just this morning, which is what sparked this post: I want to talk about what Planck has found while you guys still know what the heck is going on. In short, Planck is a better version of WMAP, measuring these fluctuations even more precisely than ever before, which will give us even better information about the content of the universe.

The last important point to cover when providing an introduction to cosmology is the existence of both dark matter and dark energy. While many people have heard these buzzwords before, you don't often get too much of a context as to what these terms mean, and how we know they're real.

Dark matter is, as it kind of sounds like, matter that we can't see directly. We can detect dark matter indirectly by observing its gravitational influence on normal matter. Its existence was first postulated by the infamous old fart Fritz Zwicky based on his measurements of galactic clusters. Zwicky used the brightnesses of galaxies in a cluster to determine their masses (the brightness of a galaxy relates to the total mass of stars in it) and measured the motions of the galaxies in the cluster. From his calculations, there was no way that the clusters could be held together without a lot of unseen matter contributing its mass and gravity to the cluster. But because Zwicky was such an old fart, no one really took him seriously.

More evidence for the existence of unseen matter came from the work of Vera Rubin (who is an absolutely adorable old woman now, and I've been fortunate enough to attend one of her talks). Rubin worked on measuring the so-called "rotation curves" of galaxies. A rotation curve tells us how the rotational speeds of objects in a galaxy change based on how far away they are from the galaxy's center. This can be measured by looking for the redshift (and blueshift, which is the opposite of redshift) of spectral lines from the galaxy, but rather than looking at the galaxy as a whole, like Hubble did, one looks at different locations in the galaxy. Like Zwicky, Rubin saw that the motions of stars in the galaxies she looked at could not be explained purely by the luminous matter content of the galaxies.

There is other, completely independent, compelling evidence for the existence of dark matter in the universe that we have only been able to find more recently. We can actually get information about the dark matter content of the universe from the fluctuations in the cosmic microwave background. Another very important measurement we can make is to look at how matter in our universe is grouped together. In general, galaxies exist in large clusters (like those studied by Fritz Zwicky), and the separation between galaxy clusters is very strongly dependent on the amount of total matter (normal stuff + dark matter) in the universe.

Now we're going to jump back a bit to measuring the Hubble constant and the Hubble Space Telescope. In the early 1990s, two competing groups of scientists set out to measure the expansion history of the universe: the Supernova Cosmology Project, led by Saul Perlmutter, and the High-Z Supernova Search Team, led by Brian Schmidt. (Note that astronomers and cosmologists use the letter z to denote redshift, so a "high-z" search is a high redshift search. From Hubble's law, this means they're looking at objects that are far away.) Both groups used a special type of supernova known as a Type-Ia (type one-A) supernova, which occurs when a white dwarf becomes too massive (through some highly debated process that will not be discussed here) and explodes. It turns out that Type-Ia supernovae (plural of supernova because "nova" is Latin) explode in a very consistent manner, and emit roughly the same amount of energy every time another one goes off. Like with Cepheid variables, when we know how bright something is intrinsically, we can determine how far away it is.

Both groups used the Hubble Space Telescope to measure Hubble's constant, and did so for the universe as it was a few billion years ago. Adam Riess, a researcher with the High-Z team, analyzed the data and found something weird and totally unexpected. When he went to check with the other team doing the same work, they saw the same thing. Rather than tell you what they found right off the bat, I'll show you the graph. The top half of the plot shows the distance to the galaxy (proxied by the apparent brightness of the supernovae) plotted against the redshift of the galaxy. If Hubble's law were 100% correct, the data would plot as a straight line with a slope of H0. The bottom half shows the residuals, that is to say, the data with the straight line predicted by Hubble's law subtracted off.

So, what do you see? Does it look like the data fall on a straight line? If you said no, you came to the same conclusion as Adam Riess did. What's the big deal, you ask? If the actual trend is above the line one would expect for Hubble's Law, then you have a universe that is not only expanding, but accelerating as well.

Yeah, that's some crazy stuff. Why would our universe be accelerating, though? Wouldn't the cumulative gravity of everything in the universe have the opposite effect? The solution cosmologists have come up with so far is dark energy. While we have even less of an idea of what dark energy is than we do dark matter, we know that it exerts a repulsive force counter to gravity on the scale of the entire universe such that, rather than slowing down or coasting, the rate at which the universe is expanding is steadily increasing.

The last thing I will show you here is the composition of the universe, which we have determined from studying the cosmic microwave background, how large-scale structures in the universe are distributed, and a number of other methods that I don't even understand fully, and will not attempt to explain. Do note that this is the data from before the Planck data release. Planck's results will be discussed in my next post.



I do hope you have enjoyed my whirlwind tour of cosmology. I apologize for the length, but there's just so much to cover to make sure that everyone is up to speed such that the Planck results will actually make sense. There's way more that I could also have covered, but I tried to stick to the basics and connect them through a somewhat coherent storyline as well. Thanks for sticking through to the end, and tune in tomorrow for my write-up on the Planck data once I've had time to digest it.

Wednesday, March 20, 2013

Humanity has NOT gone interstellar

As I'm sure many of you saw, today the American Geophysical Union published a press release stating that, for the first time in our history as a species, humanity has left the solar system. That's right. The space probe Voyager 1, launched on September 5, 1977, has moved beyond the edge of the solar system into interstellar space. Obviously this is very exciting news an...

Wait a minute, what's this? Oh... huh. It appears that NASA's Voyager team has quickly released a press release of their own. According to the Voyager science team, Voyager 1 has NOT left the solar system or entered interstellar space. Of course, this part of the story is getting pretty widely ignored in favor of the more sensational idea that something we made has left the solar system. At this point, even the AGU has submitted a correction. Check the press release link again and you'll see
"CORRECTED PRESS RELEASE
Please note that the headline on this release has been changed to better represent the findings reported in the study "

right at the top of the page.

Of course, none of this changes how far the Voyager 1 probe has travelled. It is still absolutely remarkable what the Voyager missions have done for us from an astronomical perspective, and I don't think we should ever forget that, regardless of what boundaries it has or has not crossed. As of the time of this writing, Voyager 1 is 123.6886 times farther away from the Sun than Earth is, and receding from the Sun at about 17 kilometers per second.
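One way to appreciate that distance: work out how long Voyager 1's radio signals take to reach us (constants rounded; one-way trip only):

```python
# One-way light travel time from Voyager 1's distance of ~123.69 AU.
AU_KM = 1.495979e8    # kilometers per astronomical unit
C_KM_S = 299792.458   # speed of light in km/s

distance_km = 123.6886 * AU_KM
travel_hours = distance_km / C_KM_S / 3600
print(round(travel_hours, 1))  # roughly 17 hours each way
```

So a round-trip command-and-reply with the probe takes well over a day.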

Also as of this writing, you may notice that that website actually has two different counters: one keeping track of Voyager 1's distance from the Sun, and another tracking its distance from Earth. You may also notice that while the distance from the Sun is increasing, the distance from Earth is actually decreasing. Don't worry; this isn't wrong. It just so happens that Earth is at a point in its orbit around the Sun where we happen to be moving faster than Voyager 1 in the same direction as Voyager 1 is travelling, so we're temporarily catching up with it. But I digress.

We can use this latest communication disaster as a starting point for a very interesting question though: where does our solar system end? This is actually a really tricky question, and I can't say we have a definitive answer. I sure as heck don't know, but I'll try to summarize where current thinking is on the issue.

Normally one thinks about the solar system as a collection of large bodies orbiting the Sun, in which case, you have the eight planets of the solar system and their moons. But we also have the dwarf planets and their moons (Pluto has at least 5 now). And we can't forget the asteroid belt. That's it, right?

Wrong.

Beyond the orbit of Neptune, we have a collection of small, icy/rocky objects that appear to resemble the hypothesized composition of Pluto. This is the Kuiper Belt. The Kuiper Belt runs from about 30 AU to 50 AU (where 1 AU is the average distance between Earth and the Sun), and is most likely made of the leftovers from the formation of the solar system. One of the telling hints is that the Kuiper Belt, rather than being a spherical cloud, is more of a disc that appears to share a plane with the rest of the solar system. But at that distance from the Sun, the leftovers just weren't close enough together to attract one another through their mutual gravity, so they never formed any planets. Is the Kuiper Belt the edge of the solar system?

Still no.

The way that astronomers seem to define the edge of the solar system these days is the edge of the heliosphere. The heliosphere is the region surrounding the Sun that is dominated by the solar wind, the constant stream of high-energy particles emitted by the Sun. You can even see evidence for the solar wind in Earth's aurorae. The lights are caused by the charged particles (electrons and protons, mostly) interacting with Earth's magnetic field, which causes them to move toward the magnetic poles and emit radiation. If we go far enough away from the Sun, the solar wind will weaken, but it will also begin to run into gas that is not associated with the Sun at all. The key is that there is no reason to believe that this will happen at a single, easily defined distance from the Sun. More likely, it will be a transition region.

How will we be able to tell? Well, both Voyager probes actually have detectors made for detecting these types of particles. So not only will they see a decrease in the particles coming from the Sun, but they should see an increase in cosmic rays from elsewhere in our galaxy. Interestingly enough, this is what we see from Voyager 1's detectors.

Data from Voyager 1's cosmic ray detectors showing the relative detection rates of solar wind particles (shown in blue) and galactic cosmic rays (red and black). Image credit: Webber & McDonald (2013)
Wait, isn't that exactly what I just described? Yes. But there's way more going on here. According to an update released by NASA's Jet Propulsion Laboratory in December of 2012, the cosmic rays are only one indicator that the Voyager team is studying. The Sun also provides another detectable signal in the form of its magnetic field. In the region through which Voyager 1 is currently travelling, it appears that the Sun's magnetic field is currently getting stronger. While this is evidence that Voyager 1 is getting really close to the beginning of true interstellar space, it doesn't seem to be there quite yet.

Monday, March 18, 2013

Real life catches up

For the first time since my introductory post, it seems that I don't have any amazing new science results to talk about. This is rather fortunate, because I'm also a major slacker (as you can probably guess from the length of my previous posts that were done when I could have been doing homework instead) and have quite a bit of homework due in a few days. So unless something particularly awesome comes up, this week will be rather dull from me.


Thursday, March 14, 2013

Yep. It's a Higgs.

Looks like all the particle physicists can go home now.

OK, seriously now. Physicists at CERN have confirmed that the bump in the data they saw around 126 GeV/c² is, in fact, a Higgs Boson. (GeV/c² is a unit of mass that is specifically used for measuring the masses of objects on a particle physics scale; 1 GeV/c² is about 1.8×10⁻²⁷ kilograms.)
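For the curious, here's that unit conversion spelled out (the conversion factor is the standard one via E = mc², not a number from the announcement):

```python
# Converting the Higgs mass from particle-physics units to kilograms.
GEV_C2_IN_KG = 1.78266e-27   # kg per GeV/c^2
PROTON_KG = 1.6726e-27       # proton mass, for comparison

higgs_kg = 126 * GEV_C2_IN_KG
print(higgs_kg)                     # ~2.2e-25 kg
print(round(higgs_kg / PROTON_KG))  # ~134 proton masses
```

A single particle weighing as much as 134 protons is part of why you need such an enormous collider to produce one.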

But wait, you may ask, didn't they announce finding (something that looked an awful lot like) a Higgs Boson back in December of 2011 (with this infamous, god-awful powerpoint slide featuring the most hated font in the known universe: Comic Sans)? Yes, and they published the discovery paper the following summer, when they could distinctly separate the signal from the noise to a probability of 1 in 3.5 million. Up until then, however, they only had what can best be described as a Higgs-like signal. The bump was in the right place, energywise, but they couldn't confirm that it actually was a Higgs boson.

Now, on this most auspicious of days (Pi Day, which shouldn't matter to the Europeans, because they write their dates backwards anyway), we have actual confirmation that the signal detected at 126 GeV/c² matches specific predictions made by the creatively named Standard Model of particle physics. Specifically, physicists working at CERN have finally gathered enough data to see what this mysterious particle decays (eventually breaks up due to instability) into, and the observations seem to match.

You may have noticed that, throughout this article, I have been using the indefinite article "a" rather than the definite article "the" to refer to the Higgs that we've seen. This was not just a fluke on my part. Variations on the Standard Model predict that we should see multiple different types of Higgs Bosons, each with different intrinsic properties that only really exist on particle scales (unless you make a Bose-Einstein condensate; those things are weird and have some awesome properties). Right now, the decay modes seen only confirm that what physicists have seen so far is one of those various Higgs bosons. It will take a lot more data gathering to get the information necessary to tell us if we're seeing the boring Standard Model Higgs, or one of its exotic cousins. Despite my opening statement, there's actually still a lot of work to do.

Personally, I'm hoping for the latter. While the Standard Model has been wildly successful, it has actually almost been too successful, in my opinion (damn theorists...). The most fun times in physics are when no one has any clue what is going on and the field as a whole has to make a huge leap forward in order to catch up with the crazy new observations. Such a leap would most likely have profound consequences on our understanding of other fields as well, particularly cosmology. Maybe we'd even get a good theory of dark matter!

Also, here is a really cool gif showing the data gathered over time at the ~126 GeV/c² peak that indicates the Higgs boson. Specifically, what you see here are the data points along with every other known interaction that yields a signal there. The discrepancy between the known signals and the data is what indicates the existence of a Higgs boson at 126 GeV/c².

Credit: CERN, ATLAS Collaboration

Wednesday, March 13, 2013

How many habitable Earths are out there?

Now for the second installment of my two-part report on recent publications by Ravi Kopparapu. This time I shall report on A Revised Estimate of the Occurrence Rate of Terrestrial Planets in the Habitable Zones Around Kepler M-Dwarfs (Kopparapu 2013). This paper builds off of his previously discussed Habitable Zones paper by applying the calculated habitable zone boundaries to estimates of how many Earth-like planets are orbiting small, cool stars known as M-dwarfs, or red dwarfs. Ravi was inspired by a recent publication by Dressing & Charbonneau (2012) making very similar calculations. Dressing & Charbonneau (2012) predicted 0.15 habitable planets for each red dwarf, while Kopparapu (2013) predicted, at a minimum, three times that number. The main difference is that Dressing & Charbonneau was published before Ravi's habitable zone paper and, therefore, used the habitable zone estimates from Kasting et al. (1993). Just a month after publication, Dressing & Charbonneau (2012) was already out of date!

Man. I hope you made it through that paragraph alive! Don't worry though, it gets much more comprehensible from here.

Both of these papers used data from NASA's Kepler mission to estimate the number of planets orbiting red dwarf stars at a distance that would put them in the star's habitable zone. How they did this is particularly interesting, and definitely worth some explanation.

The Kepler mission is designed to detect exoplanets that transit (or eclipse) their host stars by looking for the decrease in light that happens when the planet is blocking some of its star's light. Of course, only a very small fraction of planets are in systems with exactly the right alignment that we can see them transit. Fortunately, we can account for this statistically. What it amounts to is determining the probability that any random planet orbiting a random star will transit its star, and turning that on its head. As a very simple example, if we knew that 1 out of every 100 stars hosting a planet would feature a transiting planet (we can calculate this probability) and we detect 4 transiting planets, we expect that there are probably around 396 planets that we can't see transiting.
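The back-of-the-envelope correction above can be sketched in a few lines of Python. (The 1-in-100 transit probability is the made-up example from the text, not a real Kepler number.)

```python
def unseen_planets(n_detected, p_transit):
    """Scale up a count of detected transiting planets by the geometric
    transit probability to estimate how many planets were missed."""
    n_total = n_detected / p_transit   # implied total population
    return n_total - n_detected        # the ones we can't see transit

# The example from the text: a 1-in-100 transit probability and 4 detections.
print(unseen_planets(4, 0.01))   # → 396.0
```

For a circular orbit, the geometric transit probability is roughly the star's radius divided by the planet's orbital distance, which is how the "1 in 100" style numbers get calculated in the first place.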

Dressing & Charbonneau (2012) and Kopparapu (2013) both made this calculation specifically for planets in the habitable zones of red dwarf stars. But what is a red dwarf, you may ask, and why do we care? Red dwarfs are the class of stars with the lowest possible temperature and mass range that an object can have and still be a star (any lower and it would be a brown dwarf, which I briefly discuss here). Because of their low masses, red dwarfs are also not very luminous, and that low luminosity means they can have very long lifetimes. The typical lifetime for a red dwarf is around 100 billion years (or more), which is 10 times longer than that of the Sun, and around 7 times longer than our universe has even existed!

Red dwarfs also have another interesting advantage in that they are, by far, the most common type of star in the Milky Way, making up an estimated 75% of all stars in our galaxy. This is due to a surprisingly consistent trick of nature: the universe prefers to make many more small things than large ones (stars, galaxies, planets, asteroids, etc.). Because red dwarfs are so common and have such long lifetimes, astronomers who study exoplanets have long been excited about the prospects for life on planets around red dwarfs.

In my previous post, I addressed the ways that Kopparapu et al. (2013) calculate the inner and outer edges of the habitable zone given their new climate model. But there is another way to calculate habitable zone boundaries that is also addressed in their paper that I have saved until now. These are the so-called "recent Venus" and "early Mars" limits for the inner and outer edges, respectively. In short, these limits come from our observations of each planet and our inferences regarding how long each has been without liquid surface water. For Venus, this is at least 1 billion years ago, and for Mars, this is about 3.8 billion years ago. A key point here is to note that the Sun has steadily increased in brightness over time for fairly well-understood reasons that will not be addressed here. (Though I may cover it in a later post; I *do* like stellar evolution.)

By combining the respective locations of Mars and Venus with the times at which we think they last had water, we can determine the solar flux on each planet in the past. Knowing this, we can scale that flux to the Sun's modern luminosity and determine the distance from today's Sun that would correspond to the received fluxes on recent Venus and early Mars. This yields an inner edge of 0.75 AU and an outer edge of 1.77 AU. Note that these limits allow for a much wider range of habitability. As such, these limits on the habitable zone boundaries are referred to as the optimistic limits. To see the actual calculations worked out, you can read either of Ravi's recent publications.
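Here's a rough sketch of that flux scaling in Python. The past solar luminosities (about 92% of today's value 1 billion years ago, and about 75% around 3.8 billion years ago) are approximate values I'm assuming for illustration, not numbers taken from the paper:

```python
import math

def equivalent_distance_today(d_planet_au, l_past_over_l_now):
    """Distance from today's Sun receiving the same flux the planet got
    when the Sun had its past luminosity. From the inverse-square law,
    F = L / (4*pi*d**2), so d_today = d_planet * sqrt(L_now / L_past)."""
    return d_planet_au * math.sqrt(1.0 / l_past_over_l_now)

# Venus (0.723 AU) ~1 Gyr ago, when the Sun was ~92% as bright as today:
print(round(equivalent_distance_today(0.723, 0.92), 2))   # ≈ 0.75 AU

# Mars (1.524 AU) ~3.8 Gyr ago, when the Sun was ~75% as bright:
print(round(equivalent_distance_today(1.524, 0.75), 2))   # ≈ 1.76 AU
```

Even with my rough luminosity guesses, this lands right on top of the paper's 0.75 AU and 1.77 AU optimistic limits, which is a nice sanity check on the inverse-square argument.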

For the paper I am currently addressing, Ravi did the above calculation but replaced the Sun with a red dwarf star; he also ran the new climate model to get the more conservative habitable zone limits. Kopparapu (2013) also allowed a larger range of planetary radii to count as Earth-size. Where Dressing & Charbonneau used the range 0.5-1.4 REarth, Ravi used 0.5-2.0 REarth (where REarth is the radius of the Earth).

Given this increased size range and the new habitable zone models, Kopparapu (2013) concluded that the conservative (model-derived) limits on the habitable zone yield an estimate of 0.51 habitable Earth-like planets per red dwarf, while the optimistic estimates yield 0.61 habitable Earthlike planets per red dwarf. Even without the increased planetary size range, Kopparapu (2013) calculates a conservative estimate of 0.48 and an optimistic estimate of 0.53 habitable Earths per red dwarf.

Wow. In short, even if we're not feeling very optimistic, it seems that every other red dwarf should have an Earth-like planet orbiting in the habitable zone. As someone just itching to be able to find life elsewhere in the universe, these results seem pretty promising.

Tuesday, March 12, 2013

The New Habitable Zone

So, this is going to be the first of two posts I make covering papers published this year by Dr. Ravi Kopparapu. The reason I cover both is that the most recently released paper (March 12) requires some knowledge of the first, which is itself very interesting, with some important consequences. But first, I'll do my best to provide you with the background necessary to understand what the heck is actually going on here.

The first paper I will review is Habitable Zones Around Main-Sequence Stars: New Estimates (Kopparapu et al. 2013). In brief, this is a re-doing of the famous Kasting et al. (1993) paper, which first developed the concept of the habitable zone and made the first calculations of where this region would be around main-sequence (non-evolved) stars.

So, for starters, let's break down some of this jargon to explain what's actually going on. A "habitable zone" is the region surrounding a star where you could put a planet somewhat like Earth, and it could have liquid water on its surface. Contrary to popular belief, being in the "habitable zone" does not necessarily mean that the planet has life, or could even support life as we know it. The title is just a statement about the planet's surface temperature. The reason that the habitable zone is a region rather than a single orbital distance is because water can exist as a liquid at a fairly broad range of temperatures: 273 K (0° Celsius) to 373 K (100° Celsius).

In the simplest possible scenario, the size of the habitable zone is determined solely by the planet's distance from its star. As in our own solar system (to first order), the closer a planet is to the Sun, the more radiation it receives, making it hotter. The boundaries of the habitable zone are set by the distances at which a planet is either too close to its star (so its water would boil away) or too far away (so its water would freeze).
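To make the distance-only picture concrete, here's a toy calculation for an airless, perfectly absorbing (zero-albedo) planet. Those simplifying assumptions are mine, purely for illustration, and you'll see in a moment why they're too simple:

```python
# Toy model: a blackbody planet with no atmosphere and zero albedo.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
F_EARTH = 1361.0    # solar flux at 1 AU, W m^-2

def equilibrium_temp(d_au):
    """Equilibrium temperature at distance d_au, from balancing absorbed
    sunlight (inverse-square flux) against emitted thermal radiation."""
    flux = F_EARTH / d_au**2
    return (flux / (4 * SIGMA)) ** 0.25

def distance_for_temp(t_kelvin):
    """Invert the above: since T scales as d**(-1/2), d = (T_1AU / T)**2."""
    return (equilibrium_temp(1.0) / t_kelvin) ** 2

print(round(equilibrium_temp(1.0)))          # ≈ 278 K at Earth's distance
print(round(distance_for_temp(373.0), 2))    # water boils inside ≈ 0.56 AU
print(round(distance_for_temp(273.0), 2))    # water freezes beyond ≈ 1.04 AU
```

Notice the naive "zone" this gives (roughly 0.56 to 1.04 AU) puts Venus comfortably inside it, which is exactly the problem the next paragraph raises.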

But this obviously isn't the full story, because the average surface temperature of Venus is much higher than the average temperature of Mercury (735 K for Venus compared to ~300 K for Mercury with massive variations between its day and night sides). The difference, as the late Carl Sagan pointed out in his dissertation, is the atmosphere.

As we've learned from a series of probes and landers (Venera, Magellan, Pioneer Venus, and many more), Venus has a very thick atmosphere (about 90 times the mass of Earth's atmosphere). Further, that atmosphere is about 96.5% carbon dioxide. Carbon dioxide is already pretty infamous as a "greenhouse gas" courtesy of global warming here on Earth. So now imagine an atmosphere with about one hundred thousand times more carbon dioxide than we have here on Earth causing our warming. That's going to be pretty damn hot.

What exactly do greenhouse gases do? Below I have included a figure illustrating the physical processes that cause the greenhouse effect on Earth. The yellow beam represents the incoming radiation from the Sun, and the red beams show infrared radiation coming from various things on Earth. You may notice that some of the radiation is reflected by Earth's surface and even more is reflected from Earth's clouds (this is a very important process whose consequences I'll describe later). The reflectivity of the planet is also going to be important because it determines how much radiation is actually absorbed by the planet. This absorbed energy is what warms the planet in question (the Earth, in this case).

Image credit: http://see-the-sea.org
That doesn't really answer the question, though. The answer comes from what we know about how light and matter interact. First, the Sun's energy (the part that isn't reflected) is absorbed by Earth's surface. Without an atmosphere, this would leave Earth with a surface temperature of about 255 K (too cold for liquid water). At this temperature (from Wien's law), Earth emits radiation mostly at infrared wavelengths. Here's where our greenhouse gases become relevant. A greenhouse gas is a greenhouse gas because of its ability to absorb light at infrared wavelengths. This traps some of the radiation emitted by Earth in the atmosphere, which heats the atmosphere up. As shown by the image above, the atmosphere, upon being heated by the absorbed infrared radiation, will itself emit infrared radiation. Some of this goes out to space, and some is absorbed by Earth, which warms the surface further. The net effect of this back-and-forth is to raise the average surface temperature of Earth to a balmy 288 K. And that's the greenhouse effect in a nutshell.
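The 255 K and 288 K figures can be reproduced with the classic one-layer greenhouse toy model. To be clear, this is a standard textbook sketch, not the model from Kopparapu et al., and the emissivity value below is tuned by hand to hit Earth's observed temperature:

```python
# One-layer greenhouse toy model: the atmosphere absorbs a fraction eps of
# the surface's infrared emission and re-radiates it, half up and half down.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
F_SUN = 1361.0      # solar constant at 1 AU, W m^-2
ALBEDO = 0.30       # Earth's reflectivity (mostly clouds)

def surface_temp(eps):
    """Surface temperature for infrared emissivity/absorptivity eps.

    Energy balance gives T_surface = T_eq * (2 / (2 - eps))**0.25,
    where T_eq is the airless equilibrium temperature (~255 K).
    """
    t_eq = (F_SUN * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
    return t_eq * (2.0 / (2.0 - eps)) ** 0.25

print(round(surface_temp(0.0)))     # no greenhouse: ≈ 255 K
print(round(surface_temp(0.78)))    # Earth-like IR opacity: ≈ 288 K
```

A fully opaque layer (eps = 1) would give about 303 K, so the fact that Earth sits at 288 K tells you the real atmosphere lets a fair amount of infrared leak straight to space.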

In summary, just through what we've talked about so far, we've identified some important factors in determining whether or not a planet is habitable.
1) Planets closer to their stars will be hotter.
2) Planets whose atmospheres (for whatever reason) have stronger greenhouse effects will be hotter.
3) Planets that reflect less radiation (and therefore absorb more) will be hotter.

The last factor we really need to account for is the star around which the planet is revolving. Planets whose stars are brighter will have higher temperatures. A brighter star also makes the boundaries of the habitable zone farther away from the star, and makes the habitable zone itself larger. This effect is illustrated below. Be sure to note the logarithmic distance axis that appears to compress the blue stripe denoting the habitable zone for higher mass (higher brightness) stars. This is in appearance only. If you look at the numbers corresponding to the inner and outer edges, you can see that the habitable zone is wider around more massive/brighter stars.

Credit: Chester (Sonny) Harman
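As a rough sketch of that scaling: if you assume the habitable zone boundaries move as the square root of the stellar luminosity (the standard inverse-square flux argument) and anchor them to the paper's conservative solar values of 0.99 and 1.70 AU, the widening is easy to see. The three luminosities below are illustrative values I've chosen, not from the paper:

```python
import math

def habitable_zone(lum):
    """Inner/outer habitable zone edges (AU) for a star of luminosity
    lum in solar units, assuming the same effective flux limits as the
    Sun and scaling distances as sqrt(L)."""
    inner = 0.99 * math.sqrt(lum)
    outer = 1.70 * math.sqrt(lum)
    return inner, outer

for lum in (0.01, 1.0, 25.0):   # faint red dwarf, Sun, bright hot star
    inner, outer = habitable_zone(lum)
    print(f"L = {lum:5.2f} L_sun: {inner:.2f} - {outer:.2f} AU "
          f"(width {outer - inner:.2f} AU)")
```

The zone around the faint red dwarf is both close in and narrow (a few hundredths of an AU wide), while the bright star's zone is several AU wide, matching what the figure shows once you account for the logarithmic axis.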
Well, that was an awful lot of setup to finally get to the punchline, but now we can talk about the paper that I meant to talk about in the first place.

What Dr. Kopparapu did was create a model of a planet and the interactions that occur in that planet's atmosphere. Then, as part of the model, he included a stellar model and the ability to put the planet at various distances from the star. Fundamentally, this is the same as James Kasting's aforementioned 1993 paper. The main difference, however, is the atmospheric model. Kopparapu et al. (2013) used new absorption coefficients for the greenhouse gases in the atmospheric model (carbon dioxide and water vapor), along with new calculations of Rayleigh scattering by water vapor and collision-induced absorption (as described in Section 2.1). Using the atmospheric model, they determined the distances of the inner and outer edges of the habitable zone around the Sun (and around main-sequence stars in general).

The inner edge of the habitable zone can be calculated in two different ways. The first is to find the "moist greenhouse limit", which is the distance at which the water vapor content in the stratosphere increases dramatically due to increased evaporation rates. When your stratosphere becomes water-rich, the planet can begin to lose its water to space (through a handful of interactions that I won't touch on here). This limit has gone from 0.95 AU to 0.99 AU for our Sun (keep in mind, Earth is, on average, 1 AU away from the Sun). The second way to determine the location of the inner edge is the "runaway greenhouse limit", at which point Earth becomes trapped in a runaway greenhouse-like scenario akin to that of Venus. This limit has gone from 0.84 AU to 0.97 AU, which is quite a change!

The outer edge of the habitable zone occurs where carbon dioxide in the atmosphere begins to condense out such that it can no longer contribute to greenhouse warming. The outer limit from the new models has moved outward a small amount, from 1.67 AU to 1.70 AU. It is noted, however, that this is a conservative limit, as it doesn't account for warming by carbon dioxide clouds.

Actually, this model doesn't really account for clouds, because it is a one-dimensional simulation (assumes spherical symmetry), and the phenomena that give rise to clouds require far more complicated three-dimensional simulations. Because clouds are the main contributors to a planet's reflectivity (albedo), they do have to be included in the initial calculations. This is solved by, as Dr. Kasting is fond of saying, "painting the clouds on the ground." This simply means the cloud albedo is assumed and just included in the general reflectivity calculations, but the amount of "cloud cover" remains constant.

Making cloud cover constant does have some important consequences on the range of the habitable zone. Specifically, not having dynamic clouds will shrink the habitable zone from both ends. On the inner edge, water vapor clouds will build up and reflect more incoming light, which will push the inner edge a bit closer to the Sun. On the opposite end, as stated above, carbon dioxide clouds would move the outer edge farther away from the Sun, also making the habitable zone larger overall.

Phew. That was an awful lot to write, and there is way more in the paper than I could possibly cover. If you're really interested in the other stuff, like habitable zones around other types of stars, read the paper (link provided above; it's well written and, I thought, very readable). I will also shamelessly plug Ravi's habitable zone calculator applet on his website.

Tomorrow, I will write about the paper that was just released on the arXiv today, also by Dr. Kopparapu. It deals largely with some of the consequences of this paper's work, so I'll save that discussion for another time!

Monday, March 11, 2013

Our Newest Neighbors (astronomically speaking)

Big news out of Penn State Astronomy. Dr. Kevin Luhman just published a paper in which he identified the third-closest set of objects to our solar system in the galaxy. I call this a "set of objects" rather than a "star system" because technically, it isn't a star system at all, but I'll come to that later.

Dr. Luhman specializes in studying brown dwarfs, objects that come up just short of being full-fledged stars. Brown dwarfs are not massive enough to sustain nuclear fusion reactions in the same way that stars do. While more massive brown dwarfs can fuse deuterium (a hydrogen nucleus with a neutron attached to it) and lithium early in their lives, these stages don't last for more than a billion years or so. Beyond that, brown dwarfs are basically just gravitationally bound balls of fairly cool gas ("cool" meaning between roughly 3000 K and 300 K, so "cool" in astronomical terms) that emit radiation because they are hot and opaque (see: Kirchhoff's Laws of Spectroscopy).

Dr. Luhman works primarily on studying and cataloging brown dwarfs with data from the Wide-field Infrared Survey Explorer (WISE), a NASA satellite active between 2009 and 2011, that conducted an all-sky survey in four wavelength bands: 3.4, 4.6, 12, and 22 microns (micrometers). The reason you need infrared images for this kind of work is that brown dwarfs, as previously stated, are fairly cool (by astronomy standards), and they don't emit well in the visible part of the spectrum. However, from Wien's Law, we know that the thermal emission from a brown dwarf will peak in the infrared. So, simply put, that's just where they're best seen.
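Wien's Law itself is a one-liner: the peak emission wavelength is a constant divided by the temperature. Here's a quick sketch (the 1000 K brown dwarf temperature is just an illustrative value I've chosen, not from the paper):

```python
# Wien's displacement law: lambda_peak = b / T, where b is Wien's constant.
WIEN_B = 2.898e-3   # Wien's displacement constant, m*K

def peak_wavelength_microns(t_kelvin):
    """Wavelength (in microns) where a blackbody at t_kelvin emits most."""
    return WIEN_B / t_kelvin * 1e6   # convert meters to microns

print(round(peak_wavelength_microns(5778), 2))   # Sun: ≈ 0.5 microns (visible)
print(round(peak_wavelength_microns(1000), 1))   # cool brown dwarf: ≈ 2.9 microns
```

A ~1000 K brown dwarf peaks near 3 microns, sitting right next to WISE's shortest band at 3.4 microns, which is exactly why an infrared survey is the right tool for the job.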

As part of its mission, WISE imaged the entire sky twice, which allows astronomers to observe regions of the sky at two different points in time. What Dr. Luhman initially saw was that, within a particular field of view, a certain object appeared to move in space over time. When he tracked down previous images of this region of the sky through the Two-Micron All-Sky Survey (2MASS) and the Digitized Sky Survey (DSS), he saw more of the same. The object was moving very quickly across the field of view, or (in astro-jargon) it had a high proper motion. The image below shows exactly what Dr. Luhman saw.
WISE J104915.57-531906 was discovered through its rapid motion across the sky, which is shown in these images taken between 1978 and 2010 by the DSS, 2MASS, and the WISE satellite. Credit: NASA/STScI/JPL/IPAC/University of Massachusetts.
Pretty cool, huh?

Well, it gets better. Based on the object's motion, Dr. Luhman also determined its distance from the solar system through astrometry, finding that it is 2.0±0.15 parsecs (6.5±0.49 light-years) away. This makes WISE J104915.57-531906 (don't worry about the name too much, it's just based on the object's coordinates) the third-closest object to Earth behind the Alpha Centauri system and Barnard's Star, shown schematically in the image below.
This diagram shows the locations of the three closest star systems to the Sun with the year of discovery indicated along with the name of the system. WISE J104915.57-531906 is the third-nearest system to the Sun, and the closest system discovered in nearly a century. Credit: Janella Williams, Penn State University.
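For the curious, the astrometric distance comes from measuring the parallax angle, and the conversion is as simple as unit conversions get: distance in parsecs = 1 / (parallax in arcseconds). The half-arcsecond parallax below is simply what the quoted 2.0 pc distance implies, not a value I'm taking from the paper:

```python
# Parallax-to-distance conversion (the definition of the parsec).
PC_TO_LY = 3.2616   # light-years per parsec

def distance_pc(parallax_arcsec):
    """Distance in parsecs from an annual parallax in arcseconds."""
    return 1.0 / parallax_arcsec

d = distance_pc(0.50)   # a ~0.5" parallax, huge by stellar standards
print(f"{d:.1f} pc = {d * PC_TO_LY:.1f} light-years")
```

For comparison, most stars have parallaxes of small fractions of an arcsecond, so a shift this large is itself a giveaway that the object is extremely close.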

But wait, there's more! When doing follow-up observations of the object with the Gemini Multi-Object Spectrograph (GMOS) on the 8.1 meter Gemini South telescope, Dr. Luhman saw that the "object" was actually a binary system consisting of two brown dwarfs. In a binary system, the two objects are gravitationally bound to one another and orbit a common center of mass (like Earth and its Moon, but bigger in this case). These two brown dwarfs happen to be orbiting at a distance of roughly 3 astronomical units (AU, the mean distance between Earth and the Sun) from one another. The high-resolution image recorded by GMOS compared to that from WISE is also shown below.

WISE J104915.57-531906 is at the center of the larger image, which was taken by WISE. Inset is the image from Gemini Observatory, revealing the object to be a binary system. Credit: NASA/JPL/Gemini Observatory/AURA/NSF

I may not be a brown dwarf aficionado, but I gotta admit that this is pretty cool. I mean, astronomically speaking, this system is right in our own backyard, and it took us this long to properly identify it. We have every reason to believe that many more systems like this could be lurking out there just waiting for us to stumble across them.

Keep it up, Kevin!

For anyone interested, a copy of Dr. Luhman's publication is available here.

Introduction

Hi everyone reading this (i.e. no one). I'm Danny, and welcome to the terrible place that is my mind! I'm currently a graduate student in Astronomy and Astrophysics at Penn State University with a rather unfocused array of interests including exoplanets, planetary habitability, astrobiology, supernovae, neutrinos, nucleosynthesis, stellar evolution, Neil deGrasse Tyson (yes, he counts as a separate interest all by himself), and science (particularly physics/astronomy) education. In short, I'm a nerdy, somewhat athletic, tattooed metalhead with more sarcasm than should exist within any single person.

I decided to start this blog largely as a challenge to myself of "can I actually do this?" Plus, I figure with wanting to eventually go into public outreach, science policy, and/or science education, practicing my written communication skills can't hurt. Worst case scenario: this ends up as a dead project that I never update and no one ever reads. Best case scenario: this ends up as a fantastic creative exercise that I update often and no one ever reads. I obviously don't have particularly high hopes for readership, largely because I can't really fathom people finding my thoughts or opinions interesting enough to keep them coming back for more. But hey, I can always be pleasantly surprised!

You may be wondering "What can I expect out of this blog in the future?" Well, I'm glad you asked! The topics of discussion will, of course, depend largely on what's going on at the time. Maybe I'll have an interesting story or anecdote about something that recently happened to me. Maybe there will be a science result (most likely astronomical, but physics is also a relevant topic) that I think is particularly interesting. If I'm lucky enough, perhaps it will even be one of mine someday. Maybe I'll just complain about life as a grad student, as we are all wont to do. Maybe I'll be ranting about politics (warning in advance: I'm rather openly progressive) on a local, national, or global scale. I'll try to shy away from that, as it's not my job to tell you, the reader, what to think; but once in a while, I'll just have to vent. I may also talk about music, particularly if a new metal album comes out, though I may also write about older ones that I still love just to share my music with other people. I guess we'll see how I feel in the days/weeks/months to come.

And yes, the name of the blog is a play on the fact that I'm also a huge metalhead, as stated above.

Cheers everyone!