As I write this, a conglomeration of radio telescopes scattered across Earth is acting as one giant instrument to try to image the supermassive black hole at the center of our galaxy. It's no easy feat. A black hole, by definition, is so dense and its gravitational pull so strong that not even light can escape its confines. So how can an object that neither emits nor reflects light be observed? By looking for its shadow, and that's exactly what the so-called Event Horizon Telescope is doing.
Black holes are common objects in the universe. We think each star at least 30 times the mass of our Sun ends up as a black hole. Thousands and thousands of stars that size live in our Milky Way Galaxy alone. Then there are the really massive black holes, the "supermassive" ones. The center of every large galaxy contains one of these, and each harbors millions to billions of times the mass of our Sun.
All 4 million solar masses of our galaxy's supermassive black hole, called Sagittarius A*, are crammed tightly together, compressed into something that would fit inside Saturn's orbit around our Sun. That physical size — some 20 million kilometers wide, if you're curious — would span about 20 millionths of an arcsecond on the sky. (An arcsecond is a unit astronomers use to measure angles on the sky.) If my math is correct, that's pretty similar to the size a golf ball on the Moon's surface would appear to us from Earth.
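You can check that golf-ball comparison yourself with the small-angle approximation. This sketch uses round-number assumptions (Sagittarius A* is roughly 26,000 light-years away, a golf ball is about 4.3 cm across, and the Moon is about 384,000 km away), so the results are ballpark figures only:

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206,265 arcseconds per radian
LY_M = 9.461e15                        # meters per light-year

def angular_size_arcsec(size_m, distance_m):
    """Small-angle approximation: theta (radians) = size / distance."""
    return size_m / distance_m * ARCSEC_PER_RAD

# Sgr A*: ~20 million km wide, seen from ~26,000 light-years away
sgr_a = angular_size_arcsec(2.0e10, 26_000 * LY_M)
# Golf ball: ~4.3 cm wide, seen from the Moon's distance (~384,000 km)
golf = angular_size_arcsec(0.043, 3.84e8)

print(f"Sgr A*:    {sgr_a * 1e6:.0f} microarcseconds")
print(f"Golf ball: {golf * 1e6:.0f} microarcseconds")
```

Both work out to roughly 20 millionths of an arcsecond, so the math in the paragraph above holds up.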
Sagittarius A* takes up a tiny swath of the sky, but astronomers think that linking together radio telescopes based in Chile, Hawaii, Arizona, Antarctica, and the French Alps — plus a few others — will allow them to see small enough details to image that black hole’s shadow against background light. The black hole’s gravity warps light that passes close, but not so close as to “fall” in, and so this warped light bends around the unseen mass. (An analogy is when you’re looking at holiday lights through antique windows. Areas where there’s more glass will warp those background lights more than areas with less glass.) The light that bends around the black hole’s boundary of no return ends up outlining that event horizon region.
Each individual telescope in this full-Earth array is looking at a specific color, or wavelength, of light: 1.3 mm. This light isn't stopped by intervening dust or gas, so all the nebulae and star clusters that lie between Sagittarius A* and Earth won't block the view.
The Event Horizon Telescope has looked toward the center of the Milky Way several times in the past few years and spied hot gas in turmoil very close to the black hole. This new observation, which started on Wednesday and should last for 10 days, enlists some of the most powerful radio telescopes on Earth. The South Pole Telescope and ALMA (based in Chile, and boasting a resolution in radio light comparable to what the Hubble Space Telescope can resolve in visible light) joined the array. Scientists are expecting big things. Last year at the winter American Astronomical Society meeting, Event Horizon Telescope team member Feryal Özel said astronomers expect to measure the size of Sagittarius A* to a precision of 4 percent. They'll be able to study the turbulence just outside of the black hole, a region where magnetic fields and the extremes of gravity play. According to a recent release from the European Southern Observatory — one of several partners that run ALMA — we should expect the first results late this year.
Our supermassive black hole is relatively quiet, but many others are not. They somehow shoot out jets of material moving nearly as fast as light, and those jets can energize gas at the outskirts of their home galaxies. But we don’t really know how those jets form. We haven’t ever seen close enough to an active black hole’s event horizon to obtain that information. Luckily, a nearby galaxy just might hold the answer.
M87 lies 55 million light-years from us. At that distance, you wouldn't expect its central black hole to be visible, but it's an enormous supermassive black hole — about a thousand times more massive than the Milky Way's. That means its event horizon is also larger. Calculations suggest that M87's active supermassive black hole appears about half as large on the sky as Sagittarius A* does.
Depending on how well the Event Horizon Telescope and its fleet of data-processing computers resolve Sagittarius A*, we might also get a glimpse of an active supermassive black hole soon after.
Moons in the solar system come in many different forms. Some are boulder-sized, while one is larger than the planet Mercury. Some are mixtures of rock and iron, while others hide oceans and rocky cores under icy surfaces. Two even look a bit like walnuts, each hosting a bulge of material around its equator. Interestingly, both of these, Iapetus and Pan, orbit Saturn. And had I written this post a month ago, "two" would have been "one." Scientists discovered on March 8 that Saturn's tiny satellite Pan has a ridge. But the other walnut-shaped moon, Iapetus, has puzzled scientists for centuries.
The first astronomers who observed Iapetus couldn't make out the mountain range ringing three-quarters of the moon's equator. But they could see that one side of the moon is much brighter, more reflective, than the other, so the surface material on those two faces must differ.
The first hints that Iapetus has mountains ringing its equator came when the Voyager 2 spacecraft flew by Saturn and its moons in 1981. Scientists saw several "aligned" mountain peaks near the equator. Curious. It wasn't until December 2004, when the Cassini spacecraft flew by Iapetus, that astronomers identified those aligned peaks as part of an 870-mile-long (1,400 kilometers) chain of mountains reaching close to 12 miles (20 km) tall. And that discovery was even more curious than a few lined-up mountains, because what could create such a global feature?
Some researchers think a process internal to the moon, like its rotation, could have generated this ridge. But other scientists argue these internal models are too constraining: too specific to Iapetus alone and requiring very precise scenarios to occur. Perhaps instead, the ridge is the result of something external, like a disk of material surrounding the moon falling onto the satellite's equator. After all, surrounding disks aren't uncommon (all four of the solar system's giant planets have rings).
Modeling the details
Last week at the annual Lunar and Planetary Science Conference outside of Houston, Texas, Johns Hopkins University Applied Physics Laboratory's Angela Stickle presented her and colleague James Roberts' work building off the idea of a disk creating the mountain range. When that disk's material fell onto Iapetus, it wouldn't just settle softly on the moon, nor would it simply punch out a hole. Instead, each piece of debris would alter the surface, digging out material and creating an impact crater whose form depends on the angle and speed at which the rock hits the surface, the size of the rock, its composition, and the moon's surface composition. Energy from the impact travels deeper into the terrain, uplifting layers of material. And sometimes some of the impactor itself survives the collision, particularly if it hit at a shallow angle.
A disk of debris wouldn’t fall onto the surface as if gravity just turned off. It would spiral down and collapse toward the surface, until finally each rock hits the moon with a glancing blow. To study that type of collision, Stickle and Roberts simulated space rocks impacting an icy, rocky surface at shallow angles — one rock first, then two, and then several in succession. They looked at impactors between 3.2 feet (1m) and 0.6 mile (1km) wide and colliding at the surface with angles of 1 degree, 3 degrees, and 10 degrees from parallel.
When the meteoroids hit the surface at those shallow angles, some of each meteoroid would shear itself off and move down range from the impact site. (Planetary scientists call that shearing process “decapitation.”) The scientists found that if you have a bunch of these rocks, their sheared material clumps together and they pile up — “like a traffic jam,” Stickle said during the presentation. Their simulations showed that if Iapetus had a disk of material, that disk as it fell to the moon’s surface could have created the mountain range. What created the disk, though, is yet another mystery.
Last week, the Indian Space Research Organisation launched 104 satellites into space on one rocket. Of those 104, 101 are CubeSats: small satellites that have the potential to do big things for astronomy, and yet for various reasons the astronomy community isn't utilizing them. Most of the 613 CubeSats that have launched (as of this writing) are used for communication relays, education, various Earth observations, and to test space technology. Astrophysics research is sorely missing.
CubeSats were introduced back in 2000 as educational tools, providing a way for students to get into programming, electronics, hardware, and spacecraft. A CubeSat is made up of standardized cubic units, each 10cm on a side; a satellite with one unit is 1U, a satellite with three is 3U, and so on. The standardized (and small) sizes make for a whole bunch of positives: They can be built faster, they are less expensive to produce — which is a benefit for learning purposes — and they can easily be integrated into larger spacecraft for launch.
Why aren't we using CubeSats to learn more about how our universe works? One of the main catches is that, well, CubeSats are small, and that limits how much "stuff" one can carry: the number of instruments and the size of any onboard telescope. You can see how those space constraints are at odds with a core idea of astronomy. "Astronomy is primarily a game of collecting photons," says The Aerospace Corporation's David Ardila. "We believe that the way to know more about the universe is to collect more photons." And to collect those particles of light, you need a larger telescope.
One of the projects Ardila is involved with at Aerospace focuses on ways to utilize CubeSats for science research, and he is making the case for more CubeSats in astronomy. I spoke to him recently about the possibility of increasing the number of CubeSats used in astronomical science. One way to do that, of course, is to focus on the arenas where these small satellites could make a big impact.
Most operational telescopes collect a moment in time, a photograph of a galaxy or a star cluster. The large ground-based telescopes and space-based instruments like Hubble can't watch a single patch of sky or a single galaxy for months at a time; we just don't have the resources. But a CubeSat could. This is the so-called "time domain" in astronomy. "We really don't know about the universe in time," says Ardila, making this an important unexplored frontier. "This is not remedial science or a consolation prize," he adds. This research would go toward putting together the full picture of our universe, watching how celestial objects change in time, and tracking cosmic evolution. Ground-based projects are also studying time-domain astronomy — Pan-STARRS out of Hawaii, and the Large Synoptic Survey Telescope, set to come online in the mid-2020s. In a similar vein, CubeSats, just like arrays of small telescopes on the ground, can survey the sky looking for a specific type of object.
There are other reasons why CubeSats aren't popular among the astronomy community, and those have to do with both longevity and funding. The CubeSats produced since 2000 haven't been extremely reliable: less than half of those that have launched have met their mission objectives. And researchers would need to develop their own hardware kit to fit the standardized CubeSat launch compartment, along with every other part of a typical mission.
A CubeSat for science research, including development, construction, and support during the mission itself, is expected to cost between $5 million and $10 million. Where would that funding come from? "NASA Astrophysics does not have an appropriate slot to fund science-based CubeSats," says Ardila. The space agency has several different mission classes, each tied to a different funding amount, and researchers propose missions to fit within each class. The two classes closest to what a CubeSat would cost are the Missions of Opportunity, a class with a maximum of dozens of millions of dollars (I've seen a couple of different numbers), in which a CubeSat would be competing with more ambitious projects, and the "Astrophysics Research and Analysis" (APRA) grants, which range between $100,000 and $1 million a year, at the low end of a CubeSat's cost.
The CubeSat regime isn't being entirely ignored by astronomers, though. HaloSat is an APRA-grant-funded 6U CubeSat that will launch in 2018 to look for the diffuse, million-degree, X-ray-emitting gas that envelops our galaxy. CUTIE is a proposed ultraviolet surveyor. And ASTERIA (launching this year) would test technology for a future star monitor looking for brightness dips due to exoplanets.
Astronomy is already a field of study that requires creativity — with a measurement of A you can calculate B, which has a known relationship to C. Perhaps another twist of creativity can push research further, namely thinking about which of the many different types of spacecraft and onboard instruments can achieve a given science goal. Adds Ardila, "I think we are being limited by our imagination."
This morning, when Hurricane Matthew updates showed up on my local news, I had a very selfish mentality. The approaching hurricane is interesting from a weather standpoint, but I wasn’t too concerned. I mentally checked through my list of family and friends; none of them are directly in harm’s way. And so, I let out a sigh of relief. Until a few hours later, when it dawned on me that the most important location for space exploration is in Hurricane Matthew’s crosshairs. The eye of the hurricane is estimated to hit NASA’s Kennedy Space Center and Cape Canaveral at 6am Friday morning.
NASA’s Kennedy Space Center is home to the following:
Cape Canaveral Air Force Station is home to:
As of the time of this writing, Hurricane Matthew is a category 4 storm. Tropical-storm-force winds are expected around midnight local time, with hurricane-force winds hitting around 6am Friday morning. The current forecast calls for 125 mph winds with gusts up to 150 mph. (For more about Hurricane Matthew, check out the ImaGeo blog.)
According to the Kennedy Space Center blog, a “hurricane ride-out” team of 116 people is onsite and will stay at various locations at the center through the hurricane. “During the storm they will report any significant events to the Emergency Operations Center, located in the Launch Control Center at Complex 39,” wrote Brian Dunbar. “They can also take any action needed to stabilize the situation and keep the facility secure.”
The next planned NASA launch, a cargo delivery to the International Space Station, is still scheduled for October 13, but that will lift off from NASA’s Wallops Flight Facility in Virginia. The next Florida launch is scheduled for November 4, and that spacecraft is currently housed in nearby Titusville, Florida. Perhaps ironically, it’s a weather satellite, and part of the geostationary spacecraft network that observes hurricanes.
The Space Coast is crucial to both the government’s space-related activities plus the private space exploration industry. If the current projections hold, Hurricane Matthew’s impact could be felt long after the winds die down.
On this cold boulder-strewn surface lies Rosetta. The European spacecraft launched 12.5 years ago, into a life filled with exploration. Most of that time was spent traveling to her destination, Comet 67P/Churyumov-Gerasimenko, where Rosetta has since perished. She arrived at that target on August 6, 2014, and spent 786 days exploring the comet's long-hidden secrets. Her life ended this morning, with an intentional and controlled crash-landing into her two-year home.
During her life, Rosetta showed how complex comets are. She revealed Comet 67P's surprising shape: not a potato as expected, but two lobes that together look oddly like a duck. Astronomers think those lobes formed separately and then adhered to each other after a low-speed collision. The spacecraft's 11 scientific instruments detected molecular nitrogen (the first time this compound had been seen at a comet), molecular oxygen, and several noble gases (argon, krypton, and xenon). From these detections and studies of the comet's water ice, scientists gained a better idea of where Comet 67P had formed billions of years ago: in the distant, cold reaches of our solar system. Rosetta gave the astronomical community many more discoveries. She measured the comet's mass and its physical dimensions, from which scientists could calculate a density. At 533 kg/m³, Comet 67P is just over half as dense as water; it is probably made of ice-dust clumps grouped together in piles, with voids in between. Rosetta also spotted outgassing water across the surface — from cracks, pits, and smooth regions. And she documented for the first time what happens as the wind of energetic charged particles from the Sun collides with a comet.
In November 2014, three months after Rosetta arrived at the comet, she sent another spacecraft, Philae, to the surface. Philae's landing didn't go as planned, and she bounced before coming to rest at an awkward angle against a cliff. But scientists could still use that botched landing to learn about Comet 67P: its surface is harder than expected and covered in about 3 cm of loose dust. Just four weeks before her own death, Rosetta photographed her friend Philae, lying in the shade on her side and surrounded by rocks.
Rosetta has now joined Philae on Comet 67P's surface, though in a different area. During Rosetta's last hours, she captured photographs of her final resting place with increasing resolution. During that approach, she also sent back to Earth more data on dust and gas. And at 13:19 Central European Summer Time (5:19 AM Central Daylight Time in the United States), the mission's control center received Rosetta's last signal. So long, dear friend.
Yes, an obituary is a slightly odd way to reflect on a space mission, but it connects to something I’ve noticed a lot of lately: We personify our spacecraft. Cartoons of Rosetta on the mission’s website show it smiling and beady-eyed. When a spacecraft fails unexpectedly, like the Japanese ASTRO-H mission earlier this year, we mourn its passing like a friend lost too soon. And for a spacecraft that carried out its entire mission, we celebrate its life and plan for an appropriate way to go out — for example, joining it with the comet it rendezvoused with.
When you're walking through a forest, it's nearly impossible to know the forest's boundaries or layout. The same idea applies to our galaxy, yet we know the Milky Way is a spiral galaxy, and we know our Sun orbits the center of the galaxy from about 26,000 light-years out. With no way to propel ourselves outside of the galaxy and look at it from that vantage, how have astronomers learned so much about the Milky Way? By tracking the positions and movements of stars and gas clouds. And astronomers just received details of a billion stars from the Gaia spacecraft.
To map these stars, astronomers use astrometry — measuring the precise positions of stars. It isn’t the sexiest field of astronomy, but it’s extremely important. Astrometry lets you track the movements of stars and gas clumps, and that provides a way to map and measure the structures in space.
To measure distances nearby, astronomers use the parallax method. For a basic idea of what this is, hold your index finger a few inches from your face and close your right eye. Note the position of your finger compared to the background. Next, keep your index finger in the same physical location but open your right eye and close your left. Again, note the position of your finger against the background. That apparent movement is parallax, and nearby stars do the exact same thing against the background sky. The difference is that instead of using the distance between your two eyes, astronomers use the distance between Earth at opposite points in its orbit around the Sun. The tinier the shift they spy, the farther away the star is from us. Using geometry plus a few known values (the amount the star shifts on the sky and the distance Earth traveled over those six months of its orbit), you can calculate the distance to the star. And while Gaia isn't on Earth, it's near our planet and so the same idea applies.
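The geometry works out to a remarkably simple rule: a star whose parallax shift is p arcseconds lies at 1/p parsecs (a parsec is about 3.26 light-years). As an illustrative sketch, Proxima Centauri's parallax is roughly 0.77 arcseconds, which puts it at the famous 4.2-ish light-years:

```python
LY_PER_PARSEC = 3.26  # approximate conversion

def parallax_distance_ly(parallax_arcsec):
    """Distance in light-years: d = 1/p parsecs, converted to light-years."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# Proxima Centauri's parallax is roughly 0.768 arcseconds
print(f"{parallax_distance_ly(0.768):.2f} light-years")
```

Note the inverse relationship: halve the parallax and the distance doubles, which is why measuring ever-tinier shifts (Gaia's specialty) reaches ever-farther stars.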
Gaia uses two telescopes pointed in different directions and separated by a constant angle. (The telescope mirrors, along with other instruments, are welded to a solid ring to ensure the angle never changes.) Gaia slowly rotates, so that the telescopes continually scan the sky. Each day, the telescopes collect light amounting to 40 gigabytes of data. Over its 5-year mission, Gaia will see the complete sky 70 times, and collect detailed position information for about 1 billion stars during those 70 observations. It will also measure the three-dimensional motions of two million of the brightest stars.
Because everything in the galaxy is moving at all times, the positions and distances are measured with respect to the other stars. That makes for an enormous amount of data that needs to be analyzed. A collaboration of 450 scientists from 160 institutions is up to the task. "We take all the raw telemetry that comes from the mission — all these zeroes and ones that you see coming in from the sky — and we turn them, through our six data processing centers, into data products that the scientists can interpret," said Anthony Brown, who leads the Gaia Data Processing and Analysis Consortium, during the press conference announcing the first data release. It's one enormous, complex moving machine.
The science within the map
For the past two decades, astronomers have relied on the positions and distances measured by the Hipparcos satellite. With Hipparcos, they could only measure parallax out to a few hundred light-years, but Gaia is far more sensitive. Gaia extends the parallax measurements out to a couple hundred thousand light-years; when the mission is complete, we'll have a 3-D map of the Milky Way out to that distance from us.
This map isn’t just for kicks. Knowing the directions and velocities that stars are moving in, astronomers can then turn back the clock and figure out where those stars came from. Gaia is also collecting data about the stars’ compositions and temperatures. From all of this information, scientists will tease out which stars were born with what other stars, or if a grouping of suns came from a satellite galaxy that crashed into the growing Milky Way. They’ll know more about the evolution of our galaxy.
And on smaller scales, they will reveal the structures within gas clouds and known star clusters. This first data release already uncovered some 400 million stars that hadn’t been individually seen before, said Brown during the press conference. And there’s no doubt that some of the pinpoints of light Gaia sees are not stars but asteroids in the solar system or perhaps something far outside our galaxy.
The distance measurements astronomers are collecting will also help scientists better calibrate the tools they use to measure distances much farther out than a couple hundred thousand light-years. Any distances greater than the Hipparcos distances had been measured using specific types of brightness-varying stars. The amount of time it takes for such a star to cycle from its maximum brightness to its minimum and back again depends on the amount of light it produces. So if you measure the time it takes for that cycle, you can calculate how bright the star should be. That gives you a specific-wattage light bulb. (If we know that bulb should be, say, 100 watts, and we see it as fainter than that, we can calculate how far away it is.) With Gaia, astronomers have measured the distance of a few hundred of these variable stars using parallax. Those measurements can be used to calibrate the variable stars' period-luminosity relation and serve as a way to double-check that scientists have understood that relation. So far, so good.
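The light-bulb analogy can be written down directly. The inverse-square law says the flux we measure is F = L / (4πd²), so if the brightness cycle tells us the intrinsic luminosity L, the measured flux F gives the distance. A minimal sketch, with entirely made-up illustrative numbers (a literal "100-watt bulb" rather than a real star):

```python
import math

def distance_m(luminosity_w, flux_w_per_m2):
    """Inverse-square law: F = L / (4 * pi * d^2)  =>  d = sqrt(L / (4 * pi * F))."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_per_m2))

# A bulb we know emits 100 watts, observed at a flux of 1e-10 W/m^2
# (both values hypothetical, chosen for illustration):
d = distance_m(100.0, 1e-10)
print(f"{d / 1000:.0f} km away")
```

Real variable stars work the same way, just with luminosities of many Suns and distances of thousands of light-years; Gaia's parallaxes pin down L independently, which is what makes the calibration check possible.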
This first batch of Gaia data includes brightness changes for 3,000 variable stars, the positions of 1.1 billion stars, motion information for more than 2 million of those 1.1 billion stars, and positions for more than 2,000 active galaxies. The team plans to release another three data sets before the final release, which is scheduled for the early 2020s and will incorporate all five years of observations.
Using the telescopes that have come in the decades before Gaia, we know that Earth orbits the Sun, and the Sun lies within a disk of stars and gas and dust. The cloudy stream arcing across the sky at night is the Milky Way’s disk. At the center of the disk lies a supermassive black hole, and our galaxy’s spiral arms pinwheel and rotate around that center. From decades of mapping the stars and gas clouds, we have a two-dimensional illustration of our spiral galaxy. But with Gaia, we’re switching from that folded map in your glove box to a spectacular “3D motion picture,” as Brown puts it.
Last week, NASA launched its newest spacecraft on a mission to sample an asteroid and send that collected rock and dust back to Earth. The OSIRIS-REx spacecraft (short for Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer) is on a two-year journey to Bennu, an asteroid about 1,600 feet (290 meters) wide that circles the Sun in an orbit similar to Earth's. The spacecraft will reach Bennu in August 2018 and spend the next two years surveying the space rock. It will produce a range of different types of maps — photographic, altitude, mineral, and heat maps — and from those data, scientists will choose a location on the asteroid to snag samples. In the summer of 2020, the spacecraft will approach the chosen location and stretch out its robotic arm. An instrument at the end of the arm will send out a puff of gas to agitate the surface material, and any loose rocks and dust will be sucked up into a capsule. (I really hope we get to see video footage of this …) In the following spring, OSIRIS-REx will head back to Earth. Then on September 24, 2023, the spacecraft will let go of the capsule holding the sampled material and send it plunging through Earth's atmosphere. Finally, scientists will have pristine material from asteroid Bennu.
But it’s not the first pristine material from any asteroid.
Japan beat NASA to that title six years ago, when its Hayabusa spacecraft returned a bit of dust from asteroid Itokawa. That mission launched in 2003 to orbit the asteroid (which is about the same size as, but more potato-shaped than, Bennu) before touching down on its surface. The sampling process didn't go quite as planned, unfortunately. The craft was supposed to fire small metal projectiles into the asteroid to loosen bits of rock and dust from the surface, but those didn't fire. Instead, Hayabusa collected only the dust that was flung upward when the craft landed. That sampled dust, only about 1 milligram of uncontaminated material, arrived on Earth in June 2010. For comparison, OSIRIS-REx is expected to collect between 2 ounces (60 grams) and 4.4 pounds (2 kilograms) of material.
The Japanese Aerospace Exploration Agency (JAXA) launched its second asteroid-sampler, Hayabusa2, on December 3, 2014. It should arrive at its target, asteroid Ryugu, in mid-2018. And in late 2019, Hayabusa2 will head back toward Earth carrying several grams of rocky asteroid material. This sample is expected to touch down on our planet in late 2020 — still nearly three years before OSIRIS-REx’s sample reaches us.
At first glance, it might seem a bit repetitive to have three asteroid sample-return missions, but they’re actually going after very different targets. Just like planets and stars come in many varieties, so do asteroids (and comets, for that matter).
Hayabusa studied an S-type asteroid, Hayabusa2 is on its way to a C-type asteroid, and OSIRIS-REx is hurtling toward a B-type asteroid. So what's the difference? These classifications refer to an asteroid's appearance and thus its surface composition. S-types are made of stony, or silicate, materials. These tend to be lighter-colored, so they reflect more sunlight than other asteroid types. C-types are the most common, and are carbonaceous, with darker, coal-like surfaces. B-type asteroids are a subclass of C-type, but with even darker surfaces (B is for black — creative). The different surface brightnesses come not just from the differing materials that make up the asteroids, but also from the effects of "space weathering." When high-speed subatomic particles (from the Sun or from even farther away) or pieces of dust slam into any space object, they can change the surface in many ways: cratering, color changes, and chemical alterations.
By sending these missions to different asteroid types, scientists are investigating the broad spectrum of space rocks. Did these different types of rocks form through different processes? How much larger were their parent bodies? Are their compositions what we actually think they are? (And, of course, what do these asteroids tell us about how life arose on our planet?) Plus, with each mission come more advanced technologies — better cameras and other detectors, and improvements about how to gather sample material (and hopefully avoid the mistakes of the past).
Three decades ago, the only planets that we had ever seen were those in our solar system. Then two decades ago, humankind knew of a dozen planets orbiting stars other than the Sun. These types of worlds became known as exoplanets. At that time, I was in high school and I was already hooked on the universe and astronomy, but the idea of finding new worlds outside of our home solar system was an intriguing draw to study physics and astronomy.
Ultimately, I didn't continue down the exoplanet astronomy PhD path as a career. That's partly because I found my way into cosmology, galactic astronomy, and learning how stars evolve and die, in addition to all the other objects in our solar system. I didn't have to choose one path. Instead, I now get a tasting of each subject in astronomy because I write about all of them. But of course, the excitement of new worlds outside of our solar system never went away.
Now we know of 3,518 worlds (or 3,375 worlds, depending on which reference source you prefer) outside our solar system, and the tally constantly rises. We’ve found planets larger than Jupiter orbiting their stars closer than Mercury orbits the Sun and a planet with three host stars. We’ve imaged planets still glowing from the heat of formation, and watched as exoplanets vacuum material along their orbits to etch circular paths in the dust disks around their stars. We’ve found planets around the dense leftovers of once-Sun-like stars. We’ve found rocky planets with seething hot surfaces, ice-cold gaseous planets so far away from their host stars that we can’t figure out how they got there, and everything in between.
But none of those worlds has been quite as exciting as the discovery announced last week: astronomers found a planet circling the star next door. Now we have a system close enough to send robotic explorers (although it would take a long time to get there), and close enough to have a shot at observing with the next generation of telescopes.
In the enormous universe — where our home galaxy is 120,000 light-years wide, the next nearest large galaxy is millions of light-years away, and the cosmos holds billions of galaxies — the star only 4.2 light-years from us hosts a planet. And this planet, Proxima b, orbits its small, cool, red star at the right distance that the temperature could keep surface water in liquid form.
Of course, we have no clue if this exoplanet has water, an atmosphere, or even whether it’s rocky like Earth. But so far, scientists know enough about it to be pretty excited about learning more about Proxima b. We’ve come a long way in the couple of decades since discovering the first planet orbiting another star.
Even though astronomy is my first love, sometimes I wander away and explore other sciences. This week, I attended a mechanical engineering conference and sat in on sessions specifically devoted to the influence of origami in engineering design. Lucky for me, there were a few talks that combined this area of engineering with space-based applications.
One of the coolest was a compressible tube whose structure is based on origami folds. This type of tube (called a bellows) has all kinds of uses on Earth — like connecting a jet bridge to an airplane’s open door — and in space, it can protect a smaller tool that lies inside the bellows but can extend out.
The most advanced rover currently rolling around Mars, Curiosity, has a drill as part of its system that samples material on the planet. That drill and its associated instrumentation lie within a metal bellows. NASA’s next Red Planet rover, Mars2020, will also be equipped with a bellows. And researchers have been working on an origami version for future space exploration missions.
A bellows needs to be able to compress — for storage — and expand as needed. (Think again of the bellows that lies between a jet bridge and an airplane. Both the bellows and jet bridge are compressed and closest to the terminal when not in use.) The metal bellows that will fly on Mars2020 has a 1:8 compression ratio, meaning its expanded length is 8 times its compressed length. But what if you could make that ratio a smaller number?
Origami is the art of paper folding, so, perhaps unsurprisingly, its influence leads to a bellows with a much smaller compression ratio: 1:30. And that leads to a smaller compressed structure. Brigham Young University’s Jared Butler, one of the researchers working on this origami bellows, explains why this is beneficial. “If I can reduce the amount of volume my bellows takes in compressed state, I can also reduce the amount of shaft length that is used to stow that bellows, which reduces the mass of my mechanism.” And that’s good news in spaceflight.
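The savings from those ratios are easy to sketch in a quick calculation. The one-meter expanded length below is purely an illustrative number, not an actual bellows dimension:

```python
def stowed_length(expanded_m, ratio):
    """Compressed length of a bellows with a 1:ratio compression ratio
    (the expanded length is `ratio` times the compressed length)."""
    return expanded_m / ratio

expanded = 1.0                          # meters; illustrative, not a real spec
metal = stowed_length(expanded, 8)      # metal bellows, 1:8  -> 0.125 m stowed
origami = stowed_length(expanded, 30)   # origami bellows, 1:30 -> ~0.033 m stowed

print(f"metal: {metal:.3f} m, origami: {origami:.3f} m")
```

For the same expanded length, the origami design stows in roughly a quarter of the space the metal bellows needs, which is exactly the shaft-length and mass savings Butler describes.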
The researchers had focused on a bellows that could fly on Mars2020, and that meant they could compare its lab-tested specifications to the metal bellows on Curiosity. After deciding on the fold pattern, called Kresling, they folded by hand several samples of different flexible industrial materials and films (like polyethylene). The Red Planet is home to harsh environmental conditions, so to ensure their origami bellows could withstand those conditions, Butler and his colleagues worked with the Jet Propulsion Laboratory.
To simulate martian dust storms, they placed the bellows in a contraption where a powerful fan whipped around sand particles, battering each bellows. Mars’ thin atmosphere blocks very little ultraviolet (UV) radiation, so plenty of it reaches the surface; to test that exposure, the researchers placed each sample in another chamber with a deuterium light bulb to bathe each bellows in a high UV dose. To test for the wide temperature swings between day and night, Butler and his colleagues used a different chamber with dry ice and watched how many compression-expansion cycles the samples could withstand. The key was testing how soon the structure tore (if it did at all), which not surprisingly occurred at the point, called the vertex, where multiple folds meet. While they had very promising results, NASA chose the previous metal bellows design for the future Mars2020 rover.
A bellows for an ARM
But that doesn’t mean the work was for naught. A mission still in the development stages would use a larger version of this Kresling origami bellows, if the mission gets the go-ahead from Congress. This is NASA’s Asteroid Redirect Mission (ARM), and it’s a bit controversial (but that’s another story for another day). In this project, a larger drill would stay contained within the origami bellows the entire time, expanding and compressing as it’s used. With ARM, the instrument would operate not on a planetary surface but on a small asteroid flying through space. The harsh conditions of that environment differ from those on the martian surface. For example, there aren’t dust storms to contend with in open space, but there are micrometeoroids. The good news is the drill and bellows would sit within a titanium cage, which would block the meteoroids and any UV radiation. The team still needs to test for rocky debris: material that the action of drilling into the asteroid would fling up, says Butler. A bellows would protect the rest of the spacecraft, its mechanics, and gears from that debris.
So far, this origami structure has passed every test the researchers have put it through. The BYU team still has a few unresolved issues with the bellows, though. First is that all the samples they’ve worked on have been hand-folded. With 59 fabrication folds per sample, there are bound to be inconsistencies among the samples. They need to come up with another way to make the exact same structure every single time.
Another issue is that to end up with a cylinder, you have to take a flat sheet, crease the folds, and then connect two opposite edges of that sheet to form a cylinder. Tape and other similar adhesives don’t work at the frigid temperatures of space. The engineers have sewn along that edge with Kevlar thread, but that puts small holes into the bellows as a result of the stitching. But Butler isn’t worried about overcoming those obstacles.
Not just drills
The origami bellows isn’t the only space-based application of origami engineering. The large solar panels on spacecraft, as well as solar sails, which propel space probes using the pressure of sunlight, have to fold down much smaller for launch. (The storage compartment in a rocket is only so big.)
And exoplanet hunters are hoping that the near future will see another structure shaped by origami’s influence: a huge sunflower-shaped starshade that would fly in formation with a telescope and block the light from a star, allowing that telescope to look at the much fainter light reflected off an exoplanet. This shade would be 100 feet in diameter. And to safely cram that material into a rocket, one idea is to fold the central disk and then wrap the petals around the folded disk. Once in space, it would unfurl and unfold. Even though a launch-ready starshade is still a decade away, researchers are building prototypes in the lab to find the ideal fold patterns and material.
These space-based structures are only a few ways that engineers are incorporating the centuries-old art of origami. Of course, this influence is adding a coolness factor. But it’s also improving upon the technologies that have already flown in space.
Most of the space news we hear about comes out of the biggest ground-based telescopes and the observatories launched into space. I admit, I’m guilty of focusing on this sort of news, too. The biggest and most expensive projects are often equipped to touch on a broader range of science. (Hubble, for example, has cost billions, but astronomers have used it consistently for 25 years to study everything from the atmospheres of planets orbiting other stars to the oldest and most-distant galaxies we can see in our universe. The Cassini spacecraft has spent over a decade at Saturn to uncover a diverse and exciting system, including learning that a saturnian moon might have the right conditions for life.) But astronomers have a much larger bag of tools. One of those is an array of small telescopes working in unison. I’ve touched on one small-telescope setup before for Discover. Another is sounding rockets, which carry cameras and other detectors above most of Earth’s atmosphere to collect data for a few minutes. And yet another is balloons carrying instruments for weeks at a time. In this post, I want to bring attention to this third one.
Climb into the atmosphere
The type of light humans see — so-called visible light — makes up only a tiny portion of all types of radiation. Visible light passes right through Earth’s atmosphere, but most other radiation doesn’t. The atmosphere blocks some lower-energy light than what we can see, like infrared, and nearly all of the higher-energy radiation, like X-rays and gamma rays. Balloons are a great way to get your science experiment above at least part of Earth’s atmosphere, so that you have a better chance of catching some of that cosmic radiation.
Some balloons can stay afloat for hours, while others remain in the atmosphere for up to 100 days. It all depends on the balloon’s structure. Those with openings at the sides and bottom allow gas to escape as the balloon rises, and so they last for shorter amounts of time and ride the weather pattern. Those with the gas locked in can stay afloat for weeks. (And NASA is currently developing an Ultra Long Duration Balloon, or ULDB, that can fly for about 100 days.)
None of these balloons are what you’d find at your local Party City. Instead, most are enormous — like, 40 million cubic feet enormous. (NASA uses the following comparison: a football stadium could fit inside one of these balloons when it’s inflated. Because in America we like sports comparisons.) These balloons need to hold a huge volume of gas in order to lift thousands of pounds of scientific instruments and their electronic brains miles into the atmosphere. Even the smaller scientific balloons are still about a million cubic feet.
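To see why such an enormous gas volume is needed, here’s a rough buoyancy estimate. The densities are illustrative assumptions (helium at a float altitude near 33 km, using approximate standard-atmosphere values), not figures from this article:

```python
# Rough gross lift of a large helium balloon at float altitude.
# All density values are illustrative assumptions, not mission specs.

CUBIC_FT_TO_M3 = 0.0283168
volume_m3 = 40e6 * CUBIC_FT_TO_M3    # ~40 million cubic feet of gas

rho_air = 0.012                      # kg/m^3, rough air density near 33 km up
rho_helium = rho_air * (4.0 / 29.0)  # same pressure and temperature, lighter gas

# Buoyant lift is the weight of displaced air minus the weight of the gas.
gross_lift_kg = (rho_air - rho_helium) * volume_m3
print(f"~{gross_lift_kg:,.0f} kg of gross lift")
```

That gross lift has to carry the balloon’s own film, which itself weighs tons, as well as the science payload, which is why even a stadium-sized balloon nets "only" thousands of pounds of instruments.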
OK that’s all interesting, but the main question is: have science instruments flying via balloons collected important data? Yes.
We know of the telescopes that flew in space to study the universe’s earliest light, called the cosmic microwave background (CMB). And we know of ground-based observatories that also collect this light (like the BICEP2 hoopla from 2014). But some of the most important measurements of the CMB’s tiny variations in temperature were made by a balloon that flew above Antarctica for 10.5 days in 1998, and flew again in 2003. That was the BOOMERANG project.
Another balloon-flown experiment above Antarctica discovered that the highest-energy particles from space, called ultra-high-energy cosmic rays, can generate pulses of radio light as those particles traverse Earth’s atmosphere. The ANITA project, lofted above Antarctica and hanging from a balloon, detected those radio waves reflecting off the Antarctic ice.
And over the past several days, smaller balloons carrying lighter loads (these are "only" 90 feet across and carry about 40 pounds of instruments) have been taking off from northern Sweden for short flights. This is the BARREL project, and the onboard detectors are collecting X-rays to study Earth’s radiation belts.
I could talk about dozens more experiments that have flown into Earth’s atmosphere via balloons over the past few decades, but I’ll spare you the details for now. The point is that when you’re hearing about astronomical discoveries and news, know that it’s not coming only from billion-dollar telescopes in space and multimillion-dollar observatories on the ground. Smaller projects, many of them built almost entirely by students, are also extremely important tools that we’re using to learn about the universe.