Last week, the Indian Space Research Organisation launched 104 satellites into space via one rocket. Of those 104, 101 are CubeSats: small satellites with the potential to do big things for astronomy, and yet, for various reasons, the astronomy community isn’t utilizing them. Most of the 613 CubeSats that have launched (as of this writing) are used for communication relays, education, various Earth observations, and testing space technology. Astrophysics research is sorely missing.
CubeSats were introduced back in 2000 as educational tools, providing a way for students to get into programming, electronics, hardware, and spacecraft. A CubeSat is made up of standardized cubic units, each 10cm on a side; a satellite with one unit is 1U, a satellite with three is 3U, and so on. The standardized (and small) sizes make for a whole bunch of positives: They can be built faster, they are less expensive to produce — which is a benefit for learning purposes — and they can easily be integrated into larger spacecraft for launch.
Why aren’t we using CubeSats to learn more about how our universe works? One of the main catches is that, well, CubeSats are small, and that limits how much “stuff” they can carry: the number of instruments and the size of any onboard telescope. You can see how those space constraints are at odds with a core idea of astronomy. “Astronomy is primarily a game of collecting photons,” says The Aerospace Corporation’s David Ardila. “We believe that the way to know more about the universe is to collect more photons.” And to collect those particles of light, you need a larger telescope.
One of the projects Ardila is involved with at Aerospace involves coming up with ways to utilize CubeSats for science research, and he is making the case for more CubeSats in astronomy. I spoke to him recently about the possibility of increasing the number of CubeSats used in astronomical science. One way to do that, of course, is to focus on the arenas where these small satellites could make a big impact.
Most operational telescopes collect a moment in time: a photograph of a galaxy or a star cluster. The large ground-based telescopes and space-based instruments like Hubble can’t watch a single patch of sky or a single galaxy for months at a time; we just don’t have the resources. But a CubeSat could. This is the so-called “time domain” in astronomy. “We really don’t know about the universe in time,” says Ardila, making this an important unexplored frontier. “This is not remedial science or a consolation prize,” he adds. This research would go toward putting together the full picture of our universe, watching how celestial objects change in time, and tracking cosmic evolution. Ground-based projects are also studying time-domain astronomy — Pan-STARRS out of Hawaii, and the Large Synoptic Survey Telescope, set to come online in the mid-2020s. In a similar vein, CubeSats, just like arrays of small telescopes on the ground, can survey the sky looking for a specific type of object.
There are other reasons why CubeSats aren’t popular among the astronomy community, and those have to do with both longevity and funding. The CubeSats produced since 2000 aren’t extremely reliable: less than half of those that have launched have met their mission objectives. And researchers would need to develop their own hardware kit to fit the standardized CubeSat launch compartment, along with every other part of a typical mission.
A CubeSat for science research, including development, construction, and support during the mission itself, is expected to cost between $5 million and $10 million. Where would that funding come from? “NASA Astrophysics does not have an appropriate slot to fund science-based CubeSats,” says Ardila. The space agency has several mission classes, each with its own funding cap, and researchers propose missions to fit within each class. The two classes closest to what a CubeSat would cost are the Missions of Opportunity, a class with a maximum of dozens of millions of dollars (I’ve seen a couple different numbers), in which a CubeSat would be competing with more ambitious projects; and the “Astrophysics Research and Analysis” (APRA) grant program, which ranges between $100,000 and $1 million a year and sits at the low end of a CubeSat’s cost.
The CubeSat regime isn’t being entirely ignored by astronomers, though. HaloSat is an APRA-grant-funded 6U CubeSat that will launch in 2018 to look for the diffuse, million-degree, X-ray emitting gas that envelops our galaxy. CUTIE is a proposed ultraviolet surveyor. And ASTERIA (launching this year) would test technology for a future star monitor looking for brightness dips due to exoplanets.
Astronomy is already a field of study that requires creativity — with a measurement of A you can calculate B, which has a known relationship to C. Perhaps another twist of creativity can push research further, namely thinking about which types of spacecraft and onboard instruments can achieve a given science goal. Adds Ardila, “I think we are being limited by our imagination.”
This morning, when Hurricane Matthew updates showed up on my local news, I had a very selfish mentality. The approaching hurricane is interesting from a weather standpoint, but I wasn’t too concerned. I mentally checked through my list of family and friends; none of them are directly in harm’s way. And so, I let out a sigh of relief. Until a few hours later, when it dawned on me that the most important location for space exploration is in Hurricane Matthew’s crosshairs. The eye of the hurricane is estimated to hit NASA’s Kennedy Space Center and Cape Canaveral at 6am Friday morning.
NASA’s Kennedy Space Center is home to the following:
Cape Canaveral Air Force Station is home to:
As of this writing, Hurricane Matthew is a category 4 storm. Tropical-storm-force winds are expected around midnight local time, with hurricane-force winds hitting around 6am Friday morning. The current forecast calls for 125 mph winds with gusts up to 150 mph. (For more about Hurricane Matthew, check out the ImaGeo blog.)
According to the Kennedy Space Center blog, a “hurricane ride-out” team of 116 people is onsite and will stay at various locations at the center through the hurricane. “During the storm they will report any significant events to the Emergency Operations Center, located in the Launch Control Center at Complex 39,” wrote Brian Dunbar. “They can also take any action needed to stabilize the situation and keep the facility secure.”
The next planned NASA launch, a cargo delivery to the International Space Station, is still scheduled for October 13, but that will lift off from NASA’s Wallops Flight Facility in Virginia. The next Florida launch is scheduled for November 4, and that spacecraft is currently housed in nearby Titusville, Florida. Perhaps ironically, it’s a weather satellite, and part of the geostationary spacecraft network that observes hurricanes.
The Space Coast is crucial to both the government’s space-related activities and the private space exploration industry. If the current projections hold, Hurricane Matthew’s impact could be felt long after the winds die down.
On this cold boulder-strewn surface lies Rosetta. The European spacecraft launched 12.5 years ago, into a life filled with exploration. Most of that time was spent traveling to her destination, Comet 67P/ Churyumov-Gerasimenko, where Rosetta has since perished. She arrived at that target on August 6, 2014, and spent 786 days exploring the comet’s long-hidden secrets. Her life ended this morning, with an intentional and controlled crash-landing into her two-year home.
During her life, Rosetta showed how complex comets are. She revealed Comet 67P’s surprising shape: not of a potato as expected, but instead of two lobes that look oddly like a duck. Astronomers think both of those lobes formed separately and then adhered together after a low-speed collision. The spacecraft’s 11 scientific instruments detected molecular nitrogen (the first time this compound had been seen at a comet), molecular oxygen, and several noble gases (argon, krypton, and xenon). From these detections and studies of the comet’s water ice, scientists gained a better idea of where Comet 67P had formed billions of years ago: in the distant, cold reaches of our solar system. Rosetta gave the astronomical community many more discoveries. She measured the comet’s mass and its physical dimensions, from which scientists could calculate a density. At 533 kg/m3, Comet 67P is just over half as dense as water; it is probably made of ice-dust clumps grouped together in piles, with air pockets in between. Rosetta also spotted outgassing water across the surface — from cracks, pits, and smooth regions. And she documented for the first time what happens as the wind of energetic charged particles from the Sun collides with a comet.
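That density figure is simple division of mass by volume. A minimal sketch, using round numbers in the ballpark of Rosetta’s published measurements (a mass near 1.0 × 10^13 kg and a volume near 18.7 cubic kilometers — both illustrative stand-ins, not exact mission values):

```python
# Back-of-the-envelope density check for Comet 67P.
mass_kg = 1.0e13              # approximate measured mass
volume_km3 = 18.7             # approximate measured volume
volume_m3 = volume_km3 * 1e9  # 1 km^3 = 1e9 m^3

density = mass_kg / volume_m3  # kg/m^3
print(round(density))          # ~535, close to the published 533 kg/m^3
print(density / 1000.0)        # fraction of water's 1000 kg/m^3: just over half
```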
In November 2014, three months after Rosetta arrived at the comet, she sent another spacecraft, Philae, to the surface. Philae’s landing didn’t go as planned, and she bounced before resting at an awkward angle in a cliff. But scientists could still use that botched landing to learn about Comet 67P: its surface is harder than expected and covered in about 3cm of loose dust. Just four weeks before her own death, Rosetta photographed her friend Philae, lying in the shade on her side and surrounded by rocks.
Rosetta has now joined Philae on Comet 67P’s surface, though in a different region. During Rosetta’s last hours, she captured photographs of her final resting place with increasing resolution. During that approach, she also sent back to Earth more data on dust and gas. And at 13:19 Central European Summer Time (5:19 AM Central Daylight Time in the United States), the mission’s control center received Rosetta’s last signal. So long, dear friend.
Yes, an obituary is a slightly odd way to reflect on a space mission, but it connects to something I’ve noticed a lot of lately: We personify our spacecraft. Cartoons of Rosetta on the mission’s website show it smiling and beady-eyed. When a spacecraft fails unexpectedly, like the Japanese ASTRO-H mission earlier this year, we mourn its passing like a friend lost too soon. And for a spacecraft that carried out its entire mission, we celebrate its life and plan for an appropriate way to go out — for example, joining it with the comet it rendezvoused with.
When you’re walking through a forest, it’s nearly impossible to know the forest’s boundaries or layout. This same idea applies to our galaxy, yet we know the Milky Way is a spiral galaxy, and we know our Sun orbits the center of the galaxy from about 26,000 light-years out. With no way to propel ourselves outside of the galaxy and look at it from that vantage, how have astronomers learned so much about the Milky Way? By tracking the positions and movements of stars and gas clouds. And astronomers just received details on a billion stars from the Gaia spacecraft.
To map these stars, astronomers use astrometry — measuring the precise positions of stars. It isn’t the sexiest field of astronomy, but it’s extremely important. Astrometry lets you track the movements of stars and gas clumps, and that provides a way to map and measure the structures in space.
To measure distances nearby, astronomers use the parallax method. For a basic idea of what this is, hold your index finger a few inches from your face and close your right eye. Note the position of your finger compared to the background. Next, keep your index finger in the same physical location but open your right eye and close your left. Again, note the position of your finger against the background. That apparent movement is parallax, and nearby stars do the exact same thing against the background sky. The difference is that instead of using the distance between your two eyes, astronomers use the distance between Earth’s positions at opposite points in its orbit around the Sun. The tinier the shift they spy, the farther away the star is from us. Using geometry plus a few known values (the amount the star shifts on the sky and the distance Earth traveled over those six months of its orbit), you can calculate the distance to the star. And while Gaia isn’t on Earth, it’s near our planet, so this same idea applies.
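The geometry collapses into one tidy relation: a star’s distance in parsecs is the reciprocal of its parallax angle in arcseconds (the definition of the parsec bakes the Earth–Sun baseline into the units, and one parsec is about 3.26 light-years). A minimal sketch, with a hypothetical parallax value rather than any real star’s measurement:

```python
def parallax_distance_ly(parallax_arcsec):
    """Distance from annual parallax: d (parsecs) = 1 / p (arcseconds),
    where p is half the star's total apparent shift over six months.
    One parsec is roughly 3.26 light-years."""
    return (1.0 / parallax_arcsec) * 3.26

# A star showing a 0.1-arcsecond parallax sits about 32.6 light-years away;
# halve the shift and the distance doubles.
print(parallax_distance_ly(0.1))   # ~32.6
print(parallax_distance_ly(0.05))  # ~65.2
```

The inverse relationship is why tiny shifts are so hard: the precision of your angle measurement directly limits how far out the method works.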
Gaia uses two telescopes pointed in different directions and separated by a constant angle. (The telescope mirrors, along with other instruments, are welded to a solid ring to ensure the angle never changes.) Gaia slowly rotates, so that the telescopes continually scan the sky. Each day, the telescopes collect light amounting to 40 gigabytes of data. Over its 5-year mission, Gaia will see the complete sky 70 times, and collect detailed position information for about 1 billion stars during those 70 observations. It will also measure the three-dimensional motions of two million of the brightest stars.
Because everything in the galaxy is moving at all times, the positions and distances are measured with respect to the other stars. That makes for an enormous amount of data that needs to be analyzed. A collaboration of 450 scientists from 160 institutions is up to the task. “We take all the raw telemetry that comes from the mission — all these zeroes and ones that you see coming in from the sky — and we turn them, through our six data processing centers, into data products that the scientists can interpret,” said Anthony Brown, who leads the Gaia Data Processing and Analysis Consortium, during the press conference announcing the first data release. It’s one enormous, complex moving machine.
The science within the map
For the past two decades, astronomers have relied on the positions and distances measured by the Hipparcos satellite. With Hipparcos, they could only measure parallax out to a few hundred light-years, but Gaia is far more sensitive. Gaia extends parallax measurements out to a couple hundred thousand light-years; when the mission is complete, we’ll have a 3-D map of the Milky Way out to that distance from us.
This map isn’t just for kicks. Knowing the directions and velocities that stars are moving in, astronomers can then turn back the clock and figure out where those stars came from. Gaia is also collecting data about the stars’ compositions and temperatures. From all of this information, scientists will tease out which stars were born with what other stars, or if a grouping of suns came from a satellite galaxy that crashed into the growing Milky Way. They’ll know more about the evolution of our galaxy.
And on smaller scales, they will reveal the structures within gas clouds and known star clusters. This first data release already uncovered some 400 million stars that hadn’t been individually seen before, said Brown during the press conference. And there’s no doubt that some of the pinpoints of light Gaia sees are not stars but asteroids in the solar system or perhaps something far outside our galaxy.
The distance measurements astronomers are collecting will lead scientists to better calibrate the tools they use to measure distances much farther out than a couple hundred thousand light-years. Any distances greater than the Hipparcos distances had been measured using specific types of brightness-varying stars. The amount of time it takes for such a star to cycle from its maximum brightness to its minimum and back again depends on the amount of light it produces. So if you measure the time that cycle takes, you can calculate how bright the star should be. That gives you a specific-wattage light bulb. (If we know the bulb should be, say, 100 watts, and we see it as fainter than that, we can calculate how far away it is.) With Gaia, astronomers have measured the distance to a few hundred of these variable stars using parallax. Those detections can be used to calibrate the variable-star period-distance relation and serve as a way to double-check that scientists have understood that relation. So far, so good.
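The light-bulb logic above is the inverse-square law: measured brightness falls off as the square of distance, so if you know an object’s intrinsic “wattage,” you can solve for how far away it is. A minimal sketch with illustrative units (the numbers are made up for the bulb analogy, not real catalog values):

```python
import math

def distance_from_brightness(luminosity_watts, flux_watts_per_m2):
    """Inverse-square law: flux = L / (4 * pi * d**2),
    rearranged to solve for the distance d."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_watts_per_m2))

# If a known 100-watt "bulb" appears 10,000 times fainter than an
# identical bulb nearby, it must be 100 times farther away.
d_near = distance_from_brightness(100.0, 100.0)
d_far = distance_from_brightness(100.0, 100.0 / 10_000)
print(d_far / d_near)  # ~100
```

Gaia’s parallax distances anchor the “wattage” side of this relation for a few hundred variable stars, which is exactly the calibration described above.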
This first batch of Gaia data includes brightness changes for 3,000 variable stars, the positions of 1.1 billion stars, motion information for more than 2 million of those 1.1 billion stars, and positions for more than 2,000 active galaxies. The team plans to release another three data sets before the final release, which is scheduled for the early 2020s and will incorporate all five years of observations.
Using the telescopes that have come in the decades before Gaia, we know that Earth orbits the Sun, and the Sun lies within a disk of stars and gas and dust. The cloudy stream arcing across the sky at night is the Milky Way’s disk. At the center of the disk lies a supermassive black hole, and our galaxy’s spiral arms pinwheel and rotate around that center. From decades of mapping the stars and gas clouds, we have a two-dimensional illustration of our spiral galaxy. But with Gaia, we’re switching from that folded map in your glove box to a spectacular “3D motion picture,” as Brown puts it.
Last week, NASA launched its newest spacecraft on a mission to sample an asteroid and send that collected rock and dust back to Earth. The OSIRIS-REx spacecraft (short for Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer) is on a two-year journey to Bennu, an asteroid about 1,600 feet (490 meters) wide and orbiting the Sun in a similar orbit to Earth’s. The spacecraft will get to Bennu in August 2018, and spend the next two years surveying the space rock. It will produce a range of different types of maps — photographic, altitude measurements, mineral maps, heat maps — and from those data, scientists will choose a location on the asteroid to snag samples. In the summer of 2020, the spacecraft will then approach the chosen location and stretch out its robotic arm. An instrument at the end of the arm will send out a puff of gas to agitate the surface material, and any loose rocks and dust will be sucked up into a capsule. (I really hope we get to see video footage of this …) In the following spring, OSIRIS-REx will head back to Earth. Then on September 24, 2023, the spacecraft will let go of its capsule holding the sampled material, and send it plunging through Earth’s atmosphere. Finally, scientists will have pristine material from asteroid Bennu.
But it’s not the first pristine material from any asteroid.
Japan beat NASA to that title 6 years ago, when its Hayabusa spacecraft returned a bit of dust from asteroid Itokawa. That mission launched in 2003 to orbit the asteroid (which is about the same size as, but more potato-shaped than, Bennu) before touching down on its surface. The sampling process didn’t go quite as planned, unfortunately. The craft was supposed to fire small metal projectiles into the asteroid to loosen bits of rock and dust from the surface, but those didn’t fire. Instead Hayabusa collected only the dust that flung upward when the craft landed. That sampled dust, only about 1 milligram of uncontaminated material, arrived on Earth in June 2010. For comparison, OSIRIS-REx is expected to collect between 2 ounces (60 grams) and 4.4 pounds (2 kilograms) of material.
The Japanese Aerospace Exploration Agency (JAXA) launched its second asteroid-sampler, Hayabusa2, on December 3, 2014. It should arrive at its target, asteroid Ryugu, in mid-2018. And in late 2019, Hayabusa2 will head back toward Earth carrying several grams of rocky asteroid material. This sample is expected to touch down on our planet in late 2020 — still nearly three years before OSIRIS-REx’s sample reaches us.
At first glance, it might seem a bit repetitive to have three asteroid sample-return missions, but they’re actually going after very different targets. Just like planets and stars come in many varieties, so do asteroids (and comets, for that matter).
Hayabusa studied an S-type asteroid, Hayabusa2 is on its way to a C-type asteroid, and OSIRIS-REx is hurtling toward a B-type asteroid. So what’s the difference? These classifications refer to the asteroid’s appearance and thus surface composition. S-types are stony or silicate materials. These tend to be lighter-colored, so they reflect more sunlight than other asteroid types. C-types are the most common, and are carbonaceous with darker, coal-like surfaces. B-type asteroids are a subclass of C-type, but with even darker surfaces (B is for black — creative). The different surface brightnesses come not just from the differing materials that make up the asteroids, but also from the effects of “space weathering.” When high-speed subatomic particles (from the Sun or from even farther away) or pieces of dust slam into any space object, they can change the surface in many ways: cratering, color tweaks, and chemical composition changes.
By sending these missions to different asteroid types, scientists are investigating the broad spectrum of space rocks. Did these different types of rocks form through different processes? How much larger were their parent bodies? Are their compositions what we actually think they are? (And, of course, what do these asteroids tell us about how life arose on our planet?) Plus, with each mission come more advanced technologies — better cameras and other detectors, and improvements about how to gather sample material (and hopefully avoid the mistakes of the past).
Three decades ago, the only planets that we had ever seen were those in our solar system. Then two decades ago, humankind knew of a dozen planets orbiting stars other than the Sun. These types of worlds became known as exoplanets. At that time, I was in high school and I was already hooked on the universe and astronomy, but the idea of finding new worlds outside of our home solar system was an intriguing draw to study physics and astronomy.
Ultimately, I didn’t continue down the exoplanet astronomy PhD path as a career. That’s partly because I found my way into cosmology and galactic astronomy, into learning how stars evolve and die, and into all the other objects in our solar system. I didn’t have to choose one path. Instead, I now get a taste of each subject in astronomy because I write about all of them. But of course, the excitement of new worlds outside of our solar system never went away.
Now we know of 3,518 worlds (or 3,375 worlds, depending on which reference source you prefer) outside our solar system, and the tally constantly rises. We’ve found planets larger than Jupiter orbiting their stars closer than Mercury orbits the Sun and a planet with three host stars. We’ve imaged planets still glowing from the heat of formation, and watched as exoplanets vacuum material along their orbits to etch circular paths in the dust disks around their stars. We’ve found planets around the dense leftovers of once-Sun-like stars. We’ve found rocky planets with seething hot surfaces, ice-cold gaseous planets so far away from their host stars that we can’t figure out how they got there, and everything in between.
But none of those worlds has been quite as exciting as the discovery announced last week: astronomers found a planet circling the star next door. Now we have a system close enough to send robotic explorers (although it would take a while to get there), and close enough to have a shot at observing with the next generation of telescopes.
In the enormous universe — where our home galaxy is 120,000 light-years wide, the next nearest large galaxy is millions of light-years away, and the cosmos holds billions of galaxies — the star only 4.2 light-years from us hosts a planet. And this planet, Proxima b, orbits its small, cool, red star at the right distance that the temperature could keep surface water in liquid form.
Of course, we have no clue if this exoplanet has water, an atmosphere, or even if it’s rocky like Earth. But so far, scientists know enough about it to be pretty excited about learning more about Proxima b. We’ve come a long way in the couple of decades since discovering the first planet orbiting another star.
Even though astronomy is my first love, sometimes I wander away and explore other science. This week, I attended a mechanical engineering conference and sat in on sessions specifically devoted to the influence of origami in engineering design. Lucky for me, a few talks combined this area of engineering with space-based applications.
One of the coolest was a compressible tube whose structure is based on origami folds. This type of tube (called a bellows) has all kinds of uses on Earth — like connecting a jet bridge to an airplane’s open door — and in space, it can protect a smaller tool that lies inside the bellows but can extend out.
The most advanced rover currently rolling around Mars, Curiosity, has a drill as part of its system that samples material on the planet. That drill and its associated instrumentation lie within a metal bellows. NASA’s next Red Planet rover, Mars2020, will also be equipped with a bellows. And researchers have been working on an origami version for future space exploration missions.
A bellows needs to be able to compress — for storage — and expand as needed. (Think again of the bellows that lies between a jet bridge and an airplane. Both the bellows and jet bridge are compressed and closest to the terminal when not in use.) The metal bellows that will fly on Mars2020 has a 1:8 compression ratio, meaning its expanded length is 8 times its compressed length. But what if you could make that ratio a smaller number?
Origami is the art of paper folding, so, perhaps unsurprisingly, its influence leads to a bellows with a much smaller compression ratio: 1:30. And that leads to a smaller compressed structure. Brigham Young University’s Jared Butler, one of the researchers working on this origami bellows, explains why this is beneficial. “If I can reduce the amount of volume my bellows takes in compressed state, I can also reduce the amount of shaft length that is used to stow that bellows, which reduces the mass of my mechanism.” And that’s good news in spaceflight.
The researchers had focused on a bellows that could fly on Mars2020, and that meant they could compare its lab-tested specifications to the metal bellows on Curiosity. After deciding on the fold pattern, called Kresling, they folded by hand several samples of different flexible industrial materials and films (like polyethylene). The Red Planet is home to harsh environmental conditions, so to ensure their origami bellows could withstand those conditions, Butler and his colleagues worked with the Jet Propulsion Laboratory.
To simulate martian dust storms, they placed the bellows in a contraption where a powerful fan whipped around sand particles, battering each bellows. Mars’ thin atmosphere blocks very little ultraviolet (UV) radiation, so plenty reaches the surface; to test that exposure, the researchers placed each sample in another chamber with a deuterium light bulb to bathe each bellows in a high UV dose. To test for the wide temperature swings between each day and night, Butler and his colleagues used a different chamber with dry ice and watched how many compression-expansion cycles the samples could withstand. The key was testing how soon the structure tore (if it did at all), which not surprisingly occurred at the points, called vertices, where multiple folds meet. While they had very promising results, NASA chose the previous metal bellows design for the future Mars2020 rover.
A bellows for an ARM
But that doesn’t mean the work was for naught. A mission still in the development stages would use a larger version of this Kresling origami bellows — if the mission gets the go-ahead from Congress. This is NASA’s Asteroid Redirect Mission (ARM), and it’s a bit controversial (but that’s another story for another day). In this project, a larger drill would stay contained within the origami bellows the entire time, expanding and compressing when in use. With ARM, the instrument would operate not on a planetary surface but on a small asteroid flying through space. This type of environment’s harsh conditions are different from those on the martian surface. For example, there aren’t dust storms to contend with in open space, but there are micrometeoroids. The good news is the drill and bellows would sit within a titanium cage, which would block the meteoroids and any UV radiation. They still need to test for rocky debris: that’s material that the action of drilling into the asteroid would fling up, says Butler. A bellows would protect the rest of the spacecraft, its mechanics, and gears from that debris.
So far, this origami structure has passed every test the researchers have put it through. The BYU team still has a few unresolved issues, though, with the bellows. First is that all the samples they’ve worked on have been hand-folded. With 59 fabrication folds per sample, there’s bound to be inconsistencies among the samples. They need to come up with another way to make the exact same structure every single time.
Another issue is that to end up with a cylinder, you have to take a flat sheet, crease the folds, and then connect two opposite edges of that sheet to form a cylinder. Tape and other similar adhesives don’t work at the frigid temperatures of space. The engineers have sewn along that edge with Kevlar thread, but that puts small holes into the bellows as a result of the stitching. But Butler isn’t worried about overcoming those obstacles.
Not just drills
The origami bellows isn’t the only space-based application of origami engineering. The large solar panels on spacecraft, as well as solar sails, which propel space probes via the pressure of sunlight, have to be packed much smaller for launch. (The storage compartment in a rocket is only so big.)
And exoplanet hunters are also hoping that the near future will see another structure that could make use of origami’s influence. This would be a huge sunflower-shaped starshade that would fly in formation with a telescope and block the light from a star, allowing that telescope to look at the much fainter light reflected off an exoplanet. This shade would be 100 feet in diameter. And to safely cram that material into a rocket, one idea is to fold the central disk and then wrap the petals around the folded disk. Once in space, it would unfurl and unfold. Even though a launch-ready starshade is still a decade away, researchers are building prototypes in the lab to find the ideal fold patterns and material.
These space-based structures are only a few ways that engineers are incorporating the centuries-old art of origami. Of course, this influence is adding a coolness factor. But it’s also improving upon the technologies that have already flown in space.
Most of the space news we hear about comes out of the biggest ground-based telescopes and the observatories launched into space. I admit, I’m guilty of focusing on this sort of news, too. The biggest and most expensive projects are often equipped to touch on a broader range of science. (Hubble, for example, has cost billions, but astronomers have used it consistently for 25 years to study everything from the atmospheres of planets orbiting other stars to the oldest and most-distant galaxies we can see in our universe. The Cassini spacecraft has spent over a decade at Saturn uncovering a diverse and exciting system — including learning that a saturnian moon might have the right conditions for life.) But astronomers have a much larger bag of tools. One of those is an array of small telescopes working in unison. I’ve touched on one small-telescope setup before for Discover. Another is sounding rockets, with cameras and other detectors launched into Earth’s atmosphere to collect data for a few minutes. And yet another is balloons carrying instruments for weeks at a time. In this post, I want to bring attention to this third one.
Climb into the atmosphere
The type of light humans see — so-called visible light — makes up only a tiny portion of all types of radiation. Visible light passes right through Earth's atmosphere, but most other radiation doesn't. The atmosphere blocks some light at lower energies than we can see, like infrared, and nearly all of the higher-energy radiation, like X-rays and gamma rays. Balloons are a great way to get a science experiment above at least part of Earth's atmosphere, giving it a better chance of catching some of that cosmic radiation.
Some balloons can stay afloat for hours, while others remain in the atmosphere for weeks or more. It all depends on the balloon's structure. Those with openings at the sides and bottom allow gas to escape as the balloon rises, so they last for shorter stretches and are at the mercy of the weather. Those with the gas sealed in can stay afloat for weeks. (And NASA is currently developing an Ultra Long Duration Balloon, or ULDB, that can fly for about 100 days.)
None of these balloons are what you’d find at your local Party City. Instead, most are enormous — like, 40 million cubic feet enormous. (NASA uses the following comparison: a football stadium could fit inside one of these balloons when it’s inflated. Because in America we like sports comparisons.) These balloons need to hold a huge volume of gas in order to lift thousands of pounds of scientific instruments and their electronic brains miles into the atmosphere. Even the smaller scientific balloons are still about a million cubic feet.
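For a rough sense of why it takes tens of millions of cubic feet of gas to float a few thousand pounds of payload, you can run the buoyancy numbers yourself. This is a back-of-envelope sketch, not a NASA figure: the air density near a typical float altitude of ~35 kilometers is an assumed round value, and real balloons carry additional mass in the envelope itself.

```python
# Rough buoyancy estimate for a large zero-pressure scientific balloon.
# The densities below are assumed round numbers for illustration.
FT3_TO_M3 = 0.0283168
volume = 40e6 * FT3_TO_M3        # ~40 million cubic feet at float, in m^3

rho_air = 0.008                  # air density near 35 km altitude, kg/m^3 (approx.)
rho_he = rho_air * 4.0 / 29.0    # helium density at the same temperature/pressure
                                 # (molar mass ratio: 4 g/mol He vs ~29 g/mol air)

lift_kg = volume * (rho_air - rho_he)   # net buoyant lift, kg
print(lift_kg * 2.2046)                 # total mass it can float, in pounds
```

With these assumptions the lift works out to roughly 17,000 pounds — balloon, rigging, and instruments combined — which is why "thousands of pounds of scientific instruments" demands a stadium-sized envelope at altitude.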
OK, that's all interesting, but the main question is: have science instruments flown on balloons actually collected important data? Yes.
We know of the telescopes that flew in space to study the universe's earliest light, called the cosmic microwave background (CMB). And we know of ground-based observatories that also collect this light (like the one behind the BICEP2 hoopla from 2014). But some of the most important measurements of the CMB's tiny variations in temperature were made by a balloon that flew above Antarctica for 10.5 days in 1998, and again in 2003. That was the BOOMERANG project.
Another balloon-flown experiment above Antarctica discovered that the highest-energy particles from space, called ultra-high-energy cosmic rays, can generate pulses of radio light as they traverse Earth's atmosphere. The ANITA project, hanging from a balloon lofted above Antarctica, detected those radio waves reflecting off the Antarctic ice.
And over the past several days, smaller balloons carrying lighter loads — these are "only" 90 feet across and carry about 40 pounds of instruments — have been taking off from northern Sweden for short flights. This is the BARREL project, and the onboard detectors are collecting X-rays to study Earth's radiation belts.
I could talk about dozens more experiments that have flown into Earth's atmosphere via balloons over the past few decades — but I'll spare you the details for now. The point is that when you're hearing about astronomical discoveries and news, know that it's not coming only from billion-dollar telescopes in space and multimillion-dollar observatories on the ground. Smaller projects — many of them built almost entirely by students — are also extremely important tools that we're using to learn about the universe.
Earlier this year, the X-ray astronomy community experienced the highs of a successful observatory launch followed only a month later by the lows of that spacecraft's demise. This was the Japanese-led Hitomi X-ray space telescope, and it was supposed to shed light on the high-energy processes of the universe. Instead, it broke apart shortly after giving astronomers a taste of what the craft could do. Its loss is a major hit to X-ray astronomers, but if the current talks pan out, we might see Hitomi version 2.0 launch in 2020 or 2021.
A bit of history
Here's a short reminder of what happened earlier this year. ASTRO-H launched smoothly on February 17 into a circular orbit around Earth. (As with other missions from the Japanese Aerospace Exploration Agency [JAXA], the ASTRO-H observatory was renamed just after launch; its new name was Hitomi.) Everything appeared stable until March 26, when JAXA couldn't communicate with the craft. Something was wrong.
We now know that after Hitomi had reoriented to observe a new target, a software error combined with human error caused the craft to misjudge its rotation, sending it spinning out of control. Several pieces of the spacecraft snapped off, including the solar panels and the optics section that extended out from the main body. Losing these pieces was fatal: without its solar panels, Hitomi couldn't collect energy to power the observatory. After investigating the March 26 events, JAXA announced April 28 that there was no way to restore the mission.
Why it mattered
The first time I heard about ASTRO-H was actually pretty late in its development. It was at a high-energy astrophysics meeting in April 2013, when X-ray astronomers were excitedly planning their observing projects.
ASTRO-H's main instrument, the Soft X-ray Spectrometer (SXS), would revolutionize our view of the high-energy sky. The SXS would detect individual packets of X-ray light and directly measure the energy of each of those X-ray photons. Scientists would know, for example, how bright different regions of a gas cloud are at each X-ray energy. From that data, astronomers could piece together the movement of the object — mapping gas clumps of different elements flying out from the site of a stellar explosion, or the motions of colliding galaxy clusters.
Other telescopes — like NASA's Chandra and Europe's XMM-Newton — carry X-ray spectrometers, but those work differently. They spread the light into a rainbow, and can get detailed information on only point sources (like a bright star or a distant active galaxy). The SXS could get that level of detail from several areas within an extended object (studying, for example, the winds near the center of a large galaxy). Plus, the SXS would be some 30 times as sensitive as any other space-based X-ray spectrometer.
It was ASTRO-H’s bread and butter. And it was a type of observation that X-ray astronomers desperately wanted.
This year's high-energy astrophysics meeting took place the first week of April, just days after Hitomi's communication failure. Instead of an upbeat session showcasing the progress of the spacecraft, team member Andy Fabian of the University of Cambridge updated the X-ray astronomy community on the situation. "This is extremely sad and distressing for the enormous team working on this," he said. The air in the room was heavy, gloomy.
During its five functioning weeks, Hitomi gave astronomers a taste of what the SXS instrument could have delivered. The observatory watched the bright center of the Perseus cluster of galaxies, and the collected data, said Fabian during this same presentation, “are completely transformational.” (The Hitomi team published these observations in a recent Nature paper.)
Even though Hitomi is completely lost, neither JAXA nor NASA are giving up on the idea of a successor mission.
A July 14 presentation from Saku Tsuneta, director of JAXA's Institute of Space and Astronautical Science, outlines a proposed mission. (Unfortunately, the presentation PDF is in Japanese, and so I had to rely on Google Translate and similar apps.) From what I can make out, this observatory's focus would be the SXS instrument. The spacecraft's body and design would mimic Hitomi's, which would speed up development and construction. The launch goal is 2020 — if the mission and its funding are approved. During a July 15 press conference, the Japanese Minister of Education, Culture, Sports, Science and Technology Hiroshi Hase spoke of his support for a successor to Hitomi.
But even if JAXA does get approval to build Hitomi version 2.0, they need NASA’s involvement. That’s because JAXA wasn’t responsible for the main instrument — America’s space agency was.
That’s where the NASA X-ray Science Interest Group comes in. On June 8, the group of about 75 researchers discussed whether NASA might participate in a subsequent flight. In the weeks since, the group compiled a white paper to support the science case, and that was presented at a NASA Advisory Council Astrophysics Subcommittee meeting over the past two days. The science case is there — no one doubts that.
There is no X-ray instrument like the SXS in space right now. The European-led mission scheduled for a 2028 launch, Athena, will carry a similar instrument, but that's a long time to wait for the views that X-ray astronomers had hoped to have now. If NASA and JAXA decide to move forward on a Hitomi successor with the SXS, the project could launch in 4 to 5 years. Both agencies will meet early next month to talk further about collaborating on a possible re-fly.
No scientist ever wants his or her experiment to go wrong. I’ve seen the disappointment before, for different astronomical projects; it’s heartbreaking. “There’s no question we went through the five stages of grief when we first heard about the spacecraft,” says NASA X-ray Science Interest Group Chair Mark Bautz.
Astronomers spend years, sometimes a lot longer, developing the hardware and the software to run said hardware, plus the subsequent observing plan. “We’ve been waiting for this for decades — literally,” says Harvard X-ray astronomer Randall Smith, “and to get a small taste and then lose it has just been awful.”
In space science, you usually get one shot. Maybe ASTRO-H will get a second.
We all have our favorites. Some stargazers prefer our rust-hued neighbor, Mars. Others instead look toward the Orion Nebula, the glowing stellar nursery. Personally, I’m quite fond of our galaxy’s center. There, the extremes of nature meet in spectacular fashion — and give us a pretty great laboratory to explore those extremes.
I know, other writers at Discover have also focused on this same target. There are reasons for that popularity.
First of all, a black hole takes center stage, and black holes are pretty rad. This one weighs in at 4 million times the mass of our sun, and all that mass is crammed into a space not even 20 times as wide as our sun. That makes for a very dense region of space. (Which certainly makes sense, because black holes are the densest objects nature makes.) Anything that comes too close to a black hole — anything that crosses the point of no return — can never escape the black hole's gravitational pull. For our galaxy's central supermassive black hole, with the unexciting name of Sagittarius A*, that tipping point is around 7.3 million miles from the center. That sounds like a lot, but in the grand scheme of things, it's not. Our sun's radius is about 270,000 miles (430,000 kilometers).
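That 7.3-million-mile figure is easy to check with the Schwarzschild radius formula, r_s = 2GM/c². The sketch below assumes a non-rotating black hole and standard round values for the physical constants:

```python
# Back-of-envelope Schwarzschild radius for Sagittarius A*,
# assuming a non-rotating black hole of 4 million solar masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # mass of the sun, kg
M_TO_MILES = 1 / 1609.34

M = 4e6 * M_SUN                  # mass of Sgr A*
r_s = 2 * G * M / c**2           # Schwarzschild radius, meters
print(r_s * M_TO_MILES)          # ~7.3 million miles
```

Running this gives roughly 7.3 million miles, matching the "point of no return" quoted above — about 27 times the sun's radius.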
Whizzing nearby and around the black hole are dozens of stars. Tracking how those pinpricks of light move in the presence of Sagittarius A*’s immense gravitational pull is actually why astronomers even know the black hole exists and how heavy it is. But the black hole doesn’t just calmly sit at our galaxy’s center. It spins, likely dragging the fabric of space with it. The black hole munches on gas that comes too close. It throws out flashes of light. Surrounding the black hole is a tumultuous environment, laced with magnetic fields and hot plasma and who knows what else. It’s a constantly evolving, congested place.
Astronomers have trained a brigade of telescopes on the center of our galaxy. They’ve seen X-ray flashes and a diffuse X-ray glow. They’ve detected infrared flares, gamma-ray signals, and constant radio waves. The galactic center glows in every color of the radiation rainbow.
But there's a lot they still haven't seen, like the outline of that black hole — a shadow marking the border of no return, beyond which nothing escapes. Astronomers probably won't have to wait much longer to see it. They've been prepping a system of radio telescopes scattered across Earth to image that shadow, and the long-awaited photo may come next year.
We also think the black hole's environs boost electrons and other lightweight particles to extraordinarily high energies. The power required to do this is beyond anything that exists on Earth, and so the center of our galaxy is the nearest laboratory for figuring out how particles with those energies can even exist.
The Milky Way's center is a location rich in extreme astronomy and physics, making it a prime target for the armada of telescopes we have today. My declaration of the best place in the universe (well, aside from the comfort of our hospitable planet) makes for my introduction to the blogging world. Welcome to Astrobeat, where I'll explore the ever-evolving rhythm of the universe — from new research, to the stories of those looking toward the cosmos, to historical perspectives, and everything in between.