The one-dimensional volcano

By Phil Plait | June 15, 2010 7:30 am

Think this is just another devastatingly gorgeous picture of a volcano from NASA?


Well, you’re right. Kinda.

First, the image is from NASA’s Earth Observatory-1, which — surprise! — observes the Earth. The volcano in question is Volcán Villarrica, a 2850 meter (9300 foot) snow-capped stratovolcano at the southern tip of Chile. It’s a fairly active mountain, frequently ejecting ash and airborne rocks called pyroclasts, and causing lahars (mud flows). You can see the mess it’s made to the east (right), and to the west there is a vast network of grooves caused by flowing mud and lava.

So, cool picture, right?

The thing is, this isn’t a picture. At least, not really! You’d expect that EO-1 is equipped with a camera much like a digital camera you can get in a store (though probably a tad more pricey). And in fact, most cameras on board satellites are like that: a two-dimensional array (or grid) of light-sensitive diodes. When exposed to light, they create electrons which fill each pixel like water fills a bucket. Electronics then read out the electrons and count ‘em up. Brighter spots have more electrons, dimmer spots have fewer. Tadaa! Picture.
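
The bucket analogy maps directly onto a few lines of code. Here is a toy model of a framing sensor (all the numbers are invented for illustration): photons arriving during the exposure free electrons in each pixel’s “bucket,” and the readout simply counts them.

```python
import numpy as np

rng = np.random.default_rng(0)
photon_flux = rng.random((4, 4)) * 1000   # hypothetical photons per pixel per second
exposure_s = 0.01                         # shutter open for 10 ms
quantum_efficiency = 0.8                  # fraction of photons that free an electron

# Exposure: each pixel's bucket fills with electrons
electrons = photon_flux * exposure_s * quantum_efficiency

# Readout: count the electrons; more electrons means a brighter pixel
image = electrons / electrons.max()       # normalize to 0..1 brightness
```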

But that’s not how the Advanced Land Imager on EO-1 works. The following description simplifies things a bit (go to their page for more details), but essentially the imager has a row of pixels instead of a grid. Sitting at the back of the telescope facing downward, each pixel sees a square of the Earth about 10 meters (33 feet) on a side. Imagine the satellite were standing still and took a picture. It would see a long thin rectangular region of the Earth, 10 meters wide and some kilometers long. It takes some tiny fraction of a second to take that picture and to read out the row of pixels — that is, to get the electrons out of each pixel and record how many there were (that’s why your camera pauses for a moment after you take a picture).

But the satellite is moving, orbiting the Earth. So now imagine that the exposure and readout time of the row of pixels is exactly the same as the time it takes for the view of the camera to move by 10 meters. In that case, just as the camera is ready to take another picture, the pixels have moved (or the Earth has slid underneath them by) exactly their own field of view. When it takes the second shot, it’s seeing the very next strip of land, adjacent to the first shot. This happens continuously, so the camera is basically taking a picture of strip after strip of the Earth. Once all those rows are beamed down to Earth, software can be used to stitch the image together, turning a pile of one-dimensional images of the Earth into a glorious two-dimensional picture like the one of the volcano, above.
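
That scan-and-stitch loop can be sketched in a few lines of Python. This is a toy model (the scene array and function name are invented; a real imager has to contend with noise, pointing jitter, and timing errors that this ignores):

```python
import numpy as np

# Hypothetical ground scene the satellite flies over:
# 100 rows along-track, 64 pixels across-track.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(100, 64))

def pushbroom_scan(ground):
    """Image the ground one row at a time, then stack the rows.

    Each 'exposure' reads a single 1-pixel-tall strip; the satellite's
    motion advances the view by exactly one strip between exposures.
    """
    strips = []
    for along_track in range(ground.shape[0]):
        strip = ground[along_track:along_track + 1, :]  # one thin strip
        strips.append(strip)                            # beamed down as taken
    return np.vstack(strips)   # software on the ground stitches them together

image = pushbroom_scan(scene)
# A perfectly timed scan reproduces the full two-dimensional scene.
```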

In other words, that picture of the volcano wasn’t taken all at once: it was taken row by row, each one individually, and then stitched together after the fact. Cool, huh?

And maybe it sounds familiar: I bet you’ve used this method yourself. Ever had a scene you couldn’t fit in one picture, so you took two? And then later, using software like Photoshop, you stitched the two pictures together (creating what’s called a panoramic picture). Well, that’s what the imager on EO-1 does, but instead of turning the camera to get the second shot, the satellite lets its orbital motion naturally put the next shot into frame.

Also, this is how scanners work! They don’t take a two-dimensional image of what you’re scanning; they have a single row of pixels that moves across the document, continuously reading out what it sees. Once it’s done, all those individual rows can be stuck together to make an actual picture.

For an Earth-orbiting satellite this is a pretty clever move. Having only a single row of pixels saves weight, space, and power. Since the satellite orbits the Earth, you just use its motion to do the panning for you. Some astronomical observatories do this as well, like WISE, which circles the Earth and takes huge scans of the sky as it does. Other big ’scopes like Hubble point at their targets and sit there, letting the picture build up, but that’s not always the best way to get your data. It just depends on what you’re trying to do.

The cool thing about all this, for me, is just how many ways we can observe the Universe (and our home in it!), and the fact that smart people have figured them out. I’ve said it before, and no doubt I’ll say it again: I’m glad smart people are around. They make life so much more interesting for the rest of us!

Image credit: Jesse Allen and Robert Simmon, using EO-1 ALI data provided courtesy of the NASA EO-1 team


Comments (53)

  1. CJSF

    Up until recently, this was pretty much how all Earth observing satellite images were taken. Some, like the Thematic Mapper and Enhanced Thematic Mapper sensors on the Landsat satellites, employed a rotating sensor head and mirror assembly that required a correction for each scan line to keep the image rows parallel. This device failed several years ago on the Landsat 7 satellite, leaving the outer portions of each image with gaps between the pixel rows. Framing cameras are gaining popularity on Earth observing platforms because the technology has progressed enough to make them much smaller and lighter.


  2. Big Al

    Way back in the sixties, when I made a panoramic view, “cut and paste” was the literal solution, or cut and tape sometimes. And a photoshop was where we took our film to get it developed. Yep, uphill, both ways.

  3. This sounds very similar to the HiRISE sensor used on the Mars Reconnaissance Orbiter, which has been producing fantastic images for years and years and years — is it the same tech, or an independent reinvention?

    (Also, your ‘go to their page for more details’ link is bogus, missing the ‘ht’ from ‘http’.)

  4. Pi-needles

    First, the image is from NASA’s Earth Observatory-1, which — surprise! — observes the Earth.

    Really? With that name I would’ve thought it was a satellite that observes Neptune instead! ;-)

    Cool picture. :-)

  5. Mr. Paul

    Ah, someone has put a scanning back on a satellite :)

    Actually, this is pretty close to a swing lens panoramic camera, except it’s the movement of the camera instead of the lens.

  6. VJBinCT

    I think the usual nickname for this common technique is ‘push broom’. It is very common in NIR airborne multi-spectral photography.

  7. I have seen this photo technique used in other satellites too. What’s it called? Push broom or something?

  8. QuietDesperation
  9. hirise_hugger

    INAIS (I’m not an imaging specialist), however I believe the HiRISE camera onboard MRO
    also operates in a similar fashion (continuous image acquisition + groundspeed compensation). Out of its toaster pops 40,000 pixel-long images, yielding succulent long strips of Martian landscape goodness.

    Mmmmm, planet imagery….

  10. Ah, fixed the bad link. Thanks for letting me know.

  11. CJSF

    Yes, the arrangement of a one dimensional array is a pushbroom sensor. The Landsat TM/ETM I described above is sometimes called a “whiskbroom” sensor.


  12. Nick

    Some astronomical imagers have a normal 2 dimensional CCD, but use time-delay integration mode, which does pretty much the same thing. You scan, and at the same rate you shuffle the charge across the CCD to the readout. The Sloan Digital Sky Survey worked in this way, and the European Space Agency’s Gaia mission will operate like this too.
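
The time-delay integration Nick describes can be sketched numerically. This is a noise-free toy model (the array sizes and the `tdi_scan` helper are invented): at every step each CCD row exposes, then the charge shuffles one row along-track, so each charge packet tracks a single ground line and accumulates several exposures before it is read out.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((60, 32))     # ground brightness: 60 lines along-track, 32 wide
STAGES = 8                       # number of CCD rows used for TDI

def tdi_scan(ground, stages):
    """Shuffle charge down the CCD in lockstep with the scan.

    At time t, CCD row r images ground line (t - r), so each charge packet
    follows one ground line and accumulates `stages` exposures before readout.
    """
    n_lines, width = ground.shape
    ccd = np.zeros((stages, width))          # charge currently on the chip
    readouts = []
    for t in range(n_lines + stages - 1):
        for r in range(stages):              # expose every CCD row
            g = t - r
            if 0 <= g < n_lines:
                ccd[r] += ground[g]
        readouts.append(ccd[-1].copy())      # charge leaving the last row
        ccd[1:] = ccd[:-1]                   # shift charge one row along-track
        ccd[0] = 0.0
    return np.vstack(readouts[stages - 1:])  # drop empty rows from the scan edge

image = tdi_scan(scene, STAGES)
# Every ground line was integrated STAGES times, boosting signal-to-noise.
```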

  13. The photo sensor in every iPhone also does a similar (though not quite identical) thing. This results in very odd looking photos of fast moving objects like this one:

    Naturally a satellite never photographs anything that is moving quite that fast relative to the satellite, so that is not an issue.

  14. “By the nine moons, Tharog, your antennae keep down! Here the Observatory From Earth around comes again!”

  15. Darrin Chandler

    FYI, the NAC instruments on LRO are also push broom cameras.

    If you want to get a bit more esoteric, check out how LRO’s WAC works… it’s something of a mixture of framing and push broom. It does have a grid but it’s striped with color filters. For a given wavelength you get a short but wide image like a slat from a venetian blind. As the orbiter moves around the moon subsequent frames are taken to line up the beginning of one wavelength’s slat with the end of the previous frame’s slat. There are 5 visible light filters and 2 UV filters, with VIS and UV having separate optics.

  16. Chris

    No “click to envolcanate”?

  17. Carlos

    Someone mentioned that size was the main issue for needing these line arrays. Some of the high-resolution sensors might be 20,000 pixels across the line. If you wanted a square (frame) array sensor of the same width you’d end up with 20,000 by 20,000 or 400 Megapixels! As you can imagine that would be a very big sensor.

    The terminology is a bit interesting, and not always consistent. When the satellite’s natural orbital motion creates the necessary scanning motion for the camera, it is called pushbroom. Whiskbroom, like Landsat, usually has the sensor moving back and forth perpendicular to the satellite’s motion. These definitions, however, can and do vary between individuals and organizations.

    A hybrid (for which I’ve seen the names pushbroom and whiskbroom also applied) is what the commercial hi-res satellites like DigitalGlobe or GeoEye use. A normal pushbroom mode is too slow for these satellites – so they rotate the entire spacecraft at a high rate in order to obtain the scanning motion. This can lead to slightly different perspectives between ends of the picture. At one end of a picture you might be looking straight down; at the other end, which could be tens of kilometers away, you might be looking at an angle. This is most apparent in cityscapes, where the buildings bring out the look angle.

    In case it isn’t obvious — since both DigitalGlobe and GeoEye use line array cameras, it means that all of the satellite hi-res imagery on Google Earth/Maps was collected one line at a time. Pretty cool.

  18. You have a typo, unless “picture” starts with a vowel. See the sentence “The thing is, this isn’t an picture.”

  19. Larry

    I had problems that looked like that when I was a teenager.

    A little Clearasil cleaned ‘em right up! ;)

  20. davem

    Villarrica is a tourist destination – you can walk up the volcano in about 5 or 6 hours, to see the activity. Perhaps if the satellite got a bit closer, it could pick out the ice axe I dropped there in ’88. I stuck it in as I was sliding down at a rapid rate of knots. I have a broken finger to prove it. Still, it did make me a few friends – whenever I was asked ‘Que pasa?’, I got invites into everyone’s homes, and to see their daughters :0)

  21. Actually, this is more common than you think – iPhones do it routinely. The old style CRT monitors and TVs produce their images that way, just too fast for the eye to see (but you can use them to test shutter speeds on your camera.)

    In fact, just about every SLR, digital or film, that uses a shutter speed above 1/250 second, often slower, actually does this – in a way. There’s only so fast a shutter can uncover the film/sensor frame, and it is moving across the frame when it does so, resulting in uneven exposure at those high shutter speeds (the movement of the shutter may be very fast, but when total exposure is small fractions of a second, that movement turns out to be a high percentage of the exposure time.)

    So, there are two shutter curtains: one that opens, say, left-to-right, the other that follows and closes left-to-right. At high shutter speeds, the closing one starts across before the opening one has even completed its journey – this means that a keyhole, or slit, travels across the frame, and the time between the curtains is the shutter speed. So the whole frame does not get exposed at the same time. This makes for interesting photos when someone goes for a fast moving subject, like that propeller above, or a race car, which moves at the same time the shutter slit is moving, and ends up getting bent or distorted in the final image.

    That’s why you can’t use a standard flash at high shutter speeds. The brief pulse of light (1/10,000 second or so) only illuminates whatever the slit has uncovered at the time it goes off. Special high-speed flash units are made to emit multiple pulses and illuminate each section of the frame as the slit travels across.

    The other way to have fun is to keep the slit fixed, and slide the film across, producing an even weirder effect. This used to be used regularly at horseraces to determine the “photo finish” (maybe still is,) which can be seen here (I love that I linked to an astronomy site with that one.) The film moves at roughly the same speed as the horses, so they appear almost normal, while the background goes smeary because the same narrow slice of background gets exposed across the entire film.

    And finally, a while ago on Google Earth I found multiple images of the same plane during takeoff at Atlanta’s Hartsfield-Jackson airport. Apparently the satellite or photography plane did its sequence in the same direction as the departing aircraft. Alas, it seems to be gone now.
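
The traveling-slit distortion described above is easy to reproduce numerically. In this toy model (all names and sizes are invented), a one-pixel-wide vertical bar moves right one pixel per time step, and the “shutter” exposes one row per step, so each row of the final image catches the bar at a different instant.

```python
import numpy as np

H, W = 8, 24

def frame_at(t):
    """Scene at time t: a vertical bar one pixel wide, moving right 1 px/step."""
    f = np.zeros((H, W), dtype=int)
    f[:, t % W] = 1
    return f

# A focal-plane slit exposes one row per time step, top to bottom, so each
# row of the image was taken at a slightly different moment.
slit_image = np.vstack([frame_at(t)[t:t + 1] for t in range(H)])

# The bar, vertical in any single instant, comes out diagonal:
# the classic rolling-shutter / focal-plane skew.
bar_column_per_row = [int(np.argmax(row)) for row in slit_image]
```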

  22. Guillermo Abramson

    I also climbed it around that date. The vision of the magma boiling at the bottom of the huge chimney, seen from the rim of the vertical shaft, still haunts me and sends chills down my spine. :-O

  23. -b-

    The MOC camera on Mars Global Surveyor worked the same way. And much earlier, I seem to recall that the imaging system on Pioneer 10/11 did something similar, except the spacecraft itself was spinning, and images were built up one “pixel” at a time via a sort of mirror + stepper motor arrangement. I think Galileo was supposed to have a pushbroom imaging mode too, as a fallback in case its wacky spun/despun arrangement failed and they had to spin the whole thing.

  24. hirise_hugger

    Instead of being a “HiRISE Hugger,” maybe I should be a grammar hugger — only 3 errors in my previous comment. Doh. (note to self: coffee first, type second).

    I had mentioned that HiRISE images can be 40,000 pixels long, but just got clarification: for bin-1 mode (max resolution), the HiRISE camera can crank out approximately 125,000 lines (pixel rows). Which instead of a “push broom” or “whisk broom” sensor, is more like one of those “cleaning the elephant cage brooms.” But less smelly.

  25. This type of sensor is used in terrestrial photography as well. The common term for it is a digital scanning back.

    Due to the slowness of the scan, this type of sensor is used for still life, art reproduction, and static landscapes (no wind, please!). In good light, you might expect a single image capture to take 30 seconds or more.

  26. Lauren

    The part that’s making my brain hurt is that all the lines of pixels… well, line up to make the picture! I feel like there should be lines duplicated, or maybe spaces between some where the next orientation wasn’t quite perfect. It just amazes me not only that somebody worked out how to do that so perfectly, but that it actually translates from the pen-and-paper into real world success!

    A planet scanner. Like my scanner on my desk. But for A PLANET. Yep, makes my brain hurt. :)

  27. John

    Pardon my possibly misplaced exactitudinalness…but if the single row of sensors takes a picture 10 meters wide, isn’t that already two-dimensional? One-dimensional would require there be no width, correct? Wouldn’t that require an infinite number of scans to produce an image that was viewable in two dimensions?

    But beautiful photo, yes.

  28. CJSF

    Actually, satellites like QuickBird and GeoEye do not rotate. The “building lean” you see happens because the satellites are pointable, so they are often not pointed straight down at the scene. This causes tall objects to “lean” away from the image centerline in the look direction.


  29. 24601

    So… it’s basically a big scanner in the sky. Cool

  30. Smalls

    Is this why the captions on NASA’s Earth Observatory website often say “photo-like image”? I’ve always wondered what they meant by that.

  31. Mario

    I was there in 1998 but couldn’t get to the top; it started venting sulphurous(?) smoke. But the view and the rapid descent (sliding down sitting in the snow) were amazing!

  32. Jesse

    Presumably, if you adjusted your sampling rate so that the second row of pixels overlapped the first row, you could do some fancy math and improve the resolution of the camera to some extent? A given pixel can still only see one color for a 10m square (say), but if you shot every meter and thus had 10 overlapping images for each 10m square you could make a pretty good guess about what color each meter should be?

    I expect this has been done for decades and is old hat, but I’m not an optics guy and it sounds neat :)
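
Jesse’s intuition is sound: overlapping shots do carry sub-pixel information, and recovering it is a small linear-algebra problem. Here is a hedged sketch (the sizes and variable names are invented, and real super-resolution pipelines add noise handling and regularization): each coarse measurement averages P adjacent fine cells, successive measurements shift by one cell, and least squares inverts the stack.

```python
import numpy as np

P = 10                  # fine cells per coarse pixel (ten 1 m cells per 10 m pixel)
N = 40                  # fine cells along the strip
rng = np.random.default_rng(2)
fine = rng.random(N)    # hypothetical fine-scale ground brightness

# Build the measurement matrix: each coarse sample averages P adjacent
# fine cells, and successive samples are shifted by just one fine cell.
A = np.zeros((N - P + 1, N))
for i in range(N - P + 1):
    A[i, i:i + P] = 1.0 / P
coarse = A @ fine       # the overlapping coarse measurements

# Least-squares inversion. The system is underdetermined at the strip
# edges, so recovery is approximate, but the overlap clearly carries
# finer-than-one-pixel information that a single snapshot would not.
recovered, *_ = np.linalg.lstsq(A, coarse, rcond=None)
```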

  33. SlyEcho

    This is how cheap cameras take pictures (and video), the obvious disadvantage is that moving objects appear skewed.

  34. Timothy Reed

    I headed the optical integration and alignment of the HiRISE imager during its development and construction at Ball Aerospace & Technologies Corp. in Boulder, CO. A few years ago Emily Lakdawalla asked me to explain some of the details of pushbroom imagery (and the difficulties of getting that fantastic photo of Phoenix descending through the Martian atmosphere); the article on the Planetary Society website is available at

    The article explains some further refinements to the single pixel explanation that Phil has given. HiRISE uses time delay integration (TDI) to allow for greater dynamic range and flexibility in imaging. I’m not familiar with the particulars of the Advanced Land Imager’s focal plane array, but TDI is a common imaging method now, used not only on HiRISE but also on the terrestrial imagers QuickBird and WorldView-1 and -2.

  35. Dennis

    Why would imaging a single row of pixels be considered “one-dimensional”? Each pixel represents a non-zero length *and* width, resulting in a (very skinny) rectangular (2D) image.

  36. QuietDesperation

    the obvious disadvantage is that moving objects appear skewed.

    The skew effect is sometimes the desired result.

    Why would imaging a single row of pixels be considered “one-dimensional.” Each pixel represents a non-zero length *and* width, resulting in a (very skinny) rectangular (2D) image.

    It’s just a technical term. You’re over analyzing it.

  37. psuedonymous

    I’d always heard of this type of imaging referred to as ‘linescan’ rather than ‘pushbroom’. I guess linescan is the analogous UK term.

  38. The Viking Mars landers had stereo slitscan cameras.

  39. magetoo

    So, what kind of cool tricks can you do with the optics when you take a satellite image one pixel row at a time? (I suspect that’s where the really unexpected things, and the biggest weight savings, would be.)

  40. Long “strip” photographs of terrain can also be generated from a moving video camera. The February 2010 edition of Sky & Telescope magazine has an interesting article about generating long, high-resolution “strip” photos from the Kaguya satellite. Kaguya orbits the moon and has two HDTV cameras (angled 22.5 and 18.5 degrees below horizontal) pointing along its direction of travel (one forward, one backward.) These record and send video, but still images that approximated a down-looking camera were desired. The cameras record images of the moon’s limb as it passes in front of and underneath the moving satellite.

    One method to assemble these video frames into long, “flat” stills was to take a horizontal strip of pixels from each video frame and stack them to make a long strip of terrain. (The author’s technique was a bit more involved.) The results are quite impressive. This technique could possibly be applied to, say, a side-looking video camera in your car to make a long panoramic terrain photograph. It might also work on video taken from a steadily-rotating video camera, allowing you to take amazing panoramas at home without special lenses or multiple exposures.

    I worked for a company in the late ’80s that made automated inspection equipment. This usually involved hanging a camera over a moving conveyor. This was right at the beginning of the CCD era, so the cameras were a single row (1K pixels) that scanned across the belt. A speed sensor on the belt was fed to the camera controller so that it could scan every time the belt moved one row width. This produced a digital image one belt wide by infinitely long. We built them for everything from inspecting the security thread in money paper at the Crane Paper plant, to inspecting the degree of absorbent material packed into Pampers for P&G.

    Another use for slit cameras that goes back nearly a century is the photo-finish camera at race tracks. Since they know approximately how fast the horses are traveling, that’s how fast they pull the film. The camera is aimed at the black/white striped pole on the other side of the track, so the image shows a black/white wall behind the horses making it easy to measure. The thing is, the horses are captured in time, not in space. The further back you go, the later it is, so the one crossing first “by a nose” is easy to determine.

    - Jack

  42. Wayne on the plains

    Yeah, EO-1! I was at the launch from Vandenberg AFB. I’ll never forget the year, we’d go back to the hotel each night and see if the Supreme Court had decided who the President would be yet…

  43. Albert J. Hoch

    This used to be called a “focal plane” shutter which moved a slit across the film (before digital).

  44. G

    Villarrica is in central Chile.

  45. DaveS

    I have a friend who’s into vintage photography. He has a camera called the “Cirkut” that uses the same slit principle, but using large-format print film. This is one of his photos using his Cirkut.

  46. JB of Brisbane

    Has anybody mentioned photocopiers yet? Or flatbed scanners?

  47. Captn Tommy

    There was a gentleman in the 1890s who I think first developed the process of using a slit aperture scan camera – the lens moved across a large strip of film. His most famous shot was of San Francisco, CA, USA within days after the 1906 earthquake, from a special kite (he had designed) carrying a ten foot long clockwork camera that scanned a ten foot strip of film.

    I believe this photo was then compared with a panoramic shot the gentleman had taken with the same camera from the same spot a few years earlier. The comparison was used as a way to measure the extent of the damage, a first for aerial photography before the airplane.

    I read this information in the Smithsonian magazine several years ago.

    Captn Tommy

  48. Carlos

    @29. CJSF Says:
    Actually, satellites like QuickBird and GeoEye do not rotate. The “building lean” you see happens because the satellites are pointable, so they are often not pointed straight down at the scene.

    CJSF – You are right – QuickBird, GeoEye, WorldView and OrbView can point off-nadir (nadir=straight down) to take an image. However, all of them are line sensors, like the one in this article. As a result all of them must rotate to “paint” one line after another. That’s why the building lean can be different on one end of the picture than the other. The scanning speed can be anywhere from 5000 to 20000 lines per second. If you assume a resolution of around 0.5 meters, that means the line on the ground is moving at around 10 km/s. To cause that motion, the satellite rotates. In fact, while most of the images they take tend to be aligned north-south, the images can also be taken in other directions (say east-west). If you pull up the DigitalGlobe layer on Google Earth you’ll notice this.

    In conclusion – yes, the satellites do rotate to take an image! How do I know? I wrote the algorithm that accomplishes this for one of them!
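
Carlos’s numbers check out with a one-line calculation (the figures are the rough values quoted in his comment, not official specs):

```python
# Ground-trace speed implied by the quoted figures: at 20,000 lines per
# second and 0.5 meters of ground per line, the imaged line sweeps along at
line_rate_hz = 20_000           # lines read out per second (upper figure above)
ground_sample_m = 0.5           # meters of ground covered per line
sweep_speed_m_per_s = line_rate_hz * ground_sample_m
print(sweep_speed_m_per_s / 1000, "km/s")   # 10.0 km/s
```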

  49. 44. Albert J. Hoch Says: “This used to be called a “focal plane” shutter which moved a slit across the film (before digital).”

    Similar, but not quite the same, Albert. With a focal plane shutter, the slit moves in front of the film. With a slit camera, such as used in aerial reconnaissance and other applications mentioned above, the slit is fixed and the film is pulled across it.

    - Jack

  50. 49. Carlos Says: “yes, the satellites do rotate to take an image! How do I know? I wrote the algorithm that accomplishes this for one of them!”

    Cool, another alumnus from the (ahem) space imaging world!

    Remember that Pioneers 10 and 11, which took the first images of the outer planets, did not even have a camera on board in the usual sense. They had a “Rotating Imaging Photopolarimeter” that was essentially a one-pixel camera that used the spacecraft’s spin to do the scans.

    The Viking landers in the mid ’70s had a similar camera that was highly mechanical. It, too, had a one-pixel imager that used a pivoting mirror to make the individual scan lines, and then it rotated the whole assembly to pan across the scene. The images actually took several minutes each to generate. For the team picture, they used the ground test version of the camera. The team members didn’t have to stand there the whole time; they only had to move into place as the camera got to their position, and then they could leave after it had passed. One guy is actually in the photo three times, by moving to a new position as it scanned by!

    - Jack

  51. Buzz Parsec

    The EUV spectroheliometer on Skylab was similar. The main mirror scanned left and right and up and down to focus light from one spot on the Sun on a slit. Behind the slit was a diffraction grating that created a spectrum spanning 5 or 6 single-pixel photomultiplier tubes. Each photomultiplier tube was carefully positioned behind a slit to capture light at a single “interesting” wavelength.

    By tilting the main mirror back and forth and up and down, it could build up simultaneous images of the same area of the Sun in 5 or 6 different wavelengths.

    By holding the main mirror still and sliding the sensor array along the focal plane, it could take an EUV spectrum of a single point on the Sun. (Actually, 5 or 6 simultaneous spectra of a single point, but I don’t know how useful that was.)

    The way the mirror scanned was first left to right (using a stepping motor, one pixel at a time.) At the end of a line, it would move down one step and scan the next line in the other direction. This was so it didn’t have to waste time moving the mirror back to the other side between each row. This was described in the official documents with a really cromulent word: boustrophedonic, meaning “in the manner that an ox plows a field.”

    The spacing of the slits along the focal plane was designed to capture some of the more prominent EUV spectral lines. One of my first jobs (before launch) was to discover other positions for the sensors (which were at fixed relative positions, but free to move together along the focal (spectral?) plane) that would also capture more than one “interesting” line. I found a couple of dozen useful positions (allowing at least 3 or 4 of the photomultiplier tubes to see something non-boring), many of which were actually used during the flight.

  52. adam

    @ 37. QuietDesperation

    It’s just a technical term. You’re over analyzing it.

    Actually, I’d say it’s the opposite of a technical term. The term Phil’s looking for is “composite 2d strip image.” There’s your technical term.

