Apollo 11’s “1202 Alarm” Explained

By Amy Shira Teitel | January 5, 2018 12:15 pm

Capcom Charlie Duke, and backup crewmembers Jim Lovell and Fred Haise in Mission Control during Apollo 11’s descent. NASA

“Got the Earth straight out our front window.” As the lunar module Eagle yawed into a windows-up orientation, Buzz Aldrin looked away from the computer to see the Earth nearly a quarter of a million miles away.

“Sure do,” agreed Neil Armstrong, adding, “Houston, [I hope] you’re looking at our Delta-H.” The Earth wasn’t his main concern for the moment. The mission’s commander was laser focused on getting the spacecraft down onto the Moon’s surface for the first time in history. He had just 30,000 feet to go…

“That’s affirmative,” replied Capcom Charlie Duke. The room full of flight controllers listened to the exchange while keeping a close eye on the numbers filling their screens, looking for any little anomaly that could force an abort.

Then came Armstrong’s voice over the radio again, this time marked by a slight note of urgency. “It’s a 1202… What is that? Give us a reading on the 1202 Program Alarm…”

The 1202 program alarm is featured in just about every retelling and dramatization of Apollo 11’s lunar landing. Understandably so; it was a dramatic moment in an already dramatic event, one that could have forced an abort and left the commander of Apollo 12, Pete Conrad, as history’s first man on the Moon. But it didn’t. As we know, Apollo 11 made it to the surface, and the alarm has become little more than a story point. So what exactly was the 1202 program alarm that could have killed Apollo 11’s landing? To answer that question, we need to go back and understand a little more about how the Apollo Guidance Computer worked.

Armstrong training in the lunar module simulator. NASA.

The Apollo Guidance Computer in Super Brief

By the time Apollo missions flew to the Moon, the software that ran the mission could fit — though only just — into a set of read-only magnetic cores. This meant any piece of it could be called up at any time and run nearly simultaneously with the rest, which was pretty important. Take the moment of landing on the Moon, for example. The computer needed to take in a lot of data points at once to pull off a good landing. It had to know where the lunar module was and how it was moving, information called the state vector. It needed to maintain the right attitude based on that position, as well as velocity, altitude, and engine performance data. It also needed to adjust the abort trajectory constantly, ready to get the crew back into orbit should something force an abort.

Now think about a full lunar landing mission for a second — getting into Earth orbit, burning the engine to travel towards the Moon, recovering the lunar module, adjusting the course mid-way to the Moon, getting into orbit, landing, leaving the Moon’s surface, and traveling home. You can start to appreciate how many pieces of information would be going through that computer at any given time.

For the sake of simplicity, each task (a task in this case would be a single mission event like the lunar landing) was broken down into parts. These parts or programs were manageable modules that could be run individually while rendering the whole system more reliable.

If you’re following along you can see how this creates a potential new problem. The Apollo Guidance Computer was a single-processor computer. So how could it run multiple programs — the parts that make up a whole mission event — simultaneously? Well, it didn’t. Not really. But between the computer’s relatively fast processing and a human’s relatively slow perception, it was simultaneous enough to run the mission smoothly. Programs were also scheduled and run based on priority, with measures in place to interrupt any program should something vital come up.

In the case of the Apollo Guidance Computer, jobs were tracked in 12-word data areas called Core Sets. Each contained all the information needed to execute a given program. There were six of these Core Sets in the Command Module and seven in the Lunar Module. In each 12-word Core Set, processing information took up five words: one each for the program’s priority, its entry point address, a copy of the BBANK register, and its flags, with the last word pointing to a Vector Accumulator or VAC area. The seven remaining words, called the Multipurpose Accumulator or MPAC, were left for temporary variables or whatever extra storage the job might need.

So in short: twelve words in the Core Set, five words of memory to execute a program, and seven MPAC words to handle extra information as needed.
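
That layout can be sketched as a simple data structure. This is a loose illustrative model in Python; the field names follow the description above, not actual AGC symbols:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CoreSet:
    """Loose model of one 12-word AGC Core Set (names are illustrative)."""
    priority: int = 0                   # word 1: job priority
    entry_address: int = 0              # word 2: where the job starts executing
    bbank_copy: int = 0                 # word 3: saved copy of the BBANK register
    flags: int = 0                      # word 4: status flags
    vac_pointer: Optional[int] = None   # word 5: pointer to a VAC area, if any
    mpac: List[int] = field(default_factory=lambda: [0] * 7)  # words 6-12: MPAC

    @property
    def word_count(self) -> int:
        # Five bookkeeping words plus the seven-word MPAC scratch area.
        return 5 + len(self.mpac)

cs = CoreSet(priority=20, entry_address=0o4000)
print(cs.word_count)  # 12
```

The point of the sketch is just the arithmetic: five words of bookkeeping plus seven words of scratch space is the whole 12-word allocation a job gets.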

Apollo 11’s lunar module Eagle before launch. NASA.

Scheduling a program falls to the Executive. It starts by looking at the task’s starting address and its priority, then passes that to the NOVAC routine, which scans the Core Sets to see whether there is any available space for the program to execute, and if so where that space is. It then schedules and runs the program.

Through exhaustive testing, the team at MIT’s Instrumentation Lab designed the computer such that it would never be full at any point in a mission: there would always be space available for the next program, rules in place to interrupt a running program if something needed to run immediately, or room to schedule the program after whatever the computer was currently working through. But when Apollo 11 was descending towards the lunar surface, the computer ran out of Core Sets. This is where the 1201 and 1202 program alarms come in.

Apollo 11’s 1202 Alarms

Not long after the lunar module got into its 69 mile by 50,000 foot orbit in preparation for landing, the crew turned on their rendezvous radar to track the command-service module. This was a safety measure: the radar tracked the CSM so the computer knew where to direct the lunar module in the event of an abort. The crew left the radar in SLEW mode, meaning it had to be positioned manually by an astronaut and wasn’t sending data to the computer.

What neither the astronauts nor the guys in Mission Control knew was that the radar’s Coupling Data Units were flooding the Apollo Guidance Computer with counter interrupt signals, thanks to an oversight in the design of the computer’s power supplies. These signals ate up a little of the computer’s processing time, and the spurious job kept running in the background, taking up space. Unbeknownst to anyone, this prevented vital programs associated with the landing from completing. When a new task was sent to the computer, there was nowhere for it to go; the running and scheduled jobs were holding onto their Core Sets and VAC areas.
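
The scale of that hidden load can be checked with a little arithmetic. A minimal sketch, using figures from the Apollo 11 mission report’s anomaly discussion (6.4 kpps per coupling data unit, 11.7 microseconds of processor time per counter interrupt, and roughly 90 percent baseline load during powered descent):

```python
# Figures from the Apollo 11 mission report's computer-alarm anomaly discussion:
pulses_per_second = 12_800   # max rate from two radar coupling data units (2 x 6.4 kpps)
memory_cycle = 11.7e-6       # seconds of processor time consumed per counter interrupt

radar_load = pulses_per_second * memory_cycle  # fraction of CPU time stolen by the radar
baseline_load = 0.90                           # normal peak load during powered descent

print(f"radar interrupts: {radar_load:.0%}")                  # ~15% of processor time
print(f"total demand:     {baseline_load + radar_load:.0%}")  # over 100% -- overflow
```

Roughly 15 percent of the processor’s time was vanishing into spurious interrupts, on top of a descent workload that already ran near 90 percent of capacity. The demand simply exceeded what the machine had.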

Eventually the Executive found that there was no place to put new programs. This triggered the 1202 alarm signaling “Executive Overflow – No Core Sets” and the 1201 alarm signaling “Executive Overflow – No VAC Areas.” These in turn triggered a software restart. All jobs were cancelled regardless of priority, then started again as per their table order, quickly enough that no guidance or navigation data was lost. But the restart didn’t clear up the issue. The computer was still overloaded by the same spurious radar data, stopping new programs from running. In all, the descent triggered four 1202 alarms and one 1201 alarm.
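
That failure mode can be illustrated with a toy scheduler. This is a simplified sketch in Python, not AGC code: seven core-set slots as in the lunar module, spurious jobs that never release their slots, and a 1202-style alarm plus software restart when allocation fails. All names here are illustrative:

```python
class Alarm1202(Exception):
    """Raised when the Executive finds no free core set for a new job."""

class ToyExecutive:
    NUM_CORE_SETS = 7  # the lunar module's Executive had seven Core Sets

    def __init__(self):
        self.jobs = []  # occupied core sets, as (priority, name) pairs

    def schedule(self, priority, name):
        if len(self.jobs) >= self.NUM_CORE_SETS:
            raise Alarm1202("EXECUTIVE OVERFLOW - NO CORE SETS")
        self.jobs.append((priority, name))

    def software_restart(self):
        # A restart cancels every job regardless of priority; the restart
        # tables then re-schedule the vital ones. In Apollo 11's case the
        # spurious radar load returned too, so the alarms kept recurring.
        self.jobs.clear()

executive = ToyExecutive()
# Spurious radar-interrupt work occupies core sets without releasing them...
for i in range(7):
    executive.schedule(5, f"spurious-radar-{i}")
# ...so the next landing-related job has nowhere to go.
try:
    executive.schedule(30, "landing-guidance")
except Alarm1202 as alarm:
    print(alarm)
    executive.software_restart()  # clears everything, but the cause remains
```

The sketch shows why the restart alone couldn’t fix anything: clearing the core sets doesn’t stop the spurious work from filling them right back up.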

Eventually Buzz Aldrin noticed a correlation. At the second 1202 alarm, he called down, “Same alarm, and it appears to come up when we have a 16/68 up.” The 16/68 code — Verb 16 Noun 68 — was used to display the range to the landing site and the LM’s velocity. The command in itself didn’t place a heavy load on the computer, but on top of the existing load that extra bit of processing seemed to trigger the 1202 alarm. Realizing this, the solution was simple: ask Houston for that data instead of calling it up from the computer.

Houston, meanwhile, gave Apollo 11 a GO in spite of the alarms because of how spread apart they were — they came at mission elapsed times 102:38:22; 102:39:02; 102:42:18 (that was the 1201); 102:42:43; and 102:42:58. Had they come closer together, a restart could have wiped out navigation data, but even the few seconds separating them meant that vital information was retained. The computer behaved exactly as designed, protecting itself in a way that wouldn’t cancel a lunar landing without just cause.
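
Those spacings are easy to verify from the mission elapsed times above (a quick sketch; the hh:mm:ss values are the ones quoted in this article):

```python
def met_to_seconds(met: str) -> int:
    """Convert a mission elapsed time 'hhh:mm:ss' into seconds."""
    h, m, s = (int(part) for part in met.split(":"))
    return h * 3600 + m * 60 + s

# The five alarms during Apollo 11's descent, in mission elapsed time.
alarms = ["102:38:22", "102:39:02", "102:42:18", "102:42:43", "102:42:58"]
times = [met_to_seconds(a) for a in alarms]
gaps = [later - earlier for earlier, later in zip(times, times[1:])]
print(gaps)  # [40, 196, 25, 15]: seconds between consecutive alarms
```

Even the tightest pair was 15 seconds apart, which left the computer enough room to restart and re-schedule without losing guidance data.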

View from one of Eagle’s windows during Apollo 11’s landing. NASA.

Safe Landing

The 1202 program alarm wasn’t something either Neil Armstrong or Buzz Aldrin had seen in training. Their time in the simulators had been filled with other alarms, most of which had them reaching for the abort button. In a simulation, that was the right response; you train for the correct reaction. But when they saw the 1202 and 1201 program alarms it was the real thing, which meant the right response was completing the mission objective. They weren’t going to give up on landing on the Moon if they didn’t have to.

The guys in Houston had the same response. “We’re go on that alarm,” Charlie Duke called back up, though he wasn’t entirely calm when he said it. The astronauts and flight controllers watched the second 1202 alarm blare on board the Eagle, followed by a 1201 alarm three minutes later, then the last two 1202 alarms almost back-to-back.

“Eagle, looking great. You’re Go,” came Duke’s call from Mission Control. Thanks to a handful of clever computer programmers, he passed up word that Apollo 11 was still clear to land on the Moon.

Sources: NASA; NASA; NASA; The Apollo Guidance Computer by Frank O’Brien.

  • http://www.mazepath.com/uncleal/qz4.htm Uncle Al

    A 64 GB thumb drive is now unremarkable. When things go FUBAR kerflooey today, it is because NASA saved a buck on a cheap valve, mixed Imperial and metric units, porkbarreled solid fuel booster grain casting, or envirowhinerized previously working materials like asbestos-dichromate putty and freon-blown foam insulation.

    Should “progress” transform inconvenience into lethality?

    • Jeffrey Cornish

      I think you are being overbroad in your criticism here.
      Certainly NASA is not a perfect organization.

      Its budget depends partly on the whims (or political interests) of the congress who write the budget. This is not within the control of the rank and file NASA employee or administrator (remember that government employees cannot lobby like private industry can). So programs that absorb a large amount of resources, such as SLS, exist. (I remember the original DIRECT/Jupiter concept.)

      Safety with materials makes sense. And consider this: you may lament that asbestos-dichromate putty is banned, but have you considered the benefits of engineers, chemists, and other scientists finding something that replaces it? Think of the thousands of other places in the civil/commercial world that would turn out to need that. Remember that for every dollar we spend on NASA, $7-14 in new revenue is generated.

      As far as porkbarrel politics, I mostly agree, but I also understand that an industry that has a trained workforce and a product that has a history has a value. However, no one seems to want to fund several companies that make a product such as a solid rocket motor, and our selection processes tend to make it so there is only one winner. And the winner gets a significant ability to steer more contracts and funding their way.

      Then there are approaches such as the 1990’s “Faster, Cheaper, Better,” which ended up dissuading the Mars Climate Orbiter’s engineers from fully analyzing the effects of the fully integrated system (lack of schedule time and budget, but it was faster and cheaper that way).

      Regarding the NOAA N-Prime accident, did you read this bit:
      “the Responsible Test Engineer (RTE) did not “assure” the turnover cart configuration through physical and visual verification as required by the procedures but rather through an examination of paperwork from a prior operation. Had he followed the procedures, the unbolted TOC adapter plate would have been discovered and the mishap averted. Errors were also made by other team members, who were narrowly focused on their individual tasks and did not notice or consider the state of the hardware or the operation outside of those tasks. The Technician Supervisor even commented that there were empty bolt holes, the rest of the team and the RTE in particular dismissed the comment and did not pursue the issue further. Finally, the lead technician and the Product Assurance (PA) inspector committed violations in signing off the TOC verification procedure step without personally conducting or witnessing the operation. The MIB found such violations were routinely practiced.”

      So, your solution might not have had any effect. A technician NOTED THAT THE BOLT HOLES WERE EMPTY, and the other staff, busy with their own task, and the inspectors fell down on their job. What would your solution be?

      We are all human. We all make mistaeks, overlook things. And even those who are trained, who are certified, who have years of experience, can fail to note that something is not right, or be convinced by a colleague that the situation is acceptable.

      The solution for this is obviously, paperwork and documentation, and more documentation, and inspection of the inspectors and so on.

      Which is why you have NASA’s contractors having a standing army to build the SLS, and that changes must be reviewed and so on.

      Also NASA is failure averse. How would it look to a congressman that a large project, like the SLS, were to have a failure. And there goes that funding….

      So companies that deal with NASA get failure averse, and the cost plus structure (getting paid no matter what) don’t help to make the companies want to be agile and innovative.

      SpaceX for example is not committed to exclusively selling services to NASA. They are committed to reducing the cost of access to orbit, towards the goal of making humanity a multiplanetary species. Launching cargo to the ISS and comsats pays the bills… They have a lofty goal, and SpaceX is not averse to making mistakes to do this. (Do you remember Gwynne Shotwell’s reaction to the first Falcon 9 first stage to collide with the ASDS? It was along the lines of “We made it to the ship!”) Also, every failure is data that improves the next attempt. The fact that SpaceX does not bog down into a two-year accident report and correction cycle helps.

      How many times has what SpaceX does surprised you? If the same happened at NASA with ‘tax dollars’ (“the prototype we built ended up having a flaw and exploded, but we learned 500 things that will make the next one better. Please give us the budget to build 2 more to possibly blow up”) what would you say?

      Plenty of the rank and file in NASA support SpaceX’s efforts, which is good. and there are plenty of projects at NASA where they are doing ‘build a little, test a little, fly a little’ to make impossible technologies doable.

  • Fred Feirtag

    Here’s an excerpt from “The First Men on The Moon” by David M Harland:

    Gene Kranz’s flight control team took 4 July off, but returned to work the next
    day for their ‘graduation’ simulations. As Armstrong and Aldrin were unavailable,
    Pete Conrad and Al Bean took their places as a welcome training opportunity for
    Apollo 12. The flight controllers successfully overcame six tough scenarios during
    the morning. The afternoon sessions were to be ‘flown’ by the Apollo 12 backup
    crew of Dave Scott and Jim Irwin, the rationale being that a less-experienced crew
    would increase the pressure on the flight controllers. Three minutes into the first run,
    Koos prompted the LM’s computer to issue an alarm. A caution and warning light
    illuminated, and the computer flashed the numerical identifier for that particular
    problem. Computer alarms could result from a hardware fault, a software issue, out-
    of-tolerance data, or a procedural error either by the crew or the ground. Steve
    Bales, the guidance officer, was monitoring the LM’s computer to ensure that it
    received the correct data from Earth and that its guidance, navigation and control
    tasks were being properly executed. In this case the alarm was a 12-01. Bales had
    previously seen it during functional tests of the computer on the ground, but never in
    a simulation, and certainly not in flight. While the LM crew awaited advice, he
    checked his manual: the 12-01 alarm was ‘executive overflow’, which meant that the
    computer was overloaded. The computer’s executive was to repeatedly cycle through
    a list of tasks in a given interval of time, and evidently the time available was no
    longer sufficient to finish the tasks before it was obliged to begin the next cycle. Bales
    called Jack Garman, a support room colleague and software expert, and they agreed
    that the alarm was serious, especially since it was recurrent. With no mission rules to
    inform his decision-making, Bales called Kranz, told him that there was something
    amiss with the computer, although he could not say what, and recommended an
    abort. This call came out of the blue as Kranz had not been party to the discussion
    between Bales and Garman, but as a flight director must trust the judgement of his
    controllers — especially on abort calls — he confirmed it. Charlie Duke, serving as
    CapCom, relayed the abort to the crew, who performed the manoeuvre and made as
    if to rendezvous with their mother ship (which was not actually in the simulation). At
    the debriefing, Koos pointed out that the 12-01 had not necessitated an abort; in the
    absence of a positive indication that the computer was failing they should have
    continued. Shocked that he had made a bad call, Bales got together with the people
    from the Massachusetts Institute of Technology who had written the software, in
    order to investigate the alarm. Later that evening, he called Kranz and conceded
    there had been no need to abort. The next day, 6 July, Koos triggered a range of
    computer alarms to enable Bales’ team to record data on the ability of the computer
    to continue to function. On 11 July Bales added a new mission rule listing the alarms
    that would require an immediate abort; in all other cases the powered descent was to
    continue pending a positive indication of a critical failure.

    “Go!” called Willoughby.
    “Go!” called Aaron.
    “Go!” called Zieglschmid.
    “CapCom we’re Go for landing”
    “Eagle, Houston. You’re Go for landing.”
    On hearing this, Jan Armstrong sat up on her heels at the foot of her bed. Pat
    Collins exclaimed, “Oh God, I can’t stand it.”
    “Roger. Understand, Go for landing,” acknowledged Aldrin. “3,000 feet.” But
    then, “Program alarm.” He keyed the DSKY for the code, “12-01.”
    “Roger,” acknowledged Duke, “12-01 alarm.”
    “Same type,” responded Bales immediately. “We’re Go, Flight.”
    “We’re Go. Same type,” relayed Duke, the tension evident in his voice. “We’re

    On concluding his momentous shift, Kranz shook Tindall’s hand, found Koos to
    thank him for throwing the 12-01 program alarm into the final simulation, and then
    he accompanied Douglas Ward, his Public Affairs Officer, across the road to the
    News Center.

  • Pascal Xavier

    You are just repeating what you have read, Amy.
    This is the real explanation of the famous 1202 alarm (which occurred during the descent of Apollo 11, and caused much trouble in the control room). In the anomalies section of the Apollo 11 mission report, in a comment entitled “Computer Alarms During Descent” on page 16-13, you find this (excerpt of the comment):
    “Any difference in phase or amplitude between the two 800-hertz voltages will cause the coupling data unit to recognize a change in shaft or trunnion position, and the coupling data unit will slew (digitally). The “slewing” of the data unit results in the undesirable and continuous transmission of pulses representing incremental angular changes to the computer. The maximum rate for the pulses is 6.4 kpps, and they are processed as counter interrupts. Each pulse received by the computer requires one memory cycle time (11.7 microseconds) to process. If a maximum of 12.8 kpps are received (two radar coupling data units), 15 percent of the computer time will be spent in processing the radar interrupts. The computer normally operates at approximately 90 percent of capacity during peak activity of powered descent. When the capacity of the computer is exceeded, some repetitively scheduled routines will not be completed prior to the start of the next computation cycle. The computer then generates a software restart and displays an Executive overflow alarm.”
    End of quote.
    Now tell me:
    1) Why would the engineers have provided a radar mode which was causing a problem that they perfectly knew that it would happen (and which was not happening when the radar mode commutator was on the LGC position)?
    2) Why would engineers have made the processor count itself the radar pulses when, in all other systems, including the one of the Saturn rocket, these pulses are counted by electronic counters instead and never, NEVER, by the processor itself; for, if these unwanted parasite pulses had been counted by electronic counters, the processor would just have read the count of pulses on the electronic counters (like all other systems do), and could have seen that the count of pulses was too high; it would not have had to restart, it would just have issued an alarm that the radar mode was on the wrong position.
    Are you able to rationalize this, Amy?

  • J. Kevin Dix

    I can’t confirm this completely, as the book I read it in has been passed to another “space cadet”, but I seem to recall that Dean Krantz wrote in his book, “Failure Is Not An Option”, that this particular alarm actually “occurred” (was injected) in the last simulation that was run prior to Apollo 11, and that it had never been seen in simulation before. The team aborted the “landing” because of it, and in the debriefing it was determined that this had been a misjudgement and the landing should have gone ahead. Thus, when it did actually arise, the decision to go ahead was well supported.

    • Jim Lakey

      You are confusing Gene Kranz with Dean Koontz.

      • J. Kevin Dix

        You’re right, I did make a hash of Mr. Kranz’s name.

  • Jim Lakey

    The altitude of the 1202 was 3000ft, not 30,000.


Vintage Space

Vintage Space is all about digging into the minutia of the space age. Rather than retelling glossy stories of astronauts, Vintage Space peels back that veneer to look at the real stories — the innovations that failed, the unrealized technologies, and the human elements that are less publicity-friendly and so often remain buried. Gaining a clear picture of spaceflight's past ultimately helps us understand our present position in space and have a more realistic expectation of what the future might bring.
