A war waged by remote control with military drones from thousands of miles away may sound like one of the most impersonal conflicts imaginable. But the film “Eye in the Sky” shows how modern drone warfare can also be intensely personal, with the surveillance capability to watch a potential human target for hours on end. The timely thriller also does not shy away from the thorny issue of how military commanders and political leaders weigh the value of human life in a world transformed into a global battlefield.
The central conflict of the film rests upon a familiar scenario: weighing the cost of an innocent life against the possibility of saving dozens more lives by preventing a suicide bombing. The urgent mission quickly draws together a wide cast of characters scattered across the globe. Colonel Katherine Powell (Helen Mirren) spearheads the military effort from a UK military command center. Lieutenant General Frank Benson (the late Alan Rickman in his final film role) represents Powell’s superior officer coordinating with the higher echelons of the British government at Whitehall in London. Jama Fara (Barkhad Abdi) and his team gather intelligence on the ground in Nairobi, Kenya, with the assistance of special micro drones. Meanwhile, Steve Watts (Aaron Paul) and Carrie Gershon (Phoebe Fox) form the core team of U.S. Air Force drone operators at Creech Air Force Base in Nevada as they control the “eye in the sky” — an MQ-9 Reaper drone that represents one of the workhorses of the U.S. military and intelligence community.
Those who don’t mind learning some more plot details can check out the film trailer below. For the rest of this review, I’ll go into some of the basic plot details, so heads up on spoilers.
Computers and robots play a crucial role alongside their human counterparts. The Reaper drone provides the main overhead surveillance and also carries Hellfire missiles to potentially launch a lethal strike on the ground. Tiny micro drones inspired by real-life military prototypes provide an even sneakier form of robotic spying closer to the ground and inside buildings. Facial recognition software enables a U.S. military analyst at a base in Pearl Harbor, Hawaii, to quickly run identification checks on potential human targets spotted by the drones.
But this is still far from a Terminator future of robots automatically targeting and killing on their own. The decision on whether or not the Reaper drone should fire its Hellfire missiles rests upon a chain of human decisions. That chain stretches from the highest branches of the U.S. and U.K. governments down to Watts, the U.S. Air Force officer who ultimately has control over pulling the trigger.
The tension quickly mounts as the question of “collateral damage” involving possible innocent civilian death or injury enters the picture. On the military side, Powell and Benson urge their civilian counterparts to allow them to give the order for a strike that could prevent a suicide bombing and also kill several individuals on the most wanted list for terrorism. On the civilian side, bureaucrats and politicians worry about the legality of their actions and the possibility of losing the propaganda war if the drone strike ends up killing an innocent. Many refuse to make the hard choice and keep referring the decision up to their superiors.
All the while, video camera footage from the drones provides a close-up view of both the suspected terrorists’ activities and the unsuspecting innocent as she goes about her daily life. The view of the terrorists’ activities heightens the tension as their preparations for the suicide bombing move steadily forward. On the other hand, the intimate view of the innocent’s life makes the decision-making even more agonizing for everyone involved — and especially for Watts, the man with the finger on the trigger that launches the Hellfire missiles.
The complex decision-making process involving humans and machines enters a crucial stage as military analysts use software to calculate the possibility of death or injury to the innocent. The software provides a collateral damage estimate that shows the area of likely death or injury radiating outward from the strike zone. What percentage risk to the innocent is acceptable? It’s far from a precise or scientific estimate of death or injury, but the civilian leaders ultimately responsible for the decision grasp at the risk percentages like sailors drowning in a sea of uncertainty.
I won’t spoil the ending of the film. But I will leave off with a thought about the possible future of drone warfare and automated robots making life-or-death decisions on their own. Such automated killing machines will ultimately have to rely upon software similar to today’s algorithms that calculate facial recognition probabilities and collateral damage estimates. Where should the line be drawn for the drones or robots to automatically strike or hold their fire? And who holds the ultimate responsibility in that case?
Update: Given that this is a Hollywood thriller focused on the ethical issues, the film does not address the broader issue of U.S. military drone operators reportedly being overworked to the point of fatigue and low morale. That problem has contributed to the difficulty of finding enough replacement drone operators to meet the growing demand for military drone operations. (You can read more about the issue here: “Drone War Pushes Pilots to Breaking Point.”)