The Real Problem with Driverless Cars

By Veronique Greenwood | April 28, 2012 8:08 am

When Nevada made driverless cars legal in the state last year, we armchair futurists sat up a little straighter. All of a sudden a number of meandering philosophical questions about how our society would have to change to embrace such technology seemed quite a bit more urgent. This question seemed especially pressing: Driverless cars are safer than those piloted by humans, but how would we feel about deaths caused by machines rather than people?

In our post on the topic we considered the ethics of the situation, but we think this recent short piece from Popular Science nails the liability angle: the real question, as far as car manufacturers are concerned, is not whether the cars are fundamentally safer, but who should take legal responsibility for the accidents:

When a company sells a car that truly drives itself, the responsibility will fall on its maker. “It’s accepted in our world that there will be a shift,” says Bryant Walker Smith, a legal fellow at Stanford University’s law school and engineering school who studies autonomous-vehicle law. “If there’s not a driver, there can’t be driver negligence. The result is a greater share of liability moving to manufacturers.”

The liability issues will make the adoption of the technology difficult, perhaps even impossible. In the 1970s, auto manufacturers hesitated over implementing airbags because of the threat of lawsuits in cases where someone might be injured in spite of the new technology. Over the years, airbags have been endlessly refined. They now account for a variety of passenger sizes and weights and come with detailed warnings about their dangers and limitations. Taking responsibility for every aspect of a moving vehicle, however—from what it sees to what it does—is far more complicated. It could be too much liability for any company to take on.

But if the benefit to public health is great enough, the federal government can sometimes get involved to grease the skids. It’ll be interesting to see what happens in the case of driverless cars.

Read more at Popular Science.

  • DS4119268002

    Drivers of standard cars pay several hundred dollars a year for insurance but with a driverless car, that cost would be removed. That difference could be used to make driverless cars economical, either by letting carmakers charge the users a usage liability fee (say 60% of a typical insurance rate) or by raising the price of a car and giving special, no interest financing for the liability premium.
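The arithmetic behind this proposal is easy to sketch. A minimal illustration in Python, where the 60% rate comes from the comment but the $1,000 annual premium is a purely hypothetical number invented for the example:

```python
# Sketch of the commenter's proposal: the carmaker charges a usage
# liability fee instead of the owner buying driver insurance.

def liability_fee(typical_premium: float, rate: float = 0.60) -> float:
    """Annual fee the carmaker might charge, as a fraction of a
    typical human-driver insurance premium."""
    return typical_premium * rate

def owner_savings(typical_premium: float, rate: float = 0.60) -> float:
    """What the owner keeps relative to insuring a standard car."""
    return typical_premium - liability_fee(typical_premium, rate)

# Illustrative numbers only: a $1,000/year premium at the 60% rate.
fee = liability_fee(1000.0)      # 600.0
savings = owner_savings(1000.0)  # 400.0
print(f"fee: ${fee:.0f}/yr, owner saves ${savings:.0f}/yr")
```

The same margin could equally be folded into the sticker price with special financing, as the comment suggests; the split between fee and financing is a pricing decision, not an actuarial one.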

  • scribbler

    There may not be a driver, per se, but there will be an operator/owner. I see this as no different than owning and operating a gun. Even if someone isn’t behind the wheel, someone is responsible for the car being on the road and where it is going. They are the ones who “aim” the dangerous machine, so they would be the ones legally liable, in my opinion…

  • Dave

    The ones who “aim” the machine will be the ones held legally liable. In the case of the driverless automobile, the computer does the driving and the occupants are only passengers. Think of the instance where the owner of a bus company is a passenger on one of his buses: accidents due to driver error are the fault of the driver; accidents due to foreseeable driver error will be shared between the driver and the owner; accidents due to maintenance failures will be the owner’s fault; and accidents due to the design or manufacture of the bus will be the fault of its manufacturer. Driverless vehicles will remove the possibility of driver liability. I would expect manufacturers of driverless vehicles to quickly add vehicle insurance operations to their conglomerates.

  • IronGolem

    Scribbler, I do not think there is truly an operator during the driving phase. As far as I can tell, the destination is programmed in while the car is still in your driveway or garage, and you simply tell it to go. From that point on until you reach your destination, you’re just a passenger like everyone else in the car.

    Now, if there is an accident involving a driverless car, it needs to be determined who is at fault. Under normal circumstances, with the assumption that the driverless car is programmed not to violate any laws, fault clearly goes to the other party. If the car does make a mistake, fault obviously belongs to the manufacturer, or whoever is responsible for creating software updates for the car.

    Now, for the other half of the coin, I’m also going to assume that if drivers of normal cars can make mid-trip changes in plans (emergency restroom stop, detour to pick up a friend, etc.), the operator of a driverless car can request the car do so. Only in these circumstances will the passenger behind the wheel take on the role of “operator” again. Regardless, it still lies in the hands of the programmers and manufacturers to make sure the car handles these requests safely without violating any laws.

    The only way I can see the operator/passenger behind the wheel being at fault is if they tamper with the self-driving software, which should obviously (though it should still be clearly stated to the customer) void any warranty and remove all liability from the manufacturers/programmers.

  • VIP

    I simply do not understand how a computer-driven car can compensate for the stupidity of most drivers who cause accidents. Unless it stops all the time, which is hardly a way to move in traffic.

  • Rafael

    I must disagree with scribbler. The gun will only work where and when its operator “decides” it should, performing a very specific function: killing. The operator has domain over the fact; he controls whether someone is shot or not, killed or not. The car, by contrast, is set in motion by its operator for the purpose of transporting him from point A to point B; once it is moving, the operator has no control over its performance, which is entirely in the hands of the machine. Therefore, the operator has no influence over the events and decisions that lead directly to a fatal accident; he has no control over the final fact, which is the assumption behind authorship according to Welzel. The car company, on the other hand, through an action that violates the necessary degree of care, would incur blameworthiness and, therefore, culpability; for example, in a collision between two driverless cars. It would not be held accountable, however, if the accident was mainly due to the fault of another (human) driver.

  • Iain

    And then of course there is the threat of hackers. Wouldn’t they have a ton of fun?

    Also, if one doesn’t get the vehicle serviced as often as the manufacturer suggests, who is then at fault?

  • PeterC

    No nut behind the wheel
    It’s driven by electrons
    Buzzing around their chips
    Trying to get a move on

    Water water everywhere
    Including in the processors
    In a bend the camera erre’d
    we join our predecessors

  • Ian P

    scribbler, the problem with the gun analogy is that the gun kills when it is working as intended. The car kills when it is not working as intended.

    The only precedent I can think of is owning a dog. The owner assumes the dog will not harm anyone when they take it out for a walk, and takes precautions to ensure that, but if it does bite someone, it is the owner’s responsibility, not that of the person who bred/raised the dog.

    If we allow the driver to take control of the vehicle, then the responsibility seems to be with the owner, who can make sure that the vehicle is being operated correctly. However, then this negates the benefit that a person in a driverless car could do other things along the journey, and without that benefit, I don’t see the point.

  • Cody

    Of course if the car isn’t at fault—or even optimally prepared for an imminent collision—it could have the entire situation recorded in great detail, which would allow both the increased safety benefits and the removal of liability from the manufacturer, assuming the manufacturer can be that confident in its technology (though I think it’s extremely likely it will be, just as with airbags).

    You can even see the early stages in the current parallel-parking, braking, and sophisticated cruise control systems already sold. I doubt Toyota & others would be selling automatically-parallel-parking cars if they thought that even 1% of the time it could result in a minor collision, since the total liability would be enormous.

    And as for scheduled maintenance, the manufacturer could force the car to require maintenance, or even have the car drive itself to maintenance on its own. Or the state could require some annual test, as it does with other inspection requirements.

    Hacking remains a very important question, though probably highly relevant to many of today’s vehicles as well…

    Also, I think capitalism will inevitably lead not only to automated cars, but eventually to the automation of virtually all human jobs, which is a transition we should be figuring out how to manage.

  • Yacko

    Driverless cars will not jackrabbit-accelerate. Driverless cars will not stop hard unless there is an emergency. Driverless cars will not go over the speed limit and might undercut that limit by 5-10%. Assuming the logic works, driverless cars will react to traffic faster than humans. Driverless cars will probably have fewer accidents, and the accidents that do occur will be far less ghastly than those caused by human drivers. Accidents may become so much rarer and less severe that crash-survivability regulations become less necessary. Cars may no longer look like cars. Perhaps the undercarriage will come in one or several standardized models, and the customer’s choice will be the body bolted onto it.

  • Yacko

    “the manufacturer could force the car to require maintenance, or even have the car drive itself to maintenance on its own.”

    Or the government could force it. All either entity has to do is send the car a text message and it will schedule service itself or be undriveable. The logic is out of your hands.

  • Alan Andrew

    A combo of insurance and owner responsibility (the latter to ensure the vehicle is maintained as per the manufacturer’s manual). If 90% of car accidents have a human cause (and might all accidents be human-caused?), then “automated driving” might be a great solution. In the end, I’d think the insurers would be far better off, owners would get a great break on insuring their driverless vehicles, police and paramedics would have a greatly reduced workload, income taxes would fall with the reduced need for those professionals, auto wreckers would die off and their ugly yards of smashed vehicles would largely disappear, road rage would disappear (even the lesser “street stress”, my bon mot), and little kids might then be able to sit in the front!
    What’s not to like here?

  • Victor

    The driverless car will not succeed because there is no way to inform it that it’s driving the wrong way on a one way street or about to back into another car or failing to stop for a stopped school bus or about to slam into a careless biker making an illegal left turn. Etc. etc. etc. What an incredibly DUMB idea!

  • Aidan

    No, Victor. You are a dumb idea. There are so many ways around those problems, and they’ve already been implemented. One test car has already driven a couple thousand clicks without any trouble. You know nothing of robotics or engineering or computing. Don’t go hating on something when you know nothing about it…

  • scribbler

    I’d like for once to read the comments here WITHOUT someone calling someone else stupid! If your ego is so fragile that you need to prop it up by putting others down, please stop calling yourself a scientist… Sheesh!

  • Woody Tanaka

    “Also, I think capitalism will inevitably lead not only to automated cars, but eventually to the automation of virtually all human jobs”

    Then the only question is who do we smash first, the capitalist, the computer programmer or both…

  • Victor

    If the driverless car is simply a remote controlled device, then it isn’t driverless. And if it is truly robotic, i.e., totally on its own, then I’m sorry, Aidan, but that’s an insane idea. You can test it till you’re blue in the face but you’ll never convince me to drive anywhere near one. Artificial intelligence is a cute idea, but it’s still a long long way off, for sure. We’re not talking computer chess, but driving under a wide variety of conditions that can never be fully anticipated.

  • Victor

    “Also, I think capitalism will inevitably lead not only to automated cars, but eventually to the automation of virtually all human jobs”

    Which wouldn’t be a problem if the means of production were controlled by the 99% instead of the 1%. Computer programmers aren’t the problem, Woody, capitalism is. For more on this topic, I suggest

  • Cody

    Victor, watch this video:

    Victor, of course there could be a way to inform an automated car about those things, though that would effectively defeat the purpose of automating the car in the first place. So instead they will have (perhaps already have) built the car so that it can reliably identify one-way streets and the locations of all potential hazards, whether parked cars, reckless bikers, school buses, etc., and they must be even more reliable at identifying these things than humans (since we make mistakes). There is no reason to think this can’t be achieved (indeed it may already have been achieved).

    Woody Tanaka, I really do mean inevitable—say some nations fear the consequences and legislate against some areas of technological innovation (as Nevada has done with automated cars), that just means other nations will be left with the enormous technological advantage that inventing such devices will bring. It’s the most basic premise of capitalism: competition leads to innovation. And the innovations to come are potentially even more valuable than anything else in history—there is an enormous incentive to succeed here, on the national, corporate, and even individual level.

    It’d be like trying to stop the robotic welding of car frames: why would car manufacturers bother making cars in a country that required them to make inferior frames at a higher cost?

  • Victor

    Yes, Cody, of course such a vehicle can be designed to read signs, or be programmed via GPS to know in advance about one-way streets, highway construction projects, detours, etc., but all this assumes that we live in a perfect world where nothing goes wrong and every exception to every rule is carefully noted and promptly and accurately entered in the proper database.

    What I was referring to was situations where something went wrong, where the vehicle found itself going the wrong way or doing the wrong thing accidentally, which happens all the time, not only to flesh-and-blood drivers but to computers as well (as we are all too familiar with). If I go the wrong way on a one-way street, I’ll be alerted to that fact by the drivers around me and turn around as soon as possible. If I’m a driverless car, how will I know what those honking horns mean, and how will I be able to recognize that a mistake has been made?

    Or if a bicyclist suddenly appears in front of me, I will know enough to veer out of his way rather than stop dead in my tracks, because the car behind me might not be able to stop in time. And I will know enough not to veer onto the sidewalk if pedestrians are present. Split second decisions of that kind can certainly be programmed into a computer, but the possible variations are too great for anything less than a true artificial intelligence to handle. And believe me we are VERY far from that today or at any time in the foreseeable future.

    It’s very troubling to learn that a company as huge and powerful as Google is behind this thing, because that means it will probably become a reality, a very dangerous and expensive reality, which will hopefully amount only to a temporary disaster once the horrible truth can no longer be ignored. If you really think computers will make better drivers than humans, I suggest you recall the last time your own computer shut itself down unexpectedly or performed some other disastrous or near-disastrous act for no discernible reason.

  • scribbler

    The bullet goes from point A to point B. The driverless car goes from point A to point B. There is no difference in liability of those who point them, is there?

  • scribbler

    Ian, there are more purposes to a gun than to kill. I have enjoyed their use for decades and have killed no human! 😉 Guns are, for this thread, dangerous machines that can and do kill. So are cars. The bullet will only go where it is pointed. So will the car. If the bullet strays, it is the aimer’s liability. If the car strays, it is the aimer’s liability.

    The analogy is valid, then is it not?

  • AsleepAtTheWheel

    Let’s all agree on a third party property type insurance scheme. Everybody contributes to a pool of funds that are used to provide for those unfortunate enough to be involved in an accident with an automated vehicle. Agree on claims limits and liability and suddenly you have an environment where automation can blossom.

    Once the tipping point is reached, where people are so de-skilled they are too afraid to drive manually, we will see rates of road fatalities fall to near zero. Accidents will continue to happen, just as with planes, but I would be far happier with an aviation-like accident percentage than with our current road statistics.

    Generally, people are not skilled or trained enough to do something as horrifyingly complex and dangerous as pilot two tonnes of steel and plastic at 120 km/h, passing within 12 inches of other similar objects. The sooner we give it to the machines the better.

  • Superchicken

    I don’t think the firearm analogy holds, because the only pointing the “aimer” is doing is saying “get me from here to there, and you (car) take care of the details”. The aimer is then the software and computer, for all intents and purposes. What you are saying is that if I go to sleep or am reading a magazine and my car’s software causes an accident due to a software glitch or sensor error, then I, the passenger, should be held responsible. Your version, where I would be responsible, would be like me hailing a taxi and asking the driver to take me to point A (which we will assume is a legal and valid address); then, because I told the driver where to go, if there is an accident, I am to blame. The only way the analogy would hold is if there were a hypothetical self-aiming firearm, and I told it to shoot the intruder who just broke in, and instead the firearm turned around and shot my girlfriend. In that case, I don’t think you’d feel you were responsible for the error, would you?

  • Superchicken

    Victor, I can only think of a few instances when my computer actually did anything I would consider disastrous, and in all cases it was because that’s what I told it to do. Besides, computers already fly planes, run potentially dangerous medical devices, and manage data which, if mixed up, could result in death or injury (think electronic medical records); the list goes on. The only time I ever hear of the software even being mentioned in accidents is when the operator overruled the computer and went against what the software was advising (admittedly sometimes due to unintuitive interfaces). There were some early medical devices (early radiology equipment springs to mind) with software issues, but the industry adjusted with much stricter vetting procedures. Your everyday software isn’t fail-safe; you can be sure any software that operates a vehicle will be.

  • Link

    The liability would be more similar to that of elevators/lifts than of guns. You have the owner, the passengers/drivers, the maintainer and the manufacturer.

    Given a choice, people will instinctively demand a high safety level before they give up control.

    Animals that didn’t mind putting themselves in dangerous situations where they have no control and their genes play a lesser part in their survival prospects probably died out long ago.

    Whereas if the animals were in dangerous scenarios where their genes have greater influence over their survival odds (climbing cliffs etc), the survivors may go on to contribute potentially good traits to the population’s genetic pool.

  • m

    I am with scribbler on this. The liability rests with the owner of the car, not the manufacturer. If you are dumb enough to buy one of these things, you accept all consequences and risks associated with it.

  • Brian Too

    We may have intermediate stages on this. In all the driverless systems I’ve seen, there is still a steering wheel, a driver’s seat, pedals, a gear shift if manual, etc. The automation system is an option, and it can be turned on and off at any time. It’s like an autopilot in a plane: there is still presumed to be a pilot, no matter how good the autopilot.

    As long as this holds true, it may be that the driver will continue to hold responsibility and liability for the automation. After all, they can intervene at any time and rescue the system.

    What if the driver is asleep or distracted? That could be addressed by simply legislating that the driver cannot sleep or be distracted. Although I have to admit, the latter severely limits the appeal of driverless vehicles.

  • Dave G

    My concern is what happens in underground parking garages where there is no GPS/3G signal? Otherwise, brilliant. In a country with 1/10th the population of the US, yet killing the same number of people in cars as the US, South African traffic may as well be driverless, without any fancy technology! Bring it on and get rid of the idiots behind the wheel.

  • Victor

    It amazes me, Superchicken, that you can be so naive. Sure, airplanes, medical devices and data management software are more complex and sophisticated than automobiles. But the act of driving a car in traffic is far more complex than any of the above. There’s plenty of open air for planes to fly in, no other planes for many miles, no red lights, stop signs, pedestrians, bikers, left lanes, right lanes, kids chasing balls, panicked reindeer, etc. The radar on the plane can see for hundreds of miles, with an unobstructed view in every direction. And if the system breaks down, there’s a pilot there to take over. Medical devices are highly specialized to operate in strictly limited, highly predictable conditions. And as for financial models, on which data management software is based, well, look at what happened in 2008!

    We’ve all heard of sudden acceleration problems in cars equipped with computerized driving controls. The designers are in denial, but no one knows for sure what causes such problems. My guess is that these computers crash from time to time. Usually, however, the driver can take control and override the computer, which is why such events are relatively rare. A driverless car would be completely out of control if its computer crashed. Even if there were some relatively “minor” software glitch, that could have serious consequences.

    The sort of optical recognition that would be required to take all possibilities into account, including obliterated or oddly positioned signs, power outages that affect traffic lights, cars that suddenly pull out of their lane, etc., requires a sophistication far beyond what is now possible for any AI system.

    I must say I’m also disturbed by the many posts here expressing concern for the legal ramifications, but completely indifferent to basic safety issues. Brave New World we live in!

  • Cody

    Dave G, it’s only using the GPS to find the route, not to actually drive. Just as a human uses a map but then looks out the windows to steer the car, the automated cars use a laser scanner to build a 3D model of the local environment, updated several times per second with a resolution down to 11 centimeters (nearer to the car), plus four radars (front, back, left, right) for longer-distance detection. The rest is software.
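The obstacle-detection idea Cody describes (laser returns turned into a model of nearby objects) can be illustrated with a toy occupancy grid. This is a deliberately simplified sketch, not Google's actual pipeline; the cell size and hit threshold below are arbitrary choices for the example:

```python
from collections import defaultdict

# Toy occupancy grid: bucket lidar returns ((x, y) points in meters,
# with the car at the origin) into cells, and flag any cell with
# enough hits as an obstacle.
CELL = 0.5     # cell edge in meters (illustrative)
MIN_HITS = 3   # hits needed to call a cell occupied (illustrative)

def occupied_cells(points):
    """Return the set of grid cells that look occupied."""
    hits = defaultdict(int)
    for x, y in points:
        hits[(int(x // CELL), int(y // CELL))] += 1
    return {cell for cell, n in hits.items() if n >= MIN_HITS}

# A tight cluster of returns about 2 m ahead reads as an obstacle;
# a stray single return does not.
scan = [(2.0, 0.1), (2.1, 0.15), (2.05, 0.05), (9.0, 4.0)]
print(occupied_cells(scan))  # {(4, 0)}
```

A real system fuses many such scans per second with radar and camera data, but the core move is the same: turn raw range measurements into a discrete map the planner can reason about.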

    Victor, you’re missing the point: there is no database. These cars use databases for addresses, but they don’t need to “pre-know” anything about the roads’ directions, speed limits, etc.; they learn it all as they go, just like us. And yes, of course they have to be able to handle all the hazards you raise, just like a human driver. Did you watch that video? Did you see how it has no problem identifying cyclists and pedestrians and sidewalks and signals, and even understands what sort of ‘body language’ to express at a multi-stop intersection to negotiate right of way?

    Some types of AI are far off, yes, but why should driving be one of those types? It doesn’t require a lot of the more challenging aspects of higher thought, it operates in a very limited realm, and many of the technical problems have been under intense technological development over the last several years due to the DARPA challenges.

    And fail-safe operation is an obvious given: like a Segway, it’ll have to operate in a safe way even under massive multiple failures. I was kind of thinking even home computers have sort of overcome crashing; I don’t remember the last time I had a crash (though I have a Mac and reboot probably monthly), and my old roommate, who has a PC, ran his computer for many months without rebooting or anything.

    Watch the video: it is making a reasonably high-resolution 3D model of its entire surroundings at all times, and it definitely has a better eye on the car’s relationship to the world within 50 feet than any human can possibly have (360° continuous 3D mapping, highlighting all moving objects, estimating their trajectories, detecting signals and signs, looking for any interference with its planned route). I really think you should check out the video I linked before you argue these points. I am just as concerned with safety as anyone can be, and I’m convinced it won’t be more than 10 years, maybe only 5, before someone is selling automated driving systems somewhere.
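The "estimating their trajectories" step can be illustrated with the simplest possible motion model: constant-velocity extrapolation from two observations. Real trackers are far more sophisticated; this sketch only shows the idea:

```python
def extrapolate(p0, p1, dt, horizon):
    """Given two (x, y) positions observed dt seconds apart, predict
    where the object will be `horizon` seconds after the second
    observation, assuming constant velocity."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * horizon, p1[1] + vy * horizon)

# A cyclist moved 1 m forward in 0.5 s (2 m/s); one second later
# it should be 2 m further along.
print(extrapolate((0.0, 0.0), (1.0, 0.0), dt=0.5, horizon=1.0))  # (3.0, 0.0)
```

The planner then checks each predicted position against the car's own intended path and slows or steers away if the two would intersect.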

    I honestly think they can be made so reliable and sophisticated that when they do inevitably get involved in accidents they’ll have detailed 3d models of the local environment leading up to the accidents that will both vindicate them and produce greater trust in them.

    Here is the video again:

    It’s a fourteen-and-a-half-minute presentation about some of the technology behind the car, its functioning, and the challenges they’ve faced thus far. I’m not saying they’re ready, but I find it hard to think they won’t be within a few to several years.

    Also, stop thinking in terms of a program and supporting hardware rigidly programmed to do one thing, and think instead of a system that can record a comparable level of visual detail, a 3D scanner providing a 3D model with matching detail, and well-developed software for facial recognition applied to various categories of obstacles. You know facial recognition software is now better than humans? And HD cameras fit in cell phones?

    The potential for automated cars to reduce accidents is really mind blowing. And that is why it will ultimately take over.

  • Victor

    Thanks for your thorough response, Cody. I certainly won’t deny that what’s been achieved so far is a truly awesome technological advance, based on a great many dramatic advances in technology of many kinds, including AI. I watched the video and was definitely impressed. But at the same time I was aware of the stars in the eyes of the presenters — clearly, they were delivering a sales pitch, completely ignoring the many problems yet to be resolved.

    “Did you see how it has no problem identifying cyclists and pedestrians and side walks and signals and even understands what sort of ‘body language’ to express at a multi-stop intersection to negotiate right of way?”

    Hey, what you saw was a sales pitch. I have no doubt such systems are impressive, but what we see in that video is how these cars behave under ideal conditions. I’d like to see a demo of how they respond when something goes wrong with any or all of their many subsystems. Automated machinery can simply shut itself down when something goes wrong. An automobile traveling at highway speeds in heavy traffic cannot.

    It’s possible that at some future date this technology might be sufficiently fail-safe for general use, but the presentation I saw was clearly based on how it would operate under ideal conditions, which tells me the promoters haven’t really thought through all the many possible situations under which drastic failure, with serious consequences, could occur. Nevertheless, they are currently testing these vehicles under actual highway and city conditions, which as I see it is a disaster waiting to happen.

    I can see this technology as useful for the military, under combat conditions, as a control system for a drone tank, for example. And I can see it as possibly useful in strictly controlled settings, as for example in special highways set aside exclusively for such vehicles. Also I can see it piloting airplanes, where the environment will always be extremely predictable.

    But I can’t see this as the future of land-based transportation. Even if it worked as advertised, it would be a huge waste of resources compared with mass transit.

  • Cody

    Victor, I must admit I am somewhat blind to the sales pitch aspect, and agree that correcting for it dramatically reduces expectations—maybe I have the same starry-eyed blinders that they do.

    I don’t think their current testing has too great a potential for disaster, after all there is a driver sitting behind the wheel waiting to take control if the car shows any sign of danger. I also imagine this as a gradual process, starting with increasingly sophisticated cruise control—i.e. I wouldn’t expect physical steering wheels to disappear until several years of confidence had been established with cruise control systems so smart and reliable that they demonstrated a clear ability to dramatically reduce the number of accidents.

    And I think it’ll show up first as either farm or military equipment, then find its way into the trucking industry and luxury-car cruise control systems, then trickle down into the more general consumer market.

    I agree it would be less efficient than mass transit, but I don’t think mass transit can really compete with cars in large portions of America due to our lower population density. It also ought to be more efficient than normal cars, since the computer can optimize acceleration & braking tasks more accurately than humans.

    There are some people who think automated cars could also become a form of public transport, with automated cabs you could call, which would reduce the need for parking, which turns out to be an enormous waste of resources. And then there is the potential to automate public transportation itself, like buses, since they too suffer occasionally from human failings. A few years ago a train operator in Boston collided with another train because he was texting; the radar-based cruise control systems of many luxury cars could have easily prevented that, which is why I think we’ll see our society do this sort of thing.

    And with the size and power of computers, why not just have multiple redundant systems? The Space Shuttle had four redundant computers all checking one another, with a fifth backup holding emergency landing procedures on top of that; the shrinking size, power draw, and cost of computing is making that trivial. You could even double up the radars and power everything separately, though none of this is really that important if you still have someone sitting behind the wheel who will be responsible for handling the car in the event of a catastrophic failure, the way we treat current cruise control systems.
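The redundancy idea here is a classic fault-tolerance pattern: run the same computation on independent units and accept the majority answer. A minimal sketch (the three-unit vote below is illustrative; the Shuttle's actual voting logic was more elaborate):

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value agreed on by a strict majority of redundant
    units; raise if there is no majority (the signal to hand control
    to a fallback, e.g. the human behind the wheel)."""
    value, count = Counter(outputs).most_common(1)[0]
    if count * 2 <= len(outputs):
        raise RuntimeError("no majority -- hand control to fallback")
    return value

# Three redundant controllers compute a steering angle; one is faulty.
print(majority_vote([12.5, 12.5, 90.0]))  # 12.5
```

The point is that a single failed unit is outvoted rather than obeyed, and total disagreement is itself detectable, so the system can degrade to a safe fallback instead of acting on bad data.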

    Your skepticism is good, but I think we still probably disagree on a lot here, though I’m sure some of that is due to my naivety. Thank you for engaging in this discussion with me, and civilly no less; I hope I’ve been equally courteous.

  • Atarivandio

    Most of you are wrong in the assumption that the manufacturer will take the bulk of the responsibility. In many of these videos the Google employees stress that it is a driving aid, like super cruise control. It is an abuse of policy to rely on it at all times, making it the user’s fault if there is a crash. The only video I have seen that does not include that statement is the one featuring the blind fellow, but most of these laws require at least a driver/passenger. That video in particular took pains to stress ‘in cooperation with local police’ and some organization for the blind. Only years from now, when it is fully dependable, will manufacturers drop that statement, but by then the development will be so far along that users can rely on its driving alone. Google is pretty law-savvy, plus it’s way too big a boost for the eco people, as well as campaign points for policy makers, to pass this one up. Isn’t it cool how living Star Trek is better than watching it? Thank God I’m only 22; 60 more years of this will be awesome. Imagine highway speed limits of 150 in your electric car. This is exactly like Minority Report, plus the tech can extend itself to trucks and trains. This will greatly reduce the overall costs associated with the shipping industry and bring the prices at Wal-Mart down significantly. Literally nothing will be able to escape the far-reaching hand of benefit this tech can provide.



80beats is DISCOVER's news aggregator, weaving together the choicest tidbits from the best articles covering the day's most compelling topics.
