We are at a cusp point in medical generations. The doctors of former generations lament what medicine has become. If they could start over, the surveys tell us, they wouldn’t choose the profession today. They recall a simpler past without insurance-company hassles, government regulations, malpractice litigation, not to mention nurses and doctors bearing tattoos and talking of wanting “balance” in their lives. These are not the cause of their unease, however. They are symptoms of a deeper condition—which is the reality that medicine’s complexity has exceeded our individual capabilities as doctors.
Gawande has two main arguments. First, that when doctors use checklists they prevent errors and quality of care goes way up. Second, that doctors need to stop acting like autonomous problem solvers and see themselves as members of a tight-knit team. Gawande is one of the few sane voices in the health care debate. However, later on in his speech, he says that the solution to the health care conundrum is not technology. To a large degree, I agree with him. But not completely. Tech still has a big role to play. If we take a closer look at Dune and Star Trek, we’ll see why Qualcomm and the X-Prize Foundation are ponying up 10 million bucks to fund a piece of medical technology that could help bring Gawande’s dream of team-based medicine a bit closer to reality.
Update 8/8/11: The conversation continues in Part III here.
I’m back after a hiatus of a few weeks to catch up on some stuff in the lab and the waning weeks of spring quarter teaching here at Northwestern. In my last post, I put forward an idea about why consciousness – defined in a narrow way as “contemplation of plans” (after Bridgeman) – evolved, and used this idea to suggest some ways we might improve our consciousness in the future through augmentation technology.
Here’s a quick review: Back in our watery days as fish (roughly 350 million years ago), we were in an environment that was not friendly to sensing things far away. This is because of a hard fact about light in water: our ability to see things at a distance is drastically compromised by attenuation and scattering. A useful figure of merit is “attenuation length,” which for visible light is tens of meters in water, while in air it is tens of kilometers. And that is in perfectly clear water – add a bit of algae or other microorganisms and it drops dramatically. Roughly speaking, vision in water is similar to driving a car in a fog. Since you’re not seeing very far out, the idea I’ve proposed goes, there is less of an advantage to planning over the space you can sense. On land, you can see a lot further out. Now, if a chance set of mutations gives you the ability to contemplate more than one possible future path through the space ahead, that mutation is more likely to be selected for.
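To get a feel for what those attenuation lengths mean, exponential attenuation (the Beer–Lambert law) says the fraction of light surviving a path of length d through a medium with attenuation length L is e^(-d/L). Here is a minimal sketch using illustrative order-of-magnitude lengths (10 m for clear water, 10 km for clear air), which are assumptions for the sake of the example rather than measured values:

```python
import math

def surviving_fraction(distance_m, attenuation_length_m):
    """Beer-Lambert law: fraction of light that survives after
    traveling `distance_m` through a medium whose attenuation
    length is `attenuation_length_m`."""
    return math.exp(-distance_m / attenuation_length_m)

# Illustrative attenuation lengths (order-of-magnitude assumptions):
WATER_M = 10.0       # clear water: tens of meters
AIR_M = 10_000.0     # clear air: tens of kilometers

# At 100 m, almost no light survives in water, while air is
# essentially transparent at that range.
for d in (10, 100, 1000):
    print(f"{d:>5} m: water {surviving_fraction(d, WATER_M):.1e}, "
          f"air {surviving_fraction(d, AIR_M):.3f}")
```

The point of the exercise is the asymmetry: at the distances where a land animal plans a path, the water column has already absorbed effectively everything.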
Over at Cosmic Variance, Sean Carroll wrote a great summary of my post. Between my original post and his, many insightful questions and problems were raised by thoughtful readers.
In the interest of both responding to your comments and encouraging more insightful feedback, I’ll have a couple of further posts on this idea that will explore some of the recurring themes that have cropped up in the comments.
Today, since many commenters raised doubts about my claim that vision on land was key – pointing to the long-distance sensory capabilities of smell and hearing, among other things – I thought I’d start with a review of why, among biological senses, only vision (and, to a more limited degree, echolocation) is capable of giving access to the detail necessary for planning over multiple future paths. Are the other types of sensing that you’ve raised as important as sight?
Zombie stories are often about the utter failure of the government to deal with a big problem and, thanks to George Romero, also a great way to expose issues of class and social status. No one really believes zombies might attack one day. They are a metaphor, like vampires or werewolves, for the horrifying and uncanny aspects of the human. They also remind you that, when things really hit the fan, you’re on your own. So be prepared! The Centers for Disease Control and Prevention does not want you to be caught unawares. In a post that walks the line between “ha ha, this would never happen” and “but seriously, just in case, you never know,” Ali S. Khan details the worthy forms of emergency response to hordes of the necrotic, brain-seeking undead:
I’m wary of the idea of meeting at the mailbox. Though I’m no expert, I have a strong suspicion that the mailbox is insufficiently fortified against the shuffling corpses invading the neighborhood. But hey, I’m not at the CDC, so I’m going to trust Khan on this one. Maybe he keeps a shotgun (or cricket bat? Lobo?) in his mailbox. I just don’t know.
What I do know is I need to get an emergency kit like the one on the right. Because a zombie horde is nonsense. But the Singularity might trigger a new stone age, and I won’t be able to dash off to Wal-Mart for supplies. Should I be embarrassed that a small part of me hopes/expects some sort of epic disaster for the selfish reason that modern life doesn’t let me use a flashlight or flint in my day-to-day routine? I mean, I just don’t have enough reasons in my life to use a kerosene lantern.
Maybe that’s how I can write off my next camping trip: research for the zombie apocalypse.
For more on zombies, check out my series, the Ethics of the Undead.
Image of zombies kindly broadcasting their presence via Wikipedia
Hooray! I now have a Master of Arts degree from New York University. I even got to wear a bright purple robe with strange sleeves, was hooded, and topped it all off with a mortarboard that barely fit on my head.
My degree is from the John W. Draper Interdisciplinary Master’s Program in Humanities and Social Thought, which means I cobbled together a few disparate fields into my own academic Voltron of study. Critical theory, gender studies, and bioethics comprised the triumvirate of nerdiness out of which I forged my thesis, “Human Enhancement and Our Moral Responsibility to Future Generations.” My advisor was a tremendous resource, educator, and inspiration. Thanks, Greg!
Oh, and I competed in the northern hemisphere’s first ever Threesis competition. The goal: summarize your thesis in three minutes to a lay audience with nothing but a single static keynote slide for visual backup. Not easy, but quite fun.
I had the support of friends and family (my parents and partner in particular) throughout the process. They stood by me while I was pulling all-nighters, living in the library, and deliriously rambling on about Derek Parfit, Jürgen Habermas, and Julian Savulescu.
In true science nerd fashion, I spent the day with the family at the Museum of Natural History looking at the brain (there was an Ethics of Enhancement section!), giant dinosaurs, the stars, and butterflies. A fitting celebration!
I love Pixar. Who doesn’t? The stories are magnificently crafted, the characters are rich, hilarious, and unique, and the images are lovingly rendered. Without fail, John Ratzenberger’s iconic voice makes a cameo in some boisterous character. Even if you haven’t seen every film they’ve made (I refuse to watch Cars or its preposterous sequel), there is a consistency and quality to Pixar’s productions that is hard to deny.
Popular culture is often dismissed as empty “popcorn” fare. Animated films find themselves doubly dismissed as “for the kids” and therefore nothing to take too seriously. Pixar has shattered those expectations by producing commercially successful cinematic art about the fishes in our fish tanks and the bugs in our backyards. Pixar films contain a complex, nuanced philosophical and political essence that, when viewed across the company’s complete corpus, begins to emerge with some clarity.
Buried within that constant and complex goodness is a hidden message.
Now, this is not your standard “Disney movies hide double-entendres and sex imagery in every film” hidden message. “So,” you ask, incredulous, “What could one of the most beloved and respected teams of filmmakers in our generation possibly be hiding from us?” Before you dismiss my claim, consider what is at stake. Hundreds of millions of people have watched Pixar films. Many of those watchers are children who are forming their understanding of the world. The way in which an entire generation sees life and reality is being shaped, in part, by Pixar.
What if I told you they were preparing us for the future? What if I told you Pixar’s films will affect how we define the rights of millions, perhaps billions, in the coming century? Only by analyzing the collection as a whole can we see the subliminal concept being drilled into our collective mind. I have uncovered the skeleton key that deciphers the hidden message contained within the Pixar canon. Let’s unlock it.
If you haven’t seen it yet, Thor is a ridiculous and entertaining superhero spectacle. All the leads did a great job, particularly Hopkins as Odin. If you can take a man seriously when he’s standing on a rainbow bridge wearing a gold-plated eyepatch, he’s doing something right. Kenneth Branagh’s interpretation of Asgard was visually overwhelming, but weirdly believable.
The reason? Branagh leans heavily on the magi-tech rule of Arthur C. Clarke, which Natalie Portman’s character quotes in the film, “Any sufficiently advanced technology is indistinguishable from magic.” So what is the difference between really-really advanced technology and actual magic? Sean Carroll, who did some science advising for the film, clears the idea up a bit:
Kevin Feige, president of production at Marvel Studios, is a huge proponent of having the world of these films ultimately “make sense.” It’s not our world, obviously, but there needs to be a set of “natural laws” that keeps things in order — not just for Iron Man and Thor, but all the way up to Doctor Strange, the Sorcerer Supreme who will get his own movie before too long.
In short, the Marvel universe is internally consistent, which makes me all the more excited for the Avengers film. Clarke’s rule of magical tech helps create some of that consistency. I both love and loathe Clarke for that statement. Love because it strikes at the heart of what technology is: a way for humans to do things previously believed not just implausible, but impossible. Loathe because it creates an infinite caveat for lazy authors and screenwriters. It seems like anytime some preposterous technology is injected into a narrative, either as a MacGuffin or a deus ex machina, that damn quotation from Clarke gets trotted out as the defense. So does Thor live up to Carroll’s hopes or abuse Clarke’s rule?
Imagine you know everything on Wikipedia, in the Oxford English Dictionary, and the contents of every book in digital form. When someone asks you what you did twenty years ago, on demand you recall with perfect accuracy every sensation and thought from that moment. Sifting and parsing all of this information is effortless and unconscious. Any fact, instant of time, skill, technique, or data point that you’ve experienced or can access on the internet is in your mind.
Cybernetic brains might make that possible. As computing power and storage continue to plod along their 18-month doubling cycle, there is no reason to believe we won’t at least have cybernetic sub-brains within the coming century. We already offload a tremendous amount of information and communication to our computers and smartphones. Why not make the process more integrated? Of course, what I’m engaging in right now is rampant speculation. But a neuro-computer interface is a possibility. More than that: cyber-brains may be necessary.
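For a sense of what an 18-month doubling cycle compounds to, here is a back-of-the-envelope sketch (pure arithmetic, not a prediction):

```python
def growth_factor(years, doubling_period_years=1.5):
    """Factor by which capacity grows after `years` of doubling
    every `doubling_period_years` years (18 months = 1.5 years)."""
    return 2 ** (years / doubling_period_years)

# A decade yields roughly a 100x increase; a century is about
# 66.7 doublings, a factor on the order of 10**20.
for years in (10, 50, 100):
    print(f"{years:>3} years -> x{growth_factor(years):.2e}")
```

Even if the real curve flattens well before a century is out, the arithmetic shows why "cybernetic sub-brains" within a hundred years is not an outlandish extrapolation from current trends.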