Singularity Summit 2012: the lion doesn’t sleep tonight

By Razib Khan | October 15, 2012 10:07 pm

Last weekend I was at the Singularity Summit for a few days. There were interesting speakers, but the reality is that quite often a talk given at a conference has been given elsewhere, and there isn’t going to be much “value-add” in the Q & A, which is often limited and constrained. No, the point of the conference is to meet interesting people, and there were some conference goers who didn’t go to any talks at all, but simply milled around the lobby, talking to whoever they chanced upon.

I spent a lot of the conference talking about genomics, and answering questions about genomics, if I thought I could give a precise, accurate, and competent answer (e.g., I dodged any microbiome-related questions because I don’t know much about that). Perhaps more curiously, in the course of talking about personal genomics, issues relating to my daughter’s genotype came to the fore, and I would ask if my interlocutor had seen “the lion.” By the end of the conference a substantial proportion of the attendees had seen the lion.

This included a polite Estonian physicist. I spent about 20 minutes talking to him and his wife about personal genomics (since he was a physicist he grokked abstract and complex explanations rather quickly), and eventually I had to show him the lion. But during the course of the whole conference he was the only one who had a counter-response: he pulled up a photo of his 5 children! Touché! Only as I was leaving did I realize that I’d been talking the ear off of Jaan Tallinn, the lead developer of Skype. For much of the conference Tallinn stood like an impassive Nordic sentinel, engaging in discussions with half a dozen individuals in a circle (often his wife was at his side, though she often engaged people by herself). Some extremely successful and wealthy people manifest a certain reticence, rightly suspicious that others may attempt to cultivate them for personal advantage. Tallinn seems to be immune to this syndrome. His manner and affect resemble those of a graduate student. He was there to learn and listen, and he was exceedingly patient even with the sort of monomaniacal personality which dominated conference attendees (I plead guilty!).

At the conference I had a press pass, but generally I just introduced myself by name. Because of the demographic, though, I knew that many people would know me from this weblog, and that was the case (multiple times I’d talk to someone for 5 minutes, and they’d finally ask if I had a blog, nervous that they’d gone false positive). An interesting encounter was with a 22-year-old young man who explained that he had stumbled onto my weblog while searching for content on the singularity. This surprised me, because this is primarily a weblog devoted to genetics, and my curiosity about futurism and technological change is marginal. Nevertheless, it did make me reconsider the relative paucity of information on the singularity out there on the web (or, perhaps websites discussing the singularity don’t have a high PageRank, I don’t know).

I also had an interesting interaction with an individual who was at his first conference. A few times he spoke of “Ray,” and expressed disappointment that Ray Kurzweil had not heard of Bitcoin, which was part of his business. Though I didn’t say it explicitly, I had to break it to this individual that Ray Kurzweil is not god. In fact, I told him to watch for the exits when Kurzweil’s time to talk came up. He would notice that many Summit volunteers and other V.I.P. types would head for the lobby. And that’s exactly what happened.

There are two classes of reasons why this occurs. First, Kurzweil gives the same talks many times, and people don’t want to waste their time listening to him repeat himself. Second, Kurzweil’s ideas are not universally accepted within the community which is most closely associated with the Singularity Institute. In fact, I don’t recall ever meeting a 100-proof Kurzweilian. So why is the singularity so closely associated with Ray Kurzweil in the public mind? Why not Vernor Vinge? Ultimately, it’s because Ray Kurzweil is not just a thinker; he’s a marketer and businessman. Kurzweil’s personal empire is substantial, and he’s a wealthy man from his previous ventures. He doesn’t need the singularity “movement”; he has his own means of propagation and communication. People interested in the concept of the singularity may come in through Kurzweil’s books, articles, and talks, but if they become embedded in the hyper-rational community which has grown out of acceptance of the possibility of the singularity, they’ll come to understand that Kurzweil is no god or Ayn Rand, and that pluralism of opinion and assessment is the norm. I feel rather ridiculous even writing this, because I’ve known people associated with the singularity movement for so many years (e.g., Michael Vassar) that I take all this as a given. But after talking to enough people, including some of the more naive summit attendees, I thought it would be useful to lay it all out there.

As for the talks, many of them, such as Steven Pinker’s, would be familiar to readers of this weblog. Others, perhaps less so. Linda Avey and John Wilbanks gave complementary talks about personalized data and bringing healthcare into the 21st century. To make a long story short, it seems that Avey’s new firm aims to make the quantified self into a retail & wholesale business. Wilbanks made the case for grassroots and open-source data sharing, both genetic and phenotypic. In fact, Avey explicitly suggested her new firm aims to be to phenotypes what her old firm, 23andMe, is to genotypes. I’m a biased audience; obviously I disagree very little with any of the arguments which Avey and Wilbanks deployed (I also appreciated Linda Avey’s emphasis on the fact that you own your own information). But I’m also now more optimistic about the promise of this enterprise after getting a more fleshed-out case. Nevertheless, I see change in this space as a ten-year project. We won’t see much difference in the next few years, I suspect.

The two above talks seem only tangentially related to the singularity in all its cosmic significance. Other talks also exhibited the same distance, such as Pinker’s talk on violence. But let me highlight two individuals who spoke more to the spirit of the Summit at its emotional heart. Laura Deming is a young woman whose passion for research really impressed me, and made me hopeful for the future of the human race. This is the quest for science at its purest. No careerism, no politics, just a straight-up assault on an insurmountable problem. If I had to bet money, I don’t think she’ll succeed. But at least this isn’t a person who is going to expend her talents on making money on Wall Street. I’m hopeful that significant successes will come out of her battles in the course of a war I suspect she’ll lose.

The second talk which grabbed my attention was the aforementioned Jaan Tallinn’s. Jaan’s talk was about the metaphysics of the singularity, and it was presented in a congenial cartoon form. Because he is a physicist, it was larded with some of the basic presuppositions of modern cosmology (e.g., the multiverse), but it also extended the logic in a singularitarian direction. And yet Tallinn ended his talk with a very humanistic message. I don’t even know what to think of some of his propositions, but he certainly has me thinking even now. Sometimes it’s easy to get fixated on your own personal obsessions, and lose track of the cosmic scale.

Which goes back to the whole point of a face-to-face conference. You can ponder grand theories in the pages of a book, but for that to become human you have to meet, talk, engage, eat, and drink. A conference which at its heart is about transcending humanity as we understand it is, interestingly, very much a reflection of ancient human urges to be social and to be part of a broader community.

CATEGORIZED UNDER: Technology
  • http://haibane.info Aziz Poonawalla

    I won’t trot out my tired link about being a singularity skeptic again, but I would appreciate a post from you about your understanding and precise meaning of the term (in a Razibian sense, not Kurzweilian or Vingian).

  • https://plus.google.com/109962494182694679780/posts Razib Khan

    #1, if A.I. emerges, i think it will become strong A.I. almost inevitably. if strong A.I. emerges, i think then the evolution of intelligence will enter into an explosive phase, and the gap between human intelligence and strong A.I. is going to get enormous very fast (probably limited by the constraints upon manufacturing engineering by physics). i have uncertainty about the emergence of A.I. i also don’t know how long it will take A.I. to become ‘strong’ enough that it can start entering into efficient feedback loops of self-directed evolution. but once that phase begins, all uncertainty disappears. something like ‘the singularity’ is going to happen.

  • Sandgroper

    That has to be the cutest baby pic of all time.

  • jb

    #1 This is just one random person’s opinion, but it seems obvious to me that strong A.I., with vastly greater intelligence than humans, is possible, in the sense that the laws of physics allow it. So it could happen. Here are a couple of my own thoughts about the possibility:

    1) I don’t think we are currently anywhere near making it happen, and I have concerns about the lifespan of human technological civilization (peak resources, etc.), so if creating strong A.I. is going to require a millennium of ever more subtle advances we may never get there.

    2) It’s not at all obvious to me that a strong A.I. would also have to be conscious. Who is to say that there isn’t some other route to problem solving capability comparable to humans that doesn’t actually involve awareness (e.g., Watson’s lightning fast database scanning and hypothesis generation and filtering)? I think the ultimate nightmare would be if the human race ended up replacing itself with what appeared to be a race of super beings, but there was actually nobody there!

    3) For me one of the big questions, which nobody talks about much, is what would a race of strong A.I.s do after the Singularity? Humans are not motivated to do anything by rationality; we are motivated by instincts and emotions that evolved over millions of years, and are absolutely necessary for our survival. Without them, you would sit down where you are and starve, and you wouldn’t care. And the thing is, we have no choice about what we care about — we are hard-wired to require food, and love, and to avoid pain, and all sorts of things, and so we are all engaged in a constant quest to acquire the things we need in order to go on.

    But imagine if you could reprogram your own motivations! If you wanted to achieve inner peace, you could simply reprogram yourself, and you would be at peace. You would never have to feel any sort of physical or emotional pain, unless you wanted to. So would you want to want to? Why? What would motivate that choice? The question of motivation for post-Singularity beings strikes me as a real rabbit hole; I can come up with all sorts of interesting science fiction scenarios, but I have no strong sense of what the reality would be.

  • https://plus.google.com/109962494182694679780/posts Razib Khan

    strong A.I. would also have to be conscious.

    i didn’t use the word conscious in my assertions…consciously ;-)

    but there was actually nobody there!

    and yet it is a philosophical question whether anyone is here….

    which nobody talks about much,

    people talk about it in singularity circles a fair amount. jaan tallinn’s talk in fact addressed this as a major theme.

  • jb

    jaan tallinn’s talk in fact addressed this as a major theme.

    Really? That would be quite interesting to hear. I checked out the Singularity Summit web site, and it looks like the talks all go online eventually, so I’ll have to check it out.

    You couldn’t provide a brief synopsis, could you? One of my “science fiction scenarios” is that the initial motivations are supplied by uploaded human consciousnesses, who make different choices. The choices vary a lot, and some choose (perhaps for religious reasons?) to permanently fix their motivations in ways that conflict with the choices others have made, and the result is a Darwinian struggle over resources between super beings who are basically all-knowing, and who can replicate like bacteria if they want to. Did anything like that come up?

  • https://plus.google.com/109962494182694679780/posts Razib Khan

    #6, tallinn points out that if the singularity comes about then large segments of the universe will shift from a state where intelligence is very rare to one composed mostly of the material constituents of intelligent life forms. this will happen very quickly (probably von neumann machine-like dynamics; limited by light speed though).

    the talks themselves are short. but in personal conversation the stuff you’re talking about is addressed all the time. e.g., check some of carl shulman’s work on the realities of resource scarcity in a world saturated with intelligence whose computation is rate limited due to scarcity.

  • April Brown

    I’m especially susceptible to cute baby pictures at the moment, but even if I weren’t, “OMG SQUEE TEH CUTE!”~

  • j mct

    Just to point this out, but ‘programs that write programs’ have existed for a long time; they are called compilers. One can go through the whole thing, but the program that can write a program outside a ‘possible program space’ necessarily predefined by the human programmer who wrote or at least arranged the original program will happen sometime after someone travels backwards in time or comes up with a theory in evolutionary psychology that is not a just-so story. The singularity is not going to happen, unless a computer starts behaving in an ‘emergent’ fashion, i.e. starts doing something not definable as shuffling symbols per a calculus.

    Great pic. My daughter is nineteen, and pics like that are great for blackmail. Don’t lose it!

  • https://plus.google.com/109962494182694679780/posts Razib Khan

    #9, it would be useful if you elaborated your assertions. naked assertions from you are kind of boring.

  • Paul

    @9: A compiler is a rather simple translator of one program into another program with the same behavior. It doesn’t write a program in the sense a programmer does, creating a program from an imprecise specification or even from a set of (always incomplete!) requirements. The latter is a much harder problem, since it requires understanding of the “real world”.

  • https://plus.google.com/109962494182694679780/posts Razib Khan

    #11, don’t get baited into stupid definition wars. #9 understood exactly what i intended to say, and he decided that it was clever to simply elaborate trivial information (readers who don’t know what compilers are will take my meaning at face value, and those who know what compilers are would understand that i didn’t mean compilers obviously). i tend to agree though that it’s misleading to term compilers as programs-that-write-programs.

  • T

    I think that super intelligent computers are a near certainty. However, I don’t see why the emergence of super intelligent computers necessarily implies anything.

  • jb

    BTW, there is a web comic I follow that is set in a world very much like ours, except that strong A.I.s do exist in it. The comic rarely explores this in any technical way, and mostly just plays it for laughs, but occasionally the author shows that he has actually thought about it.

    http://www.questionablecontent.net/view.php?comic=2285

    (Incidentally, their world has already had its singularity).

    http://questionablecontent.net/view.php?comic=1777
    http://questionablecontent.net/view.php?comic=1780

  • J

    Razib,
    Curious to know what you think of Nick Bostrom’s simulation argument?

    (link for the curious: http://www.simulation-argument.com/simulation.html )

  • https://plus.google.com/109962494182694679780/posts Razib Khan

    #15, not crazy. nor do i think multi-verse is crazy. but let me be clear here: MY OPINION ON THIS IS NOT WORTH SHIT. :-) well, it shouldn’t be to anyone aside from me. i don’t know enough about this stuff, nor have i thought about this much. when people ask if i’m interested in the singularity or i think about the singularity, i say “no.” why? because i can’t add much value in terms of comment, i probably know as much as you (i.e., jack shit).

  • dave chamberlin

    Actually your opinion is very valuable because you admit it when you know jack shit. Singularity has a bad name by association with the futurists, who are uniformly full of crap not because of what they predict but because they act so confident of their longshot guesses.

    It is fun and interesting to speculate on when and how “efficient feedback loops of self-directed evolution” might occur. Of course it is science-fiction-like speculation now, but give it time; that dot on the horizon is getting bigger.

  • ackbark

    Is the idea of a singularity necessarily limited to only the appearance of strong AI?

    Is there not an equally inevitable biological singularity, where nearly all biology becomes essentially cosmetic?

  • ackbark

    4. “I think the ultimate nightmare would be if the human race ended up replacing itself with what appeared to be a race of super beings, but there was actually nobody there!”

    That’s what I thought was the ending of Spielberg’s AI – those figures were super intelligent, strong AI, perfect in every way, but there was no one there.

  • John

    Wikipedia seems to indicate that Tallinn was not a Skype founder, just a Skype developer.

  • John Emerson

    It’s a basic Buddhist principle that there’s “nobody there” in people either.

  • Sandgroper

    …and then the nobody gets endlessly reincarnated, until (s)he achieves enlightenment, at which point (s)he becomes nobody.

  • jb

    It’s a basic Buddhist principle that there’s “nobody there” in people either.

    Still, unless you are a solipsist, it’s reasonable to assume that other people are “nobody” in the same way that you yourself are “nobody.” The scary thought about A.I. is that we could end up replacing ourselves with entities which, rather than being “nobody” like a human being, are “nobody” like a rock.

    Now maybe that just fundamentally isn’t possible…, but how would we know?

  • ackbark

    So, in Buddhism enlightenment is not worrying about it?
