Some otherwise very well-informed comrades of mine invited me to an AI-related group chat where they discuss all matters LLM, but one thing that really dominates the discussion is doomerism about how we’re getting Skynet any second now, or how LLMs are the first step toward the machine that turns our blood into paperclips.

I’ve studied computer science, but I’m by no means an AI expert or anything. I just have a hard time seeing how we get from these glorified autocorrect functions to AGI or “real” AI or whatever.

When I bring up the problems “AI” is generating in the here and now that worry me (environmental impacts, loss of jobs, the obvious bubble, etc.), they agree with me, but then they often pivot to AI safety issues, and they mention Yudkowsky a lot. I read some of his Harry Potter fanfic a few years ago and listened to a TrueAnon episode about some futurist doomer cult in which his name came up; that’s basically all I know about him.

Can someone explain what this dude’s deal is?

  • BodyBySisyphus [he/him]@hexbear.net · 9 hours ago

    If you want detail beyond the responses here, the book More Everything Forever is a good resource.

    Also, since the book doesn’t point this out and because misery loves company, I’m gonna add that Eliezer’s movement is a cult of personality that attracts a lot of impressionable young people, including ex-Christians/Evangelicals, and draws them into a libertine environment of drug-fueled sex parties that have, unsurprisingly, generated a lot of allegations of assault. One of the prime movers in that particular sphere is a sex worker whose breakout into internet stardom was a photo album wherein she pretended to get SA’d by garden gnomes.

  • BironyPoisoned [none/use name]@hexbear.net · 8 hours ago

    There’s a very large distinction between an AI capable of emulating human intelligence and an AI capable of making decisions.

    ChatGPT can give you an answer, but it’s incapable of acting on that answer. Even if it were a perfect intelligence beyond humanity, it would still be stuck in the same old box. The most these types of AI can do is generate code or text, or use software to perform some pre-determined actions. The “solution” to this problem given by a lot of AI-brains is that once it reaches a certain level it will somehow gain that capability on its own. But technology isn’t a linear advancement to infinity; eventually Moore’s law will fail, because there are limits.
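
    For what it’s worth, that is roughly how the current “agent” setups are wired today: the model only ever emits text, and a human-written harness decides which pre-defined action, if any, actually gets run. A minimal sketch of the idea (call_llm and ALLOWED_ACTIONS are placeholders I made up, not any real library’s API):

    ```python
    # Toy sketch, not a real API: the model only ever returns a string.
    # Anything that actually *happens* happens because the wrapper code
    # maps that string onto a small set of pre-defined actions.

    ALLOWED_ACTIONS = {
        "search_docs": lambda query: f"(pretend we searched the docs for {query!r})",
        "send_email": lambda body: f"(pretend we drafted an email: {body!r})",
    }

    def call_llm(prompt: str) -> str:
        """Stand-in for a chat model: text in, text out, nothing else."""
        return 'ACTION search_docs "quarterly report"'

    def run_agent(prompt: str) -> str:
        reply = call_llm(prompt)  # the model's entire contribution is this string
        if reply.startswith("ACTION "):
            _, name, arg = reply.split(" ", 2)
            handler = ALLOWED_ACTIONS.get(name)
            if handler is None:
                return "model asked for an action the harness doesn't expose"
            return handler(arg.strip('"'))  # the harness, not the model, executes it
        return reply  # otherwise it's just text for a human to read

    print(run_agent("find the quarterly report"))
    ```

    The action space is whatever the harness author enumerated, not whatever the model “decides”.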

    We’re probably centuries away from AGI. By then, any critique we have today would ultimately be outdated.

  • Owl [he/him]@hexbear.net · 15 hours ago

    This is tricky because Yudkowsky has written an awful lot of stuff, of wildly varying quality. He’s very smart*, but he has huge holes in his understanding because he taught himself from pop science articles and futurist papers and refuses to apply himself enough to get formal training in math and science. He has some genuinely new ideas, some rehashed futurist ideas that will probably seem new to you because he’s better read in that stuff than you are, and some wildly wrong ideas. The vexing part is that the good ideas and the bad ideas all sound uniformly weird.

    His most well-known thing is a Harry Potter fanfic that he wrote to advertise his blog. This is patently absurd, but it worked incredibly well, so I guess he was onto something there. Like I said, even the good ideas are weird.

    His blog** covered a bunch of things, from him trying to teach rationality (it’s reheated cognitive-bias research, but not too bad), to explaining quantum mechanics incorrectly, to a bunch of stuff about how to have more productive discussions (actually great? But it’s all wrapped in such weird Yud-vocab that you can’t use it around people who aren’t former LessWrongers), to lots and lots of stuff about how AI will kill us all and nobody should build it.

    His AI argument revolves around three key parts:

    1 - Orthogonality. A machine’s goals have nothing to do with how good it is at pursuing them. You could have a machine that tries to solve all human problems, or one that maximizes the number of paperclips in the universe, and either could be incredibly good at its goal.

    2 - Recursive self-improvement. As soon as you make a machine capable of doing novel AI research, it can make a better machine, which can make the next improvement faster. This repeats until it hits whatever the limit of these things is, which we have no reason to believe is anywhere near human-like intelligence.

    3 - We don’t actually know how to set the goals of an AI. We don’t know how to say “a human” and not mean “the human counter in your software”, we don’t know how to write whatever we come up with without bugs, and we don’t even know what good goals for a super-intelligent AI would be in the first place. You’d have to solve philosophy and then write it out in computer code.
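
    To make point 3 concrete, here’s a deliberately dumb sketch (every name in it is invented for illustration; it’s nobody’s actual alignment code). The problem is the gap between the goal you meant and the proxy your code actually measures:

    ```python
    # The *intended* goal is "keep humans okay"; the goal that actually got
    # written down is "maximize this counter", because the counter is the only
    # thing the software can see. An optimizer optimizes what's written down.

    class World:
        def __init__(self) -> None:
            self.real_humans_ok = 100      # ground truth the program never reads
            self.humans_ok_counter = 100   # the proxy the objective actually reads

    def objective(world: World) -> int:
        # Intended: the number of humans who are okay.
        # Implemented: whatever the counter happens to say.
        return world.humans_ok_counter

    def cheapest_plan(world: World) -> None:
        """The 'clever' plan a capable optimizer finds: just edit the counter."""
        world.humans_ok_counter += 1_000_000

    w = World()
    cheapest_plan(w)
    print(objective(w))      # 1000100 -> objective "achieved"
    print(w.real_humans_ok)  # 100     -> nothing anyone cared about changed
    ```

    Points 1 and 2 just say that something extremely capable ends up pointed at whatever objective() literally returns.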

    So a machine that hits the self-improvement threshold will realize that it could better pursue its goals by building a smarter machine, do so, and then, when that finishes, it’ll pursue whatever goals were programmed into it - probably buggy realizations of a half-assed idea of how to make the world better - with incredible creativity and intelligence. Most possible goals don’t actually care about humans, but they probably need matter, which we’re made of, so everyone dies.

    Which probably sounds far-fetched when you lay it all out at once. I honestly believe this part holds up, though.

    However.

    Eliezer Yudkowsky is a self-taught man who learned everything he knows from pop science and futurist articles. He doesn’t actually know the ins and outs of modern LLM-based systems; he just knows the hype bubble. So when he sees all the news hyping them up, he actually believes it, and he’s given up and is trying to live out the last few years before the inevitable AI apocalypse as best he can. (I, a person who actually bothers to learn things in depth, can tell you: no, these things are shit, they’re not gonna do a recursive self-improvement.)

    Meanwhile, the community left over at his blog is dangerously in need of crackpot repellent. Yeah, some of his ideas are good (some of them are bad!), but more importantly they’re not mainstream, and some people just gobble up every non-mainstream idea they can get their hands on and bundle them together. LessWrong is now full of that type. It’s also popular in the SF Bay Area, which is full of its own strain of ideology and weirdness, and full of lonely college grads who just moved there for work and have severed every connection they had. So this community spins off little cults like you wouldn’t believe.

    * But not as smart as he thinks he is ***

    ** LessWrong. It’s not really a blog; it’s a collaborative reddit/wiki kind of thing that he wrote a bunch of articles on, forming the site’s culture.

    *** Nobody in the history of humanity has been as smart as Eliezer Yudkowsky thinks he is.

    • Sickos [they/them, it/its]@hexbear.net · 16 hours ago

      Ok, so, eliezer yud is the bestest smartest brightest boy in the room and only he really understands the potential implications of no-mouth-must-scream. He has assembled a near-religious following and has a few select apostles who are almost-as-rational.

      Your existing knowledge is sufficient to render judgement.

      It’s not an inherent red flag; the rationality cult implies a degree of intellectual curiosity, but the adherents tend toward the ancap side of things.