Some otherwise very well-informed comrades of mine have invited me to an AI-related groupchat where they discuss all matters LLM, but one thing that really dominates the discussions is doomerism about how we're getting Skynet any second now, or how LLMs are the first step toward the machine that turns our blood into paperclips.
I've studied computer science, but I'm by no means an AI expert or anything. I just have a hard time seeing how we get from these glorified autocorrect functions to AGI or "real" AI or whatever.
When I bring up problems generated by "AI" in the here and now that worry me (environmental impacts, loss of jobs, the obvious bubble, etc.), they agree with me, but then they often pivot to AI safety issues, and they mention Yudkowsky a lot. I read some of his Harry Potter fanfic a few years ago and listened to a TrueAnon episode about some futurist doomer cult or something in which his name came up; that's basically all I know about him.
Can someone explain what this dude’s deal is?


Is that the rationalist guy? The one who wrote a Harry Potter fan fiction as a recruitment tool? The guy who thought that Roko's basilisk was a serious problem?
Tbf, banning discussions about Roko's basilisk seems like a good call
Yeah but not because it’s a serious concern
I think it's kind of beautiful. It's inherent to that annoying idea that merely talking about it will bring harm onto others, so telling a bunch of rationalists that you're banning the topic to reduce that harm, even if you don't believe it yourself, is easier than telling them the idea is banned because it's annoying.