Some otherwise very well-informed comrades of mine invited me to an AI-related group chat where they discuss all things LLM, but one thing that really dominates the discussion is doomerism: how we're getting Skynet any second now, or how LLMs are the first step toward the machine that turns our blood into paperclips.

I've studied computer science, but I'm by no means an AI expert or anything. I just have a hard time seeing how we get from these glorified autocorrect functions to AGI or "real" AI or whatever.

When I bring up the problems "AI" is generating in the here and now that worry me (environmental impact, job losses, the obvious bubble, etc.), they agree with me, but then they often pivot to AI safety issues, and they mention Yudkowsky a lot. I read some of his Harry Potter fanfic a few years ago, and I listened to a TrueAnon episode about some futurist doomer cult or something in which his name came up. That's basically all I know about him.

Can someone explain what this dude’s deal is?