• fox [comrade/them]@hexbear.net · 1 day ago

    The difference is that what they’re doing is cheap on a per-user-operation level. If Google makes $0.00001 per search but spends a tenth of that serving it, they come out ahead. OpenAI and LLMs generally are so massively unprofitable that they cannot recover the cost of a query through advertising at any price an advertiser would be willing to pay.
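    The per-query margin claim above is simple arithmetic; a quick sketch using the commenter's illustrative figures (not measured data):

```python
# Per-query unit economics from the comment above.
# Both figures are the commenter's illustrative numbers, not real Google data.

revenue_per_search = 0.00001               # hypothetical ad revenue per search
cost_per_search = revenue_per_search / 10  # "a tenth of that" in serving cost

margin = revenue_per_search - cost_per_search
print(f"margin per search: ${margin:.7f}")  # → margin per search: $0.0000090
```

    A positive margin per operation means the business scales with volume; the claim downthread is that LLM queries invert this sign.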

    • The expensive part of LLMs is the training, though. Actual token output is rather efficient and quite cheap. For example, for Deepseek to generate a 200-token paragraph of text costs about $0.000084. Image generation is also rather expensive, but most of the data-center buildout and cost goes to training models, not serving LLM output. It still might be more expensive than advertisers are willing to pay, but not crazy expensive.
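    A back-of-the-envelope check of that figure; the per-million price below is implied by the commenter's numbers, not an official Deepseek rate:

```python
# Convert the quoted paragraph cost into the more common $/million-tokens unit.
# $0.000084 for 200 tokens is the commenter's figure, taken at face value.

paragraph_cost = 0.000084  # quoted cost for one 200-token paragraph
tokens = 200

cost_per_token = paragraph_cost / tokens
price_per_million = cost_per_token * 1_000_000
print(f"${cost_per_token:.9f}/token ≈ ${price_per_million:.2f} per million tokens")
# → $0.000000420/token ≈ $0.42 per million tokens
```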

      • fox [comrade/them]@hexbear.net · 21 hours ago

        You’d think, but efficiency gains are erased by the LLMs having bigger context windows and self-referencing “thinking” or “agent” modes that massively extend token burn. There’s public data out there showing how training costs are an enormous fixed cost, but then inference costs very quickly catch up and exceed the training cost.

        A model that’s token-efficient is a model that’s pretty useless, and a model that’s usable for anything is so inefficient as to have massively negative profit margins. If there were even one model out there that was cost-effective for the number of tokens burned, the provider would never shut up about it to buyers.
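        The "token burn" scaling can be sketched numerically. All numbers below are hypothetical, chosen only to show why an agent loop costs orders of magnitude more than a one-shot answer at the same per-token price (real providers also price input and output tokens differently, which is ignored here):

```python
# How agent/"thinking" modes multiply billed tokens vs a one-shot completion.
# Every number here is a made-up illustration, not measured provider data.

price_per_million_tokens = 0.42  # per-token price implied upthread
answer_tokens = 200              # a plain one-shot paragraph

# An agent loop re-processes a growing context each step and emits hidden
# reasoning tokens, so billed tokens grow far faster than visible output.
steps = 10
context_tokens_per_step = 8_000    # prompt + accumulated history, re-read each step
reasoning_tokens_per_step = 1_000  # hidden chain-of-thought per step

plain_cost = answer_tokens / 1e6 * price_per_million_tokens
agent_tokens = steps * (context_tokens_per_step + reasoning_tokens_per_step)
agent_cost = agent_tokens / 1e6 * price_per_million_tokens

print(f"one-shot: ${plain_cost:.6f}, agent loop: ${agent_cost:.4f} "
      f"({agent_cost / plain_cost:.0f}x)")
# → one-shot: $0.000084, agent loop: $0.0378 (450x)
```

        Cheap per-token pricing doesn't help when a single "useful" interaction burns hundreds of times the tokens of a plain completion.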

    • plinky [he/him]@hexbear.net (OP) · 1 day ago

      Depends on whether the ads are more effective, innit. You’re reaching a demographic that’s likely not seeing ads otherwise: aspiring rich tech bros and bored professionals. Like, getting 1,000 divorces is $10 mil in lawyer fees, which is somewhere around the numbers they already achieve for free; here and there they could find those $10 billion.