Heard a story that a doctor used an LLM to prescribe medication for a chronic issue. It had terrible side effects and another doc had to cancel it. Come to find out the labs didn’t even match the meds. It’s over, there’s nothing more for us to do. Fucking hell world.

  • towhee [he/him]@hexbear.net
    10 days ago

    Doctor friends are having auto-scribe functionality pushed on them relentlessly. For those not in the know, competent doctors spend some time after your visit (perhaps up to half as long as the visit itself) writing a note that summarizes the visit and decides on a course of action. This important process is being turned into slop as doctors are encouraged to use LLMs to generate the note. The time saved is of course rolled into increasing the number of patient visits or other income-producing activities for the hospital.

    • ReadFanon [any, any]@hexbear.net
      9 days ago

      Y’know how reinforcement is really important to learning when you’re studying? Like listening to a lecture, writing notes, then reviewing those notes later?

      It’s the same for doctors - they listen to patients and assess symptoms, they take notes during the consultation (or mental notes), then they write out full notes later to keep in your patient file. But that’s not just administrative busywork, at least not entirely. The process of listening, examining, then writing out and revising the formal notes gives a doctor time to process the visit, identify any gaps, recall obscure info, or spot indications of something they should look into further.

      Burn me at the stake for this, but I can see positive uses for limited AI in diagnostic applications and as something to troubleshoot with or bounce ideas off of. That can be very useful, although it comes with risks. But using AI to replace the work of doctors is deeply troubling.

      I read a story from Redd*t, I think, so 50/50 it was a real story, but a person working as a medical transcriber was reporting on their company shifting to AI transcription and how it was making their job harder, because the AI would regularly hallucinate the most absurd things and had started inserting commentary from one fictitious figure who would say weird shit. The team started talking about this figure as if it were a character in a novel and it became a running joke.

      It’s mindboggling, because there are certain things that can make your life really hard when seeking healthcare, like being marked as having drug-seeking behavior or having BPD. It would only take the AI hallucinating that onto your patient file one time and suddenly you’re stuck with a label that is virtually impossible to get rid of and that can drastically affect your treatment as a patient. And let’s be honest here, a doctor is probably not going to remember the details from 6 or 12 months ago when they allegedly wrote that in your file, especially since they didn’t actually write it (which is proven to affect recall), so they’re almost certainly going to defer to “their” notes and agree with them.

      This shit is so concerning. I wish we weren’t a dictatorship of the bourgeoisie being puppeted by Silicon Valley techbros. AI should get the Amish treatment: it should exist in some outhouse building, isolated from the rest of the world, and you should have to intentionally go out of your way to use it, purposefully and with consideration for the consequences. It shouldn’t be effectively replacing things, least of all in critical institutions like medicine and education. You can fuck with a lot of things, and believe me I have a laundry list of complaints about both of those institutions, but breaking education and/or medicine risks breaking society.