Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their primary LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • @Akuchimoya@startrek.website

    I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be “information professionals”. If they, as a slightly better-trained subset of the general public, don’t know that, the general public has no hope of knowing it.

    • @Arkouda@lemmy.ca

      Librarians went to school to learn how to keep order in a library. That does not inherently mean they have more information in their heads than the average person, especially about things that aren’t books and book organization.

    • @WagyuSneakers@lemm.ee

      It’s so weird watching the masses ignore industry experts and jump on weird media hype trains. This must be how doctors felt during Covid.

      • @Llewellyn@lemm.ee

        It’s so weird watching the masses ignore industry experts and jump on weird media hype trains.

        Is it though?

        • @WagyuSneakers@lemm.ee

          I’m the expert in this situation, and I’m getting tired of explaining to junior engineers and laymen that it is a media hype train.

          I worked on ML projects before they got rebranded as AI. I get to sit in the room when these discussions happen with architects and actual leaders. This is hype. Anyone who tells you otherwise is lying or selling you something.

          • @BlushedPotatoPlayers@sopuli.xyz

            I see how that is a hype train, and I also work with machine learning (though I’m far from an expert), but I’m not convinced these things are not getting intelligent. I know what their problems are, but I’m not sure whether the human brain works the same way, just (for now) more effectively.

            That is, we have visual information and some evolutionary BIOS, while LLMs have to read the whole internet and use a power plant to function. But what if our brains are just the same bullshit generators, and we are simply unaware of it?