• @filister@lemmy.world (OP)
    41 · 2 months ago

    That’s a very toxic attitude.

    Inference is, in principle, the process of generating the AI's response. So when you run an LLM locally, you are using your GPU only for inference.
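
    To make "inference is generation" concrete, here is a toy sketch of the autoregressive loop an LLM runs at inference time: the model is called once per token, each call predicting the next token from the context so far. The `toy_forward` function and its hard-coded vocabulary are hypothetical stand-ins for the GPU-heavy forward pass of a real model; only the loop structure mirrors actual inference.

    ```python
    # Minimal vocabulary for the toy model; a real LLM has tens of thousands of tokens.
    VOCAB = ["<eos>", "hello", "world", "!"]

    def toy_forward(context):
        # Hypothetical stand-in for the model's forward pass (the part a GPU
        # accelerates): returns one score ("logit") per vocabulary token.
        transitions = {
            (): [0.0, 1.0, 0.0, 0.0],                   # start      -> "hello"
            ("hello",): [0.0, 0.0, 1.0, 0.0],           # "hello"    -> "world"
            ("hello", "world"): [0.0, 0.0, 0.0, 1.0],   # "... world" -> "!"
        }
        # Any unseen context ends the sequence with <eos>.
        return transitions.get(tuple(context), [1.0, 0.0, 0.0, 0.0])

    def generate(max_tokens=10):
        # Inference loop: one forward pass per generated token.
        context = []
        for _ in range(max_tokens):
            logits = toy_forward(context)
            next_token = VOCAB[logits.index(max(logits))]  # greedy decoding
            if next_token == "<eos>":
                break
            context.append(next_token)
        return " ".join(context)

    print(generate())  # hello world !
    ```

    Running a model locally means your GPU repeats the forward-pass step of this loop, once per output token, until the model emits an end-of-sequence token.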

    • @aaron@lemm.ee
      18 · 2 months ago

      Yeah, I misread because I’m stupid. Thanks for replying, non-toxic man.