• 4 Posts
  • 55 Comments
Joined 2 years ago
Cake day: June 28th, 2023

  • I agree: unless you're doing low-level stuff where you need absolute control, you should use a modern language with proper abstractions just to save time. Most use cases for C++ can be covered by Rust or Go, since they aren't saddled with years of tech debt and bloat from having to maintain backwards compatibility.

  • 257m to Linux@lemmy.ml • *Permanently Deleted* • 1 point • 2 years ago

    Just to warn you, it might be very bulky, and the model the script downloads is deprecated, so you'll have to find a different .gguf model on Hugging Face. Try to find a lightweight .gguf model and replace the MODEL variable with its name, as well as the rest of the link. Or just download it from a browser and move it into the models folder.
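
    As a sketch, swapping in a different model might look like this (the TinyLlama file name and Hugging Face URL below are placeholders I picked for illustration, not a specific recommendation):

    ```shell
    # Example only: pick any lightweight .gguf model; this name/URL is a placeholder.
    MODEL="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
    MODEL_URL="https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/${MODEL}"

    mkdir -p models
    # Uncomment to download here, or fetch it in a browser and move it into models/:
    # curl -L -o "models/${MODEL}" "${MODEL_URL}"
    echo "Would fetch ${MODEL_URL} into models/${MODEL}"
    ```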


  • 257m to Linux@lemmy.ml • *Permanently Deleted* • 5 points • 2 years ago

    I believe Llama is open source, but I'm not sure how complicated it is to get running locally. Never mind: https://replicate.com/blog/run-llama-locally

    You can probably write a bash wrapper around it that feeds in "Can you summarize this text: (text here)" by setting the PROMPT variable in the bash script. (Probably just do PROMPT="Can you summarize this text: $1".) (Obviously don't recompile every time, so remove the clone, build, and download code.)
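
    A minimal sketch of that wrapper, assuming llama.cpp is already cloned and built (the ./main binary name, the -m and -p flags, and the model path are assumptions based on llama.cpp's example program; adjust for your build):

    ```shell
    # Write a hypothetical summarize.sh wrapper; the model path is a placeholder.
    cat > summarize.sh <<'EOF'
    #!/bin/sh
    # Point MODEL at whatever .gguf file you put in models/
    MODEL="models/your-model.gguf"
    PROMPT="Can you summarize this text: $1"
    # llama.cpp's example binary: -m selects the model, -p passes the prompt
    ./main -m "$MODEL" -p "$PROMPT"
    EOF
    chmod +x summarize.sh
    ```

    Then run it as ./summarize.sh "$(cat article.txt)".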