• @jeffw@lemmy.world (OP, mod)
    14 points · 6 months ago

    I think that’s a bit of a stretch. If it was being marketed as “make your fantasy, no matter how illegal it is,” then yeah. But just because I use a tool someone else made doesn’t mean they should be held liable.

    • @over_clox@lemmy.world
      -10 points · 6 months ago

      Check my other comments. I was comparing it to a hammer.

      Hammers aren’t trained to act or respond on their own from millions of user inputs.

      • FaceDeer
        8 points · 6 months ago

        Image AIs also don’t act or respond on their own. You have to prompt them.

        • @over_clox@lemmy.world
          -10 points · 6 months ago

          And if I prompted an AI for something inappropriate and it gave me a relevant image, that means the AI had inappropriate material in its training data.

          • FaceDeer
            11 points · 6 months ago

            No, you keep repeating this but it remains untrue no matter how many times you say it. An image generator is able to create novel images that are not directly taken from its training data. That’s the whole point of image AIs.

            • @xmunk@sh.itjust.works
              -5 points · 6 months ago

              An image generator is able to create novel images that are not directly taken from its training data. That’s the whole point of image AIs.

              I just want to clarify that you’ve bought the Silicon Valley hype for AI, but that is very much not the truth. It can create nothing novel; it can merely combine concepts, themes, and styles in an incredibly complex manner, but it can never create anything truly new.

            • @over_clox@lemmy.world
              -7 points · 6 months ago

              What it’s able and intended to do is beside the point if it’s also capable of generating inappropriate material.

              Let me spell it out more clearly. AI wouldn’t know what a pussy looked like if it had never been exposed to that sort of data set. It wouldn’t know other inappropriate things if it hadn’t been exposed to those data sets either.

              Do you see where I’m going with this? AI only knows what people allow it to learn…

              • FaceDeer
                8 points · 6 months ago

                You realize there are perfectly legal photographs of female genitals out there? I hear it’s actually a rather popular photography subject on the Internet.

                Do you see where I’m going with this? AI only knows what people allow it to learn…

                Yes, but the point here is that the AI doesn’t need to learn from any actually illegal images. You can train it on perfectly legal images of adults in pornographic situations, and also perfectly legal images of children in non-pornographic situations, and then when you ask it to generate child porn it has all the concepts it needs to generate novel images of child porn for you. The fact that it’s capable of that does not in any way imply that the trainers fed it child porn in the training set, or had any intention of it being used in that specific way.

                As others have analogized in this thread, if you murder someone with a hammer that doesn’t make the people who manufactured the hammer guilty of anything. Hammers are perfectly legal. It’s how you used it that is illegal.

                • @over_clox@lemmy.world
                  -7 points · 6 months ago

                  Yes, I get all that, duh. Did you read the original post title? CSAM?

                  I thought you’d catch the clue when I said “inappropriate.”

                  • FaceDeer
                    6 points · 6 months ago

                    Yes. You’re saying that the AI trainers must have had CSAM in their training data in order to produce an AI that is able to generate CSAM. That’s simply not the case.

                    You also implied earlier on that these AIs “act or respond on their own”, which is also not true. They only generate images when prompted to by a user.

                    The fact that an AI is able to generate inappropriate material just means it’s a versatile tool.