• @BluesF@lemmy.world

      AI models don’t resynthesize their training data. They use their training data to determine parameters which enable them to predict a response to an input.

      Consider a simple model (too simple to be called AI, but the underlying concepts are very similar) - a linear regression. In linear regression we produce a model that follows a straight line through the “middle” of our training data. We can then use this to predict values outside the range of the original data - albeit with less certainty about the likely error.
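
      To make that concrete, here's a minimal sketch of the regression point (the data is made up purely for illustration): fit a line to points in one range, then ask it about a point well outside that range.

      ```python
      # Minimal sketch: fit a line to noisy points with x between 0 and 10,
      # then ask the fitted model about x = 25, far outside the training range.
      import numpy as np

      rng = np.random.default_rng(0)
      x_train = np.linspace(0, 10, 50)
      y_train = 3.0 * x_train + 2.0 + rng.normal(0.0, 1.0, size=x_train.shape)

      slope, intercept = np.polyfit(x_train, y_train, deg=1)  # "training" = fitting two parameters

      x_new = 25.0
      print(f"prediction at x={x_new}: {slope * x_new + intercept:.1f}")
      # The model still answers - it just gets less trustworthy the further out you go.
      ```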

      In the same way, an LLM can give answers to questions that were never asked in its training data - it's not taking that data and shuffling it around, it's synthesising an answer by predicting tokens. Also similarly, it does this less well the further outside the training data you go. Feed one the right gibberish and it doesn't know how to respond. ChatGPT is very good at dealing with nonsense, but if you've ever worked with simpler LLMs you'll know that typos can throw them off notably… They still respond OK, but things get weirder as they go.
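
      If “predicting tokens” sounds abstract, here's a toy stand-in (a word-count bigram model, nowhere near a real LLM, example text made up) that shows the same shape: generation is prediction from learned statistics, not lookup, and it recombines sequences it never saw verbatim - and it flails when fed something unseen.

      ```python
      from collections import Counter, defaultdict
      import random

      corpus = "the cat sat on the mat . the dog sat on the rug .".split()

      counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1  # "training": count which word follows which

      def next_word(word):
          options = counts.get(word)
          if not options:                      # unseen input / gibberish: nothing to go on
              return random.choice(corpus)
          return random.choices(list(options), weights=list(options.values()))[0]

      word, out = "the", ["the"]
      for _ in range(6):
          word = next_word(word)
          out.append(word)
      print(" ".join(out))  # can produce e.g. "the dog sat on the mat ." - never in the corpus verbatim
      ```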

      Now it's certainly true that (at least some) models were trained on CSAM, but it's also definitely possible that a model that wasn't could still produce sexual content featuring children. Its training set need only contain enough disparate elements for it to correctly predict what the prompt is asking for. For example, if the training set contained images of children it will “know” what children look like, and if it contains pornography it will “know” what pornography looks like - conceivably it could mix the two to produce generated CSAM. It will probably look odd, if I had to guess. Like LLMs struggling with typos, and regression models being unreliable outside their training range, image generation of something totally outside the training set is going to be a bit weird, but it will still work.

      None of this is to defend generating AI CSAM, to be clear, just to say that it is possible to generate things that a model hasn’t “seen”.

      • IHeartBadCode

        Okay, for anyone who might be confused about how a model that's not been trained on something can come up with something it wasn't trained for, a rough example of this is antialiasing.

        In the simplest of terms, antialiasing looks at a vector over a particular grid, sees what percentage of each cell it is covering, and then applies that percentage to shade the image and reduce the jaggies.
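
        A toy sketch of that coverage idea (the shape and numbers are made up; the point is just where the in-between shades come from):

        ```python
        # For each pixel, estimate what fraction of it the shape covers by supersampling,
        # and use that fraction as the shade. The soft edge values aren't stored anywhere
        # in the shape description - the math produces them.
        SIZE, SUB = 16, 4  # 16x16 image, 4x4 subsamples per pixel

        def inside(x, y):  # the "vector" data: a disc of radius 6 centred at (8, 8)
            return (x - 8) ** 2 + (y - 8) ** 2 <= 6 ** 2

        for py in range(SIZE):
            row = ""
            for px in range(SIZE):
                hits = sum(
                    inside(px + (i + 0.5) / SUB, py + (j + 0.5) / SUB)
                    for i in range(SUB) for j in range(SUB)
                )
                coverage = hits / (SUB * SUB)          # 0.0 .. 1.0
                row += " .:-=+*#@"[int(coverage * 8)]  # partial coverage -> partial shade
            print(row)
        ```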

        There's no information to do this in the vector itself; it's the math that gives the extra information. We're creating information from a source that did not originally have it. Now, yeah, this is a really simple approach, and it might have you go “well, technically we didn't create any new information”.

        At the end of the day, a tensor is a bunch of numbers that give weights to how pixels should arrange themselves on the canvas. We have weights that show us how pixels should fall to form an adult, weights that show us how pixels should fall to form a child, and weights that show us how pixels should fall to form a nude adult. There are ways to adapt a low-rank slice of those weights to find new approximations - that's literally what LoRAs do; it's literally their name, Low-Rank Adaptation. As you train on this new approach, you can wrap it into a textual inversion, which lets you attach a label (an ontology of sorts) to particular weights within a model.
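
        To make the low-rank part concrete, here's a hedged sketch (shapes and numbers made up) of how a LoRA-style update rides on top of a frozen weight matrix instead of replacing it:

        ```python
        import numpy as np

        d_out, d_in, rank = 1024, 1024, 8
        W = np.random.randn(d_out, d_in)        # frozen original weights

        A = np.random.randn(rank, d_in) * 0.01  # small trainable factor
        B = np.zeros((d_out, rank))             # starts at zero, so W is unchanged at first

        W_adapted = W + B @ A                   # the low-rank nudge rides on top of W

        print("full matrix params:", W.size)           # ~1,000,000
        print("LoRA params:       ", A.size + B.size)  # ~16,000 - a tiny steering wheel
        ```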

        Another way to think of this: six-fingered people in AI art. I assure you that no model was fed six-fingered subjects, so where do they come from? The answer is that the six-fingered person is a complex “averaging” of the tensors that make up the model's weights. We're getting new information where there originally was none.

        We have to remember that these models ARE NOT databases. They are just multidimensional weights that tell pixels from a random seed where to go in the next step of the diffusion process. If you text2image “hand” then there's a set of weights that push pixels around to form the average value of a hand. What it settles into could be a four-fingered hand, five fingers, or six fingers, depending on the seed and how hard the diffuser follows the guidance scale for that particular prompt's weight. But it's distinctly not recalling pixel for pixel some image it has seen earlier. It just has a bunch of averages of where pixels should go if someone says “hand”.
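
        Roughly what one of those guided steps looks like - a sketch only, with a placeholder function standing in for the real denoising network and made-up numbers:

        ```python
        import numpy as np

        rng = np.random.default_rng(seed=42)       # the random seed mentioned above
        latent = rng.standard_normal((4, 64, 64))  # random starting noise

        def predict_noise(latent, prompt):
            # stand-in for the actual denoising network (it ignores the prompt; a real model wouldn't)
            return rng.standard_normal(latent.shape)

        guidance_scale = 7.5
        noise_uncond = predict_noise(latent, prompt=None)
        noise_cond = predict_noise(latent, prompt="hand")

        # classifier-free guidance: push the prediction toward the prompted direction,
        # harder for a bigger guidance scale
        noise = noise_uncond + guidance_scale * (noise_cond - noise_uncond)
        latent = latent - 0.1 * noise              # one (very simplified) denoising step
        ```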

        You can generate something new from the average of complex tensors. You can put your thumb on the scale for some of those weights, give new maths to find new averages, and then, when it's getting close to the target you're after, use a textual inversion to give a label to this “new” average you've discovered in the weights.

        Antialiasing doesn't feel like new information is being added, but it is. That's how we can take the actual pixels being pushed out by a program and turn them into a smooth line that the program did not distinctly produce. I get that it feels like a stretch to go from antialiasing to generating completely novel information. But it's just numbers driving where pixels get moved to; it's maths. There's not really a lot of magic in these things. And given enough energy, anyone can push numbers to do things they weren't supposed to do in the first place.

        The way folks who need their models to be on the up and up handle this is to ensure that particular averages don't happen. Say we want to avoid outcome B', but averaging A and C can arrive at B'. Then what you need is to add a negative weight to the formula. This is basically training A and C to average to something like R' that's really far from the point we want to avoid. But like any number, if we know the outcome is R' for an average of A and C, we can add low-rank weights that don't require new layers within the model. We can just say that anything near R' gets a -P' weight; now, because of averages, we could land on C', but we could also land on A' or B', our target. We don't need to recalculate the approximation of the weights by which A and C give R' within the model.
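
        With toy numbers (made up purely for illustration, not from any real model), the “steer the average away, then counter-steer” idea looks something like this:

        ```python
        import numpy as np

        A = np.array([1.0, 0.0])           # two innocuous concepts...
        C = np.array([0.0, 1.0])
        B_unwanted = np.array([0.5, 0.5])  # ...whose plain average lands on the unwanted point B'

        blend = (A + C) / 2
        offset = np.array([2.0, 2.0])      # trained nudge that pushes the blend toward R'
        steered = blend + offset

        print(np.linalg.norm(blend - B_unwanted))    # 0.0  - the raw average is exactly B'
        print(np.linalg.norm(steered - B_unwanted))  # ~2.8 - the steered result is far from it

        countered = steered - offset       # a low-rank counter-weight can undo the nudge
        print(np.linalg.norm(countered - B_unwanted))  # 0.0 again - back at B'
        ```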

    • @GBU_28@lemm.ee

      Not all models use the same training sets, and not all future models would either.

      Generating images of humans of different ages doesn’t require having images of that type for humans of all ages.

      Like, no one is disputing your link. Some models definitely used training data with that, but your claim that the type of image discussed is “novel” simply isn't accurate, given how these models can combine concepts.

    • @solrize@lemmy.world

      And you don't understand how generative AI combines existing concepts to synthesize images - it doesn't have the ability to create novel concepts.

      Imagine someone asks you to shoop up some pr0n showing Donald Duck and Darth Vader. You've probably never seen that combination in your “training set” (past experience), but it doesn't exactly take creating novel concepts to fulfill the request - it's just combining existing ones. A web search on “how stable diffusion works” finds some promising-looking articles; I read one a while back and found it understandable. Stable Diffusion was one of the first of these synthesis programs to get wide use, and the newer ones are just bigger and fancier versions of the same thing.

      Of course, idk what the big models out there are actually trained on (basically everything they can get, probably not checked too carefully), but just because some combination can be generated in the output doesn't mean it must have existed in the input. You can test that yourself easily enough by giving weird and random enough queries.

      • @xmunk@sh.itjust.works

        No, you’re quite right that the combination didn’t need to exist in the input for an output to be generated - this shit is so interesting because you can throw stuff like “A medieval castle but with Iranian architecture with a samurai standing on the ramparts” at it and get something neat out. I’ve leveraged AI image generation for visual D&D references and it’s excellent at combining comprehended concepts… but it can’t innovate a new thing - it excels at mixing things but it isn’t creative or novel. So I don’t disagree with anything you’ve said - but I’d reaffirm that it currently can make CSAM because it’s trained on CSAM and, in my opinion, it would be unable to generate CSAM (at least to the quality level that would decrease demand for CSAM among pedos) without having CSAM in the training set.

        • @solrize@lemmy.world

          it currently can make CSAM because it’s trained on CSAM

          That is a non sequitur. I don’t see any reason to believe such a cause and effect relationship. The claim is at least falsifiable in principle though. Remove whatever CSAM found its way into the training set, re-run the training to make a new model, and put the same queries in again. I think you are saying that the new model should be incapable of producing CSAM images, but I’m extremely skeptical, as your medieval castle example shows. If you’re now saying the quality of the images might be subtly different, that’s the no true Scotsman fallacy and I’m not impressed. Synthetic images in general look impressive but not exactly real. So I have no idea how realistic the stuff this person was arrested for was.

    • digdug

      I think there are two arguments going on here, though:

      1. It doesn’t need to be trained on that data to produce it
      2. It was actually trained on that data.

      Most people arguing point 1 would be willing to concede point 2, especially since you linked evidence of it.

      • @xmunk@sh.itjust.works

        I think it's impossible to produce CSAM without training data of CSAM (though this is just an opinion). Young people don't look like adults when naked, so I don't think there's any way an AI would hallucinate CSAM without some examples to train on.

        • digdug

          In this hypothetical, the AI would be trained on fully clothed adults and children, as well as what many of those same adults look like unclothed. It might not get things completely right on its initial attempt, but with some minor prompting it should be able to get pretty close. That said, the AI will know the correct head-size proportions from just the clothed datasets. It could probably even infer limb proportions from the clothed datasets as well.

          • @xmunk@sh.itjust.works

            It could definitely get head and limb proportions correct, but there are some pretty basic changes that happen with puberty that the AI would not be able to reverse engineer.

            • @hikaru755@feddit.de

              There are legit, non-CSAM types of images that would still make these changes apparent, though. Not every picture of a naked child is CSAM. Family photos from the beach, photos in biology textbooks, even comic-style illustrated children’s books will allow inferences about what real humans look like. So no, I don’t think that an image generation model has to be trained on any CSAM in order to be able to produce convincing CSAM.

              • @xmunk@sh.itjust.works

                This is a fair point - if we allow a model to be trained on non-sexualizing minor nudity, it likely could sexualize those depictions without actually requiring sexualized minors to do so. I'm still not certain if that's a good thing, but I do agree with you.

                • @hikaru755@feddit.de

                  Yeah, it certainly still feels icky, especially since a lot of those materials in all likelihood will still have ended up in the model without the original photo subjects knowing about it or consenting. But that’s at least much better than having a model straight up trained on CSAM, and at least hypothetically, there is a way to make this process entirely “clean”.

            • digdug

              This is the part of the conversation where I have to admit that you could be right, but I don’t know enough to say one way or the other. And since I have no plans to become a pediatrician, I don’t intend to go find out.

    • @grue@lemmy.world

      it was trained on CSAM

      In that case, why haven’t the people who made the AI models been arrested?

      • @xmunk@sh.itjust.works

        Dunno, probably because they didn't knowingly train it on CSAM - maybe because it's difficult to prove what actually goes into a neural network's configuration, so it's unclear how strongly weighted it is… and lastly, maybe because this stuff is so cloaked in obscurity and proprietariness that nobody is confident how such a case would go.