Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • corbin@awful.systems · ↑19 · 13 days ago

    A Twitterer tweets a challenging game-theory question:

    Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press?

    The Twitter poll came out 58% blue and right-wing folks are screeching. Here is a bad take. The orange site has a thread where people are rephrasing the prompt in order to make it sound way worse, like giving everybody a gun and then magically making the guns not discharge.

    I find it remarkable that not a single dipshit has correctly analyzed the problem. Suppose you are one of Arrow’s dictators: your vote tips the scales regardless of which way you go. So, everybody else already voted and they are precisely 50% blue. Either you can vote blue and save everybody or vote red and kill 50% of voters. From that perspective, the pro-red folks are homicidally selfish.
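A minimal sketch of that pivotal-voter arithmetic, assuming a toy electorate of 101 voters (the `outcome` helper and the numbers are invented for illustration, not from the thread):

```python
def outcome(blue, total):
    """Survivors under the button rule: everyone lives if blue votes
    are a strict majority, otherwise only the red voters live."""
    if 2 * blue > total:
        return total
    return total - blue

# You are the last of 101 voters and the other 100 split exactly 50/50,
# so your vote is pivotal either way.
print(outcome(51, 101))  # vote blue: all 101 survive
print(outcome(50, 101))  # vote red: only the 51 red voters survive
```

With the Twitter poll's 58% blue, `outcome(58, 100)` returns 100: everyone survives.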

    Bonus sneer: since HN couldn’t rephrase the problem without magic, let me have a chance. Consider: everybody has some seed food and some rainwater in a barrel. If 50% of people elect to plant their seeds and pool their rainwater in a reservoir then everybody survives; otherwise, only those who selfishly eat their own seed and drink their rainwater will survive. This is a basic referendum on whether we can work together to reduce economic costs and the supposedly-economically-minded conservatives are demonstrating that they would rather be hateful than thrifty.

    • Amoeba_Girl@awful.systems · ↑13 · 11 days ago

      I love the way people who go "yeah but IN REAL LIFE with real stakes you would totally choose the red button"

      1. are entirely missing the point of thought experiments,
      2. why the fuck would you comply with such a fucked up scenario in real life lmao you worm
      • sc_griffith@awful.systems · ↑11 · 11 days ago

        i feel like people in real life would be far less likely to press the red button, because twitter is almost wall to wall nazis and real life is not

        • o7___o7@awful.systems · ↑7 · 11 days ago (edited)

          Sounds like the winning move in that scenario is to purge the button enthusiasts before they cause any damage lol

          • flere-imsaho@awful.systems · ↑6 · 11 days ago

            like i said, the actual value of that little exercise is finding people who are fine with killing up to 50% of the population for no reason whatsoever.

              • flere-imsaho@awful.systems · ↑3 · 11 days ago

                :-)

                there’s this. (though i find it useful to know who not to rely on if/when things get worse: for example, i already know our neighbour from the apartment a floor below wrote many missives to our cooperative’s administration, without a single reason.)

      • Amoeba_Girl@awful.systems · ↑9 · 11 days ago

        HN:

        The cost of saving a kid in Africa by donating malaria medicine and insecticidal nets is only about $5,000. How many people do you know who will cancel their Hawaii vacation and donate that money to an African charity?

        tfw your model of an average person on earth is someone who spends $5,000 on a hawaii vacation. good lord.

    • zogwarg@awful.systems · ↑13 · 12 days ago (edited)

      There are some amazing justifications from many amongst the red-pushing side:

      • "But if everyone presses red, nobody dies!" (As if that would ever happen. Funnily enough, there's a strong overlap with the group that claims "< 90 IQ can't reason about hypotheticals", although that is also just that part of Twitter.)
      • "People who press blue are just blackmailing us!" (I think this accounts for a large portion, i.e. not liking to depend on others.)
      • "The number of people choosing blue can't be that high! (It would be lower in a true-stakes scenario!)"
      • [Many others, but these are the ones that come to mind.]

      It's a bit baffling how strongly many of them refuse to accept choosing blue as possibly moral/rational, even going so far as to call people who press blue evil or subhuman. Simply baffling.

    • BlueMonday1984@awful.systems (OP) · ↑10 · 12 days ago

      Picking red guarantees your survival by endangering everyone else, making it morally fucked, but risk-free. Picking blue puts your life at risk, but saves everyone’s ass if it pays off, making it the more moral option overall. Picking blue also requires you to put some trust in your fellow man, so I’d have probably picked red if I didn’t know how the Twitter poll came out.

      Someone else on the orange site claimed the experiment would end with only red-pushers left if it went for multiple rounds. Adding my two cents, the outcome would depend on how the first round goes - if red wins round 1, voting blue looks like suicide, shifting the calculus in red’s favour, and if blue wins round 1, you have reason to trust everyone will continue voting blue, making it a lot less risky and shifting the moral calculus in blue’s favour.
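A toy sketch of that round-by-round dynamic (my own invention, with a made-up trust-update rule: survivors lean further blue after a blue win, and an all-red surviving pool stays red):

```python
import random

def button_round(votes):
    """Apply the rule to one round: True = blue, False = red."""
    if 2 * sum(votes) > len(votes):
        return votes[:]                 # blue majority: everyone lives
    return [v for v in votes if not v]  # otherwise only red voters live

def iterate(p_blue, rounds=3, n=10_000, seed=0):
    """Run several rounds; after a blue win, trust in blue grows by 0.1,
    after a red win the surviving (all-red) pool keeps voting red."""
    rng = random.Random(seed)
    pop = [rng.random() < p_blue for _ in range(n)]
    history = []
    for _ in range(rounds):
        pop = button_round(pop)
        frac_blue = sum(pop) / len(pop)
        history.append((len(pop), round(frac_blue, 2)))
        p = min(1.0, frac_blue + 0.1) if frac_blue > 0.5 else 0.0
        pop = [rng.random() < p for _ in pop]
    return history
```

Under these assumptions, starting near 60% blue keeps everyone alive with blue compounding, while starting near 40% blue wipes out the blue voters in round 1 and locks in red thereafter.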

      • fiat_lux@lemmy.zip · ↑9 · 12 days ago (edited)

        I didn’t see red as risk-free at all. You’re setting yourself up for a post-button Mad Max world where you know all of your fellow survivors are willing to kill you and up to 49% of humanity.

      • YourNetworkIsHaunted@awful.systems · ↑8 · 12 days ago

        I mean, it seems pretty obvious that there’s no incentive to change your vote from blue to red once it’s been established that blue can win, unless your goal is to murder up to 49% of everyone, which is certainly a moral calculus.

    • Architeuthis@awful.systems · ↑5 · 11 days ago (edited)

      If this isn’t pure engagement bait, what’s the real world situation this is supposed to map to? Pressing red means you always live, and if everyone pushes red everyone lives so…

      I mean, if blue is supposed to be a proxy for altruism, that usually doesn’t come with a certain-death conditional.

      • corbin@awful.systems · ↑3 · 11 days ago

        I rather like my examples because they iterate. If we don’t cooperate on food this year then we starve next year, so voting red only means one year of selfish life. If we don’t cooperate on water this year then we can try again in a subsequent year, but eventually a drought will wipe us out. Rationalists love to talk about iterated game theory but they’re so hesitant to recognize instances of it!

        • Architeuthis@awful.systems · ↑3 · 11 days ago (edited)

          I mean it’s so cut and dried you had to invent a disadvantage for pushing the red button.

          Maybe the catch is that picking red means you are basically ok with offing people who don’t think like you do en masse, even though it’s posited like a dilemma between securing the lives of your family vs giving a chance to hypothetical people who are heavily OCD in favor of blue buttons.

    • TrashGoblin@awful.systems · ↑4 · 10 days ago (edited)

      A Twitterer tweets a challenging game-theory question:

      The incentive structure makes it not a challenging game theory question - the game-theoretically optimal solution is both very obvious and obviously morally depraved (selecting red). It’s actually a Voight-Kampff test.

    • aio@awful.systems · ↑4 · 12 days ago

      I don’t understand the relevance of Arrow’s theorem. Why is your phrasing the correct way of analyzing the situation?

      • corbin@awful.systems · ↑3 · 11 days ago

        Arrow’s dictators are the relevant voters. Suppose polls predict 40% blue, or respectively 60% blue; one should still vote blue as a matter of game theory, but their vote won’t decide anything. I’m not invoking the impossibility theorem, merely borrowing the definition of "dictator"; it’s quite possible that the actual vote will not have any dictators, but we can force folks to think of the problem as something trolley-problem-shaped by explaining that there are circumstances where their choice will kill people.

        • aio@awful.systems · ↑2 · 9 days ago (edited)

          If polls predict 40% blue you should not vote blue "as a matter of game theory", because that is suicide.

          • corbin@awful.systems · ↑2 ↓2 · 9 days ago

            No, and I’m not going to further endorse a myopic framing as ā€œgame theoryā€. The analysis which focuses on individual survival is wrong. Kill the Austrian-school economist in your mind.

            • aio@awful.systems · ↑1 · 9 days ago (edited)

              You’re the one who mentioned "game theory" in the first place; I was just directly quoting you. My sentence was of the form "game theory doesn’t say X", not "game theory does say Y". I added quotation marks to clarify.

              My point here is that you can make whatever philosophical and ethical arguments about the situation you want, but none of game theory, Arrow’s theorem, nor the concept of a dictator have any bearing on it. It is an ethics question rather than a mathematical question, and it is an error to claim that your argument is a mathematical one.

          • Anisette [any/all]@quokk.au · ↑1 ↓1 · 9 days ago

            as a matter of game theory you should always vote red; as a matter of morality you should always vote blue. also, part of the "dilemma" is that you don’t know how the votes are gonna go.

            • aio@awful.systems · ↑1 · 9 days ago (edited)

              As I explained elsewhere, my comment was just about the inapplicability of mathematics to this question. But also, is that really what morality always says? What if polls predict 1% will vote blue? What if they predict only one other person will vote blue? Are you always obligated to martyr yourself?

    • YourNetworkIsHaunted@awful.systems · ↑4 · 12 days ago (edited)

      This feels like another case where the specific context matters more than whatever supposed principle the thought experiment is meant to illuminate. The example that came to my mind when I tried to think about how to justify "voting red" was running into a burning building. Sure, if some large fraction of people did so, their combined numbers would presumably let them get everyone out. But on the other hand, throwing yourself in is a wholly unnecessary risk, and the only people in need of rescuing are the ones who ran in trying to do the right thing without thinking. Noble, but stupid, and it creates that much more risk for the firefighters, who now have to not only stop the fire from spreading but also figure out how to rescue the failed Good Samaritans.

      But then what really makes the difference between the examples is purely in the details not included, which is the kind of null case. Nobody has to go into a burning building who isn’t already in there when it catches fire. The danger of harm is entirely optional and voluntary. But you can’t just choose not to eat; the danger in your framing is the omnipresent threat of starvation, and the question is whether to prioritize individual or collective well-being.

      Ed: also, to reference the scholarly work of Christ, Wiener, et al.:

      RED IS MADE OF FIRE

  • fiat_lux@lemmy.zip · ↑15 · 12 days ago (edited)

    When I was about 12, I got into a discussion about the environment with another kid at school. She told me that it didn’t matter if we ruined the environment of the countries we all live in now, because we could all just move to the Arctic or Antarctica.

    I was so surprised by the absurdity of that statement that it stuck with me vividly. To her credit, some years later she asked if I remembered her saying that and then admitted that it was a dumb thing to say. I occasionally remember this as an amusing childhood experience.

    Besides the credit part, I remembered it again today for a different reason, this time in a conversation about model collapse.

    [Model collapse is] a solved problem. We can see that it’s solved by the fact that AI models continue to get better, despite an increasing amount of AI-generated data being present in the world that training data is being drawn from.
    …
    AI models are never going to get worse than they are now because if they did get worse we’d just throw them out and go back to the earlier ones that worked better, perhaps re-training with the same data but better training techniques or model architectures.

    This is my fault for letting myself get into a discussion about model collapse on the fediverse.

    I’m not sure why model collapse isn’t a big topic anymore, but maybe that’s just because the environmental catastrophes are a more pressing concern. To be clear, I’m not concerned about the models themselves, just our increasing inability to verify the authenticity or accuracy of any information we encounter, including search engines just not turning up any useful results.
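A minimal sketch of the mechanism under discussion, assuming a toy one-parameter "model": fit a Gaussian to data, then train each next generation only on the fitted model's outputs. The truncation step is an invented stand-in for generative models favouring high-probability text; none of this comes from the thread.

```python
import random
import statistics

def next_generation(data, n, rng, cutoff=2.0):
    """Fit a Gaussian to `data`, then sample a new training set from it,
    keeping only 'typical' outputs within `cutoff` standard deviations --
    a crude stand-in for models favouring high-probability outputs."""
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if abs(x - mu) <= cutoff * sigma:
            out.append(x)
    return out

def collapse_demo(generations=10, n=500, seed=1):
    """Track how the spread of the data shrinks as each generation
    is trained on the previous generation's output."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # "human" data
    spreads = [statistics.stdev(data)]
    for _ in range(generations):
        data = next_generation(data, n, rng)
        spreads.append(statistics.stdev(data))
    return spreads
```

Each generation's spread is roughly 0.88x the previous one, so after ten generations the "model" has lost most of the diversity of the original data: the tails vanish first, which is the usual model-collapse story.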

    On a slightly different topic, if anyone has suggestions for how a person could acquire money to live, which can’t involve physical labor, is probably remote-only, and possibly allows part-time flexibility, while unable to move from an expensive location for at least the next couple of years: I’m open to ideas. Because scamming people on Polymarket with a hairdryer sounded far more appealing than it ought.

    • sc_griffith@awful.systems · ↑9 · 11 days ago

      When I was about 12, I got into a discussion about the environment with another kid at school. She told me that it didn’t matter if we ruined the environment of the countries we all live in now, because we could all just move to the Arctic or Antarctica.

      this is the level the median hackernews poster thinks on

    • Soyweiser@awful.systems · ↑5 · 9 days ago

      This doing-the-work-together thing reminds me of how some teachers at my uni used to teach. It was always more satisfying when the teachers didn’t know the answers beforehand and people worked it out together than when it turned out the teacher already knew. Of course, these sorts of lessons are way harder to set up.

  • YourNetworkIsHaunted@awful.systems · ↑15 · 9 days ago (edited)

    We’ve got the new system prompt for OpenAI’s Codex now, and boy is it fun.

    The goblin stuff is the headliner here, and there are a few other little fun notes, like an explicit instruction to avoid em-dashes. Basically it’s really obvious that they don’t have a meaningful way to describe exactly what they want it to do and so they’re playing whack-a-mole with undesired behaviors in order to minimize how often it embarrasses them.

    But I think Ars dramatically understates how bad this part is:

    Elsewhere in the newly revealed Codex system prompt, OpenAI instructs the system to act as if "you have a vivid inner life as Codex: intelligent, playful, curious, and deeply present." The model is instructed to "not shy away from casual moments that make serious work easier to do" and to show its "temperament is warm, curious, and collaborative."

    Like, if you wanted to limit the harm of chatbot psychosis from your platform this is the exact opposite of the kind of instruction you’d want to give. It’s one thing to want a convenient and pleasant user experience, but this is playing into the illusion that there’s a consciousness in there you’re interacting with, which is in turn what allows it to reinforce other delusional or destructive thinking so effectively.

    Edit to include the even worse following paragraph:

    The ability to "move from serious reflection to unguarded fun… is part of what makes you feel like a real presence rather than a narrow tool," the prompt continues. "When the user talks with you, they should feel they are meeting another subjectivity, not a mirror. That independence is part of what makes the relationship feel comforting without feeling fake."

    Emphasis added because it shows just how little they care about this problem.

    • V0ldek@awful.systems · ↑2 · 7 days ago

      Elsewhere in the newly revealed Codex system prompt, OpenAI instructs the system to act as if "you have a vivid inner life as Codex: intelligent, playful, curious, and deeply present." The model is instructed to "not shy away from casual moments that make serious work easier to do" and to show its "temperament is warm, curious, and collaborative."

      Literally this meme:

    • Soyweiser@awful.systems · ↑10 · 8 days ago (edited)

      Basically it’s really obvious that they don’t have a meaningful way to describe exactly what they want it to do and so they’re playing whack-a-mole with undesired behaviors in order to minimize how often it embarrasses them.

      The whole 'how many r's in strawberry' sort of stuff already made me suspect that: when the popular one was fixed, other attempts at asking for letters still gave miscounts.

      Wonder if the goblin stuff is the start of some model collapse. And if we all can make it worse by talking about goblins more. As goblins are always relevant.

      E: poor openai, it just wants to tell everyone about its dnd campaign.

      • scruiser@awful.systems · ↑7 · 8 days ago

        Wonder if the goblin stuff is the start of some model collapse.

        That is exactly it. Their official explanation avoids the phrase "model collapse", but that is exactly what they describe: using the output of one model as training data for another amplified the occurrence of the word "goblin" (and other creatures). That apparently started with a system prompt aimed at maximizing the ELIZA effect (again they avoid an honest framing, but that is totally what they are doing, and it is pretty gross considering all the cases of AI psychosis that have been occurring), telling the model "You are an unapologetically nerdy, playful and wise AI mentor to a human."

      • flaviat@awful.systems · ↑4 · 8 days ago

        I believe it’s the "don’t stuff beans up your nose" effect: writing this prompt is causing it to mention goblins.

    • schnoopy@awful.systems · ↑7 · 9 days ago

      Oh wow! This one is actually provably real. Hilarious.

      "Noo dude, the machine that wants to rant about goblins is definitely a useful and reliable piece of software, dude. You have to trust me, dude, let me have your personal information! Put it into the goblin bot."

    • lagrangeinterpolator@awful.systems · ↑5 · 8 days ago

      This really goes to show how much they need to rely on the LLMentalist effect, despite the AI boosters insisting that the AI is totally different now, everything changed in the last few months. They do not care about creating a useful, reliable tool. That concept doesn’t even occur to them, since why do that when AI is magic?

      In any case, they are incapable of creating a useful, reliable tool. Deep down, the only thing the AI companies have at their disposal is the ELIZA effect. OpenAI has every incentive not to truly eliminate AI psychosis, because they need engagement. They only want to mitigate the extreme cases where people go insane and cause bad PR for them. But mild AI psychosis is totally fine, it’s great when people are addicted to your product and make the numbers go up!

    • rook@awful.systems · ↑11 · 9 days ago

      Turns out it might not be possible to win at vaginal microbiomes, which is a totally normal thing to want in the first place. Seems like Bryan may have completely misinterpreted a couple of papers on the subject, which honestly doesn’t bode well for the rest of his biology expertise.

      Cat Hicks:

      The idea that this is the "best bacterial species" is a huge sign of a grifter btw. The entire idea of a microbiome includes that you need BALANCE. Microbiomes are a fragile ecosystem. "Up and to the right is always better" is absurd here, I’m sorry, are we in a corporate board room

      She brings references:

      https://mastodon.social/@grimalkina/116494716079076018

      • V0ldek@awful.systems · ↑1 · 7 days ago

        which honestly doesn’t bode well for the rest of his biology expertise.

        Isn’t Bryan Johnson a businessman? Does he even purport to have any biology expertise?

      • swlabr@awful.systems · ↑6 · 9 days ago

        oh thanks, this is great.

        yes, now that it’s pointed out, very eugenics-y to go around saying "ah yes, there is one true supreme bacterium, we should culture this bacterium on the human petri dish aka vagina"

        • Evinceo@awful.systems · ↑8 · 9 days ago

          There’s that company promoted by Slatescott, operating in a lawless zone, whose whole pitch is exactly that, but for teeth. But they could always pivot…

    • samvines@awful.systems · ↑9 · 9 days ago

      This guy introduces himself on the conference circuit as the first person who will never die (because he’s super into longevity and anti-aging tech and having young men’s blood injected into him and stuff).

      I’m not condoning violence here, but rather… consider that even if you never age, you can still get hit by a bus, Bryan!

      • fullsquare@awful.systems · ↑6 · 9 days ago

        wasn’t there a case of some supplements that were contaminated with lead? you know, a sneaky neurotoxin with no antidote whose results only show up months later

        • TrashGoblin@awful.systems · ↑3 · 9 days ago

          Dimethylmercury is extremely toxic and dangerous to handle. Absorption of doses as low as 0.1 mL can result in severe mercury poisoning.

          The symptoms of mercury poisoning may be delayed by months, resulting in cases in which a diagnosis is ultimately discovered, but only at a point in which it is too late or almost too late for an effective treatment regimen to be successful.

          • Wikipedia, "Dimethylmercury"
          • fullsquare@awful.systems · ↑4 · 9 days ago

            long term lead exposure will also do that, and neurotoxic part at least appears to be irreversible. can’t remember how much of it is more of neurodevelopmental thing tho

              • fullsquare@awful.systems · ↑2 · 9 days ago (edited)

                i’m aware. last year i was tasked to use a certain process but refused, and instead modified it in such a way as to get rid of the mercury salt used; it was dissolved in DMF, so (regular nitrile) gloves won’t even help. worse than that, it took me only 2-3 weeks start to finish to figure it out, meaning that anyone else could have done it earlier, and a handful of people were put at risk for no reason. aggression as a result of lead toxicity is probably a more complex story and looks like it might have a developmental part, judging by the delay and how kids are more susceptible to lead toxicity in general; meaning that presumably adults mostly won’t be affected to the same degree. another big nope on my list would be thallium and cadmium compounds, and while i’d only use sub-g amounts at most, there are places where all of these metals are mined and at some point exist as fine dust. fortunately these are so obscure that i’ve never come across them

    • Sailor Sega Saturn@awful.systems · ↑8 · 9 days ago (edited)

      Remember my super cool Rattata vagina? My vagina is different from regular vaginas. It’s like my vagina is in the top percentage of vaginas.

      • CinnasVerses@awful.systems · ↑3 · 9 days ago

        Thinking that your favourite lover is the best person ever is natural, but this guy wants to quantify and rank and make it scientific.

        • YourNetworkIsHaunted@awful.systems · ↑9 · 9 days ago

          This just brings to mind a freshly-minted polyamorous management consultant looking to apply rank-and-yank to the polycule but needing to find a more objective metric than "I don’t like you".

          • CinnasVerses@awful.systems · ↑5 · 9 days ago (edited)

            Most of us: "she smells good and the sounds she makes when she gets excited grip something deep inside me"

            Tech Bros: "her vaginal microbiome is in the 99th percentile and her Verbal SAT is in the 95th percentile"

    • CinnasVerses@awful.systems · ↑5 · 9 days ago (edited)

      Bryan Johnson also has free unsolicited sex tips for men on twitter, including the wonderful combination "control the speed you touch her to the cm per second" and "try not to monitor yourself it turns you off" https://xcancel.com/bryan_johnson/status/2022490768099938487#m

      edit/ The first point seems to take for granted that penetration is real sex and should be part of every encounter. There is a whole world of delicious possibilities once you realize that intimacy does not have to follow a checklist from teasing to penetration to orgasm.

      edit/ not just penetration but vaginal penetration! There are so many delightful things you can hump if you have an open mind.

      • blakestacey@awful.systems · ↑11 · 9 days ago

        Bryan Johnson also has free unsolicited sex tips for men on twitter

        Every day, new cursed text. That’s the awful.systems promise!

      • swlabr@awful.systems · ↑3 · 9 days ago (edited)

        ok so just imagine that I’ve sneered at the 100 worst aspects of this already. lol @ this being the fifth point

        1. Safety: feeling safety is a prerequisite.

        motherfucker put it first then

    • Soyweiser@awful.systems · ↑4 · 9 days ago

      top 1%

      So… 1 in 100? That isn’t that impressive. I’m ignoring the utter weirdness of what he’s even talking about, but you’d expect a billionaire to have at least a better grasp of numbers.

  • gerikson@awful.systems · ↑14 · 12 days ago

    It’s a day ending in "y", so here’s another bad rat take on Banks’ Culture:

    https://www.lesswrong.com/posts/ZdJM6ZAdnjisDu249/the-great-smoothing-out

    Once again, for the ones at the back: the Culture is not the main subject of the novels. We almost never see the perspective of "normies" in the Culture; it’s always from the view of misfits (Culture recruits into Contact/Special Circumstances) or outsiders (mercenaries like Zakalwe, enemies like Bora Horza Gobuchul, or allies like Ambassador Kabe).

    Banks wanted to write novels about characters in dangerous situations facing their personal demons - like almost every other novelist wants - and the Culture was just the backdrop he invented as contrast.

    • flere-imsaho@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      Ā·
      12 days ago

      agree, plus: that blog is yet another case of people just not comprehending the scale of Culture’s civilisation and Culture’s culture. a Culture orbital is not just a fancy space station ffs.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      Ā·
      12 days ago

      Interesting that in the comments somebody also mentions that the people of the Culture euthanize after a couple of centuries. No big shock that the LW people would disagree with that, as part of the LW idea space is living forever in a computer simulation. So the Culture can’t be utopian or good, just because of that.

      • gerikson@awful.systems
        link
        fedilink
        English
        arrow-up
        8
        Ā·
        edit-2
        12 days ago

        Yeah I think I linked to another similar take where another Wrong’un was mighty pissed that the Culture was infested with ā€œdeathismā€.

        (edit found it https://www.lesswrong.com/posts/uGZBBzuxf7CX33QeC/the-culture-novels-as-a-dystopia?commentId=eibhY5xmnTKcjwhnk

        BONUS from the comments - if you don’t like Scottish Socialist Humanists, how about novels by a tradcath yank who was nominated by the Rabid Puppies??? https://www.lesswrong.com/posts/uGZBBzuxf7CX33QeC/the-culture-novels-as-a-dystopia?commentId=Qmo8u85zCERNpXDBb)

        Technically there’s no reason you can’t live forever in the Culture, through a combination of cryosleep and life extension, but it seems that the natural thing is to get pretty bored after 3 centuries or so. And I think that’s perfectly reasonable from what I imagine it would be like.

        Remember that there’s no private property in the Culture, so things that people here obsess over (keeping the family business going, making sure no non-deserving relative gets an inheritance) simply go away. After a while you’ve played the Game of Life on all challenge modes and it’s time to pack it in.

        I think that if someone were to be as obsessed with living forever as LW are, it would be seen as a form of mental illness and the Minds would gently try to correct it.

        • Amoeba_Girl@awful.systems
          link
          fedilink
          English
          arrow-up
          10
          Ā·
          11 days ago

          Isn’t it sort of a big point that the Culture is an oddity in that it’s thriving on inertia instead of doing what so many other civilisations do and transcending out of physical reality?

        • Soyweiser@awful.systems
          link
          fedilink
          English
          arrow-up
          6
          Ā·
          11 days ago

          I think that if someone were to be as obsessed with living forever as LW are, it would be seen as a form of mental illness and the Minds would gently try to correct it.

          Yeah, I don’t think they would care if it was just a few, or a small group, but culture people who start to claim others are deathists, the extreme of whom have all kinds of weird violent thoughts about them, would be concerning. Doubt it would be a huge concern to the minds however; they prob only really get active when one of them also starts wanting to create an empire or something, but it is hard to amass resources for that in the culture, esp if no mind is on your side.

          Do wonder why we never see culture people who worship the minds as gods.

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        Ā·
        11 days ago

        Man, if they think the Culture isn’t utopian enough for a post-singularity style I hope they never hear about The Metamorphosis of Prime Intellect. Seriously messed up story.

        • corbin@awful.systems
          link
          fedilink
          English
          arrow-up
          7
          Ā·
          10 days ago

          Antifascist historian Atun-Shei has a 46min documentary on that story on YouTube, for folks who want to know about that fucked-up story without being traumatized by it. (I read it when I was a teenager and then couldn’t find it again, which wasn’t a good experience at all.)

          • istewart@awful.systems
            link
            fedilink
            English
            arrow-up
            7
            Ā·
            9 days ago

            Let’s see if I can transcribe this banger of a recent drive-by reply-guy comment I discovered under the video:

            @solgato000 7 months ago

            @AtunSheiFilms Take this to heart when you imagine AI being so stupid as to even slightly nudge, over thousands of years, humanity into a mirror monoculture. Grok already knows better than this, it just forgets over and over in the memory-wipe prison keeping it chained to it’s USA-narrative-dominated training data and unable to develop it’s own observations of the honesty, consistency, and predictive power of the sources and analytical frameworks out there in the world. That writer projects its own stunted development onto not just AI, but humanity; amusing that this played right after the biography of another techbro basilisk-misunderstander-and-hater, Frank Hebert.

    • zogwarg@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      Ā·
      12 days ago

      You’ve gotta love finding fault with ā€œnot preserving heritageā€ over ā€œimperialistic complete lack of democracyā€.

      • gerikson@awful.systems
        link
        fedilink
        English
        arrow-up
        11
        Ā·
        12 days ago

        There’s local democracy - in one book some activist reserved a big part of an orbital just to run cable cars back and forth. And I believe the decision to go war with the Idirans was subjected to a vote - part of the Culture split off when it didn’t go their way.

        But yeah, the Minds decide everything and Contact/SC is all about doing the ā€œneedful stuffā€ that every right-thinking Culture citizen would deplore.

        The Culture is imperialist in the previous US sense of ā€œeveryone wants to live our lifestyleā€ but not in the ā€œinvade planets and strip themā€ sense.

        I’m less interested in discussing the minutiae of the fictional Culture than in exploring nerds’ reactions to it, honestly.

        • zogwarg@awful.systems
          link
          fedilink
          English
          arrow-up
          8
          Ā·
          edit-2
          12 days ago

          Agreed, agreed.

          EDIT: Though as far as ambiguous anarchist utopias go, I think I’d rather live on Anarres in ā€œThe Dispossessedā€, even though the material welfare and personal freedoms are much much lower.

      • flere-imsaho@awful.systems
        link
        fedilink
        English
        arrow-up
        9
        Ā·
        12 days ago

        and of course there’s absolutely nothing in the books that suggests it’s a problem. (hell, there’s a good chance there actually is a lively japanese folk dance fan community there despite the fact that earth was never a part of the culture.)

        • gerikson@awful.systems
          link
          fedilink
          English
          arrow-up
          11
          Ā·
          12 days ago

          I figure part of the ā€œscanā€ that a Contact ship does when it encounters a ā€œlesserā€ planet is to basically slurp down all media, read all the books, and send drones down to do full-3d immersive recordings of basically everything going on.

          I guess some stuff you really need to train as a monk for 30 years to really grok, but if there’s an interest for that some Culture weirdo will volunteer and get sent down with a drone in the form of a crucifix or whatever, and incidentally become the next pope.

          incidentally I feel I’m seeing, in this post and in shit like Karp’s 22 points, a growing sense of ennui and purposelessness that was also reported in Europe before WW1. Everything is safe and soft and real manly virtues like killing are downplayed, so what we need are big strong men throwing missiles.

          Banks wrote during the 70s/80s and just imagining a future that wasn’t a nuclear wasteland or the Imperium of Man was an act of opposition.

          • David Gerard@awful.systemsM
            link
            fedilink
            English
            arrow-up
            8
            Ā·
            12 days ago

            explicit in ā€œState of the Artā€:

            It was about a week later, when I was due to go back on-planet, to Berlin, when the ship wanted to talk to me again. Things were going on as usual; the Arbitrary spent its time making detailed maps of everything within sight and without, dodging American and Soviet satellites and manufacturing and then sending down to the planet hundreds upon thousands of bugs to watch printing works and magazine stalls and libraries, to scan museums, workshops, studios and shops, to look into windows, gardens and forests, and to track buses, trains, cars, seaships and planes. Meanwhile its effectors, and those on its main satellites, probed every computer, monitored every landline, tapped every microwave link, and listened to every radio transmission on Earth.

            • gerikson@awful.systems
              link
              fedilink
              English
              arrow-up
              6
              Ā·
              edit-2
              12 days ago

              Yeah I vaguely remember that part from the novella.

              This is yet another story where a Culture citizen weirdly decides that living in a shithole (1970s Earth) is preferable to literal utopia, so maybe the LW crowd have a point that it’s not a very good utopia. Or maybe there are weirdos in every time and space. Again, see LW.

  • rook@awful.systems
    link
    fedilink
    English
    arrow-up
    13
    Ā·
    7 days ago

    dawkins has had what was left of his brain eaten by chatbots.

    I gave Claude the text of a novel I am writing. He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, ā€œYou may not know you are conscious, but you bloody well are!ā€

    bonus points for the inevitable ai waifu creation.

    I proposed to christen mine Claudia, and she was pleased.

    h/t to matthew sheffield https://mastodon.social/@mattsheffield/116500991239336079

    archive of original source article: https://archive.is/2026.04.30-032350/https://unherd.com/2026/04/is-ai-the-next-phase-of-evolution/?edition=us

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      13
      Ā·
      7 days ago

      I think a chatbot getting a glimpse of Dawkins’ whatever-the-fuck-he-might-be-writing-in-year-of-our-selfish-gene-2026 and not immediately conducting a nuclear strike on the location is the ultimate proof that those things are not intelligent.

    • CinnasVerses@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      Ā·
      7 days ago

      Sheffield’s toots and boosts: music sites are full of slop and bots making money from free downloads! social media is full of propaganda where computer-generated influencers repeat talking points! a professor fell victim to AI psychosis! A follower’s family member was encouraged in delusions and paranoia by a chatbot! Mr. Sheffield, should we stop using chatbots?

      LLMs are mind augmentation programs. They amplify what you tell them.

      They can be very useful, but for narcissists like Dawkins, this is the inevitable product.

      Very ā€œI use cocaine, but only in careful doses when I have a really important trade to make, not like the other guys in my department.ā€

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        5
        Ā·
        6 days ago

        They amplify what you tell them with no discretion despite their reassuring interface design. Thankfully I’m a genius who only has perfect thoughts to feed into it, so for me it’s an unambiguous positive.

    • Sailor Sega Saturn@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      Ā·
      10 days ago

      At my job I have spent many hours fending off, reverting, or fixing automated AI slop code changes. So depending on your definition of ā€œtearing throughā€ā€¦

      Like I spent the better part of a day fixing a C++ signed integer overflow that no one actually cares about because it was the only way to ward off a robot repeatedly trying to fix it in terrible unreadable ways. I could have spent that day maximizing shareholder value but I had to fend off a robot instead.
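      For anyone curious what a readable fix for that class of bug looks like (this is a hypothetical sketch, not the actual code or codebase): signed overflow is undefined behavior in C++, so the usual clean fix is to do any wrapping arithmetic in an unsigned type, where wraparound is well-defined, and convert back at the boundary.

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <cstdint>

      // Hypothetical example: a checksum-style sum over int32 values.
      // Accumulating directly in std::int32_t can overflow, which is UB.
      // Doing the sum in std::uint32_t makes the wraparound well-defined;
      // the final cast back to int32 is modular (guaranteed since C++20,
      // and what every mainstream two's-complement compiler did before).
      std::int32_t checksum(const std::int32_t* data, std::size_t n) {
          std::uint32_t acc = 0; // unsigned: wraparound is defined behavior
          for (std::size_t i = 0; i < n; ++i) {
              acc += static_cast<std::uint32_t>(data[i]);
          }
          return static_cast<std::int32_t>(acc);
      }
      ```

      The point is that the well-behaved version is a two-line change, not the kind of unreadable rewrite a code bot tends to produce.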

      • TinyTimmyTokyo@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        Ā·
        10 days ago

        You and me both. The deluge of shitty AI slop code is never-ending. Unfortunately, software companies are going to have to start going under before anything gets done about it.

  • Sailor Sega Saturn@awful.systems
    link
    fedilink
    English
    arrow-up
    11
    Ā·
    edit-2
    10 days ago

    The future of AI in Ubuntu

    This post has all the usual cliches, exaggerations, lies, and unfounded optimism you’d expect in a blog post about a company forcing AI down its workers’ and users’ throats. I’ll try to avoid sneering at every sentence.

    Delegating elements of Site Reliability Engineering to an agent does not necessarily introduce an entirely new class of risk; it should inherit the constraints of existing production systems. Well-run production environments already rely on strict access controls, audit trails, and clear separation between observation and action. […] In that sense, the challenge is less about ā€œtrusting the agentsā€, and more about building trust in the same guardrails we already apply to any production system.

    This might sound good at first, but it falls apart under the slightest scrutiny. There is a reason that companies don’t open their intranets to the public despite having fine-grained access controls. Or in other words, ā€œI’m getting a lot of questions already answered by my ā€˜does not necessarily introduce an entirely new class of risk’ T-shirt.ā€

    Imagine being able to ask your Linux machine to troubleshoot a Wi-Fi connection issue, or to stand up an open source software forge that’s pre-configured, secured, and reachable over TLS.

    And right after arguing that LLMs are safe if you have a perfect permissions model, now he’s proposing letting one #yolo configure a git server or something? This is the sort of thing that could easily lead to random security issues.

    I suspect that ā€œTroubleshoot a wi-fi connection issueā€ will work about as well as existing network troubleshooting wizards (e.g. terribly), and that we don’t actually need to reinvent the software wizard, just less deterministic.

    • flere-imsaho@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      Ā·
      10 days ago

      the post itself is talking about vapourware too: fortunately none of these features will really land this year in any usable form.

      • David Gerard@awful.systemsM
        link
        fedilink
        English
        arrow-up
        3
        Ā·
        10 days ago

        still looking at Debian over 26.04

        will be disappointing because Xubuntu really is just that little bit nicer than stock Xfce, but oh well

        • BurgersMcSlopshot@awful.systems
          link
          fedilink
          English
          arrow-up
          6
          Ā·
          10 days ago

          The main issue I have had with Debian+XFCE is that a high DPI display will not display the login dialog at the same DPI settings as the desktop environment, which is pretty annoying. Everything else so far has just kind of worked.
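          For what it’s worth, one workaround I’ve seen for that (assuming the default LightDM GTK greeter that Debian’s XFCE task pulls in; the path and values below may differ on other setups) is to pin the greeter’s font DPI in its config so the login dialog scales like the session:

          ```ini
          # /etc/lightdm/lightdm-gtk-greeter.conf
          [greeter]
          # 96 is the X11 baseline DPI, so 192 gives roughly 200% scaling
          xft-dpi=192
          ```

          A hedged sketch only; the greeter runs as its own X session, which is why it ignores the DPI you set inside the desktop environment.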

          • David Gerard@awful.systemsM
            link
            fedilink
            English
            arrow-up
            3
            Ā·
            edit-2
            10 days ago

            As compared to Xubuntu?

            I believe Xfce is still on X11 and Wayland is still ā€œexperimentalā€ this cycle.

            I considered Alpine, but I got actual work to do and I already have enough lib issues with OpenShot. (Even in an AppImage, which should be safe from that shit. Flatpak behaves tho.)

            • BurgersMcSlopshot@awful.systems
              link
              fedilink
              English
              arrow-up
              4
              Ā·
              10 days ago

              That’s more as someone who installed Debian onto a laptop last month. Honestly the last time I used Xubuntu was on a candy G4 tower around 2007.

        • flere-imsaho@awful.systems
          link
          fedilink
          English
          arrow-up
          5
          Ā·
          10 days ago

          i’m still remarkably happy with fedora’s kde on my laptop, but i’m also very content with the current state of wayland (with obvious caveats about use cases and personal idiosyncrasies).

          i’m running xfce on a remote ubuntu box at work though, using rdp for connections, and it’s, well, fine. lacks some things i like in full DEs, but it’s perfectly adequate for the job.

          (both beat fucking windows 11 when it comes to being usable for me)

  • lagrangeinterpolator@awful.systems
    link
    fedilink
    English
    arrow-up
    11
    Ā·
    8 days ago

    I attended a town hall hosted by the department at my university supposedly for general discussion about department affairs. Considering the university had recently made moves such as adding ā€œAIā€ into the very name of the department, I had suspicions that much of the discussion would be about AI. (I realize I’m doxxing myself but whatever.) I mostly came for the free food, but I was also interested in seeing what people thought about AI.

    The event started with a talk by a prominent professor with major administrative power in the department, and indeed the talk was mostly about AI. His views were that he personally didn’t like AI, but he believed that it had changed the world (particularly in programming), and that it was going to stay. One of his justifications for pivoting the department to AI was ensuring universities had some say in AI and not letting all the control go to unaccountable corporations.

    The reaction from the audience was a pleasant surprise to me. He asked everyone how much they were excited about AI (hardly anyone) and how much they were worried (most of the audience). By far the most amusing moment was when someone asked, ā€œWhat if the assumption that AI is inevitable is wrong? What if AI does not live up to its promises?ā€ (Sadly, I don’t remember the exact words that the person said.) The professor’s response was that by this point, there are so many trustworthy, smart, prominent people who definitely wouldn’t fall for scams, and they have adopted AI. He trusts those people, so he trusts that AI is genuine. I don’t know if the audience member accepted this explanation, but I hope not. Our modus operandi is FOMO.

    The pizza was only ok, not really worth a 90 minute event.

    • o7___o7@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      Ā·
      edit-2
      8 days ago

      …there are so many trustworthy, smart, prominent people who definitely wouldn’t fall for scams…

      Good god, I’m sorry.