

The difference was that in 2020 he fired the PR agents who had stopped him from ruining their carefully crafted advertising. He was never that person.


Perhaps something like this: https://lemmy.world/post/42528038/22000735
Deferring responsibility for risk and compliance obligations to AI is bound to lead to some kind of tremendous problem, that’s a given. But the EU has been pretty keen lately on embedding requirements into their newer digital-realm laws about giving market regulators access to internal documents on request.
This is not to suggest it’s anywhere close to a certainty that an Enron will happen, though. There is still the exceptionally large problem of company executives being part of the power structures that selectively decide who to prosecute, if their lobbyists haven’t already succeeded in watering the legislation down to be functionally useless before it comes into force. So it will take a huge fuck up and public pressure for any of the countries to make good on their own rules.
Given that almost all the media heat from the Trump-Epstein files has been directed at easy target single public personalities, completely ignoring the obvious systemic corruption by multiple corporate entities, I don’t have high hopes for that part. But if the impending fuck up and scale of noncompliance is big enough, there’s a chance there will be audits and EU courts involved.


It’s a wonder people haven’t started throwing water balloons filled with mud and flour at the cameras. Perhaps he should be grateful that’s not a trend?


I read both of these and what struck me was how both studies felt remarkably naive. I found myself thinking: “there’s no way the authors have any background in the humanities”. Turns out there are two authors and, lo and behold, both have computer science degrees. This might explain why they seem somehow incredulous at the results - they’ve approached the problem as evaluating a system’s fitness in a vacuum.
But it’s not a system in a vacuum. It’s a vacuum that has sucked up our social system, sold to bolster the social standing of the heads of a social construct.
Had they looked at the context of how AI has been marketed, as an authoritative productivity booster, they might have had some idea why both disempowerment and reduced mastery could be occurring: The participants were told to work fast and consult the AI. What a shock that people took the responses seriously and didn’t have time to learn!
I’d ask why Anthropic had computer scientists conducting sociological research, but I assume this output has just been published to assuage criticism of their trust and safety practices. The final result will probably be adding another line of ‘if query includes medical term then print “always ask a doctor first”’ to the system prompt.
This constant vacillation between “it’s a revolution and changes our entire reality!” and “you can’t trust it and you need to do your own research” from the AI companies is fucking tiresome. You can’t have it both ways.
Same for Japan. No chance they’re wearing full hiking boots or sneakers inside the house in Japan - the shoe cabinet is built in right next to the front door of houses, tiny apartments, temples, many restaurants, etc. I assume the schools still do too.


I took a brief look at one and it seems they may have learnt their lesson from the first time around, unfortunately.


Apparently he’s a Quaker, so maybe that’s how the euthanasia stance can pass muster. But Quakerism might also make even less sense with his views on race? I don’t know enough about the reality of Quakerism to say.
Also, it looks like Harris deliberately side-stepped the dinner bait, but I don’t know how much of that was because of Chomsky’s presence. Epstein tried again a year later without the Chomsky attendee name-drop, but Harris might have just not replied.
At least there are no surprises with Dawkins; even his sleazy friend Brockman seemingly finds him tiring.
Glib jibes aside, I haven’t been able to bring myself to look at many of the docs that aren’t just quasi-celeb emails, the few I did see were far too much for me. I’m horrified at nearly everyone from all ideological stances on a number of different levels I never considered. I can only hope the remaining victims someday are able to find some peace, and some kind of huge systemic reform can come from this. What a vile world we live in.


So Krauss tried to introduce Joe Rogan to Epstein
But Rogan may have been unwilling to do so
How is it Joe Rogan is (possibly) the smartest person in this situation?


Who needs pure AI model collapse when you can have journalists give it a more human touch? I caught this snippet from the Australian ABC about the latest Epstein files drop

The Google AI summary does indeed highlight Boris Nikolić the fashion designer if you search for only that name. But I’m assuming this journalist was using ChatGPT, because if you see the Google summary, it very prominently lists his death in 2008. And it’s surprisingly correct! A successful scraping of Wikipedia by Gemini, amazing.
But the Epstein email was sent in 2016.
Does the journalist perhaps think it more likely to be the Boris Nikolić who is the biotech VC, former advisor to Bill Gates and named in Epstein’s will as the “successor executor”? The info is literally all in the third Google result, even in the woeful state of modern Google. Pushed past the fold by the AI feature about the wrong guy, but not exactly buried enough for a journalist to have any excuse.
Excellent job on taking care of Lester. I can tell he’s in caring hands and I hope you both have many wonderful (and URTI-free, fingers crossed for that) years together.
I’d say never feel silly about a vet visit. Even if why you booked it is no longer an issue (which is definitely something that can and does happen for any pet owner), you can always use the time to pick their brains, learn new things and build a good relationship with them.
But… Where are 102 and 103 then? Are they on a separate street?
Oh wait, I get it now. It’s a weird choice but ok. Where I live we just subdivide by adding letters. E.g. 20 subdivides and becomes 20 and 20A.


That depends on if you consider the “inferior” to be human, if they’re even still alive after the eugenics part.


In retrospect the word quarterlies is what I should have chosen for accuracy, but I’m glad I didn’t purely because I wouldn’t have then had your vivid hog simile.


Amazon’s latest round of 16k layoffs for AWS was called “Project Dawn” internally, and the public line is that the layoffs are because of increased AI use. AI has become useful, but as a way to conceal business failure. They’re not cutting jobs because their financials are in the shitter, oh no, it’s because they’re just too amazing at being efficient. So efficient they sent the corporate fake condolences email before informing the people they’re firing, referencing a blog post they hadn’t yet published.
It’s Schrödinger’s Success. You can neither prove nor disprove the effects of AI on the decision, or whether the layoffs are an indication of good management or fundamental mismanagement. And the media buys into it with headlines like “Amazon axes 16,000 jobs as it pushes AI and efficiency” that are distinctly ambivalent on how 16k people could possibly have been redundant in a tech company that’s supposed to be a beacon of automation.


Now I’m curious what replacing the word “girl” with “boy” in the same prompt would return. Would they also be hypersexualized from the inclusion of “stockings” and “braids” which can just as easily be fetish categories as innocuous physical descriptions, or is it mostly the female dataset that puts it into the porn realm?
Terrorism seems cheap when you put it that way


I love how they include a “blast radius” summary for each. What a great little website!


Careful of your eyes! I’m pretty sure you need a special filter or telescope for the sun
I assumed billionaires could afford better signs. Were all her EAs on leave that week?