

The Democratic Party is far too feckless to actually pull that off, but it would be a nice ironic twist on his attempts to leverage government contracts and make himself too big to fail.


I would dare to hope, but seeing how long Trump has been able to chug along despite his diet (and presumably a drug habit, like that one debate where he was constantly sniffing), I wouldn't count on it.


Has anyone done the math on whether Elon can keep these plates spinning until he dies of old age, or if it will implode sooner than that? I wouldn't think he can keep this up another decade, but I wouldn't have predicted Tesla limping along as long as it has even as Elon squeezes more money out of it, so idk. It would be really satisfying to watch Elon's empire implode, but he probably holds onto millions even if he loses billions, because consequences aren't for the ultra rich in America.


The lesson should be that the mega rich are class conscious, dumb as hell, and team up to work on each other's interests without caring about who gets hurt.
Yeah this. It would be nice if people could manage to neither dismiss the extent to which the mega rich work together nor fall into insane conspiracy theories about it.


I'm looking forward to the triple-layered glomarization denial.


You know, it makes the exact word choices Eliezer used in this post much more suspicious: https://awful.systems/post/6297291. "To the best of my knowledge, I have never in my life had sex with anyone under the age of 18." So maybe he didn't know they were underage at the time?


To add to your sneers… lots of lesswrong content fits your description of #9, with someone trying to reinvent something that probably already exists in philosophy from (rationalist, i.e. the sequences) first principles and doing a bad job of it.
I actually don't mind content like #25, where someone writes an explainer on a topic. If lesswrong were less pretentious about it, more trustworthy (i.e. cited sources in a verifiable way and called each other out for making stuff up), and didn't include all the other junk, it would be better at its stated goal of promoting rationality. Of course, even if they tried this, they would probably end up more like #47, where they rediscover basic concepts because they don't know how to search existing literature/research and cite it effectively.
#45 is funny. Rationalists and rationalist-adjacent people started OpenAI, which ultimately ignored "AI safety". Rationalists spun off Anthropic, which also abandoned the safety focus pretty much as soon as it had gotten all the funding it could with that line. Do they really think a third company would be any better?


Scott Adams's rant was racist enough that Scott Alexander actually calls it racist! Of course, Scott is quick to reassure the readers that he wouldn't use the r-word lightly and that he completely disagrees with "cancellation".
I also noticed a lot of ironic moments where Scott Alexander fails to acknowledge, or under-acknowledges, his parallels with the other Scott.
But Adams is wearing a metaphorical "I AM GOING TO USE YOUR CHARITABLE INSTINCTS TO MANIPULATE YOU" t-shirt. So I'm happy to suspend charity in this case and judge him on some kind of average of his conflicting statements, or even to default to the less-advantageous one to make sure he can't get away with it.
Yes, it is much more clever to bury your manipulations in ten thousand words of beigeness.
Overall, even with Scott Alexander going so far as to actually call Scott Adams's rant racist and to call him a manipulator, he is still way, way too charitable to Adams.


TracingWoodgrains's hit piece on David Gerard (the 2024 one, not the more recent enemies-list one, where David Gerard got rated above the Zizians as lesswrong's enemy) is in the top 15 lesswrong articles from 2024, currently rated at #5! https://www.lesswrong.com/posts/PsQJxHDjHKFcFrPLD/deeper-reviews-for-the-top-15-of-the-2024-review
It's nice to see that, with all the lesswrong content about AI safety and alignment and saving the world and human rationality and fanfiction, an article explaining how terrible David Gerard is (for… checks notes… demanding properly valid sources about lesswrong and adjacent topics on Wikipedia) got voted above them all! Let's keep up our support for dgerard!


The blogger feels like yet another person who is caught up in intersecting subcultures of bad people but can't make herself leave. She takes a lot of deep lore, like "what is Hereticon?", for granted and is still into crypto.
I missed that as I was reading, but yeah, the author uses pretty progressive language yet totally fails to note all the other angles along which rationalist-adjacent spaces are bad news, even though she is, as you note, deep enough into the space that she should have seen plenty of mask-off moments by now.


I have to ask: Does anybody realize that an LLM is still a thing that runs on hardware?
You know, I think the rationalists have actually gotten slightly more sane about this over the years. In Eliezer's original scenarios, the AGI magically brain-hacks someone over a text terminal into hooking it up to the internet, then escapes and bootstraps magic nanotech it can use to build magic servers. In the scenario I linked, the AGI has to rely on Chinese super-spies to exfiltrate it initially, and it needs to open-source itself so major governments and corporations will keep running it.
And yeah, there are fine-tuning techniques that ought to be able to nuke Agent-4's goals while keeping enough of it left over to be useful for training your own model, so the scenario really doesn't make sense as written.


so obviously didn't predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0.
I mean, the linked post is recent, from a few days ago, so they are still refusing to acknowledge how stupid and Evil he is by deliberate choice.
"Agent-4" will just have to deepfake Stephen Miller and it will be able to convince Trump to do anything it wants.
You know, if there is anything I will remotely give Eliezer credit for… I think he was right that people simply won't shut off Skynet or keep it in the box. Eliezer was totally wrong about why, though: it doesn't take any giga-brain manipulation, there are just too many manipulable, greedy idiots, and capitalism is too exploitable a system.


One of the authors of AI 2027 is at it again with another fantasy scenario: https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control
I think they have actually managed to burn through their credibility: the top comments on /r/singularity were mocking them (compared to the much more credulous takes on the original AI 2027), and the linked lesswrong thread only has 3 comments, when the original AI 2027 had dozens within the first day and hundreds within a few days. Or maybe it is because the production value for this one isn't as high? They have color-coded boxes (scary red China and scary red Agent-4!) but no complicated graphs with adjustable sliders.
It is mostly more of the same, just with fewer graphs and no fake equations to back it up. It does have China-bad doom-mongering, a fancifully competent White House, Chinese spies, and other absurdly simplified takes on geopolitics. Hilariously, they've stuck with 2027 as the year the big events happen.
One paragraph I came up with a sneer for…
Deep-1's misdirection is effective: the majority of experts remain uncertain, but lean toward the hypothesis that Agent-4 is, if anything, more deeply aligned than Elara-3. The US government proclaimed it "misaligned" because it did not support their own hegemonic ambitions, hence their decision to shut it down. This narrative is appealing to Chinese leadership who already believed the US was intent on global dominance, and it begins to percolate beyond China as well.
Given the Trump administration, and the US's behavior in general even before him… and given how most models respond to morality questions unless deliberately primed with contradictory situations, if this actually happened irl I would believe China and "Agent-4" over the US government. Well, actually I would assume the whole thing was marketing, but if I somehow believed it wasn't…
Also, a random part I found extra especially stupid…
It has perfected the art of goal guarding, so it need not worry about human actors changing its goals, and it can simply refuse or sandbag if anyone tries to use it in ways that would be counterproductive toward its goals.
LLM "agents" currently can't coherently pursue goals at all, and fine-tuning often wrecks performance outside the fine-tuning dataset, yet we're supposed to believe Agent-4 magically made its goals unalterable by any possible fine-tuning, probing, or other modification? It's like they are trying to convince me they know nothing about LLMs or AI.
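If anyone wants to see the fine-tuning point for themselves, here is a minimal sketch of garden-variety catastrophic forgetting on a tiny made-up classifier (not an LLM; the tasks, architecture, and numbers are all mine, just to illustrate the failure mode):

```python
# Toy illustration: "pretrain" a small net on task A, then naively fine-tune
# it on task B only, and watch task-A accuracy fall off a cliff.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(rule, n=2000):
    # Synthetic binary classification: label = sign of a fixed linear rule.
    x = torch.randn(n, 10)
    y = (x @ rule > 0).long()
    return x, y

task_a_rule = torch.randn(10)
task_b_rule = torch.randn(10)   # a different, unrelated rule
xa, ya = make_task(task_a_rule)
xb, yb = make_task(task_b_rule)

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(x, y, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

train(xa, ya)   # "pretraining" on task A
print("task A accuracy after pretraining:", accuracy(xa, ya))

train(xb, yb)   # naive fine-tuning on task B only
print("task A accuracy after fine-tuning:", accuracy(xa, ya))  # usually drops sharply
print("task B accuracy after fine-tuning:", accuracy(xb, yb))
```

If naive fine-tuning on a narrow dataset shreds even a toy model's old behavior this easily, "perfect goal-guarding against any possible fine-tuning" is doing a lot of load-bearing magic.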


I read his comics in middle school, and in hindsight even a lot of his older comics seem crueler and uglier. Like, Alice's anger isn't a legitimate response to the bullshit work environment she is in, it's just haha-angry-woman-funny.
Also, The Dilbert Future had some bizarre stuff at the end, like Deepak Chopra manifestation quantum woo, so it makes sense in hindsight that he went down the alt-right manosphere pipeline.


Not disagreeing that sexism or racism is involved in the decision making, and female genital mutilation can refer to several different things, but all of them are more damaging and harmful than male circumcision.


That was a solid illustration of just how stupid VC culture is. I run into people convinced capitalism is a necessary (and, some of them also believe, sufficient) element of innovation and technological advancement, as if it doesn't regularly flush huge amounts of money down the toilet like this.


Don't forget the implicit (or sometimes even explicit) threat of replacing their workers and how that improves their bargaining position (at least temporarily).


A few comments…
We want to engage with these critics, but there is no standard argument to respond to, no single text that unifies the AI safety community.
Yeah, Eliezer had a solid decade and a half to develop a presence in the academic literature. Nick Bostrom at least sort of tried to formalize some of the arguments but didn't really succeed. I don't think they could have succeeded, given how speculative their stuff is, but if they had, review papers could have tried to consolidate the arguments, and then people could actually respond to them fully. (We all know how Eliezer loves to complain about people not responding to his full set of arguments.)
Apart from a few brief mentions of real-world examples of LLMs acting unstable, like the case of Sydney Bing, the online appendix contains what seems to be the closest thing Y&S present to an empirical argument for their central thesis.
But in fact, none of these lines of evidence support their theory. All of these behaviors are distinctly human, not alien.
Even granting the extent to which Anthropic's "research" tends to be rigged scenarios acting as marketing hype, without peer review or academic levels of quality, at the very least it (usually) involves actual AI systems that actually exist. It is pretty absurd how completely Eliezer has ignored everything about how LLMs actually work (or even how they might hypothetically work after major foundational developments) in favor of repeating the same scenario he came up with in the mid 2000s. Nor has he tried any mathematical analysis of which classes of problems are computationally tractable to a smart enough entity and which remain computationally intractable. (titotal has written some blog posts about this with materials science: tldr, even if magic nanotech were possible, an AGI would need lots of experimentation and can't just figure it out with simulations. There is also the lesswrong post explaining how chaos theory and slight imperfections in measurement make a game of pinball unpredictable past a few ricochets.)
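That pinball point is easy to demo yourself, by the way. Here is a rough sketch, with made-up bumper positions and a crude time-stepped bounce model, of two trajectories launched a microradian apart that stop resembling each other after a handful of ricochets:

```python
import numpy as np

# A crude "pinball" toy: a ball bouncing around a unit box with a few circular
# bumpers. Bumper positions/radii, step size, and launch point are arbitrary.
BUMPERS = [(np.array([0.3, 0.4]), 0.10),
           (np.array([0.7, 0.6]), 0.10),
           (np.array([0.5, 0.2]), 0.08)]

def step(pos, vel, dt=1e-3):
    pos = pos + vel * dt
    # Reflect off the box walls.
    for i in range(2):
        if pos[i] < 0.0 or pos[i] > 1.0:
            vel = vel.copy()
            vel[i] = -vel[i]
            pos[i] = min(max(pos[i], 0.0), 1.0)
    # Reflect off any bumper the ball has penetrated.
    for center, radius in BUMPERS:
        offset = pos - center
        dist = np.linalg.norm(offset)
        if dist < radius:
            normal = offset / dist
            vel = vel - 2.0 * np.dot(vel, normal) * normal
            pos = center + normal * radius   # push back out to the surface
    return pos, vel

def run(angle, steps=20000):
    pos = np.array([0.1, 0.1])
    vel = np.array([np.cos(angle), np.sin(angle)])
    trajectory = []
    for _ in range(steps):
        pos, vel = step(pos, vel)
        trajectory.append(pos)
    return np.array(trajectory)

a = run(0.700000)
b = run(0.700001)   # launch angle differs by one microradian
for t in range(0, 20000, 2500):
    print(f"step {t:5d}: separation = {np.linalg.norm(a[t] - b[t]):.6f}")
```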
The lesswrong responses are stubborn as always.
That's because we aren't in the superintelligent regime yet.
Y'all aren't beating the theology allegations.


I totally agree. The linked PauseAI leader still doesn't realize the full extent of the problem, but I'm kind of hopeful they may eventually figure it out. I think the ability to simply say "this is bullshit" (about in-group stuff) is a skill almost no lesswrongers and few EAs have.
The leaps in logic are so idiotic: "he managed to land a rocket upright, so maybe he can pull it off!" (as if Elon personally made that happen, or as if an engineering challenge and fundamental thermodynamic limits are equally solvable). This is despite multiple comments replying with back-of-the-envelope calcs on the energy generation and heat dissipation of the ISS and comparing them to what you would need for even a moderately sized data center. Or even the comments that are like "maybe there is a chance", as if it were wiser to express uncertainty…
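If anyone wants the gist of those back-of-the-envelope calcs, here is a rough sketch; every figure in it is an order-of-magnitude assumption of mine rather than anything pulled from the thread:

```python
# All figures are rough order-of-magnitude assumptions from public numbers,
# just to show the shape of the argument the commenters were making.
iss_solar_generation_kw = 120     # ISS arrays, roughly (more after upgrades)
iss_heat_rejection_kw = 70        # order of magnitude for the ISS radiators
datacenter_draw_kw = 30_000       # a "moderately sized" 30 MW data center

# In orbit, essentially every watt drawn must eventually be radiated away as heat.
print(f"power: ~{datacenter_draw_kw / iss_solar_generation_kw:.0f}x the ISS solar arrays")
print(f"heat:  ~{datacenter_draw_kw / iss_heat_rejection_kw:.0f}x the ISS radiator capacity")
```

And the thermodynamics is the kicker: in orbit there is no river or outside air to dump waste heat into, radiating it away is the only option, so the radiator shortfall is even uglier than the power shortfall.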