

Don't forget the implicit (or sometimes even explicit) threat of replacing their workers and how that improves their bargaining position (at least temporarily).


A few comments…
We want to engage with these critics, but there is no standard argument to respond to, no single text that unifies the AI safety community.
Yeah, Eliezer had a solid decade and a half to develop a presence in academic literature. Nick Bostrom at least sort of tried to formalize some of the arguments but didn't really succeed. I don't think they could have succeeded, given how speculative their stuff is, but if they had, review papers could have tried to consolidate them and then people could actually respond to the arguments fully. (We all know how Eliezer loves to complain about people not responding to his full set of arguments.)
Apart from a few brief mentions of real-world examples of LLMs acting unstable, like the case of Sydney Bing, the online appendix contains what seems to be the closest thing Y&S present to an empirical argument for their central thesis.
But in fact, none of these lines of evidence support their theory. All of these behaviors are distinctly human, not alien.
Even granting that Anthropic's "research" tends to be rigged scenarios acting as marketing hype without peer review or academic levels of quality, at the very least they (usually) involve actual AI systems that actually exist. It is pretty absurd the extent to which Eliezer has ignored everything about how LLMs actually work (or even hypothetically might work with major foundational developments) in favor of repeating the same scenario he came up with in the mid-2000s. Nor has he even attempted mathematical analysis of which classes of problems are computationally tractable for a smart enough entity and which remain computationally intractable (titotal has written some blog posts about this for materials science; tldr, even if magic nanotech were possible, an AGI would need lots of experimentation and can't just figure it out with simulations. Or the lesswrong post explaining how chaos theory and slight imperfections in measurement make a game of pinball unpredictable past a few ricochets.)
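To make that last pinball point concrete, here is a toy sketch (my own illustration, not from titotal or the linked lesswrong post; the map and numbers are arbitrary): two runs of a textbook chaotic system that start a hair's width apart disagree completely within a few dozen steps, which is why tiny measurement errors swamp any prediction after a few ricochets.

```python
# Toy illustration (assumed example): in a chaotic system, a measurement error
# of one part in ten billion blows up into a macroscopic disagreement after a
# few dozen steps.
def logistic(x, r=4.0):
    # The logistic map at r=4 is a standard textbook example of chaos.
    return r * x * (1 - x)

a, b = 0.4, 0.4 + 1e-10  # two "measurements" of the same starting state
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.1:
        print(f"trajectories diverge badly after {step} steps")
        break
```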
The lesswrong responses are stubborn as always.
That's because we aren't in the superintelligent regime yet.
Y'all aren't beating the theology allegations.


I totally agree. The linked PauseAI leader still doesn't realize the full extent of the problem, but I'm kind of hopeful they may eventually figure it out. I think the ability to simply say "this is bullshit" about in-group stuff is a skill almost no lesswrongers and few EAs have.


PauseAI Leader writes a hard takedown of the EA movement: https://forum.effectivealtruism.org/posts/yoYPkFFx6qPmnGP5i/thoughts-on-my-relationship-to-ea-and-please-donate-to
They may be a doomer with some crazy beliefs about AI, but they've accurately noted that EA is pretty firmly captured by Anthropic and the LLM companies and can't effectively advocate against them. And they accurately call out the false-balance style and unevenly enforced tone/decorum norms that stifle the EA and lesswrong forums. Some choice quotes:
I think, if it survives at all, EA will eventually split into pro-AI industry, who basically become openly bad under the figleaf of Abundance or Singulatarianism, and anti-AI industry, which will be majority advocacy of the type we're pioneering at PauseAI. I think the only meaningful technical safety work is going to come after capabilities are paused, with actual external regulatory power. The current narrative (that, for example, Anthropic wishes it didn't have to build) is riddled with holes and it will snap. I wish I could make you see this, because it seems like you should care, but you're actually the hardest people to convince because you're the most invested in the broken narrative.
I don't think talking with you on this forum with your abstruse culture and rules is the way to bring EA's heart back to the right place
You've lost the plot, you're tedious to deal with, and the ROI on talking to you just isn't there.
I think you're using specific demands for rigor (rigor feels virtuous!) to avoid thinking about whether Pause is the right option for yourselves.
Case in point: EAs wouldn't come to protests, then they pointed to my protests being small to dismiss Pause as a policy or messaging strategy!
The author doesn't really acknowledge how the problems were always there from the very founding of EA, but at least they see the problems as they are now. And if they succeed, maybe they will help slow the waves of slop and of capital replacing workers with non-functioning LLM agents, so I wish them the best.


I really don't know how he can fail to see the irony or hypocrisy of complaining about people trading made-up probabilities, but apparently he has had that complaint about P(doom) for a while. Maybe he failed to write a callout post about it because any criticism of P(doom) could also be leveled against the entire rationalist project of trying to assign probabilities to everything with poor justification.


Last week I posted about Eliezer hating on OpenPhil for having AGI timelines that are too long. He has continued to rage in the comments and replies to his callout post. It turns out he also hates AI 2027!
I looked at "AI 2027" as a title and shook my head about how that was sacrificing credibility come 2027 on the altar of pretending to be a prophet and picking up some short-term gains at the expense of more cooperative actors. I didn't bother pushing back because I didn't expect that to have any effect. I have been yelling at people to shut up about trading their stupid little timelines as if they were astrological signs for as long as that's been a practice (it has now been replaced by trading made-up numbers for p(doom)).
When we say it, we are sneering, but when Eliezer calls them stupid little timelines and compares them to astrological signs, it is a top-quality lesswrong comment! Also, a reminder that I don't think anyone here needs: Eliezer is a major contributor to the rationalist attitude of venerating super-forecasters and super-predictors and promoting the idea that rational, smart, well-informed people should be able to put together super-accurate predictions!
So to recap: long timelines are bad and mean you are a stuffy bureaucracy obsessed with credibility, but short timelines are also bad and will burn the doomers' credibility; clearly you should just agree with Eliezer's views, which don't include any hard timelines or P(doom)s! (As cringey as they are, at least the AI 2027 authors are committing to predictions in a way that can be falsified.)
Also, the mention of sacrificing credibility makes me think Eliezer is deliberately playing the game of avoiding hard predictions to keep the grift going (as opposed to deluding himself about reasons not to state a hard timeline or at least put out some firm P()s).


It sounds like part, maybe even most, of the problem is self-inflicted by the traps of the VC model and the VCs themselves? I say we keep blocking ads and migrating platforms until VCs learn not to fund stuff with the premise of "provide a decent service until we've captured enough users, then get really shitty".


What value are you imagining the LLM adding? LLMs don't have a rich internal model of the scientific field, so they can't evaluate a paper's novelty or contribution. They could maybe spot some spelling or grammar errors, but so can more reliable algorithms. I don't think they could accurately spot whether a paper is basically a copy or redundant, even if given RAG over all the past papers submitted to the conference. A paper carefully building on a previous paper and a paper blindly copying a previous paper would look about the same to an LLM.
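For what it's worth, here is a minimal sketch of the "more reliable algorithms" point for the narrow copy-detection task (the corpus, texts, and threshold below are hypothetical placeholders, not anyone's real pipeline): plain TF-IDF cosine similarity flags near-verbatim overlap with past submissions without any LLM in the loop, and the genuinely hard judgment call (careful extension versus blind copy) is exactly what neither approach delivers.

```python
# Hypothetical sketch: flag near-duplicate submissions with TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_papers = [
    "full text of previously accepted paper one (placeholder)",
    "full text of previously accepted paper two (placeholder)",
]
submission = "full text of the new submission (placeholder)"

# Vectorize past papers plus the new submission, then compare the submission
# against every past paper.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(past_papers + [submission])
scores = cosine_similarity(tfidf[len(past_papers)], tfidf[:len(past_papers)])[0]
for idx, score in enumerate(scores):
    if score > 0.8:  # arbitrary "suspiciously similar" threshold
        print(f"submission overlaps heavily with past paper #{idx} (cosine {score:.2f})")
```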


I kinda half agree, but I'm going to push back on at least one point. Originally most of Reddit's moderation was provided by unpaid volunteers, with paid admins only acting as a last resort. I think this is probably still true even after they purged a bunch of mods who were mad Reddit was being enshittified. And the official paid admins were notoriously slow at purging some really blatantly over-the-line content, like the jailbait subreddit or the original Donald Trump subreddit. So the argument is that Reddit benefited, and still benefits, heavily from that free moderation, and the content generated and provided by users is itself valuable, so acting like all Reddit users are simply entitled free riders isn't true.


Going from lazy, sloppy human reviews to absolutely no humans is still a step down. LLMs don't have the capability to generalize outside their (admittedly enormous) training data, so cutting-edge research is one of the worst use cases for them.


Yeah, I think this is an extreme example of a broader rationalist trend of taking their weird in-group beliefs as givens and missing how many people disagree. Most AI researchers do not share their short timelines: the median guess for AGI among AI researchers (a population that includes their in-group and people who have bought the boosters' hype) is 2050. Eliezer apparently assumes short timelines are self-evident from ChatGPT (but hasn't actually committed to one or a hard date publicly).


The fixation on their own in-group terms is so cringe. Also, I think "shoggoth" is kind of a dumb term for LLMs. Even accepting the premise that LLMs are some deeply alien process (and not a very wide but shallow pool of different learned heuristics), shoggoths weren't really that bizarrely alien: they broke free of their original creators' programming and didn't want to be controlled again.


Eliezer is mad OpenPhil (an EA organization, now called Coefficient Giving)… advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn't weight MIRI's views highly enough? And did so for epistemically invalid reasons? IDK, this post is a bit more of a rant and less clear than classic Sequences content (but is par for the course for the last 5 years of Eliezer's output). For us sane people, AGI by 2050 is still a pretty radical timeline; it just disagrees with Eliezer's belief in imminent doom. Also, it is notable that Eliezer has actually avoided publicly committing to consistent timelines (he actually disagrees with efforts like AI 2027) other than a vague certainty that we are near doom.
Some choice comments:
I recall being at a private talk hosted by ~2 people that OpenPhil worked closely with and/or thought of as senior advisors, on AI. It was a confidential event so I can't say who or any specifics, but they were saying that they wanted to take seriously short AI timelines
Ah yes, they were totally secretly agreeing with your short timelines but couldn't say so publicly.
Open Phil decisions were strongly affected by whether they were good according to worldviews where "utter AI ruin" is >10% or timelines are <30 years.
OpenPhil actually did have a belief in a pretty large possibility of near-term AGI doom; it just wasn't high enough or acted on strongly enough for Eliezer!
At a meta level, "publishing, in 2025, a public complaint about OpenPhil's publicly promoted timelines and how those may have influenced their funding choices" does not seem like it serves any defensible goal.
Lol, someone noting that Eliezer's callout post isn't actually doing anything useful towards Eliezer's goals.
It's not obvious to me that Ajeya's timelines aged worse than Eliezer's. In 2020, Ajeya's median estimate for transformative AI was 2050. […] As far as I know, Eliezer never made official timeline predictions
Someone actually noting that AGI hasn't happened yet, so you can't say a 2050 estimate is wrong! And they also correctly note that Eliezer has been vague on timelines (rationalists are theoretically supposed to be preregistering their predictions in formal statistical language so that they can get better at predicting and people can calculate their accuracy… but we've all seen how that went with AI 2027. My guess is that, at least on a subconscious level, Eliezer knows firmer near-term predictions would ruin the grift eventually.)


Image and video generation AI can't create good, novel art, but it can serve up mediocre remixes of all the standard stuff with only minor defects an acceptable percentage of the time, and that is a value proposition soulless corporate executives are more than eager to take up. And that is just a bonus; I think your fourth and final point is Disney's real motive: establish a monetary value for their IP served up as slop, so they can squeeze other AI providers for money. Disney was never an ally in this fight.
The fact that Sam was slippery enough to finagle this deal makes me doubt analysts like Ed Zitron… they may be right from a rational perspective, but if Sam can secure a few major revenue streams and build a moat through nonsense like this Disney deal… Still, it will be tough even if he has another dozen tricks like this one up his sleeve; smaller companies without all of OpenAI's debts and valuation can undercut his prices.


the actual fear of "going mad" seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us
I think he actually is failing to handle the stress he has inflicted on himself, and that's why his latest few lesswrong posts had really stilted, poorly written parables about chess and about alien robots visiting Earth that were much worse than the classic Sequences parables. It's also why he has basically given up trying to think of anything new and instead keeps playing the greatest lesswrong hits on repeat, as if that would convince anyone who isn't already convinced.


Yud, when journalists ask you "How are you coping?", they don't expect you to be "going mad facing apocalypse"; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that to give a message of hope or even of desperation. They are trying to engage with you as a real human being, not as a novel character.
I think the way he reads the question is telling on himself. He knows he is mounting a half-assed response to the impending apocalypse (going on a podcast tour, making even lower-quality lesswrong posts, making unworkable policy proposals, and deferring to the lib-centrist deep down inside himself by rejecting violence or even direct action against the AI companies that are hurling us towards an apocalypse). He knows a character from one of his stories would have a much cooler response, but that might end up getting him labeled a terrorist and sent to prison or whatever, so instead he rationalizes his current set of actions. This is in fact insane by rationalist standards, so when a journalist asks him a harmless question it sends him down a long trail of rationalizations that includes failing to empathize with the journalist and understand the question.


One part in particular pissed me off for being blatantly the opposite of reality:
and remembering that it's not about me.
And so similarly I did not make a great show of regret about having spent my teenage years trying to accelerate the development of self-improving AI.
Eliezer literally has multiple Sequences posts about his foolish youth, where he nearly destroyed the world trying to jump straight to inventing AI instead of figuring out "AI Friendliness" first!
I did not neglect to conduct a review of what I did wrong and update my policies; you know some of those updates as the Sequences.
Nah, you learned nothing from what you did wrong, and your Sequences posts were the very sort of self-aggrandizing bullshit you're mocking here.
Should I promote it to the center of my narrative in order to make the whole thing be about my dramatic regretful feelings? Nah. I had AGI concerns to work on instead.
Eliezer's "AGI concerns to work on" amounted to a plan for him, personally, to lead a small team that would solve meta-ethics and figure out how to implement that meta-ethics in a perfectly reliable way in an AI that didn't exist yet (for which no theoretical approach existed yet, for which not even an inkling of how to make traction on a theoretical approach existed yet). The very plan Eliezer came up with was self-aggrandizing bullshit that made everything about Eliezer.


I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren't really so relevant anymore; they served their role in the early incubation.


My Poe detection wasn't sure until the last sentence used the "still early" and "inevitably" lines. Nice.
That was a solid illustration of just how stupid VC culture is. I run into people convinced capitalism is a necessary (and, some of them also believe, sufficient) element of innovation and technological advancement, as if it didn't regularly flush huge amounts of money down the toilet like this.