Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


Thank you for providing some actual domain experience to ground my idle ramblings.
I wonder if part of the reason why so many high-profile intellectuals in some of these fields are so prone to getting sniped by the confabulatron is an unwillingness to acknowledge (either publicly or in their own heart) that "random bullshit go" is actually a very useful strategy. It reminds me of the way that writers will talk about the value of just getting words on the page, because it's easier to replace them with better words than to create perfection ex nihilo, or the rubber duck method of troubleshooting, where just stepping through the problem out loud forces you to organize your thoughts in a way that can make the solution more readily apparent. It seems like at least some kinds of research are also this kind of process of analysis and iteration, as much as if not more than raw creation and insight.
I have never met Donald Knuth, and don't mean to impugn his character here, even as I'm basically asking if he's too conceited to properly understand what an LLM is. But I think of how people talk about science and scientists and the way it gets romanticized (see also Iris Meredith's excellent piece on "warrior culture" in software development), and it just doesn't fit a field that can see meaningful progress from throwing shit at the wall to see what sticks. A lot of the discourse around art and artists is more willing to acknowledge this element of the creative process, and that might explain their greater ability and willingness to see the bullshit faucet for what it is.

Maybe because science and engineering have stricter and more objective pass/fail criteria (you can argue about code quality just as much as the quality of a painting, but unlike a painting, either the program runs or it doesn't; visual art doesn't generally have to worry about a BSOD), there isn't the same openness to acknowledging that the affirmative results you get from an LLM are still just random bullshit. I can imagine the argument being: "The things we're doing are very prestigious and require great intelligence and other things that offer prestige and cultural capital. If 'random bullshit go' is often a key part of the process, then maybe it doesn't need as much intelligence and doesn't deserve as much prestige. Therefore, if this new tool can be at all useful in supplementing or replicating part of our process, it must be using intelligence, and maybe it deserves some of the same prestige that we have."
I'd say that the great problems that last for decades do not fall purely to random bullshit, and require serious advances in new concepts and understanding. But even then, the romanticized warrior-culture view is inaccurate. It's not like some big brain genius says "I'm gonna solve this problem" and comes up with big brain ideas that solve it. Instead, a big problem is solved after people make tons of incremental progress by trying random bullshit, and then someone realizes that the tools are now good enough to solve the big problem. A better analogy than the Good Will Hunting genius is picking a fruit: you wait until it is ripe.
But math/CS research is not just about random bullshit go. The truly valuable part is theory and understanding, which comes from critically evaluating the results of whatever random bullshit one tries. Why did idea X work well with Y but not so well with Z, and where else could it work? So random bullshit go is a necessary part of the process, but I'd say research has value (and prestige) because of the theory that comes from people thinking about it critically. Needless to say, LLMs are useless at this. (In the Knuth example, the AI didn't even prove that its construction worked.)
I think intelligence is overrated for research, and the most important quality is giving a shit. Solving big problems is mostly a question of having the right perspective and tools, and raw intelligence is not very useful without them. To get there, one needs to take time to develop opinions and feelings about the strengths and weaknesses of various tools.
Of course, every rule has exceptions, and there have been long-standing problems that were solved only when someone had the chutzpah to apply far more random bullshit than anyone had dared to try before.
Upvoted, but for me the answer is as simple as noting that Knuth is a devout Lutheran who is deeply involved with his church. Lutherans generally think that technology is part of God's wonderful creation and that everything is beautiful from the right angle. Knuth thought that algorithms were beautiful and Godly already, and he understands how LLMs work mechanically, so why can't they be beautiful and Godly too? Also, they think that God exists, so they're primed to be misled and deluded.
Hypothesis: When he wrote Surreal Numbers, Knuth was a poet and thus unknowingly of the Devil's party.