Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. What a year, huh?)


The sad thing is I have some idea of what it's trying to say. One of the many weird habits of the Rationalists is that they fixate on a few obscure mathematical theorems and then come up with their own ideas of what these theorems really mean. Their interpretations may be only loosely inspired by the actual statements of the theorems, but it does feel real good when your ideas feel as solid as math.
One of these theorems is Aumann's agreement theorem. I don't know what the actual theorem says, but the LW interpretation is that any two "rational" people must eventually agree on every issue after enough discussion, whatever rational means. So if you disagree with any LW principles, you just haven't read enough 20k word blog posts. Unfortunately, most people with "bounded levels of compute" ain't got the time, so they can't necessarily converge on the meta level of, never mind, screw this, I'm not explaining this shit. I don't want to figure this out anymore.
@gerikson @lagrangeinterpolator
> but it does feel real good when your ideas feel as solid as math
Misread this as "meth", perfect, no further questions
I know what it says and it's commonly misused. Aumann's Agreement says that if two people disagree on a conclusion then either they disagree on the reasoning or the premises. It's trivial in formal logic, but hard to prove in Bayesian game theory, so of course the Bayesians treat it as some grand insight rather than a basic fact. That said, I don't know what that LW post is talking about and I don't want to think about it, which means that I might disagree with people about the conclusion of that post~
Having read the Wikipedia article, I think Aumann's theorem is even narrower than that. The theorem doesn't even reference "reasoning", unless you count observing that a certain event happened as reasoning.
I don't think that's an accurate summary. In Aumann's agreement theorem, the different agents share a common prior distribution but are given access to different sources of information about the random quantity under examination. The surprising part is that they agree on the posterior probability provided that their conclusions (not their sources) are common knowledge.
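For reference, here's the standard partition formulation, paraphrased (so, my wording, not the article's):

```latex
% Aumann's agreement theorem, standard partition formulation (paraphrase)
\textbf{Theorem (Aumann, 1976).}
Let $(\Omega, p)$ be a finite probability space, with $p$ the agents'
common prior, and let $\Pi_1, \Pi_2$ be their information partitions
of $\Omega$. Fix an event $A \subseteq \Omega$ and a true state
$\omega$, and write $q_i = p\bigl(A \mid \Pi_i(\omega)\bigr)$ for
agent $i$'s posterior. If the values $q_1$ and $q_2$ are common
knowledge at $\omega$ (i.e.\ each $q_i$ is constant on the cell of
the meet $\Pi_1 \wedge \Pi_2$ containing $\omega$), then $q_1 = q_2$.
```

Nothing in there about discussion, persuasion, or converging on "every issue": just two conditional probabilities forced to coincide.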
The Wikipedia article is cursed
I'd say even the part where the article tries to formally state the theorem is not written well. Even then, it's very clear how narrow the formal statement is. You can say that two agents agree on any statement that is common knowledge, but you have to be careful about exactly how you're defining "agent", "statement", and "common knowledge". If I actually wanted to prove a point with Aumann's agreement theorem, I'd have to make sure my scenario fits in the mathematical framework. What is my state space? What are the events partitioning the state space that form an agent? Etc.
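For what it's worth, here's what that legwork looks like on the standard four-state toy example (my own sketch, nothing to do with whatever the LW post is doing):

```python
from fractions import Fraction

# Four equally likely states under the common prior.
states = [1, 2, 3, 4]
prior = {s: Fraction(1, 4) for s in states}

# An "agent" is just an information partition: the cell containing
# the true state is everything that agent gets to observe.
partition_1 = [{1, 2}, {3, 4}]
partition_2 = [{1, 2, 3}, {4}]

event_A = {1, 4}  # the event both agents form posteriors about

def cell(partition, state):
    """The partition cell containing `state`."""
    return next(c for c in partition if state in c)

def posterior(partition, state, event):
    """P(event | the agent's information at `state`), from the common prior."""
    c = cell(partition, state)
    return sum(prior[s] for s in c & event) / sum(prior[s] for s in c)

def meet(pa, pb):
    """Finest common coarsening of two partitions, built by transitively
    merging overlapping cells; its cells carry the common-knowledge events."""
    cells = [set(c) for c in pa + pb]
    merged = True
    while merged:
        merged = False
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                if cells[i] & cells[j]:
                    cells[i] |= cells.pop(j)
                    merged = True
                    break
            if merged:
                break
    return cells

true_state = 1
print(posterior(partition_1, true_state, event_A))  # 1/2
print(posterior(partition_2, true_state, event_A))  # 1/3 -- disagreement!

# No contradiction with the theorem: the meet here is the whole space,
# and on it agent 1's posterior is constant (1/2, so it IS common
# knowledge) while agent 2's takes two values (1/3 and 1), so agent 2's
# posterior is not common knowledge and the theorem's hypothesis fails.
for m in meet(partition_1, partition_2):
    print(m, {posterior(partition_1, s, event_A) for s in m},
             {posterior(partition_2, s, event_A) for s in m})
```

The disagreement survives precisely because agent 2's posterior isn't common knowledge, which is the hypothesis doing all the work in the theorem.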
The rats never seem to do the legwork that's necessary to apply a mathematical theorem. I doubt most of them even understand the formal statement of Aumann's theorem. Yud is all about "shut up and multiply," but has anyone ever seen him apply Bayes's theorem and multiply two actual probabilities? All they seem to do is pull numbers out of their ass and fit superexponential curves to 6 data points because the superintelligent AI is definitely coming in 2027.
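For contrast, this is the entire amount of multiplication a real Bayes update takes (toy numbers, invented for illustration: a 90%-sensitive test with a 5% false-positive rate and a 1% base rate):

```python
# Toy Bayes update, all numbers invented for illustration.
prior = 0.01                   # P(hypothesis)
p_e_given_h = 0.90             # P(evidence | hypothesis)
p_e_given_not_h = 0.05         # P(evidence | not hypothesis)

# P(evidence), by the law of total probability.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes' theorem: multiply likelihood by prior, then normalize.
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.154 -- nowhere near the 0.9 the vibes suggest
```

Two multiplications and a division. That's the whole sacred ritual.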
the get-smart-quick scheme in its full glory
"you should watch [Steven Pinker's] podcast with Richard Hanania" cool suggestion scott
Surely this is a suitable reference for a math article!
Honestly, even the original paper is a bit silly. Are all game theory mathematics papers this needlessly far-fetched?