

womp, and wait for it, womp


Do you want Tylers Durden? Because this is how you get Tylers Durden.


Train your chatbot on TV Tropes, and the password will always be swordfish.


The post names Joscha Bach as someone Aella tried to exclude.
You do not, under any circumstances, have to hand it to Aella.


"Yes, I am hammering myself in the balls. But maybe it's worth it?"


The phrase "ambient AI listening in our hospital" makes me hear the "Dies Irae" in my head.


A longread on AI greenwashing begins thusly:
The expansion of data centres - which is driven in large part by AI growth - is creating a shocking new demand for fossil fuels. The tech companies driving AI expansion try to downplay AI's proven climate impacts by claiming that AI will eventually help solve climate change. Our analysis of these claims suggests that rather than relying on credible and substantiated data, these companies are writing themselves a blank cheque to pollute on the empty promise of future salvation. While the current negative effects of AI on the climate are clear, proven and growing, the promise of large-scale solutions is often based on wishful thinking, and almost always presented with scant evidence.
(Via.)


It's morgin' time


Limor Fried and I had a class together at MIT in 2001. This has no bearing on the present circumstances and offers me no real insight (anything I could say about our extremely limited interactions would amount to confirmation bias). It's just the odd little factoid that comes to mind whenever adafruit Does Something Online.


Presuming that they are all liars and cheaters is both contrary to the instincts of a scientist and entirely warranted by the empirical evidence.


First of all, like, if you can't keep track of your transcripts, just how fucking incompetent are you?
Second, I would actually be interested in a problem set where the problems can't be solved. What happens if one prompts the chatbot with a conjecture that is plausible but false? We cannot understand the effect of this technology upon mathematics without understanding the cost of mathematical sycophancy. (I will not be running that test myself, on the "meth: not even once" principle.)


Mathematicians: [challenge promptfondlers with a fair set of problems]
OpenAI: [breaks the test protocol, whines]
We will aim to publish more information next week, but as I noted above, this was a quite chaotic sprint (you caught us by surprise! please give us time to prepare next time!). We will not be able to gather all the transcripts as they are quite scattered.
Some of the prompts included guidance to iterate on its previous work…


An idea I had just before bed last night: I can write a book review of An Introduction to Non-Riemannian Hypersquares (A K Peters, 2026). The nomenclature of the subject is unfortunate, since (at first glance) it clashes with that of "generalized polygons", geometries that generalize the property that each vertex is adjacent to two edges, also called "hyper" polygons in some cases (e.g., Conway and Smith's "hyperhexagon" of integral octonions). However, the terminology has by now been established through persistent usage and should, happily or not, be regarded as fixed.
Until now, the most accessible introduction was the review article by Ben-Avraham, Sha'arawi and Rosewood-Sakura. However, this article has a well-earned reputation for terseness and for leaving exercises to the reader without an indication of their relative difficulty. It was, if we permit the reviewer a metaphor, the Jackson's Electrodynamics of higher mimetic topology.
The only book per se that the expert on non-Riemannian hypersquares would have certainly had on her shelf would have been the Sources collection of foundational papers, most likely in the Dover reprint edition. Ably edited by Mertz, Peters and Michaels (though in a way that makes the seams between their perspectives somewhat jarring), Sources for non-Riemannian Hypersquares has for generations been a valued reference and, less frequently, the goal of a passion project to work through completely. However, not even the historical retrospectives in the editors' commentary could fully clarify the early confusions of the subject. As with so many (all?) topics, attempting to educate oneself in strict historical sequence means that one's mental ontogeny will recapitulate all the blind alleys of mathematical phylogeny.
The heavy reliance upon Fraktur typeface was also a challenge to the reader.


From the HN thread:
Physicist here. Did you guys actually read the paper? Am I missing something? The "key" AI-conjectured formula (39) is an obvious generalization of (35)-(38), and something a human would have guessed immediately.
(35)-(38) are the AI-simplified versions of (29)-(32). Those earlier formulae look formidable to simplify by hand, but they are also the sort of thing you'd try to use a computer algebra system for.
And:
Also a physicist here, and I had the same reaction. Going from (35-38) to (39) doesn't look like much of a leap for a human. They say (35-38) was obtained from the full result by the LLM, but if the authors derived the full expression in (29-32) themselves, presumably they could do the special case too? (given it's much simpler). The more I read the post and preprint, the less clear it is which parts the LLM did.


Previously discussed here.


What they donāt tell you about opening the Lament Configuration is, after the pearl-headed nails and the sewing of wires to nerves, just how many puns are involved.


If the engineer does not commute, they will be unable, or rather un-abelian
I would also not put my finger on those microscope slides