Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(2026 is off to a great start, isn't it? Credit and/or blame to David Gerard for starting this.)


Against my better judgement I got into an argument with a promptfan on Bluesky. To his credit, aside from the usual boring arguments ("models are getting better and better", "have you tried model xyz", "everyone not using chatbots will be left in the dust"), he provided an actual example.
https://github.com/dfed/SafeDI/issues/183 It's a bug that's supposedly easy to test, but hard to reason about. It took the chatbot half an hour, while it would have taken him several hours (allegedly).
Now, my first thought was: "If a clanker (something that famously can't reason) could do it, then it can't be that hard to reason about."
But I was curious, so I looked. Unfortunately it is an area I'm not familiar with, and in a language (Swift) I don't know at all.
Probably should file the claim under "not true or false" and touch grass or something, but it's bugging me.
Any of y'all able to say whether there's something interesting in there?
Complementing sibling comments: Swift requires an enormous amount of syntactic ceremony in order to get things done and it lacks a powerful standard library to abbreviate common tasks. The generative tooling does so well here because Swift is designed for an IDE which provides generative tools of the sort invented in the 80s and 90s; when their editor already generates most of their boilerplate, predicts their types, and tab-completes their very long method/class names, they are already on auto-pilot.
The actual underlying algorithm should be a topological sort, using either Kahn's algorithm or Tarjan's algorithm. It should take fewer than twenty lines total when ceremony is kept to a minimum; here is the same algorithm for roughly the same purpose in my Monte-in-Monte compiler, sorting modules based on their dependencies in fifteen lines. Also, a good standard library should have a routine or module implementing topological sorting and other common graph algorithms; for example, Python's `graphlib.TopologicalSorter` was added in 2020 and POSIX `tsort` dates back to 1979. I would expect students to immediately memorize this algorithm upon grokking it during third-year undergrad, as part of the larger goal of grokking graph-traversal algorithms; the idea of both Kahn and Tarjan is merely to look for vertices with no incoming edges and to error if none can be found, which is not an easy concept to forget or to fail to rediscover when needed. Congrats, the LLM can do your homework.
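To put numbers on that claim: here's a throwaway Kahn's in Python. This is a sketch over a plain dict of dependencies, not SafeDI's actual representation, and the names are made up.

```python
# Toy Kahn's algorithm. `deps` maps each node to the nodes it depends on.
# Returns a dependencies-first ordering, or raises if there is a cycle.
from collections import deque

def toposort(deps):
    indegree = {node: 0 for node in deps}
    dependents = {}
    for node, requires in deps.items():
        for dep in requires:
            indegree.setdefault(dep, 0)
            dependents.setdefault(dep, []).append(node)
            indegree[node] += 1
    ready = deque(node for node, count in indegree.items() if count == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for dependent in dependents.get(node, ()):
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)
    if len(order) != len(indegree):
        stuck = [n for n, c in indegree.items() if c]
        raise ValueError(f"unresolvable (cyclic) dependencies: {stuck}")
    return order
```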
If there's any Swifties here: Hi! I love Taytay; I too was born in the late 80s and have trouble with my love life. Anyway, the nosology here is pretty easy; Swift's standard library doesn't include algorithms in general, only algorithms associated to data structures, which themselves are associated to standardized types. Since Swift descends from Smalltalk, its data structures include Collections, so a reasonable fix here would be to add a `Graph` collection and make topological sorting a method; see Python's approach for an example. Another possibility is to abuse the builtin sort routine, but this will cost O(n lg n) path lookups and is much more expensive; it's not a long-term solution.

Thanks! I'll definitely check out Python graphlib sometime. That's more in my wheelhouse.
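For when you do, here's about all there is to it; the dependency dict below is made up, nothing to do with SafeDI.

```python
# Python's stdlib topological sorter; raises CycleError on a dependency cycle.
from graphlib import TopologicalSorter, CycleError

deps = {"app": {"logger", "db"}, "logger": {"config"}, "db": {"config"}, "config": set()}
try:
    print(list(TopologicalSorter(deps).static_order()))  # e.g. ['config', 'db', 'logger', 'app']
except CycleError as err:
    print("cycle detected:", err.args[1])
```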
Doesn't look interesting to me. NB I'm not a Swifty. If you're someone looking to make a compile-time dependency injection validation framework, cycle detection seems like an early feature to add, and feels like a pretty early unit test to implement.
E: read response from BurgersMcSlopshot please :)
DI frameworks are tricky beasts. Either they sacrifice flexibility for simplicity (I've seen this done in Go and in Scala, where the DI essentially generates basic instantiation and more advanced resolution is left to the app developer), or they get really complex but do some handy things (.NET 4.x DI frameworks like Castle Windsor provided some neat lifecycle-management tools but were internally very complex).
Cycle detection gets a little hairier the more complex a dependency or class of dependencies gets. The process itself doesn't change, but the internal representation of the graph needs to be sufficiently abstract to illustrate a cycle for all possible resolution scenarios.
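To make that concrete with a toy (hypothetical names, nothing to do with SafeDI's internals): the same set of bindings can be acyclic under one resolution scenario and cyclic under another, so the check has to run against every graph the framework can actually resolve, not just the default wiring.

```python
# Hypothetical per-scenario dependency graphs, each checked independently.
from graphlib import TopologicalSorter, CycleError

scenarios = {
    "default": {
        "Service": {"Repository"},
        "Repository": {"Database"},
        "Database": set(),
    },
    "with_caching_decorator": {
        "Service": {"CachingService"},
        "CachingService": {"Service"},  # the decorator loops back into the service
    },
}

for name, graph in scenarios.items():
    try:
        order = list(TopologicalSorter(graph).static_order())
        print(f"{name}: OK, resolve in order {order}")
    except CycleError as err:
        print(f"{name}: cycle detected -> {err.args[1]}")
```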
Based on the commit to fix the particular bug, it looks like the change will address a specific scenario but will probably fail to address similar issues.
All this to say: "the problem isn't too hard to think about, but the solution isn't straightforward"; also, "this is a fine short-term fix, but longer term it would involve redefining the internal representation of a dependency graph"; and finally, "an LLM-provided solution is, in the most generous light, at best a band-aid."
Thanks so much! Now I can waste my life on more interesting things…
I see. I guess I was thinking too abstractly about how a system like this might work.
For someone that has a bit of a PL/compiler background, it's not hard if you're familiar with things like this.
What is worrying is that while the fix does address the test case from the issue, it seems there was no analysis performed as to why the failure occurred. Like, okay, this test case passes, but I'm not immediately sure the system is now sound.
If it's hard to reason about, then it means you as the developer are supposed to sit the fuck down, figure it out, and document it so that it's no longer hard to reason about for someone who reads it. Anything short of that is a cop-out.
I'm not going to actually try to figure out how this DI framework works to do this analysis, definitely not for free.