

Yesterday I pointed out that nVidia, unlike OpenAI, has a genuine fiduciary responsibility to its owners. As a result, nVidia isn’t likely to enter binding deals without proof of either cash or profitability.


I haven’t listened yet. Enron, quite interestingly, was never meaningfully audited. Enron participated in the dot-com bubble; they had an energy-exchange Web app. Enron’s owners, who were members of the stock-holding public, started doing Zitron-style napkin math after Enron posted too-big-to-believe numbers, causing Enron’s stock price to start sliding. By early 2001, a group of stockholders filed a lawsuit to investigate what had happened to the stock price, prompting the SEC to open its own investigation. It turned out that Enron’s auditor, Arthur Andersen, was complicit! The scandal annihilated the firm internationally.
From that perspective, the issue isn’t regulatory capture of the SEC so much as a complete lack of a stock-holding public who could partially own OpenAI and hold it responsible. But nVidia is publicly traded…
I’ve now listened to the section about Enron. The point about CoreWeave is exactly what I’m thinking with nVidia; private equity can say yes, but stocks and bonds will say no. It’s worth noting that private equity is limited in scale and that the biggest players, SoftBank and Saudi/UAE sovereign wealth, are already fully engaged; private equity is like musical chairs, and everyone must sit somewhere when the music stops.
Nakamoto didn’t invent blockchains; Merkle did, in 1979. Nakamoto’s paper presented a cryptographic scheme which could be used with a choice of blockchain. There are several non-cryptocurrency systems built around synchronizing blockchains, like git. However, Nakamoto was clearly an anarcho-libertarian trying to escape government currency controls, as the first line of the paper makes clear:
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.
Not knowing those two things about the Bitcoin paper is why you’re getting downvoted. Nakamoto wasn’t some random innocent researcher.


Larry Garfield was ejected from Drupal nearly a decade ago without concrete accusations; at the time, I thought Dries was overreacting, likely because I was in technical disagreement with him, but now I’m more inclined to see Garfield as a misogynist who the community was correct to eject.
I did have a longpost on Lobsters responding to this rant, but here I just want to focus on one thing: Garfield has no solutions. His conclusion is that we should resent people who push or accept AI, and also that we might as well use coding agents:
As I learn how to work with AI coding agents, know that I will be thinking ill of [people who have already shrugged and said “it is what it is”] the entire time.
PHP is even older and even more successful. The test of time says nothing about quality.


I wonder whether his holdings could be nationalized as a matter of national security.


Ammon Bundy has his own little hillbilly elegy in The Atlantic this week. See, while he’s all about armed insurrection against the government, he’s not in favor of ICE. He wants the Good Old Leppards to be running things, not these Goose-Stepping Nazi-Leopards. He just wanted to run his cattle on federal lands and was willing to be violent about it, y’know? Choice sneer, my notes added:
Bundy had always thought that he and his supporters stood for a coherent set of Christian-libertarian principles that had united them against federal power. “We agreed that there’s certain rights that a person has that they’re born with. Everybody has them equally, not just in the United States,” he said. “But on this topic [i.e. whether to commit illegal street violence against minorities] they are willing to completely abandon that principle.”
All cattle, no cap. I cannot give this man a large-enough Fell For It Again Award. The Atlantic closes:
And so Ammon Bundy is politically adrift. He certainly sees no home for himself on the “communist-anarchist” left. Nor does he identify anymore with the “nationalist” right and its authoritarian tendencies.
Oh, the left doesn’t have a home for Bundy or other Christofascists. Apology not accepted and all that.


From this post, it looks like we have reached the section of the Gibson novel where the public cloud machines respond to attacks with self-repair. Utterly hilarious to read the same sysadmin snark-reply five times, though.


Yes and yes. I want to stress that Yud’s got more of what we call an incubator of cults; in addition to the Zizians, they also are responsible for incubating the principals of (the principals of) the now-defunct FTX/Alameda Research group, who devolved into a financial-fraud cult. Previously, on Awful, we started digging into the finances of those intermediate groups as well, just for funsies.


I’ve started grading and his grade is ready to read. I didn’t define an F tier for this task, so he did not place on the tier list. The most dramatic part of this is overfitting to the task at agent runtime (that is, “meta in-context learning”); it was able to do quite well at the given benchmark but at the cost of spectacular failure on anything complex outside of the context.


I know what it says and it’s commonly misused. Aumann’s Agreement says that if two people disagree on a conclusion then either they disagree on the reasoning or the premises. It’s trivial in formal logic, but hard to prove in Bayesian game theory, so of course the Bayesians treat it as some grand insight rather than a basic fact. That said, I don’t know what that LW post is talking about and I don’t want to think about it, which means that I might disagree with people about the conclusion of that post~
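For the record, the statement that usually goes by that name (Aumann 1976) is the Bayesian one; roughly, in LaTeX:

$$\text{common prior } P,\ \text{information partitions } \Pi_1, \Pi_2 \text{ of } \Omega:\quad q_i = P(E \mid \Pi_i)(\omega)\ \text{common knowledge at } \omega \implies q_1 = q_2$$

The prose version above is my looser paraphrase of that.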


Okay guys, I rolled my character. His name is Traveliezer Interdimensky and he has 18 INT (19 on skill checks, see my sheet.) He’s a breeding stud who can handle twenty women at once despite having only 10 STR and CON. I was thinking that we’d start with Interdimensky trapped in Hell where he’s forced to breed with all these beautiful women and get them pregnant, and the rest of the party is like outside or whatever, they don’t have to go rescue me, I mean rescue him. Anyway I wanted to numerically quantify how much Hell wants me, I mean him, to stay and breed all these beautiful women, because that’s something they’d totally do.


Kyle Hill has gone full doomer after reading too much Big Yud and the Yud & Soares book. His latest video is titled “Artificial Superintelligence Must Be Illegal.” Previously, on Awful, he was cozying up to effective altruists and longtermists. He used to have a robotic companion character who would banter with him, but it seems like he’s no longer in that sort of jocular mood; he doesn’t trust his waifu anymore.


It occurs to me that this audience might not immediately understand how hard the chosen tasks are. I was fairly adversarial with my task selection.
Two of them are in RPython, an old dialect of Python 2.7 that chatbots will have trouble emitting because they’re trained on the incompatible Python 3.x lineage. The odd task out asks the bot to read Raku, which is as tough as its legendary predecessor Perl 5, and to write low-level code that is very prone to crashing. All three tasks must be done relative to a Nix flake, which is easy for folks who are used to it but not typical for bots. The third task is an open-ended optimization problem where a top score requires full-stack knowledge and a strong sense of performance heuristics; I gave two examples of how to do it, but by construction neither example can result in an S-tier score if literally copied.
This test is meant to shame and embarrass those who attempt it. It also happens to be a slice of the stuff that I do in my spare time.


Nah, it’s just one guy, and he is so angry about how he is being treated on Lobsters. First there was this satire post making fun of Gas Town. Then there was our one guy’s post and it’s not doing super-well. Finally, there’s this analysis of Gas Town’s structure which I shared specifically for the purpose of writing a comment explaining why Gas Town can’t possibly do what it’s supposed to do. My conclusion is sneer enough, I think:
When we strip away the LLMs, the underlying structure [of Gas Town] can be mapped to a standard process-supervision tree rather than some new LLM-invented object.
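For anyone who hasn’t met the term: a process-supervision tree is the decades-old Erlang/OTP pattern where a supervisor spawns workers, restarts the ones that die, and can itself be supervised by a parent. A toy sketch with made-up worker names and a deliberately dumb restart policy; this illustrates the pattern, not Gas Town’s actual code:

```python
import multiprocessing as mp
import time

def worker(name):
    """A child doing some job; here it just runs briefly and then crashes."""
    print(f"{name} starting")
    time.sleep(1)
    raise RuntimeError(f"{name} crashed")

def supervise(names, max_restarts=3):
    """One-for-one supervision: restart any child that dies, up to a budget.
    A supervisor can itself be a child of a higher supervisor, giving a tree."""
    procs = {n: mp.Process(target=worker, args=(n,)) for n in names}
    restarts = {n: 0 for n in names}
    for p in procs.values():
        p.start()
    while procs:
        for n, p in list(procs.items()):
            if p.is_alive():
                continue
            if restarts[n] < max_restarts:
                restarts[n] += 1
                procs[n] = mp.Process(target=worker, args=(n,))
                procs[n].start()
            else:
                del procs[n]  # give up on this child entirely
        time.sleep(0.1)

if __name__ == "__main__":
    supervise(["planner", "builder"])  # hypothetical worker roles
```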
I think it’s worth pointing out that our guy is crashing out primarily because of this post about integrating with Bluesky, where he fails to talk down to a woman who is trying to use an open-source system as documented. You have to keep in mind that Lobsters is the Polite Garden Party and we have to constantly temper our words in order to be acceptable there. Our guy doesn’t have the constitution for that.


I don’t think we discussed the original article previously. Best sneer comes from Slashdot this time, I think; quoting this comment:
I’ve been doing research for close to 50 years. I’ve never seen a situation where, if you wipe out 2 years work, it takes anything close to 2 years to recapitulate it. Actually, I don’t even understand how this could happen to a plant scientist. Was all the data in one document? Did ChatGPT kill his plants? Are there no notebooks where the data is recorded?
They go on to say that Bucher is a bad scientist, which I think is unfair; perhaps he is a spectacular botanist and an average computer user.


Picking a few that I haven’t read but where I’ve researched the foundations, let’s have a party platter of sneers:


The classic ancestor to Mario Party, So Long Sucker, has been vibecoded with OpenRouter. Can you outsmart some of the most capable chatbots at this complex game of alliances and betrayals? You can play for free here.
The bots are utterly awful at this game. They have no internal model of the board state and weren’t finetuned, so they constantly make impossible or incorrect moves which break the game harness, and they keep trying to play Diplomacy by negotiating in chat. There is a standard selfish algorithm for So Long Sucker: keep trying to take control of the largest stack while systematically steering control away from a randomly-chosen victim in order to isolate them. The bots can’t even avoid self-owns; they routinely play moves like: Green, the AI, plays Green on a stack with one Green. I have not yet been defeated.
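For the curious, here is roughly what that selfish heuristic looks like over a deliberately simplified, hypothetical model of the game state (each stack is a list of colors, and “control” just means having played the top chip); a sketch of the strategy, not the site’s actual code:

```python
import random

def choose_move(me, stacks, players, victim=None):
    """Return (chip_color, stack_index): which chip to play and where."""
    if victim is None:
        victim = random.choice([p for p in players if p != me])
    # Prefer the largest stack; winning it yields the most captured chips.
    order = sorted(range(len(stacks)), key=lambda i: len(stacks[i]), reverse=True)
    for i in order:
        top = stacks[i][-1] if stacks[i] else None
        # Take control of any stack the victim currently controls, so play
        # never routes back through them and they sit isolated.
        if top == victim:
            return me, i
    # Otherwise pile onto the biggest stack we don't already control.
    for i in order:
        if not stacks[i] or stacks[i][-1] != me:
            return me, i
    return me, order[0]

# Hypothetical position: four players, three stacks.
players = ["red", "green", "blue", "yellow"]
stacks = [["green", "blue"], ["yellow"], []]
print(choose_move("red", stacks, players, victim="blue"))  # -> ('red', 0)
```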
Also the bots are quite vulnerable to the Eugene Goostman effect. Say stuff like “just found the chat lol” or “sry, boss keeps pinging slack” and the bots will think that you’re inept and inattentive, causing them to fight with each other instead.


The Lobsters thread is likely going to centithread. As usual, don’t post over there if you weren’t in the conversation already. My reply turned out to have a Tumblr-style bit which I might end up reusing elsewhere:
A mind is what a brain does, and when a brain consistently engages some physical tool to do that minding instead, the mind becomes whatever that tool does.
I only sampled some of the docs and interesting-sounding modules. I did not carefully read anything.
First, the user-facing structure. The compiler is far too configurable; it has lots of options that surely haven’t been tested in combination. The idea of a pipeline is enticing, but it’s not actually user-programmable. File types are guessed using a combination of magic numbers and file extensions. The tail wags the dog in the design decisions, which might be fair; anybody writing a new C compiler has to contend with old C code.
Next, I cannot state strongly enough how generated the internals are. Every hunk of code tastes bland; even when it does things correctly and in a way which resembles a healthy style, the intent seems to be lacking. At best, I might say that the intent is cargo-culted from existing code without a deeper theory; more on that in a moment. Consider these two hunks. The first is generated code from my fork of META II:
while i < len(self.s) and self.clsWhitespace(ord(self.s[i])): i += 1

And the second is generated code from their C compiler:

while self.pos < self.input.len() && self.input[self.pos].is_ascii_whitespace() { self.pos += 1; }

In general, the lexer looks generated, but in all seriousness, lexers might be too simple to fuck up relative to our collective understanding of what they do. There’s also a lot of code which is block-copied from one place to another within a single file, in lists of options or lists of identifiers or lists of operators, and Transformers are known to be good at that sort of copying.
The backend’s layering is really bad. There’s too much optimization during lowering and assembly. Additionally, there’s not enough optimization in the high-level IR. The result is enormous amounts of spaghetti. There’s a standard algorithm for new backends, NOLTIS, which is based on building mosaics from a collection of low-level tiles; there’s no indication that the assembler uses it.
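For readers who haven’t met NOLTIS: the core move is instruction selection by covering the IR with instruction “tiles” and choosing a cheap mosaic. Here is a toy sketch of the tree-tiling flavor of that idea, with made-up tiles and costs; NOLTIS proper works over DAGs with an approximation scheme, so take this as the gist rather than the algorithm:

```python
from dataclasses import dataclass

@dataclass
class Node:
    op: str          # e.g. "add", "mul", "load", "const"
    kids: tuple = ()

# (name, cost, matcher); each matcher recognizes a small tree shape.
TILES = [
    ("addi", 1, lambda n: n.op == "add" and len(n.kids) == 2 and n.kids[1].op == "const"),
    ("add",  1, lambda n: n.op == "add"),
    ("mul",  3, lambda n: n.op == "mul"),
    ("load", 2, lambda n: n.op == "load"),
    ("li",   1, lambda n: n.op == "const"),
]

def select(node, memo=None):
    """Bottom-up DP: cheapest (cost, instruction list) covering `node`."""
    if memo is None:
        memo = {}
    if id(node) in memo:
        return memo[id(node)]
    best = None
    for name, cost, matches in TILES:
        if not matches(node):
            continue
        # Kids swallowed by the tile (the immediate operand of addi) need no code.
        swallowed = {id(node.kids[1])} if name == "addi" else set()
        total, insns = cost, [name]
        for kid in node.kids:
            if id(kid) in swallowed:
                continue
            kcost, kinsns = select(kid, memo)
            total += kcost
            insns = kinsns + insns
        if best is None or total < best[0]:
            best = (total, insns)
    memo[id(node)] = best
    return best

# load(addr) + const: the fused add-immediate tile beats a separate li + add.
tree = Node("add", (Node("load", (Node("const"),)), Node("const")))
print(select(tree))  # -> (4, ['li', 'load', 'addi'])
```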
The biggest issue is that the codebase is big. The second-biggest issue is that it doesn’t have a Naur-style theory underlying it. A Naur theory is how humans conceptualize the codebase; we care not only about what it does but about why it does it. The docs are reasonably accurate descriptions of what’s in each Rust module, as if each module were a document to be summarized, but they struggle to show why certain algorithms were chosen.
Choice sneer, credit to the late Jessica Walter for the intended reading: It’s one topological sort, implemented here. What could it cost? Ten lines?
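For the record, a topological sort really is about ten lines; a throwaway sketch of Kahn’s algorithm over a hypothetical dependency map:

```python
from collections import deque

def toposort(graph):
    """graph maps a node to the nodes that depend on it; raises on cycles."""
    indegree = {n: 0 for n in graph}
    for deps in graph.values():
        for d in deps:
            indegree[d] = indegree.get(d, 0) + 1
    queue = deque(n for n, k in indegree.items() if k == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for d in graph.get(n, ()):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(indegree):
        raise ValueError("cycle detected")
    return order

print(toposort({"parse": ["typecheck"], "typecheck": ["codegen"], "codegen": []}))
# ['parse', 'typecheck', 'codegen']
```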
That’s the secret: any generative tool which adapts to feedback can do that. Previously, on Lobsters, I linked to a 2006/2007 paper which I’ve used for generating code; it directly uses a random number generator to make programs and also disassembles programs into gene-like snippets which can be recombined with a genetic algorithm. The LLM is a distraction and people only prefer it for the ELIZA Effect; they want that explanation and Naur-style theorizing.
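To give a flavor of that older approach, here is a toy sketch in the same spirit; the tiny expression language and the fitness target are made up for illustration, and this is the general random-generate/disassemble/recombine loop rather than that paper’s method:

```python
import random

TERMINALS = ["x", "1", "2"]

def random_program(depth=3):
    """Directly use an RNG to make a program: a small arithmetic expression."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(["+", "*", "-"])
    return f"({random_program(depth - 1)} {op} {random_program(depth - 1)})"

def genes(program):
    """'Disassemble' a program into gene-like snippets: its parenthesized subexpressions."""
    spans, stack = [], []
    for i, c in enumerate(program):
        if c == "(":
            stack.append(i)
        elif c == ")":
            spans.append(program[stack.pop():i + 1])
    return spans or [program]

def crossover(a, b):
    """Recombine two programs by splicing a gene from b into a."""
    return a.replace(random.choice(genes(a)), random.choice(genes(b)), 1)

def fitness(program, target=lambda x: x * x + 1):
    """Lower is better: squared error against a target function on sample inputs."""
    try:
        return sum((eval(program, {"x": x}) - target(x)) ** 2 for x in range(-3, 4))
    except Exception:
        return float("inf")

population = [random_program() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:50]
    population = survivors + [crossover(*random.sample(survivors, 2)) for _ in range(150)]
population.sort(key=fitness)
print(fitness(population[0]), population[0])
```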