Anthony
@abucci@buc.ci
Well, everyone, you can now submit a comment to let the FCC know what you think about SpaceX asking for 1 million satellites for "AI datacenters" whatever the fuck that means.
https://docs.fcc.gov/public/attachments/DA-26-113A1.pdf
Comments due March 6.
I am having a very hard time believing this is really happening. Fuck you, SpaceX, and fuck you, FCC. This is not regulation, this is a fucking joke that will destroy our ability to use satellites for centuries.
@sundogplanets have you ever thought of using the surface of the ocean to show the non-maths how this looks on a smaller sphere? Get them to drive a boat through the grid or something tangible.
I don't think they understand 3D...or 2D
If anyone has time and energy to set up instructions for how to submit a comment to the FCC (it's really fucking complicated, on purpose, I'm sure), I would very much appreciate it! Otherwise I'll do it in the coming days.
@sundogplanets It's not going to work. That will be obvious long before he has all that many satellites up, and he'll move on to his next sick joke.
If somebody wants to venture into this, please test all steps.
The first one involves sending an email to ecfs@fcc.gov with "get form" and your email address in the message body.
The reply I got was trying to strangely gaslight me:
"Delivery has failed to these recipients or groups:
ecfs@fcc.gov
Your message couldn't be delivered. The Domain Name System (DNS) reported that the recipient's domain does not exist."
There seems to be a strange subdomain falstaff.fcc.gov involved. The attached error log says:
Diagnostic information for administrators:
Generating server: SJ0PR09MB11735.namprd09.prod.outlook.com
ecfs@fcc.gov
Remote server returned '550 5.4.310 DNS domain falstaff.fcc.gov does not exist [Message=InfoDomainNonexistent] [LastAttemptedServerName=falstaff.fcc.gov] [SA2PEPF00003023.namprd09.prod.outlook.com 2026-02-05T12:30:46.776Z 08DE6078A5284768]'
@sundogplanets A.I. data centers in space is such a batshit crazy idea, it's hard to believe anyone takes it seriously. But they do.
It's just mind-blowing. Like living inside a comic book.
@sundogplanets let me math this badly...
At a launch a second they could have that sucker up in under two weeks. At a launch every hour it'd take more than a century. At a launch a day, millennia. Assuming, of course, one satellite per launch. And that's just the getting-to-orbit bit; fabbing the satellites might well take longer. After, of course, the lead time to make sure the hallucinating chatbots aren't on the worst granola.
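A back-of-the-envelope sketch of those launch timelines (assuming, as above, exactly one satellite per launch and a perfectly steady cadence; the function name and constants here are mine, purely for illustration):

```python
# Rough launch timelines for 1,000,000 satellites,
# hypothetically assuming one satellite per launch.
SATELLITES = 1_000_000
SECONDS_PER_DAY = 86_400

def time_to_orbit(seconds_per_launch: int) -> float:
    """Total time in days to launch all satellites at a fixed cadence."""
    return SATELLITES * seconds_per_launch / SECONDS_PER_DAY

per_second = time_to_orbit(1)       # one launch every second
per_hour = time_to_orbit(3_600)     # one launch every hour
per_day = time_to_orbit(86_400)     # one launch every day

print(f"1/second: {per_second:.1f} days")          # ~11.6 days
print(f"1/hour:   {per_hour / 365.25:.0f} years")  # ~114 years
print(f"1/day:    {per_day / 365.25:.0f} years")   # ~2738 years
```

Even the absurd best case of a launch every single second takes nearly two weeks of continuous launches; anything resembling a real cadence runs to centuries or millennia.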
@sundogplanets may be a blessing in disguise, i mean if (when) they fuck up, it will (force) the US to clean up their mess by inventing tech to clean up out there (if that's scientifically possible, i'm not well versed in space physics).
Elon the illegal South African Nazi predator just can't stop pushing his unwanted stubby nub of a weenie in people's faces.
What a dirty pig.
@sundogplanets the good thing is that Elon Musk has never delivered on anything he's ever promised, and this would require rocket launches at such a wildly high pace that I don't think they could come close to pulling this off.
It's just to juice the SpaceX IPO so Musk can be a trillionaire. If a next US administration comes to be, this plan is dumpstered for sure.
@sundogplanets How would a data center even work in outer space? Heat would build up. Unless I'm missing something and the idea is to have something far worse than the regular DCs down here?
Remember the hole-in-the-ozone days?
Sheesh.
Think it was after "give a hoot, don't pollute" and just before acid rain.
Anyway, now we blast rocket fuel through it for the new computer-science data barns.
🤔 Interesting swings in folk ideology.
@sundogplanets But SpaceX will “scal[e] to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!” How could you possibly be against that?!?
Yeah, it’s batshit crazy, Musk is hyping for IPO (although I don’t know if he believes it himself… he’s huffed his own farts a few times too many). I’ll be astonished if more than, I dunno, a few dozen ever get launched. By that point, it should be clear what a stupid idea it is.
@sundogplanets Our economy is no longer serving the public, but the financial system.
Data centers in space won’t work, but the investment bankers don’t know that, so bullshit that make number go up rules.
Data centers in space are the tulip craze of our age!
Why would they even want AI datacenters in space? We haven't solved how to cool them without massive amounts of water down here on earth, how are they going to cool them up there? Some months back I read of someone else wanting to create the same datacenters in international waters. Is this just a new way for them to try to avoid legal liability? If that's the case, I wonder what kind of content they will generate up there.
@sundogplanets It means not complying with police investigations and court orders (Grok, Grokipedia, ...) worldwide, via an alternative SpaceX internet, except by use of force. It means controlling space launches. On top of polluting our sky and space.
@sundogplanets With Kessler syndrome now thought to be inevitable, I'm surprised this is even being entertained, and that policy isn't being drawn up to ration new satellites and mandate retrieval of spent/decommissioned/junk satellites as a condition of approval for a new one.
@sundogplanets Any chance the guidelines for commenting when the FCC was mulling axing/reinstating Net Neutrality are still valid at this point?
@sundogplanets Thanks for the great talk tonight in Hamilton Dr Sam. Scary but enlightening... time to spread this widely.
@sundogplanets plus space is not only a US thing (until some guy decides he wants a property title on it for some US thing). There is a higher authority for this.
@sundogplanets Many thanks for sharing the link - I wasn't aware of this at all. Absolutely disgusting! I shall be submitting some 'developmental feedback' to the FCC!
@sundogplanets This is more about the upcoming ipo than reality. It's to make dumb investors wet themselves with excitement. At that level, it'll probably work...
But still, needs a big NO from the FCC, who should not be the arbiter anyway, it's an international issue.
I wonder what the Chinese think about it?!!
@sundogplanets the environmental agencies should have their say too. Because all these millions of datacenters are going to be disposed of... And they won't be recycled. Just burned up. In the open. No filters. No catalytic converters. Just burned in the atmosphere.
@sundogplanets Why would Trump want sophisticated communication systems to be available to just anyone?
@sundogplanets "unlimited number of satellites"
Soon our weather forecast will be predicting how many of those satellites will be falling down tonight.
@sundogplanets i think it means space x is about to go through an ipo and he is trying to scam investors for the thousandth time
@sundogplanets Any companies doing orbital clean-up? I'm looking for an investment opportunity. *sarcasm*
@sundogplanets this will go as well as hyperloop did, if at all.
Good luck cooling these machines.
But a lot of waste and undesirable side effects will happen before the "proof of concept" is delivered.
It's strange and frustrating that most AI researchers don't seem interested in natural intelligence.
In the early days, when "neural networks" were seen as models of brains, many people seemed at least superficially interested in neuroscience. It's not like that now. I'm sure some folks would say "yeah, and aerospace engineers don't worry about bird flight, either!" but that feels wrong to me.
If all you care about is moving cargo, then sure, flight is solved, and who cares if our designs are "biologically realistic". Similarly, if all you care about is recognizing images, playing video games, and generating slop, then AI is solved. We'll just make the current solutions better.
But I think we've barely scratched the surface of what intelligence actually is! Current AI is so narrow and so shallow by comparison, yet I think people don't even notice that because they haven't actually thought about how intelligent living things are, and in how many different ways!
Not to mention the fact that so many of our current AI solutions are really just models of human intelligence, trained by example. Do we not care about creating genuine, original intelligence? Is bottling it up in a computer so that it's reproducible the same thing as making it in the first place? I'd say obviously not, but apparently many people don't see the difference...
I too have been struck by the apparent lack of curiosity of our colleagues. One tentative conclusion I've come to is that there is a subspecies of STEM folks driven largely by impressive demos. I feel like this tendency reflects the perversities of short-termist economic thinking, but in any case the view is that a splashy demo that makes it into Nature and the NYT, but has shoddy science backing it, is superior to excellent science that has no splashy demo. The field "progresses" from one impressive demo to the next. Pollack used to say things along these lines, but that's no surprise given that he founded the DEMO lab: it's right there in the name! I think that mindset is not uncommon, though. Even those who agree the science is important run up against the constraint that there's significantly more funding for flashy demos than for basic research.
@abucci Completely agreed. I also think there's something to be said about how AI is ostensibly about understanding intelligence, but in practice seems more concerned with automating human cognitive tasks.
Here's John Searle in 1983:
Marvin Minsky of MIT says that the next generation of computers will be so intelligent that we will ‘be lucky if they are willing to keep us around the house as household pets.'
Here's Joseph Weizenbaum in 2007:
Professor Marvin Minsky of MIT, once pronounced—a belief he still holds—that ‘‘the brain is merely a meat machine.''
He goes on to note that meat is dead and might be eaten or thrown out. Flesh is what's alive. He also draws attention to the word "merely", as in "nothing more than".
I share with Weizenbaum the belief that Minsky has clearly expressed a disdain for human intelligence. We're on the order of household pets. Our brains are no more than food or trash. Obviously Minsky doesn't speak for all AI researchers then or since, but his "meat machine" language is all over the place, and this disdain or even contempt for human intelligence and achievement is also common.
It definitely doesn't speak to a curiosity about intelligence, which I think requires at least a little bit of love and esteem.
You might be interested in some of the work being done by some of Daniela Rus' graduate students. Rus is a roboticist, and she and her students were looking at what the 302 neurons of C. elegans could accomplish in contrast to the army of matrix entries in their "neural nets".
Some differential equations later, they had a nineteen neuron structure that could pilot a quadcopter and follow a person around campus.
They have a startup, Liquid.ai, starting from there you can find some of their papers.
@lain_7 That sounds amazing! I was just thinking: robotics really is an exception to this rule. I think it's more apparent in that field just how much we still have to learn. I hadn't heard of that experiment, though! I'll definitely check it out.
I think a good place to start might be their paper, “Causal Navigation by Continuous-time Neural Networks”, which Google can find on arXiv.org.
That’s an early paper from before they actually built their quadcopter, but searching for the authors will find you more.
(I’m not sure where I got the 19 neuron figure from, maybe an informal talk by Rus.)
I should say, robotics really is the exception to this rule. There are some very cool biomimetic robots these days! And kind of always have been, I think. Perhaps they're more eager to draw inspiration from nature because it's more obvious just how much we have to learn in that domain?
Just today I was reviewing an excellent paper on the design of efficient quadruped gaits. I was delighted by all the anatomy and physiology references, and it's clear these ideas directly led to their very cool discovery. I'm looking forward to seeing that one get published. 🙂
And, of course, there's the ALife research community, which is into life-like intelligence in all its forms. I just don't think of that field as AI. It often feels more like a reaction to AI's overly pragmatic, anthropocentric approach.
I always thought it was weird how people worshiped Chomsky for his political thought when his field was linguistics.
And I always felt like the thrust of his analysis was of the "what did you expect of the US?" variety. I resented that because it de-activates people.
So I felt like people who wanted to talk at me about Chomsky were more interested in words than actions.
Didn't know he was a creep, though.
@D_J_Nathanson When I hit grad school I started interacting with women who had worked with him. So none of this is surprising to me, and boy do I wish he wasn't lionized the way he appears to be.
I have now destroyed my collection of pencil drawings.
Not quite as "destructively calming" as when I deleted a 20+ year-old mountain of code made just for myself.
@datarama Destroying them simply to declutter, or is there another reason? Are you doing okay?
@sitcom_nemesis because making them was pointless, and looking at them is a depressing reminder of attempted cope for a wasted life
@sitcom_nemesis The code I deleted in 2024 was because it sat there like a taunting, depressing monument to an entirely wasted life.
@datarama Sounds like you've been going through a really tough time. I'm sorry you're feeling like this 😢 .
@sitcom_nemesis The last 3 years have been, with no doubt, the most miserable of my life.
I actually *miss* the other contenders for "most miserable" - the 1½ years of total isolation in the start of the pandemic, or being a little disabled kid who got beaten up all the time, or getting by life well under the relative poverty line while dragging myself through alcohol abuse withdrawal. All of those times were terrible too, but at least I had *something* that made life worth living.
@sitcom_nemesis It turns out I can cope a lot better with life simply being tough than I can with nothing in life remaining that makes it worth living.
@datarama Humans tend to work like that, iirc it's a theme that came up in Victor Frankl's book Man's Search For Meaning.
Is there anything else in life that helps make you feel like it's worth living, that's harder for Big Tech to get its grubby hands on? Friends, perhaps? Family? Stardew Valley?
@sitcom_nemesis I basically have no social life anymore. The pandemic destroyed that. The only people I see regularly are coworkers - though I do still have online contact with two of my old university mates. I'm long-term single and never had kids.
In 2024, I think I actually managed to burn myself out on Stardew Valley.
I'll be contributing to "Hybrid Minds" in Vienna later this month (registration at link below).
I'll provide some philosophical reflections on the idea that we can replace organic parts of a living being one by one with mechanical ones - and end up with an equivalent system. I call it the "Cyborg Myth." It is part of the machine view of the world that regards organisms as mechanisms and thinking as computation, but neglects the problem of biological organization.
@abucci yes, and yet it has substance, such that if you fly through it at speed it can cause turbulence...
So I see #Nolto doing the rounds as a supposed #OS #LinkedIn alternative
Judging from the Codeberg issues [1] and the self-professed ignorance of the prompter about fundamentals of federation and software dev [2], it seems highly likely that Nolto is vibe-coded
Superficially feature-rich but lots of loose ends and basic errors, this is a privacy and security disaster waiting to happen — I would not touch this with a barge-pole 😬
[2] https://codeberg.org/Tensetti/Nolto/issues/31#issuecomment-10292844
Addition: @sandorspruit points out that the readme discloses this clearly (in case there was any doubt)
It's interesting to me that the style of the disclaimer itself features the typical LLM-assisted hubris: emphasis on lowering cost (whose cost, and at what price?) and big words about "deliberate and manual" decisions, but relatively little to show for it.
I will take the hint and move on.
@dingemansemark Is this even allowed on @Codeberg? It might violate §2.3 or §2.6.4 of the ToS, as the LLM was trained on stolen work. That could make the generated text illegal too, though no court has ruled on that yet.
@dingemansemark
Since @JTensetti is here, perhaps you could ask him about it directly? They did ditch the loveable.app site recently, for one.
@elmerot
I think @JTensetti is busy enough as is, and to be fair the readme is clear on the "AI assisted coding". I still decided to post because I think many people discover Nolto not via its repo but through its slick home page and / or a widely shared blog post about leaving LinkedIn, neither of which mention anything about vibe coding or Nolto's incomplete federation features
Nature discovers a level below Nature Scientific Reports
> The vision of human-level machine intelligence laid out by Alan Turing in the 1950s is now a reality. Eyes unclouded by dread or hype will help us to prepare for what comes next.
you fucking idiots
@davidgerard That is a very strong claim by a bunch of people with not really any track record in the field.
Nature really has found a bunch of wunderkinder.
@davidgerard I sometimes wonder if these particular people really are on par with their LLM love interests
@davidgerard My money says Danks and Bergen corralled this piece together. And that UCSD is looking for funding.
Also, AI tools screw up math. Why are people still having this discussion?
@davidgerard the thing nobody ever mentions is that in that same article Turing discusses ESP as proven scientific fact.
which isn't a knock on Turing but should raise some reservations about taking the article uncritically
@davidgerard @cstross You fucking idiots indeed.
Turing sadly failed to realise that most people would very easily find any old crap indistinguishable from people. The Test only proves something about people, not about AIs.
Wake me up when they’ve got to grips with Searle’s Chinese Room.
@BashStKid @cstross The Chinese Room is a puzzle about the dangers of being a Chinese student in the vicinity of John Searle
@davidgerard @cstross “But, Professor, I can help you with your translation problem!”
(Searle sticks fingers firmly in his ears)
@BashStKid @cstross Searle was notorious as a sexual harasser of Asian students in particular
I really wonder if these people claiming human-level 'intelligence' from machines ever spent time programming an Eliza, and I suspect the answer is "no".
@davidgerard @jaztrophysicist Is there a reason why I can't access the link you posted? (I keep getting CAPTCHAs in languages such as Thai, Arabic, Greek, etc.)
@davidgerard Oh dear. I read the first five paragraphs and honestly cannot face any more of this. Am now strenuously resisting the urge to say libellous things about the authors.
at last, an ethical chatbot https://www.maiasa.ai/
Maiasa, an #AI for the future that is ready for anything you throw at it*:
https://www.maiasa.ai/
CALL NOW!
(via @davidgerard)
*) Maiasa's active vocabulary is limited to the single token "a", and it will reply to all prompts with only "a".
MESSAGE OF HIS HOLINESS POPE LEO XIV FOR THE 60TH WORLD DAY OF SOCIAL COMMUNICATIONS
His emphasis on face and voice is good.
Musk wants to merge SpaceX with xAI, then take it public
Or maybe Tesla
https://www.youtube.com/watch?v=VMKyocHwUJI&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20260202-musk-wants-to-merge-spacex-with-xai-then-go-public - podcast
time: 5 min 18 sec
https://pivot-to-ai.com/2026/02/02/musk-wants-to-merge-spacex-with-xai-then-take-it-public/ - blog post
@davidgerard Oh look, after parading his child around while he was destroying the US, he's planning to use another one of his babies as a shield to protect Twitter from the repercussions of being home to the for-profit CSAM engine. 'You can't investigate or prosecute me, US national security needs Twitter and rockets!'
Here's my list of established para-academic educational & research organizations:
https://docs.google.com/document/d/1q8HP1tMe1L42nZq-v_iB0dCIrGPYNe_sGeimtWep5x4/edit?tab=t.0
Related post: https://elftheory.substack.com/p/para-academia-is-the-future
I only add to the list when I stumble across things or when people make suggestions. I haven’t done any intentional research; I don’t even know what search terms I’d use, since “para-academic” hasn’t been enshrined & “alt-academic” has multiple conflicting meanings. Feel free to share & make suggestions, thanks!
Hello #fediverse! Thanks to my new DFF grant, I'm now looking to hire a PhD student to join me at AAU in Aalborg 🇩🇰 to work on "usable decentralization", i.e. on making distributed and federated cloud services accessible to the everyday user. For more details, see link below, and please don't hesitate to DM me with questions!
https://www.vacancies.aau.dk/phd-positions/show-vacancy/vacancyId/895183
#HCI #academia #getfedihired #decentralization #AAU #Aalborg #Denmark #selfhosted #selfhosting
Here's one point: masked thugs in mismatched camo are breaking laws that include abducting, assaulting, and executing people, and the regime is attempting to force us all to refer to this as "law enforcement". For all the many criticisms one might aim at US law enforcement, members typically wear blue, show their faces, and doctrinally aspire to behave transparently and professionally (please don't @ me with your critiques of law enforcement or hashtag ACAB comments; this is not meant as a defense, only a contrast. I've been beaten and teargassed by cops and witnessed far worse so I am acquainted with this aspect of US "law enforcement").
In other words the regime is attempting to change consensus reality so that Americans accept that the phrase "law enforcement" includes random, unprovoked breaking and entering by heavily armed masked men into homes or vehicles, assaults, and summary executions on the street, targeting all citizens. They want this to be a normal and accepted occurrence anywhere and at any time. Much as one might criticize previous administration's immigration policies---and one might really really criticize those---this attempt to shift consensus on "law enforcement" to include summary executions of anyone and all the rest is new (It is not new for certain groups; what I think Snyder is saying is that it'd be new for everyone to be targeted in this way, and for most people to accept that's just how it is now).
Snyder also points out that the border plays an important role here because it is where the country, and therefore the law, ends. In other words, it's not coincidental that the regime chose to elevate ICE. Historically, authoritarian regimes have a marked tendency to expand the lawlessness and indefiniteness of border zones to include the entire territory of the country, and the current regime is no different. Immigrants and immigration aren't the only targets here. The larger aim is to indefinitely suspend the rule of law nationwide by making the entire nation into a border zone (Recall the Texas governor kidnapping immigrants and shipping them to "blue states", a classic attempt to spread resentment of immigrants throughout the country). Again, this is a narrative move, an attempt to shift consensus reality.
So, one way to resist is to simply not accept either of these attempts to change reality. Continue to refer to what's happening as unacceptable, not who we are, etc. Continue to point out that out of control border "enforcement" has led to street executions. Continue to name these actions as the criminal acts of thugs. Continue to pressure people with power, such as lawmakers, to do the same. Not out of some misguided or naive nationalism or patriotism, but in order to keep a stake firmly planted in the ground against the forces attempting to move it. This is something we can all do.
Incidentally, all eyes on Haitian and Haitian-American people in Ohio over the coming weeks. The Haiti Temporary Protected Status designation ends this coming Tuesday, February 3, 2026. Haitians in Springfield were specifically abused during the Trump campaign. The US has a long history of abusing and dehumanizing Haitians dating back at least to Thomas Jefferson, so the Trump campaign rhetoric was no outlier or anomaly. Haiti is also one of the countries specifically named in the US State Dept's announcement about indefinitely halting immigrant visa processing. It would not be surprising if the next ICE "surge" targeted Haitian immigrants in Ohio given how the groundwork's been laid.
#USPol #ICE #history #authoritarianism #tyranny #resistance #resist #narrative #immigration #Haiti
In the schools and churches of Springfield, Ohio, people are making hasty preparations for a “large deportation” promised by the president. To all appearances, and according to local sources, the city is two or three days away from a federal ethnic cleansing, grounded in a hate campaign organized by the vice-president and American Nazis. The destined victims are ten thousand or more Haitians.
From Timothy Snyder's substack. Note the use of the phrases "ethnic cleansing" and "hate campaign", which are accurate. Snyder later states, regarding his use of the word "Nazi", "I use the word advisedly". It's well worth reading what he says about that. J.D. Vance's words led to a self-professed US Nazi group terrorizing Springfield Ohio. Snyder spells out how the hyperreality--the non-existent parallel and false world Vance set in motion and then Trump amplified--is turning into real-world consequences for real people.
#USPol #ICE #history #authoritarianism #tyranny #resistance #resist #narrative #immigration #Haiti
The Trump-Vance lie that the city had been “destroyed,” the notion of “carnage,” the dehumanization of immigrants— all of this creates the impression that their promised ethnic cleansing action would be a response to something, rather than a simple choice to exercise state violence against an invented racial enemy. These reversals are very important. It is important to consider this carefully.
#USPol #ICE #history #authoritarianism #tyranny #resistance #resist #narrative #immigration #Haiti
First, Vance reported that the Nazi propaganda campaign that he had himself inspired amounted to factual evidence. Mental chaos has been created where there was none before. And then that mental chaos becomes the justification for physical chaos: Trump’s “large deportations,” ICE raids that will, in fact, wreck an improving local economy. And once that physical chaos has been created, it will be blamed on the immigrants who are no longer there. Most of this has already played out. A key threshold, which it appears that we are about to cross, is the application of the state violence. At that point, so to speak, the lie is supposed to become “true.”
Someone noted that ICE agents are apparently now just seizing people's phones without more procedure, and keeping them.
Which is robbery, and even armed robbery since ICE is armed.
Is that actionable?
I mean, keeping the rule of law is a way to refuse the lie, as you wrote.
That said, my understanding is that if you are not under arrest, ICE has no right to access your phone unless they have a judicial warrant. You can file a criminal complaint of theft in the local jurisdiction where it happened (local politicians sometimes recommend this: https://www.bostonglobe.com/2025/06/06/metro/ice-immigration-rights-bystander/ ). The Alasaad v. McAleenan ruling found such searches unconstitutional. I don't know what followup rulings might have occurred or whether the consensus has shifted since then. You definitely do not need to consent to a search either of your person or of your phone.
Even if arrested, you are not required to speak to anyone other than to say "I wish to remain silent", "I do not consent to a search", and "I wish to speak to an attorney". You are entitled to contact an attorney. They will confiscate your belongings, but you do not need to tell them the password to unlock your phone.
Practically speaking, this is an agency that is willing to shoot people dead knowing that even the president will openly spread lies about what happened. They will almost surely threaten someone to unlock their phone and will take a copy of its contents regardless of whether they are legally entitled to do that. How much one resists is a question of how much danger one anticipates and what they are comfortable with risking. I'm definitely in no position to make recommendations about something like that.
@abucci The day Alex Pretti was murdered I noticed at least 1 man (could have been a woman) dressed in a red hoodie & a multi-colored knit ski hat (with a pompom) within the gang that dragged Mr. Pretti to the ground. This person was also masked & carrying an automatic rifle. The way these ppl are operating there's no way to tell who they are, much less if they're "law enforcement." The lack of transparency is perfect setup to infiltrate their ranks & create chaos
The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world—and the category of truth versus falsehood is among the mental means to this end—is being destroyed.
Truth and Politics, Hannah Arendt
Welp, don't consider an AI service such as ChatGPT to be a store of important content. Always keep a backup you control (which is good advice in general when working with cloud-based services)
> When I contacted OpenAI’s support, the first responses came from an AI agent. Only after repeated enquiries did a human employee respond
Err, yes, that is to be expected? We'll see this much more in the future. Expectations will shift.
https://www.nature.com/articles/d41586-025-04064-7
@paulmelis frankly the information security and research integrity implications of this irresponsible piece by a tenured professor warrant a stronger response than 'keep a backup'. Dumping coauthored drafts, grant proposals, email correspondence, and student data (!) in a commercial ChatGPT Plus account, while knowingly giving OpenAI consent to train on all of it for >2 years, amounts to violating every professional guideline and breaching the trust of everyone you work with
I guess I'll be doing some insulating. This is the longest, coldest cold snap I've experienced here and the bright side of it is that it's alerted me to a trouble spot as well as the limits of our current systems!
Developers say AI coding tools work—and that's precisely what worries them
Ars spoke to several software devs about AI and found enthusiasm tempered by unease.
https://arstechnica.com/ai/2026/01/developers-say-ai-coding-tools-work-and-thats-precisely-what-worries-them/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social
@arstechnica Did you actually talk to developers?
Because the ones I know are uneasy because the "code" produced by these tools is as suspect as the other "content". At best it's replacing some rote code construction, but mostly it's turning my workflow into bomb defusal.
Binance bitcoin bailout
it does not seem to me to bode well that they’re getting this nervous when the bitcoin price hasn’t even dropped below $80,000
@molly0xfff I don't really understand this situation. Is this connected to the continuous devaluation of the US dollar in recent weeks?
@raisondetredev it mostly reads to me that they’re nervous about sinking bitcoin prices, and hoping to bolster them by a) injecting $1B of demand and b) publicly declaring their plans to do so in hopes that it will trigger others to buy
@molly0xfff Injecting $1B demand is nice, but ... what's on the other side of that $1B? Isn't that supposed to hold up the stablecoin or something? Is the stablecoin now bound to bitcoin?
@henryk no, USDC is reserve backed. the seller either decides to hold the USDC instead of the bitcoin, sell it for some other crypto asset, or redeem the USDC with Circle. it changes the seller’s portfolio, but USDC remains reserve-backed.
@molly0xfff
Where do you think this is heading? Is the billion enough to get bitcoin to seem secure, or will it keep dropping?
I'm confused.
How could $1 Billion of fake, fiat currency prop up a real, stable hard asset like bitcoin?
Shouldn't that be the other way around?
@molly0xfff Bitcoin has already been superseded by more useful coins, but it doesn't matter anyway: it has no real-world use case beyond being a more efficient version of Western Union.
@molly0xfff $IBIT holding 65 Bn, Strategy ~35 Bn alone, not counting whales.
One Bn is like feeding a cherry to a cow.
@molly0xfff are they buying #Bitcoin? Did they lose funds? Or is this just because of an expected dip?
Frankly it reads to me like an admission of their own guilt, the "brazen theft" being a projection, mens rea expressed as legal action.
Spotify and the three major record labels sue Anna’s Archive for $13 trillion for “brazen theft of millions of files containing nearly all of the world’s commercial sound recordings”
#culture #music #AnnasArchive #spotify #UMG #WarnerBros #Sony #lawsuit
As AI tools for writing become more common, let me throw one more worry into the mix: Students who write well without AI assistance may be falsely accused of #plagiarism by teachers using imperfect tools to detect AI-assisted writing.
Update. This fear is coming true.
We tested a new ChatGPT-detector for teachers. It flagged an innocent student.
https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/
"Five high school students helped our tech columnist test a #ChatGPT detector coming from #Turnitin to 2.1 million teachers. It missed enough to get someone in trouble."
Update. Of course teachers sometimes make false accusations of #plagiarism even without relying on imperfect tools. Now that they're on the lookout for #AI-generated submissions, the rate might increase.
New study: "When given a mixture of original and general abstracts, blinded human reviewers correctly identified 68% of generated abstracts as…generated by ChatGPT, but incorrectly identified 14% of original abstracts as being generated."
https://www.nature.com/articles/s41746-023-00819-6
Update. More examples of this fear coming true.
Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers
https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/
Update. More examples of this fear coming true.
AI Detection Tools Falsely Accuse International Students of Cheating
https://themarkup.org/machine-learning/2023/08/14/ai-detection-tools-falsely-accuse-international-students-of-cheating
Update. More examples of this fear coming true.
AI-Written Homework Is Rising. So Are False Accusations.
https://www.thedailybeast.com/ai-written-homework-is-rising-so-are-false-accusations
Update. How often might this fear come true? A new study "found that [an] #AI text detector erroneously identified up to 8% of the known real [human-written] abstracts as AI-generated text."
https://www.sciencedirect.com/science/article/pii/S2153353923001566
Update. Here's another study showing that tools to detect #AI-written text are easy to fool with "simple techniques to manipulate the AI generated content." But this one goes a step further and makes the right recommendation for teachers and schools. "GenAI tools and detectors…cannot currently be recommended for determining academic integrity violations due to accuracy limitations and the potential for false accusation."
https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-024-00487-w
Update. More evidence that this fear has come true.
https://www.bloomberg.com/news/features/2024-10-18/do-ai-detectors-work-students-face-false-cheating-accusations
"Even…a small error rate can quickly add up, given the vast number of student assignments each year, with potentially devastating consequences for students who are falsely flagged."
Update. This fear has come true to such an extent that students who write well without #AI assistance now feel pressure to use AI to "humanize" their writing and avoid the charge of AI-assisted #plagiarism. #Grrr.
https://www.nbcnews.com/tech/internet/college-students-ai-cheating-detectors-humanizers-rcna253878
@petersuber this unfortunately has seemed like the logical conclusion. I wonder if the version control on most document editors could help? If a student is working in Google Docs, you could possibly tell whether they are copy/pasting or taking the time to type up the document. Of course, some could still generate a paper, type up a sentence or two, watch a YouTube video, and repeat, to make it look human. But that is work in itself, which raises the question of why anyone would plagiarize this way at all.
@abucci @petersuber I should have specified. I'm not surprised students have turned to this, given that schools have started checking through these means. I like the view that plagiarism, in the sense of wholesale stealing another's work, harms the culprit, since it defeats the entire purpose of education. This raises the question of the end of education: is it merely to prepare someone to work in industry, or to give the student deeper understanding and a foundation for further study?
Still, it's the framing I object to. This technology has been forced down the throats of everyone, students included, and if surveys are to be believed it's been against a large majority's will. I don't think we should require the victims of this, especially not overburdened teachers and their students, to take extra steps to cope with the consequences. That's an injustice we should not accept. But the inclination to try to think of technology solutions to this grander social problem is doing exactly that, in my view. By now technologists have demonstrated beyond reasonable doubt that they do not have our collective best interests at heart, and we should therefore not be accepting their tools as solutions to the problems they themselves are heavily contributing to causing.
ICE agents shatter window, leave 1-month-old baby, mother in car after Portland arrest
Video shows federal immigration agents leaving behind an infant and broken glass after detaining a Guinean immigrant with no known criminal history. From https://www.pressherald.com/2026/01/28/ice-agents-shatter-window-leave-1-month-old-baby-mother-in-car-after-portland-arrest
Can someone clarify, in academia and industry are LLM hallucinations the result of overfitting, or simply a false positive?
I'm beginning to think that hallucinations are evidence of overfitting. It seems surprising that there are few attempts to articulate the underlying cause of hallucinations. Also, if the issue is overfitting, then increasing training time and datasets may not be an appropriate solution to the problem of hallucinations.
If you trained an LLM with only demonstrably true statements, they would still output false statements. They have no representation of truth at the level of the sequences they emit.
"Hallucination" is therefore a human judgment about the human user's reaction to LLM output and is not reflective of any semantic content of that output. All LLM outputs, whether they appear to have semantic content or not and regardless of whether that aligns with the user's expectations, are thus hallucinations in a sense.
@abucci Whoa! This notion of interpolation threshold and double descent was a key insight I was missing. Looking at the timestamps of papers detailing double descent, most of them were published right after I had concluded my Deep Learning course. As I never worked in industry, I missed this development. My initial take is that this seems to explain the generation-by-generation improvements of these models.
Well, this is going to fill my reading list for the next couple of days.
@abucci What's really interesting about this is seeing how this is our original conversation coming full circle.
The tragedy is watching it develop despite all the challenges and shortcomings.
@wwhitlow it's not overfitting or underfitting; it's simply a result of the stochastic nature of LLMs and the fact that they're not able to know that they don't know. They have to give an answer no matter what, and even if the answer is wrong, they simply output the most likely thing, which can include "hallucinations".
@clf
I can understand how the stochastic nature of the model leads to this, given that it is a next-token predictor. Really, it's been about five years since I first studied these models in a formal setting. I've recently returned to giving them serious thought, and so I was curious how decisions about training affect overall performance. While not drawing a one-to-one comparison, I can recall how problematic overfitting can be for neural nets.
Who are these eminent philosophers?
Anthropic describes this constitution as being written for Claude and "optimized for precision over accessibility." Yet on a major philosophical claim there is a great deal of ambiguity about how to even evaluate it. "Eminent philosophers" is an appeal to authority. If they were named, it would be possible to evaluate their claims in context. As it stands, this is neither precise nor accessible.
@abucci honestly, the same thought kept coming to my mind as I was trying to make sure there weren't any details I had missed. Many of those references felt like how LLMs describe philosophical concepts: typically short on specifics, which are exactly the part that fuels and enables in-depth philosophical discussion.
@wwhitlow That's from here, right?
https://www.anthropic.com/constitution
They do need to name names.
@twsh Yes, that is the correct link. Details about Claude being the audience are in the preface. Screenshot and question about which philosophers comes from the section on Claude’s Nature.
A friend pointed out last night that WinRAR is more profitable than OpenAI and I can't stop thinking about it.
So it turns out every project with Gen AI generated code either had a team of human programmers spend more time debugging the slop code than would have been spent making it with zero Gen AI involvement...
Or the devs have totally lied about Gen AI producing the code. That's the torment nexus bot getting credit for human work, in order to shift capitalism toward a direction with no paid human labour. But at least 95% of the population needs labour income, so with billions more people unable to even buy a banana... profit?
@kimcrawley @davidgerard classic case of "we trained our bot on the entirity of a code base and the llm was kind of able to reproduce the code base in some way, with limitations, no real functionality, and we had to put in a lot of hours to kinda make it work". they really invented a terrible version of "cp ."
@ypislon @kimcrawley @davidgerard I will now be replacing the words "gen AI" with "lossy cp" in every AI coding pitch I read in the future
We don't tend to tolerate lossiness in text compression, especially when the decompressed form matters, but hey there's no time like the present to start!
@abucci lossy compression as in it's losing the compression humans intentionally put there in the first place
@kimcrawley@zeroes.ca @davidgerard@circumstances.run for translation, it's the same... It takes humans more time to clean up machine translations than it would take those humans to produce a clean translation from scratch... Unfortunately, in the case of translations, AI slop quality is deemed "good enough" by most companies. Hence thousands upon thousands of translators are out of work and we're all stuck with piss poor translations. Yay.
#SearX #SearXNG #SearchEngines #AlternateSearchEngines #MetaSearchEngines #web #dev #tech #FOSS #OpenSource #AI #AIPoisoning #AISlop #GenAI #GenerativeAI #LLM #ChatGPT #Claude #Perplexity
@abucci@buc.ci I didn't go through all of them. But what are the end results of these threads?
Are there no open source search indexes for open source search engines to use?
Why must they rely upon Google/Bing etc?
@colinstu@birdbutt.com https://marginalia-search.com is one. The index is not complete, though if you find something missing you can ask it to index it. It also has an indie web discovery page.
I myself use SearXNG. 4get is also a thing. But these are meta-search engines, not independent indexes.
There's a lot of talk of AI being anti-human, or at the very least anti-Humanities, but this is missing the point. The AI assemblage is fine with both people and pedagogies as long as they are docile. What AI is anti, and what it undermines, is any tendency to refuse or revolt.
@danmcquillan sometimes i think of it as capital's attempt to replace what people want, as citizens of democracy and workers of the world, with "customer service", in which ostensibly "the customer is always right" (hence the sycophancy and total incapability of saying "i don't know") but in reality the customer is subject to, as with advertising & PR, intense psychosocial engineering and their consent + intent + values are ultimately things to be funneled towards profit maximization.
@abucci @danmcquillan when i call it “anti-human” what i mean is that it is, in addition to the things already mentioned, an extinction machine via energy and resource consumption and waste.
@abucci Right. The conflict isn't between biological and computational but between conformity and imagination.
Perhaps the most (in)famous and illustrious American computer scientist and acknowledged principal pioneer of the discipline now known as artificial intelligence (AI), Professor Marvin Minsky of MIT, once pronounced—a belief he still holds—that ‘‘the brain is merely a meat machine.’’ It is significant that the English language distinguishes between ‘‘flesh’’ on the one hand, and ‘‘meat’’ on the other. The latter is dead and may be eaten, thrown in the garbage, fed to pigs, and so on. Flesh, on the other hand, is living matter and, as such, deserves the respect and dignity for life of which, among others, Albert Schweitzer spoke eloquently. The word ‘‘merely’’ in Minsky’s sentence means essentially ‘‘nothing but,’’ that is, also not deserving unusual respect. His statement is a clear reflection of a profound contempt for life that, as I see it, is shared explicitly by important sectors of the AI community, the artificial intelligentsia, as well as many scientists, engineers, and ordinary people. Daniel C. Dennett, an important American philosopher, once said that we must give up our awe of life if we are to make further progress in AI. (From Weizenbaum, Joseph (2007), "Social and Political Impact of the Long-term History of Computing")
What do these people actually mean when they shout that man is a machine? It is, as I’ve suggested, that human beings are ‘‘computable’’ (berechenbar), that they are not distinct from other objects in the world, in any way deserving of special respect or even attention. This then leads, at first gradually and then with exponentially increasing speed, to a view of human beings as mere objects who—no! Not who, that—can be exploited, inducted in killing machines, imprisoned, tortured, killed (providing they are ‘‘enemy combatants’’). It leads to the American military sponsoring programs to produce robot soldiers. What is then left of Norbert Wiener’s vision of the human use of human beings? And does not our world show us with utmost clarity how far we have already come? I want here to emphasize, especially to this audience, that all this is not the fault of the computer. Guilt cannot be attributed to computers. But computers enable fantasies, many of them wonderful, but also those of people whose compulsion to play God overwhelms their ability to fathom the consequences of their attempt to turn their nightmares into reality.
#AI #ComputerScience #human #life #TormentNexus #machinic
@abucci "Evil begins when you begin to treat people as things." Terry Pratchett, I Shall Wear Midnight
I grew up watching gritty 1970s police procedurals, so every time someone is talking about health care and mentions their PCP there's part of my brain that is like "Angel dust?"
1. Economists from the physiocrats (18th century) onward promised society freedom from material deprivation and hard physical labor in exchange for submitting to an economic arrangement of society
2. In a country like the US, material deprivation and hard physical labor have been significantly reduced since then:
@abucci we're all fucked, aren't we?
this is so self-evidently stupid i don't really know how to approach writing about it https://www.propublica.org/article/trump-artificial-intelligence-google-gemini-transportation-regulations
how deliberately ignorant of everything would you have to be to think this was a good idea?
this ignorant i s'pose
@abucci @davidgerard Trump regime level stupidity.
Perhaps I should have bolded the "self-evident" part in my response, because that's the keyword.
@davidgerard
That'll be why they're putting a data centre under the ballroom. So they can do this shit 'in-house' without any of that nasty oversight that might come from using more widely available LLM setups that are being forced into keeping chat logs (cf. ChatGPT, and I suspect Grok very soon)
It fits with the Government of one (plus favoured lickspittles) the US has right now
#AI will produce regulation in perfect bureaucratic language. The problem is that we need new regulation to fix new problems. AI only has old regulation as its example. Garbage in, garbage out.
@davidgerard What happens when you put people who don't believe in governance in charge of government?
“We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ,” he said, according to the meeting notes. “We want good enough.” Zerzan added, “We’re flooding the zone.”
"Flooding the zone" is not how I usually think of regulations. In fact, it's kind of the opposite of what you want with regulations
If it seems that I am critical and suspicious of people making the big noise to combat fascism, it is only because I saw those same people get restless and bored with Covid mitigation mere months into the pandemic and surrender to soft eugenics so they could resume brunches and travel. People do not have the attention span for big problems without quick resolutions and will dip out the second they feel inconvenienced.
So yeah, I share your skepticism.
i can't possibly move to linux you must understand my PC not booting is load-bearing for my work
@davidgerard I wonder if "potential fixes and workarounds" includes firing all the slopslinging vibe coders and hiring real programmers again.
@theorangetheme "exploring potential fixes and workarounds" means "we have no fucking idea either"
@davidgerard "We're considering potential options and courses of action, up to and including 'doing something', 'making changes', and even 'fixing things'. In short: we have concepts of a plan, yes."
@davidgerard I want a disembodied voice to intone this article every time someone tells me windows is "enterprise grade" and linux (on the desktop) isn't
@errant this is the hard evidence that windows is enterprise software
"exploring potential fixes and workarounds" means Microsoft has no fucking idea what to do here either
@davidgerard “on physical machines” like “oopsie we accidentally broke your computer, guess you better just rent a VM from us, tee-hee”?
@arestelle *better run it in a VM on Linux
@davidgerard if you really need to run it at all
@arestelle i find the main barrier to going Linux is That One Fucking App, and a VM is an answer that works quite a lot
you can often reuse your box's OEM licence
@davidgerard @arestelle What's really funny is when That One Fucking App is also what's preventing an upgrade from Windows 10 to Windows 11, let alone onto linux
@Jer @davidgerard wait idk which one is That One
my one app was my music player and I just run it on Wine
@arestelle @Jer in my case it was Kindle Previewer, which didn't work in Wine at the time but was happy in a Windows 10 VM
@davidgerard @arestelle @Jer Scrivener, which is "works or I die and I'm taking everyone with me," does not work on anything but Windows and macOS.
And I'll stab anyone who calls Apple 'Unix' or 'good.'
@rootwyrm @arestelle @Jer it *used* to run in Wine didn't it?
@davidgerard @arestelle @Jer never ran quite right at all, doubly so when working with network storage.
@davidgerard
The shit that happens when you fire many developers in favor of your own AI that no one wants to use..
Couldn't happen to a more deserving company.
@davidgerard Of course they don't. They got Copilot to vibe-code some random feature, it spat out something that mostly works but with the odd side-effect, and nobody knows how it does what it does, as it's impossible to review.
LLMs are biting them, hard. It's been fairly obvious for the last year or so that the (already shoddy) quality of #Windows has plummeted. 30% of code is LLM-generated? Join the dots, Satya.
@davidgerard that's why people don't switch to Linux. On Linux, you lose one holiday every time microslop pushes an update. /s
@davidgerard okay but in fairness, with Linux I have to do my own patching to make my computer unable to boot. That has been a lot of effort for me in the past, but I have occasionally pulled it off. MS is providing this as a service.
@brianpiper this is why Linux is not enterprise quality
@davidgerard @brianpiper the shareholders are not happy having to keep and pay in-house crews of experienced boot prevention engineers. 🤷
@davidgerard any time there's a tech headline like this with the word "botched" i can't help but think of https://www.youtube.com/watch?v=s3pk9CMwtN0
@davidgerard Have you tried throwing the PC in the sea and moving to a log cabin in the woods?
the palantir employees seem to have an acute case of the most obvious “hans, are we the baddies” syndrome, and it tears me up, really. so sad, poor fucking doves.
they should be ridiculed, laughed at, shamed, shunned and never employed by decent humans again.
coders for concentration camp it systems are no better than the gas chamber operators.
@mawhrin idk how common it is in other countries, but here when you graduate university you take an oath (though it's not binding) to not use the science, technology and knowledge you learned for harming people and society. They could have used that in their comp sci degrees.
On the bright side, lately I've been doing a lot of informal thinking while shoveling snow. I'm turning over an informal argument grounded on Chaitin incompleteness that if our physical universe has continuous space-time, then we must make non-computable leaps in our theories in order to increase the fidelity of our understanding. "Artificial scientists" running on computers will always have inescapable limits that don't apply to human beings. It's exactly the sort of wacky thing that makes for good shovel thinking: it passes the time, and there might be something in there that's more than passing theoretical fancy.
No grand ideas about the universe came to me, but I did realize something about Bénabou cosmoses I didn't know before so that's something.
Remember: when folks say an LLM "explained its reasoning", to a language model, explaining and reasoning are the same thing.
There's no actual reasoning. Just explaining.
Don't believe me? Ask an LLM to one-shot a complex problem without explaining what it's doing, and then try it again, but with the model showing its working.
Same model. Same problem.
Do it a few times. Compare the accuracy of the results.
The reason why multi-step explaining (let's not call it "reasoning" - that's a category mistake) tends to produce better results - fewer errors - is because each step is a simpler pattern that's more likely to fall inside the model's data distribution.
A complex one-shot is likely to be out-of-distribution.
That's all it is. Climbing Mount Improbable one probable step at a time.
You can produce similar results by getting the model to perform a sequence of smaller one-shots that mimic the steps in its explanations.
The advantage of solving complex problems this way is that you have more opportunities to provide external feedback and course-correct when the "reasoning" takes a wrong turn. Which, with many steps, is extremely likely.
Tall tales of long-form unsupervised multi-agent workflows that end up where you intended are... unlikely, shall we put it?
@jasongorman Isn't that exactly what something like Gas Town is?
@datarama In a very convoluted way, yes. It's an *attempt* at it.
The results appear to be... mixed.
@datarama And it's not AI itself. It's an attempt at automating external feedback, coordination and course-correction.
It likely falls into a trap that business leaders know very well - the difficulty of designing performance measures that balance complex and often competing needs and that accurately reflect the outcome you *really* want.
At just a few weeks old, and with no body of evidence as to its performance, I file it with all the other claims. I'll believe it when I see it 🙂
@jasongorman I don't really care much whether the thing driving a multi-agent flow is "AI" or a bunch of tools surrounding the AI, if the end result still is that I lose my job.
"you have more opportunities to provide external feedback"
Given how these products are being developed and deployed, it seems to me an appropriate translation of this is "you have more opportunities to fill in for the shortcomings in the product design and thereby convince yourself that you should pay a corporation when you use your own skills."
Putting on my CS hat, what you're describing is sometimes called a hillclimber (if it actually operated as advertised, which it really doesn't as far as I understand). Hillclimbers notoriously get stuck at local optima. Setting lots of independent hillclimbers loose on a problem is called a population hillclimber or parallel hillclimber. They tend to have fewer problems with local optima. Of course there are significantly better heuristics in this class that could be used instead, and in my opinion they would be used if the companies were serious about product design.
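For anyone unfamiliar with the terms, here is a minimal sketch of the idea. The toy objective function and all names below are my own illustration, not anything from the thread or from any real product.

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=200, rng=random):
    """Greedy local search: accept a random neighbour only if it improves f."""
    best_x, best_f = x, f(x)
    for _ in range(iters):
        candidate = best_x + rng.uniform(-step, step)
        value = f(candidate)
        if value > best_f:
            best_x, best_f = candidate, value
    return best_x, best_f

def parallel_hill_climb(f, starts, **kwargs):
    """Population/parallel hillclimber: run independent climbers, keep the best."""
    return max((hill_climb(f, s, **kwargs) for s in starts),
               key=lambda result: result[1])

# A bumpy toy objective with several local maxima; its global maximum is 1, at x = 0.
f = lambda x: math.cos(3 * x) - 0.1 * x * x

rng = random.Random(0)
# A single climber started near a local bump tends to get stuck on it.
stuck_x, stuck_f = hill_climb(f, 2.0, rng=rng)
# Many independent climbers from random starts usually find the global peak.
starts = [rng.uniform(-5, 5) for _ in range(20)]
best_x, best_f = parallel_hill_climb(f, starts, rng=rng)
```

The population version is just the single climber plus restarts, which is the cheapest known defence against local optima: each climber is still greedy, but the odds that at least one start lands in the global basin grow with the population size.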
we only know Gas Town supposedly even works in any meaningful manner from posts that sound like they're written on amphetamines
@davidgerard it really is remarkable. Yegge used to be quite bright and rational. I mean, I'm blown away by what the best models can do now, but there's this weird cult of burning tokens to produce as much code as possible, and Yegge is fully in the cult. Beads is hundreds of thousands of lines of barely coherent code seemingly tuned to allow agents to chew tokens at maximum velocity. I can only imagine how crazy Gas Town is.
So, naturally, crypto bros will throw money at it.
@swelljoe @davidgerard Oh, I did not connect the dots. Yegge is totally nuts, sure, but I hadn’t figured out that people are pushing this stuff because it allows them to bridge cryptobollocks and LLM bullshit.
Thanks for the epiphany. I must go lie down now.
I tried to follow Travis' advice (about the Gas Town post, not tiktok), but I made it about three sentences in before I gave up.
@davidgerard I tried to read through a post about it and couldn't actually work out what the fuck "Gas Town" was supposed to be in any sense, not helped by the random slop illustrations which seemed irrelevant. It was like the programming equivalent of Time Cube only at least Time Cube was a self-contained thesis rather than just a sprawling mess of nonsense
@j0ebaldw1n yeah, the nonsense is the product
@j0ebaldw1n @davidgerard Thank fck, it wasn't just me, I saw it on lobst.rs and thought I was going crazy.
@chrisp @j0ebaldw1n remember, Lobsters is meant to have the atmosphere of a *garden party*
one where AI salesmen are invited to talk over everyone else
@davidgerard Assuming the purpose of Gas Town is 'push the overton window so that "I'm only using the one LLM" becomes the moderate position' ...
... then it seems to me that it's working great
@davidgerard I don't think this is literally true in like a "Steve Yegge has sat down and thought about how he can shift the Overton window on LLMs" way, but I do think it's true in a vaguer "purpose of a system is what it does" way
@davidgerard Gotta be honest, it's hard to turn down this pitch: "Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from."
@davidgerard
This is my question.
Where are the code generated open source projects that everyone loves and uses and contributes to?
Where is the evidence of generated, working code?
If the productivity gain is so large, where is the clear and inspectable evidence of such?
@EricCarroll The claim I keep hearing is that everybody is making individual, bespoke software for themselves now.
So it's definitely there, you just can't see it.
@chris_evelyn @EricCarroll @davidgerard
I think making bespoke generative software will wind up causing more problems than it will solve.
@davidgerard it's real? I just assumed it was a particularly wild satire and/or an attempt to scoop up a little of the craziest money in the market.
Which is why I'm launching Poe's Lawyer, an LLM which detects sarcasm, satire and hyperbole and is fully integrated with the PoeCoin Futures Market: let the wisdom of crowds determine if you were joking or not and invest in a slice of the zeitgeist.
Gas Town? Gas Town is the thing that’s convincing holdouts that vibe coding is the future? Really?
@baldur what’s a gas town? is it this? https://github.com/steveyegge/gastown
@davidgerard @bobschi @baldur I thought never meet your heroes applied to hard-rock heroin enthusiasts and actors who are secretly sex pests. I'm so disappointed in this wave of high profile devs getting hooked on vibes. We live in the darkest timeline
@svines @davidgerard @baldur the writing’s been on the wall for the last two decades. i am just privileged enough i could ignore it for the first few years of that time …
@svines @davidgerard @baldur those two decades are for me, individually, from when i finished school and decided to study computer science despite all my doubts. basically because i couldn’t think of something better to do. reading the tim berners-lee bio atm, i think the problem was there way longer. *sigh*
@baldur Gas Town looks a lot like a straight up grift and a characteristic of grifters is that they know how to appeal to new marks; it's fundamental to their existence.
@baldur like, it’s weirdly impressive, but in that horrifying way where you can’t stop gawking at the audacity and incredible wastefulness
@baldur the "convoluted for its own sake, fun to get lost in the self-made hell of an architecture" pitch makes me feel like these people just need to discover videogames. i got back into Satisfactory this weekend and nobody else had to hear about my descent into madness. https://www.satisfactorygame.com
Never buy a OnePlus phone ever again. They now have a hardware anti-rollback fuse that blows if you revert to an earlier version or install a custom ROM.
https://consumerrights.wiki/w/Oneplus_phone_update_introduces_hardware_anti-rollback
@davidgerard people shouldn't buy OnePlus in the first place, because holy shit what garbage devices. Which get completely abandoned in 12 months or less.
@davidgerard
Something doesn't check out.
So there are fuses in Qualcomm SoCs that can somehow be blown from the software? Fuses that, if blown, _only_ prevent installation of certain operating systems, but have no other effect on the functioning of the device?
I have a hard time believing whoever wrote that piece knows what a fuse is.
@davidgerard I think the EU would tear them apart for things like this; ColorOS is meant for the CN market.
By the way, I just recently bought a OnePlus 13 (my third phone from them), and the build quality feels somewhat lower compared to the OnePlus One.
Thank you for the Info, I had been looking toward purchasing a one plus device.
Now one Plus and OPPO are off my shopping list.
@davidgerard
I didn't know anything about that device, but I'm guessing that preemptively replacing the fuse before ROM changes is not an option. Is that right?
@davidgerard
So if I buy a OnePlus and it gets a tiny scratch, I just need to attempt a downgrade and return it for a warranty replacement🤔
(Yes, I live in a country where they would have to prove that the fault is not said fuse to refuse a warranty claim).
@davidgerard Never buy anything Qualcomm again. They are the ones who put this mechanism in their CPUs.
"The anti-rollback mechanism uses Qfprom, a region on Qualcomm processors containing one-time programmable electronic fuses. These microscopic components are physically altered when "blown"; a controlled voltage pulse permanently changes the fuse's state from "0" to "1." This change cannot be reversed by any software means."
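For anyone curious how an eFuse like that actually gates a downgrade, here's a minimal, purely illustrative Python sketch of the general anti-rollback principle described in that quote. All names (`Qfprom`, `bootloader_allows`) and values are hypothetical; real implementations live in the bootloader and Qualcomm firmware, not in anything like this.

```python
# Illustrative sketch only: the general eFuse anti-rollback idea.
# Not OnePlus's or Qualcomm's actual implementation.

class Qfprom:
    """Models a bank of one-time-programmable fuses as a bit array.
    A blown fuse (1) can never be reset to 0, so the derived counter
    can only ever increase."""
    def __init__(self, size=8):
        self._bits = [0] * size

    def blow(self, index):
        # Physically irreversible: software can only ever set bits to 1.
        self._bits[index] = 1

    def rollback_counter(self):
        # The rollback counter is the number of blown fuses.
        return sum(self._bits)

def bootloader_allows(image_rollback_index, fuses):
    # Boot is refused if the image carries an older rollback index
    # than the hardware counter, which is what blocks downgrades.
    return image_rollback_index >= fuses.rollback_counter()

fuses = Qfprom()
fuses.blow(0)  # an update "burns" the counter from 0 to 1
print(bootloader_allows(0, fuses))  # older image → False
print(bootloader_allows(1, fuses))  # current image → True
```

The one-way nature of the counter is the whole point: once a fuse is blown, no flash tool or custom ROM can present an older index and still pass the check.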
@davidgerard My phones over the last 6 years have been Oneplus. Looks like I'll have to start saving for a FOSS one when it's time to replace it. :/
@davidgerard The Oneplus One was my first and last phone from them, pretty poor support even a year after it came out
@davidgerard Hm that narrows down the field of possible future purchases.
I was pretty happy with my OnePlus One, I must say. In fact it still works, but runs Ubuntu Touch. Part of the touch screen is broken, so I can only use it for dev stuff.
But yeah, they abandoned support for the phone as soon as its successor came out, so I cannot recommend them unless you're going to run, say, LineageOS on their phones. And with this anti-rollback move, they've said they're against openness and reuse... not going back.
@davidgerard My oneplus 6T has been one of my best phones, easy to install a custom rom and it lasted me six years. Sad to learn that it's no longer a reliable brand.
@davidgerard the concept of blocking a rollback isn't that new. Google Pixels couldn't roll back to Android 13 from 14 even with custom roms, as I remember
@davidgerard That's just anti-rollback protection and actually a pretty common feature on modern Android. They should have disclosed for which vulnerability they're burning it, and their software should have disabled the protection for bootloader-unlocked phones (maybe it does? Doesn't seem clear from the link to me). But other than that, this doesn't seem particularly surprising, and you'll likely see similar things from other vendors, too.
@davidgerard Hardware limitations like this suck. I loved my OnePlus (yes, including the OS) but had to switch because of the ridiculously short security update support. And the prices are no longer that good compared to other phone brands.
@davidgerard I don't use my phone much and usually have either my wife's hand me downs or some rando we got cheap.
My current phone is a oneplus nord n30. i've been meaning to put custom rom on it but haven't had the time.
Guess now I'll have to check to see if that will brick it.
I have already made the decision to just plan on paying top dollar going forward to have devices that are freedom respecting and want the device to serve me instead of the other way around.
@Jason_Dodd my fairphone 5 is nice, disadvantages: noticeably heavy (220g), not at all waterproof (the plastic back cover just pops off for your convenience) and the front camera's not good
other than that it's an extremely good high end phone
@davidgerard i'm thinking that when i get rid of this it will be a fairphone.
none of those negatives are issues for me.
@davidgerard seeing clippy and talking about phones compels me to ask, "what do you think of rossman's phone company?"
@Jason_Dodd hadn't heard of it until your comment!
i mean looks like a nice idea, i have no idea if anyone's made a list of obvious issues
@davidgerard i was thinking of changing providers when i'd heard of it. i'm slow and lazy so it will probably be a month or so but i might give it a try if for no other reason than i like to promote the general things he does.
@davidgerard @Jason_Dodd Although the weight is compensated for by all the money you don't now have weighing down your trousers after buying it. :-)
@denisbloodnok @davidgerard lol. I suppose when it comes to devices and companies continually treating us worse and worse, i'll opt for paying more when i trust that won't be the same.
@Jason_Dodd @denisbloodnok the fairphone is reasonably priced for the loadout, the 6 is £549 on the site
(mine was a gift i should note)
@davidgerard i agree. the only thing is that i'm usually on the trailing edge when it comes to devices. my main computer or phone almost always costs me less than $100. so me paying for a fairphone or a framework laptop is saying something.
@Jason_Dodd i mean if i didn't get this i was gonna get a second hand note 13 🙂 ~£170. laptops are ex-corporate kit off eBay for £300 total.
The HMD Skyline manages a replaceable battery and IP65 (find a video showing how it comes apart!), but it's closed-ROM with no known exploits - (would LOVE to be 'well, actually'd about this.)
#HMD #GrapheneOS
@Orb2069 @Jason_Dodd yeah fairphone 5 and 6 manage IP55 lol
fairphone sells phones with stock Android and e/os, and is not ROM-hostile (e.g. PostMarketOS is not official, it installs though not everything works yet)
We're meant to be getting 12" of snow or more Sunday going into Monday, but right now it's sunny. Forecasts keep changing wildly so I'm not sure what to expect yet. That much snow will mean 2+ hours of shoveling, which I'm not looking forward to I have to confess.
Doing the arithmetic, there are roughly 78,400 non-white people in this state. Under the assumption that the targets of certain federal agencies are not white, wouldn't searching Maine be a waste of resources? Unless those resources were not in fact intended to efficiently find people with unfavored immigration statuses, say. You don't go looking for needles in one of the biggest haystacks.
To be clear I am not saying other places should be targeted. I am saying that the facts speak to ulterior motives.
Fascists are "constitutionally incapable of being honest with themselves". Yes this is a deliberate choice of words I quote, written by a veteran who gained somewhat of an anonymous fame. Fascists are in denial of their insanity and illnesses.
The WWII losers who became #Fascists are mostly of German descent and noted on the image here... they and their ancestors have been sucking the teat of America's soil that showed them kindness when they were immigrants.... they sucked so much that they are obese, obtuse, slow and stupid.
#Maine was colonized by #French Jesuits. The southern border of the US was colonized by Spanish missionaries. #Minnesota as a land got caught amid wars between the French and German "explorers".
The #pattern that is emerging from the domestic terrorists working for ICE is that the NORTHERN border of the US is where most of the #DHS ICE raids are taking place rn, because mentally ill men are cosplaying their #intergenerational resentments on innocent people.
WHY?
The #Fascists are petty, unable to work with other people. they cannot form equitable partnerships. They hold onto resentments and ALWAYS blame other people for their problems.
Fascists refuse to take responsibility for their own shortcomings and weaknesses.
Mark #Zuckerberg, Trump / Drumpf, Alex Karp, Elon Musk, RFK Jr, "and other false chiefs" are ALL of German descent. Their mental illness prevents them from seeing that the hell they created will not "secure" their future at all but instead it will torture their kids and grandkids for generations to come... and it will be a hellacious torture their DNA cannot escape no matter how much money is hoarded; all that stolen money will exacerbate their hell.
It really is that simple.
Edited for clarity and to add some of the alt text. This image was human-created and compiled from various answers from Google #Gemini (it did not spit out this result without a human like me).
Question: what do AI companies claim that image models are for?
(As opposed to the slop, porn and fascist aesthetics they are actually used for...)
@danmcquillan Rendering. Specifically, rendering visual artists unemployed (whilst studiously avoiding saying that quiet part out loud)
@mark_f_lynch Absolutely. I'm just a bit vague on the narrative justifications for image models, as compared to LLMs (e.g. as a step to AGI, as a tool for automation & efficiency etc)
@danmcquillan I guess it's similar to the naive idea I had as a child when I decided I wanted to learn how to draw 'anything I could imagine'?
For DALL·E, OpenAI states: "easily translate your ideas into exceptionally accurate images".
@SylviaFysica yes, they can be sold as satisfying an impulse, but is that the way the industry justifies spending $$$ on developing & deploying them? LLMs can at least make lots of (false!) claims about economic and strategic potential. I just don't really get it for image models.
@danmcquillan Many companies use product photos (for clothes etc., this also requires models) or other illustrations for advertisements; and images-from-description could also be used for prototyping new products. So, I guess making these nearly free is a similar model to providing nearly free copy? (But I know nearly nothing about business.)
((And for moving images, film & game are of course also big $$$ industries, with promise for personalized products, but assume you meant static image gen.))
@SylviaFysica yes, those seem like commercial applications. (Reminds me of this use of genAI to "increase diversity" https://petapixel.com/2023/03/24/levis-to-use-ai-generated-models-to-increase-diversity/) . But it's hard to see that providing the rhetoric for massive investment.
@danmcquillan editing smiles into family photos :)
@livliilvil turn "that frown upside down" (and also people's jobs and cultural production 🤪 )
@danmcquillan I also suspect that there must be a whole cottage industry of people who are encouraged by those companies to use these models to produce low-stakes, ephemeral social graphics, like birthday cards, party announcements, etc.
Luckily, I’m not in such social circles myself, this is purely anecdotal and speculative. But I see this supposed embodiment of “everyday usefulness” as a deliberate strategy to manufacture consent for wider societal rollout, same as with LLMs more broadly.
@livliilvil yes that seems plausible - pervasive dissemination at a micropolitical level to legitimise wider changes. But I also don't see that being the rhetoric they could use for investment. Maybe I should research some SEC filings or something
@danmcquillan "art", as if they know what that is
@benofbrown Right. So perhaps a general claim to 'solve for creativity' (like their claim that LLMs 'solve for intelligence')?
@abucci I broadly agree - not so much with the rules of chess analogy but with the broader point about undermining any challenge to power. I just don't see how that was converted into an investment pitch for the $$$ spent on developing diffusion models. LLMs can at least make (false!) claims about productive efficiencies and flaunt the idea of AGI.
But I can't say I know for sure; I'm just spitballing. They've probably confessed some of their motives in their earnings calls.
@abucci Interesting point about the broader significance of latent space encodings. Fwiw, I think Matteo Pasquinelli would have addressed the relationship between AI and Marxist concepts like dead labor in 'Eye of the Master'. Yes, I realised I probably need to seek out some SEC filings to get honest answers about what they tell investors, although I doubt I'll get the time!
@danmcquillan Adobe claims that "From generating podcast thumbnails and graphics to prototyping complete brand campaigns, Firefly has the AI models and design tools you need to go from initial idea to final product."
I might be reading too much into it, but it sounds like Adobe is saying "we know you take pride in your work and that your bosses are asking you to produce high-fidelity designs for projects that probably won't go anywhere, so this will help you waste less time on that."
At least in the way I use computers as a low vision person, pop-ups are extraordinarily anti-accessibility. Yes, even tooltips and alt-hovertext, depending on how they're done. Some websites are close to unusable because of these things.
@abucci As a web developer we hate it too. Usually the way it works is we have to put an ad tag on the page that marketing uses to configure the pop-ups themselves 🙄
The one that was plaguing me today was on a shopping site, but wasn't an ad. It's become fairly common in product views to have a set of images of the product, with a set of thumbnails below or alongside the current main image. The one I was viewing changed the image if you moused over a thumbnail, and popped up a zoomed version of the image if you moused over the main image. The net result was that placing your mouse nearly anywhere resulted in something popping into your view and covering up a sizable proportion of the page. The frenetic popping and unpopping made it easy to lose track of where your mouse pointer is, which led to even more frenetic popping and unpopping. Since I use a screen magnifier most of the time, the popped up stuff took up 90% of the available screen real estate. The net result was deeply frustrating, and I closed the page and moved on with my life.
The individual features are all useful. I like being able to see several different images. Being able to zoom the image is nice at times. However, the way they were all crammed together was poor, at least for me.
Early in my career, about 2001, I worked for the Salt Lake Trib, in charge of online classifieds. I hated making ads.
It was the heyday of Adobe Flash, with ads that popped over the page and also had sound. I went to work one day, and the local auto dealership put a Utah Jazz paired ad screaming at the users when they loaded the page. 😱
I refused to even embed those ads, much less make them. Just fuqing no! 🚫
I redirected my career to software engineering after that.
One of my fave contracts was for a medical information company, to make all of their brochures screen reader accessible - about 2012 or so. I loved doing that so much! I learned a lot and it helped me later on, so I could speak to accessibility when I had more control over the software design. The earlier negative experiences reinforced my determination to design software with accessibility in mind. @abucci @WeirdWriter
I'm still getting comments along the lines of "why are you being so mean to somebody just because they're rhetorically defending Mein Kampf?" (Paraphrased, obviously.)
Being cast as the bad guy online just because I think Mein Kampf is beyond the pale and that people defending it know full-well what they're doing was not on my bingo card for 2026, but I guess it fits right in with how everything else is going at the moment.
@baldur Wait, actually Mein Kampf? It's really, really bad.
Years ago the Titanic (think German The Onion) had a critically annotated version: https://www.titanic-magazin.de/fileadmin/_migrated/pics/kritisch-pk.jpg
(It's the first page of Mein Kampf with almost every other word marked with a footnote. The footnotes are all the same and read "Note: Bullshit")
Nice to see a very visible person like @anilkseth converge, in every possible way, with things I've been saying for a while now:
https://www.noemamag.com/the-mythology-of-conscious-ai
Consciousness is *not* a matter of computation, but a matter of experiencing. #AI algorithms are unable to experience. They are simply not the kind of system that experiences anything. Autopoiesis is required. We may be able to simulate that, but simulation is not reality. To be real, organisms must invest energy to construct themselves.
And, most important of all: generating conscious AI would be a really, really stupid and irresponsible thing to do.
Where I still disagree: #AlgorithmicMimicry is also *not* real intelligence. True intelligence not only solves, but *frames* problems.
But that's a minor quibble.
This is going in exactly the right direction...
Here's some of my (mostly collaborative) writing on these kinds of topics:
@yoginho @anilkseth Nice piece! I have the same intuitions about “consciousness” in the sense of awareness (what it ‘feels like to be something’), though it’s ultimately an empirical question; but it’s far less clear that functionalism is wrong with respect to the other aspect of what it can mean to be conscious, namely possessing mental states (e.g., have a belief). It’s the latter that’s been the main target of computationalism.
Or to put it differently, for the kinds of tasks for which people are presently hoping to use AI systems, such as language or reasoning, there’s no real functional story on what consciousness in the sense of awareness even matters (consciousness just isn’t something people in those fields in psychology talk about, for example), so the argument is a little more restricted/limited than it might seem?
To possess a belief, to have mental states, you have to be a locus of experience (a self). I don't see how a purely algorithmic system can have that kind of state.
But, of course, it can do a lot of mimicry in the domains of language & reasoning without anything like a mental state.
The self seems crucial to understand drive, agency, judgment, creativity, and all of those aspects of experience (and intelligence) that are *not* computational.
But maybe I misunderstand?
@yoginho @anilkseth sorry, my first post was kind of confusing so I edited it to try and be clearer. But the standard notion of what it means to be a mental state is to either be a conscious state *or* an intentional state. So to reject functionalism (and computationalism) it's not enough to establish the idea that 'consciousness' can't be understood in computational terms, because that leaves the sense of mental state as intentional state (e.g., belief) intact.
That matters in this context particularly (I think), because our theories of language or reasoning in psychology typically make little to no reference to consciousness whatsoever...
in that context, you're offering an opinion that to have mental states you have to have a locus of experience, which seems to be just asserting that, in fact, there is just one way to be a 'mental state' - which runs counter to the standard construal in philosophy (on my limited understanding), so isn't something you can just marshal as a premise. At best, it's a really substantive conclusion that requires a lot of work to establish, no?
I like Terry Deacon's concept of "entention." It's like intention, but without necessarily requiring a brain. A state or behavior that is about something that is not currently present. Deacon argues that it requires what he calls "teleodynamics." It's roughly the same as autopoiesis or self-manufacture.
Combine that with Rosen's insight that self-manufacturing systems are not computable and, as we've argued, cannot even be completely formalized:
Then you get a pretty solid theoretical argument why ententional states cannot be explained by computation alone.
If I understand your argument correctly, you are not disputing that anything we may count as "conscious" state needs a locus of experience? Again, that locus can only be provided by a teleodynamic (i.e. embodied, self-manufacturing) system.
Computation is about rule-based execution of operations. I just don't see where there could be an explanation for ...
... any kind of experience or entention in that. And I have seen no convincing accounts from any computationalists in this regard.
Thus: doesn't the burden lie on them to provide some kind of plausible account?
Because self-manufacture *does* provide a plausible account for drive, agency, judgment, and creativity, which can only occur in a system that must invest physical work into its own organization to continue to exist.
This is, of course, not anywhere near a sufficient explanation for what consciousness is, or how it evolved. But, I believe, it provides some necessary first steps towards such an account, steps that are absent from any computationalist account I've seen so far.
In other words: computationalism fails at the first step: to explain agency, entention, and drive in primitive living beings. How are we ever going to build a higher-level account on that?!
at root, what I'm saying is that consciousness, functionalism, computationalism, and what AI does or does not do (e.g., is it just "mimicry") are not as tightly bound together as the way the piece seemed (to me) to suggest.
I personally can't see a notion of "consciousness" that I would attribute to current AI systems (at all..), but I don't think anything really follows from that with respect to functionalism or computationalism, let alone the capabilities of these systems.
I agree. That *is* the problem of mimicry & the reason Turing regretted publishing the Turing test.
It seems that we can make statistical systems that imitate behaviors to arbitrary degrees.
It quacks like a duck, it walks like a duck. So computationalists believe it *is* a duck. But you can't eat the duck. And it does not shit like a duck either.
Usually, such arguments are dismissed as "metaphysical" (a four-letter word!), but I think good metaphysics are crucial.
I think we don't agree beyond current systems not being conscious ;-):
the very issue to me is whether we should view these systems as, say, genuinely reasoning or as merely imitating reasoning.
For a computationalist, the idea is that we can understand the real thing in terms of computation....
to claim, against this, that the quacking isn't quacking and the walking isn't walking requires a substantive, non-question-begging argument.
I don't think that pointing out the lack of digestion is it....
and the obvious reason for that is that our standard concept of 'what it means to walk' doesn't rest on digestion in any way....
If one thinks that the lack of conscious awareness, sense of self or autopoiesis, or anything really, is, say, a bar to a system being taken to 'reason' or 'speak' one needs a specific functional role for that missing thing in that concrete behaviour. In other words, you need an answer to the question what actual role does consciousness play in reasoning, what does it do, what is missing without it?
without such a story, it's kind of all just assertion - or treating a substantive issue as a simple definitional one, at which point it becomes wholly uninteresting.
But such an account doesn't presently exist anywhere to my knowledge, and, in fact, there's quite a bit of empirical evidence in psychology to indicate that working out such an account would actually be difficult
The mimicry can work, almost to perfection, for the quacking and the walking. It's not embodied in the right kind of way.
So: I think you can reproduce language & reasoning to an arbitrary accuracy with computational systems that have no experience or consciousness at all.
I just don't see how convincing quacking & walking can then be used to argue for embodied experience or awareness.
There is no account that can do that, because computation is not what does that.
For me, to believe you can explain actual experience with computation is a fundamental category error. It forgets what computation actually is, and what it is supposed to do as a concept (the quacking and walking, alright, but not the meat on the bone, and the poop on the ground).
But: that meat *is* required for real duck-quacking and walking, in the end. It is what makes the computational reproduction of these behaviors mere mimicry.
Of course, this is a philosophical argument, so we can agree to disagree. But I don't see how we get a reasonable account of how experience and consciousness arise without making these metaphysical distinctions.
And, no. It's not a mere definitional game. Or, then, the computationalist or functionalist account is also just convenient definitions.
Without good concepts, no true insight. And concepts are socially constructed by human knowers, either way.
"the obvious reason for that is that our standard concept of 'what it means to walk' doesn't rest on digestion in any way" looks to me like asserting the consequent. Being embodied implies digestion of some kind is necessary. So this is not at all an "obvious reason"; in fact, saying so assumes away the argument entirely. In other words it's a metaphysical statement that excludes what's being discussed.
Why would digestion or metabolism be necessary? Because to walk requires (physical) work, which requires dissipating organized energy. The process of taking the organized energy and converting it to the work required to move is a breakdown of the organization: metabolism, digestion, call it what you like. Saying that walking has no necessary relationship to this is equivalent to saying that walking is not an embodied action, assuming away the entire argument.
only if you equate "embodiment" with "embodied specifically in an organism with the kind of metabolism that requires a digestive system"
I would maintain that that's not our everyday notion of walking....
Abucci, a concrete example: adult mayflies apparently neither feed nor digest.... (says Wikipedia....)
but I think we'd say they can walk?
no, I think I'm pointing out that walking, as a physical process, requires energetic input of some kind, but our everyday sense of the word walking is indifferent to where that energy comes from
I too am wanting to be precise, but, as it happens, I don't know of any "more precise" definition of walking than our everyday definition.
I'm more than happy to look at any definition you can find, but I would be kind of surprised if there are any, and I'd be even more surprised if they mentioned digestion.
Therefore, a source of organized energy must be available to the moving thing: when it comes to things that walk that's usually in a material form we'd call food. And, the moving thing must dissipate that energy in order to achieve an actual motion: when it comes to potential energy bound up in matter that's a process we'd typically call digestion or metabolism.
This isn't a question of the definition of walking, in other words. It's a question of how thermodynamics works.
At this point this thread feels like talking past one another, so this is my last post. Best regards!
so we are starting to converge here (as will be apparent if you take a look at the paper on reasoning, I linked to):
whether or not something is mimicry or 'just the thing' depends on there being something essential missing
I agree that you can "approximate" reasoning to arbitrary accuracy with a computational system without experience or consciousness, precisely because "consciousness" isn't (on any current empirical account) functionally relevant to its workings...
but then, by the same token, there is no basis for making that the peg on which to hang the difference.
We're happy to say a robot 'can walk' even though it doesn't digest... I'm saying exactly the same is true for 'consciousness' and reasoning. There's no functional connection between digesting and walking that would help it to pick out 'really walking' (as opposed to simulating it).
And the same holds, analogously for consciousness and reasoning, until someone comes up with a functional, linking story.
Did I miss the link to the paper?
"Reasoning" as in logical thinking *is* computational. Language, as well, is mostly transmission of regular (though often ambiguous) patterns. It's the contextual framing part that is not computational in nature. But that doesn't require consciousness either, just agency, as it happens in all living organisms.
At that level, I think we are making first steps toward a better understanding. But, indeed, it's a long way to consciousness.
Question: what do you mean by a linking theory? As you say, the dependencies between these capabilities may be loose and much more complicated than computationalists assume.
I'm using the word linking theory just for any account that spells out, in functional terms, how the two things are connected.
If we want to say something can't walk because it can't digest, then the linking account sets out what about walking is necessarily missing without digestion
likewise, what core features of `reasoning' are absent in the system without consciousness? If there are none, then consciousness doesn't matter to the determination...
the linking theory is the bit that spells out how and why it does
there's probably a better term.... ;-)
So many interesting questions in those links: arguably, you *do* need a metabolism for the evolution of locomotion & later on, walking. Walking animals then evolved technological mimicry of their own mode of locomotion. Maybe not a direct functional link, but evolutionary dependencies that are endlessly fascinating & intricate.
Challenging indeed, but much work to be done, & much to be discovered here, but all of it glanced over by a superficial computational paradigm.
I'm very much enjoying this exchange, so just a minor comment: you can keep throwing it in (your call!) but as someone who has spent the last 30 years working in that framework, I don't see that the computational paradigm is 'superficial', and I also don't think calling it that adds anything.
It's precisely because I think the computational paradigm is interesting and important that I've also always been interested in critiques (I read Maturana as a graduate student).
Clearly you and I disagree in how we see the framework, but that's kind of irrelevant, ultimately, as it's about what can actually be rigorously established.
I've (always) had every sympathy for the lines of thought that you primarily draw on. And that perspective might well be right. But the difficulty has always also been (to my mind) making the global `perspective' actually stick in detail. And even if or when it eventually does, it won't follow from that that computationalism was (or is) useless or, for that matter, can be readily replaced at the same level of detail with a different framework... but that's just me ;-)
@yoginho @anilkseth
or to put it differently, the first decade of my life as a cognitive scientist was dominated by three successive "paradigm shifts" regarding explanatory frameworks: from symbolic systems, through connectionism, to dynamical systems.
There was a wealth of interesting and important substance there, but there was also a huge amount of acrimony and partisanship that basically rested on the idea that "once my theory is complete it will be so much better than yours" while the existing parts of each actually covered only small fragments, and those didn't even really overlap so didn't actually meaningfully compete....
that kind of fighting on the basis of things that don't yet exist seemed a bit pointless to me...
Important clarification: computationalism is superficial (and an impediment to understanding) only if adopted as a worldview that excludes other views. It is an amazingly powerful tool to study natural systems and their behavior, if explicitly treated as a tool, with a clear view on its limitations.
It is one of many useful perspectives.
But it won't be sufficient to explain drive, agency, judgment, or creativity.
We need many tools. Not unifying theories.
> "Reasoning" as in logical thinking *is* computational.
so then I think we agree that we need to take the 'mimicry' issue on a behaviour by behaviour basis, which was part of my point about the duck...
we've then moved the argument from consciousness to 'agency' in order to make a decision about mimicry. For *that* question to be resolved by more than just stipulation, we need an account of 'agency', what bits of it LLMs don't have, and why those missing bits matter to the particular behaviour at hand...
happy to resume that another day, but have to go for now, so it's really just the shape of the argument I want to emphasise
RE: https://spore.social/@yoginho/115934974652346389
As it happens, the three quoted papers work out the beginnings of such an account, building on work by Rosen, Hofmeyr, Deacon, Varela, Maturana, Kauffman, Moreno, Mossio, and many others.
This organizational account is not perfect yet, but it is a more promising avenue than any computationalist or functionalist attempt I've seen. For one, it gives a pretty cogent explanation of what agency is and where it comes from.
And it also gets you a locus of experience, the prerequisite and precursor of "what it is like to be."
So: we *do* have a promising starting point. We need more people to see that. But computationalist ideology prevents them from even seeing that there is a problem in not being able to eat the duck...
when you say autopoiesis is required, do you think a swarm of machines that kinda eat asteroids to rebuild themselves could ever count as experiencing, or does it have to stay squishy cell-style self construction for you?
@yoginho I agree that experience is fundamental to minds, however, this piece does not really present any actual evidence for biological uniqueness, it just asserts it.
Predictive Processing is substrate neutral. That's almost the entirety of the argument here.
Yes, it's not perfect. @anilkseth still doesn't connect to organizational accounts of organismic agency (such as the one I work on, based on the work of many other excellent people since Robert Rosen first proposed it in 1958; see linked toot below). But that's an important piece of the puzzle.
It is nevertheless amazing to see him converge on ideas flowing from that account.
https://spore.social/@yoginho/115934974652346389
Here's some of my (mostly collaborative) writing on these kinds of topics:
Btw: it follows from the organizational account that predictive processing is NOT sufficient to explain the basic characteristics of life and the emergence of organismic agency. I write about that in this book chapter:
https://www.expandingpossibilities.org/12-the-world-is-not-a-set.html
and the last section of this appendix:
Bayesian inference is still computational, and priors remain undefined, not only in terms of their values, but what we should take as priors in the first place.
I highly recommend Kate Nave's "Drive to Survive" on that same topic:
https://mitpress.mit.edu/9780262551328/a-drive-to-survive
She explains, definitively in my opinion, why active inference and the FEP miss the point when it comes to explaining embodiment and agency.
@yoginho @anilkseth I'll check these out, thanks.
I would say that deixis is the fundamental operation though. And that doesn't require anything remotely intelligent, just the ability to sense that something is proximate. More advanced agents can classify the stimulus, but that's not required to operate on it.
Even a bacterium can classify food vs. toxin (not because it is contemplating the difference, but because it has evolved to do so). How do you enact affordances without this kind of relevance realization?
We've very explicitly argued that *all* living beings can realize relevance:
but non-living systems can't. And without relevance realization, no agency or consciousness.
@yoginho Yes this is adjudication, and it's ultimately what distinguishes biological entities from current artificial ones.
But first, there is designation. That is the beginning of all relevance, meaning, and cognition.
I'm developing a new framework combining designation and adjudication into a process I'm calling "somatic deixis" that makes qualia the origin of cognition, not its product.
Here's the introductory essay https://plus.flux.community/p/3068dd91-e040-405c-abde-1d0174114da6
Thanks for sharing. Will read! Long train ride on Sunday.
@yoginho Excellent.
I am looking to do some podcasts on these topics, I think we could have a great discussion!
In retrospect I might have written non-sense in place of nonsense.
If you're in tech the Han reference might be a bit out of your comfort zone, but Andrews is accessible and measured.
It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric in the form that we don't need to code anymore because LLMs can do most of it.

Programming is a form of theory-building (and understanding), while LLMs are vast, fuzzy data store and retrieval systems, so the theory-free ideal dictates that the latter can/should replace the former. But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the >200 years since Laplace put that idea forward. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it.

Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this is nihilistic).

#AI #GenAI #GenerativeAI #LLM #coding #dev #tech #SoftwareDevelopment #programming #nihilism #LinkedIn
I suppose to a certain extent this is the epistemological question of LLMs. Does data arrive at true understanding, or statistical associations? This insight seems to be growing in popularity, but now has to climb the mountain of sunk cost from investments. As such, many of these conversations are on the fringe of being impossible to even have.
The greatest irony is believing we have moved beyond these challenges, whereas the reality is we have merely stopped engaging with them.
> Does data arrive at true understanding, or statistical associations?

Maybe you've read him, but Han digs into this, and his answer is a resounding "no". He refers to a data-only view as "total ignorance". He cites Hegel while making this argument, so we've been around this block for over two centuries now.
> The greatest irony is believing we have moved beyond these challenges, whereas the reality is we have merely stopped engaging with them.

I couldn't agree more. There's a lot to engage with here, and it's frustrating at times that so many seem to be assuming the problems away rather than grappling with them. Among other things it's a wasted opportunity to learn and discover.
I have not read Han yet, but I should probably add his work to my list.
I consider it fortunate to find myself indebted to the Aristotelian metaphysics of hylomorphism, which, while it does not address all questions, certainly helps to ground contemporary philosophy in a realist tradition. In that regard, one of the shortcomings of LLMs is the belief that associations of words without experience constitute knowledge, rather than language deriving from experience.
Channeling Hegel, I think Han is saying that associations of words lack comprehension: there's no single concept that encompasses what the words are saying into a unified whole. So, word associations are accretive: you can only add more, never synthesize. In my mind it's a bit like gathering more and more 2-d points of a circle without ever realizing they lie on a circle; you have a growing mess of data with no comprehension, no finality or completion in the form of the circle. I think there's a case to be made that computers by themselves are not capable of bridging this gap in general: humans must be involved because we are able to make leaps of logic that are uncomputable. Which I suppose is a kind of experience.
I like that analogy.
Unfortunately? I never got into functional programming, so I had to look up hylomorphism as a CS concept. I have found that computational and philosophical terms often have a high degree of similarity, but not so much here.
Aristotelian hylomorphism supposes that substances are composed of matter and form. Wherein form is the principle of intelligibility and provides the end. Intellect extracts the form from material beings. Hence language alone cannot suffice.
I’m curious to know how often the distinction between the various types of algorithms has been made. There has been a lot of discussion about #ai, yet this is often used as a very generic term to describe everything from ChatGPT to AlphaFold, without much acknowledgment that the underlying algorithms are often very different. This difference seems like something that should be obvious, and yet it is not a common distinction in articles. Is this well known, or a distinction worth exploring? #philosophy
It seems to me that when we discuss #AI we are often discussing an application like ChatGPT. However, this is only one type of AI model. I would like to propose a three-fold distinction for discussing AI from a philosophical perspective. Each distinction is a different way of processing data, with time often being the varying factor. The three distinctions can be called data clustering, functional creativity, and process generative. #philosophy 2/
3/ Data clustering:
From a popular perspective, this is the least interesting of the distinctions. These models categorize new data points based on readily available historical data. Taking a large number of arbitrary data points, these models are able to organize them according to a degree of similarity. Then when new data points are added they are similarly categorized. These models find their true use in data science. Hence their lack of popular appeal. #AI #philosophy
4/ Functional creativity:
This is a more novel way to describe reinforcement learning. It acknowledges how models like AlphaGo and AlphaFold are able to make revolutionary advances through policy adaptation. The important prerequisites are a well-defined scope and a definitive evaluative framework. The game of Go and protein folding fit these qualifications well. The challenge with these models is the process of creating/defining the environment for policy development. #AI #philosophy
5/ Process generative:
These are models like ChatGPT and Dall-E. Their goal is to generate new responses to text or image inputs by the user. As such, they are trying to encode the entirety of language or visual representation. The recent success in this context has enabled the explosion of #AI services. The challenge of encoding all this information should explain all the shortcomings. These models operate based on relations learned from a large dataset. Everything develops from this dataset.
> generate new responses to text or image inputs by the user

I'd pick some fault with the choice of name "process generative". The cited artifacts seem to be neither of those things. They can be embedded into generative processes for sure, but it'd be the people who interact with them that make them processual and generative, in my view, not the models themselves. Below is the tl;dr:
I'd say the use of "new" in the quote above is load bearing. Technically and historically, the latent diffusion model underlying many image generators was developed to represent complicated probability distributions in a concise set of parameters. To use this model as an image generator, one must collect a set of samples of the image distribution one hopes to represent, and then apply a training procedure to develop the parametric representation. There are already several layers of representation happening here, each with a corresponding fidelity loss, but also a kind of "reality loss" if you want to call it that: the subjects of images -> the images themselves -> vector representations of images -> a probability distribution over the vector representations -> parameters representing distributions over vector representations.
Once you finally have the parameters in hand, you can then sample from the represented distribution. This is the "generation" step, and what you are referring to as "new". I'd argue both words are inappropriate here in any but their jargon senses.
Something like ChatGPT has a similar flavor, though it arose from sequence-to-sequence translation research, which is not explicitly about representing complicated probability distributions. However, implicitly that's what it's doing (it's a descendant of conditional random fields, which were more explicit about this aim). At base when you enter a prompt, you're drawing a sample from a conditional probability distribution over sequence space, conditioned on the prompt sequence (I'm ignoring the guardrails and other wrappers around the core LLM for brevity).
So, what exactly is "new" in a sample from a probability distribution? Arguably nothing. Users might be surprised because they were not previously aware that some particular sample was "in there". But it's the kind of surprise a street magician trades in with a two-headed coin, the kind of surprise that happens in a board game. What we generally think of as "new", "novel", "creative", usually happens in the realm before this stack of representations, not several layers deep in it. Or it plays with the representations themselves, rather than keeping them fixed and sealed. Or it knocks the board game over entirely. Or it comes up with some other thing I haven't listed.
What exactly is "generative" about a sample from a probability distribution? Also arguably nothing. Yes, "generative" is a piece of jargon used to mean roughly "draw a sample from". But if we imbue "generative" with a sense of open endedness, a quality we think human language and creativity, biological evolution and ecosystems, and political and social systems have, among other examples, then a probability distribution cannot be generative. It encapsulates what Leonard Savage called a "small world", and even he acknowledged there's such a thing as a "large world" and that it's inappropriate to apply these small world methods and concepts to the large world.
To me, words like "generate", "new", and "process" refer to the large world. There might be small world analogs, but those will always be missing something important.
I had failed to challenge the base assumptions. All the talk of #GenAI had led me to adopt that very language while trying to critique it. There is the reality that the prompt from the user is necessary, but all of the creativity is explicitly encoded into the dataset. I suppose then the generative aspect derives more from the trade secrets of dataset creation.
Thank you. This is where I am trying to think out loud and work on refining my own ideas and how to communicate them.
I think one of the many challenges with current discourse around AI is that it's functioning to draw people into a small world that is convenient for certain perspectives. It's as if a group of chess grandmasters succeeded in convincing everyone to settle all disputes over chessboards. I think we have to pay attention to this.
I'd say the creativity lies in the prompts and what the person does with the output, more than in the dataset. The dataset is dead, so to speak. It can't get up and dance for you. What's still live is the interactions that are made with it.
That is fair. I suppose the question is how much human influence actually goes into dataset creation. It may be naïve to hope that there is a serious effort at intentionality, and not merely a deluge of data.
This does, however, echo my original insight when ChatGPT first went live as a public facing website. That ChatGPT does not understand what it is outputting, it is the human reading the output that provides the context and understanding.
I started writing this book project using zettlr in markdown, because that seemed like the best way to incorporate many citations from a wild variety of journals/books/articles. But I am now having serious problems trying to integrate files.
Latex is hard but I know it quite well, I just don't have a latex editor that I like that's reliable (I'm not doing this on overleaf, it's just me, no sharing needed).
Recs? Advice? I'm running Ubuntu.
@sundogplanets yeah, I use Texstudio for this. Ugly but it works.
@sundogplanets since others will probably recommend better things, I'll recommend what I did for my Bachelor's thesis. You can convert Markdown to PDF, epub, etc. with pandoc. There are plugins for pandoc, like citeproc (you can search the web easily for "pandoc citeproc"), that add citations to your paper. There are other plugins that improve the table of contents, etc.
You can create script files that simplify the generation; I even made a Makefile back then.
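The pandoc + citeproc workflow described above can be sketched as a small script. This is a minimal illustration, not a tested book setup: the filenames and the BibTeX entry are hypothetical, and it assumes pandoc (2.11+) is on your PATH, so it skips the build step gracefully if it isn't.

```shell
# Create a tiny Markdown source with a citation key (hypothetical example)
cat > chapter.md <<'EOF'
# Chapter 1

A claim that needs a citation [@doe2020].

# References
EOF

# A matching BibTeX database (also hypothetical)
cat > refs.bib <<'EOF'
@book{doe2020,
  author = {Doe, Jane},
  title  = {An Example Book},
  year   = {2020}
}
EOF

if command -v pandoc >/dev/null 2>&1; then
  # --citeproc resolves [@doe2020] against refs.bib and appends a
  # formatted bibliography; -s produces a standalone HTML document
  # (swap -o chapter.pdf if you have a LaTeX toolchain installed)
  pandoc chapter.md --citeproc --bibliography=refs.bib -s -o chapter.html
  echo "built chapter.html"
else
  echo "pandoc not installed; skipping build"
fi
```

Wrapping the pandoc invocation in a Makefile target, as described in the post, keeps the command reproducible as the book grows.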
@sundogplanets I can also recommend #quarto for a slightly more streamlined experience for converting markdown to various formats (html, pdf, epub) using pandoc under the hood
@sundogplanets You might want to check out typst.app. It's a relatively up-and-coming LaTeX alternative. I think it will take markdown 1:1, and otherwise I found it to be just quite nice (from a programmer's perspective).
They have an online editor but you can just download it and run it locally as is (or integrated into vs code/codium)
Two years ago it was still a little immature for my thesis (I used a lot of TikZ). It seems they can take bibtex files but also have their own format.
@sundogplanets i also really do just wish there was something that was 1:1 like overleaf but offline because, honestly, overleaf rips
@nu I LOVE overleaf. I just don't want to do something on a cloud when I really just need it locally on my laptop!
sudo apt install kile on the command line, or use whatever tool you use to install software. I haven't written book-length things with it, but I have used it for articles, and I don't see that it'd falter on longer works. I don't have any experience using Markdown in it, only LaTeX; I don't know if it can even handle Markdown.
@sundogplanets can you say more about what you like? Which features, what ease?
@sundogplanets I recently started using LaTeX Workshop, an extension that can be downloaded for VS Code or VS Codium (both available on Linux). It has worked nice for me so far. I also know that there is a LaTeX editor called Setzer that can be found on Flathub for Linux, although I haven't tried it much and, from what I've heard, the download size can be a bit heavy.
@sundogplanets I can very much recommend Pandoc and its plethora of plugins and filters. I use Pandoc, CSS, and WeasyPrint to make PDF booklets, with single-file HTML renders from the same source and styling.
It's relatively simple, it's been around forever, and it won't lock you into just PDF like most LaTeX toolchains will. (Yes htlatex exists but no one who tells me about it has ever used it or wants to)
@sundogplanets maybe see if @mattgemmell's pandoc publishing setup might work for you.
Thought I'd mention LyX, a pretty popular and well-maintained cross-platform app.
I've used it -- it's nice.
And it includes "textclasses for scientific societies, such as AMS, APS, IEEE, or specific journals like Astronomy and Astrophysics"
@sundogplanets
You might want to continue with zettlr, just convert with pandoc, using a template like this one by @maehr :
https://pandoc-templates.org/template/academic-pandoc-template/
Did I say I was good at latex? HA HA HA I brought this on myself didn't I?
I want to get some writing done but I'm stuck in why-won't-latex-find-my-bibliography purgatory instead
(Not asking advice, it's too specific and I'm probably doing something dumb. Just yelling into the void about latex. Feels very grad-student-y)
@sundogplanets Latex is the best tool for procrastination when you should be writing
@sundogplanets all I'm hearing is "I'm looking for great latex extensions", so here you go: https://www.overleaf.com/latex/examples/latex-coffee-stains/qsjjwwsrmwnc
(I offer this in the hopes that it's humorous enough to relieve a bit of latex pain)
always had a love/hate relationship with latex. once you finally figure out what obscure incantation you missed the first 12 times, it can do anything.
I FIXED IT!! I SAID THE RIGHT INCANTATION!!
@sundogplanets LaTeX is great and it isn't, all at once.
The best rebuttal of LaTeX I heard was "I don't want to debug my document". Couldn't argue against that.
@sundogplanets I'm glad you fixed it before I spotted your post as I fear I may have struggled to avoid the unsolicited advice loop that's almost a requirement on social media. It would probably have involved a fountain pen, a very specific ink, almost certainly Clairefontaine paper and writing all the latex / markdown (or my personal demon, re-structured text) by hand.
Good thing we managed to avoid all that!
In other news. Migraine. And bored.
@sundogplanets And zettlr looks interesting. I'll have a look into that one, post-migraine.
@calum Hope your migraine lets up soon!
@sundogplanets I use Texmaker and JabRef
I wrote my first four books using LaTeX in vim:
I wrote a macro that I bound to F2 that expanded the current word to a template, so itemize<F2> would expand to a begin/end block with a \item and the insert point after it.
Beyond that, the main advice I’d give is to keep a very strict separation of content and presentation. Write semantic markup and define the macros that you use in a separate file (for example, I had a \keyword macro that italicised the word and added it as an index entry, and a similar one that expanded abbreviations and added the abbreviation as a cross-reference in the index).
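The content/presentation separation described above can be sketched in a LaTeX preamble like the following. The macro definitions are illustrative guesses at what the post describes, not the author's actual macros:

```latex
% main.tex preamble: load indexing support and define all semantic
% macros in one place, so presentation can change without touching content.
\documentclass{book}
\usepackage{makeidx}
\makeindex

% \keyword: italicise a term and add it as an index entry (as described above)
\newcommand{\keyword}[1]{\emph{#1}\index{#1}}

% \abbr: hypothetical sketch of the abbreviation macro; typesets the short
% form and cross-references it to the long form in the index
\newcommand{\abbr}[2]{#1\index{#1|see{#2}}}

\begin{document}
A \keyword{monad} is defined here and indexed automatically.
The \abbr{GC}{garbage collector} reclaims unused memory.
\printindex
\end{document}
```

Because the body uses only semantic macros, switching \emph to \textbf, or changing how abbreviations are indexed, is a one-line edit in the preamble.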
@sundogplanets not sure if it was mentioned anywhere else but typst https://github.com/typst/ is very good.
There is decent integration in a number of apps, they have their own web UI, and the tool at its core is basically a single binary with a decent, albeit not LaTeX-sized, package ecosystem.
@sundogplanets I like TeXmaker, but I've also used emacs with AUCTeX in the past
@sundogplanets
I like TexStudio so far, it has some autocomplete if I want but also doesn't get in my way with too many features.
Otherwise I've historically done vim, but that doesn't get you the side by side preview or quick navigation to the headings and labels.
TeXstudio also has optional wizards and tools to help you remember how to do things like insert images, and lets you select text and then change its size or style.
@sundogplanets
I'm probably late to this, but I'll suggest LyX. It's built on top of LaTeX and looks and acts mostly like a normal word processor. It can do anything LaTeX can do without needing to write markup, though it can do that too.
And yes, I have personal experience, having written a complete book in it complete with 800+ references.
It won't stop you hating bibliography management though :)
Amazing how capitulating to AI can be presented as bravely standing up to it. "Our new AI strategy puts Wikipedia’s humans first" https://wikimediafoundation.org/news/2025/04/30/our-new-ai-strategy-puts-wikipedias-humans-first/
@danmcquillan I am no AI fanboi but the policy seems ... sensible.
How is this capitulation?
> Our investments will be focused on specific areas where generative AI excels

It excels at making bullshit, porn, and spam at scale, when it's not greasing the wheels of oppressive government and corporate policy. Wikipedia is, or at least has been, the opposite of this.
> Supporting Wikipedia’s moderators and patrollers with AI-assisted workflows that automate tedious tasks in support of knowledge integrity

Nope. That's not how moderation, or generative AI, work. Fighting a bullshit generator's bullshit output is a formula for making moderation harder, not easier. Knowledge integrity is compromised if the bullshit generator can randomly place hard-to-find bullshit within the knowledge base.
> Giving Wikipedia’s editors time back by improving the discoverability of information on Wikipedia to leave more time for human deliberation, judgment, and consensus building;

Nope, that's not how AI works either. Generative AI is terrible at information retrieval, and terrible for information retrieval. We've known this forever. Besides Google's AI telling you to glue the cheese onto your pizza, there's research on this. See https://dair-community.social/@emilymbender/113422515033543544 , which links two academic papers and an expository op-ed. AI does not improve the discoverability of information.
It goes on and on and on like this, splattering AI booster rhetoric all over the place. To me it reads like a surrender document. Human beings managing human knowledge meant for humans should be holding the line against this anti-human, anti-imagination, anti-ecological technology, not embracing it and using the accelerationist rhetoric of its promoters.
@abucci @danmcquillan have you used AI tools selectively in an enterprise setting?
There is some genuine value, albeit limited.
If you have a look at my CV you'll see I am quite capable of building such things myself from the ground up, so this is not a position taken out of technical ignorance.
@matthewskelton @abucci imo this soft assimilationism is as much a vector for AI harms as Altman's rantings
@matthewskelton "LLMs capable of summarizing and generating natural language text make them particularly well-suited to Wikipedia’s focus on written knowledge" https://meta.wikimedia.org/wiki/Strategy/Multigenerational/Artificial_intelligence_for_editors are you kidding me
Good evening #forkiverse 👋🏼🌙 (it's nearly midnight here oopsie)
On today's quest for the average #searchengine listener:
What generation are you part of?
As always: Forklift this post to have it reach more people!
| Boomer: | 3 |
| Gen X: | 17 |
| Millennial: | 34 |
| Gen Z: | 3 |
Good morning #forkiverse 👋🏼🤗
Let's continue our social study of "who's the average #searchengine listener" in order to build our profile vs. The Hard Fork listener profile 😂
Next up: Are you an extrovert, an introvert, or an ambivert? (I'm classifying extroverted introverts and introverted extroverts under the latter.)
As always: Forklift this post to have it reach more people!
| Extrovert: | 12 |
| Introvert: | 33 |
| Ambivert: | 26 |
RE: https://spore.social/@yoginho/115898679347033969
As I am in the wintry mountains, I am listening to the vast sonic landscapes of Paysage d'Hiver (https://en.wikipedia.org/wiki/Paysage_d%27Hiver) and the emptiness & cold of Darkspace (https://en.wikipedia.org/wiki/Darkspace_(band)), which provide a deeply layered & beautiful atmosphere to work to.
Connoisseurs of black metal, do check them out!
Arrived in #Tschiertschen, my home, yesterday.
Reading in the sun, outside.
#WritingRetreat is off to a good start after some exciting adventures in France these last ten days. More on that later...
I'm still processing and recovering from my travels, which is where I feel the limitations of my #MECFS most acutely.
Technological power always seems to push into the head (neuroscience) and the reproductive system (genetics; eugenics). I think that's because the former is for better controlling your human resources (flesh machines) while the latter is for ensuring the reliable manufacture of more of them. The interest in "health", and surveillance generally, is about maintenance and maximizing ROI. Resources are to be developed into assets, after all, and investors (or their agents) need to ensure these assets mature optimally.
@abucci@buc.ci
AGI is what we used to call AI before OpenAI gave us their chatbot and redefined the term.
Edit: I neglected the important bit that throughout all this time, there have been notions of "weak AI" or "narrow AI" developing alongside "strong AI"/AGI/what have you. OpenAI was founded in 2015, and only rose into mainstream awareness after ChatGPT came out in late 2022.
@abucci@buc.ci Good luck with the job search! It's brutal out there.
I was just pointing out, from a layman's perspective, that what we (the public) used to consider "AI" -- before OpenAI's release of ChatGPT -- is now called "AGI" because the "AI" before OpenAI's birth has been redefined. That's all.
Thanks for the well wishes. It's definitely brutal out there (here?), and I'm old enough to have been around this block a half dozen times.
I appreciate videos like this one from Nature that collect expert viewpoints, but sometimes the experts should be challenged.
Jared Kaplan of Anthropic had some very misleading claims.
LLMs do not democratize access to expertise. It feels like they do because they sound like an expert, but only when you ask them questions in domains you don't know. Really, they're just making shit up, and you don't notice in areas you're not an expert in.
LLMs will not solve open problems in STEM. Researchers may use machine learning tools to do that, but ML is for finding patterns in data. It can't "solve" or make "insights." It only applies when we already have vast amounts of the right kind of data.
And if we want to talk about LLMs as a cybersecurity threat, we should talk about how vulnerable they are to attackers. Imagining a genius AI hacker is nothing more than a distraction!
Mustafa Suleyman of Microsoft was more realistic and more deceptive.
He talks about an LLM becoming an ever-present companion that knows everything about you and helps in your daily life. But, let's be real: companies model you to sell you things. Their LLM will help you use their products more and streamline your purchasing flows. The more you share your data and defer your decisions to them, the more they win. But does anybody actually want an AI doppelganger to live their life with / for them?
I also hate how he talks about responsibility. He says he puts society first, but that's not how companies work. Profits come first. At best, the concern for society is second, because it only comes up in the context of a profitable product. Also, yes, everyone ought to know this technology better. But it's absurd to put all the burden on them! It's your research, your product, and your responsibility. You are the only one who knows the details well enough to restrain yourself effectively!
@ngaylinn if it's truly the burden of the public to understand LLMs better, then open the code you cowards
@abucci Yeah, I agree.
The problem is that Anthropic really is a leader in this field, and we ought to care who they put up as their "expert." That perspective is very relevant! But we ought to be challenging that person's credentials, and explicitly contrasting them with an outside expert's opinion, since they have a clear conflict of interest.
Nature... sorta tried to do that? They did at least get some good independent voices. But they didn't allow any critique, just... putting out a variety of opinions, for viewers to make up their own minds.
That's not journalism, and it's not science, and that's a shame because Nature ought to be good at both.
#maine #CMP #PowerOutage
A couple years ago we went a good five days without power (with about a day of power in between two longer outages), but haven't had one since, till now.
I always give it a bit when the power comes back because more than once it's come back then gone out again shortly after. I think this outage might be over though.
The future lies in embracing statefulness and effectfulness in self-modifying code. Unlike the bureaucratic procedure of rule-based coding, this style of programming is more like surfing, or performance, or gardening. Your task as a programmer is to plant a seed of code that unfolds into something beautiful, possibly guiding it as it unfolds if you have the mastery. I'll leave as an exercise for the reader what the soil is in this metaphor.
(I'm only half joking!)
@abucci nah, I don't think so, but we do understand that one best. For all the fancy ideas we can have, if we don't know how they fit in we're just stumbling along. 😋
In that respect, I do agree with you that there is merit for it.
At this point, I'm grateful that I can still have a job at all. I believe this is largely what's driving the push of LLMs out into the world. Labor discipline.
Good morning from Kennebunk! Snow likely this morning, then snow and rain this afternoon. Snow accumulation around an inch. Highs in the mid 30s. Light and variable winds, becoming southwest around 10 mph this afternoon. Chance of precipitation near 100 percent. #MEwx
oh man Sentry is turbo fucking itself with chatbot code
https://x.com/jshchnz/status/2009372836419248263
this is an amazing text, every paragraph gets better*
-----
text too long for the alt text, here's the email:
From: David Cramer (12:52 PM)
Subject: Adopting AI at Sentry
To: Team
2026 is the year that we are asking everyone to get comfortable with using LLMs in their daily workflows. I want to talk about why that matters, what it looks like at Sentry, and how we're thinking about adoption.
If you haven't been paying attention LLMs have gotten to the point where they're actually quite powerful for a lot of tasks. They're by no means a cheap solution to many problems, but in a lot of situations we're optimizing for getting more done vs doing things the cheapest way possible. Think of this as Venture vs Private Equity: we're on the Venture side of the world, and we invest in growth. While my experience is primarily in using LLMs for engineering tasks, the same principles apply to other domains. I will focus on that for the sake of this conversation, but the message rings true for every department and every role.
You may know this already, but my role at Sentry is not contributing code; I do it because I enjoy it and it's really a core reason I'm even in this industry. It's the reason I built our MCP server, and recently I've even been able to start contributing to the core of Sentry again. What you may not know is I have not personally written code in more than six months. I've fully adopted agentic coding, and have been able to ship production changes to Sentry as well as other software with it. I've identified production security vulnerabilities, shipped internal tools, and brought a product to market fit - all by kicking off work in-between meetings and a few do-not-schedule blocks on my calendar. It's not always the most effective way to get the job done, but it's allowed me to multitask in a way that's never been possible before, and it's created productivity gains like I've never seen. It's not without its problems, but it's here.
Now is the time to get comfortable with using these tools yourselves. Many of you are, and some of you are in as deep as I am. Others may still be skeptical or just haven't taken the plunge yet. I want you to make the time to get comfortable with the tools, talk to your peers using them, and become experts. This is now a necessary skill for you to have in your career in order to stay competitive. It doesn't mean it's going to replace the skills you've already developed, but it is a huge boon to remove monotonous tasks, to improve your quality of life, to free up time for more interesting work. It _builds_ on the domain knowledge you've already developed and makes you much more capable in your role.
At Sentry we've been forward thinking here in the engineering org. We've unlocked budget, relaxed IP restrictions, and opened up the most cutting edge tools to our engineers. We've also been very intentional about adoption, but so far we haven't seen the organic growth we'd like in the company. About half of engineering is using AI tools in their daily workflows, but only a small subset of engineers have gotten truly comfortable with the state of the art. I want everyone to get comfortable with using the tools throughout your day - you don't have to go as far as I have, but you need to develop expertise in using them. It is quickly becoming a required skill.
Our focus in 2026 is full adoption across the company. I will be focusing on engineering, but the expectation is across the board adoption. We're going to be looking at how trends are going, trying to understand which tools are and are not working, and look for ways to accelerate education and adoption. This will rely on everyone at the company leaning in, both in making the time to learn the technology, going into it with an open mind, and being willing to help your peers. There's a lot of things happening and they're changing constantly, which means it's a continuous learning process.
One example of investment here is a tool I built over the last two days called Abacus. It's intended to help us understand adoption within engineering. I had to build this tool because nothing existed, and I was able to do it in two days because I am well versed at using Claude Code in combination with my engineering domain experience. We'll be using this to help gauge adoption - we want to see more commits touched by AI, more "average usage" of tools within the org. This project is a great testament of why the tools are valuable, but also I'm hoping folks within engineering and adjacent orgs get some value out of the visibility as well.
If you're in engineering and have ideas for what we can do here to grow adoption, to improve how we're using the tools today, reach out to ----- or -----. as they're good points of contact on this. There's also the #-----, for ad-hoc discussions. Again, this is not something that we can solve top-down, we will need folks to lean in and we are going to expect it.
If you're in another department, take the lead or work with your peers to learn how you can best adopt the tooling. I know many of you are, but I'm sure there's just as many that aren't certain what would help them in the day to day. The easiest place to start? Use that ChatGPT subscription we give you. If you've got something you're confused about, drop a question in there and see if it helps you. Maybe you're working on a new project that you're unfamiliar with? Have ChatGPT help you come up with a project plan. You'll be amazed at how fast the tools are progressing, especially if you tried them in the past and were not satisfied. The agents are a great peer.
Lastly, you may see big numbers when it comes to how much money we're spending on some of these tools. Make no mistake the numbers are not comfortable, and there will be a point where we need to address that. For now we're simply looking to stay ahead of the curve and understand what is working and what isn't, so we're being more relaxed with budgetary spending. This is especially true within engineering where some of us are spending more than $100/day on these tools. Yes, it's a lot. Yes, it's worth it. Yes, we will fix it at some point.
Pardon the typos, I've got more things to ship ;)
Best,
David
> I built over the last two days called Abacus. It's intended to help us understand adoption within engineering.
I think if I received this, I'd do two things
Start using a wrapper for git commit so that it always adds a co-authored-by suggesting I used Claude/whatever
Start resume polishing and looking to get the fuck out of dodge
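For what it's worth, the first of those is a one-file job. A minimal sketch as a git `commit-msg` hook (the trailer text and bot identity here are made up, and the hook file must be made executable):

```python
#!/usr/bin/env python3
"""Hypothetical commit-msg hook: stamp every commit with an AI co-author
trailer, whether or not a bot actually touched the code."""
import sys

TRAILER = "Co-Authored-By: Claude <noreply@anthropic.com>"

def add_trailer(message: str, trailer: str = TRAILER) -> str:
    # Append the trailer once, after a blank line, per git trailer convention.
    if trailer in message:
        return message
    return message.rstrip("\n") + "\n\n" + trailer + "\n"

if __name__ == "__main__" and len(sys.argv) > 1:
    path = sys.argv[1]  # git passes the path to the commit message file
    with open(path) as f:
        msg = f.read()
    with open(path, "w") as f:
        f.write(add_trailer(msg))
```

git invokes the hook with the message file as its only argument, so dropping this at `.git/hooks/commit-msg` and running `chmod +x` on it is the whole install.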
@davidgerard I can totally see Sentry management hop on this bandwagon too, because I need to maintain an on-prem Sentry instance and it's the most insane docker-compose clusterfuck I have encountered.
They ship _everything_ that anyone ever yoloed with little apparent consideration into the codebase. Depending on how you count, there are 3-4 database-ish things, a couple of message-broker-ish things, a distributed S3 clone (the on-prem version runs on a single machine usually) and the list just goes on.
@redsakana @davidgerard We just nuked our on-prem Sentry (in Kubernetes) a few weeks ago. It was always an unstable, barely functional pile of dung. The one team that needed the functionality is happy to wait until we can implement an alternative option.
@rainynight65
Sentry feels like a good example of Horowitz's dictum that the worst outcome for a VC fund is a company with a good, solid product—it's gotta be either unicorn or bust.
The core functionality (crash tracking) is very valuable, but piling on slopcode won't change the fundamental trajectory of a unicorn that has lost its rocket-rocket-rocket motor (this one or any other). Maybe the real value of slopcode for the big VCs is that it can make failed unicorns go out faster in a blaze of glory and spin out their remaining funds to the massively unprofitable slop merchants, while slopbros sing glowing eulogies of their valiant efforts to adopt AI (if only they had started earlier to get aboard the glorious revolution).
@davidgerard
@davidgerard To summarise: "There's a thing I enjoy, but I don't do it any more and now I am more Productive."
"Again, this is not something that we can solve top-down
...
we are going to expect it."
Their text-generating probability machine would never know the meaning of words.
@davidgerard To me, the most telling line was the last one: "Pardon the typos, I've got more things to ship ;)"
No time for attention to detail or correctness, he's too busy making stuff happen as fast as he can.
@davidgerard 'Yes, it's a lot. Yes, it's worth it. Yes, we will fix it at some point.' -- what on earth is he going to do when the bubble pops and the costs x10 on him (or x40, to take your optimistic figure)? How is he going to fix it?
@davidgerard "I was not hired to write code, but I do it anyway. Well, I don't do it, I ask a chat bot to do it for me and then take credit because I hate writing code."
These people have brain worms.
Aside from the other parts that are wrong with this - namely *everything* - who the hell takes 2 *days* to write something whose core could likely be done in 15 minutes, tops, as a log analysis filter for the log of commits to their repo?
"I tried to code this thing in the most inefficient possible way, which would normally take me weeks, but with the aid of AI I was able to do it in only 2 days!"
Buddy, I think there's a reason you weren't hired to code.
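To make the "15 minutes, tops" claim concrete: a commit-log filter of the kind described might look like the sketch below. The trailer patterns are guesses at what AI tools emit; this is not Sentry's actual Abacus.

```python
#!/usr/bin/env python3
"""Rough sketch of an AI-adoption commit counter: scan `git log` output
for AI co-author trailers. The trailer patterns are assumptions."""
import re
import subprocess

# Common AI-assistant co-author names; adjust for whatever your tools emit.
AI_PATTERN = re.compile(r"Co-Authored-By:.*(claude|copilot|chatgpt)", re.IGNORECASE)

def classify(log_text: str) -> tuple:
    """Given NUL-separated commit messages, return (ai_commits, total_commits)."""
    entries = [e for e in log_text.split("\x00") if e.strip()]
    ai = sum(1 for e in entries if AI_PATTERN.search(e))
    return ai, len(entries)

if __name__ == "__main__":
    # %B is the raw commit body; %x00 separates commits unambiguously.
    out = subprocess.run(
        ["git", "log", "--format=%B%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    ai, total = classify(out)
    print(f"{ai}/{total} commits carry an AI co-author trailer")
```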
I love and hate that I can make a complex, animated, interactive data visualization like this with matplotlib.
I can pick any trial from my experiment and play back the simulation in full detail. I can jump around, pause, and resume just by clicking on the figure. I can see nearly the full state of all the agents and their fitness as they evolve over time. It's very information dense, but useful, and it actually looks pretty decent!
On the other hand, the code is horrifying. Matplotlib has got to have one of the worst APIs of all time, and the animation tools are particularly gnarly.
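For anyone curious what the click-to-pause part looks like, here's a stripped-down sketch (toy sine-wave data, not the experiment above) using `FuncAnimation` together with the `pause`/`resume` methods matplotlib added in 3.4:

```python
"""Minimal click-to-pause matplotlib animation sketch (toy data)."""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
xs = np.linspace(0, 2 * np.pi, 200)
(line,) = ax.plot(xs, np.sin(xs))

paused = False

def update(frame):
    # Advance the wave a little each frame.
    line.set_ydata(np.sin(xs + 0.1 * frame))
    return (line,)

def on_click(event):
    # Toggle pause/resume on any mouse click inside the figure.
    global paused
    if paused:
        anim.resume()
    else:
        anim.pause()
    paused = not paused

anim = FuncAnimation(fig, update, frames=200, interval=30, blit=True)
fig.canvas.mpl_connect("button_press_event", on_click)

if __name__ == "__main__":
    plt.show()
```

The gnarly part in practice is exactly what the post says: once you add seeking (jumping to an arbitrary frame on click) you end up managing the animation's internal frame counter yourself, and the API gives you little help.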
gnuplot script that parsed the output log of the coev process, made a plot (replacing the contents of the gnuplot output window with the new plot), and then re-loaded itself to do it all over again. Modern tools might be gnarly but they are meaningfully less gnarly, it seems to me!
A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions.
(from https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/)
This is the probably inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).
Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom of course.
#AI #GenAI #GenerativeAI #LLM #OpenAI #ChatGPT #health #HealthTech
@abucci Very interested in your post. I've done a lot on my podcast (aiGED) to help the 65+ crowd know how to use and not use AI on health and medical issues.
An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.
These facts are not specific to images, videos, or 3-d models of cats. These are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized, and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.
With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?
This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.
@abucci don't want to spoil this, but the real world is discrete on a fundamental level. You don't have, for example, continuous energy states for things like single electrons. On the other hand our brains also only contain a finite number of cells to process and store information. So, I think your argument does not disprove the possibility for artificial intelligence.
That said, I don't think that LLMs are capable of intelligence.
I'm well aware of quantum mechanics, quantum field theory and so on. I'm no physicist, but my view is that quantization is not discretization of space and time, even though it's common to see folks confuse the two. The "basic stuff" of physics is continuous, even as phenomena we care about might not be. Likewise, a continuous guitar string has vibrational modes that are discrete; this fact doesn't mean the guitar string is discrete in nature nor that it vibrates in a discrete time the way a computer does.
Here are some examples illustrating why I say that:
The Schroedinger equation happens over continua: the wave function isn't over the natural numbers or integers, but over topological manifolds. This means in particular that if the wave represents a particle's possible positions, those positions could be anywhere in a continuous space. The wave function evolves according to continuous time, not a chunky time like a computer.
The strings in string theory (at least the variations of string theory I'm aware of) are continuous entities existing in a continuous space. They vibrate in different modes, which give rise to discrete phenomena analogous to guitar strings, but they can be located anywhere in a continuous space, and they vibrate in continuous time.
A free electron moving through a vacuum doesn't cha-chunk cha-chunk from one discrete point to the next; it moves smoothly along, as far as we know. The electron holds discrete increments of energy, which affects such things as which orbitals around a nucleus it is likely to occupy, but as far as we know there's no impediment to it occupying any orbital shape at all. The theory tells us where it's most likely to be, not that it is 100% forbidden from being in certain places. That, too, is continuous in nature.
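To make the first example above concrete, here is the time-dependent Schroedinger equation in its standard textbook form:

```latex
i\hbar\,\frac{\partial}{\partial t}\psi(\mathbf{r},t)
  = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t)\right]\psi(\mathbf{r},t)
```

Both the position \(\mathbf{r}\in\mathbb{R}^{3}\) and the time \(t\) range over continua; any discreteness shows up in the eigenvalues of the operator on the right-hand side, never in the domain of \(\psi\).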
Something like causal set theory is a proper discretization of physics. In that, the physical world really is a discrete set of points, and what looks to us like continuity is only an appearance that emerges from our vast size relative to the Planck scale. But causal set theory and its relatives are not accepted as standard physics. Perhaps they will be some day, but that day is not today.
I hope that helps clarify where I'm coming from.
@abucci first, let me thank you for taking the time to explain. Please forgive my unintended rudeness. I did not want to come across as condescending.
I do disagree with you on the point of everything having a continuous base underneath. But I will have to read up on that.
Disagreement aside, you gave me something interesting to think about. Thank you for the input.
I need US politicians to pay attention to the Polish PM's warning here:
> “an attempt to take over [part of] a Nato member state by another Nato member state would be a political disaster,” and “the end of the world as we know it.”
@baldur
Or what the former chief of defence staff Sverre Diesen wrote in an op ed in a conservative paper dn.no recently: ….