
Years before she became the peculiar central thread linking a double homicide in Pennsylvania, the fatal shooting of a federal agent in Vermont and the murder of an elderly landlord in California, a computer programmer bought a sailboat.
The programmer was known to friends, foes and followers as Ziz. She had come to the San Francisco Bay Area in 2016 as part of an influx of young people arriving to study the dangers that artificial intelligence could pose to humanity.
In one of the most expensive regions of the United States, however, it is difficult to save the world when you can’t make rent. So she bought a boat for $600 and moored it next to a friend’s vessel in a marina. For five years, she used it as an occasional, cramped bunk.
In her waking hours, she worked on a blog of provocative and increasingly extreme ideas about confrontation and retaliation. At night, she fell asleep as the boat rocked back and forth, drifting with the flotsam of greater Silicon Valley.
Then, on the night of 19 August 2022, her sister and a friend reported that they saw her fall overboard. The US Coast Guard and local authorities scrambled boats and aircraft. After a nearly 30-hour search, neither Ziz nor her body could be found.

A newspaper in Alaska, where she was born, published a short obituary referring to her by her birth name: “Jack Amadeus LaSota left our lives but not our hearts on Aug 19 after a boating accident. Loving adventure, friends and family, music, blueberries, biking, computer games and animals, you are missed.” She was 31.
Ziz’s ideas did not die in the waters of the California coast. Nor did Ziz. She had faked her drowning and gone underground, before being arrested last month in western Maryland and charged with trespassing and illegal transportation of a firearm.
The targets of Ziz’s ire, who include some of Silicon Valley’s most prominent intellectuals, have taken security precautions. “Ziz is not stupid,” someone familiar with her, who asked to remain anonymous, told me. “This is a very smart person – both smart and crazy.”
Ziz’s writing had polarized members of a niche but influential movement of AI theorists and tech bloggers who call themselves the “rationalists”. The movement is less about specific ideas than it is about an ethos – applying rigorous, mathematically informed thinking to AI, philosophy, psychology and the big questions of our time.
Rationalists are odd, though often charming, people. They tend to be fantasy and sci-fi geeks, use lots of jargon and think intensely about things other people barely think about at all. They debate with earnest and deadly seriousness, and their preferred arena of intellectual combat is dense blogposts, often with footnotes.

Some in the rationalist community saw Ziz as a kook, even dangerous. But she had enough detractors and admirers to earn a school, of sorts, that an opponent dubbed the “Zizians”.
Very few people had ever heard of the Zizians until this January, when a US border patrol agent pulled over two young people, dressed in black, driving a Prius hybrid near the Vermont-Canada border. The ensuing shootout killed a federal officer. It also left one of the alleged shooters in custody and the other, a math prodigy who had formerly worked as a quant trader in New York, dead.
From there, the story grew stranger. Reporting by Open Vallejo and other outlets found that the Vermont pair had ties to a group of leftwing anarchists in California – including one who won an $11,000 prize for AI research in 2023 and was also arrested this January for allegedly murdering a landlord.
A few things drew those people together: all were militant vegans with a worldview that could be described as far-left. All were highly educated – or impressive autodidacts. Most were also, like Ziz, transgender. But what they had in common, above all, was a kinship with a philosophy, which Ziz largely promulgated, that takes abstract questions from AI research to extreme and selective conclusions.
In reporting this story, I obtained exclusive chatroom logs that chart the Zizians’ radicalization and ultimate acceleration into violence. I examined thousands of words of blogposts, court filings and other documents, and spent weeks interviewing people familiar with Ziz and her circle.
Ziz has not been charged in any killings. Yet acquaintances are unsettled, and former teachers frightened of their apostate pupil. Many sources requested anonymity due to safety concerns – “it’s just, you know … murder cult,” one person said – or a desire to speak freely about the rationalist and AI-risk communities.
How, exactly, did hyper-intelligent young altruists – who studied at Oxford, Waterloo and Rice, won academic prizes and research grants, and spoke sincerely of bettering the world – enter a trajectory that has ended with at least six people dead? What would cause a former spelling bee finalist to write in a chatroom discussion of having “dramatic fantasies about becoming a knife murderer” – and then, a year later, allegedly participate in an attempt to stab someone to death?
The answers lie in a strange saga of idealism and disenchantment: a violent collision of internet culture and the real world – and perhaps a harbinger of more uncanny tidings to come.
Decades before Ziz was born, some philosophers and computer scientists began to predict a day when computers become truly and irreversibly smarter than humans. They called this event the “singularity”.
Because computing power generally improves at an exponential rate, and because a true AI would also learn and improve, some AI theorists thought that the singularity might arrive sooner than most people understood. In their view – which not all AI researchers share – the arrival of superintelligence would be like a tsunami: a ripple that rapidly builds, before anyone notices, into a towering wall of water.
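The compounding that worries these theorists can be shown with a few lines of toy arithmetic. The numbers below are invented purely for illustration: they contrast ordinary progress at a fixed rate with a system that also speeds up its own rate of improvement.

```python
# Toy numbers only: why a "ripple" might become a "wall".
# A system that improves its own rate of improvement compounds
# much faster than one improving on a fixed schedule.

capability, rate = 1.0, 1.0
for step in range(10):
    capability += rate   # make progress at the current rate
    rate *= 1.5          # the system accelerates its own improvement
print(round(capability, 1))  # 114.3 after 10 steps, v 11.0 at a fixed rate
```

After ten steps, the self-accelerating system sits at roughly ten times the capability of one improving at a constant pace, and the gap widens every step, which is the intuition behind the tsunami analogy.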
Eliezer Yudkowsky, a burly man with a dark, bristly beard, grew up in a Modern Orthodox family in Chicago. As a precocious child in the 1990s, he became a voracious sci-fi reader. At a time when computers were still running on dial-up internet, he was particularly interested in the future of AI. An atheist who permanently rejected his family’s Judaism, he also wanted to flesh out a philosophy that could provide ethical frameworks without religion.
In the 2000s, Yudkowsky began building on the work of earlier AI theorists. In a series of blogposts, he argued that the tsunami was coming – and would remake everything in its tidal path. By the time he was 20, his writing had won the attention of AI academics, who accepted him into their ranks despite the fact that he had never attended high school.
Today Yudkowsky is regarded as a leader of the “doomers”, a faction whose members believe that superintelligent AI will be unambiguously bad for humanity and perhaps even cause our extinction. That wasn’t always the case.
At first, Yudkowsky believed that the singularity had the potential to be the best thing that ever happened to humanity. In the world he hoped to bring about, a benevolent, centralized, god-like AI, sometimes called a “singleton”, could end hunger and poverty and protect the human species for eternity. But that AI, unless designed carefully, could also prove to be disastrous to humanity.
Researchers call it the “alignment problem”: would a superintelligent AI be hostile or benevolent? And is there any guarantee that its understanding of benevolence aligns with ours?
In 2003, Nick Bostrom, a Swedish philosopher, illustrated the risk of unintended consequences with what became a classic example. Say you program an AI to make paperclips. The AI is smart enough to not only make paperclips but learn new, better ways to make them. It consumes more and more resources to flood the world with paperclips. The AI resists efforts to switch it off, since that would conflict with making paperclips. Perhaps it even decides that humans, who are made of carbon, would be good raw material for more paperclips.
Bostrom’s point was that there is no reason to assume that artificial superintelligence, even if designed by humans for human ends, would think at all like a human; its thinking might be even more alien to ours than that of an actual extraterrestrial alien. In the worst nightmares of people like Yudkowsky and Bostrom, AI doom looks like an omnipotent version of Hal, the computer in 2001: A Space Odyssey that starts jettisoning humans from a spaceship’s airlock when they conflict with its sense of mission.
As he tried to proselytize the benefits and threats of AI, Yudkowsky was frustrated to encounter disagreements that leaned on what he viewed as fallacies. So he started writing blogposts explaining logic and decision-making with a statistical method called Bayesian inference. These essays were edited into a corpus – 2,100 pages in one version – that rationalists refer to, somewhat reverentially, as the “Sequences”.
Yudkowsky was trying to teach people how to think better – by guarding against their cognitive biases, being rigorous in their assumptions and being willing to change their thinking. Although rationalists tend to be polite enough, some, including Yudkowsky, hold that it is in effect impossible to “agree to disagree”: that if two “rational agents” who share the same assumptions come to different conclusions, one of them must be wrong.
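Bayesian inference itself fits in a few lines. Here is a minimal sketch of the kind of update the Sequences teach – the scenario and numbers are made up for illustration, not drawn from any rationalist text:

```python
# A toy Bayesian update: revise a prior belief in light of evidence.
# All figures below are illustrative.

def bayes_update(prior: float, true_positive: float, false_positive: float) -> float:
    """P(hypothesis | evidence) via Bayes' rule."""
    p_evidence = true_positive * prior + false_positive * (1 - prior)
    return true_positive * prior / p_evidence

# Prior belief that a claim is true: 1%. A test flags it as true, with a
# 90% true-positive rate but a 9% false-positive rate. The posterior is
# still only about 9% - the cognitive bias the method guards against is
# forgetting how much a low prior ("base rate") matters.
print(round(bayes_update(prior=0.01, true_positive=0.9, false_positive=0.09), 3))  # 0.092
```

The point of drilling this, in the rationalist framing, is that beliefs should move by exactly as much as the evidence warrants – no more, no less.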
An online community, including many people who work in tech, gathered around the blog Yudkowsky founded in 2009, LessWrong. Soon, contributors began writing their own essays. The rationalist movement was born.
Yudkowsky felt that there was still a larger, untapped audience. In 2010 he started publishing Harry Potter and the Methods of Rationality, a 662,000-word fan fiction that turned the original books on their head. In it, instead of a childhood as a miserable orphan, Harry was raised by an Oxford professor of biochemistry and knows science as well as magic; at Hogwarts he is assigned to the smart kids’ house, Ravenclaw, instead of the jocks’ Gryffindor; and he saves the world by embracing a manipulative, dark streak that might shock JK Rowling’s Potter.
Yudkowsky wrote the series in part to recruit talent for the alignment problem, and he succeeded wildly. The series was so popular that it birthed fan fiction of its own. Thousands of people around the world read it, including a young geek, interested in atheism, veganism and utilitarianism, who later took the name Ziz.
Two organizations would shape Ziz’s thinking – then become bedrocks of her rage. In 2000, Yudkowsky founded a thinktank that would later be called the Machine Intelligence Research Institute, or Miri. And in 2012, a group of rationalists founded a daughter organization of sorts, the Center for Applied Rationality, or CFar. Both are based in Berkeley, near Silicon Valley.
While AI theorists at Miri worked to bring to life a beautiful and safe singularity, CFar would promote rationalist thinking, build a social scene and perhaps help to feed bright research minds to Miri.
The rise of those two organizations also coincided with the emergence of the “effective altruism” movement. As a school of thought, effective altruism has now become tainted by association with its most famous poster child, the crypto entrepreneur and convicted fraudster Sam Bankman-Fried, but at the time it was catching fire among earnest techies.
Founded by some young philosophers at Oxford in the 2000s, effective altruism revitalized interest in the moral philosophy of consequentialism – the idea that ends do, to some extent, justify means – and its stepchild, utilitarianism, which holds that the goal of human endeavor should be to do the most good for the most people.
Soon, those movements began to build a pipeline of money toward research on existential risks to humanity’s future. Billionaires such as Peter Thiel, the co-founder of Paypal, and Dustin Moskovitz, of Facebook, gave funding for non-profit AI research.
Like many tech mavens, Yudkowsky is an advocate of “transhumanism”, the theory that technology will one day enable humanity to transcend the limitations of our bodies. He was deeply shaken by the death of his brother, in 2004, and his work reflects a cosmic horror at any sapient brain ever being extinguished.
Transhumanists believe that scientific breakthroughs will not only eventually allow us to live longer, but perhaps also give us a kind of immortality, by creating digital versions of our minds that live on after we’re dead. Some also believe that computer power will eventually become so strong and cheap that those digital minds can each live in their own fully realized virtual-reality worlds.
In this cosmology, an AI machine-god could give each of those minds utopias of their own choosing. But it could also choose not to – or, worse, subject people to eternal torment. A movement of atheists had found a new vision of heaven, and of hell.
Ziz’s writing would come to be consumed by visions of damnation. But when she emailed a CFar listserv back in May 2015 to ask for leads on programming jobs in the Bay Area, she had a chipper tone that would be unrecognizable a few years later.
“I think it’s about 50% likely I’ll be wanting one, starting in early August,” she wrote. “What’s the best way to find one? Preferably in Berkeley so I can not-inter-city-commute to alumni dojos and Berkeley meetups?”
She sketched a CV of sorts, with positive reviews of coding projects she’d done and work experience (internships at Nasa and Oracle), and noted that she was currently a master’s student in electrical and computer engineering at the University of Illinois.
Ziz was exactly the sort of person who would be attracted to the rationalist scene. She was a computer wonk who’d grown up in Alaska with a father who was an AI researcher and instructor and a mother who was a school counselor. While still in middle school, Ziz and some friends hacked their school’s payroll system to award money to their favorite teachers, a former teacher recently told the Boston Globe, and cut the salaries of ones they disliked. The teacher also said that Ziz had problems with emotional regulation.
(I contacted email accounts associated with Ziz but did not receive a response.)
Ziz’s writing recounts, with anger, what she describes as her mother’s reluctance to accept her trans identity, and speaks of going through puberty feeling that her genitals were an “alien parasite” on her body. She also had a visceral aversion to eating animals, and became a vegetarian by 2010, before coming to regard vegetarianism as an unacceptable half-measure and embracing veganism.
After receiving a bachelor’s degree in computer engineering at the University of Alaska, Fairbanks, Ziz moved to Illinois with the apparent intent of pursuing a PhD in computer science or physics. But the lure of the Bay Area was stronger. A rationalist scene had flourished there, and people were flocking to the area to take workshops at CFar and apply for jobs at Miri and other AI organizations.
It was an exciting time to be in Berkeley, Rachel Wolford, a startup founder who used to consider herself a rationalist, told me – “a place filled with people like me, who shared my desire for skeptical interpretations of the world. I felt welcome, partly because it was a place where people tended to be non-neurotypical in the same way I was, and partly because they had similar ideologies about the way you should approach figuring out what you believe.”
At an early CFar workshop, Ziz described the “comparative advantage” she could offer the world, according to someone present, as a willingness to do good through means other people might consider bad.
“She’s just very intense,” Octavia Nouzen, an acquaintance of Ziz’s, told me. “I mean, the word that comes to mind is just ‘intense’ – just, like, a super-penetrating gaze.” A rationalist who met Ziz during this period found her “nice enough but pretty quiet. They [Ziz] didn’t tend to come out for social events very much. They kind of just holed up in their room.”
Ziz was also cognizant of her own challenges, and her writing reflects some anxiety about being able to communicate well with others; at one point she took a seminar in “Authentic Relating”. Yet if Ziz was unusual, she did not, at first, necessarily stand out to her peers. Eccentricity was common in the Berkeley scene, even expected, and a core tenet was that people needed to feel free to discuss strange ideas, in good faith, without judgment.
There had been unintended consequences, however, to Miri and CFar’s efforts to gather the best and brightest to solve the alignment problem.
A lot of eager young idealists were showing up who fit a certain mold. They were hyper-intelligent but not always wise, and had spent dizzying amounts of time on the internet. They were information sponges with a tendency to get sucked into rabbit holes, people with ambition but not always execution.
These were people who had grown up reading Harry Potter fan fiction about rationalism by the glow of a computer screen, then quit families who didn’t understand them – or were in some cases abusive – to come to a mecca where they hoped to find intellectual parent figures and stimulating work. They’d spent lonely, sometimes traumatized, childhoods reading about the heroism of others; now they could finally be protagonists.
But not everyone made the cut. Miri, CFar and other organizations could not give jobs or research grants to everyone interested, and the Bay Area is hideously expensive.
Ziz’s money worries started almost immediately. Coding jobs she’d been promised fell through or didn’t work out. She did several rounds of interviews at Google that never went anywhere and had to ask for financial help from her parents. She stayed at some expensive short-term rentals, and one sublet where, she wrote, the landlord was abusive.
Around this time, she also had a traumatic experience. A man saw her walking at night and offered her a ride. The encounter turned into what she described as a sexual assault.
Rationalists in the Bay Area often encountered financial problems. A common fix was a “rat house” with anywhere from four to nine roommates, often in violation of tenancy laws. Even then, some people struggled to make rent, and tense social or power dynamics could develop if one person with a well-paid tech job was fronting rent for everyone else.
At their best, rat houses were fun places to be an intellectually minded idealist. At worst, rationalists told me, they were like halfway houses of smart but dysfunctional people, too fired up solving the alignment problem to do dishes, unaware that they may have been, in some sense, in the discard pile of AI-risk research.
So when Ziz met Gwen Danielson, a 23-year-old fellow rationalist, she was intrigued to learn of her offbeat solution to high rent: living in a sailboat, for just the cost of mooring. Danielson offered to let Ziz join her on the boat.
Like Ziz, Danielson had abandoned formal schooling – a full ride at Rice to study ethical AI, according to the San Francisco Chronicle – and, like Ziz, was trans and avoided eating meat. They even looked so similar, Ziz later wrote, that “strangers assumed we were siblings”.
During their long exchanges of ideas, Danielson said she was an otherkin (someone who identifies as nonhuman), Ziz wrote, specifically a dragon:
They [Danielson] showed me a dragon-shaped necklace, and said it was a reminder of how they would turn into a dragon after the singularity. And eat their human body, since that seemed like the most fitting way to dispose of it. I said I’d want mine burned once I could escape it.
Danielson was shy, people told me, and gradually became so deferential to Ziz that others were concerned. Ziz found living on Danielson’s boat less than ideal, however: Danielson talked to herself constantly.
So Ziz bought her own sailboat and moored it nearby. She decided to name it the Black Cygnet. A cygnet is a baby swan; in popular theories of knowledge, a “black swan” is an unexpected, tectonic event that seems obvious only in retrospect.
In her blog, Ziz later attributed her radicalization in part to her housing problems. “I came to see … artificially high housing prices as something that was crucial to escape for anyone who wanted to actually try to save the world. Who wouldn’t accept a 90% probability of [AI] doom.”
Ziz’s willingness to talk bluntly about these frustrations – high rents, bad landlords, trouble finding a job – earned her blog a following. In an insular community in which many people believed that airing dirty laundry could harm the cause, she attracted credibility among other young, leftwing rationalists.
Despite her unhappiness, Ziz’s worldview hadn’t yet hardened into cynicism. She could be kind, too. In an essay online, Zack M Davis, a rationalist who later drew Ziz’s anger in an intellectual disagreement about gender identity, mentions in passing that when he had a stress-induced psychotic break, in 2017, she dropped off chocolates – “allegedly good against dementors”.
Around this time, Ziz and Danielson dreamed up a project they called “the rationalist fleet”. It would be a radical expansion of their experimental life on the water, with a floating hostel as a mothership. They raised some money for the project. On 1 July 2017, Ziz sent an email to a CFar listserv:
We need a total of 5 people who are willing to serve as crew moving a 94’ tugboat (currently named Caleb) from Ketchikan, AK, to the Bay Area … For crew members with no nautical experience, all direct expenses including transportation to Ketchikan, from the Bay back to their place of residence, food, and direct incidentals would be covered.
They found some crew members, including one, Dan Powell, who had nautical experience from a stint in the US navy. On 20 July, they set out from Alaska. It went mostly without incident, though one crew member who found Ziz’s assertiveness off-putting debarked early.
The ship successfully made it to San Francisco, but a leaky second world war-era tugboat wasn’t the brilliant investment Ziz and Danielson had believed. The US Coast Guard declared the ship a “threat to the public health” and demanded an improvement plan. Powell lost tens of thousands of dollars on the project.
Ziz continued blogging. She made friends in the rationalist scene, especially among a group of technically minded trans women. They included Alex Leatham, known as “Somni”, and Emma Borhanian.
Leatham had studied math at UCLA and UC Berkeley and seemed to be a vagabond. Someone who went to high school with Leatham, in an upper-middle-class suburb of LA, recalled her to me as “really smart, beyond genius”, conspicuously bored in most classes and “extremely socially awkward”, but part of a group of math-geek friends. Her yearbook quote was Ich aufsteigen: “I rise.”
Unlike Leatham, Borhanian had a conventional day job, for a time. She was a software engineer at Google and made good money. In 2017, she donated $12,000 to Miri.
Jessica Taylor, a former AI alignment researcher at Miri, told me she was loosely part of that group for a while. In late 2017, she had a nervous breakdown. “I went kind of psychotic,” she elaborated in a recent YouTube interview. Afterward, “Ziz offered to, like, help repair my mind, in exchange for information,” she said. “I’m in retrospect glad I declined.”
Taylor may have been lucky. Leatham and Borhanian would become two of the most extreme apostles of Zizian ideas. Today, one is incarcerated, and the other is dead.
One of the traits that distinguishes humans from machines is our ability to live with contradiction. Arguably, we need nuance – even if that flexibility also allows a certain amount of moral hypocrisy. Many of us would consider it murder if someone harmed our cat or dog, yet eat meat. We raise money for a neighbor with cancer, and blithely scroll past a news article about a cholera outbreak in Sudan that sickens hundreds of people.
But Ziz, according to her writing and to people who know her, has an engineer’s obsession with taxonomy and consistency – albeit “consistency” that often involves leaps of logic. Once she comes to a particular conclusion, she applies it literally, maximally, and with confidence impervious to restraint. Her writing is contemptuous of the idea that actions should be judged right or wrong merely because laws or social norms say so.
Her personal philosophy also draws heavily on a branch of thought called “decision theory”, which forms the intellectual spine of Miri’s research on AI risk.
Decision theory studies how “rational agents” behave in situations of uncertainty. To guess how a superintelligent AI of the future might act, we can model its thinking with decision theory. Or an AI in the distant future, trying to work out what its human creators would have wanted it to do in a situation, might run a similar prediction to “ask” us.
Because we and the future will be effectively communicating with each other through these predictions, some rationalists do not believe that our relationship with the future is linear. To a certain sort of person, such as Ziz, the schools of thought that grapple with these hypotheticals pose exciting questions. Or frightening ones.
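In its textbook form, decision theory’s basic move is simple: a rational agent picks the action with the highest probability-weighted payoff. A toy sketch, with actions and numbers invented purely for illustration:

```python
# Expected utility, the core calculation of classical decision theory.
# The actions, probabilities and payoffs below are all made up.

def expected_utility(outcomes):
    """Sum of probability-weighted payoffs for one action."""
    return sum(p * payoff for p, payoff in outcomes)

actions = {
    # action: [(probability, payoff), ...]
    "cooperate": [(0.8, 10), (0.2, -5)],   # likely modest gain, small downside
    "defect":    [(0.5, 15), (0.5, -10)],  # coin-flip between big gain and big loss
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # cooperate 7.0
```

The hypotheticals that fascinated Ziz arise when the probabilities themselves depend on what other agents – including future ones – predict you will do, which is where the simple calculation above stops being simple.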
In 2010, a writer on LessWrong published a notorious thought experiment that became known as “Roko’s basilisk”. The gist of the convoluted hypothetical was that, in the distant future, a superintelligent AI might decide to punish people who had been able to bring it into existence sooner, but didn’t – rich people or, say, AI researchers. Roko was suggesting that the future could blackmail the present.
Some readers supposedly reacted with panic, believing that merely by having become aware of the hypothetical, they had been condemned to an eternal, AI-administered hell. Yudkowsky was furious that Roko had even posted the theory, in part, he pointed out, because blackmail only works if the person being blackmailed is aware of it. He deleted the post, and banned discussion of it for several years.
The essay is sometimes cited as an example of what Bostrom, the Swedish philosopher, has called an “infohazard” – information that is innately dangerous. Today, rationalists tend to react with embarrassment if Roko’s basilisk is brought up, and dismiss it as a silly thought experiment that should not have been taken seriously.
Yet Ziz did. She mentions it often in her writing, in a way that suggests intrusive thoughts. Similarly, she seems preoccupied with moral purity, to a point verging on obsessive scrupulosity. She describes her veganism in misanthropic terms – Zizians call meat-eating “carnism”, and non-vegans “flesh-eating monsters” – and in one essay recounts her anxiety at discovering ants in a shower. She weighs the costs of being late to work, thereby risking her job, against the moral cost of killing the insects.
Ziz became increasingly convinced that the AI-risk community had lost its way: Miri, in its early years, had started as a project to accelerate AI, before pivoting to focus on AI safety; she believed it wasn’t doing enough to prevent a hostile AI – that its leaders were self-interested people who would sacrifice others to an AI hell to save themselves, and that their considerations of the future did not account for the wellbeing of other sentient animals.
She began to believe that it was not only probable but virtually certain that a future AI would subject her, personally, to some kind of damnation. Her writing also treated abstract ideas with increasing, and alarming, literality. Decision theory became, in her hands, justification for confrontation, escalation and retaliation.
“Ziz didn’t do the things she did because of decision theory,” a prominent rationalist told me. She used it “as a prop and a pretext, to justify a bunch of extreme conclusions she was reaching for regardless”.
Ziz also felt that naturally altruistic people were easily victimized by others because of their goodness. She wondered if good people should learn to act evil – that perhaps the only way the world could be saved was by a cadre of intelligent people who adopted the methods of sociopaths.
Some rationalists were surprised, and a bit put off, when Ziz announced that she would now be known as Ziz. The name comes from Worm, a roughly 7,000-page serial fantasy story that many rationalists have read. Ziz is an alias used by a monster called the Simurgh, part of a group of villains called the Endbringers.
The Simurgh has an unsettling power, a reader of Worm told me. She’s an infohazard: anyone “who has encountered the Simurgh for too long, listened to the Simurgh for too long, becomes a liability. Because at some point in the future they will go crazy and cause a bunch of destruction.”
A couple of years ago, Oliver Habryka, the CEO of Lightcone, a company affiliated with LessWrong, published an essay asking why people in the rationalism, effective altruism and AI communities “sometimes go crazy”.
Habryka was writing not long after Sam Bankman-Fried, a major funder of AI research, had begun a spectacular downfall that would end in his conviction for $10bn of fraud. Habryka speculated that when a community is defined by a specific, high-stakes goal (such as making sure humanity isn’t destroyed by AI), members feel pressure to conspicuously live up to the “demanding standard” of that goal.
Habryka used the word “crazy” to mean extreme or questionable behavior. Yet during the period when Ziz was making her way toward what she would call “the dark side”, the Berkeley AI scene seemed to have a lot of mental health crises.
“This community was rife with nervous breakdown,” a rationalist told me, in a sentiment others echoed, “and it wasn’t random.” People working on the alignment problem “were having these psychological breakdowns because they were in this environment”. There were even suicides, including of two people who were part of the Zizians’ circle.
Wolford, the startup founder and former rationalist, described a chicken-and-egg situation: “If you take the earnestness that defines this community, and you look at civilization-ending risks of a scale that are not particularly implausible at this point, and you are somebody with poor emotional regulation, which also happens to be pretty common among the people that we’re talking about – yeah, why wouldn’t you freak the hell out? It keeps me up at night, and I have stuff to distract me.”
A high rate of pre-existing mental illnesses or neurodevelopmental disorders was probably also a factor, she and others told me. (Respondents to a 2016 survey of users of LessWrong reported rates of ADHD significantly higher than the average for the adult US population.) The community also attracted people eager for hacks and shortcuts (“speedruns”, “munchkining”) for self-improvement or “optimization”.
Rationalists had considerable openness to new experiences, but sometimes poor discipline or judgment, and could embody the old joke about being so open-minded that one’s brain falls out – trying Buddhist meditation, polyamory, LSD and a radical diet in the same week, without considering why that might be a bad idea.
Some people were fond of the idea that there is a thin line between genius and psychosis, rationalists told me, and were eager to find it. Ziz is “actually straight edge [and] super paranoid about drugs”, Nouzen has said, but many rationalists weren’t, and psychedelics use was common.
To the extent that the community around Miri had always had a tinge of cultishness, some cliques took that tendency further, adopting the language of secret societies, experimenting with ritual magic or trying to give themselves alternate personalities called “tulpas”. People would speak of “mental subprocesses” being wielded almost like sorcery, or of a particular idea infecting them like a contagious virus.
In 2021, a former employee of Leverage Research, an organization that hired heavily from the LessWrong blogosphere, published an essay accusing the group of behavior including two- to six-hour “group debugging sessions in which we as a sub-faction … would attempt to articulate a ‘demon’ which had infiltrated our psyches from one of the rival groups”.
(Leverage’s CEO, Geoff Anders, responded at the time by saying that the essay “took incredible courage to write”; a representative told me that Leverage disagrees with the essay’s characterizations, and that an inquiry found them overblown.)
There could also be a tendency to treat people like gurus. One such person was Alice Monday, a rationalist who was eventually banned from CFar events for confrontational behavior. (I was unable to contact Monday.) Ziz treated her as a mentor. She also became close with Monday’s roommate, Michelle “Jamie” Zajko, a bioinformatician who had grown up in an affluent suburb of Philadelphia.
Soon, Ziz “started writing stuff that sounded a lot like Alice”, a rationalist told me, and acting “a bit more like Alice, more aggressive, more argue-y”. By then, Ziz had adopted long black robes as her signature look. She called her aesthetic “vegan Sith lord”.
She had, she later elaborated in a Discord chatroom, “constructed an idiosyncratic religion where I’m religiously required to do whatever I want”.
Ziz was still blogging regularly. She and Danielson were toying with some unusual theories about the brain. They speculated that every person is in fact “two” people, because the two hemispheres of the brain could have different personalities, genders, and good or bad moralities. They also experimented with “unihemispheric sleep”, a method of trying to make half of your brain sleep while the other half remains awake.
To someone who is not a rationalist or AI-risk thinker, let alone a Zizian, much of Ziz’s writing would look like gibberish, perhaps even like the work of someone suffering from hallucinations. Here is one passage from 2019:
I think vampires are people who have made the choices long ago of a zombie or lich, who have been exposed to the shade to such a degree that it left pain that cannot be ignored by allowing their mind to dissolve. The world has forced them to be able to think. They do not have the life-orientation that revenants have to incorporate the pain and find a new form of wholeness.
Yet Ziz’s writing was, at least in some sense, coherent, which was part of what made it seductive. It was a cipher, or shorthand, targeted to an extraordinarily specific reader – someone who knows computer jargon, has mathematical ability, has read hundreds of pages of Yudkowsky’s canonical work, understands decision theory, and is familiar with an array of niche fantasy and sci-fi references.
Even then, she often coined her own concepts or gave new meanings to phrases from elsewhere. The vocabulary is so confusing that Nouzen helped to compile a glossary. When I saved the glossary as a Word document, it came to 78 pages.
The only way to understand Ziz’s writing was to learn her language and theories; the problem was that this had a tendency to turn people into Zizians.
By 2019, fissures were creeping into the rationalist community. A few years earlier, a former employee of Miri had created a salacious webpage that accused people affiliated with the organization of statutory rape. The president of Miri responded at the time, according to Wired, by saying that he had investigated “some of the most serious allegations” and “found them to be straightforwardly false”.
Miri eventually reached an unspecified agreement with the ex-employee, who signed a document retracting his claims.
Ziz came to believe that Miri had paid a monetary settlement to make the website’s author go away. She was enraged. A core principle of Miri’s understanding of decision theory is that a rational actor should never pay blackmail, because it encourages blackmailers. She seemed as angry about what she saw as that hypocrisy as about the allegations themselves. Her entire worldview had been built on the credibility of decision theory; now the scaffolding threatened to topple.
At the same time, Ziz, Danielson and Leatham, who were having trouble finding employment, were considering suing CFar because, they argued, it discriminated against transgender women in hiring. (CFar disagrees with that characterization.)
On 15 November 2019, the day of CFar’s annual retreat, Ziz sent an email to hundreds of people accusing Miri and CFar of “institutional betrayal”. She, Borhanian, Leatham and Danielson, dressed in black and wearing Guy Fawkes masks, attempted to disrupt the retreat. They handed out rambling fliers accusing Yudkowsky of contributing to an AI “arms race”, and blocked a road with a truck.
Someone called the police and said, wrongly, that they were armed. A Swat team mobilized and arrested all four. They were booked, subjected to what they later said were humiliating strip searches, and spent several days in jail. They were charged with four misdemeanors, as well as felony conspiracy, though the cases were never resolved.
Ziz was consumed with revenge. Before she was arrested, she had also sent an email to Yudkowsky:
There is one crime in my religion thought to be punishable by hell. And that is lifting an evil god to heaven, feeding your fellow sentient beings to it in order to reach heaven yourself … I will burn down all evil gods on their thrones; see them in hell if I must. And I stake my soul, and much more importantly the multiverse, on justice without compromise or concession … If you want to make it out of this universe alive, I suggest you do the same.
While living at the marina, Ziz and Danielson had met Curtis Lind, a friendly older man who docked his 117ft boat there. Lind was kind, his friend told Open Vallejo; he once tried to convince city officials to use cruise ships as housing for homeless people. He let ducks and geese live on his boat.
“He tended to not have very good judgment in his choice of people,” his friend Thomas Young said. “That’s partly how these people got into his life.”
The rationalist fleet hadn’t worked out, so the Zizians had moved to a new strategy: “slackmobiles” made from covertly converted box trucks. The theory was that a commercial truck offered the cheap, mobile living of an RV, but more inconspicuously, avoiding camping permits and the attention of cops.
Ziz and her friends acquired trucks and parked them near the docks while they retrofitted the interiors with beds and cooking equipment. They cut holes in the bottom of the trucks’ holds, to access them without opening the cargo doors, and wriggled in and out to the raised eyebrows of people at the marina.

Lind had decided to sell his boat and move to a trailer lot he owned in Vallejo. He was thinking of letting artists and craftspeople live on the lot in exchange for cheap rent. The Zizians loved the idea.
Danielson made an agreement with Lind to use some trailers on the property as well as park up to half a dozen trucks there. She and Ziz abandoned their tugboat to sink slowly into the harbor, leaking oil.
By early 2020, Ziz, Danielson, Borhanian and Leatham had moved their trucks to the Vallejo lot. The Zizians were increasingly isolated from the larger rationalist scene. Vallejo is about 30 minutes from Berkeley, and they’d been banned from CFar functions. They were outcasts of outcasts, too weird even for a community that prided itself on weirdness. Yet new people continued to trickle into the circle.
One was Maximilian Snyder, known as “Audere”. He’d graduated from a prestigious private school in Seattle, then done academic work in philosophy and computer science at Oxford. Another person was someone who later told police her name was Suri Dao.
According to people familiar with the group, prosecutors and reporting by the San Francisco Chronicle, Dao was almost certainly a recent high-school graduate from Denver named Tessa “Elizah” Berns, and almost certainly also the author of an account on Tumblr and Discord called Silver-and-Ivory.
The Tumblr account contains dense discussions of leftwing political ethics, and expresses anger at parents (“almost all parents are evil in intent”), schooling and psychiatry. The author describes dealing with “scrupulosity” and a “tendency to freak out and assign myself terrible painful punishments hyperbolically when I think I’ve screwed up”. The author also says she identifies as “bigender” and uses either masculine or feminine pronouns.
Berns was adopted from China, a childhood friend told the Chronicle, and had been bullied as a child. She’d been a spelling bee finalist in middle school and finished high school as a National Merit Scholar. She was thinking of running away from her first year of college, an idea which found sympathetic ears on a rationalist Discord chatroom that the Zizians frequented.
Like the other Zizians, Berns welled with an anger at the world that seemed to braid genuine, visceral despair at moral injustice with adolescent self-absorption. Discord conversations would sometimes turn into speculations about the psychologies of famous killers such as Ted Kaczynski; during one such discussion, Berns offered a strange aside:
silver-and-ivory:
it’s really awkward talking about this because I’ve had very dramatic fantasies about becoming a knife murderer
and I wrote this whole fake news article about it ( … I’m not going to share here)
A common theme of the chatroom discussions was resentment of authority – the government, the AI establishment, and especially parents and schooling.
Here are Borhanian and a user who has been identified as Leatham:
𒀭 💮 [Leatham]:
not only do public schools function like public hospitals function like public courthouses function like public jails, they are also made out of the same materials. the same plastic chairs with cold aluminum legs, the same pinboards on the walls, the same clocks, the same “administrators” walking around. they all blur together. id microsleep during school too. it was so boring. i knew more math than the teachers and was forced to be there
emma.:
i remember in like, second grade, a bunch of teachers standing around my desk saying i was hopeless and i’d never be able to do anything because when they told me to practice drawing cursive L’s and I’s I couldn’t get consistent sizes, and started drawing a border around my page instead
later I was diagnosed with dyspraxia, learning disability affecting fine motor tasks …
𒀭 💮 [Leatham]:
they confiscated my katana [samurai sword], tried to make me sit in chairs a certain way, gave me an F for turning in a 10-page poem when they asked for a 1-page poem. i carved out some space such that i would just have chinese tea ceremonies at the back of class and talk with someone about math and [LessWrong] and would sit in my chair weird and refuse to pay attention and they stopped trying to stop me.
… also i threw desks around because i was bored. not at anyone, i just wanted to throw desks. i did refuse to go to school my last year of highschool.
emma.:
… when i went to private school my parents would threaten to send me to public school instead
everyone said i was so weird the kids there would super bully me and i’d get beat up or something
we must get revenge someday. for all of this.
There was a similar discussion about the ethics of obligation to family:
emma.:
umm i’m curious what other ppl would do [with regards to] spending large amounts of money to save the life of a birth-family member
Ziz:
I would not …
My family’s stance on the cosmos and [mine] are fundamentally misaligned. They chose to give their souls to the gods of the easy path, including evolution, as they commit suicide. My parents did not choose to create me in some timeless contract, some considered decision, they rolled the dice because their programming told them to.
Adopting vocabulary from the multiplayer video game Among Us, some of the Zizians had started to speak of foes as “impostors” and of “airlocking” people they didn’t trust.
Chatroom conversations sometimes spiraled into violent fantasies:
emma.:
like imagine getting tons and tons of revenge for JUSTICE!! isn’t that cool?
silver-and-ivory:
are you sure? do you get to hold bloody knives?
as you plunge the knife into someone’s bared throat
am unclear on how much i actually want to stab my mothers [her parents were gay women] but probably a lot
emma.:
Yes
i mean if you want
those can totally be good things
emma.:
like there’s totally a different way to do good vengeance vs evil vengeance, like, the shape of the poetry is different but
brutally plunging a knife into [a person’s] throat sounds like a clean kill
they have to be actually bad though
like forcing you to go to school or something
on pain of death
which is totally a thing parents do
or any of the other really bad things they do
“kill your parents” is a very good meme
silver-and-ivory:
so much of my revenge imagery is about bleeding people dry, hanging the white women on my wall like beautiful dead butterflies, etc etc etc
out of resentment for the false ideal they claimed to be and the reality of their imperfect selves
so that they can be perfect again in death
After two years of internal feuds, scandals and a pandemic, the rationalist community was under severe strain. Advances in machine learning had also called into question many of Miri’s technical assumptions about AI. Then the research organization OpenAI began investing in AI development at a scale that doomers found disturbing.
In April 2022, Yudkowsky published a bleak essay of defeat: “It’s obvious at this point that humanity isn’t going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die [with] slightly more dignity.”
Confusion and disillusion slowly set in. Many people had paid dearly, personally and financially, to come to northern California to solve the alignment problem. They’d devoted years to the mission, often at great opportunity cost to their careers. They’d made the best friends they’d ever had, and then – in an environment where some people believed it was literally impossible to agree to disagree – lost them to bitter intellectual schisms, exhaustion and nervous collapse. They’d sacrificed to be present at the birth of the future, and now discovered that the future was already being born elsewhere.
At the Vallejo lot, the Zizians embraced this burnout. Once upon a time they’d been National Merit Scholars, math-club types. Now, free from the constraints of external authorities and structures, they returned to a state of nature. They walked around naked, carried katanas, kept their own hours. They were feral, and relished it.
Despite their peculiarities, the Zizians’ time at the lot had started well enough. Within a few months of their arrival, however, they’d stopped paying rent, citing a state Covid moratorium. Borhanian was probably the only one with any savings, and she had quit her Google job. The Zizians’ rent strike went on for two years, according to an account Lind later gave to a documentary filmmaker, and “got to the point where if they saw me they’d run away”.
After the Covid law ended, in 2022, Lind decided to evict them.
The Zizians requested a meeting to ask to stay for another two months for free. At the meeting, “I said, no, I can’t do that,” he recalled. “So one of them took out a … fairly large folding knife. And started patting the blade in his hand like this and looking at me and smiling.”
Lind turned around and left. The next day he bought a gun.

Two days before the sheriff’s office was scheduled to evict the Zizians, Berns approached Lind at the lot, according to Lind, and asked for his help turning off a tap leaking water into a trailer.
As he bent to look, something hit him on the head and he blacked out. When he woke up, at least three of the Zizians were allegedly standing around him with knives.
“[T]he right side of my skull was shattered,” Lind later said. “And I was bleeding from numerous puncture wounds … The back of my neck had some severe cuts. Like somebody was trying to cut my head off.” His torso was impaled with a samurai sword.
Lind drew his gun, which was concealed in a pocket, and started shooting. He wounded Leatham and killed Borhanian. He stumbled away with the sword still in him. He survived, but lost an eye.
Some friends of the Zizians have argued that the eruption of violence wasn’t so one-sided – or even that the Zizians, not Lind, were the ones acting in self-defense.
The authorities did not agree. Leatham and Berns were arrested and charged with attempted murder. They were also charged with responsibility for Borhanian’s death, under a California felony murder law.
At the lot after the attack, police tried to interrogate a tall blond person whose description matches that of Ziz. That person suddenly began having an apparent medical emergency. At a hospital, they vanished.
When Lind’s friends and family later searched the Zizians’ trucks, they found more than a dozen encrypted computers and an array of surgical equipment. They also found containers of lye, which they believed the Zizians intended to use to dissolve Lind’s body.
Once the tipping point was reached, events seemed to accelerate by their own logic.
In February 2022, according to police, Michelle Zajko bought a 9mm pistol, ammunition and a holster in Vermont. She was now living there with Monday, as well as with a third acquaintance, Daniel Blank, a Berkeley bioengineering and computer science grad whose parents reported him missing.
Zajko published a blogpost alluding to a power struggle of sorts. She claimed that Ziz had recently told her that the only way to regain her trust would be to murder Monday (“Ziz helpfully suggested I use a gun with a potato as a makeshift suppressor, and that I might destroy the body with lye”), and that if she didn’t, Ziz would come to Vermont and kill her.
That never happened. Later that year, for unclear reasons, she and Ziz reconciled.
Then, on the night of 31 December 2022, according to police documents, a neighbor’s doorbell camera captured a car arriving at Zajko’s parents’ house, in a suburb of Philadelphia. Shortly thereafter there was a flurry of movement at the Zajkos’ door. A higher-pitched voice appeared to be saying “Mom!” Two people later left the house.
A few days later, Rita and Richard Zajko were found dead at home. They had been shot to death, with what police believed were 9mm bullets, during an apparent home invasion.
Not long after that, Zajko went to Pennsylvania to identify her parents’ bodies and make funeral arrangements. She was their only heir. State troopers executed a search warrant at the hotel where she was staying, detained her, and found $40,000 in cash in her Subaru.
The police also found Ziz in an adjoining hotel room, along with Blank. When they arrested them, Ziz lay down on the ground, shut her eyes and refused to move. She had to be carried out. Her booking photo shows her eyes closed, as if catatonic.
Ziz was charged with disorderly conduct and resisting arrest. Police continued investigating Zajko as a possible murder suspect, but did not charge her.
Ziz spent five months in jail. On 22 June 2023, a judge agreed to reduce her bail. She was released on an unsecured bond, and never showed up to her court date.
Within a year, two people thousands of miles away, who had never met Ziz, began acting strangely.
The first was Felix Bauckholt, known as Ophelia, an earnest math prodigy from Germany who was working in New York as a quantitative trader. She was already making half a million dollars a year at the age of 26, but “was kind of a nonconformist”, Jessica Taylor told me, and extremely interested in political ethics. Bauckholt was online friends with some of the Zizians, and in one conversation she seemed to defend the Zizians’ alleged attack on Lind, Taylor said.
The other person was Teresa Youngblut, a young computer science student at the University of Washington. Youngblut, who sometimes went as “Milo”, had attended the same private high school as Maximilian Snyder, and they’d reconnected online. Snyder had recently won his AI research prize, and they had a lot to discuss.
Around June and July, Bauckholt started taking secret phone calls, according to people who knew her. She cut contact with friends. In November she got on a flight and did not return. Not long after, Youngblut’s parents contacted police to report her missing, too, and in the grip of what they believed was a controlling romantic relationship.
After disappearing, Bauckholt and Youngblut both went to Chapel Hill, North Carolina, where they lived at adjoining rental properties. They were joined there by Ziz and visited by at least one other person.
Then, this January, Bauckholt and Youngblut went to Vermont. They wanted to see a property for sale – a remote, completely off-the-grid house, on 11 acres near the Quebec border.
An employee at the hotel where they were staying contacted authorities about two guests who dressed in black, carried guns and acted strangely; the US border patrol began monitoring them.
On 16 January 2025, back in Vallejo, prosecutors asked a judge to expedite Berns and Leatham’s trial for allegedly attacking Lind. The judge agreed. The next day, while Lind was at his lot, a man who prosecutors say was Snyder approached Lind, grabbed him and stabbed him to death.
Three days later, Agent David Maland of the US border patrol pulled over Youngblut and Bauckholt as they were driving a Prius southbound near the Canadian border. Other officers arrived behind Maland.
Youngblut may have panicked. She allegedly drew a gun and started shooting. Bauckholt also allegedly tried to draw a gun. The officers returned fire, wounding Youngblut and killing Bauckholt.
Maland died a short time later at the hospital.
On 16 February, Ziz, Zajko and Blank were arrested in a rural area of western Maryland, not far from Pennsylvania, where they were living in two box trucks that they’d parked on a stranger’s land. They had handguns and a rifle, according to authorities, but surrendered nonviolently.
According to a police report, Zajko pleaded with officers not to kill her.
Snyder has now been arrested for Lind’s murder in California, joining Berns and Leatham. (All three deny the charges against them.) Berns and Leatham have both made escape attempts, according to prosecutors. Prosecutors also believe that “Suri Dao” was an alias Berns made up while being arrested; her lawyer has said that is irrelevant.
Berns and Leatham are both being held in mental health facilities, according to Wired, and are in significant distress. Berns has engaged in self-mutilation.
Youngblut has been charged with attacking a federal officer. (She pleaded not guilty.) According to court filings, the car she and Bauckholt were in contained hollow-point ammunition, a night-vision monocular, a ballistic helmet, shooting targets, full-face respirator masks, handheld radios, cellphones wrapped in tin foil, a dozen unspecified electronic devices, and a diary with references to doing LSD and passages of “apparent cypher text”.
Ziz, Zajko and Blank are being held in a county jail in Maryland. They’ve been charged with trespassing and obstructing an officer; Ziz and Zajko have also been charged with firearms offenses. All three deny the charges against them, and at this time none have been charged in relation to any of the deaths that have occurred.
During a recent hearing, Ziz pleaded for vegan food in jail and suggested she may be suffering from malnutrition. “I haven’t done anything wrong,” she told the judge. “I shouldn’t be here.”
The attorney representing Ziz in Pennsylvania, Daniel McGarrigle, told me that she is “wholly and unequivocally innocent of the charges filed in this case”, and “has been vilified mercilessly” despite the fact that “only low-level misdemeanor charges” are pending in both the Pennsylvania and Maryland cases.
Maland was recently buried with military honors. He was engaged to be married.
The exact whereabouts of Alice Monday and Gwen Danielson are unknown, though both appear to be alive. They seem to have cut ties with the Zizians, and may be scared themselves.
It goes without saying that the AI-risk and rationalist communities are not morally responsible for the Zizians any more than any movement is accountable for a deranged fringe. Yet there is a sense that Ziz acted, well, not unlike a runaway AI – taking ideas and applying them with zealous literality, pushing her mission to its most bizarre, final extremes.
Although self-serving and grandiose, Ziz is probably to some extent sincere – “a true believer”, one person told me. It is unclear, however, what the Zizians’ long-term objectives were, if any. The murders they allegedly committed were less calculated acts of political violence than the flailing of a paranoid clique plunging out of society with no plan for how to get back.
All they had left, in the end, was Oedipal rage, certainty in their conclusions and guns. Their alleged victims were an elderly landlord who liked ducks, two suburbanite parents, a cop doing his job and themselves.
Rachel Wolford, the startup founder, asked for anyone reading this to know that “there are a lot of weird nerds who are doing a lot of really good things. They are meaningfully trying to make the world better. It’s not that weird nerds are bad; it’s that weird nerds have specific failure modes that specific branches of Silicon Valley have done a very poor job of checking.”
While expressing a range of views about AI safety, she and others believe it is still an important problem worth humanity’s time. They emphasized that AI does not need to become the distant basilisk of doomers’ waking nightmares to create serious economic, surveillance, environmental and social dilemmas in our near future.
That message is increasingly unpopular in Silicon Valley, where the doomers have, by and large, lost to the accelerationists.
Similarly, the rationalists’ influence has waned. “Post-rationalism” – a school emphasizing the self-help aspects of the scene, and trading hard-headed rationality for new age-y spirituality – is ascendant, though splinters and remnants live on in freewheeling Discord chats and officious Reddit forums.
So far, Snyder is the only one of the Zizians who has made any real public statement about his beliefs. He dictated a 1,500-word letter to the San Francisco Chronicle to give to Yudkowsky, “from one student among many, to his old teacher”. The letter called on him to think of animals as “brothers and sisters”, and lamented that Yudkowsky “could have been much more pessimistic about humanity much sooner and avoided starting the AI arms race”.
Yudkowsky refused to read it. To do so would be to surrender to blackmail and incentivize more alleged violence. Snyder, as a student of decision theory, ought to have known.
-
This article was amended on 6 March 2025 to reflect that it is unclear if Snyder visited the Vallejo lot, to reflect the identities of the people visiting or living with Youngblut and Bauckholt in Chapel Hill, to correct a chronological error related to the time of Snyder’s arrest and to reflect that Bauckholt allegedly tried to draw a gun but did not draw it.