Learning to Fail
How model collapse puts you on the road to failure
A timely excerpt from Sigma Game.
AI researchers didn’t expect what they discovered after repeatedly regenerating the same image with an AI image model. It turns out that if you ask an artificial intelligence to recreate an image again and again, something inevitably breaks. Not immediately, mind you. The first generation tends to look fine. The second generation shows quirks. By the fifth generation, the model is hallucinating nonsense. It becomes confidently, catastrophically wrong about reality.
They called it model collapse.
And the more artificial content that entered the training data, the faster the model broke down. Eventually, the researchers came to understand that just 10 percent artificial content is enough to irretrievably poison the data, rendering the output obviously monstrous in as few as two iterations.
What the researchers discovered was a fundamental principle that extends far beyond silicon and AI code. When any learning system, artificial or biological, trains itself primarily on information produced by entities of its own class rather than on external reality, information fidelity reliably degrades across generations. The system loses contact with the actual world. It begins to operate in a false reality that bears less and less resemblance to the territory it’s supposed to map.
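The degradation described above can be demonstrated with a toy simulation. This is not the researchers' actual image experiment, just a minimal sketch of the same principle: each generation fits a simple Gaussian model to samples drawn from the previous generation's model instead of from reality, and the fitted distribution steadily collapses, losing the variation the original data actually had.

```python
import numpy as np

def iterate_generations(n_samples=20, generations=500, seed=0):
    """Toy model-collapse demo: each generation refits a Gaussian
    to samples drawn from the previous generation's fit, never
    touching the ground-truth distribution again."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0          # ground truth: N(0, 1)
    stds = [sigma]
    for _ in range(generations):
        # "Synthetic" training set: drawn from the model's own output
        data = rng.normal(mu, sigma, n_samples)
        # Refit the model on that synthetic data
        mu, sigma = data.mean(), data.std(ddof=1)
        stds.append(sigma)
    return stds

stds = iterate_generations()
print(f"spread after 0 generations:   {stds[0]:.3f}")
print(f"spread after 500 generations: {stds[-1]:.3f}")
```

Each refit introduces a small sampling error, and because later generations never see real data again, those errors compound instead of averaging out: the estimated spread drifts toward zero and the model's picture of the world narrows to a caricature.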
This principle applies with alarming precision to how young men, particularly those of lower social status, tend to learn about women, attraction, and their own social value. And the results look remarkably similar to what happens when AI models are fed their own output: confident delusion, systematic failure, and a total inability to adapt to reality even when confronted with overwhelming evidence to the contrary.
Consider the information sources typically available to the average young man learning about male-female dynamics. He watches movies where the awkward but earnest protagonist wins the girl through persistence and grand romantic gestures. He observes his father’s marriage, which functions according to rules established decades earlier in a radically different sexual marketplace. He listens to his male friends, of similar social and sexual status to himself, who themselves are operating on second-hand models derived from the same polluted sources. And perhaps worst of all, he watches videos produced by male and female grifters on the Internet.
What he’s not learning from is reality itself: his own interactions with actual young women responding to real approaches in genuine social contexts. He’s learning from the fantasies of Hollywood screenwriters, from adult relationships established under completely different conditions and assumptions, from the theories of other young men who themselves lack real-world feedback, and from Internet entertainers whose primary objective is to rack up views, not to relay accurate, useful information to the viewer.
This is, in effect, training a neural network on synthetic data. The young man’s brain is attempting to build an accurate model of social and sexual dynamics to provide a foundation for his beliefs, decisions, and actions, but it’s being fed information that is already several generations removed from reality. It’s AI training on AI output. And just like those artificial neural networks, the model produced by this process is confidently and systematically wrong.
The Hollywood model is particularly toxic. A young man will absorb thousands of hours of content showing that persistence overcomes rejection, that the way to win a woman is through grand demonstrations of devotion, that social status doesn’t matter as long as you’re sincere, that physical attraction is less important than personality, and that women are attracted to sensitivity and emotional availability above all else. Every one of these propositions is testable. Every one fails upon contact with reality.
This should not be a surprise, given that most Hollywood screenwriters, producers, and directors are Gamma males. About the only worse guides to functional male-female relations would be ascetics contemplating God from solitary pillars erected in the desert.
And here’s the critical thing about model collapse: it doesn’t correct when the model fails. It accelerates. Just as AI models trained on corrupted data become increasingly confident in their hallucinations, young men trained on false models become increasingly certain that their failures are exceptions, misunderstandings, or evidence that they simply need to try harder using the same failed strategies. The man who believes women are attracted to emotional sensitivity approaches a woman with elaborate displays of his deep understanding and emotional intelligence. He is, of course, rejected. Does he conclude that his basic model is wrong? He does not. He concludes that he wasn’t sensitive enough. So he increases the sensitivity. He fails harder. He increases the sensitivity further. He follows his corrupted model toward increasingly spectacular failures.
I’ve seen this happen so many times it would be comical if it weren’t so destructive. The pattern is identical every time. A young man’s model of socio-sexuality tells him that emotional availability and supplication demonstrate high value. Reality demonstrates the opposite. But he trusts the model more than reality because the model was learned early, reinforced constantly, and confirmed by all his synthetic sources. His conceptual model is unfalsifiable, because it’s been installed at a level too fundamental to override on the basis of new data.



