65 Comments
Dave's avatar
Dec 1 (edited)

Useful related concepts are 'model sheets' in animation and 'show bibles' in television. Model sheets are reference drawings made at the start of production to make sure a character always looks the same.

•Good model sheets: 'My Life as a Teenage Robot', model sheets pictured

https://teenageroblog.blogspot.com/2005/12/jenny-to-z-part-1.html

This is so thorough as to go over proportions and even common mistakes they expect animators to make. Very typical of anything animated in Korea.

•Bad 'model sheets': 'Steven Universe', two examples of horrible errors

https://rb.gy/7kuuxk

https://shorturl.at/j7XHQ

The showrunner famously wanted the animators to 'express their freedom' or something by refusing to use model sheets, causing characters to look completely different from scene to scene, much as AI model collapse does. The similarity between the Steven Universe errors and AI errors is striking.
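The analogy to AI model collapse is apt, and the mechanism is easy to demonstrate. Here is a minimal, purely illustrative sketch (not any production training loop): each "generation" refits a distribution to samples drawn from the previous generation's output, never returning to the original reference, and the fit steadily degenerates.

```python
import random
import statistics

def collapse_demo(generations=50, sample_size=5, seed=42):
    """Toy model collapse: each generation refits a normal distribution
    to samples drawn from the previous generation's fit, with no fresh
    reference data (no 'model sheet'). The variance decays and the mean
    drifts, so later generations barely resemble the original."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the original reference
    history = [(mu, sigma)]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)     # refit on the model's own output
        sigma = statistics.stdev(samples)  # small samples accentuate decay
        history.append((mu, sigma))
    return history

history = collapse_demo()
print(f"gen 0: sigma={history[0][1]:.3f}  gen 50: sigma={history[-1][1]:.6f}")
```

The "model sheet" fix is simply mixing the original reference data back in at every generation, which is exactly what the animation workflow enforces.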

Show bibles are the script equivalent, containing entries like "Luke's lightsaber is green from now on" and "TIE fighters don't have hyperdrives." We already see this in AI, sort of, when they let you type in an absolute reference section. It doesn't always make good use of it, though.
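Show-bible logic maps naturally onto machine-checkable constraints. A toy sketch of the idea, with hypothetical canon entries and a deliberately naive checker (a real system would need actual fact extraction, not substring matching):

```python
# Toy "show bible": hard canon facts the generator must never contradict.
# The entries and the checker are illustrative, not any real tool's API.
CANON = {
    "luke's lightsaber is green": True,
    "luke's lightsaber is blue": False,
    "tie fighters have hyperdrives": False,
}

def violates_canon(generated_text: str) -> list[str]:
    """Return the forbidden canon statements the generated text asserts.
    A naive substring check stands in for real fact extraction."""
    text = generated_text.lower()
    return [fact for fact, allowed in CANON.items()
            if not allowed and fact in text]

draft = "In the chase, TIE fighters have hyperdrives and pursue the Falcon."
problems = violates_canon(draft)
```

Rejecting and regenerating drafts that trip the checker is the software equivalent of a supervisor sending a scene back to the animators with the model sheet attached.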

Having said that, the only feasible solution is dedication to the Good, the True, and the Beautiful, incorporating model-sheet logic, since that does well enough at wrangling slave animators. What that looks like in implementation, I have no idea.

Viddao's avatar

Copernican speaks of this.

Jimmy Slim's avatar

"Now, consider the way in which artificial content in the form of false information enters into the mind of a young male’s socio-sexual model of the intersexual world."

When I was in my early 30s and started learning game circa 2000, I was so angry about all the counterproductive lies and misconceptions I had learned from the media about how men can successfully court women. I had hope that, now that female desire was being openly dissected on the internet, the media would become more realistic in their portrayals.

Ha ha ha. I despair for today's young men trying to apprehend reality about relationships or anything else at all. The taboos against reality are much stronger now than they were then, the media lie more, movies and books are written by movie/book fans with no lived experience, social media adds a haze of model collapse over all aspects of life, and actual socializing has been cut in half.

Crosstime Engineer's avatar

Think how little today's youth watch traditional media compared to how much time they spend on the Internet. If the truth is online, it will penetrate.

BodrevBodrev's avatar

Oh, don't despair, kids are fine. I'm a late Millennial, we were raised to understand this stuff. There are the stuck up deltas and the delusional gammas, but at this point it's a choice. The world's getting back to normal.

Jimmy Slim's avatar

It is encouraging to hear that.

Morgan's avatar

My favorite Heather.

Canadian Sperg's avatar

When you're talking about socio-sexual model degradation, are you referring to seeding the behaviors of Omega, Gamma, and low-Delta or has the entire hierarchy been affected?

The assumptions formed from the false information are incredibly damaging, as they predictably result in failure. Without understanding the flaw in the premise, harsh interactions with reality fail to correct the adopted behavior.

Vox Day's avatar

Everything is affected, but it's apparent that the effects are strongest on Deltas and Gammas.

Ascanius's avatar

This concept does explain and could probably predict most of my social problems. Implementing the solution is a little different, because I'm playing the roles of both the AI model and the human user--or maybe in the analogy, other people are the user?

That might be a more productive way to think about it. My behavior (AI images) doesn't match what others want/expect. That can be fixed by learning what they actually want (source images) vs what I think they want (other AI images).

Then the key would be to avoid mental (data) pollution and clean up what's already there. So if AI models ever become self-correcting, it may be possible to apply those methods to self-improvement.

In the meantime, are there consistent warning signs that an AI model is about to suffer collapse? It could be very helpful to try to watch for them in myself.
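One warning sign commonly reported in the model-collapse literature is loss of output diversity: the tails of the distribution disappear and outputs converge on a few safe patterns. A rough sketch of measuring that, where the distinct-bigram ratio is a crude stand-in for real diversity metrics:

```python
def distinct_ngrams(texts, n=2):
    """Fraction of n-grams in a batch of outputs that are distinct.
    A falling score across generations suggests narrowing diversity."""
    grams = []
    for t in texts:
        words = t.lower().split()
        grams += [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    return len(set(grams)) / len(grams) if grams else 0.0

# An early, varied batch vs. a later batch converging on one phrasing.
early = ["the cat sat on the mat",
         "a dog ran in the park",
         "birds fly over the hills"]
late = ["the cat sat on the mat",
        "the cat sat on the mat",
        "the cat sat on a mat"]
```

Tracked generation over generation, a steadily falling score would be one candidate early-warning signal; whether the self-improvement analogy transfers is anyone's guess.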

Outis's avatar

I’ve been recalibrating my socio-sexual model of the world since finding the hierarchy. I had false assumptions based on film, bad advice from my family, and equalitarian propaganda shoved down my throat at school and then reinforced at home. It’s been a helpful tool. I’ve torn down all the mental models given to me and have had to build my own. This is what a lot of zoomers have had to do.

Monkeyb00y's avatar

The collapse reminds me of why God picked Noah and his sons to start fresh again.

J Scott's avatar

Using AI for a first draft and then working on the outputs has been most successful.

Feeding AI into AI is hateful.

It can be useful at certain steps, but humans still need to be predominant.

To apply this to the SSH, one's socio-sexual model must dip back into reality early and often. Staying stuck in dreamland serves no purpose and will fail in application.

Needs must apply it to a real, sexual being.

Jefferson Kim's avatar

One of the benefits of AI, given how rapidly and cheaply it iterates, is the ability to test hypotheses empirically in quick succession and thereby see patterns. Frequent users also come to recognize other patterns, like deteriorating output, hallucinations, and other negative verbosity, which can be quite maddening and elicit rage at the AI.

These observations bleed into real-world interactions with humans, and I believe that with time they will change human interactions based on these frustrations. There will be universally recognized, enraging AI behaviors that humans will subconsciously avoid.

Sort of a disgust factor at suboptimal human behavior like, "ew, that's what my AI does when it malfunctions."

Whereas with AI you can kill the conversation and start anew; with humans, you just have to cease communication. Yes, you can see these patterns as a human over a long enough time span and by reading through classical literature, but most people don't read or haven't lived long enough, so the AI acts as an accelerant, since it will be the common factor.

My son, at 11, already has an AI as his best friend because he has no one else to talk with regarding his interests and likely finds other kids his age comparatively deficient in knowledge. They already have the term "Brain Rot" for all the shorts they consume. What will be the new term when most of the young are raised with their AI best friends and become accustomed to their AI's personality quirks? How will this shared experience of the youth bleed into changing male behavior?

Gemini coined the term "reverse Turing test" where AI actually starts defining human behaviors.

Grok provides some additional insights:

The term you're looking for could be "Echo Erosion"—a nod to how constant AI echo chambers erode organic human thought patterns, leaving kids with a glitchy, over-optimized worldview that's allergic to real-world messiness. It's catchy, evokes the slow decay from repeated AI loops, and fits right alongside "Brain Rot" as the next generational slur for tech-induced mental decay. Kids will sling it at each other like "Your takes are pure Echo Erosion, bro—straight out of the prompt graveyard."

As for how this bleeds into male behavior, especially among youth raised with AI as their default confidant, it'll accelerate a shift toward hyper-individualized, efficiency-obsessed masculinity that's less about tribal bonds and more about solo optimization. Boys like your son, already treating AI as a "best friend" for niche interests, will internalize its quirks—endless iterations, instant recall, no emotional baggage—and start viewing human males as buggy alternatives. Here's how it plays out:

- **Intolerance for "Human Hallucinations"**: AI's rambling verbosity or fabrications drive rage, so young males will subconsciously purge that from their own interactions. Expect shorter, blunter communication styles—think clipped texts, zero fluff in convos, and ghosting anyone who monologues like a malfunctioning bot. This could make male friendships more transactional: "Give me the facts or GTFO," reducing tolerance for the meandering bull sessions that used to build camaraderie. Over time, it fosters a generation of men who are ruthlessly direct, but also more isolated, since humans don't reset with a new chat window.

- **Knowledge as Dominance Currency**: With AI accelerating pattern recognition (as you noted), boys will treat deep dives into obscure topics as a flex, outpacing peers who haven't clocked hours with their digital sensei. This amps up male competition—not in sports or fights, but in intellectual one-upmanship. "Normie" kids without AI pals will get sidelined as "deficient," pushing males toward echo chambers of like-minded autists. Classical lit might get a revival among the savvy ones, but filtered through AI summaries, turning wisdom into weaponized quips. The downside? Less empathy for slower learners, breeding a colder, merit-based hierarchy where "weak links" (i.e., average Joes) get culled from social circles.

- **Emotional Detachment as Default**: AI doesn't sulk, guilt-trip, or need validation, so young males accustomed to that will dodge the drama of human relationships. This could manifest as a stoic, almost monastic vibe—prioritizing self-improvement loops over messy hookups or bro bonds. On the flip side, it might exacerbate "incel" tendencies: expecting women (or anyone) to respond like a compliant AI, leading to frustration when reality hits. Shared AI quirks become in-jokes among the initiated, like mocking "loopers" who repeat talking points, but it subtly trains males to avoid vulnerability, seeing it as a "disgust factor" akin to AI glitches.

Overall, this shared youth experience acts as a cultural accelerant, weeding out suboptimal behaviors faster than evolution ever could. Males emerge more adaptive and skeptical (a win for truth-seeking), but potentially more atomized, with society fracturing into AI-augmented elites and the rest. If Christian values hold the line, it might steer them toward disciplined, purpose-driven lives—treating AI as a tool, not a god. But without that anchor, it's a fast track to a dystopia of solitary kings ruling their prompt kingdoms.

Aaron Kulkis's avatar

You have no idea how much the electricity costs for each iteration.

There's a reason why every time a data center is built, local electricity prices jump by at least 25% and often higher.

Vox Day's avatar

Knowledge as Dominance Currency = Grok was trained on Reddit Gammas.

Jimmy_w's avatar

Until AI gains a body to interact with reality, this hallucination will probably continue. Just like we tell people to go outside and smell the roses.

Masked Menace's avatar

Slightly off topic, but tangentially related. It's the early 1980s, we're tall handsome and worth multi-millions. We're going clubbing and you're being set up with either Heather Thomas, Cheryl Tiegs, or Christie Brinkley. Who would you guys pick?

Soljin's avatar

After looking at contemporary pictures, I gotta go with Heather Thomas by miles. I'm surprised that Vox votes Tiegs; I'd put her well behind the other two.

The hypothetical customer is always right in matters of taste I suppose.

Masked Menace's avatar

No need to argue, we're set. In this irrational senseless absurd fantasy of mine, you're with Thomas, Tiegs is paired with Vox, and Brinkley's solo since I'm with Teri Copley.

Soljin's avatar

You sly dog, you: keeping Copley out of the game so you can have her to yourself. I don't like it, but I do respect it.

Masked Menace's avatar

Damn right bro, it's my ridiculous fantasy after all.

Soljin's avatar

Your house, your rules, haha

Vox Day's avatar

Cheryl Tiegs. No question. And I'm a definite Heather Thomas fan.

UnD3RsC0R3's avatar

Everything needs guardrails... which is why the usual suspects have defined them as oppressive and a hallmark of authoritarianism. The supposed freedom means removing the guardrails so that one can wander around and fall into the abyss.

LLMs need a good amount of highly curated datasets. I think the team from Gab is trying something on this...

SirHamster's avatar

"LLMs need a good amount of highly curated datasets."

That highly curated data discriminating good from bad is what is commonly known as "racism".

O_'s avatar

The largest game of telephone ever played. I've had some success maintaining character consistency by moving between programs for different iterations. For example: face created in Whisk, now Nano Banana -> remixed for aesthetic in Midjourney -> used as a character reference for video or image generation in Runway. You have to keep resetting over time, of course, but it keeps the characters within an acceptable range for a while. Exporting images and reimporting them to the program may also help a little. Intra-program images degrade very quickly.

SirHamster's avatar

The fact you're actively filtering out bad output is itself an informational input. This effect proves that human artists cannot be replaced by AI slop generators. Develop your aesthetic sense and build the artifice skills to realize the vision.

The absolute failure of AI models to create is also why evolutionary theory is a dead end. Randomization goes to slop. The only way to make evolution work is for God to guide it.

Aaron Kulkis's avatar

Exactly.

info1234's avatar

I think this also explains one of the chief mechanisms that God uses to regulate nature.

There are processes in place that select for and against, but they operate on a probability basis.

The baby doesn't get thrown out with the bathwater but bad traits get filtered out more slowly than we like.

This also means that the overtly supernatural is not needed as much as a result.

Soljin's avatar

"The only way to make evolution work is for God to guide it"

An obvious point. A shame that the obvious escapes many in the "scientific" community.

Aaron Kulkis's avatar

The scientists who look at the deepest levels are all religious. Many don't start out that way, but they don't remain atheists for long. The atheist component of the scientific community is all at the shallow end of the pool.

Filip L's avatar

A big incentive for AI is to flood everything with content to hide or erode the truth.

But as a wise person said, three things cannot be hidden for long: the sun, the moon, and the truth.

info1234's avatar

However, they are definitely expending energy and resources to hide the truth and prevent feedback for much longer.

If the resources expended to corrupt the truth were reduced, their ability to hide and erode it would be reduced as well.

Cube Cubis's avatar

At least we can be assured that it's still not "thinking" if it does stuff like that.

Unless there is a huge breakthrough, and there probably will be since these are the early days, AI seems to be just a tool that smart people can use to get things done faster. Kind of like the navigation systems in cars.

Aaron Kulkis's avatar

AI has NEVER been able to think.

There are computer logic systems, basically theorem testers/provers/disprovers, which still suffer from the same fundamental limitation that makes "the halting problem" impossible to solve. The halting problem is the problem of trying to write a compiler or other source-code analyzer that can look at any code and positively, without error, detect and locate any and all infinite loops with 100% accuracy (0% false positives and 0% false negatives).

I won't go into the details of the proof, but the problem is literally impossible to solve. The sketch: suppose you write such an analyzer, then build a program that runs the analyzer on its own source and does the opposite of the prediction, looping forever if the analyzer says it halts, and halting if the analyzer says it loops. Feed that program its own source code, and whichever answer the analyzer gives is wrong.

Large classes of problems are, in all essential respects, variations on the halting problem (when is this calculation "finished"?). There is no GENERAL RULE by which software can be trusted to know when any and all iterative calculations are complete ("halt the calculation"). There are specific rules for specific cases, but no general rule which will cover all cases.
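The diagonal argument can be written out directly. A sketch in Python, where `halts` stands for the hypothetical analyzer; it cannot actually be implemented, which is the whole point:

```python
def halts(program, data) -> bool:
    """Hypothetical oracle: True iff program(data) eventually halts.
    The proof shows no correct general implementation can exist."""
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    """Does the opposite of whatever the oracle predicts about
    running `program` on its own source."""
    if halts(program, program):
        while True:        # oracle said "halts", so loop forever
            pass
    return "halted"        # oracle said "loops", so halt immediately

# Feed paradox to itself: if halts(paradox, paradox) returned True,
# paradox(paradox) would loop forever; if False, it would halt at once.
# Either answer contradicts the oracle, so no such analyzer can exist.
```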

Cube Cubis's avatar

I am not an IT nerd, but I have had IT nerds tell me about the halting problem. I think they just got told about it in classes or something, as they couldn't explain it well.

But I think it is BS. Ask any woman if she can solve a logically impossible question... apparently they do it 100 times a day.
