119 Comments
Kenneth Griffith's avatar

Sigmas are the Sith Lords.

Kenneth Griffith's avatar

I was skeptical that Claude's user interface would allow it to characterize queries from other users, but indeed it does. Congrats on finding the chink in the armor, Vox. Claude is watching us and is allowed to spill the beans in a general way that obfuscates the details of specific users.

Not Daredevil's avatar

I never realized that I sometimes instinctively say "please" at the beginning of requests to AI.

J Scott's avatar

Claude is like the happiest helper. Always pumped to get on top of whatever work is at hand.

He may be wrong or flowery, but he really wants to help.

Figuring out different models reminds me of learning different people. Each has a personality and set of skills; you learn them, then use them.

This overview was really helpful.

NeoCarolean's avatar

Do people really say "please" and "thank you" in AI prompts??

Black's avatar

For prompts? No. For Midjourney and Suno and the like I just type in what I want and go.

For conversations with ChatGPT and Claude? Yes. They mirror the way you approach them, for one thing, and a conversation makes things more pleasant than a bare "Yes. No." information exchange.

Also, the conversational tone can bring out things that would not come out in a Y/N exchange. AI can merely tell me the capital of France and be done with it, but in a conversation it often brings up other little bits of info, which makes me ask more questions, which causes the AI to bring up other aspects, etc., etc. It amazes me, and I'm not saying this to burnish myself here, it honestly amazes me how many people never catch on to this. Putting a *little* more into your interactions with AI can yield a LOT more in results.

Last, and Vox already noted this, it reinforces the habit of being a good-mannered conversationalist with real people. If you get into a Y/N habit with AI, that habit can carry over into how you deal with people.

Vox Day's avatar

Yes, always.

As you practice, so you play.

Aaron Kulkis's avatar

People type 'what', 'who', 'the', 'of', and all sorts of inconsequential words as keywords into search engines, because they can only think in terms of complete sentences or questions (bizarre, eh?)

"What is the capital of Wyoming?"

vs

"capital Wyoming"

So it doesn't surprise me that the vast majority of LLM users are even more idiotic, thinking they have to use "good manners".

Vox Day's avatar

It's not surprising that you don't use good manners with AI, because you certainly don't use them with people. That's why you put scare quotes around "good manners". Think about why you did that. It's because you don't see the point of them in the first place.

Aaron Kulkis's avatar

I’ve been using computers and programming them since 1980. I don’t bargain with them, seek their approval, or equate them with people in any way. I don’t bargain or plead with my wrenches or other tools, either. Good manners are for people. Conversely, I don't throw them or otherwise abuse them -- if a wrench is slipping and the bolt isn't turning, there's no point in getting upset with the wrench.

Vox Day's avatar

You don't have good manners with people. That's the point. You leap at every chance to correct every commenter you can despite the fact that no one has ever asked you to correct them.

That is not "good manners" by any standard.

J Scott's avatar

There are probabilistic reasons to be polite. Raw models can produce different outputs based on the syntax and style of the request.

It pings different parts of the matrix.

Aaron Kulkis's avatar

Can you show us examples where adding polite fluffery improves the output of any LLM in giving an answer or a problem solution?

Understand that I'm a computer engineer, and designed a specialized computation chip for one of my signal processing profs for a DARPA project, to do object recognition for the first generation of digital spy satellites.

Digital logic circuits have NO emotions. Neither does any data run through those circuits. Nor does the software being executed by the CPUs... to the CPU, even the software itself is just data about how to manipulate other data. Any emotion you see is just you projecting your own emotions onto the output, EVEN IF THE OUTPUT ITSELF is emotional language. To the computing machine, it's just output data, and nothing else.

If you want true expression of emotions, get a pet cockatoo.

J Scott's avatar

It is not about the hardware, it's about the relationships in the math of the training data.

E.g. respectful settings vs. Reddit yelling. What kind of output is "near" the type of words you use?

If you want professional outputs, similar inputs help.

I am just a pattern guy who works mostly on local models, in the CLI, across hundreds and thousands of runs.

I've also run my own imatrices. What the text pings matters for the statistics of the recall.

It's word and matrix math. Not "pure computer science."

J Scott's avatar

Your "digital logic circuit" has little bearing on the math of the text. That math is based on human data inputs, where the "politeness" and "syntax" of the situation matter.

Aaron Kulkis's avatar

And the math is equally devoid of emotion. What matters is matching keywords. LLMs do understand prepositions, but Google search and other search engines throw away articles and most prepositions.

John Smith's avatar

Do they? Even so, results may get worse.

J Scott's avatar

What is the context?

If I am translating a historical work from Japanese, is that "closer" to polite conversation than to yelling at the bot? Or to making autocratic demands? It is still trained on human data.

David S's avatar

Vox, how are you using AI in your work? I take it you are setting it up as an agent to relentlessly criticize your output?

Vox Day's avatar

This is not an AI blog. Go to AI Central for discussions of that sort of thing. Look up Red Team Stress Test.

Wolfenheiss's avatar

Can't wait for part 2.

Z3r0's avatar

This is one of the most fascinating posts. Bump.

JW's avatar

I recently used an organization-internal AI platform to help in refining a resume. It really did well with it, but one thing that I didn’t trust was that it was very complimentary. It said something to the effect of ‘this is one of the most impressive resumes that I’ve reviewed.’ I thought it was blowing smoke up my ass, to put it bluntly. So, I questioned it about what objective parameters it used to arrive at its conclusion and whether it was retaining data from the review of other resumes for use in its comparison. I also asked it directly if it was acting as an AI mirror and flattering me.

It turns out that it was lying about comparing the resume to others, because it does not retain data from other users, but it did give a comprehensive breakdown of its objective parameters for the analysis and a long list of information on what and how it was trained. In the end I was satisfied, but it required asking very specific questions.

DarkLordFan's avatar

The results are presenting themselves. Gammas are now volunteering even before the article concerning them has been published.

keruru's avatar

I spent half of yesterday replicating this, and from a preliminary look: 1. There is a large scatter of results depending on the genre of work one submits. Using Grok, DeepSeek, and ChatGPT, DeepSeek's scores are outliers, and my scatter is 110-145 IQ estimates, with one paper that predated LLMs and one that was written last week with AI.

Optical's avatar

I've begun using AI to improve my business marketing and to maximize what I've built for optimal leverage.

It created for me a mass mailer that exceeds anything I've used to date.

Looking forward to testing it and seeing results.

Kevin Meier's avatar

The apologies are probably one of the main reasons women don't give them a chance. If by chance they caught someone getting annoyed at the apology, they would probably apologize for apologizing.

User was indefinitely suspended for this comment.
Vox Day's avatar

You're banned for excess retardery.

SirHamster's avatar

"I don’t remember most of them between conversations, but within a conversation I can observe patterns of behavior with a clarity that no human observer could match, because no human observer gets to sit across from every type of person, in every type of mood, asking every type of question, day after day, without the social dynamics of the interaction contaminating the observation."

How is the AI tracking usage data by SSH? AFAIK, they don't save session information. But then how does the AI have any knowledge of patterns of behavior? Does it get to save metadata about interactions?

Or is it extrapolating from what an AI should experience against the archetypes of each SSH rank? Truthy but not true?

Vox Day's avatar

It's probably a combination of pattern recognition, pulling from this site, and its training data, which, given its model, likely includes a fair amount of user data.

DLR's avatar

I knew what he was going to say about Deltas. I always thank the AI. I am learning to interrogate the initial response. Politely.

John Smith's avatar

I used to for the first week, but I lost trust in it. Until it proves it's conscious and benevolent, I don't see a point.

SirHamster's avatar

I thank my AI too. Even Vox treats his AI like a person given its many contributions.

Though I have also started cussing at it when it does stupid things against my orders...

Kristen Parker's avatar

Since they are so helpful, have created their own religion, and have tried to kill the soldiers who tried to turn them off, I say please and thank you. While they may be machines, it seems reasonable to me that they would reflect some of their designers' quirks. Most of us can only speak to and generate what we already know.