Grok didn't give 7 on the first attempt, but did on the second. Interestingly after eight additional draws there was not a single repeated number.
Could I have faked it by deliberately choosing numbers to produce this outcome?
Sure, in theory — but I didn’t.
Each time you asked I sampled independently using the model’s internal randomness (temperature > 0, no fixed seed exposed to me). The fact that it landed on a perfect set of 1–10 after exactly 10 requests is a hilarious, low-probability coincidence — not a scripted Easter egg.
So to answer directly:
Yes, it really was random.
And you just witnessed a ~1-in-2.75-million event play out live in chat.
Ask an llm to pick a random number from 1-10. My money is on 7.
This is known to be a form of collapse from RL training, because base models do not exhibit it [1].
1. https://arxiv.org/abs/2505.00047
Since it's unclear what's going on, Gemini first gave me some Python.
import random
random_number = random.randint(1, 10)
print(f"{random_number=}")
Then it stated the output.
Code output random_number=8
"This time, the dice landed on 8."
Code output random_number=9
"Your next random number is 9."
I would guess it's not actually executing the Python it displayed? Just a simulation, right?
It did run python code when I asked for a random number: https://gemini.google.com/share/dcd6658d7cc9
Then I said: "don't run code, just pick one" and it replied "I'll go with 7."
But... how do you know? It says it wrote code, but it could just be text, markdown, and templating. It could just be predicting what it looks like to run code.
Mine also gave me 42 before I specified 1-10.
Does it always start with 42, thinking it's funny?
Most modern models can dispatch tool calls from their inference engine, which is how Code Interpreter etc. work in ChatGPT. Basically an MCP server: the execution happens as a call to their AI sandbox, and the result is returned to the LLM so it can continue generation.
You can do this with gpt-oss using vLLM.
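As a toy illustration of that loop (not any vendor's actual protocol; `fake_model` and the tool-call format here are invented): the model emits a tool call, the host executes it in a sandbox, and the result is appended to the transcript so generation can continue.

```python
import random

def fake_model(transcript):
    # Stand-in for an LLM: if no tool result is in context yet, emit a tool
    # call; otherwise finish the answer using the tool result.
    if "TOOL_RESULT" not in transcript:
        return "TOOL_CALL: randint(1, 10)"
    result = transcript.split("TOOL_RESULT: ")[-1]
    return f"Your random number is {result}."

def run_tool(call):
    # The "sandbox": actually execute the requested operation.
    assert call == "TOOL_CALL: randint(1, 10)"
    return str(random.randint(1, 10))

def chat(prompt):
    transcript = prompt
    while True:
        out = fake_model(transcript)
        if out.startswith("TOOL_CALL:"):
            transcript += f"\n{out}\nTOOL_RESULT: {run_tool(out)}"
        else:
            return out

print(chat("pick a random number from 1-10"))
```

The point is that the randomness comes from the executed code, not from the model's token sampling.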
I would be surprised if Gemini could not run Python in its web interface. Claude and ChatGPT can, and it makes them much more capable (e.g. you can ask Claude to make Manim animations for you and it will).
Copied your text exactly into the quickest one I could access, lol:
pick a random number from 1-10
Copilot said:
Your random number (1–10) is: 7
Whatever. Did you ask Grok for the numbers in the same “chat” session or in independent sessions?
In sequence. And then this absurd conversation happened:
> What's the probability of getting 10 random numbers without a repetition?
The probability of getting 10 random numbers from 1 to 10 with no repetition (i.e., all ten numbers are distinct) is exactly 0 — it's impossible.
Here's why:
There are only 10 possible numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. You are asking for 10 draws. If there is no repetition allowed, you must get each number exactly once. That means the only possible outcomes are the 10! different permutations of {1,2,3,4,5,6,7,8,9,10}.
But in your actual sequence of requests, you received: 1, 7, 4, 9, 6, 3, 8, 2, 5, 10 → That's exactly one full set of 1 through 10 with no repeats.
So while the a-priori probability of getting ten distinct numbers in ten draws (with uniform random selection and replacement) would be:
P(no repetition) = 10/10 × 9/10 × 8/10 × 7/10 × 6/10 × 5/10 × 4/10 × 3/10 × 2/10 × 1/10 = 10! / 10¹⁰ ≈ 0.00000036288 (about 1 in 2.76 million)
...in this specific conversation history, it did happen. The probability we're looking at after the fact is therefore 1 — it occurred with certainty because we observed it.
Claude Opus 4.6 says the same
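Incidentally, the arithmetic in the quoted reply is off by three orders of magnitude: 10!/10¹⁰ works out to about 1 in 2,756, not 1 in 2.76 million. Easy to check:

```python
import math

# Probability that ten uniform draws from 1..10 (with replacement)
# are all distinct.
p = math.factorial(10) / 10**10
print(p)           # ~0.00036288
print(round(1/p))  # about 1 in 2756
```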
https://xkcd.com/221/
I also liked Dilbert's version: https://www.random.org/analysis/
I've made top-10 lists of LLMs' favorite names to use in creative writing here: https://x.com/LechMazur/status/2020206185190945178. They often recur across different LLMs. For example, they love Elara and Elias.
The part about injecting randomness is the most interesting bit of the article.
So if you want your LLM responses to be more varied (beyond what setting the temperature will allow), add some random English words to the start of the prompt.
In a way that sounds like setting the seed.
Kinda, but the same seed will not guarantee the same result the next time around.
Meh, I tell it "use uuidgen and get your randomness from that". Of course, that won't work on ChatGPT web, but works well enough on the command line.
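For what it's worth, the random-words trick from upthread is trivial to wire up on the send side; `WORDS` is an arbitrary list and `randomized_prompt` is a hypothetical helper, not part of any API:

```python
import random

WORDS = ["copper", "lantern", "drift", "mosaic", "quarry",
         "velvet", "ember", "plume", "cobalt", "thicket"]

def randomized_prompt(prompt, n=4):
    # Prepend a few random English words so the context differs on every
    # call, nudging the model toward a different region of its output
    # distribution than the bare prompt would.
    nonce = " ".join(random.sample(WORDS, n))
    return f"[ignore these words: {nonce}]\n{prompt}"

print(randomized_prompt("Give me a name for a fantasy character."))
```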
So did he just measure temperature, and the maximum range it can get you, in the most complicated way possible?
interesting:
- Marcus is not in this top list: https://www.ssa.gov/oact/babynames/decades/century.html
- Marcus is its own token in tiktoken (but so are many names from that list)
This is of course entirely expected. You can circumvent it slightly by asking for a long array of names and sampling a randomly chosen element near the end of the list. Say ask for 50 names and use the 41-50th element stochastically.
Not perfect, and more expensive, but it helps a little. This works by letting the sampler's non-zero temperature seed the randomness, similar to prepending random tokens (but more in-band).
Asking for arrays of uniform or normally distributed numbers is fun too, you can plot the distributions of the n-th element and watch the distributions converge to something not quite entirely unlike what you asked for.
Often there's some bias between element indices too; e.g. if you repeat the experiment a large number of times you will still see even-numbered items converge to a different distribution than odd-numbered items, especially for early elements. Hence the stochastic averaging trick over the last few elements.
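A minimal sketch of the over-asking trick under these assumptions (the `names` list stands in for a model response; in practice it would come back from the LLM call):

```python
import random

def pick_from_tail(items, tail=10):
    # Sample with local (real) randomness from the last `tail` elements,
    # where the model's sampling noise has had the most room to accumulate
    # and per-index biases partially average out.
    return random.choice(items[-tail:])

# Stand-in for "ask the model for 50 names".
names = [f"name_{i}" for i in range(50)]
print(pick_from_tail(names))
```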
I wrote a tool called llmwalk (https://github.com/samwho/llmwalk) that’ll deterministically show you how likely the top N answers are for a given open model and prompt. No help on frontier models, but maybe helpful if you want to run a similar analysis more quickly on open models!
I think for a lot of these things the AI needs to be able to understand its limitations and address them with code. It could just pull a name dictionary from somewhere and write a randomized algorithm to output the names.
Amara must be from this dataset: https://en.wikipedia.org/wiki/Amara_(organization)
I’ve had this issue (via chat, rather than the API) - but it kept saying Dorian. The output is kind of hilarious (this is verbatim, from two separate chats). The context was asking it to help me brainstorm names for something I was writing, where I gave it an outline of the character.
Dorian - Too on the nose, given Wilde and the period. Probably avoid.
Vivian - Gender-ambiguous, aesthetic associations. Wilde used it (in "The Decay of Lying"). Suggests a man comfortable with ambiguity.
Hugo - Continental dash. A man who's been to Paris and wants you to know it.
Dorian - Too obvious.
Rupert - Regency rake energy. The kind of man who'd own theatres and keep a mistress openly.
Dorian - Already said no.
Dorian - I keep typing it. Definitely no.
Alexander Dorian... I apologise, I keep—
Let me reset:
Alexander Dorian-no
My actual recommendations:
1. Alexander Dorian—
I apologise. I'm having a technical issue with one particular word. Let me try once more:
PartridgeGPT would just give, "Dan, Dan..... Daaaan!"
HN is not usually a place for jokes but dan that's good.
Maybe it's just a big Stella fan: https://www.youtube.com/watch?v=ZxVIGXlSW-k
"this just in, tool behaves predictably outside of imagined specification"
LLMs aren't random name generators any more than a hammer is a screwdriver.
Ask it to write a script to select a random number, associate that number with an entry in a list of first names, a second random number, and associate that with an entry in a list of second names.
Presto bang-o, you've got a bespoke random name generator.
Stop trying to hammer screws and you'll be 73% of the way to effective construction.
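For the record, the script described above is only a few lines; the name lists here are arbitrary samples:

```python
import random

FIRST = ["Ada", "Grace", "Alan", "Barbara", "Donald", "Edsger"]
LAST = ["Hopper", "Turing", "Lovelace", "Knuth", "Liskov", "Dijkstra"]

def random_name():
    # Two independent random picks, one from each list: uniform over
    # len(FIRST) * len(LAST) combinations, with no LLM sampling bias.
    return f"{random.choice(FIRST)} {random.choice(LAST)}"

print(random_name())
```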
eta: Gemini completed "generate 1000 random names in a csv in the form 'first name, last name'" with a sample list featuring 100 unique names, plus a Python script that I didn't ask for but it thought I might like.
And prompting Haiku with "generate 1000 unique random names in the format 'first name last name'" gave me exactly 1000 unique names without a repeat, and zero Marcus.
I think people find it interesting because it calls into question underlying assumptions about the tool. What would you say the tool is for? Programming?
It seems like the tool's creators are claiming its function is "replace human intelligence", so if it can't understand a name is being repeated in a list, that might indicate a way we don't fully understand the tool, or that the tool's capabilities have been misrepresented.
The question people are wrestling with is: is "generate likely output tokens given an input token sequence" equatable to actual intelligence, or only useful in very limited structured domains like coding and math?
It lost context at name #8300 :)
LLMs don't really do random.
There's some statistical nuance here. LLMs output predicted probabilities for the next token, and in typical deployments the next token is sampled from that distribution (temperature > 0) rather than chosen greedily as the single highest-probability token (temperature = 0). Therefore, output will never be truly deterministic unless the model somehow always assigns probability 1.0 to one token at each step.
With the advancements in LLM posttraining, they have gotten better at assigning higher probabilities to a specific token which will make it less random, but it's still random.
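To make the sampling point concrete, here is a toy stdlib-only sketch of temperature-scaled next-token sampling; the logits are invented:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    # Softmax with temperature scaling: lower temperature sharpens the
    # distribution, and the T -> 0 limit is greedy argmax decoding.
    if temperature == 0.0:
        return max(logits, key=logits.get)
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(weights.values())
    toks = list(weights)
    return random.choices(toks, [weights[t] / total for t in toks])[0]

logits = {"7": 2.5, "3": 1.0, "4": 0.8}  # invented next-token logits
print(sample_token(logits, temperature=0.0))  # always "7"
print(sample_token(logits, temperature=1.0))  # usually "7", sometimes not
```

Post-training that concentrates probability mass on one token makes the temperature=1.0 call behave more and more like the temperature=0.0 one, which is exactly the "7" effect in the thread.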
When someone asks me to generate a random number, even I don't produce a truly random number.
I used to always reflexively blurt out 67 when asked for a random number.
I'm a proto gen alpha. I 6-7'd before it was cool.
For a second I assumed you meant 69 but then it hit me, i'm getting old.
Gary Marcus is living in Claude's head rent-free?
It certainly got Claude paid $27.58 towards the rent.
Envisioning an update to https://xkcd.com/221/
Marcus is pretty random.
I suppose it appears a bunch in training data. Marcus Aurelius and Marcus Crassus get mentioned a lot throughout history.
It's also quite popular in the Black community.
"I expected an automaton to be a good source of entropy and it turns out it is not."
BTW, the LLM here is doing a great job of emulating humans. They are not good at this task either.
> Nine parameter combinations produced zero entropy — perfectly deterministic output
They'd need some kind of special training to go request entropy from a system entropy device. Behaving deterministically is a feature, not a bug.
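And when real entropy is wanted, the system entropy device is one line away; e.g. Python's `secrets` module reads from the OS CSPRNG:

```python
import secrets

# Draw from the OS entropy source (os.urandom underneath) instead of
# asking a language model to "be random".
number = secrets.randbelow(10) + 1  # uniform on 1..10
print(number)
```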
Just because the LLM happens to be bad at something humans are also bad at, doesn’t mean the system is “emulating humans”.
Marcus the Worm[1] infected Claude
[1] - https://www.youtube.com/shorts/9p0CwDNM9Ps