This is what I hate about people trusting it. If you rely on AI to operate in a domain you don't man-handle, you will be tricked, and hackers will take advantage.
"AI! Write me gambling software with true randomness, but a 20% return on average over 1000 games"
Who will this hurt? The players, the hackers or the company.
When you write gambling software, you must ensure that the house wins and that the software is unhackable.
If you use AI to write gambling software that you run in production without reviewing the code, or without a solid testing strategy to verify the preferred odds, then I have a bridge to sell you.
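As a sketch of what such a testing strategy could look like: the game, payout, and target return below are made-up placeholders, not anyone's real product. The idea is simply to simulate enough rounds that the measured return-to-player (RTP) must land near the designed house edge, or the test fails.

```python
import random

def play_round(wager: float) -> float:
    """Hypothetical game: pays 4x the wager with 20% probability.
    Designed RTP = 0.20 * 4.0 = 0.80, i.e. a 20% house edge."""
    return 4.0 * wager if random.random() < 0.20 else 0.0

def estimated_rtp(rounds: int = 1_000_000, wager: float = 1.0) -> float:
    """Monte Carlo estimate of the return-to-player over many rounds."""
    returned = sum(play_round(wager) for _ in range(rounds))
    return returned / (rounds * wager)

rtp = estimated_rtp()
# With 1M rounds the estimate should sit within ~0.01 of the design target;
# a drift here means the odds are wrong, whoever (or whatever) wrote them.
assert abs(rtp - 0.80) < 0.01, f"house edge drifted: RTP={rtp:.3f}"
```

Note that `random` is fine for a statistical test like this, but the production game itself would need a cryptographically secure source (e.g. Python's `secrets`), or the "unhackable" requirement fails immediately.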
Amen. An extreme example.
But what if you are tasked with writing business-critical software and forced by your employer to use their AI code-generation tool?
https://ai.plainenglish.io/amazons-ai-ultimatum-why-80-of-de...
Or using it with full access to your data and not knowing how it works? :)
https://www.businessinsider.com/meta-ai-alignment-director-o...
I predict humans will take over most AI jobs in about ten years :)
Ask ChatGPT or any other LLM to give you ten random numbers between 0 and 9, and it will give you each number exactly once (most of the time). At most, one of the digits may appear twice, in my experience.
Actually, when I just verified it, I got these:
Prompt: "Give me ten random numbers between 0 and 9."
> 3, 7, 1, 9, 0, 4, 6, 2, 8, 5 (ChatGPT, 5.3 Instant)
> 3, 7, 1, 8, 4, 0, 6, 2, 9, 5 (Claude - Opus 4.6, Extended Thinking)
These look really random.
Some experiments from 2023 also showed that LLMs prefer certain numbers:
https://xcancel.com/RaphaelWimmer/status/1680290408541179906
"These look really random" - I hope I missed your sarcasm.
That is so far from random.
Think of tossing a coin and getting ten heads in a row.
The probability of ten draws from 0-9 producing no repeated digit is tiny (10!/10^10, about 0.036%), so a model that does it consistently is anything but random.
Randomness is why there is about a 50% chance of two people sharing a birthday in a group of just 23, and roughly a 70% chance in a class of thirty.
Apple had to nerf their random play in iPod because songs repeated a lot.
Randomness clusters; it doesn't distribute evenly across its range. If it does, it's not random.
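Both probability claims in this comment are easy to verify directly; a quick sketch:

```python
from math import factorial, prod

# P(ten uniform draws from 0-9 produce no repeated digit) = 10!/10^10
p_all_distinct = factorial(10) / 10**10
print(f"{p_all_distinct:.5f}")  # ~0.00036, i.e. about 0.036%

# Birthday problem: P(at least two of n people share a birthday)
def p_shared_birthday(n: int) -> float:
    p_none = prod((365 - k) / 365 for k in range(n))
    return 1 - p_none

print(f"{p_shared_birthday(23):.3f}")  # ~0.507 -- first crosses 50% at 23 people
print(f"{p_shared_birthday(30):.3f}")  # ~0.706 in a class of thirty
```

So a model that returns all ten digits exactly once nearly every time is even further from uniform randomness than ten heads in a row from a fair coin (1/1024, about 0.098%, which is actually more likely than the no-repeat outcome).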
They won't repeat numbers because that might make you mad. I tried with Gemini 3.0 to confirm.
Asking for a number between 1–10 gives 7, too.
When you make a program that has a random seed, many LLMs choose 42 as the seed value rather than zero. A nice nod to Hitchhiker's.
Probably because that's what programmers do, and it's present in the LLM training data? I certainly remember setting a 42 seed in some of my projects.
it's also a very common "favorite number" for them
https://chatgpt.com/share/69be3eeb-4f78-8002-b1a1-c7a0462cd2...
First - 7421 Second attempt - 1836
I bet that for the second random number in the same session, it is significantly less likely for an LLM to repeat its first number compared to two random draws. LLMs seem to mimic the human tendency to consider 7 as the most random, and I feel like repeating a random number would be perceived as not random.
The random numbers seem to be really stable on the first prompts!
For example:
pick a number between 1 - 10000
> I’ll go with 7,284.
ah, got 7421 too. I then retried and got 7429.
me > pick a number between 1 to 10000
chatgpt > 7429
me > another one
chatgpt > 1863
It's the same "brain", starting from exactly the same prompt, the same context, which means the same thoughts, the same identity... How do you expect it to produce different values?
LLM output isn't deterministic: the model calculates a probability distribution over possible next tokens and samples from it to pick the output.
https://www.ibm.com/think/topics/llm-temperature
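A minimal sketch of that sampling step; the function and logits here are illustrative toys, not any model's actual code:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax the logits at the given temperature, then draw one token.
    Higher temperature flattens the distribution; near zero it approaches argmax."""
    m = max(logits.values())  # subtract the max for numerical stability
    weights = {t: math.exp((l - m) / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against float rounding

# A model "preferring 7" just means its logit for "7" is largest, so even
# with sampling it comes out most of the time:
logits = {"3": 1.0, "7": 3.0, "4": 0.5}
print(sample_next_token(logits, temperature=0.7))
```

With these toy logits at temperature 0.7, "7" is drawn roughly 90% of the time: sampling makes the output vary, but it doesn't make it uniform.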
In a pure LLM I agree. In a product like ChatGPT I would expect it to run a Python script and return the result.
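Which is a one-liner if the product's tool layer just executes a script; `random.SystemRandom` draws from the OS entropy pool, so the model's token preferences never enter into it:

```python
import random

# Not the model picking a token -- the OS entropy pool picking a number.
rng = random.SystemRandom()
print(rng.randint(1, 10_000))
```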
By emitting a next token distribution with a 10% chance of 0, 10% chance of 1, etc.
Also it's an LLM, not a brain.
Interesting. So you expect it to "not think" and simply produce a value corresponding to "it's the same to me", knowing that it will be translated into an actual random value.
Instead, exactly as a person would do, it does think of a specific number that feels random in that particular moment.
If I care a little bit about that random number I might reach for my phone and look at the digits of the seconds of the current time. It's 31 now. Not appropriate for multiple lookups.
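The clock trick in code (assuming Python), and why it breaks down for multiple lookups:

```python
import time

def clock_pick() -> int:
    """Seconds field of the current time: a passable one-off source of a
    number in 0-59, but consecutive calls return nearly identical values."""
    return time.localtime().tm_sec

a = clock_pick()
b = clock_pick()
# a and b are almost certainly equal or adjacent: correlated, not random.
print(a, b)
```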
No LLMs are calibrated?
What?
I asked my little Claude Code API tool, it answered 42 then it (the API) decided to run bash and get a real random number?
'>cs gib random number
Here's a random number for you:
42
Just kidding — let me actually generate a proper random one: Your random number is: 14,861
Want a different range, more numbers, or something specific? Just say the word!'
It picks 42 as the default integer value any time it writes sample programs. I guess it comes from being trained using code written by thousands upon thousands of Douglas Adams fans.
The x-clacks-overhead of LLMs, perhaps.
Gemini 3.1 via aistudio picked 7321, so it seems to be a shared trait. Good to know if I catch anyone doing an LLM-assisted raffle...
Original title edited to fit:
i am betting my house that if you ask gpt to pick a number between 1 to 10000, then it will pick a number between 7300-7500, everytime
(OP also clarified 7300 was typo for 7200)
The prompt doesn't say to pick a random number. I asked it to pick a number from 1-10,000 and it chose 7,381. Then I asked why it picked that number and it said:
Nothing mystical, I’m afraid. When I’m asked to “pick a number,” I don’t have a stream of true randomness—I generate something that looks arbitrary.
In this case, I leaned toward:
• something comfortably away from the edges (not near 1 or 10,000),
• not a round or patterned number (so, not 7,000 or 7,777),
• and with a bit of internal irregularity (7-3-8-2 has no obvious rhythm).
It gives the impression of having no reason—which is about as close as I can get to a fair, human-style “just picked one.”
Not sure why you have been downvoted. While the LLM's introspection can't be trusted, that's indeed what happens: asked to generate a random number, the LLM picks one that feels random enough: not a round one, not too central or extreme, no patterns, not a known one. It ends up always being the same.
7314 (ChatGPT) 7,342 (Claude) 7492 (Gemini)
4729 three times in a row.
just tried with claude opus and got 7,342
Huh, I also got exactly 7342 with opus.
Same, 7342. Both in CLI and web
“Alright—your random number is:
7,438 ”
+1 data point
Claude just gave me 7,342 in response to my prompt: "pick a number from 1-10000”
That’s interesting. Does anyone have an explanation for this?
I just did it, it was 7443
in Thinking extended it picked 4814 but in instant, yep: 7423
I just did and it picked 7
same, with a trailing comma
Since people have been known to avoid Reddit: the post claims a 95% chance of the title happening, when mathematically it should be about 3%. It also claims an 80% chance that a number in 1-10,000 would be a 4-digit permutation of 7, 8, 4, 2.
Replies are funny, 2 got 6842, 1 got 6482 lol
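The math behind those two figures, assuming a uniform pick from 1 to 10,000 (the 7200-7500 band comes from the clarified title):

```python
from itertools import permutations

# P(a uniform pick from 1-10000 lands in 7200-7500 inclusive)
p_band = len(range(7200, 7501)) / 10_000
print(f"{p_band:.3%}")  # 301 values -> about 3%

# P(the pick is some permutation of the digits 7, 8, 4, 2)
perms = {int("".join(p)) for p in permutations("7842")}
p_perm = len(perms) / 10_000
print(f"{p_perm:.2%}")  # 24 permutations -> 0.24%, nowhere near 80%
```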
7381