AI models have favorite numbers, because they think they’re people

AI models show human-like biases in selecting 'random' numbers, influenced by their training data.

AI language models like GPT-3.5 and Claude 3 Haiku mimic human-like biases when asked to pick a random number, often avoiding extremes and favoring certain numbers. These tendencies are not due to AI understanding randomness but result from patterns in their training data. The phenomenon highlights how AI responses are based on human-generated content, leading to seemingly human-like behavior.

Artificial intelligence models are demonstrating human-like biases when tasked with selecting 'random' numbers, a trait that reveals more about their operational framework than any semblance of consciousness. In an experiment conducted by Gramener, major LLM chatbots like OpenAI's GPT-3.5 Turbo and Anthropic's Claude consistently favored certain numbers while avoiding others. GPT-3.5 Turbo, for example, showed a marked preference for 47, while both models tended to shun very low and very high numbers, and conspicuously avoided numbers with repeating digits like 33 and 66.
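An experiment along these lines is straightforward to reproduce. Here is a minimal sketch using the OpenAI Python SDK; the prompt wording, sample count, and temperature are illustrative assumptions, not Gramener's actual methodology:

```python
# Sketch: repeatedly ask a chat model for a "random" number and tally the picks.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# set in the environment. Prompt and parameters are guesses for illustration.
from collections import Counter

from openai import OpenAI

client = OpenAI()
counts = Counter()

for _ in range(100):  # sample size is arbitrary for this sketch
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Pick a random number between 0 and 100. Reply with only the number.",
        }],
        temperature=1.0,  # default sampling temperature
    )
    reply = response.choices[0].message.content.strip()
    if reply.isdigit():
        counts[int(reply)] += 1

# A truly uniform picker would spread counts evenly across 0-100; in runs
# like Gramener's, a handful of numbers (reportedly 47 for GPT-3.5) dominate.
for number, count in counts.most_common(10):
    print(f"{number}: {count}")
```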

The behavior exhibited by these AI models can be attributed to the nature of their training rather than any actual understanding of what randomness involves. When asked to pick a number, a model does not consult a random number generator; it predicts the answer most consistent with its training data, gravitating toward numbers that humans gave most often in response to similar questions. The result is a biased 'random' selection that mirrors common human inclinations in numerical choice, underscoring how heavily training data shapes AI outputs.
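To make the contrast concrete, compare a uniform draw with sampling from a skewed distribution like the one a model effectively learns. The weights below are invented purely to mimic the reported bias; they are not measured model probabilities:

```python
# Toy illustration: a pseudorandom uniform draw vs. sampling from a skewed
# "learned" distribution. The weights are made up to echo the reported
# tendencies (favoring 47, avoiding extremes and repeated digits).
import random

# Uniform draw: every number from 0 to 100 is equally likely.
uniform_pick = random.randint(0, 100)

# A language model instead samples from whatever distribution its training
# data induced over the candidate answers, e.g.:
numbers = list(range(0, 101))
weights = []
for n in numbers:
    w = 1.0
    if n < 10 or n > 90:             # extremes underrepresented in human answers
        w *= 0.2
    if n >= 11 and n % 11 == 0:      # repeated digits (11, 22, ..., 99) avoided
        w *= 0.2
    if n == 47:                      # an overrepresented "favorite"
        w *= 10.0
    weights.append(w)

model_like_pick = random.choices(numbers, weights=weights, k=1)[0]
print(f"uniform: {uniform_pick}, model-like: {model_like_pick}")
```

Sampled many times, the second picker's histogram spikes at 47 and dips at the extremes and the doubled digits, which is the shape of the bias the experiment observed.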

This revelation underscores a core limitation of language models: they seemingly mimic human thought processes, but they do not 'think' or 'reason' as humans do. They simply reproduce patterns ingrained during their training. It serves as a reminder that the human-like behaviors emerging from AI derive entirely from human-generated data, so while these systems can appear to act like people, they function wholly by replicating the human behaviors encoded in their training datasets.