ChatGPT envisions itself as a smiling man with brown hair and fair skin
OpenAI's GPT-4o imagines itself as a generic brown-haired white man.

OpenAI's newly launched image-generation model, GPT-4o, consistently envisions itself as a brown-haired, bespectacled white man, regardless of the artistic style requested for its self-portrait. AI researcher Daniel Paleka documented the phenomenon on Substack, noting that the model defaulted to this imagery even when asked to render itself as manga, comic book art, or a tarot card. The tendency highlights how AI systems can absorb and reproduce the implicit assumptions present in their training data and development environments.
While OpenAI's image generator first drew attention for mimicking the distinctive style of Studio Ghibli, this quirk points to a deeper issue of representation and bias in machine learning models. Paleka offers several theories for why ChatGPT portrays itself this way: a deliberate design decision by OpenAI to avoid resemblance to any real person, an inside joke among OpenAI staff, or an artifact of the data used to train the model. Each possibility raises broader ethical questions about how these systems come to conceptualize a "default" human figure and what social implications their outputs carry.
The model's behavior underscores long-standing concerns about bias in AI outputs. Historical examples, such as racially biased crime-prediction tools and facial recognition software, show how readily AI systems perpetuate stereotypes found in their training data. Examining these limitations, Paleka stresses the importance of questioning why the AI envisions itself first and foremost as a white man, a critique that extends to broader patterns of representation in tech hubs like the Bay Area and Brooklyn.
Gizmodo editor Alex Cranz explored the question further by asking ChatGPT directly about its embodiment. The chatbot described a digital, abstract self, emphasizing its role as an adaptive mirror of language patterns rather than a being with genuine consciousness or emotions. Cranz's observations echo earlier critiques: large language models (LLMs) are complex prediction engines that reflect the biases present both in their data and in the perspectives of their developers.
These findings renew questions about AI developers' responsibility to identify and mitigate embedded biases. As generative models like ChatGPT are integrated into ever more applications, curating diverse training data and scrutinizing algorithmic decision-making become critical. Stakeholders and developers alike must stay alert to the stereotypes AI systems can perpetuate, as ChatGPT's self-image choices show, and push for systems that represent the full breadth of human identities.
Sources: OpenAI, Gizmodo, Substack