ChatGPT's new image generator is quite effective at creating realistic receipts

ChatGPT's 4o model can create realistic fake receipts, posing fraud risks.

ChatGPT's new 4o model features an advanced image generator capable of producing realistic images containing text, raising alarm over its potential misuse for generating fake receipts. Deedy Das demonstrated this by posting a convincing fake receipt from a San Francisco steakhouse, which other users replicated with added details like stains for authenticity. TechCrunch also managed to create a fake Applebee's receipt, though it contained noticeable errors, including punctuation issues and incorrect math, showing that the output, while advanced, is not yet flawless. OpenAI's Taya Christianson pointed to image metadata as a safeguard and underscored the company's commitment to policy enforcement, balancing creative freedom with legitimate use cases.

ChatGPT’s 4o model has introduced a new image generation capability that excels at rendering realistic images containing text. Users have quickly turned it to fabricating believable restaurant receipts, raising concerns about its potential use in fraud. Venture capitalist and prolific social media user Deedy Das posted an example of a fake receipt for a San Francisco steakhouse that showcases the model’s precision, noting on X how such images could undermine verification processes that rely on photos of real receipts. Other users quickly followed suit, some adding touches like food stains to make their generated receipts look more authentic.

TechCrunch tested the image generator and succeeded in creating a fake receipt for an Applebee’s in San Francisco, though the result had imperfections. It contained telltale errors, such as improper punctuation and arithmetic that did not add up, the kind of calculation LLMs like ChatGPT are known to struggle with. Such flaws are easy to fix, however: they could be corrected with basic photo-editing software or more precise prompting, making the receipts far more convincing.
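To illustrate the kind of arithmetic slip TechCrunch observed, a simple consistency check can reveal when a receipt's line items, tax, and tip fail to match the printed total. The sketch below is purely illustrative: the item names, prices, and the `receipt_adds_up` helper are hypothetical and are not drawn from the receipts described above.

```python
# Hypothetical sanity check for the kind of arithmetic errors noted above:
# verify that listed line items, tax, and tip actually add up to the total.
def receipt_adds_up(items, tax, tip, printed_total, tolerance=0.01):
    subtotal = sum(price for _, price in items)
    expected = subtotal + tax + tip
    return abs(expected - printed_total) <= tolerance

# Example with made-up numbers resembling a steakhouse receipt
items = [("Ribeye", 58.00), ("Caesar Salad", 14.00), ("Old Fashioned", 16.00)]
print(receipt_adds_up(items, tax=7.48, tip=17.60, printed_total=113.08))  # True
print(receipt_adds_up(items, tax=7.48, tip=17.60, printed_total=109.99))  # False
```

A check like this only catches internal inconsistencies; a carefully edited fake whose numbers do add up would pass it.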

OpenAI acknowledges the capability and its potential implications for fraud. Spokesperson Taya Christianson told TechCrunch that all generated images carry metadata serving as an identifying marker, and that the company takes action when it finds violations of its usage policies, continually adapting based on how people use the tool and the feedback it receives.
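As a rough illustration of what inspecting an image's embedded metadata might look like, the sketch below dumps whatever metadata fields a file exposes using Pillow. This is an assumption-laden example: the filename is hypothetical, and provenance standards such as C2PA content credentials typically require dedicated verification tooling rather than a generic info or EXIF dump.

```python
# A minimal sketch for inspecting an image's embedded metadata with Pillow.
# Note: this prints generic info/EXIF fields; provenance data such as C2PA
# content credentials usually needs a dedicated verifier to parse fully.
from PIL import Image

def dump_image_metadata(path: str) -> None:
    with Image.open(path) as img:
        print(f"format={img.format} size={img.size}")
        # PNG text chunks, WebP/JPEG info entries, etc.
        for key, value in img.info.items():
            print(f"info[{key!r}] = {str(value)[:80]}")
        # EXIF tags, if the file carries any
        for tag_id, value in img.getexif().items():
            print(f"exif[{tag_id}] = {str(value)[:80]}")

if __name__ == "__main__":
    dump_image_metadata("generated_receipt.png")  # hypothetical filename
```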

Despite how easily counterfeit images can be produced, OpenAI maintains that the tool serves its users' broader creative goals. Christianson pointed to non-fraudulent uses for fake receipts, such as teaching financial literacy, creating original art, and producing product ads. The company aims to balance giving users creative freedom with enforcing its usage guidelines.

Ultimately, the debut of ChatGPT’s advanced image generator underscores the dual-edged nature of such technology: alongside new avenues for creativity and education, it opens a clear path to misuse. As the tool improves, so does the importance of safeguards and policy enforcement to keep that misuse in check.

Sources: TechCrunch, OpenAI, Deedy Das, Michael Gofman