ChatGPT Kept Hallucinating a Soundslice Feature, So the Founder Made It Real

2025-07-10

Earlier this month, Adrian Holovaty, founder of the music education platform Soundslice, got to the bottom of a mystery that had puzzled him for weeks: screenshots of ChatGPT conversations kept being uploaded to his website in large numbers. Once he traced the cause, he realized ChatGPT had inadvertently become one of his company's most effective marketers, while simultaneously misinforming users about what the platform could actually do.

Holovaty is a co-creator of Django, the popular Python web framework, though he stepped down from leading that project in 2014. He launched Soundslice in 2012 and, as he told TechCrunch, has kept it bootstrapped ever since, balancing his roles as musician and startup founder. The platform's core feature is a video player synchronized with musical notation to guide practice and instruction; it also offers an AI-powered "Sheet Music Scanner" that converts photographed or scanned sheet music into interactive notated scores.

While monitoring error logs to improve the scanning feature, Holovaty unexpectedly encountered numerous ChatGPT conversation screenshots.

These uploads were generating a steady stream of errors. Instead of sheet music, the images contained ASCII tablature: a minimalist, text-based guitar notation written entirely in standard keyboard characters. Traditional notation, with its clefs and staves, relies on symbols that can't be typed on a basic QWERTY keyboard; ASCII tab is the workaround.
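For readers unfamiliar with the format, ASCII tablature looks roughly like this (a hypothetical riff for illustration, not an example from the article): six rows of hyphens represent the six guitar strings, and the numbers mark which fret to press.

```
e|-------------------|
B|-------------------|
G|-------------------|
D|-------7-5---------|
A|---5-7-------7-5---|
E|-3-----------------|
```

Because it is just plain text, this notation is trivial for a chatbot to generate, which helps explain why ChatGPT produced so much of it.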

The volume of ChatGPT-driven uploads didn't strain Soundslice's storage or bandwidth, Holovaty explained in a blog post; what bothered him was the mystery itself. He admitted confusion: "Our system wasn't designed to handle ASCII tablatures. Yet we were suddenly receiving hundreds of ChatGPT screenshots? This baffled me until I tested ChatGPT myself." His test revealed the answer: the AI was instructing users to upload these ASCII-tab screenshots to a Soundslice account in order to hear the music played aloud, something the scanner didn't actually support.

He identified a reputational risk: "New users arrive with misguided expectations. They're assured we can perform tasks we genuinely don't support."

The team considered options: plastering disclaimers across the site ("No, we can't convert ChatGPT conversations to audio") or engineering support for ASCII tablatures - a format Holovaty never intended to support.

They opted for the latter.

"This felt contradictory," he reflected. "I'm pleased to help users, but strange pressures emerged. Should we create features solely to counter misinformation?" He questioned if this marked the first instance where persistent AI hallucinations compelled product development.

Developers on Hacker News drew an apt comparison: "This mirrors an overzealous human sales rep who promises everything, then forces engineers to deliver impossible features." Holovaty agreed enthusiastically with the analogy.