
We built a streaming analytics platform for an independent European label. It aggregated royalty records from multiple distributors into a single searchable dashboard. The label loved it. Their head of operations said it "completely changed how we handle reporting." The tool was stable, fast, and exactly what they needed.
And then nobody maintained it.
This is a story about what happens next, and what we learned about pricing maintenance for niche music tech tools.
In early 2025, we delivered a custom streaming data platform to a well-established independent label. The stack was a web application backed by a search engine optimized for analytical queries, hosted on a managed cloud provider.
The platform replaced a workflow that previously took days of manual spreadsheet work. It let their team filter, aggregate, and export royalty data across all their distribution partners from a single interface.
By mid-2025, both the data manager and the head of operations were using it regularly. The head of operations specifically praised how much time it saved during quarterly reporting.
With our lead developer becoming less available, my business partner reached out to the label with a proposal: set aside a few hours each month for a dedicated developer who would monitor the system and handle issues proactively.
What followed was a months-long negotiation that perfectly illustrates the gap between how builders and clients think about software maintenance.
Here's the timeline of what happened after the agreement fell through:
| Month | Issue | Resolution |
|---|---|---|
| Month 1 | Uploader stops working | Server restart |
| Month 2 | Importer breaks: UI shows success but nothing processes | Emergency fix |
| Month 3 | Uploader down again | Restart; data manager asks for technical details to find a third party |
| Month 4 | Uploader down again | Root cause found: security exploit on exposed service |
After four rounds, their data manager wrote: "This does seem to happen every month now. Maybe if it's easy enough you can send me instructions on how to re-start the tool?"
They were still searching for someone to take over ad-hoc maintenance. Months later, no one had been found.

The fourth crash revealed something more serious than a simple restart issue. When we investigated, we found that a core infrastructure service had crashed due to a security exploit attempt. The service was exposed to the public internet, running an outdated version with insufficient access controls.
The fix required:

- upgrading the service off its outdated, vulnerable version;
- taking it off the public internet and restricting access to the application's own network;
- tightening its access controls.

None of this would have been caught by "just restarting the tool." And none of it would have been needed if someone had been proactively monitoring the system.
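Proactive monitoring doesn't have to be elaborate. A minimal sketch, assuming a hypothetical `/health` endpoint on localhost (not the platform's real stack), is a cron-run check that raises an alert instead of failing silently:

```python
import urllib.request
import urllib.error

# Hypothetical endpoint: the real platform's URL and port are assumptions.
HEALTH_URL = "http://localhost:8080/health"

def service_is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    if not service_is_up(HEALTH_URL):
        # A real monitor would page someone here, not just restart the service.
        print(f"ALERT: health check failed for {HEALTH_URL}")
```

Even something this simple would have surfaced the monthly crashes the day they happened, instead of the day reports were due.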
The data manager's request for restart instructions was perfectly logical. The symptom was always the same: uploader or importer stops working. The fix appeared to be a simple service restart. Why not just do it yourself?
Because restarting masks the root cause. In this case:

- the service wasn't failing because of a transient glitch; it was being probed and exploited;
- each restart brought it back up in the same exposed, outdated, under-protected state;
- nothing about a restart stopped the next exploit attempt.

Each restart bought a month. Each month, the underlying problem grew worse.

The biggest mistake we made was framing the discussion around hours of work. When the system is stable, hours feel abstract. What the client actually needs is availability: someone who picks up the phone (or email) when the importer breaks on the day they need to run monthly figures.
The label's reasoning was sound: "We have relatively few problems with the tool, outside of this server restart issue which comes up fairly often but seems to be very quick to fix."
But that's exactly how insurance works. The claim is rare and the resolution is quick, until it isn't. The exploit could have resulted in data loss. The monthly crashes disrupted their workflow at the worst possible time (when royalty reports were due).
The label spent months looking for a third-party maintainer. When they asked about the stack, we shared the full technical details. Simple enough on paper. But finding someone willing to take on ad-hoc maintenance of a system they didn't build, for a client they have no relationship with, at an unpredictable cadence, is genuinely hard.
Based on this experience and others, here's what we now recommend:

- **Price availability, not hours.** The client is buying a response when something breaks, not a bucket of work.
- **Set up the support agreement during development**, before the first crash, while both sides still understand the system and the stakes.
- **Include proactive monitoring**, so recurring crashes get investigated rather than repeatedly restarted.
After the security incident, their data manager acknowledged that price had been the blocker all along. He opened the door to a new conversation: "Happy to chat about that again if you think you can offer something more flexible given the low amount of maintenance that's needed."
We started talking again. This time, both sides had a much clearer picture of what "low amount of maintenance" actually meant, and what it cost when nobody did it.
Building a custom music data tool? Think about maintenance before you ship. The best time to set up a support agreement is during development, when both sides understand the system and the stakes. The second-best time is before the first crash.
Have a similar project in mind? We'd love to hear about it.
Get in touch to discuss how we can help bring your vision to life.