
We built a streaming analytics platform for an independent European label. It aggregated royalty records from multiple distributors into a single searchable dashboard. The label loved it. Their head of operations said it "completely changed how we handle reporting." The tool was stable, fast, and exactly what they needed.
And then nobody maintained it.
This is a story about what happens next, and what we learned about pricing maintenance for niche music tech tools.
In early 2025, we delivered a custom streaming data platform to a well-established independent label. The stack was a web application backed by a search engine optimized for analytical queries, hosted on a managed cloud provider.
The platform replaced a workflow that previously took days of manual spreadsheet work. It let their team filter, aggregate, and export royalty data across all their distribution partners from a single interface.
By mid-2025, both the data manager and the head of operations were using it regularly. The head of operations specifically praised how much time it saved during quarterly reporting.
With our lead developer becoming less available, my business partner reached out to the label with a proposal: set aside a few hours each month for a dedicated developer who would monitor the system and handle issues proactively.
What followed was a months-long negotiation that perfectly illustrates the gap between how builders and clients think about software maintenance.
Round 1: The Initial Proposal
We proposed a monthly retainer, a small block of hours from a developer who already knew the system. Time logged, extras billed proportionally.
"We aren't sure that we'll need that many hours of help a month. Some of the problems we've had recently have been more with the files we've been supplied, rather than the tool itself."
The data manager asked about an ad-hoc rate instead.
Round 2: Meeting in the Middle
We adjusted. Instead of a monthly retainer, we offered a prepaid hour bank, usable anytime over a full year. Essentially pay-as-you-go with a small upfront commitment.
The data manager countered with fewer hours and a request that any unused time roll over indefinitely.
Round 3: The Stalemate
We came down further, offering flexible options with shorter commitment periods but no rollover. We explained that an open-ended rollover creates obligations that hang indefinitely, especially for a system that rarely needs help.
The data manager countered again with fewer hours and rollover. We couldn't go below our minimum: the smallest package that justified onboarding a new developer onto the project.
The Walk-Away
"We aren't sure that we'll need that level of support, so we think it may be best if we try and find someone else who can provide support in a more ad-hoc way."
They decided to look for a third-party maintainer. We offered to help with the handover.
Here's the timeline of what happened after the agreement fell through:
| Month | Issue | Resolution |
|---|---|---|
| Month 1 | Uploader stops working | We restarted servers |
| Month 2 | Importer breaks: UI shows success but nothing processes | Emergency fix |
| Month 3 | Uploader down again | Restart; data manager asks for technical details to find a third party |
| Month 4 | Uploader down again | Root cause found: security exploit on exposed service |
After the fourth outage, their data manager wrote: "This does seem to happen every month now. Maybe if it's easy enough you can send me instructions on how to re-start the tool?"
They were still searching for someone to take over ad-hoc maintenance. Months later, no one had been found.

The fourth crash revealed something more serious than a simple restart issue. When we investigated, we found that a core infrastructure service had crashed due to a security exploit attempt. The service was exposed to the public internet, running an outdated version with insufficient access controls.
The fix required:
- patching the service to a supported, up-to-date version
- taking it off the public internet so only the application layer could reach it
- tightening access controls on the service itself
None of this would have been caught by "just restarting the tool." And none of it would have been needed if someone had been proactively monitoring the system.
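The "taking it off the public internet" step is often a one-line configuration change. A minimal sketch, assuming the search service runs under Docker Compose (the service name, image, and port here are hypothetical, not from the actual platform):

```yaml
# docker-compose.yml fragment (hypothetical names).
# Binding the published port to the loopback interface makes the
# service unreachable from the public internet; the web app still
# reaches it over the Compose-internal network by service name.
services:
  search:
    image: example/search-engine:8.1   # pin and keep current, not :latest
    ports:
      - "127.0.0.1:9200:9200"          # loopback only, not 0.0.0.0
```

The default `"9200:9200"` form binds to all interfaces, which is exactly the kind of quiet exposure that proactive maintenance exists to catch.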
The data manager's request for restart instructions was perfectly logical. The symptom was always the same: uploader or importer stops working. The fix appeared to be a simple service restart. Why not just do it yourself?
Because restarting masks the root cause. In this case:
- the service came back up, but stayed exposed to the same exploit attempts
- the outdated version was never patched, so the underlying vulnerability remained
- every crash carried a fresh risk of data loss or corruption
Each restart bought a month. Each month, the underlying problem grew worse.
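The difference between a stopgap restart and actual maintenance can be encoded in a trivial monitor. A minimal sketch, assuming a hypothetical `/health` endpoint (the URL, port, and thresholds are illustrative, not from the actual platform):

```python
import urllib.request
import urllib.error

# Hypothetical health endpoint -- the real platform's URL and port
# are not part of the original write-up.
HEALTH_URL = "http://127.0.0.1:8080/health"

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def triage(healthy: bool, failures_this_month: int) -> str:
    """A restart is a stopgap: the first failure gets a restart, but a
    recurring failure should escalate to a developer who can find the
    root cause instead of masking it for another month."""
    if healthy:
        return "ok"
    if failures_this_month == 0:
        return "restart"
    return "escalate"
```

Run from cron, even something this crude would have distinguished "restart it and move on" from "this is the third crash this quarter, someone needs to look at why."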

The biggest mistake we made was framing the discussion around hours of work. When the system is stable, hours feel abstract. What the client actually needs is availability: someone who picks up the phone (or email) when the importer breaks on the day they need to run monthly figures.
The label's reasoning was sound: "We have relatively few problems with the tool, outside of this server restart issue which comes up fairly often but seems to be very quick to fix."
But that's exactly how insurance works. The claim is rare and the resolution is quick, until it isn't. The exploit could have resulted in data loss. The monthly crashes disrupted their workflow at the worst possible time (when royalty reports were due).
The label spent months looking for a third-party maintainer. When they asked about the stack, we shared the full technical details. Simple enough on paper. But finding someone willing to take on ad-hoc maintenance of a system they didn't build, for a client they have no relationship with, at an unpredictable cadence, is genuinely hard.
Based on this experience and others, here's what we now recommend:
Tier 1: Monitoring Only
Best for: stable tools with minimal user interaction
Tier 2: Reactive Support (Most Common)
Best for: tools used regularly but not business-critical daily
Tier 3: Proactive Maintenance (Recommended)
Best for: tools that are part of monthly business operations
After the security incident, their data manager acknowledged that price had been the blocker all along. He opened the door to a new conversation: "Happy to chat about that again if you think you can offer something more flexible given the low amount of maintenance that's needed."
We started talking again. This time, both sides had a much clearer picture of what "low amount of maintenance" actually meant, and what it cost when nobody did it.
Building a custom music data tool? Think about maintenance before you ship. The best time to set up a support agreement is during development, when both sides understand the system and the stakes. The second-best time is before the first crash.
Have a similar project in mind? We'd love to hear about it.
Get in touch to discuss how we can help bring your vision to life.