Case Study

Maintaining Music Tech Tools: The SLA Dilemma for Small Teams

What happens when a custom streaming analytics tool works perfectly, right up until nobody is responsible for keeping it running? A real story from the music industry.

Key Takeaways

With zero maintenance, a previously stable tool began crashing monthly, and an exposed service was eventually hit by a security exploit.
The original builder is almost always the cheapest and fastest option for ongoing support.
Frame maintenance as availability (named contact, SLA) not hours of work.
A small prepaid retainer of 5-12 hours per year prevents compounding technical debt.

We built a streaming analytics platform for an independent European label. It aggregated royalty records from multiple distributors into a single searchable dashboard. The label loved it. Their head of operations said it "completely changed how we handle reporting." The tool was stable, fast, and exactly what they needed.

And then nobody maintained it.

This is a story about what happens next, and what we learned about pricing maintenance for niche music tech tools.

The Tool That Worked Too Well

In early 2025, we delivered a custom streaming data platform to a well-established independent label. The stack was a web application backed by a search engine optimized for analytical queries, hosted on a managed cloud provider.

The platform replaced a workflow that previously took days of manual spreadsheet work. It let their team filter, aggregate, and export royalty data across all their distribution partners from a single interface.

By mid-2025, both the data manager and the head of operations were using it regularly. The head of operations specifically praised how much time it saved during quarterly reporting.

This is the paradox: the better a tool works, the less visible its maintenance needs become. When everything runs smoothly, "maintenance" feels like paying for nothing.

The Negotiation Spiral

With our lead developer becoming less available, my business partner reached out to the label with a proposal: set aside a few hours each month for a dedicated developer who would monitor the system and handle issues proactively.

What followed was a months-long negotiation that perfectly illustrates the gap between how builders and clients think about software maintenance.

Round 1: The Initial Proposal
Round 2: Meeting in the Middle
Round 3: The Stalemate

The Walk-Away

When a client walks away from a maintenance agreement, neither side is wrong. The client sees a stable system and doesn't want to pay for peace of mind. The builder sees accumulated technical debt and knows that "stable" is temporary without active care.

The Monthly Crashes

Here's the timeline of what happened after the agreement fell through:

| Month | Issue | Resolution |
| --- | --- | --- |
| Month 1 | Uploader stops working | We restarted servers |
| Month 2 | Importer breaks: UI shows success but nothing processes | Emergency fix |
| Month 3 | Uploader down again | Restart; data manager asks for technical details to find a third party |
| Month 4 | Uploader down again | Root cause found: security exploit on exposed service |

After the fourth crash, their data manager wrote: "This does seem to happen every month now. Maybe if it's easy enough you can send me instructions on how to re-start the tool?"
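For what it's worth, the restart being asked about is usually a one-liner, which is exactly why it is so tempting. A hypothetical version, assuming systemd-managed services (the unit names are placeholders, not the real deployment):

```shell
# Hypothetical restart runbook, assuming systemd-managed services.
# Unit names are placeholders. This clears the symptom; it does not
# touch the root cause.
sudo systemctl restart search-backend.service uploader.service

# Confirm both services came back up.
systemctl status search-backend.service uploader.service --no-pager
```

Instructions like these are easy to hand over, and that ease is the trap the next sections describe.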

They were still searching for someone to take over ad-hoc maintenance. Months later, no one had been found.

The Security Debt Nobody Saw

The fourth crash revealed something more serious than a simple restart issue. When we investigated, we found that a core infrastructure service had crashed due to a security exploit attempt. The service was exposed to the public internet, running an outdated version with insufficient access controls.

This is what "no maintenance" actually looks like. It's not just restarts. It's unpatched services, exposed ports, and outdated dependencies quietly accumulating risk until something breaks or, worse, gets exploited.

The fix required:

  • Binding the service to localhost only
  • Rotating credentials
  • Locking down external access
  • Upgrading to the latest stable version
  • Applying security patches
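The steps above can be sketched concretely. The stack is deliberately unnamed in this story, so the snippet assumes an Elasticsearch-style search service on a Debian/Ubuntu host; the port, package name, and paths are placeholder assumptions, not the real deployment:

```shell
# Hypothetical hardening sketch for an Elasticsearch-style service.
# Port, package, and paths are placeholders.

# 1. Bind the service to localhost only (in elasticsearch.yml or equivalent):
#      network.host: 127.0.0.1

# 2. Block external access at the firewall as a second layer of defense:
sudo ufw deny 9200/tcp

# 3. Rotate credentials so anything captured during the exploit attempt is useless:
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

# 4. Upgrade to the latest stable version and apply pending security patches:
sudo apt-get update && sudo apt-get install --only-upgrade elasticsearch
sudo systemctl restart elasticsearch
```

Every line here is routine work for whoever deployed the system, and invisible work to everyone else.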

None of this would have been caught by "just restarting the tool." And none of it would have been needed if someone had been proactively monitoring the system.
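As a sketch of what "proactively monitoring" can mean in practice, here is a minimal health-check loop in Python. The endpoint URL is hypothetical, and the real system's stack isn't named in this story; the point is to alert a human after repeated failures instead of quietly restarting and masking the root cause.

```python
# Minimal proactive-monitoring sketch. The endpoint URL is hypothetical;
# the idea is to escalate to a human after consecutive failures rather
# than silently restarting the service.
import urllib.request


def is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service responds successfully over HTTP."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False


def should_alert(checks: list[bool], threshold: int = 3) -> bool:
    """Escalate once the last `threshold` checks have all failed."""
    if len(checks) < threshold:
        return False
    return not any(checks[-threshold:])
```

A cron job calling `is_up` every few minutes and feeding the results into `should_alert` would have surfaced the first crash within minutes, rather than on the day the monthly figures were due.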

The "Just Restart It" Trap

The data manager's request for restart instructions was perfectly logical. The symptom was always the same: the uploader or importer stopped working. The fix appeared to be a simple service restart. Why not just do it yourself?

Because restarting masks the root cause. In this case:

  • The uploader crashed because a backend service crashed
  • The backend service crashed because of an exploit attempt on an exposed port
  • The port was exposed because no one had hardened the configuration after deployment
  • The configuration was unhardened because there was no maintenance agreement

Each restart bought a month. Each month, the underlying problem grew worse.

What We Learned

For builders: price for availability, not just hours

The biggest mistake we made was framing the discussion around hours of work. When the system is stable, hours feel abstract. What the client actually needs is availability: someone who picks up the phone (or email) when the importer breaks on the day they need to run monthly figures.

If we did this again, we would frame it differently: "For X per month, you get a named contact who monitors your system, responds within 24 hours, and handles up to Y requests. No unused hours to argue about."

For clients: maintenance is insurance, not a service

The label's reasoning was sound: "We have relatively few problems with the tool, outside of this server restart issue which comes up fairly often but seems to be very quick to fix."

But that's exactly how insurance works. The claim is rare and the resolution is quick, until it isn't. The exploit could have resulted in data loss. The monthly crashes disrupted their workflow at the worst possible time (when royalty reports were due).

For everyone: the handover gap is real

The label spent months looking for a third-party maintainer. When they asked about the stack, we shared the full technical details. Simple enough on paper. But finding someone willing to take on ad-hoc maintenance of a system they didn't build, for a client they have no relationship with, at an unpredictable cadence, is genuinely hard.

The original builder is almost always the cheapest and fastest option for maintenance. They know the codebase, the infrastructure, and the client's workflow. Every handover involves a ramp-up period where the new maintainer is slower, more expensive, and more likely to introduce regressions.

A Framework for Music Tech Maintenance Agreements

Based on this experience and others, here's what we now recommend:

  • Tier 1: Monitoring Only
  • Tier 2: Reactive Support
  • Tier 3: Proactive Maintenance

The Ending (So Far)

After the security incident, their data manager acknowledged that price had been the blocker all along. He opened the door to a new conversation: "Happy to chat about that again if you think you can offer something more flexible given the low amount of maintenance that's needed."

We started talking again. This time, both sides had a much clearer picture of what "low amount of maintenance" actually meant, and what it cost when nobody did it.


Building a custom music data tool? Think about maintenance before you ship. The best time to set up a support agreement is during development, when both sides understand the system and the stakes. The second-best time is before the first crash.
