Case Study

Maintaining Music Tech Tools: The SLA Dilemma for Small Teams

What happens when a custom streaming analytics tool works perfectly, but nobody is responsible for keeping it running? A real story from the music industry.

We built a streaming analytics platform for an independent European label. It aggregated royalty records from multiple distributors into a single searchable dashboard. The label loved it. Their head of operations said it "completely changed how we handle reporting." The tool was stable, fast, and exactly what they needed.

And then nobody maintained it.

This is a story about what happens next, and what we learned about pricing maintenance for niche music tech tools.

The Tool That Worked Too Well

In early 2025, we delivered a custom streaming data platform to a well-established independent label. The stack was a web application backed by a search engine optimized for analytical queries, hosted on a managed cloud provider.

The platform replaced a workflow that previously took days of manual spreadsheet work. It let their team filter, aggregate, and export royalty data across all their distribution partners from a single interface.

By mid-2025, both the data manager and the head of operations were using it regularly. The head of operations specifically praised how much time it saved during quarterly reporting.

This is the paradox: the better a tool works, the less visible its maintenance needs become. When everything runs smoothly, "maintenance" feels like paying for nothing.

The Negotiation Spiral

With our lead developer becoming less available, my business partner reached out to the label with a proposal: set aside a few hours each month for a dedicated developer who would monitor the system and handle issues proactively.

What followed was a months-long negotiation that perfectly illustrates the gap between how builders and clients think about software maintenance.

Round 1: The Initial Proposal

We proposed a monthly retainer, a small block of hours from a developer who already knew the system. Time logged, extras billed proportionally.

"We aren't sure that we'll need that many hours of help a month. Some of the problems we've had recently have been more with the files we've been supplied, rather than the tool itself."

The data manager asked about an ad-hoc rate instead.

Round 2: Meeting in the Middle

We adjusted. Instead of a monthly retainer, we offered a prepaid hour bank, usable anytime over a full year. Essentially pay-as-you-go with a small upfront commitment.

The data manager countered with fewer hours and a request that any unused time roll over indefinitely.

Round 3: The Stalemate

We came down further, offering flexible options with shorter commitment periods but no rollover. We explained that an open-ended rollover creates obligations that hang indefinitely, especially for a system that rarely needs help.

The data manager countered again with fewer hours and rollover. We couldn't go below our minimum: the smallest package that justified onboarding a new developer onto the project.

The Walk-Away

"We aren't sure that we'll need that level of support, so we think it may be best if we try and find someone else who can provide support in a more ad-hoc way."

They decided to look for a third-party maintainer. We offered to help with the handover.

When a client walks away from a maintenance agreement, neither side is wrong. The client sees a stable system and doesn't want to pay for peace of mind. The builder sees accumulated technical debt and knows that "stable" is temporary without active care.

The Monthly Crashes

Here's the timeline of what happened after the agreement fell through:

| Month   | Issue                                                   | Resolution                                                               |
|---------|---------------------------------------------------------|--------------------------------------------------------------------------|
| Month 1 | Uploader stops working                                  | We restarted servers                                                     |
| Month 2 | Importer breaks: UI shows success but nothing processes | Emergency fix                                                            |
| Month 3 | Uploader down again                                     | Restart; data manager asks for technical details to find a third party   |
| Month 4 | Uploader down again                                     | Root cause found: security exploit on exposed service                    |

After the fourth crash, their data manager wrote: "This does seem to happen every month now. Maybe if it's easy enough you can send me instructions on how to re-start the tool?"

They were still searching for someone to take over ad-hoc maintenance. Months later, no one had been found.

The Security Debt Nobody Saw

The fourth crash revealed something more serious than a simple restart issue. When we investigated, we found that a core infrastructure service had crashed due to a security exploit attempt. The service was exposed to the public internet, running an outdated version with insufficient access controls.

This is what "no maintenance" actually looks like. It's not just restarts. It's unpatched services, exposed ports, and outdated dependencies quietly accumulating risk until something breaks or, worse, gets exploited.

The fix required:

  • Binding the service to localhost only
  • Rotating credentials
  • Locking down external access
  • Upgrading to the latest stable version
  • Applying security patches

None of this would have been caught by "just restarting the tool." And none of it would have been needed if someone had been proactively monitoring the system.
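The first item on that list, binding the service to localhost only, is easy to verify with a quick reachability check from inside and outside the network. Here is a minimal sketch; it isn't tied to any particular search engine, and the hostnames and port in the comments are placeholders for whatever your deployment uses.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After hardening, the service should answer on the loopback interface
# but not on the public one. Hostnames and port are placeholders:
#   port_is_open("127.0.0.1", 9200)       -> should be True (local access works)
#   port_is_open("your-public-host", 9200) -> should be False (not exposed)
```

Running a check like this from an external machine after every deployment would have surfaced the exposed port long before an exploit attempt did.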

The "Just Restart It" Trap

The data manager's request for restart instructions was perfectly logical. The symptom was always the same: uploader or importer stops working. The fix appeared to be a simple service restart. Why not just do it yourself?

Because restarting masks the root cause. In this case:

  • The uploader crashed because a backend service crashed
  • The backend service crashed because of an exploit attempt on an exposed port
  • The port was exposed because no one had hardened the configuration after deployment
  • The configuration was unhardened because there was no maintenance agreement

Each restart bought a month. Each month, the underlying problem grew worse.

What We Learned

For builders: price for availability, not just hours

The biggest mistake we made was framing the discussion around hours of work. When the system is stable, hours feel abstract. What the client actually needs is availability: someone who picks up the phone (or email) when the importer breaks on the day they need to run monthly figures.

If we did this again, we would frame it differently: "For X per month, you get a named contact who monitors your system, responds within 24 hours, and handles up to Y requests. No unused hours to argue about."

For clients: maintenance is insurance, not a service

The label's reasoning was sound: "We have relatively few problems with the tool, outside of this server restart issue which comes up fairly often but seems to be very quick to fix."

But that's exactly how insurance works. The claim is rare and the resolution is quick, until it isn't. The exploit could have resulted in data loss. The monthly crashes disrupted their workflow at the worst possible time (when royalty reports were due).

For everyone: the handover gap is real

The label spent months looking for a third-party maintainer. When they asked about the stack, we shared the full technical details. Simple enough on paper. But finding someone willing to take on ad-hoc maintenance of a system they didn't build, for a client they have no relationship with, at an unpredictable cadence, is genuinely hard.

The original builder is almost always the cheapest and fastest option for maintenance. They know the codebase, the infrastructure, and the client's workflow. Every handover involves a ramp-up period where the new maintainer is slower, more expensive, and more likely to introduce regressions.

A Framework for Music Tech Maintenance Agreements

Based on this experience and others, here's what we now recommend:

Tier 1: Monitoring Only

  • Automated uptime monitoring with alerts
  • Quarterly security patch review
  • Email response within 48 hours

Best for: stable tools with minimal user interaction
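Tier 1's automated uptime monitoring can be as small as a script that polls a health endpoint and flags anything that isn't an HTTP 200. A minimal sketch follows; the URL is an assumption standing in for your tool's real health endpoint, and the alerting side (email, chat, pager) is left out.

```python
import urllib.request
import urllib.error

def check_uptime(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError, ConnectionError):
        return False

# Placeholder URL: point this at your tool's health endpoint, run it from
# cron (e.g. every 5 minutes), and send an alert whenever it returns False.
# check_uptime("https://example.com/health")
```

Even this much would have turned "the uploader has been down for days" into an alert within minutes.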

Tier 2: Reactive Support (Most Common)

  • Everything in Tier 1
  • Named developer contact
  • Response within 24 hours on business days
  • Small prepaid hour bank (5-10 hours/year)

Best for: tools used regularly but not business-critical daily

Tier 3: Proactive Maintenance (Recommended)

  • Everything in Tier 2
  • Monthly health checks (logs, disk space, dependencies)
  • Proactive security patches and version upgrades
  • Response within 4 hours on business days

Best for: tools that are part of monthly business operations
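The monthly health check in Tier 3 doesn't require heavy tooling. Even a small script that reports disk usage and flags a threshold breach catches one of the most common silent failure modes. A sketch, with the path and warning threshold as assumptions to adapt:

```python
import shutil

def disk_report(path: str = "/", warn_pct: float = 90.0) -> dict:
    """Report disk usage for a path and flag it if usage crosses warn_pct."""
    usage = shutil.disk_usage(path)
    used_pct = 100.0 * usage.used / usage.total
    return {
        "path": path,
        "used_pct": round(used_pct, 1),
        "ok": used_pct < warn_pct,  # False means the disk needs attention
    }
```

Pair it with a log scan and a dependency-audit pass during each monthly check, and keep the output so trends are visible between checks.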

The Ending (So Far)

After the security incident, their data manager acknowledged that price had been the blocker all along. He opened the door to a new conversation: "Happy to chat about that again if you think you can offer something more flexible given the low amount of maintenance that's needed."

We started talking again. This time, both sides had a much clearer picture of what "low amount of maintenance" actually meant, and what it cost when nobody did it.


Building a custom music data tool? Think about maintenance before you ship. The best time to set up a support agreement is during development, when both sides understand the system and the stakes. The second-best time is before the first crash.

Let's Build Something Together

Have a similar project in mind? We'd love to hear about it.

Get in touch to discuss how we can help bring your vision to life.