An AMC contract without a clear SLA is just a recurring invoice. The SLA is what defines the actual service the partner is committing to deliver, and the gap between a well-drafted SLA and a generic one is the difference between a program that works and one that produces quarterly arguments. Having negotiated and operated dozens of signage AMC contracts across pan-India clients, we find the structural patterns of a useful SLA are clear, and the common drafting mistakes are equally clear.

Start with response time, which is the most-discussed and least-useful single metric. Response time alone tells you nothing if it is not paired with a definition of what response means. Acknowledgement of the ticket is response. A diagnostic visit is response. A repair attempt is response. These are three different things, and a single response time figure that does not specify which one is being measured is not useful. The right structure is acknowledgement within X hours, on-site assessment within Y hours, repair attempt within Z hours, and resolution within W hours, with each tier having its own commitment.

For signage AMC across a typical Indian footprint, realistic SLA windows look like this: acknowledgement within 4 hours during business hours and 12 hours outside; on-site assessment within 24 hours in metros and tier-1 cities, 48 hours in tier-2, 72 hours in tier-3, and best-effort with parts dispatch tracking in tier-4 and remote locations; repair attempt typically within 24 hours of assessment if the parts are in the regional kit, and longer if specialised parts must be procured; resolution targets that depend on the failure category, with simple repairs targeted within 24 hours of the attempt and complex rebuilds targeted at 7 to 14 days.

The SLA needs to distinguish between failure categories. A single dark letter on a multi-letter sign is a different urgency from a complete fascia outage. A vinyl edge lift discovered during an audit is a different urgency from an active electrical fault. A reasonable SLA framework defines three or four severity levels with different response targets, rather than a single SLA that pretends all failures are equivalent. Severity 1 is safety, structural, or complete brand outage. Severity 2 is partial brand outage or significant cosmetic failure. Severity 3 is minor cosmetic. Severity 4 is preventive maintenance flag. Each tier has its own response and resolution targets.
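The severity and geography tiering described above is, in effect, a lookup table. A minimal sketch of that table follows; the hour figures are illustrative placeholders in the spirit of the windows discussed in this article, not recommended contract values, and the names `SLA_MATRIX` and `sla_target` are hypothetical.

```python
# Illustrative SLA matrix: (severity, city tier) -> response targets in hours.
# Figures are placeholders; a real contract substitutes negotiated values.
SLA_MATRIX = {
    # (severity, city_tier): (acknowledge_h, assess_h, repair_attempt_h)
    (1, "metro"): (4, 24, 24),
    (1, "tier2"): (4, 48, 24),
    (1, "tier3"): (4, 72, 24),
    (2, "metro"): (12, 48, 48),
    (2, "tier2"): (12, 72, 48),
    (2, "tier3"): (12, 96, 48),
}

def sla_target(severity: int, city_tier: str) -> tuple:
    """Return (acknowledgement, assessment, repair-attempt) windows in hours."""
    try:
        return SLA_MATRIX[(severity, city_tier)]
    except KeyError:
        raise ValueError(f"No SLA defined for severity {severity} in {city_tier}")
```

Expressing the tiers this way also makes the gaps visible: if a severity and geography combination has no entry, the contract has not actually committed to anything for that case.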

The SLA also needs to define what is in scope and what is not. Components covered by manufacturer warranty are usually replaced free, but the labour to replace them is part of the AMC. Vandalism, accident, and force majeure damage are typically out of scope and billed separately. Brand changes and rebrand work are out of scope. Third-party damage, like another vendor working on the building and damaging the sign, is usually out of scope but recoverable from the third party. A clear scope statement prevents most of the post-incident disputes about who pays for what.

Measurement and reporting are the next layer. An SLA without a defined measurement and reporting cadence is unenforceable. The right pattern is monthly reporting with SLA performance by tier and severity, a quarterly review with trend analysis, and an annual program review that drives the next contract cycle. The reporting must be granular enough to identify patterns, with site-level data rather than just network averages, and must be delivered in a format the brand team can actually consume.
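To show what site-level measurement might look like in practice, the sketch below computes acknowledgement-SLA attainment per severity tier from raw ticket records rather than pre-averaged numbers. The record fields (`ack_target_h`, `ack_actual_h`) and site codes are hypothetical, purely for illustration.

```python
from collections import defaultdict

# Hypothetical ticket records: each carries its severity, the acknowledgement
# window it was held to, and the actual hours elapsed to acknowledgement.
tickets = [
    {"site": "MUM-014", "severity": 1, "ack_target_h": 4, "ack_actual_h": 3},
    {"site": "PUN-031", "severity": 2, "ack_target_h": 4, "ack_actual_h": 6},
    {"site": "NAG-007", "severity": 1, "ack_target_h": 12, "ack_actual_h": 10},
]

def attainment_by_severity(tickets):
    """Fraction of tickets acknowledged within target, grouped by severity."""
    met, total = defaultdict(int), defaultdict(int)
    for t in tickets:
        total[t["severity"]] += 1
        if t["ack_actual_h"] <= t["ack_target_h"]:
            met[t["severity"]] += 1
    return {sev: met[sev] / total[sev] for sev in total}
```

Because the calculation runs off individual tickets, the same data can be re-grouped by site or region to surface the patterns the monthly report is supposed to expose.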

Penalties and credits are the most-debated SLA component. Some brands push for aggressive penalty structures, which sound good but tend to produce defensive partner behaviour and creative ticket categorisation. The healthier pattern is moderate service credits for sustained underperformance, paired with an escalation process and ultimately a termination right if performance is consistently below threshold. The credits are not meant to compensate the brand for the failure; they are meant to keep the partner honest about reporting and incentivised to improve.
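One way to structure moderate credits for sustained underperformance is a stepped schedule that only triggers below a quarterly attainment threshold, with a cap that keeps the credit a nudge rather than compensation. The percentages below are invented placeholders to show the shape, not recommended values.

```python
def quarterly_credit(attainment: float) -> float:
    """Service credit as a fraction of the quarterly AMC fee.

    Stepped schedule with placeholder numbers: no credit at or above 95%
    SLA attainment for the quarter, escalating credits below, capped at
    10%; below the cap threshold, escalation and termination clauses
    apply rather than further credits.
    """
    if attainment >= 0.95:
        return 0.0
    if attainment >= 0.90:
        return 0.02   # 2% credit
    if attainment >= 0.85:
        return 0.05   # 5% credit
    return 0.10       # 10% cap
```

The cap is the important design choice: once credits stop growing, the only remaining lever is the escalation and termination path, which is where a consistently underperforming relationship belongs.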

Geographic differentiation in the SLA is not optional; it is essential. A uniform SLA across all geographies will either be missed in the harder geographies or padded in the easier ones. Tiered SLAs by city tier or by region acknowledge operational reality and allow the partner to commit to numbers they can actually deliver. The brand team should ask for the partner's actual response data from the previous quarter at each tier, not just the target SLA, before signing.

Exclusions and exceptions need to be enumerated, not implied. Public holidays, monsoon weeks with declared weather alerts, election period restrictions on outdoor work in certain cities, building access restrictions imposed by property owners, and similar real-world constraints affect SLA performance. A mature SLA explicitly handles these as defined exceptions, which prevents them from becoming excuses every quarter.

The parts and consumables policy is another commonly underspecified area. Who pays for replacement LED modules after warranty? Who pays for vinyl re-application after edge failure? Who pays for re-painting structural elements after rust treatment? The cleanest pattern is a defined parts list at standard rates, an annual consumables allowance per site that covers expected wear items, and a separate billing line for catastrophic or out-of-scope parts. Hidden parts costs are the second most common reason AMC contracts go wrong after SLA disputes.

The ticketing and audit trail layer deserves its own clause. Every reported issue, whether raised by the brand team, identified during a preventive visit, or surfaced by branch staff, should generate a ticket with a unique identifier, a timestamped trail of acknowledgement, dispatch, attempt, and resolution, and a closing report with photographs. SLA measurement is only credible if the ticketing data is auditable, and an SLA clause that does not specify the data standard is open to selective reporting. Brands should ask to audit the ticketing data quarterly, with read access to the partner's system rather than only the partner's curated reports.
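A minimal sketch of the auditable ticket record such a clause might require: a unique identifier plus an append-only, timestamped event trail from which the SLA clocks can be measured. The class and field names here are assumptions for illustration, not any particular partner's system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Ticket:
    """Auditable ticket: unique ID plus an append-only timestamped trail."""
    ticket_id: str
    site: str
    severity: int
    events: list = field(default_factory=list)  # (timestamp, stage, note)

    def log(self, stage: str, note: str = ""):
        # Stages the SLA clause expects: acknowledged, dispatched,
        # attempted, resolved (closing report and photos referenced in note).
        self.events.append((datetime.now(timezone.utc), stage, note))

    def stage_times(self) -> dict:
        """First timestamp recorded per stage, for SLA clock measurement."""
        times = {}
        for ts, stage, _ in self.events:
            times.setdefault(stage, ts)
        return times
```

The append-only trail is what makes quarterly audits meaningful: attainment numbers can be recomputed from the raw events rather than taken from a curated summary.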

The change management mechanism is another underspecified area. Brand spec changes, network expansion, scope adjustments, and rate revisions all happen during a multi-year AMC. The contract should define how changes are proposed, evaluated, priced, and approved. Without this, every change becomes a renegotiation and the partner relationship erodes. A simple monthly change log signed off by both parties is usually enough to keep the program clean.

Finally, the renewal and termination clauses deserve more attention than they typically get. A multi-year AMC that auto-renews without a formal review benefits the incumbent partner regardless of performance. The healthier pattern is a defined annual renewal review with explicit performance criteria, a termination notice period that allows for orderly handover to a successor partner if needed, and a transition support obligation on the outgoing partner including data handover, site dossier transfer, and crew briefings for the incoming team. These clauses rarely get used, but their presence keeps the partnership honest.

A few drafting principles that consistently pay off. Use plain language, not legal boilerplate, for the operational sections, because the operational sections need to be readable by ops teams on both sides. Include worked examples in the SLA appendix that illustrate how each tier and severity translates to actual response in concrete scenarios. Define escalation contacts on both sides with names, roles, and response time commitments at each level. Build in a quarterly review mechanism with a defined agenda and named participants. See /amc for the SLA framework we run with pan-India clients, /downloads for sample SLA templates, and /contact to discuss specific contract structures.