Incentivizing Performance in Cloud and Outsourcing Contracts: Key Points

August 12, 2015

Defining and incentivizing high-quality performance is often key to the structure of complex service or technology-oriented agreements. In this class of agreements, merely having a performance warranty that answers a yes-or-no question – in breach or not in breach – just doesn’t do the job. To augment those performance warranties, a common approach is to use a “service level agreement” (SLA). The SLA is a familiar and essential feature of information technology-oriented agreements, such as outsourcing, cloud computing, software-as-a-service and the like. When properly structured and negotiated, SLAs can be an effective tool for more nuanced vendor management than a performance warranty alone could afford. This article catalogs some of the best practices for structuring a service level agreement and discusses elements enterprise corporate counsel can put to use in the IT and service contracts that cross their desks.

  1. Specify Metrics. An initial task in developing an SLA is to specify the metrics that will be utilized. Metrics should meet three criteria. First, they must be objectively measurable without undue overhead or difficulty. While it may be desirable, for example, to measure how long a particular process takes in order to incentivize minimizing that time, if there is no way to record when the process starts or stops, it cannot be a useful measurement. Second, they should be truly reflective of performance quality. Third, they should be within the control of the party whose performance is being measured. The last point may seem obvious, but identifying circumstances in which a metric is affected by the actions of others is not always easy.

    Consideration also must be given to the number of metrics to be measured. Part of the objective of an SLA structure is a simple, efficient system for contract management. If too many metrics are imposed, the benefits can be lost in the overhead of the additional infrastructure and processes needed to measure and monitor each one.
  2. Establish Metric Categories. Typical service level agreements divide the metrics to be measured into two categories: those with financial consequences and those without. Terminology varies widely. Often CPI (for “critical performance indicator”) is used for metrics with financial consequences and KPI (for “key performance indicator”) for those without. (Very confusingly, the term “SLA” is sometimes used to refer both to the CPI metrics and to the entire arrangement. This article uses “SLA” for the arrangement and “CPI” for a metric with financial consequences.) Another term, OLA (for “operating level agreement”), is also common and generally designates KPIs that sit outside a formal contractual structure.

    A metric under a service level agreement often moves through a life cycle of different statuses as a CPI or KPI. At the initiation of an agreement, both CPIs and KPIs are generally established. When new metrics are added later, they typically begin as KPIs for some period before being moved to CPI status, and there is generally a process by which the customer can “promote” or “demote” metrics from one status to another.
  3. Establish Measurement Systems. As noted, an effective CPI needs to be objectively measurable. The service level agreement should specify how each measurement is to be taken and which party bears the cost of maintaining that system and obtaining the periodic measurements. It is fine, for example, to say that an IT system will have a 10-millisecond response time to certain inputs, but most likely another piece of software must be licensed, implemented, deployed and maintained, all at some expense, in order to capture that information. In a complex environment with multiple metrics, a service level agreement typically also provides for changes in measurement systems over time and for the addition of new systems when metrics change.
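
    To illustrate the measurement burden, below is a minimal sketch, in Python, of what a response-time probe might look like; the vendor endpoint URL is hypothetical, and a contractual measurement system would also need retry handling, persistent storage and tamper-resistant logging, all at an expense the parties must allocate.

    # Minimal sketch of a response-time probe; the endpoint URL is hypothetical.
    import time
    import urllib.request

    def measure_response_ms(url: str, timeout: float = 5.0) -> float:
        """Return the elapsed wall-clock time, in milliseconds, for one request."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=timeout) as response:
            response.read()  # include the time to receive the body
        return (time.perf_counter() - start) * 1000.0

    samples = [measure_response_ms("https://vendor.example.com/health") for _ in range(10)]
    print(f"worst of 10 samples: {max(samples):.1f} ms")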
  4. Set Metric Levels. In addition to defining the metrics to be measured, a service level agreement also specifies the values the vendor is charged with achieving for those metrics. Typically two (sometimes three) levels are specified, each with different financial consequences. Again, terminology varies widely, but the three levels commonly addressed are “minimum,” “target” and “bonus” levels. What happens at each of these levels varies and is often the subject of negotiation. However, financial consequences usually result if performance is worse than the minimum level, or if performance is worse than the target level for multiple periods. If a “bonus” level is present – a point often seriously negotiated – it may entitle the vendor to additional compensation or allow it to offset other performance failures.

    In any event, a chart merely listing these metric levels is never enough. The service level agreement needs to specify how each of these items is to be calculated and what the specific financial consequences will be. Some SLA formats use the convention of converting all metrics onto a scale from 0 to 100 (or expressing them as percentages). As with complicated price formulas in a contract, the key is detailed, step-by-step language so the intended calculation can be performed without disagreement.
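
    To make the point concrete, here is a minimal sketch, in Python, of the kind of unambiguous calculation that the contract language should support; the linear 0-to-100 conversion and the 99.5/99.9 uptime levels are hypothetical, not drawn from any particular agreement.

    def score(measured: float, minimum: float, target: float) -> float:
        """Linear 0-100 score: 0 at or below minimum, 100 at or above target."""
        if measured <= minimum:
            return 0.0
        if measured >= target:
            return 100.0
        return 100.0 * (measured - minimum) / (target - minimum)

    print(score(99.7, minimum=99.5, target=99.9))  # 50.0 -- halfway between the levels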
  5. Determine Financial Consequences. When a CPI is missed, there are financial consequences. Often these are referred to as “penalties,” but careful lawyers will structure them as fee adjustments, because under the applicable legal principle, penalties in private contracts may not be enforceable by the courts. Typically, therefore, the financial adjustment is referred to as a “service credit.”

    The variations on how service credits may be structured and allocated are as numerous as the types of arrangements to which service level agreements are applied, but a typical structure starts with an “at-risk amount” – the maximum dollar amount a vendor could lose in a given billing period for service-level failures. There is then some mechanism to allocate portions of that amount to individual CPIs, so the service credit for a given CPI miss is a fraction of the at-risk amount. Though often heavily negotiated, the overall at-risk amount typically is set as a percentage of the amount being spent under the agreement.

    Importantly, the fractions of the at-risk amount assigned to individual CPIs often sum to more than the total at-risk amount (for example, one-fifth of the at-risk amount assigned to each of 10 CPIs), with the at-risk amount acting as a cap on total service credits if there are multiple misses in a period.

    When a service level agreement has a large number of CPIs and KPIs, it is common to use a point allocation system by which the customer can emphasize and de-emphasize different aspects of performance over the life of the agreement or as vendor performance in different areas becomes more or less problematic. For example, if an outsourcing agreement had CPIs measuring transaction accuracy and installation time, the customer could place a greater or lesser proportion of the at-risk amount on one activity or the other in any reporting period.
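
    Pulling these pieces together, below is a minimal sketch of a service-credit calculation under the structure described above; every figure (the monthly fee, the 10 percent at-risk percentage, the per-CPI weights and the CPIs missed) is hypothetical.

    MONTHLY_FEE = 500_000.00
    AT_RISK_PCT = 0.10                   # at-risk amount as a percentage of fees
    at_risk = MONTHLY_FEE * AT_RISK_PCT  # $50,000 maximum exposure this period

    # Customer-assigned weights, reallocable each period to emphasize
    # different aspects of performance; here they total 100 percent.
    weights = {"transaction_accuracy": 0.40, "installation_time": 0.30, "uptime": 0.30}
    missed = {"transaction_accuracy", "installation_time"}  # CPIs missed this period

    credits = sum(at_risk * weights[cpi] for cpi in missed)
    credits = min(credits, at_risk)      # the at-risk amount acts as a hard cap
    print(f"service credit this period: ${credits:,.2f}")  # $35,000.00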
  6. Decide How to Handle Infrequent Occurrences. One twist that often comes up in service level agreements is how to handle metrics that measure infrequently occurring events, or events that may have a very low frequency in some measurement periods. For example, if a metric measured how often an IT vendor delivered large print jobs on time, but in some months there were only one or two large print jobs, a single late delivery would drop the metric to 50 percent or zero. Of course, if such a possibility is significant, it calls into question whether that metric should be used as a CPI at all. But if there is good reason to make it a CPI, it would not be uncommon to negotiate some form of contractual relief for the vendor. This may involve aggregating across measurement periods, aggregating with other metrics, or simply excusing the violation.
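
    As a sketch of the aggregation approach, with hypothetical monthly counts, a rolling three-month window keeps one late delivery in a two-job month from cratering the score:

    # (on-time deliveries, total large print jobs) for each of the last three months
    history = [(12, 12), (9, 10), (1, 2)]  # month 3 had only two large print jobs

    month3_alone = history[-1][0] / history[-1][1]
    on_time = sum(ok for ok, _ in history)
    total = sum(n for _, n in history)
    print(f"month 3 measured alone: {month3_alone:.0%}")     # 50%
    print(f"three-month aggregate:  {on_time / total:.0%}")  # 92%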
  7. Avoid Agreements to Agree. In addition to specifying which metrics are to be measured, the service level agreement needs to set the values the parties seek to achieve (the target and/or minimum values). While this may seem obvious, all too often the values are left as an “agreement to agree” after the contract is signed. Once the contract is signed, each party has strong, conflicting incentives in how it would want to set those levels, so it becomes very difficult to reach the necessary mutual agreement and for those metrics to serve their function of incentivizing quality performance.

    In some instances there are compelling reasons to defer setting the levels, for example, where the processes being measured will be established only after the contract is in place. In that scenario, it is not unusual to use a baselining process that measures initial levels of the various metrics and then applies a formula to those results to set the going-forward levels. The difficulty then becomes defining a baselining process that cannot be artificially manipulated to suppress or inflate the CPI levels.
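
    One possible baselining formula is sketched below; the observation periods, the 2 percent margin and the figures are all hypothetical, and real agreements often also trim outliers to blunt attempts to manipulate the baseline.

    from statistics import mean

    baseline_periods = [97.2, 96.8, 97.5, 97.1]  # e.g., % of tickets resolved on time
    baseline = mean(baseline_periods)

    minimum = baseline                   # going-forward minimum level
    target = min(baseline * 1.02, 100)   # target set 2% above baseline, capped at 100%
    print(f"baseline {baseline:.2f} -> minimum {minimum:.2f}, target {target:.2f}")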
  8. Define High-Priority Items. Commonly, some subset of CPIs may be designated as higher priority or “critical” and carry some additional sanction beyond the associated service credits. This might be an increased service credit or some other remedy, such as contract termination for cause.
  9. Carefully Formulate Automatic Adjustments. Some agreements provide for changes in CPIs over time, often as a mechanism to incentivize continued gains in efficiency and performance. This can be a formula that simply “raises the bar” on the target or minimum level by some percentage each period (a simple ratchet of this kind is sketched below), or a more complex formula that takes actual performance into account and requires the vendor to deliver improvements over time. Another approach is to require a fee adjustment if the vendor consistently outperforms the required levels.

    All of these approaches require thoughtful analysis and work best when there is good historical data with which to evaluate trends. Both the vendor and the customer have opportunities to influence how these requirements apply, through how the initial levels are set and through how the contract is performed over time.
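
    For illustration, a simple “raise the bar” ratchet might look like the following sketch; the 5 percent annual improvement rate and the initial defect rate are hypothetical.

    def ratcheted_level(initial_max_defect_rate: float, year: int,
                        annual_improvement: float = 0.05) -> float:
        """Maximum allowed defect rate in a given contract year (year 0 = initial)."""
        return initial_max_defect_rate * (1 - annual_improvement) ** year

    for year in range(4):
        print(f"year {year}: max defect rate {ratcheted_level(2.0, year):.3f}%")
    # year 0: 2.000%, year 1: 1.900%, year 2: 1.805%, year 3: 1.715%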
  10. Use Comprehensive Reporting. An often-overlooked feature of service level agreements is robust reporting: assuring that the measurement and monitoring systems and processes are designed to provide detailed and timely reports. In a well-crafted service level agreement, reporting terms go beyond the technical aspects of obtaining and sharing data. They also include thoughtful processes, such as meetings and escalations, to address issues and perform appropriate root-cause analysis when a failure or a trend of failures occurs. The goal of an SLA generally is not to obtain or avoid the financial consequences, but to serve as a meaningful input to real-world actions that help the parties achieve the purposes of the agreement.
  11. Tailor Each SLA to Its Services. While the 10 points above have been termed “best practices” in this article, not all the features outlined here will be appropriate in every context. A nine-figure corporate infrastructure IT outsourcing should certainly address each of these items, but a $100,000 social-networking cloud service might be overwhelmed by an SLA that incorporated every element described above. Enterprise corporate counsel should be aware of these aspects of SLAs and fit the metric and incentive structure to an appropriate risk-and-benefit analysis of the services to be provided.

McGuireWoods’ technology and outsourcing practice team supports a wide range of business transactions driven by technology. In addition to counseling companies on developing and negotiating SLA arrangements for service transactions, the team assists in all phases of documenting, negotiating and handling disputes in connection with IT procurements, outsourcings, cloud computing, ERP implementation and data security. Our clients include Fortune 1000 corporations and emerging business enterprises spanning the industry spectrum. The practice team is chaired by Steve Gold in McGuireWoods’ Chicago office.
