Undesirable Variation in Court Performance

Variation in the treatment of court users and court employees, in the time and cost of processing cases, in the reliability and integrity of case files, in compliance with court orders, and in other key areas of court performance is inevitable but generally not desirable. If you had your choice between processes that produced predictable and consistent results and ones that produced good results one day and bad results the next, poor quality under some circumstances and good quality under others, which would you choose? Both court managers and the public recognize the benefits of stable processes and consistent, predictable results.

Understanding and controlling variation (e.g., knowing whether a particular result falls outside established upper and lower “control limits”) are at the heart of quality improvement methods such as Total Quality Management (TQM) and Six Sigma. More than 25 years ago, W. Edwards Deming and Joseph Juran noted that variability in core measures of performance is a threat to an organization because it is evidence that the business is not being managed effectively, that managers have no control over their processes or the results those processes achieve. The magnitude of variation is an indicator of organizational health. “It all depends” is not a good answer to the question “How are we doing?”
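To make the idea of control limits concrete, here is a minimal Python sketch of one common convention: set the upper and lower limits at the mean plus or minus three standard deviations of a baseline period, then flag new results that fall outside them. The monthly clearance-rate figures, the baseline period, and the three-sigma rule are illustrative assumptions, not requirements of TQM or Six Sigma.

```python
from statistics import mean, stdev

# Hypothetical monthly case clearance rates (percent): a baseline year,
# followed by the three most recent months to be checked.
baseline = [94, 91, 97, 88, 93, 95, 92, 96, 90, 92, 94, 89]
recent = {"Jan": 93, "Feb": 74, "Mar": 95}

center = mean(baseline)
spread = stdev(baseline)

# One common convention: control limits at the mean +/- 3 standard deviations.
upper = center + 3 * spread
lower = center - 3 * spread

for month, rate in recent.items():
    status = "within" if lower <= rate <= upper else "OUTSIDE"
    print(f"{month}: {rate}% is {status} the control limits "
          f"({lower:.1f}%-{upper:.1f}%)")
```

A month that lands outside the limits is a signal to investigate the cause, not proof of failure; the point is simply that “outside the limits” is a defined, repeatable test rather than a hunch.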

Fair and Equal

For court leaders and managers, unmanaged variation, especially in court users’ encounters with the justice system, is a threat to quality and a potential breach of public trust and confidence. What does the public want from courts? What matters to them? Research on procedural justice indicates that they want to be treated equally, through neutral and unbiased procedures based on facts and the consistent application of rules. In other words, they want the same things that good court managers want: to reduce variation in processes and results and to stabilize performance at high levels of quality.

Managing Variation

It’s not all bad news for court managers. Managing and controlling variation has intuitive appeal. It has to be done at the level where the variation occurs, not at the level of averages or central tendencies. This too is common sense.

We have all seen claims by a major airline that it leads the industry in on-time performance, with departure and arrival data to prove it. Such claims may be legitimate, but we know that our specific flight was not on time, and that it may be the very flight that is never on time. High-level averages have strategic value, but they don’t give managers the information they need to improve performance. The only way to improve performance is to manage and control the variation around the average. (In a July-August 2005 Harvard Business Review article, John Fleming, Curt Coffman, and James K. Harter cite the absurd example of your physician basing treatment of your heart arrhythmia not on an assessment of your heart rate but on the average heart rate of your town.)

Court users experience variation, not averages. In your court, some users may experience nothing but problems, while others are routinely satisfied. It may all depend on where they encountered the court, what business they had with it, and what kind of user they are (e.g., an African-American witness in a civil case heard in the main downtown courthouse). And that’s not good.

Breakouts and Hierarchies of Measures

Performance has to be measured at the right level of specificity for the measurement to be useful. What does a high court-wide employee engagement score mean to an employee who works in a unit that consistently records miserable scores on the survey item “My coworkers care about the quality of services and programs we provide”? When employee engagement and commitment are assessed at the level of the local court unit, court managers can learn a great deal about organizational performance.

Creating a measurement hierarchy for each core performance measure ensures that important information at the level of court divisions, units, and programs is not masked by exclusive reliance on court-wide averages (e.g., an average court user satisfaction rate of 71 percent or an average case clearance rate of 91 percent). Measurement hierarchies identify opportunities for teamwork and collaboration at the level where the variation occurs (e.g., a particular survey question at a particular courthouse location, or the clearance rate for a particular case type). They also put the court’s top management in much closer contact with every level of staff by defining the connection, a line of sight, between high-level strategic goals and measures and lower-level departmental or divisional objectives and measures.
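One way to picture a measurement hierarchy is as the same measure carried at successively lower levels (court-wide, by division, and by unit), so that a manager can drill down from an aggregate figure to the place where the variation lives. The short Python sketch below uses invented division, unit, and satisfaction figures to illustrate that structure; it is not a specification of any particular court’s measures.

```python
# Hypothetical hierarchy: a court-wide user-satisfaction measure broken out
# by division and, within each division, by unit. Each unit carries its
# satisfaction rate (percent) and the number of respondents behind it.
hierarchy = {
    "Civil":    {"Downtown": (81, 220), "North Annex": (62, 90)},
    "Criminal": {"Downtown": (74, 310), "North Annex": (58, 120)},
    "Family":   {"Downtown": (77, 150)},
}

def rollup(units):
    """Respondent-weighted average rate for a collection of (rate, n) pairs."""
    total = sum(n for _, n in units)
    return sum(rate * n for rate, n in units) / total

# Drill down: court-wide, then by division, then by unit.
all_units = [u for division in hierarchy.values() for u in division.values()]
print(f"Court-wide satisfaction: {rollup(all_units):.0f}%")
for division, units in hierarchy.items():
    print(f"  {division}: {rollup(units.values()):.0f}%")
    for unit, (rate, n) in units.items():
        print(f"    {unit}: {rate}% (n={n})")
```

Weighting the roll-up by respondent counts keeps the court-wide figure consistent with the unit figures beneath it, so the same numbers serve both the strategic view and the unit-level view where problems actually get fixed.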

The use of breakouts (disaggregation) of performance measures can reveal useful information that would otherwise stay hidden. Common breakouts of time-to-disposition measures, for example, are case type and location. They identify differences in the timeliness of case processing across case types and court locations. Other, less common breakouts of on-time case processing measures may reveal inequities among groups, for example by income level, and indicate whether a court handles cases more swiftly for affluent litigants.
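As a rough illustration of what such a breakout involves, the Python sketch below groups hypothetical case records by case type and courthouse location and reports the share of cases disposed within a made-up 180-day standard for each group. The records, field names, and time standard are assumptions for the example, not actual benchmarks.

```python
from collections import defaultdict

# Hypothetical case records: (case type, courthouse location, days to disposition).
cases = [
    ("Civil", "Downtown", 120), ("Civil", "Downtown", 210),
    ("Civil", "East", 95),      ("Felony", "Downtown", 340),
    ("Felony", "East", 150),    ("Felony", "East", 400),
]

ON_TIME_DAYS = 180  # illustrative time standard, not an official benchmark

# Break out on-time disposition rates by (case type, location).
groups = defaultdict(list)
for case_type, location, days in cases:
    groups[(case_type, location)].append(days <= ON_TIME_DAYS)

for (case_type, location), flags in sorted(groups.items()):
    rate = 100 * sum(flags) / len(flags)
    print(f"{case_type:>6} @ {location:<8}: {rate:.0f}% within {ON_TIME_DAYS} days")
```

Swapping the grouping key for income band, attorney representation, or any other field in the case record yields the equity breakouts described above from exactly the same data.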

Let’s assume that 73% agreement is the average (aggregate) score across all 15 items on a survey of court user satisfaction. A simple but meaningful referent is the breakout of this score for each of the 15 items. For example, assume that the variation around this average of 73% ranges from a low of 43% for Item 5 (“I was able to get my court business done in a reasonable amount of time.”) to a high of 87% for Item 3 (“I felt safe in the courthouse.”). True, even with these referents we still don’t know what’s good or bad, but we do know something about the baseline from which we started measurement (73%) and the range of scores from a particular low to a particular high. We know something very important that we did not know before: it is possible to reach 87% agreement and to fall as low as 43%, and we know that 87% is “better” than both the low of 43% and the average of 73%.

Similar meaningful referents are the breakouts of the average score for each of the background categories identified in the survey (e.g., the type of case that brought the person to court, or how often the person typically is in the courthouse) and for the different courthouse locations in which the survey was conducted. For courts or court systems with multiple locations, comparisons of survey results across locations can be a useful basis for identifying successful improvement strategies. Different locations might be compared, for example, on the percentage of users who felt that they were treated with courtesy and respect. Follow-up queries can then probe the comparisons. Why do one or more locations seem to be more successful than others? What are they doing that the other locations are not? Asking staff in both the most and the least successful locations these simple questions can help identify “evidence-based” best practices.
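To show how such breakouts might be computed, here is a minimal Python sketch that takes a handful of invented survey responses and produces the aggregate agreement rate alongside per-item and per-location breakouts of the same data. The responses, item numbers, and location names are hypothetical; only the form of the output mirrors the 73%, 43%, and 87% example above.

```python
from collections import defaultdict

# Hypothetical survey responses: (courthouse location, item number, agreed?).
responses = [
    ("Downtown", 3, True),  ("Downtown", 3, True),  ("Downtown", 5, False),
    ("Downtown", 5, True),  ("North",    3, True),  ("North",    5, False),
    ("North",    5, False), ("North",    3, False),
]

def pct(flags):
    """Percent of True values in a list of agree/disagree flags."""
    return 100 * sum(flags) / len(flags)

print(f"Aggregate agreement: {pct([agreed for _, _, agreed in responses]):.0f}%")

# Break the same responses out by survey item and by courthouse location.
by_item, by_location = defaultdict(list), defaultdict(list)
for location, item, agreed in responses:
    by_item[item].append(agreed)
    by_location[location].append(agreed)

for item, flags in sorted(by_item.items()):
    print(f"  Item {item}: {pct(flags):.0f}% agreement")
for location, flags in sorted(by_location.items()):
    print(f"  {location}: {pct(flags):.0f}% agreement")
```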

Without deeper metrics in a hierarchy of measures, managers would be unable to identify or manage either poor or exceptional performance at its source. Managing variation in performance at the right level of specificity in the hierarchy of measurement has a very powerful advantage: Each court unit can identify and correct its own problems.


© Copyright CourtMetrics 2007. All rights reserved.
