Performance Benchmarks, Standards and Goals

How is your court or court system performing? How should it perform? Answering these two questions requires different methods: the first, performance measurement; and the second, the establishment of standards, benchmarks or goals to serve as norms or models for others.

A court should strive to answer both questions, but it should not delay answering the first because it is uncertain about the answer to the second. Unfortunately, this is not how things have happened.

Points of Reference Versus Standards

Difficulties arise when certain points of reference required to answer the first question are unnecessarily burdened with the weight of serving as performance benchmarks, standards and goals.

For example, the Oregon Court of Appeals measures the timeliness of its processing of land use cases against the point of reference of 91 days. The Court defines the metric of on-time case processing of land use cases as the percent of cases disposed or otherwise resolved within 91 days. Similarly, the Yuma County Superior Court of Arizona uses 60 days as the point of reference for its juvenile delinquency cases, as in the percent of juvenile delinquency cases disposed within 60 days. The two points of reference, one for a category of appellate cases and the other for a category of trial court cases, are useful for measuring performance regardless of whether 91 days and 60 days are benchmarks, standards or goals. The two courts can monitor, analyze and manage these performance measures whether or not the reference points have been approved or certified by some authority or governing body.
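To make this concrete, here is a minimal sketch in Python of how such an on-time metric can be computed against any point of reference, certified or not. The case records, dates and function name are invented for illustration; a real court would pull filing and disposition dates from its case management system.

```python
from datetime import date

# Hypothetical case records: (filing_date, disposition_date).
# A disposition_date of None would mean the case is still pending.
cases = [
    (date(2008, 1, 7), date(2008, 3, 14)),
    (date(2008, 1, 22), date(2008, 5, 30)),
    (date(2008, 2, 4), date(2008, 4, 1)),
]

def percent_on_time(cases, reference_days):
    """Percent of disposed cases resolved within `reference_days` of filing."""
    disposed = [(filed, closed) for filed, closed in cases if closed is not None]
    if not disposed:
        return 0.0
    on_time = sum(1 for filed, closed in disposed
                  if (closed - filed).days <= reference_days)
    return 100.0 * on_time / len(disposed)

# The same computation serves the 91-day appellate reference
# and the 60-day juvenile delinquency reference alike.
print(f"{percent_on_time(cases, 91):.1f}% within 91 days")
print(f"{percent_on_time(cases, 60):.1f}% within 60 days")
```

Nothing in the computation asks whether 91 or 60 days has been blessed by anyone; the threshold is just a parameter.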

The design and development of court performance measurement systems is often delayed and impeded when points of reference (30 days, 90 days or one year) identified solely for purposes of answering the first question are put to the difficult test of answering the second. Consequently, some court performance measurement initiatives have stalled simply because court leaders felt that these points of reference needed to be certified or established as standards before on-time performance data could be collected and reported. In other words, they felt they needed to answer the question “What should the court’s on-time performance be?” before they could try to answer the question “How is the court performing today?” And they were just not ready to do so.

Definitional Problems

To be sure, some of this difficulty stems from a general resistance to performance measurement and management (see the Made2Measure posts Getting Started with Performance Measurement – Breaking Down Resistance, December 5, 2005, and Eight Reasons Not to Measure Court Performance, April 6, 2006). But a large part of the problem is one of definitions. From my point of view, we’ve been fairly loose in our interchangeable use of many terms that have quite different denotations and connotations, including benchmarks, standards, goals, targets, time frames, and guidelines. (The same can be said about such terms as mission, purpose, vision, fundamental obligations, success factors, perspectives, major performance areas, strategic goals and so forth – but that’s another topic.)

Two years ago, in a Q & A post about the definition of standard and target performances (Made2Measure, January 5, 2006), I noted that the term standard had been used in two quite different ways, causing considerable confusion. It is best defined, I wrote then, as a special target performance recognized and adopted as the norm by an authority like the American Bar Association (ABA), the Conference of Chief Justices (CCJ), the Conference of State Court Administrators (COSCA), or a state administrative office of the courts. The ABA, CCJ and COSCA all published case disposition time standards in 1983–1984. The ABA, for example, adopted case processing time standards such as the disposition of 100% of all felony cases in 12 months. A particular court, I continued then, can have various targets of its own that are lower or higher than the standards adopted by external authorities without necessarily rejecting those standards.
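To put numbers on that distinction, here is a tiny sketch. All of the figures are hypothetical; only the ABA’s 100%-within-12-months standard comes from the text above.

```python
# A hypothetical comparison of one court's measured felony performance
# against the external ABA standard and the court's own internal target.
# None of these numbers are real data.
measured = 82.0          # percent of felonies disposed within 12 months
own_target = 90.0        # the court's own target for the year
aba_standard = 100.0     # the external standard adopted by the ABA

print(f"Measured:            {measured:.0f}%")
print(f"Gap to own target:   {own_target - measured:.0f} points")
print(f"Gap to ABA standard: {aba_standard - measured:.0f} points")
# The court can pursue its own target without thereby
# certifying, or rejecting, the external standard.
```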

Clear enough. Well … not quite.

Twenty years after those standards were published, the National Center for State Courts (NCSC) defined CourTools Measure 3, Time to Disposition, as the percentage of cases disposed or otherwise resolved within “established time frames.” The case processing standards published by the ABA and COSCA (but not by the CCJ) are cited and recommended as starting points for determining on-time guidelines (yet another new term). Unfortunately, the NCSC is less than clear whether these points of reference – in three short paragraphs interchangeably referred to as time standards, time frames, and guidelines – are to be approved, certified or established by an authority like the ABA or COSCA, or whether they are simply points of reference that define a summary metric.
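Whatever these points of reference are called, they enter the computation the same way. Here is a brief sketch, again with invented disposition times and time frames, reporting the CourTools-style percentage against more than one time frame at once:

```python
# Hypothetical disposition times, in days from filing to disposition.
days_to_disposition = [45, 67, 112, 150, 240, 301, 390]

# Hypothetical "established time frames" for one case type, in days.
# Whether these are standards, guidelines, or mere points of reference,
# the arithmetic below is identical.
time_frames = [180, 365]

for frame in time_frames:
    on_time = sum(1 for d in days_to_disposition if d <= frame)
    pct = 100.0 * on_time / len(days_to_disposition)
    print(f"{pct:.1f}% of cases disposed within {frame} days")
```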

I personally believe that the ABA, CCJ, and COSCA (and, yes, my colleagues at the NCSC and I, who have been central to this confusion) have set back the courts community by at least 15 years by suggesting that a particular point of reference is a standard that a competent court should meet. Because few courts met those standards, many court leaders argued incessantly about whether the standards were reasonable. Most courts simply rejected the notion.

What court leaders and managers did not do is answer the first question, “How is your court or court system performing?” They did not monitor, analyze and manage their performance. Instead, they debated whether any one of the countless points of reference for numerous case categories and case types should be certified as standards.

The upshot was that many courts just didn’t bother figuring out how long it takes them to process cases. Some concluded that it’s all just numerology, bean counting – the revenge of the spreadsheet guys!

For the latest posts and archives of Made2Measure click here.

© Copyright CourtMetrics 2008. All rights reserved
