
Showing posts from 2008

Ranking High Schools and Courts on Their Performance

The 2009 U.S. News & World Report second annual rankings of America’s best public high schools came out this week. The rankings were done by School Evaluation Services, a K-12 education data research firm run by Standard & Poor’s, based on an analysis of the performance of 21,069 public high schools in the 2006-2007 school year (see www.usnews.com/highschools). The annual rankings of high schools hold two important lessons for judges and court managers, especially those who bristle at the idea of comparative performance measurement (see “Ten Reasons Not to Measure Court Performance,” Made2Measure, November 19, 2008). The first lesson is that performance matters to citizens. The U.S. News & World Report rankings rest on the key principle that a great high school must produce measurable academic outcomes showing that it successfully educates all of its students across a range – a balanced scorecard – of performance indicators. Little else matters…

Ten Reasons Not To Measure Court Performance

This post is based on a December 9, 2008, presentation to a seminar of Michigan Chief Judges and Court Administrators sponsored by the Michigan Judicial Institute, at the Michigan Hall of Justice Conference Center in Lansing, Michigan. It is an updated and expanded version of the Made2Measure post Eight Reasons Not to Measure Court Performance, April 5, 2006. It is not sufficient simply to proclaim the benefits of court performance measurement – accountability, transparency, focus, attention, understanding, control, predictability, influence, and strategy development – and expect acceptance and effective implementation. Performance measurement, like any tool, has shortcomings and introduces disruptions of the status quo that should not be dismissed or ignored. These shortcomings and disruptions can be minimized and even eliminated, however, when they are identified, clearly understood, thoroughly and candidly explored, and addressed in specific terms. Unfortunately, they are o…

Micromanagement Disengages Employees

Micromanage, v.t. – to manage or control with excessive attention to minor details. The October 21 Made2Measure post (Employee Engagement: Managing the Millennial Generation in the Workforce) explored how the employee engagement survey developed by the National Center for State Courts and CourtMetrics, for both trial courts (see CourTools Measure 9) and for appellate courts (see Measure 7 at http://docs.google.com/Doc?id=ddc3k4gt_14cpvjn2c2), can help court managers engage “millennials” – a new crop of young people in the work force who were born between 1980 and 2001. This post explores how the survey may help to reverse the negative effects of micromanagement. The survey uses a self-administered questionnaire to assess the engagement of the court’s workforce and the quality of the relationships among its employees, especially those between managers and subordinates. It asks respondents to rate their agreement with each of 20 statements on a five-point scale from “strongly…
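To make the mechanics concrete, here is a minimal sketch of how responses to such a questionnaire might be tallied. It is illustrative only – the 1-to-5 coding and the “favorable” cutoff are assumptions chosen for the example, not the official CourTools or CourtMetrics scoring rules.

```python
# Illustrative sketch only -- not the official CourTools / NCSC scoring method.
# Assumes each response is a rating from 1 ("strongly disagree") to 5 ("strongly agree")
# for each statement in the survey.
from statistics import mean

def summarize_engagement(responses: list[list[int]]) -> dict:
    """Summarize survey results: responses[r][i] is respondent r's rating of statement i."""
    n_items = len(responses[0])
    summary = {}
    for i in range(n_items):
        ratings = [r[i] for r in responses]
        summary[f"statement_{i + 1}"] = {
            "mean_rating": round(mean(ratings), 2),
            # "favorable" is a hypothetical cutoff: agree (4) or strongly agree (5)
            "pct_favorable": round(100 * sum(x >= 4 for x in ratings) / len(ratings), 1),
        }
    return summary

# Example with three hypothetical respondents rating three statements
print(summarize_engagement([[5, 4, 2], [4, 4, 3], [3, 5, 1]]))
```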

Employee Engagement: Managing the Millennial Generation in the Workforce

Effective performance measures drive success. They are clear, focused, and actionable. They serve both as incentives and as practical tools for improvement. Not uncommonly, the act of measurement itself will trigger positive actions. The 20-item court Employee Engagement survey developed by the National Center for State Courts for both trial courts (see Measure 9 of the CourTools) and for appellate courts (see Measure 7 at http://docs.google.com/Doc?id=ddc3k4gt_14cpvjn2c2) is a measure that fits this bill. Employee engagement is a constant challenge for court managers. This challenge is even more daunting for “millennials” – a new crop of young people in the work force who were born between 1980 and 2001. Court managers will need them for succession planning as retiring baby boomers leave their positions. The trouble is that the general perception of the millennial generation seems to be that it has great – and sometimes unreasonable – expectations. These young workers tend to be more op…

Montana Survey of Appellate Bar and Trial Bench

The Montana Supreme Court last month became the first high court and only the second state appellate court (see the Oregon Court of Appeals’ 2007 Bench and Bar Survey) to survey members of the state’s appellate bar and trial bench about how well they believe the state high court is performing. In the spirit of transparency and accountability, it made a summary of the survey results public almost immediately. As explained by Montana Chief Justice Karla M. Gray in a cover letter posted on the Supreme Court’s website yesterday (see Montana Bar and Bench Survey Results), the Court asked nearly 1,000 appellate lawyers, as well as all of Montana’s District Court Judges and the University of Montana Law School teaching faculty, for their thoughts on the Court’s performance. Respondents rated the Court’s performance in areas central to its primary obligations, including whether the Court’s decisions are based on facts and applicable law, whether the Court’s published opinions…

This Just In: Performance Measurement Works

Do management techniques like monitoring performance and setting targets really work? Most managers are convinced, and those who hire them would like to think so. Where’s the evidence? A first-of-its-kind study by researchers from Stanford, the London School of Economics, and the consulting firm McKinsey & Company suggests the answer is yes (see Scott Thurm’s September 8, 2008, Wall Street Journal column “Theory & Practice: The “Same Ol’” Is Actually Good Enough for Many”). This is good news, especially for performance measurement. Like management in general, performance measurement needs more than anecdote to assure its widespread adoption, especially in the courts community. Unlike management in general, performance measurement and management techniques are not broadly accepted and are still widely viewed as innovative and experimental. The study, covering more than 4,600 midsize factories in 12 countries, is based on responses to surveys of plant managers and examination…

Monitoring and Eliminating “Never Events” in Court Administration

The Centers for Medicare and Medicaid Services recently made “never events” – so called because they should never happen – a prominent part of its performance measurement and management policy for U.S. hospitals. The concept of “never events” is yet another example of the nationwide movement to reform health care by performance monitoring, analysis and management (see Pursuing Perfection – A Lesson from Health Care, Made2Measure, November 1, 2006). Starting in October, Medicare will stop reimbursing hospitals for treating device-related infections, urinary tract infections, and surgical infections after orthopedic and heart surgery. Why? Because they are “never events” that should never or rarely ever happen. Because there is proof that nearly all of these hospital infections that sicken and kill millions of patients a year are avoidable when hospital nurses and doctors clean their hands, decontaminate medical devices and instruments, and take other relatively simple preventative…

The Real Start-Up Costs of Performance Measurement

Performance measurement does not cause inefficiencies and poor practices. It just highlights them for improvement. The cost of fixing them should not be counted against the start-up costs of performance measurement and management. This point often gets lost as courts and court systems embark on initiatives to build performance measurement and management systems. If nothing else, it needs to be reinforced to blunt a favorite argument against performance measurement initiatives – that it all takes too much time, effort and money (see Eight Reasons Not to Measure Court Performance, Made2Measure, April 5, 2006). Some things just need to be fixed anyway. For example, as courts consider the required elements of measures like case clearance rate, on-time case processing, and age of pending caseload, they need to define, in no uncertain terms, such things as: (1) what to count and what not to count; (2) how to classify what they count; (3) when to start and stop the clock; and (4) when…
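As a rough illustration of how those definitional choices turn into the parameters of an actual calculation, here is a sketch of a clearance-rate computation. The case records, case types, and clock rules are hypothetical; each court must settle its own counting rules before computing anything.

```python
# Minimal sketch with hypothetical data and counting rules; not any court's actual definitions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Case:
    case_type: str          # how the case is classified
    filed: date             # when the clock starts
    disposed: date | None   # when the clock stops (None = still pending)

def clearance_rate(cases: list[Case], year: int, counted_types: set[str]) -> float:
    """Outgoing cases divided by incoming cases for one year, restricted to counted case types."""
    incoming = sum(1 for c in cases if c.case_type in counted_types and c.filed.year == year)
    outgoing = sum(1 for c in cases if c.case_type in counted_types
                   and c.disposed is not None and c.disposed.year == year)
    return outgoing / incoming if incoming else float("nan")

cases = [
    Case("civil", date(2008, 2, 1), date(2008, 6, 30)),
    Case("civil", date(2008, 3, 15), None),
    Case("traffic", date(2008, 4, 1), date(2008, 4, 2)),  # excluded if traffic isn't counted
]
print(clearance_rate(cases, 2008, counted_types={"civil"}))  # 0.5
```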

The End of Science As We Know It

Add this big idea to the concepts that underlie modern performance measurement and management: we are seeing the end of traditional research and the scientific method – hypothesis testing and experimentation – as we know it. And performance measurement is there at the right time and place. From Kilobytes to Petabytes: Today we’re able to capture, store, and make sense of massive amounts of data. We’ve gone from kilobytes to megabytes to terabytes, and on to the Petabyte Age. A petabyte is a measure of memory or storage capacity equal to 2 to the 50th power bytes, or the equivalent of 20 million four-drawer filing cabinets full of text. Infinite storage, or almost no storage necessary. Unlimited processing capacity. Early-warning sensors and measures everywhere sending automatic alerts from intelligent agents. No structured databases necessary. Just petabytes of information flowing through pattern-matching and trend-watching algorithms. It seems we really don’t need hypotheses and the…

The Declaration of Independence Still Inspires

Every Fourth of July – yesterday was no exception – I print out a copy of the Declaration of Independence. I read and study it. Without fail, I find some nuance and current relevance that I missed before. I always get inspired about my work in the courts and, of course, my country. (I’m an immigrant, born to German parents and educated in this country. I became a proud naturalized citizen of these United States, along with my beaming parents, in a ceremony in Elizabeth, New Jersey, the memory of which still gives me goose bumps.) The famous words of the second paragraph of the Declaration of Independence – an action of the Second Continental Congress, July 4, 1776, signed by people whose names are on the street signs of Williamsburg, Virginia (where I live) and of every city and town in the United States – blow me away every time I reread them: WE hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the Pursuit of Happiness…

Smart Meters

I recently was invited to participate in a new pilot program offered by my electricity provider, Virginia Dominion Power. It’s part of a national trend to help consumers control their electricity use and, at the same time, help utilities cut their operating costs. Virginia Dominion Power, following the lead of utilities in California, Texas and other states, is rolling out the program to install “smart” electric meters in homes in the belief that they will help cut electricity consumption and reduce the need for new power plants. The smart meters will allow consumers to see, at a glance, how much electricity they are consuming and at what cost. The Critical Peak Pricing Pilot Program is based on two approaches increasingly used together in the private sector. I predict that these two approaches will soon become standard “best practices” for most monitoring, analysis and management of all kinds of performance, including that of courts. (1) Bring everyone – customers and employees alike – not…

Q & A: How Many Measures Should Be Used?

Q: How many performance measures should my court be considering? My colleagues have reviewed the ten performance measures of the CourTools developed by the National Center for State Courts, as well as other measures – like treatment court recidivism – that courts are considering. They all seem compelling, but we are concerned that the whole enterprise will just be too much for us. What advice do you have? A: It is better to do more with less than less with more. First and foremost, how many performance measures your court should develop depends on what matters to you, what you consider important – the court’s key success factors. If these include, for example, access and fairness, citizen satisfaction, expedition and timeliness, community welfare, and employee engagement, you’re likely to need more measures than if you’re interested only in efficient case processing. Second, there’s no sense in developing measures that will not be used. Generally speaking, it is much better to do more…

Measuring and Managing Encounters of Court Users and Court Employees

Along with a number of client-partners, I’ve been rethinking the way courts gauge the satisfaction of court users and the strength and engagement of their work force. And, yes, how we can improve the court user–court employee encounter. Quite a few court leaders and managers still regard these lines of performance measurement and management as “touchy-feely,” “soft” or “subjective,” despite the overwhelming evidence to the contrary from some hard-nosed researchers and practitioners. But that’s a subject for another time. Human Sigma: In their book, Human Sigma: Managing the Employee – Customer Encounter (Gallup Press, 2007) – a much expanded version of their groundbreaking article “Manage Your Human Sigma” that appeared in the July/August 2006 issue of Harvard Business Review – Gallup researchers John H. Fleming and Jim Asplund rewrite the rules for how we should measure and improve employee–customer encounters. Though based on extensive research on employees and customers of private…

Super Bowl Indicator of Performance

Performance measures are often proxy variables. That is, they are not necessarily of any great interest in themselves, but they can tell us a lot about a particular thing or outcome. To varying degrees, performance measures are such proxies – removed and highly simplified versions of the outcome of interest. We use proxy measures to make it possible to measure things easily, routinely and at a reasonable cost. The value of a proxy measure is that it is expected to correlate with the desired outcome – not perfectly, but well enough. In other words, while some performance indicators may or may not jibe with our common understanding or mental images of the concept or construct under consideration, they indicate its meaning. Cholesterol level is not health. Tree ring widths and ice core layering are not temperature records. Case clearance is not exactly court productivity or efficiency. Recidivism is not community well-being. And – here it goes – a win by the Patriots over the Giants…
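One practical way to judge whether a candidate proxy is “good enough” is simply to check how strongly it has tracked the outcome of interest in past data. The sketch below uses made-up monthly figures – a hypothetical clearance-rate proxy against backlog growth – purely for illustration.

```python
# Illustrative only: made-up monthly figures for a hypothetical proxy (case clearance rate)
# and the outcome we actually care about (change in the pending backlog). A strong negative
# correlation here would suggest the proxy is "good enough" to stand in for the outcome.
from statistics import correlation  # Python 3.10+

clearance_rate = [0.92, 0.97, 1.01, 0.88, 1.05, 0.95]  # outgoing / incoming, by month
backlog_growth = [40, 12, -8, 55, -20, 18]              # change in pending cases, by month

r = correlation(clearance_rate, backlog_growth)
print(f"correlation between proxy and outcome: {r:.2f}")  # strongly negative in this example
```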

Performance Benchmarks, Standards and Goals

How is your court or court system performing? How should it perform? Answering these two questions requires different methods: the first, performance measurement; the second, the establishment of standards, benchmarks or goals to serve as norms or models for others. A court should strive to answer both questions, but it should not delay answering the first because it is uncertain about the answer to the second. Unfortunately, this is not how things have happened. Points of Reference Versus Standards: Difficulties arise when certain points of reference required to answer the first question are unnecessarily burdened with the weight of serving as performance benchmarks, standards and goals. For example, the Oregon Court of Appeals measures the timeliness of its processing of land use cases against a point of reference of 91 days. The Court defines the metric of on-time case processing of land use cases as the percent of cases disposed or otherwise resolved within 91 days. Similar…
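A point of reference like the 91 days translates directly into a simple calculation. The sketch below uses hypothetical filing and resolution dates, not the Oregon Court of Appeals’ actual data or counting rules, with the point of reference as a configurable parameter.

```python
# Hypothetical dates and a configurable point of reference; illustrative only.
from datetime import date

def pct_on_time(cases: list[tuple[date, date]], reference_days: int = 91) -> float:
    """Percent of (filed, resolved) case pairs resolved within reference_days."""
    on_time = sum((resolved - filed).days <= reference_days for filed, resolved in cases)
    return 100 * on_time / len(cases)

land_use_cases = [
    (date(2008, 1, 10), date(2008, 3, 20)),  # 70 days  -> on time
    (date(2008, 2, 1),  date(2008, 6, 1)),   # 121 days -> late
    (date(2008, 3, 5),  date(2008, 5, 30)),  # 86 days  -> on time
]
print(f"{pct_on_time(land_use_cases):.0f}% resolved within 91 days")  # 67%
```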