Thursday, May 12, 2016

Incentive: The Missing Ingredient in Performance Measurement and Management (PMM) in Courts

Woody Allen is said to have once quipped: “I was in a warm bed and, all of a sudden, I’m part of a plan.” What will it take for courts and other justice institutions to get out of their warm beds and embrace performance measurement and management (PMM)? What are the incentives?

Business Incentives Do Not (Yet) Exist for Courts

For private sector organizations, PMM is an imperative, an essential business evaluation tool and a matter of survival. In the long term, if revenues are insufficient to cover expenses, a business will fail. In the short term, if cash flow does not cover employee salaries, it will close its doors even sooner. Beyond net profit and cash flow, critical measures for businesses include return on investment, market share, customer satisfaction and loyalty, and employee engagement. For businesses, moving the needle on these measures in the right direction provides both an incentive and a tool for improvement. Success in one area can prompt a focus on doing better in others.

For courts and other justice institutions, such incentives do not exist. While some courts have been closed or placed into receivership (e.g., the Detroit Recorder’s Court in the 1980s), such occurrences are rare exceptions that prove the rule: survival is not an everyday worry for courts.

Parallels in Health Care

In previous posts I have explored innovative financial incentives for PMM in courts (e.g., gainsharing, a type of profit-sharing system used by local governments and at least one court). And, like many of my colleagues, I have looked to health care for ideas because hospitals and doctors have much in common with courts and judges (e.g., “never events” in court administration).

In a recent op-ed in the Wall Street Journal, Ezekiel J. Emanuel, chairman of the Department of Medical Ethics and Health Policy at the University of Pennsylvania, describes an innovative pilot program, Independence at Home, that merits scrutiny by court leaders and managers. The program is part of a movement in health care to reward providers based on the quality, not the quantity, of care.

Dr. Emanuel begins by describing a wheelchair-bound 87-year-old patient in the program, Luberta Whitfield, who suffered a stroke a few years ago that left her right side paralyzed. She has emphysema and diabetes, is dependent on oxygen, and recently tore the rotator cuff in her good arm. The program gives the sickest Medicare patients like Ms. Whitfield primary care right in their homes. Since it launched in 2012, it has delivered high-quality care at lower costs than traditional Medicare. Thanks to the program, Ms. Whitfield still lives in her own home. Here’s how the program works.

Patients who qualify for Independence at Home must have been hospitalized in the past year, suffer from two or more chronic conditions, require help with daily tasks, and have needed services such as a stay in a skilled nursing facility within the last year. These are the types of patients that are key to saving money; they make up 6% of Medicare patients but account for nearly 30% of Medicare’s costs. According to an analysis by the Centers for Medicare and Medicaid Services (CMS) cited by Dr. Emanuel, these patients are so sick that 23% die each year, and each accounts for $45,000 in annual Medicare spending. He contends that the program could save Medicare tens of billions of dollars over ten years.

Once in the program, patients receive coordinated primary care focused on keeping them healthy, in their homes, and out of the hospital. Emanuel characterizes the care they receive as “concierge care for the sickest – not the richest.” Now here’s the intriguing part that may interest court administrators.

Physician groups that join the program and bid to provide Independence at Home services have financial incentives, in the form of bonuses, to keep patients out of the hospital, which saves money, while still meeting Medicare’s quality standards. Bonuses are paid only after the total costs of their patients’ care are reduced for two consecutive years. If the groups fail to achieve these reductions, they cannot share in the savings.

In a June 2015 press release, the CMS announced good results for the first performance year of the Independence at Home demonstration, including both higher-quality care and lower Medicare expenditures. The CMS analysis found that the 17 physician groups in the program saved an average of $3,070 per beneficiary in the care of the 8,400 Medicare beneficiaries in the program's first year, for a total of more than $25 million in savings, while delivering high-quality health care at home in accordance with six quality measures (e.g., fewer hospital readmissions within 30 days). CMS announced that it would award incentive payments of $11.7 million to nine of the participating physician practice groups.
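The arithmetic behind these figures is easy to check. The sketch below is purely illustrative and assumes the $3,070 average saving applies per beneficiary across all 8,400 participants (a reading of the CMS figures, not a statement from the release itself):

```python
# Quick check of the CMS first-year figures cited above.
# Assumption (for illustration only): the $3,070 average saving
# is per beneficiary, applied across all 8,400 participants.
avg_saving_per_beneficiary = 3_070  # dollars
beneficiaries = 8_400

total_savings = avg_saving_per_beneficiary * beneficiaries
print(f"Estimated total savings: ${total_savings:,}")
# Consistent with the announced "more than $25 million" in savings.
```

The product comes to roughly $25.8 million, which squares with the total reported in the press release.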

Can This Work for Courts?

Critical features of the Independence at Home pilot project are its focus on the quality, not the quantity, of care and its dependence on measurable outcomes supported by rigorous PMM. As I noted in my previous posts on gainsharing, notwithstanding questions of legality and opposition on philosophical or political grounds (e.g., that court excellence is mandated by law and, therefore, should not be rewarded with financial incentives), the success of this CMS demonstration project bears close watching as a model for courts. Incentive payments could be triggered, for example, by sustained reductions in cost per case, a relatively underused court performance measure that is part of both the CourTools and the Global Measures of Court Performance, achieved without loss of quality as judged against stringent standards and criteria for various case types.

As my colleagues Victor (“Gene”) Flango and Tom Clarke suggest in their book, Reimagining Courts: A Design for the Twenty-First Century, courts need to be reimagined and transformed, and they should innovate continuously. The gap between government’s information technology, including that of courts, and the private sector’s seems to be widening, not shrinking. People expect to access government services, and to assess their quality, as easily as they look up a restaurant on Yelp or Google. Incentives for good performance outcomes, the modus operandi of the private sector, need to find their way into court administration just as they are slowly making their way into health care.

© Copyright CourtMetrics 2016. All rights reserved.


Friday, April 29, 2016

Advancing Performance Measurement and Management (PMM) in the Justice Sector

Who else is doing PMM where? How is it working out for them? Answering these two questions will advance performance measurement and management initiatives more than any effort to date.

For many years, I’ve been in the business of convincing courts and other justice institutions to develop the political will and capacity (OK, mostly just trying) to measure and manage their performance in an effective, accountable, and transparent manner. I used to think that widespread buy-in by the justice sector surely would follow the development of well-conceived models, beginning with the Trial Court Performance Standards in the late 1980s and early 1990s, through the CourTools ten years ago, to the Global Measures of Court Performance in the last few years. But buy-in for PMM certainly has not been overwhelming. Instead, at best, it has been a slow slog for advocates of PMM.

Who Else is Doing PMM Where?

In a recent article in the William & Mary Policy Review (Volume 7, Number 1), I suggest a way of speeding up this slog. (The article should be available on the journal’s website and on HeinOnline shortly.) The PMM taking place today in justice systems throughout the world, limited though it may be, needs to be documented, made visible, and put to use, I wrote. That is, knowledge production should be accompanied by knowledge transfer. Unfortunately, this is not happening at sufficient speed or scale, largely because the institutions and countries actually engaged in PMM at the local level understandably are not much in the business of disseminating and promoting their PMM beyond their own jurisdictions and borders. Unlike research firms, universities, justice-related organizations, and donors, they lack incentives to promote their work.

Over the last several months, for example, I learned only through personal contacts with the principals that the Victoria courts in Australia had adopted the Global Measures of Court Performance and that the High Court in Lahore, Pakistan, is incorporating seven of the eleven Global Measures into its new case management system. I’m trying to follow up on the latter as I write this post. This kind of word-of-mouth, hit-or-miss, anecdotal transfer of knowledge will not get the job of PMM knowledge transfer done.

To address the problem, my colleagues and I at the Institute for the Theory and Practice of International Relations at the College of William & Mary last year launched the Justice Measurement Visibility (JMV) Project, which aims to identify successful PMM throughout the world focused on the Global Measures of Court Performance (part of the International Framework for Court Excellence developed by the International Consortium for Court Excellence). For those interested in adopting or adapting the Global Measures, we hope the project will answer the inevitable question for which today, unfortunately, we have only an unsatisfactory answer: “Who else is doing this today?”

How Is It Working Out for Them?

Working with the Courts and Tribunal Academy of Victoria University in Australia in February and March of this year, I ran into unexpected headwinds of resistance to PMM, mostly from judges. In several venues, I presented the idea of PMM in a way that I was convinced at the time would receive enthusiastic support. I tried, with moderate success at best, to address confounding questions and counter a few verbal bullets point by point. For example, I responded to the criticism that the very idea of public accountability and transparency for court performance is antithetical to the principles of judicial independence and separation of powers by arguing that public accountability driven by a system of performance measurement and management can and will strengthen, not weaken, judicial independence and the institutional integrity of courts.

Reflecting on the experience with these presentations, I had an epiphany of sorts, the sudden and striking realization that advocates of PMM, including me, were not practicing what we preach, namely that we should measure results that matter and count what counts. Shouldn’t we be looking at the results of PMM itself in the same way? Yes, I realized: we were failing to address not only the question, “Who is doing PMM where?” but also the more important follow-up question, “How is it working out for them?”

How could I have missed this? Would not resistance to PMM dissolve, and the naysayers be silenced, if we could answer these two questions clearly and succinctly? If we could say, for example, that two-thirds of the justice institutions and systems using PMM throughout the world have improved their performance, and, moreover, that three of them have much in common with the questioner’s own institution?

An ambitious new project, to be joined with the JMV Project at William & Mary, will address this second question. The project, which does not yet have a name, is in the proof-of-concept stage. I will describe its progress in future posts here. Comments are welcome.

© Copyright CourtMetrics 2016. All rights reserved.

Monday, October 12, 2015

The U.N.'s Sustainable Development Goals (SDGs) Are Not SMART

Several weeks ago, on September 25, the U.N. General Assembly adopted the “Sustainable Development Goals” (SDGs), 17 goals and 169 associated targets, thereby setting a new global agenda for the next fifteen years. On the one hand, the SDGs agenda promises to engage the whole world community, not only governments but also multinational companies, philanthropic foundations, civil society, non-governmental organizations, scientists, scholars, and students around the world. The new agenda was hailed by U.N. Secretary General Ban Ki-moon as “a defining moment in human history.” On the other hand, critics claim that the SDGs are unmeasurable and unmanageable.

The more limited “Millennium Development Goals” (MDGs), which will expire at the end of this year, applied largely to poor countries and involved rich ones mostly as donors. The SDGs are broader and go much further than the MDGs; they are meant to be universally applicable to developing and developed countries alike. The consultation process for the SDGs also has been far more inclusive and credible than the one for the MDGs.

By most accounts, the predecessor MDGs’ goal-based development was successful precisely because the eight goals, separately and as a whole, were SMART – that is, specific, measurable, attainable, relevant, and time-based. They were meant to be understood by the average person, not only by high theorists. The U.N. and the world’s leaders made the transition from the MDGs to the SDGs in the hope that the latter would inspire action with a set of goals that would be “action-oriented, concise and easy to communicate, limited in number,” as the U.N. General Assembly specifically stated in its 2012 outcome document, “The Future We Want.” This was not to be. An assessment of the SDGs’ 17 goals and 169 targets as a whole might easily conclude the opposite of the U.N.’s aspirations. The package of far too many goals is not actionable, and it is imprecise and difficult, if not impossible, to understand.

Leading up to the adoption of the SDGs, the prolonged debate about the goals the world would set for 2030 was heated, fraught with seemingly endless consultation, haggling, and horse-trading. Nonetheless, the sprawling package of SDGs, including 17 overarching goals and a mind-boggling 169 associated targets, was adopted virtually unchanged from the proposed package. Earlier this year, The Economist opined that the SDGs are a “mess” and could be “worse than useless,” a view shared by many other observers. For example, an analysis by the International Council for Science (ICSU) and the International Social Science Council (ISSC) concluded that of the 169 targets, only a third could be considered well developed and conceived, more than half require more specificity, and 17 percent require “significant work” to be of any use. The Economist saw the SDGs as ambitious on a Biblical scale, and not in a good way: Moses brought down just Ten Commandments from Mt. Sinai. If only the SDGs were that concise.

The SDGs as currently conceived are not SMART. They need to become so, and quickly, through a rigorous process of performance measurement and management that is as inclusive of the member countries as the consultations leading up to the SDGs' adoption. This will not be easy, because it is just such an inclusive process that produced the sprawling SDGs.

A technical process spearheaded by the United Nations Statistical Commission is under way to define, by March 2016, an indicator framework for the measurement of the SDGs. Echoing the U.N. General Assembly’s aspirations for the SDGs, the Commission stated “that, given the possibility of measurement and capacity constraints of Member States, the global indicator framework should only contain a limited number of indicators [and] strike a balance between reducing the number of indicators and policy relevance.” The Commission initiated the formation of the Inter-agency and Expert Group on SDG Indicators (IAEG-SDGs), consisting of national statistical offices and, as observers, regional and international organizations and agencies, which will develop the indicator framework under the leadership of the national statistical offices in an open and transparent manner.

As part of its technical process, the Commission conducted an initial assessment of 304 proposed provisional indicators based on the views of 70 experts from national statistical offices and systems. The indicators were assessed for feasibility, suitability, and relevance, and given a rating from A to C on each of these three criteria. An indicator rated “AAA” was found by a majority of national statistical offices (60 percent or more) to be easily feasible, suitable, and very relevant to measuring the target for which it was proposed. Similarly, an indicator rated “CCC” was found by a significant number of national statistical offices (at least 40 percent) to be not feasible, not suitable, and not relevant to measuring its target.

Out of the 304 proposed provisional indicators, only 50 (16 percent) were evaluated as feasible, suitable, and very relevant (a rating of AAA); eighty-six (28 percent) received a rating of BBB, meaning those indicators are considered feasible only with strong effort, in need of further discussion, and somewhat relevant. SDG 16 (“Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels”), for example, has 21 proposed provisional indicators for its 12 associated targets. It fares in the same general range as the SDGs as a whole, with only two indicators (10 percent) rated AAA and ten (48 percent) rated BBB or better but not AAA (e.g., BBA or BAA).
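The percentages above follow directly from the reported counts. A small sketch, purely illustrative and using only the figures cited in this post, makes the breakdown explicit:

```python
# Ratings breakdown of the 304 proposed provisional SDG indicators,
# using the counts reported above.
total = 304
aaa = 50  # feasible, suitable, and very relevant
bbb = 86  # feasible only with strong effort, needs discussion, somewhat relevant

print(f"AAA: {aaa / total:.0%}")  # 16%
print(f"BBB: {bbb / total:.0%}")  # 28%

# SDG 16: 21 proposed indicators for its 12 associated targets.
sdg16_indicators = 21
sdg16_aaa = 2
sdg16_bbb_or_better = 10

print(f"SDG 16 rated AAA: {sdg16_aaa / sdg16_indicators:.0%}")  # 10%
print(f"SDG 16 rated BBB or better: {sdg16_bbb_or_better / sdg16_indicators:.0%}")  # 48%
```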

Was the adoption of the current SDGs made possible only because it gave everyone what they wanted, no matter how unmanageable and unmeasurable? Has the hard work of making the SDGs action-oriented, concise and easy to communicate, and limited in number merely been postponed to the measurement phase? Will the inclusiveness of the technical process laid out by the United Nations Statistical Commission doom the indicator framework to the same “messy” results as the current SDGs? As we move ahead with developing measures and indicators for the SDGs, it is probably prudent to remind ourselves that performance measurement is not merely a technical diagnostic process but also an instrument of power and control available to different actors, with varying degrees of moral hazard, conflicts of interest, asymmetric power relationships, and perverse incentives.

Whether the indicator framework of the SDGs will engage the global community and generate enthusiasm, knowledge production, and positive social outcomes or, alternatively, degenerate into bureaucratic infighting over special interests remains to be seen over the next few years.

© Copyright CourtMetrics 2015. All rights reserved.


Monday, August 10, 2015

The Economist's Spotlight on the Problem of Pretrial Detention in Nigeria

Around the world, the misuse of pretrial detention (the period during which defendants are incarcerated between arrest and trial) is massive. In Nigeria, Africa’s most populous country, the overuse of pretrial detention, most of it arbitrary and excessive, has reached “crushing proportions.” Of the 1,000 inmates in Nigeria’s Kirikiri Maximum Security Prison, 639 have not been convicted and are awaiting trial. Kayode Yukubu is among them. He was arrested in 2003. After twelve years as Kirikiri’s longest-serving inmate, no trial date has yet been set for him. He is among the approximately 70 percent of Nigeria’s 56,785 pretrial detainees who have not been sentenced, many of whom have already spent far longer behind bars than the maximum sentence for their alleged crimes. (Pretrial detention is intended to ensure that an accused person appears in court or to prevent danger to others, not to punish or rehabilitate.)

The Economist spotlighted Nigeria’s pretrial detention with these figures last week (“Justice forgotten: The shocking number of pre-trial prisoners,” August 1, 2015, 45). As enormous as the problem of pretrial detention is around the globe, much of it goes unnoticed. But as the Economist article suggests, this may be changing, because what gets measured gets attention, a maxim that has attained the status of received wisdom among many, perhaps most, of the international community.

While a spotlight on an enormous problem that is not uniquely Nigerian* does not guarantee a solution, it’s a good start. A precise factual profile of the problem’s nature and scope is necessary to motivate and mobilize governments and the international community to do something about it. Such a profile of pretrial detention, however, should include not only figures such as the percentage of prisoners in pretrial detention, which are useful for general diagnosis but not for active performance management, but also what the Open Society Foundations’ president Christopher Stone, an international expert on criminal justice reform, referred to in a 2012 chapter as active indicators, such as the length of pretrial detention.**

Duration of pretrial custody, one of the eleven performance measures of the International Framework for Court Excellence, is an actionable performance measure with the potential for outsized effect. It attracts the attention not only of justice system insiders (judges, prosecutors and defense attorneys, and law enforcement and corrections officials) but also of many groups and individuals in the private and non-profit sectors outside the formal justice system who care about reducing crime, ensuring public safety, fighting poverty, reducing costs, making wise use of public resources, combating disease, promoting human rights, and making our legal systems more just.

Because duration of pretrial custody is clear, focused, and actionable, and because it is an easily understood indicator of an entrenched social problem, it is a potential rallying point for reform and improvement efforts that can bring government, citizens, groups, and organizations together in a solution economy. Justice institutions, social enterprises, and businesses can collaborate to reduce the average duration of pretrial custody, thereby creating efficiencies in court case processing that reduce the prison population and address a host of social problems.

Governments and their justice systems -- courts, prosecution and legal defense departments, ministries of justice, and law enforcement and corrections agencies -- could reap public trust and confidence simply by putting detailed data on pretrial custody into the public domain, making it available for real-time feedback, and inviting social enterprises and businesses to join them in problem solving. They could, for example, collaborate with civil society organizations in identifying and examining divergence between the mean and median number of days in pretrial custody among all criminal defendants. When the mean and median diverge, it may be because relatively small groups of defendants (such as the poor and marginalized) are treated differently than the rest, inflating or deflating the mean but not the median. The characteristics, treatment, and conditions (such as overcrowded and disease-ridden jails) of individuals with especially long pretrial stays (as well as especially short stays, which may occur among the rich, for example) could be examined for potential irregularities. So could these outliers’ experience with case processing and pretrial events, including factors related to the issuance of warrants, initial appearance and arraignment, charging practices, plea agreements, bail decision making, pretrial services, custody conditions, and alternative sentencing.
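The mean-versus-median check described above is simple to operationalize. The sketch below uses invented custody durations (all data here are hypothetical, for illustration only) to show how a small group of long-stay outliers pulls the mean away from the median:

```python
# Mean-vs-median divergence check for pretrial custody durations.
# All durations below are hypothetical, invented for illustration.
from statistics import mean, median

durations_in_days = [12, 15, 18, 20, 22, 25, 30, 35, 40, 900]  # one extreme outlier

avg = mean(durations_in_days)
mid = median(durations_in_days)
print(f"mean = {avg:.1f} days, median = {mid:.1f} days")

# A mean far above the median signals that a small group of
# defendants with unusually long stays deserves closer examination.
if avg > 2 * mid:
    print("Mean far exceeds median: examine long-stay outliers.")
```

With the outlier included, the mean (111.7 days) dwarfs the median (23.5 days), which is exactly the kind of divergence the paragraph above suggests investigating.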

What gets measured gets attention. And what gets attention by easy-to-understand, focused, and actionable performance measures might actually get done.


* Worldwide, close to a third of prisoners are pretrial detainees, and in some parts of the world, such as Bangladesh, India, Liberia, and Paraguay in addition to Nigeria, the majority are. For an excellent overview, see the Open Society Justice Initiative’s 2014 publication Presumption of Guilt: The Global Overuse of Pretrial Detention.

** Stone, Christopher (2012). Problems of Power in the Design of Indicators of Safety and Justice in the Global South. In Kevin E. Davis, Angelina Fisher, Benedict Kingsbury, and Sally Engle Merry (eds.), Governance by Indicators: Global Power Through Quantification and Rankings. Oxford University Press.

 © Copyright CourtMetrics 2015. All rights reserved.


Wednesday, July 08, 2015

Experience Counts for the Advancement of Performance Measurement and Management

Made2Measure returns today to regular postings after a long hiatus (September 9, 2013, was the date of the last post) during which it was suspended to avoid potential conflicts of interest while its principal, Ingo Keilitz, was seconded to the World Bank and the National Center for State Courts.

Performance measurement and management (PMM) is the (self-)discipline of monitoring, analyzing, and using organizational performance data on a regular and continuous basis (in real or near-real time) for the purpose of improving efficiency and effectiveness, transparency and accountability, and public trust and confidence in government institutions. PMM is both a way of understanding the justice sector and a promising approach to solving serious global problems such as the high rate and length of incarceration, especially pretrial detention.

Relatively Small Space in the Toolbox of International Development

Compared with two other disciplines and technologies of knowledge production and governance – program impact evaluations conducted by international donors such as the World Bank, and global indicators such as the World Justice Project Rule of Law Index™ – PMM occupies a relatively small space in the toolbox of international development. This is despite increasing evidence that countries and justice institutions that measure and manage their own performance are likely to enjoy more success and to gain more legitimacy, trust, and confidence in the eyes of those they serve. Why?

Experience Counts

Over the last fifteen years or so, my colleagues and I at the International Consortium for Court Excellence, the National Center for State Courts, and, of late, the College of William and Mary’s Institute for the Theory and Practice of International Relations (ITPIR) have spent much time on the design of PMM: developing the “right” metrics, the “right” delivery of performance data (i.e., getting it into the hands of the right people, at the right time, and in the right way), and the right actual use (i.e., injecting PMM into the very DNA of an institution’s business processes and operations). But good design alone has not, in my view, created more space in the toolbox of international development for PMM. Experience counts as much as design.

In today’s world of international development, well-designed approaches and products are not enough for potential users (including donors), who value experience. Before adopting or adapting an approach, they want to know which developing countries or institutions have built the capacity for, or actually used, a particular performance measure such as duration of pretrial custody, a measure that is part of the International Consortium’s International Framework for Court Excellence. For the most part, the answers to such questions are anecdotal and speculative.

Several things need to happen before PMM can take on a bigger role in international development. First and foremost, the PMM taking place in countries and justice systems throughout the world needs to be well documented and known in terms of actual experiences, which it has not been. This impediment to a greater role for locally-owned or locally-directed PMM by host countries and institutions stems in large part from a lack of effective incentives for PMM knowledge production and dissemination. Quite simply, much more is known about program impact evaluation and global indicators because their producers are in the business of publishing and disseminating their results, thereby burnishing their reputations. Many international development organizations, multilateral development banks, bilateral aid agencies, private foundations, think tanks, international activist groups, and consultancies publish books, articles, newsletters, and blogs touting the results of their program impact evaluations and global indicators, often through their own publishing arms.

The countries and justice institutions actively using PMM? Not so much. They lack the orientation, the incentive, and the capacity. Their promulgation of PMM and dissemination of results are directed inward, to drive improvements in their own organizations’ performance. And as Wade Channell pointed out in a thoughtful paper ten years ago in the Carnegie [Endowment for International Peace] Papers Rule of Law Series, even when donors and projects do get involved with host institutions, they have strong incentives to guard their information, and lessons learned are unlikely to be shared widely.

The Justice Measurement Visibility Project

Last month my ITPIR colleagues Kate Conners, Maya Ravindran, Jonah Scharf, and I launched the Justice Measurement Visibility (JMV) Project, which aims to identify successful PMM in developing countries throughout the world focused on the eleven specific measures of the Global Measures of Court Performance, part of the International Framework for Court Excellence developed by the International Consortium. For each of the eleven measures, we hope to be able to give a definitive answer to the question of which countries and justice institutions have adopted or adapted it and what their experience has been.

Stay tuned here for more on the JMV Project.

 © Copyright CourtMetrics 2015. All rights reserved.


Monday, September 09, 2013

Make It Official: The Case for In-Country Performance Measurement and Management

This is the third in a series of posts exploring the three international models of justice system performance measurement and management: (1) the EU Justice Scoreboard, (2) the Global Measures of Court Performance, and (3) the CourTools.

Law and justice scholars lament the spotty evidence linking rule of law and justice programs with development outcomes like economic growth, human rights, sound governance, and poverty reduction. A key cause of this evidence gap is the lack of emphasis on building in-country, or domestic, performance measurement and management in favor of third-party evaluations. All three international models, more or less, promote increased attention and investment in performance measurement and management -- the regular and continuous monitoring, analysis, and use of performance information -- by justice officials, their institutions, and justice systems themselves, not by third parties. Capacity for performance measurement is the ability of countries to meet user needs for good-quality statistics on performance, usually those statistics considered to be “official” (i.e., produced by governments as a public good). Granted, the three models are promulgated by organizations with considerable heft -- the European Commission, the International Consortium for Court Excellence, and the U.S.-based National Center for State Courts -- whose interests are reflected in the choice of metrics and in the values, goals, and key success factors upon which the models are built. Even so, all three emphasize in-country performance measurement.

Performance measurement and management differs from program evaluation in terms of goals and purposes, definitional style, sponsorship, organization, audience, functions, timing, and data interpretation rules. The differences are critical for justice reform. Not the least of them is that performance measurement and management is designed to achieve the goals of justice institutions, systems, and countries themselves, not necessarily those of donors or other third parties, though harmonizing performance measurement across levels -- within an individual institution, across a justice system or country as a whole, and at the level of global governance -- is a worthy aspiration. In contrast, program evaluation (and its variants, including "monitoring and evaluation," "impact evaluation," and "evaluation research") may reflect mixed motives for justice reform and definitions of success that are aligned with the value sets and business goals of third parties.

Performance measurement is not yet the norm around the world, though it has a strong foothold in the European Union, Australia, the United States, and large parts of the developed world, as well as in some developing countries where investments have been made in building domestic capacities (e.g., Moldova). Most assessments of justice programs, processes, and reform initiatives are done instead through monitoring and evaluation instigated and conducted by third parties, including donors, aid providers, and their agents (hordes of researchers, analysts, and consultants). The abiding concern of these third parties is the return on their investments, or something akin to it, and this concern does not necessarily align with the expressed purposes and fundamental responsibilities of the justice institutions, systems, or countries that are the "subjects" of their assessments. (It is tempting to include the crowded field of international indices of law and justice in this type of third-party assessment, but that is a topic for future posts.)

In a 2011 paper titled Problems of Power in the Design of Indicators of Safety and Justice in the Global South, Christopher E. Stone, President of the Open Society Foundations and formerly of Harvard Kennedy School's Program in Criminal Justice Policy and Management, urges those of us working in the domain of justice and safety to emphasize what he calls "country-led indicator development." He calls for building indicators "from the bottom up, supporting local ambitions and building on the legitimate authority close to the operations they seek to influence, rather than starting with ambitions and power at the global level." He advocates the design and development of "active indicators," distinguished from indicators designed without the participation of local authorities, and intended for use by officials of local institutions as management tools. He argues that such active indicators and a bottom-up approach "is not only possible and practical, but has the potential to engage citizens and domestic leaders enthusiastically in a creative and democratic construction of justice."
When it comes to answering the question "How are we doing?" justice institutions and systems do not like to be the mere "subjects" of the program evaluations and research of third parties, be they international donors, associations, or domestic stakeholders. It is only reasonable that they should see themselves as the proper and legitimate authority for issuing the "official" version of the truth. The official, authorized view of a justice system's performance must not only be owned by the justice system but be seen to be owned by it. When this is not the case, as appears true in many places around the globe, the legitimacy of, and the public's trust and confidence in, the justice system suffer.
©Copyright CourtMetrics 2013. All rights reserved.


Wednesday, July 31, 2013

International Models of Justice System Performance II

My last post noted three promising international models of justice system performance measurement and management: (1) the EU Justice Scoreboard, (2) the Global Measures of Court Performance, and (3) the CourTools. All three, more or less, aim for harmonization and consistent use of a common set of justice sector performance measures. There are, of course, differences among them, but it is their commonality that is potentially transformative for justice systems around the globe.

What distinguishes these three models from international global governance initiatives like the World Justice Project’s WJP Rule of Law Index™ and the American Bar Association’s Judicial Reform Index, as well as myriad program evaluations of justice and rule of law projects, is that they promote an approach to performance measurement and management that:
  • is essentially a bottom-up instead of a top-down strategy grounded in the local ambitions of justice institutions and justice systems exercising their legitimate authority;
  • relies on performance data collected and compiled by countries and their justice institutions themselves instead of international bodies and associated third parties whose indicators of justice may be seen as based on questionable goals (e.g., those of international donors) and other relatively weak sources of authority and legitimacy;
  • is based on institution-led or country-led measure development that is voluntary, facilitated but not dictated by the models; and,
  • aims for use of performance data by the countries’ justice system officials themselves to improve the governance and operations of the local justice sector.
Consistency or harmonization of justice performance measures across entire justice institutions or systems is no mere aspiration of little practical consequence. Where it is not achieved, a country may find its performance in justice, rule of law, and safety measured by dozens of competing measures crafted by many different actors with various relationships to the country's justice system. "The result," as one development aid official put it to Harvard University criminal justice scholar Christopher Stone, "is that many developing countries are littered with the carcasses of failed indicators projects -- the consultant paid and gone, and those charged with administering justice increasingly cynical about time wasted on measurement when there is real work to be done."

My reading of Stone, who is now President of the Open Society Foundations, is that he would agree that the general approach of the three international models of performance measurement and management is not only possible and practical but has, in his words, the “potential to engage citizens and domestic leaders enthusiastically in a creative and democratic construction of justice.”  
©Copyright CourtMetrics 2013. All rights reserved.