Courts Have No Business Doing Research

The theoretical foundations and methodologies of research and performance measurement overlap, but the two disciplines differ in important ways: sponsorship, organization, audience, functions, timing, and rules of data interpretation. (See The Differences Between Performance Measurement and Research, Made2Measure, October 7, 2005; and Forget “Statistically Significant,” Made2Measure, December 17, 2005.)

Replication in Performance Measurement and in Research
A critical difference between performance measurement and research that I did not mention previously has to do with replication: repeating a study to corroborate its results and to safeguard against overgeneralizations and other false claims.

Repeated measurement on a regular and continuous basis is part of the required methodology of performance measurement. Analyzing trends beyond an initial baseline requires replicating the same data collection and analysis monthly, weekly, daily, or, in the case of automated systems, in near real time.
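To make that concrete, here is a minimal sketch (my illustration, using invented figures, not anything from a particular court) of what replication looks like in performance measurement: the identical analysis rerun over each month's measurement to extend a trend line.

```python
# A minimal sketch, with hypothetical figures, of replication in
# performance measurement: the same collection and analysis rerun
# each period to extend a trend.
from statistics import mean

months = [1, 2, 3, 4, 5, 6]                   # measurement periods
rates = [0.94, 0.96, 0.91, 0.97, 0.99, 1.01]  # hypothetical monthly clearance rates

# Least-squares slope: the average month-to-month change in the measure.
x_bar, y_bar = mean(months), mean(rates)
num = sum((x - x_bar) * (y - y_bar) for x, y in zip(months, rates))
den = sum((x - x_bar) ** 2 for x in months)
slope = num / den

print(f"Average monthly change in clearance rate: {slope:+.3f}")
```

Each new month simply adds a point and reruns the identical analysis; nothing about the method changes between periods.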

In contrast, replication in research is a methodological safeguard that is universally lauded by scientists but, as we have recently learned, seldom practiced. Robert Hotz reports in the Wall Street Journal (September 14, 2007, http://online.wsj.com/article/SB118972683557627104.html) that most published research findings are wrong and that most scientific studies appear to be tainted by sloppy analysis, including miscalculation, poor study design, and “self-serving” data analysis, problems that replication of the studies and duplication of the results would largely correct. The trouble is that it is rarely done.

"There is an increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims," reports Hotz, quoting John Ioannidis, an epidemiologist who studies research methods at the University of Ioannina School of Medicine in Greece and Tufts University in Massachusetts. "A new claim about a research finding is more likely to be false than true," Ioannidis says.

Ioannidis and his colleagues studied 432 published research claims about gender differences in the risks of diseases. Their research, reported last month in the Journal of the American Medical Association, showed that almost none of the claims about gender differences held up under close scrutiny. Only one of the studies had been replicated.

Would you rely on court user satisfaction data without giving it a reality check? Would you take financial advice from a late-night infomercial? Of course not. The apparent lack of accepted safeguards for research studies, the fact that research findings are rarely checked and replicated, should hit a raw nerve for those who do, sponsor, and consume court research.

What does this mean for courts? I have argued that courts have no business doing research, and the apparent lack of safeguards such as replication is just one reason. In my view, the lesson of Ioannidis’ revelations should not be that researchers cannot be trusted, but rather that research is done by mere mortals who make honest mistakes. It is also difficult and, more often than not, expensive to do well.

“People are messing around with data to find anything that seems significant, to show they have found something that is new and unusual,” Ioannidis said. Thinking back to my days as a Ph.D. candidate doing “pure” research in experimental psychology, I cringe at my overeager desire to coax “significance” from my dissertation research, and at my revulsion at the thought of having to replicate my findings before my degree was handed to me.
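The dynamic Ioannidis describes is easy to demonstrate. The sketch below (my illustration, not his study) runs two hundred comparisons on pure noise; by construction every “significant” result is a false finding, yet roughly five percent clear the conventional bar, which is exactly what a researcher slicing data in search of significance will eventually stumble on.

```python
# A minimal sketch of why "messing around with data" yields spurious
# significance: test enough pure-noise comparisons and some will pass.
import random

random.seed(1)

def looks_significant(n=30):
    """Compare two samples drawn from the SAME distribution, so any
    'significant' difference is a false finding by construction."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    se = ((var_a + var_b) / n) ** 0.5
    # Roughly a two-sided test at the 5% level: |t| > 2.
    return abs(mean_a - mean_b) > 2 * se

false_findings = sum(looks_significant() for _ in range(200))
print(f"{false_findings} of 200 pure-noise comparisons look 'significant'")
# Expect about 5%. Replication catches these: a chance finding rarely
# repeats, which is why skipping replication is so dangerous.
```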

I am not suggesting that court research should not be done. Far from it. I am recommending that courts abandon the idea of doing research themselves and outsource it to academic researchers (who may do much of it in exchange for access to research opportunities for themselves and their students) and to venerable not-for-profit court research institutions like the National Center for State Courts. Having engaged the researchers, courts should then hold their feet to the fire and demand replication, among other safeguards against overgeneralizations and false claims.

Performance Measurement Is Court Business
Performance measurement is hard-wired into the very DNA of the leadership, management, and operations of successful organizations, which ask themselves “How are we doing?” on a regular and continuous basis. It is the process of measuring an organization’s accomplishments (outcomes), its work and service levels (outputs), and the resources it consumes (inputs). Behavioral and social research (including evaluation research and program evaluation), on the other hand, is scientific study undertaken to discover factual truth, to test models, and to develop theories that increase our knowledge and understanding of human behavior and social phenomena.
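For illustration only, here is a minimal sketch with invented figures showing how those three quantities turn into routine, repeatable measures (clearance rate is a common court performance measure; every number below is hypothetical).

```python
# A minimal sketch with hypothetical figures: outcomes, outputs, and
# inputs expressed as routine, repeatable measures.
incoming_cases = 1_200        # demand on the court
outgoing_cases = 1_150        # output: work and service levels
annual_budget = 2_300_000.00  # input: resources consumed

# Outcome measure: is the court keeping up with its incoming caseload?
clearance_rate = outgoing_cases / incoming_cases
# Efficiency measure: input consumed per unit of output.
cost_per_case = annual_budget / outgoing_cases

print(f"Clearance rate: {clearance_rate:.1%}")           # 95.8%
print(f"Cost per disposed case: ${cost_per_case:,.2f}")  # $2,000.00
```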

Performance measurement and research share an adherence to scientific methods and processes. Both use quantitative and qualitative methods, including surveys and questionnaires, interviews, direct observation, recording, descriptive methods, tests and assessments, and statistical analysis.

Courts should own performance measurement as part of their normal business, but outsource research and program evaluation.

That takeaway message for courts is consistent with the hedgehog concept of management guru Jim Collins, author of the bestselling books Good to Great and Built to Last. The fox knows a little about a lot of things; the hedgehog knows one big thing very well and sticks to it. The fox is complex; the hedgehog simple. And the hedgehog wins.

Collins’ research shows that success requires a simple, hedgehog-like understanding of three intersecting circles: what a court does best and what it does not, how it works, and what best ignites the passions of its people. Great things happen when courts embrace the hedgehog concept and apply it systematically and consistently, eliminating or outsourcing virtually anything that does not fit within the three circles.


© Copyright CourtMetrics 2007. All rights reserved.
