The End of Science As We Know It

Add this big idea to the concepts that underlie modern performance measurement and management:

We are seeing the end of traditional research and the scientific method (hypothesis testing and experimentation) as we know it. And performance measurement is there at the right place and time.

From Kilobytes to Petabytes

Today we’re able to capture, store, and make sense of massive amounts of data. We’ve gone from kilobytes to megabytes, from gigabytes and terabytes to the Petabyte Age. A petabyte is a measure of memory or storage capacity equal to 2 to the 50th power bytes, or the equivalent of 20 million four-drawer filing cabinets full of text.
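The scale is easy to verify. A short sketch (the unit names and powers of 1,024 are standard binary storage units, not taken from the article):

```python
# Each unit in the binary convention is 2**10 (1,024) times the last.
units = ["byte", "kilobyte", "megabyte", "gigabyte", "terabyte", "petabyte"]
for i, name in enumerate(units):
    print(f"1 {name} = 2**{10 * i} bytes = {2 ** (10 * i):,}")

# A petabyte is 2 to the 50th power bytes -- over a quadrillion.
assert 2 ** 50 == 1_125_899_906_842_624
```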

Storage that is effectively infinite, or hardly needed at all. Unlimited processing capacity. Early-warning sensors and measures everywhere, sending automatic alerts from intelligent agents. No structured databases necessary. Just petabytes of information flowing through pattern-matching and trend-watching algorithms.

It seems we really don’t need hypotheses and theories anymore (informed hunches and theoretical models about how the world works) as a first step in finding the truth in much of what we do. And why sample data when we can have everything at our fingertips and process it in ways impossible before?

We just need to tap into the data with search technology and discover patterns and trends that will reveal knowledge. More and more, we’re betting that more and better data, plus better measurement and analytical tools, will win the day.

In its July 2008 cover story, Wired noted that the quest for knowledge used to begin with grand theories. Now it begins with massive amounts of data and a search for clues – any clues. Increasingly, we can succeed without theories, models and hypotheses.

Google does not really need to know (or care) why a particular website is better than another. If its statistical algorithm says it is, that’s good enough. Throw the data into the biggest database we can muster and let our measurement and analytical tools find patterns where the traditional science of hypothesis testing cannot.

In the wake of the explosion of E-discovery, lawyers no longer dig through warehouses full of paper documents armed with theories of the smoking gun. Instead, they hire data-miners who use search technologies to sift through huge numbers of electronic records. According to Wired, E-discovery vendors who scan, index and mine data pulled in $2 billion in 2006.

“Who knows why people do what they do?” notes Chris Anderson, editor-in-chief of Wired, in his article, “The End of Theory.” “The point is that they do it, and we can track and measure it with unprecedented fidelity.”

Performance Measurement Riding the Trend
Turns out that modern performance measurement has been riding this trend all along. The question “Why is this happening?” comes after performance data are monitored and analyzed, and after detection of variations, patterns and trends in the data that might indicate a problem or, better yet, solutions. No prior hypotheses or theories are needed. As a matter of fact, they may get in the way.

The hallmark of scientific research, as distinguished from organizational performance measurement, is rigorous hypothesis testing and experimentation based on theory. A hypothesis is a small slice of a theory that is tested within a narrow range of data to support or refute the theory. In contrast, performance measurement is the monitoring, analysis and management of central tendencies and variations in the broadest possible spectrum of data (e.g., all the case processing data in an automated management system) prior to any theory, hypothesis or explanation of the variations from baselines.
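This kind of theory-free monitoring can be sketched in a few lines. The rule below is a simple control-chart convention (flag anything more than three standard deviations from the baseline mean); the case-processing times are hypothetical, not drawn from any court system:

```python
import statistics

def flag_variations(baseline, new_values, k=3.0):
    """Flag values more than k standard deviations from the baseline mean.

    No hypothesis is tested; the data themselves define 'normal,' and
    the question 'Why is this happening?' comes only after an alert.
    """
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return [x for x in new_values if abs(x - mean) > k * sd]

# Hypothetical baseline: days from filing to disposition
baseline = [90, 95, 88, 92, 91, 94, 89, 93, 90, 92]

alerts = flag_variations(baseline, [91, 96, 140, 89])
print(alerts)  # → [140]
```

Only the 140-day case trips the alert; the manager then asks why, after the fact, rather than starting from a theory of what might go wrong.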

Though I am a social scientist, I have long argued that courts and court systems should favor performance measurement over research (see Made2Measure, Courts Have No Business Doing Research, October 15, 2007). The theoretical foundations and methodology of the disciplines of performance measurement and research overlap, but they are very different in important ways.

Performance measurement is court business. It is the continuous process of measuring variations from central tendencies in performance, asking “Why is this happening?” and then looking further into the data for patterns and trends. Scientific research (including evaluation research), on the other hand, typically begins with theories and hypothesis testing about why things might be happening. Court leaders and managers seldom have the time and the money for this. Now it turns out it may not be necessary.

Just show me the data. Never mind your theories and hypotheses!

Here are six other new ideas related to modern performance measurement that I’ve explored in previous postings:

The flattening of our world and the end of central planning (see Made2Measure, The Real Promise of Performance Dashboards, May 9, 2007).
The emergence of radical transparency (see Made2Measure, Radical Transparency, June 4, 2007; and Transparency in Practice, February 12, 2007) and open book management (see Made2Measure, Open Book Management in the Courts, September 30, 2006).
A building consensus that results (outcomes) matter most (see Made2Measure, A Preference for Outcome Measures, September 28, 2005).
The notion that we – yes, the courts – are in business to satisfy those who use the courts (see Made2Measure, What Is Our Business? Who Are Our Customers?).
The power of controlling variation (see Made2Measure, Undesirable Variation in Court Performance).
The proposition that performance measures provide the focus and clarity that are the preoccupations of effective leaders (see Made2Measure, Performance Measures = Clarity, August 5, 2007).

For the latest posts and archives of Made2Measure click here

Copyright CourtMetrics 2008. All rights reserved.
