Measuring Value for Public Services

Posted on June 2, 2016




This is the first installment in a new blog series by Matt Lorenzen, where he will share some best practices and new ideas related to local government data analysis and reporting. Connect with Matt on LinkedIn!


Why?

From presidential politics to the village council meeting, there is no shortage of voices in the American public square calling for smaller government, more efficient government, a government that thinks and performs more like the private sector. Those of us who work in or with public administration know that civil service cannot, and probably should not, ever be as “efficient” as the private sector, because we have an eye on equity, access, and, in many cases, codified processes rather than a simple bottom line. Nonetheless, I can appreciate the perspective of those calling for assurances that their hard-earned tax dollars are being spent well and not wasted.

Governments big and small should be able to account for public monies and the programs and services they fund. Equally important is our ability and dedication as public employees to collect and interpret meaningful data in order to measure and demonstrate value to residents. When citizens have no good means of evaluating government services, they are left to assume the worst: how are they to know success from failure? Alternatively, when we can demonstrate that we are providing exceptional services at exceptional value (or at least improving), that’s a good day for everyone (Osborne and Gaebler 1992).

How?

But how do you actually measure value for municipal services? We can tell the public that the library spent its $2MM budget this year and circulated 525,498 books, but that raises the question: so what? We can answer the ‘so what’ and demonstrate value (and hopefully improvement) by showing how current inputs and outcomes compare to 1) previous years/periods, or 2) other, similar jurisdictions.
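To make the ‘so what’ concrete, a quick unit-cost calculation turns those raw numbers into something comparable across years. Here is a minimal Python sketch; the current-year figures come from the library example above, and the prior-year figures are invented purely for illustration:

```python
# Current-year figures from the example above; prior-year figures are
# hypothetical, for illustration only.
budget, circulation = 2_000_000, 525_498
prev_budget, prev_circulation = 1_950_000, 498_210

cost_per_item = budget / circulation              # ~$3.81 per circulation
prev_cost_per_item = prev_budget / prev_circulation

print(f"Cost per circulation: ${cost_per_item:.2f}")
print(f"Change vs. last year: {cost_per_item / prev_cost_per_item - 1:+.1%}")
```

A falling cost per circulation (or a rising circulation per dollar) is one simple way to answer ‘so what’ for the library example.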

Many local government entities look at longitudinal data and limit the scope of the analysis to data previously collected within their own jurisdiction. The literature, however, suggests that best practice is also to present today’s data against the same data sets collected contemporaneously in comparable jurisdictions (where possible). This can be difficult or impossible if an analyst is simply trying to analyze and present data that has already been collected. But with forethought, inter-governmental coordination, and research into which data and methodologies have been or are being used in other jurisdictions, it can be done. As a general rule, longitudinal studies within a jurisdiction allow for more detail and nuance, because the analyst, in theory, has more control over data collection and access to the data once collected, while inter-governmental comparisons will typically be limited to more “macro-level” metrics.

Intra-jurisdiction/Longitudinal

When looking specifically at fire and emergency response services, Nana et al. (2008) employ an effective methodology for analyzing data longitudinally: a composite performance measure (CPM). The method establishes components of service delivery to be measured and weights them on a percentage basis; in this case, the components are human capital, equipment, and process, weighted 40%, 40%, and 20%, respectively. Weights can be set according to each component’s relative importance in service delivery. Within each component sit several measurable input/output indicators, which are also assigned weights.

Because the CPM values are expressed as indices, the numbers themselves are meaningless in isolation; what they show is change over time. If that is clear as mud, the equation used to derive the indices may help, along with a worked example for one indicator. Year one merely establishes a baseline; the meaningful indices (the ones that tell program managers, analysts, and the public whether we’re improving) come in subsequent years:

For an indicator where a lower value is better (like response time), the index for a given year is

$$\text{index}_t = w \times \frac{x_{\text{baseline}}}{x_t}$$

where $w$ is the indicator’s weight, $x_{\text{baseline}}$ is the year-1 value, and $x_t$ is the current-year value.
Let’s translate this into non-geek-speak using the data below as a simple example. To derive the index for dispatch communication, take your year-1 value (176 seconds) and divide it by your current-year value; let’s use 2006’s value of 147 seconds. 176/147 = 1.197. Multiply this by the indicator’s weight, in this case 3.00. Voila! 3.59. This is greater than 3.00, which indicates improvement. Go dispatchers!

[Table: dispatch communication times by year; year-1 baseline of 176 seconds, 2006 value of 147 seconds, weight 3.00]
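For the spreadsheet-weary, here is what that arithmetic might look like in code. This is a minimal Python sketch, assuming, per the dispatch example, that the ratio is baseline over current for indicators where lower is better (and inverted otherwise), and that the composite is the sum of the weighted indices; every indicator other than dispatch communication is invented for illustration:

```python
# Sketch of the indicator-index arithmetic from the worked example above.
# All indicators except dispatch communication are hypothetical.
def indicator_index(baseline, current, weight, lower_is_better=True):
    """An index above its weight signals improvement over the baseline year."""
    ratio = baseline / current if lower_is_better else current / baseline
    return weight * ratio

# Dispatch communication: 176 s baseline, 147 s in 2006, weight 3.00
print(round(indicator_index(176, 147, 3.00), 2))  # 3.59 -> improvement

# Composite performance measure: sum of the weighted indicator indices
indicators = [
    (176, 147, 3.00, True),   # dispatch communication time (seconds)
    (410, 395, 2.00, True),   # hypothetical: turnout time (seconds)
    (82, 88, 5.00, False),    # hypothetical: % of incidents resolved on scene
]
cpm = sum(indicator_index(*row) for row in indicators)
print(round(cpm, 2))
```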

After measuring and computing all indicators, the result is a sexy chart:

[Chart: composite performance measure (CPM) indices plotted over time]

Inter-jurisdiction

Comparative data analysis is only possible to the extent that the data collected in different jurisdictions actually measure the same things in the same ways. “No duh,” you may say, but it is relatively uncommon for governments (or departments within governments) to track the same data points across time. It is even more uncommon for different jurisdictions to track the same data points and use the same analytical methods.

So, rather than prescribe an analytical method in this section, I’ll prescribe something else: that your administration or department first decide that it wants to present data to residents by comparing its performance to other jurisdictions (a potentially scary decision). With that decision made, you’ll want to see what data is already available in similar jurisdictions or, perhaps better yet, coordinate and network with analysts and managers in similar jurisdictions to plan a coordinated data collection and analysis effort. The table below, borrowed from my friends, gives some examples of what might be measured.

[Table: example inputs and outputs for common municipal services, by jurisdiction]
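To give a flavor of what the analysis might look like once two or three jurisdictions are collecting the same inputs and outputs, here is a Python sketch with entirely made-up numbers, normalizing one input (EMS spending) by population and by output so that cities of different sizes can be compared:

```python
# Entirely hypothetical figures for three jurisdictions.
cities = {
    "City 1": {"population": 48_000, "ems_budget": 3_100_000, "ems_responses": 5_900},
    "City 2": {"population": 61_500, "ems_budget": 4_450_000, "ems_responses": 7_100},
    "City 3": {"population": 39_200, "ems_budget": 2_300_000, "ems_responses": 4_800},
}

for name, d in cities.items():
    per_capita = d["ems_budget"] / d["population"]       # input per resident
    per_response = d["ems_budget"] / d["ems_responses"]  # input per unit of output
    print(f"{name}: ${per_capita:.2f}/resident, ${per_response:.2f}/response")
```

The normalization is the whole trick: raw budgets tell you who spends the most, while per-resident and per-response figures start to tell you who delivers the most value.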

If and when this is executed across multiple departments, City 1 may discover that its emergency services rock compared to other jurisdictions while its water service leaves room for improvement; or City 4’s parks are exceptional, but the other cities’ libraries put City 4’s to shame. Rather than wallow in such shame, what a great opportunity to learn from others and implement best practices that are yielding great outcomes elsewhere.

Comparing to other jurisdictions is a potentially risky maneuver, but as noted above, it can also be a great learning opportunity. If positioned well and appropriately with the public, unsavory discoveries made in the data can be transformed into opportunities and commitments to improve.

A few final thoughts

“Many bemoan the lack of civic engagement by ordinary citizens, but engaged citizens need to know how city resources are being used. Yet […] city governments rarely provide this basic information” (Moore et al. 2005). Perhaps the takeaway from this post is not so much the technicalities of analysis or which metrics to use; rather, it is to take a look at what your jurisdiction is doing by way of collecting, analyzing, and coherently presenting data to constituents. What more should be done? What can be done better?

What are you doing in your city/county/town/village/hamlet to collect, analyze, and present meaningful data in order to demonstrate value to your residents? Chime in below!

Sources:

Nana, G., Norman, M., Stokes, F., & Webster, M. (2008). Developing a composite performance measure for the New Zealand Fire Service (BERL ref. #4640, November). Wellington, New Zealand: Business and Economic Research Limited.

Moore, A., Nolan, J., & Segal, G. F. (2005). Putting out the trash: Measuring municipal service efficiency in US cities. Urban Affairs Review, 41(2), 237-259.

Osborne, D., & Gaebler, T. (1992). Reinventing government: How the entrepreneurial spirit is transforming the public sector. New York, NY: Plume.
