This is the third installment in a multi-part blog series by Sam Edelstein about the role of a Chief Data Officer.
Once data is counted correctly, we can start to look at trends. If we know how many potholes are filled each day, then we can start to ask if we filled more potholes today than yesterday, or more this month than the same month last year. If we aren’t counting correctly, looking at trends will not matter because the data isn’t accurate to begin with.
An early project I did with the City was to look at road ratings over time. The City rates its roads on a scale of 1–10 over the course of two years. This data existed going back about 30 years, but no one had ever looked at it in aggregate to see whether the roads were, on the whole, getting worse according to the ratings.
The first requirement for this analysis was simply having the data at all. That staff had maintained a spreadsheet database for decades was unexpected and impressive.
We needed to understand what each of the ratings meant, and how that translated to the decisions getting made in the Department of Public Works. We learned that a road rated 5 or below would be considered “poor” and a candidate for milling and paving — a major road reconstruction project. Roads rated 6–7 were fair, and 8 and above were in good shape.
Compiling the data and visualizing it showed a trend toward more poor and fair streets, and fewer good streets over the past 15 years. This meant generally the roads were in worse shape.
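As a sketch of that compilation step, the rating bands and a per-year breakdown might look like the following. The column names and sample figures here are illustrative, not the City's actual records:

```python
import pandas as pd

# Hypothetical rating records: one row per street per rating year
ratings = pd.DataFrame({
    "year":   [2005, 2005, 2005, 2020, 2020, 2020],
    "rating": [8,    6,    9,    5,    6,    4],
})

# Band each 1-10 rating: 5 or below is poor, 6-7 fair, 8 and above good
ratings["band"] = pd.cut(ratings["rating"], bins=[0, 5, 7, 10],
                         labels=["poor", "fair", "good"])

# Share of streets in each condition band, per rating year
trend = (ratings.groupby("year")["band"]
         .value_counts(normalize=True)
         .unstack(fill_value=0))
print(trend)
```

Plotting a table like `trend` over 15 years is what surfaced the shift toward poor and fair streets.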
We were also able to calculate, based on what it costs to repave roads, that the City was at a crucial point: it needed to do proactive work to maintain fair- and good-rated roads so they would not deteriorate into the "poor" category, which would require more expensive measures from a budget that was not available.
We also noticed that the trend lines for how roads deteriorated, based on the ratings, were not what we expected. Many roads' ratings declined relatively rapidly, reaching a 6 over the course of about a decade. The rating would then get stuck at a 6 for a number of years before finally dropping to a 5.
Based on research we did on how roads typically decline, our guess was that observational bias was at work: the staff rating the roads knew that a road rated 5 was considered "poor," so when a rater saw a road with problems that didn't quite seem "poor," the rating would stay at a 6 — just good enough to avoid the expensive maintenance treatment.
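One way to look for that "stuck at 6" pattern is to count how many rating cycles each road spent at a 6 before dropping below it. The street names and rating histories below are hypothetical; in practice they would come from the City's road-rating database:

```python
# Hypothetical rating histories, oldest rating first
histories = {
    "Elm St":  [9, 8, 8, 7, 6, 6, 6, 6, 6, 5],
    "Oak Ave": [9, 8, 7, 7, 6, 5, 4, 4, 3, 3],
}

def cycles_at_rating(history, rating=6):
    """Count how many rating cycles a road spent at the given rating."""
    return sum(1 for r in history if r == rating)

# Roads that lingered at a 6 far longer than a typical decline curve
# would predict are candidates for the observational-bias explanation.
suspect = {street: cycles_at_rating(h) for street, h in histories.items()
           if cycles_at_rating(h) >= 4}
print(suspect)
```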
As with much municipal data, you work with what you have and stay open about the assumptions and challenges in the data — it’ll never be perfect.
In Fall 2018, we launched a Performance Management Program that uses the Objectives and Key Results framework to both set priorities for the City government and track the goals and measures associated with those priorities. Seeing the trend lines for the key results has been important for understanding how and where progress is made.
Of course, the process starts with counting things correctly. One of the key results is to increase code violation compliance from 20% to 35%. Counting how many code violations have been closed on time is relatively simple, but the Department of Code Enforcement has done a lot of work to be more proactive, helping homeowners fix problems before a violation is ever issued. That means the people who do end up with a violation are likely the ones who would not have complied anyway, and so the overall compliance percentage is pushed lower.
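The denominator is what makes this metric tricky. A minimal sketch, with hypothetical figures, shows how proactive outreach can depress the rate even as outcomes improve:

```python
def compliance_rate(closed_on_time, violations_issued):
    """Share of issued code violations brought into compliance on time.

    Note the denominator: as proactive outreach resolves problems
    before a violation is ever issued, the violations that remain
    skew toward hard cases, which can push this rate down even
    while overall housing outcomes improve.
    """
    if violations_issued == 0:
        return 0.0
    return closed_on_time / violations_issued

# Hypothetical figures: 200 of 1,000 issued violations complied on time
print(compliance_rate(200, 1000))  # 0.2
```

This is why the metric needs supporting context (such as counts of problems resolved pre-violation) alongside the headline percentage.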
Figuring out how to count correctly, show trends, and then also provide other supporting information to understand progress all becomes part of the process, even for what seems like a straightforward metric.